The EU AI Act: a practical guide for CEOs and IT Managers


Steve Luccisano
March 2, 2026

The EU AI Act: what changes in software projects, contracts, roles, logs, and accountability. A practical guide for CEOs and IT Managers.
If you are a CEO or an IT Manager, the EU AI Act is not a law that lives in the legal department. It is a law that will show up in your product roadmap, your procurement templates, your incident process, your vendor negotiations, and your operating model.

The biggest shift is simple to say and surprisingly hard to implement: AI stops being a nice feature and becomes a governed decision component. That means you must be able to answer, at any moment, who owns it, how it is controlled, how it is monitored, and how you prove what happened.



The timeline, because it drives budgets and priorities
The AI Act entered into force on 1 August 2024 and will be fully applicable from 2 August 2026, with staged milestones: prohibited practices and AI literacy measures started applying on 2 February 2025; the governance rules and obligations for general-purpose AI models became applicable on 2 August 2025; and certain high-risk systems embedded in regulated products have a longer transition, until 2 August 2027.

For most companies, this means 2026 is not a far-away compliance story; it is a current delivery story. Anything you launch now that will still be running in 2026 should be built with these expectations in mind.



The first question that changes everything: what is your role in the chain?
Before you debate models, accuracy, or cloud choices, ask one question: are we a provider or a deployer for this AI capability? In AI Act language, the provider is the actor placing an AI system on the market or putting it into service under its own name; the deployer is the actor using it inside an organisation and a process. Many companies play both roles: deployer when they use third-party AI, provider when they ship AI-enabled products to customers.
Why this matters for a CEO and an IT Manager: the role determines what you must be able to demonstrate, and what you must demand from vendors. If you get this wrong, you end up with a compliance gap that no amount of engineering can patch at the last minute.



Logging becomes architecture, because traceability becomes a requirement
In most software projects, logging is treated as an engineering preference; in many AI projects, it is treated as a nice-to-have. Under the AI Act, for high-risk systems, record-keeping is an explicit expectation: high-risk AI systems must technically allow for the automatic recording of events across the lifetime of the system. On the deployer side, there is also a very concrete operational obligation: deployers of high-risk AI systems must keep the logs automatically generated by the system, to the extent the logs are under their control, for a period appropriate to the intended purpose, and at least six months, unless other EU or national law provides otherwise.



Now, what does that mean in a real project, without technical theatre?
It means you must be able to reconstruct a decision episode later, with confidence: what version was running, what input was used, what the system produced, what the human did with it (accepted, edited, or rejected), and why.

This is the part most teams miss: the AI output alone is not enough, because the business risk is usually in the human decision that followed. If your interface has no simple way to capture an override, a correction, or a reason, you will lose two things at once: auditability and learning. So the AI Act pushes you, indirectly but strongly, to design supervision into the product experience, not bolt it on afterwards.
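To make the idea concrete, here is a minimal sketch of what a decision-episode record could look like. Everything here (the `DecisionEpisode` name, the field names, the example values) is an illustrative assumption, not AI Act terminology or a prescribed schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionEpisode:
    """One auditable AI-assisted decision: what ran, what it saw,
    what it produced, and what the human did with it."""
    system_id: str       # which AI capability was involved
    model_version: str   # the exact version that was running
    input_ref: str       # pointer to the input payload, not the raw data
    output: str          # what the system produced
    human_action: str    # "accepted", "edited", or "rejected"
    reason: str = ""     # free-text justification, essential for overrides
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def record_episode(episode: DecisionEpisode) -> str:
    # In production this would go to an append-only store with a
    # retention policy; serialising to JSON keeps it reconstructable.
    return json.dumps(asdict(episode), sort_keys=True)

line = record_episode(DecisionEpisode(
    system_id="invoice-triage",
    model_version="v2.3.1",
    input_ref="doc-store://inv-001",
    output="flag: probable duplicate",
    human_action="rejected",
    reason="supplier confirmed the invoice is new",
))
```

The `reason` field is exactly the override capture described above: without it you lose both the auditability and the learning.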



Accountability changes, because AI cannot be owned by nobody
In many organisations, the first pilot works because a small team cares. Then the system grows and ownership becomes fuzzy: operations think product owns it, product thinks IT owns it, IT thinks the vendor owns it, and legal appears only when something goes wrong.

The AI Act pulls in the opposite direction: it rewards clarity of accountability, especially for higher-risk uses. In practice, CEOs and IT Managers need to ensure there is a clear operating model: someone owns business outcomes, someone owns the process and user experience, someone owns technical operations and change, and someone owns risk and compliance, even if that risk owner is lightweight.

This is not bureaucracy; it is a scaling mechanism. When ownership is clear, the AI can be adopted broadly; when ownership is unclear, the AI will be quarantined, or switched off at the first incident.



Contracts change the most, because compliance lives inside vendor relationships
Most AI projects are not built from scratch; they are assembled. You use a foundation model, a platform, a system integrator, a data provider, an annotation service, a monitoring tool. Compliance becomes as strong as the weakest supplier link, and the contract is where you either secure that link or ignore it.

There are two common scenarios: you buy or integrate AI as a deployer, or you ship AI as a provider.



If you are a deployer buying or integrating AI, what must appear in the contract
First, operational transparency, not a marketing explanation. You need practical instructions for use: the limits, where the system is reliable, where it is not, and what operators should do when they are unsure. Otherwise you cannot claim safe and appropriate use at scale.

Second, logging and evidence access. If you must keep logs for at least six months in high-risk cases, you must be able to access and export them, and you must know what they contain, where they are stored, and under what retention rules.

Third, change management. AI changes: models are updated, prompts are tuned, guardrails are adjusted, retrieval bases are refreshed. A traditional software clause that says updates may happen is not enough; you want explicit expectations: how updates are communicated, how they are tested, how regressions are handled, how rollback works, and how you can freeze a version for a critical period.
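These expectations can also be mirrored on your own side of the integration. Below is a minimal sketch of a change gate with a freeze window and a rollback pointer; `ChangeControl`, `ModelRelease`, and the version strings are hypothetical names for illustration, not part of any vendor API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelRelease:
    version: str
    previous: Optional[str]   # rollback target, if any

class ChangeControl:
    """Minimal change gate: updates are rejected during a freeze
    window, and every deployment keeps a rollback pointer."""
    def __init__(self) -> None:
        self.current: Optional[ModelRelease] = None
        self.frozen: bool = False

    def freeze(self) -> None:
        self.frozen = True

    def unfreeze(self) -> None:
        self.frozen = False

    def deploy(self, version: str) -> bool:
        if self.frozen:
            return False  # the update must wait until the critical period ends
        prev = self.current.version if self.current else None
        self.current = ModelRelease(version=version, previous=prev)
        return True

    def rollback(self) -> Optional[str]:
        # Revert to the previous release, if one is recorded.
        if self.current and self.current.previous:
            self.current = ModelRelease(
                version=self.current.previous, previous=None
            )
            return self.current.version
        return None

cc = ChangeControl()
cc.deploy("v1.0")
cc.freeze()
blocked = cc.deploy("v2.0")   # rejected: freeze window is active
cc.unfreeze()
cc.deploy("v2.0")
restored = cc.rollback()      # back to "v1.0"
```

The design choice worth copying is not the code itself but the invariant: no deployment without a recorded predecessor, so rollback is always a defined operation rather than a negotiation.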

Fourth, incident management. The contract must define how you handle serious malfunctions: who is notified, within what time, what evidence is provided, and how you coordinate remediation. Without this, every incident becomes a blame discussion, not a recovery process.

A useful pattern is to attach an AI governance annex to your procurement templates, written in operational language, specifying logging, supervision, updates, incident process, and responsibilities as standard.



If you are a provider shipping AI enabled software, what changes in delivery and liability posture
When you ship under your name, you are closer to provider obligations, especially if your system operates in a regulated or high-risk context. In that case you need technical documentation and a lifecycle approach to risk before deployment, not after, and you need to maintain those artefacts as the system evolves. Even if you are not high-risk, the market expectation will move: customers will ask you to show your logging, your controls, and your update discipline, because they will be deployers with their own obligations.
This is why AI governance becomes a competitive advantage, not just a compliance cost.



AI literacy is not a training slide deck, it is an operating requirement
From 2 February 2025, the AI Act includes the obligation to take measures to ensure the AI literacy of staff and others operating AI systems on your behalf; the Commission has published FAQs clarifying that this obligation already applies.

For CEOs and IT Managers, the practical meaning is that safe use is not only a policy, it is a set of habits and interfaces.

If you want this to work in real life, you combine three things: light training that explains typical failure modes, simple rules that define when human review is required, and product design that makes the right behaviour the easy behaviour, for example clear confidence cues, escalation paths, and friction for sensitive actions.
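The "simple rules" part can literally be a few lines of code enforced at the interface. A minimal sketch, assuming a hypothetical `requires_human_review` helper and an arbitrary 0.85 confidence threshold of our own choosing:

```python
def requires_human_review(confidence: float, sensitive: bool,
                          threshold: float = 0.85) -> bool:
    """Route a model output to a human when confidence is low or the
    action touches a sensitive process; sensitive always wins."""
    return sensitive or confidence < threshold

# Routine, high-confidence output: goes straight through.
auto = requires_human_review(confidence=0.97, sensitive=False)     # False
# Sensitive action: always reviewed, whatever the confidence.
reviewed = requires_human_review(confidence=0.97, sensitive=True)  # True
```

Encoding the rule this way makes it testable and auditable, which is exactly what a policy slide is not.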



How this shows up in your project plan, what to change starting next week
If you want a practical approach, treat the AI Act as a delivery pattern, not a legal event.

Start with an inventory of where AI is used today, including third-party copilots. Then classify use cases by impact and sensitivity, clarify whether you are deployer or provider for each, introduce logging by design, define your operating ownership, update your procurement template with an AI governance annex, and define an incident and rollback playbook.
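Even the inventory step benefits from being executable rather than a spreadsheet nobody updates. A minimal sketch, where `AIUseCase`, `triage`, and the classification labels are illustrative assumptions rather than AI Act risk categories:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    role: str             # "deployer" or "provider" for this capability
    impact: str           # rough impact band: "low", "medium", "high"
    sensitive_data: bool  # does it touch personal or regulated data?

def triage(use_case: AIUseCase) -> str:
    """Very rough routing: high-impact or data-sensitive use cases get
    the full governance track, the rest a lightweight one."""
    if use_case.impact == "high" or use_case.sensitive_data:
        return "full-governance"
    return "lightweight"

inventory = [
    AIUseCase("third-party copilot", "deployer", "low", False),
    AIUseCase("customer-facing scoring", "provider", "high", True),
]
tracks = {u.name: triage(u) for u in inventory}
```

The legal risk classification still belongs to counsel; the point of the sketch is that role, impact, and sensitivity are recorded per use case, so the classification has something to work from.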

This sequence aligns with the Act timeline and its emphasis on literacy, traceability, and staged applicability.



The new mental model, governance is not control, it is trust you can scale
The AI Act is often described as regulation, but for most businesses the practical effect is standardisation: it pushes organisations toward predictable, traceable, and accountable AI. If you do this well, you get more than compliance. You get systems that your organisation dares to use widely, in real workflows, because they are understandable, controllable, and recoverable.



