Regulatory compliance. Words that bring terror to the unsuspecting and naive.
I was one of them – until I moved to Europe and realized that being compliant is considered an asset. But the horror of compliance is still there.
The EU has implemented the EU AI Act, regulating AI and putting restrictions on it before it can really take off.
The AI Act was put in place at the beginning of 2025, with parts of it in force now and the rest staggered until the summer of 2027 (some rumors suggest parts of it are being pushed to the summer of 2030).
It is administered by the EU, with each member country having its own regulatory body and its own priorities. The regulatory bodies per country have not been properly established yet, but there are a few requirements we all need to pay attention to.
One of them is “AI Literacy,” a rule that has been in force since February 2025, and that the Dutch and the Danes are obsessed with.
There are a bunch of requirements placed upon the EU, the member states, and the individual regulatory bodies. But I suspect you may be, or may work for, one of the following:
- Providers – Entities responsible for developing AI systems
- Deployers – Entities responsible for operating or using AI systems
AI Literacy refers to the measures taken by providers and deployers of AI systems to ensure, to the best extent possible, a sufficient level of understanding of AI among their staff and other persons involved in the operation and use of AI systems on their behalf.
This means you have to train your staff to understand their AI systems, and the impact of their systems upon others.
When it comes to literacy, in short, it:
- Equips providers, deployers, and affected persons with the necessary notions to make informed decisions regarding AI systems.
- Notions may vary depending on the relevant context and can include:
  - Understanding the correct application of technical elements during AI system development.
  - Measures to be applied during AI system use.
  - Suitable ways to interpret the AI system’s output.
In other words, there are innocent bystanders, called “Affected Persons” in the AI Act – the people the AI Act is trying to protect. Your staff needs to understand how their systems work, and what the impact of the output is on these unsuspecting bystanders. You and your colleagues have to know it, and be able to explain it.
At Merceros, we have developed a framework for extracting requirement statements from regulatory documents. These statements become the sources of policies and controls – the backbone of regulatory governance. Clear, explainable, and, especially, traceable across the enterprise.
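As a rough illustration (not Merceros’s actual implementation – the record types, ID schemes, and field names below are all my own hypothetical choices), a requirement statement extracted from a regulation can be modeled as a small record that policies and controls link back to, which is what makes the chain traceable:

```python
from dataclasses import dataclass, field

# Hypothetical model: a requirement statement extracted from a regulation,
# traced forward to the policies and controls that implement it.

@dataclass(frozen=True)
class RequirementStatement:
    statement_id: str  # illustrative ID scheme, e.g. "EU-AI-ACT-ART4-001"
    source: str        # citation of the regulatory text
    text: str          # the extracted requirement, in plain language

@dataclass
class Policy:
    policy_id: str
    title: str
    satisfies: list[str] = field(default_factory=list)   # requirement IDs

@dataclass
class Control:
    control_id: str
    description: str
    implements: list[str] = field(default_factory=list)  # policy IDs

def trace(requirement_id: str, policies: list[Policy], controls: list[Control]):
    """Return the policies and controls that trace back to one requirement."""
    matched_policies = [p for p in policies if requirement_id in p.satisfies]
    policy_ids = {p.policy_id for p in matched_policies}
    matched_controls = [c for c in controls if policy_ids & set(c.implements)]
    return matched_policies, matched_controls

# Usage: trace the AI-literacy requirement through to a training control.
req = RequirementStatement(
    "EU-AI-ACT-ART4-001",
    "EU AI Act, Article 4",
    "Ensure a sufficient level of AI literacy among staff operating AI systems.",
)
policy = Policy("POL-AI-01", "AI literacy training policy",
                satisfies=["EU-AI-ACT-ART4-001"])
control = Control("CTL-AI-01", "Annual AI-literacy training with attendance records",
                  implements=["POL-AI-01"])

pols, ctls = trace(req.statement_id, [policy], [control])
```

The point of the sketch is the direction of the links: a regulator (or auditor) can start from one sentence in the Act and walk forward to every policy and control that exists because of it.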
I will get into the intricacies of AI governance over the next weeks, but feel free to send me a message if you would like to know more now.