
European AI Regulation 2026: key obligations to be aware of

Article written by Elisa Bauer

Why is 2026 a game changer?

The European Union has laid the foundation for a comprehensive legal framework for artificial intelligence: the AI Act, which came into force in 2024, with the main obligations becoming fully applicable in 2026. This timetable means that companies and training managers must prepare now for binding obligations on compliance, transparency and governance, or face significant penalties.

Artificial intelligence is now deeply integrated into professional practices across Europe, with a variety of technologies already in use by businesses. According to a 2025 Eurostat study on types of AI technologies, machine learning, predictive analytics systems, conversational assistants and computer vision are among the tools most widely deployed in European companies.

This technological diversity illustrates the growing penetration of AI into operational and strategic processes, highlighting its key role in the digital transformation of organisations.

[Chart: types of AI technologies used in European companies]

(Source: Eurostat — 2025 / European Commission — Eurobarometer 2025).

What the European regulation essentially contains

  • Prohibitions and risky practices: certain practices deemed unacceptable (e.g. manipulative techniques or social scoring systems) are prohibited or strictly regulated.
  • Rules for high-risk systems: systems classified as high risk (recruitment, health, critical infrastructure) will have to meet documentation, impact assessment and human oversight requirements.
  • Transparency of general-purpose models (GPAI): specific obligations regarding documentation, testing and traceability of so-called "general-purpose" models have been in force since 2025 and will be strengthened.

 

What this means for your organisation

  1. Map your AI usage (who does what, where, and with what data).

  2. Prioritise the systems to be assessed (all high-risk systems require audits and a Data Protection Impact Assessment).

  3. Implement governance procedures (AI officer, processing register, appeal procedures).

  4. Train HR teams and managers in the obligations of transparency and respect for individual rights.
    These best practices are in line with international recommendations (OECD) and with the European orientation towards "trustworthy" AI.
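The mapping and prioritisation steps above can be sketched as a minimal AI-system inventory. This is an illustrative assumption, not a prescribed format: the fields, risk labels and example systems are hypothetical, loosely echoing the AI Act's risk tiers.

```python
from dataclasses import dataclass

# Hypothetical risk labels loosely following the AI Act's tiers.
RISK_LEVELS = ("minimal", "limited", "high", "prohibited")

@dataclass
class AISystem:
    name: str        # what the system does
    owner: str       # who operates it (team or vendor)
    data_used: str   # what data it processes
    risk_level: str  # one of RISK_LEVELS

def systems_requiring_assessment(inventory):
    """Return the high-risk systems that need an audit and a DPIA."""
    return [s for s in inventory if s.risk_level == "high"]

# Illustrative inventory: who does what, where, and with what data.
inventory = [
    AISystem("CV screening", "HR", "applicant data", "high"),
    AISystem("FAQ chatbot", "Support", "customer queries", "limited"),
    AISystem("Demand forecasting", "Ops", "sales history", "minimal"),
]

for s in systems_requiring_assessment(inventory):
    print(f"{s.name} ({s.owner}): audit + DPIA required")
```

Even a register this simple makes step 2 mechanical: the high-risk subset falls out of the mapping done in step 1.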

 

Recent figures to convince your decision-makers

  • The adoption of AI is progressing rapidly: 13.5% of EU companies (≥10 employees) used at least one AI technology in 2024, up from 8.0% in 2023, a jump that highlights the urgent need for an operational legal framework.
    (source: Eurostat).
    [Chart: companies using AI in 2023 and 2024]
  • Citizens support the use of AI at work but call for safeguards: 62% view AI positively at work, 70% believe it improves productivity, and 84% demand strict oversight to protect privacy and ensure transparency. These public expectations reinforce the legitimacy of the regulatory obligations. (source: European Commission data).

 

Concrete actions to be launched before 2026

  • Compliance audit: inventory of AI systems and classification of risk levels.
  • Documentation: technical data sheets, data sets, robustness and traceability tests.
  • Governance & training: clear accountability (DPO/AI officer), training for HR and technical teams, appeal procedures for affected individuals.
  • Mitigation plan: when using third-party tools (GPAI), verify contractual clauses, audit rights and compliance with European standards.
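The mitigation-plan item above can be operationalised as a simple gap check on third-party tools. The checklist items and tool names below are illustrative assumptions, not requirements quoted from the regulation.

```python
# Hypothetical pre-2026 checklist for third-party (GPAI) tools.
REQUIRED_ITEMS = {"contract_clauses", "audit_rights", "eu_standards_conformity"}

def missing_items(tool_docs):
    """Return which checklist items a tool is still missing."""
    return REQUIRED_ITEMS - set(tool_docs)

# Illustrative vendor records: what each contract already covers.
tools = {
    "vendor_llm": {"contract_clauses", "audit_rights"},
    "vision_api": {"contract_clauses", "audit_rights", "eu_standards_conformity"},
}

for name, docs in tools.items():
    gaps = missing_items(docs)
    if gaps:
        print(f"{name}: missing {sorted(gaps)}")
```

Running the check per vendor turns the mitigation plan into a concrete gap list to close before 2026.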

The year 2026 marks the transition to strict enforcement of compliance programmes: the law exists, and public expectations and the deployment of technologies confirm the urgency of taking action. Prioritise the identification of high-risk systems, documentation and internal training. This three-pronged approach reduces legal risks and facilitates responsible innovation.

OSAM Training supports you through these changes with certified, tailor-made training in AI. 

