The concepts of Speak Up and Quality Culture have received renewed attention with recent developments in the European Union’s Artificial Intelligence Act.  Lachman recently blogged about the importance of a healthy quality culture (Speak Up! Does Your Quality Culture Have a Voice?), and new proposals under the EU’s Digital Omnibus package introduce three major changes that emphasize key elements of quality culture as applied to data governance: regulatory simplification, a new whistleblower tool, and global alignment on AI incident reporting.  These updates reinforce a continuous-improvement mindset and demonstrate an openness to adapting the thinking and approach for regulating industry’s use of AI.  As you read the following updates, ask yourself: “Are we (as a company) prepared?”

  1. Simplification Efforts: Cutting Red Tape for Innovation

The European Commission’s drive for simplification seeks to ease compliance burdens for businesses while maintaining high standards for safety.  Key changes include:

  • Extended Deadlines for High-Risk AI Systems:  Compliance timelines for high-risk AI applications—such as those in law enforcement, healthcare, and critical infrastructure—are pushed back by up to sixteen months.  Instead of August 2026, these rules will now apply from December 2027, with some provisions delayed until August 2028 (AI Act Service Desk).
  • Reduced Administrative Burden:  This update is anticipated to save up to €5 billion in administrative costs by 2029.  Additionally, the removal of mandatory registration for non-high-risk AI systems and expanded access to regulatory sandboxes could foster real-world testing and innovation (Shaping Europe’s digital future: Simpler EU digital rules and new digital wallets to save billions for businesses and boost innovation).

  2. Whistleblower Tool: Strengthening Reporting Systems

The European Commission has launched a secure whistleblower platform to report suspected breaches of the AI Act.  This tool allows individuals to confidentially submit information about violations that could endanger health, safety, or fundamental rights (AI Act Whistleblower Tool).  These reports are directed to the EU AI Office, ensuring centralized oversight and follow-up.

  3. Global Reporting of AI Incidents: Toward International Alignment

Similar to pharmacovigilance requirements in the pharmaceutical industry, the AI Act introduces mandatory reporting of serious incidents involving high-risk AI systems under Article 73.  Providers must notify national authorities within strict timelines: two days for widespread incidents and ten days in cases involving fatalities.

Call to Action: How Lachman Consultants Can Help

Navigating these new requirements and integrating them into your data governance strategy can be complicated, particularly for organizations operating globally.  Lachman Consultants brings over forty years of global expertise in regulatory and compliance consulting across life sciences and emerging technologies.  Our team of seasoned professionals helps companies develop robust compliance strategies, conduct risk assessments, and implement effective governance frameworks to meet these evolving AI regulations.

Learn more about how Lachman Consultants can support your compliance journey by emailing us at LCS@LachmanConsultants.com.