Protecting us from our own technology has never been easy.

On December 8, 2023, the EU reached a milestone toward that goal: a provisional agreement on the first dedicated law governing the use of artificial intelligence (AI). This breakthrough legislation is one of the first comprehensive attempts globally to regulate AI. The aim is to ensure that fundamental rights, democracy, the rule of law, and environmental sustainability are protected from high-risk AI, while boosting innovation and making Europe a leader in the field.

Under the provisional agreement, safeguards were agreed for general-purpose AI (GPAI) systems so that they adhere to EU transparency requirements. AI systems are classified as high risk when they pose significant potential harm to health, safety, fundamental rights, the environment, democracy, or the rule of law, and mandatory requirements are outlined for systems that fall into this category. Citizens will have the right to lodge complaints and receive explanations about decisions based on high-risk AI systems that affect their rights.

Appropriate implementation and governance of the Act will be key to ensuring that innovation thrives and that small and medium-sized enterprises (SMEs) can develop, test, and train AI solutions before they enter the market. The European Medicines Agency (EMA) and the Heads of Medicines Agencies (HMA) have set out a collaborative AI workplan to embrace AI for productivity: automating processes and systems, increasing insight into data, and supporting robust decision-making to benefit public health. The workplan was adopted by EMA’s Management Board at its December meeting.

There is a clear “carrot and stick” approach in the agreement. The “carrot” is that the Act should be a catalyst for the development of, and innovation in, this technology, particularly in a risk-averse environment. The “stick” comes with the financial penalties for non-compliance: up to €35 million or 7% of a company’s global annual turnover, whichever is higher. For a company with €1 billion in turnover, that percentage cap alone is €70 million. No one ever said protecting us from ourselves would be cheap.

One key impact of the Act is how it guides the pharma and MedTech industry’s maturity in data governance; the challenge for our industry is how we integrate it into our own data governance programs. Understanding the impact on each company’s Quality Management System (QMS) is a natural first step. There is no shortage of draft papers and reflection publications from various regulatory bodies, and the Act contains many items that are analogous to the industry’s journey to date, so the clarity and appetite of the EU are welcome.

Considering this Act alongside other emerging risks (cyber, supply chain, and data integrity) can strengthen risk mitigation plans and help industry set objectives for 2024.

A few highlights of the Act that may help connect it with existing GxP programs include:

  • Risk-based approach: the Act applies obligations proportionate to a system’s risk level, analogous to the quality risk management principles of ICH Q9.
  • Intended use: ensure that the “intended use” of the system is adequately defined.
  • Human oversight: the Act places emphasis on human oversight of AI systems.
  • Focus on innovation: with guardrails in place, the rules are better understood.

The relevance of this subject was evident at the 2023 PDA Quality and Regulations Conference in Antwerp, where Pat Day, a Principal Consultant of Compliance at Lachman Consultants and a leading Manufacturing Data Integrity and Computer Validation expert, presented on supply chain resilience, data management, and AI.

A recent Lachman blog on cross-agency regulatory cooperation can be found here: Cross-Agency Cooperation Takes Next Step in Artificial Intelligence Maturity (lachmanconsultants.com). With this pending Act, companies can now consider its impact and start incorporating its provisions into their 2024 objectives and QMS impact reviews. Lachman has presented in-depth analysis on this topic and has incorporated this cultural shift into data governance maturity plans. If you have any questions, please contact us at query@lachmanconsultants.com.