Forget the cadence of the traditional three-to-five-year business plan when it comes to Artificial Intelligence (AI). The pace of change in this environment demands indicators that are tracked on a quarter-by-quarter basis. In Q1, we wrote about the fast-evolving environment from both the legal and the regulatory perspectives in blogs that can be found here and here. To avoid getting overwhelmed, it is useful to take the relevant new information and integrate it into existing regulatory frameworks. Guidances such as ICH Q10, Pharmaceutical Quality System, can help companies filter the emerging AI-related laws and regulations into their own PQS or QMS.

ICH Q10 specifies key “Enablers” that help achieve its stated objectives: achieve product realization, establish and maintain a state of control, and facilitate continual improvement. The two enablers with direct applicability to AI are Knowledge Management and Quality Risk Management (QRM). The applicability of Knowledge Management will be the subject of a future blog; QRM is examined in more detail below. Companies are well advised to apply greater critical thinking to areas such as QRM, moving it from a tactical activity to a strategic program. This cannot be overstated, since the predominant hypothesis across the globe is that applying AI to a company’s operations is an inherent advantage rather than a risk that must be controlled to lessen potential impact.

Meanwhile, in the U.S. legal environment, a recent significant event (among others) was:

  • Introduction of the bipartisan bill, “The Future of AI Innovation Act,” which has the following highlights:
    • “…lays the foundation to maintain U.S. leadership in the global race to develop artificial intelligence…”
    • Authorizes the “U.S. AI Safety Institute at the National Institute of Standards and Technology (NIST) to promote the development of voluntary standards…”
    • Outlines the need for a strong partnership between government, private sector, and academia.
    • Directs federal science agencies to make available “curated datasets” to accelerate AI applications.

Noteworthy in this press release is the notion that the Act can also protect the U.S. against foreign adversaries that are building and competing in the AI space (see here). A future blog will address the concept of using AI to mitigate drug shortages.

Moving to the EU, delivery against its already-stated AI strategies has produced some noteworthy updates, such as:

  • The European AI Office plans a webinar on “Risk Management Approach in the AI Act: Must-know fundamentals for regulatory compliance.” This is intended to assist in the implementation of the EU AI Act, scheduled to enter into force in June/July 2024.
  • The European Commission is promoting industry engagement in the AI Pact. Analogous to the proposed U.S. Act (described above) in its call for government-industry collaboration, the EC initiative is considerably more mature and plans significant deliverables in 3Q and 4Q 2024.
  • The European Federation of Pharmaceutical Industries and Associations (EFPIA) has issued a statement on the application of the EU AI Act in the medicinal product lifecycle (here).
  • The MHRA has launched a regulatory sandbox model called “AI Airlock” (here).

So, what do the points above have to do with knowledge management and risk management? Keeping abreast of developments in the political and legal landscape is crucial to developing strategic risk-mitigation plans. Clear similarities run across the examples provided: each identifies AI as a benefit, while also signaling that further plans are needed to determine how this technology can be regulated. These plans, in turn, can enable a new level of maturity in a company’s deployment of ICH Q10, turning risk into continuous-improvement opportunities.