Navigating the EU AI Act: Charting the Future of Responsible AI

The European Parliament passes the EU AI Act
  1. The recently passed EU AI Act sets new standards for AI usage, highlighting the need for ethical and responsible AI practices, particularly for high-risk AI systems such as many applications used in the financial sector, from credit scoring to insurance.
  2. Companies need to assess their current AI systems for potential biases and discrimination risks, ensuring they align with the principles of the new regulations.
  3. With the increasing use of AI across sectors, many companies may be utilizing more AI technologies than they realize, making this the right time for a comprehensive AI audit.
  4. Non-compliance with the new regulations could pose significant reputational risks, making the adoption of responsible AI practices not just a legal necessity but also a strategic decision that increases value for all stakeholders.
  5. Given the long timelines for implementing new best practices and organizational changes, starting now is essential to ensuring compliance and promoting ethical AI use.

On June 14, 2023, the European Parliament voted to approve the EU AI Act, set to become the world’s first comprehensive AI regulation. This legislation sets new standards for AI usage and encourages the development of ethical and responsible AI practices, especially in high-risk sectors like banking and insurance.

While many companies might consider their AI usage to be minimal or low-risk, the Act’s broad definition of AI may cover more technologies than anticipated. It is therefore essential for organizations to inventory their current AI systems, understand the level of risk associated with each system, and align them with the principles of the new regulations. This includes, but is not limited to, transparency requirements, user protections, and guidelines for dealing with different levels of risk.
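In practice, such an assessment often starts with a simple inventory of AI systems tagged by the Act’s four risk tiers (unacceptable, high, limited, minimal). The sketch below illustrates this idea; the system names and their classifications are hypothetical examples for illustration, not legal guidance.

```python
# Illustrative AI-system inventory tagged with the EU AI Act's four
# risk tiers. Systems and classifications are hypothetical examples.
from dataclasses import dataclass

RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

@dataclass
class AISystem:
    name: str
    purpose: str
    risk_tier: str

    def __post_init__(self):
        # Reject typos so every system lands in a recognized tier.
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")

inventory = [
    AISystem("credit-scoring-v2", "consumer credit decisions", "high"),
    AISystem("support-chatbot", "customer service", "limited"),
    AISystem("spam-filter", "email filtering", "minimal"),
]

# High-risk systems require rigorous assessment before market placement.
needs_assessment = [s.name for s in inventory if s.risk_tier == "high"]
print(needs_assessment)  # ['credit-scoring-v2']
```

Even a lightweight register like this makes it harder for AI usage to go unnoticed, and gives compliance teams a starting point for prioritizing assessments.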

Moreover, the Act specifically identifies certain AI applications as high risk, including those used in credit scoring and insurance. Companies in these sectors, and others dealing with similar high-risk AI applications, need to ensure that these systems are subject to rigorous assessment before they are put on the market and throughout their lifecycle. This includes meeting high standards of data governance and ensuring that AI does not lead to discrimination or bias.

Non-compliance with the Act could pose significant reputational risks for companies, besides potential legal consequences. This makes the adoption of responsible AI practices not just a legal necessity but also a strategic decision for businesses. Responsible adoption of AI furthermore increases stakeholder value and enhances customer trust. As we’ve seen with Diversity, Equity, and Inclusion (DEI) initiatives in the corporate world, AI can either bolster or undermine progress. Without careful planning and implementation, AI systems can perpetuate and amplify existing biases, leading to discriminatory decision-making.

Given the long timelines needed to implement new best practices and the accompanying organizational changes, it’s crucial for companies to start now. Regular audits of AI algorithms, securing and protecting the data collected and used by AI systems, and keeping a trained human in the loop to review AI outputs and intervene when necessary are just a few steps in the right direction.
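An algorithm audit can begin with simple fairness metrics. One common (assumed here, not mandated by the Act) starting point is the disparate impact ratio: comparing approval rates across demographic groups, with ratios below roughly 0.8 often flagged for human review. A minimal sketch:

```python
# Sketch of a basic fairness check for a credit-scoring model's decisions.
# The 0.8 threshold is a common rule of thumb, not a legal standard.

def disparate_impact_ratio(decisions, groups, protected, reference):
    """Ratio of approval rates: protected group vs. reference group."""
    def approval_rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)
    return approval_rate(protected) / approval_rate(reference)

# Toy data: 1 = approved, 0 = denied
decisions = [1, 0, 1, 0, 0, 1, 1, 1, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(decisions, groups, protected="A", reference="B")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 -> flag for review
```

A metric like this is only a screening tool; a flagged ratio should trigger the human-in-the-loop review described above rather than an automatic conclusion of bias.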

While the EU is leading the way in AI regulation, it’s not the only player. The US, for example, is also moving in this space with initiatives such as the Biden-Harris administration’s recent allocation of $140 million in R&D funding to launch seven new National AI Research Institutes.

The potential for AI to impact industries in positive or negative ways comes down to how it’s developed, used, and monitored. The opportunity to create more equitable and ethical AI practices is here, and it’s up to everyone working on or with the technology to leverage it for the greater good.