What is the European AI Act?
The European AI Act is a set of rules created by the European Union to govern the development and use of artificial intelligence in EU countries. The Act applies to companies that offer AI services in the EU and to individuals in the EU who use AI.
Its aim is to balance promoting technological innovation with protecting citizens' fundamental rights and safety. The Act classifies AI systems by risk level and sets corresponding requirements to minimize risks.
This includes regulations on transparency, data quality, and the accountability of AI system operators. The European AI Act aims to build trust in AI by setting rules for ethics and sustainable growth in the industry.
The regulation is expected to come into force in May 2024.
EU AI Risk Categories
The Act takes a risk-based approach, aiming to foster innovation while safeguarding fundamental rights and safety. AI systems that pose unacceptable risks, such as infringing on fundamental rights or enabling social scoring, are banned outright to prevent harm. Low-risk applications, by contrast, are subject only to minimal requirements covering transparency and user information.
According to AI expert Kevin Geis of the AI Regional Centre at Aschaffenburg University of Applied Sciences, the majority of the regulations will apply to the high-risk category. Systems in this category are not banned outright, but they must fulfil certain requirements, many of which have not yet been clarified.
High-risk systems, which are used in areas such as critical infrastructure, healthcare, transportation, public utilities, education, employment, and law enforcement, must adhere to strict compliance measures because their failure or misuse could have serious consequences.
These systems are expected to be subject to stringent regulatory requirements to ensure their safety, transparency, and reliability. Before a high-risk AI system can be used, it must undergo strict assessments and be accompanied by detailed documentation, human supervision, strong data controls, and clear transparency measures that inform users about how the AI works.
Compliance also demands continuous monitoring and reporting, with a focus on minimizing risks related to bias, discrimination, and privacy breaches. The aim is to ensure these technologies are used responsibly, protecting citizens’ rights and maintaining public trust in AI advancements.
AI expert Kevin Geis summarises the following requirements, which have not yet been officially confirmed:
- CE marking, based on external testing, will be necessary
- Compliance with essential requirements that are yet to be defined
- These requirements relate to data management, technical documentation, record keeping, transparency, provision of information to users, human oversight, robustness, accuracy and security
- Providers must establish a risk management system that documents and manages risks over the entire lifecycle of the AI system, both when it is used as intended and in the event of reasonably foreseeable misuse (a simple illustration follows this list)
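The Act does not prescribe any particular tooling for such a risk management system. Purely as an illustrative sketch, assuming a Python-based workflow and with every name being hypothetical, the register behind such a system might be modelled like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Phase(Enum):
    """Lifecycle phases in which a risk may be identified."""
    DESIGN = "design"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    OPERATION = "operation"


@dataclass
class RiskEntry:
    """One documented risk, covering intended use or foreseeable misuse."""
    description: str
    phase: Phase
    severity: str        # e.g. "low", "medium", "high"
    mitigation: str
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    closed: bool = False


class RiskRegister:
    """Documents and tracks risks over the entire system lifecycle."""

    def __init__(self) -> None:
        self._entries: list[RiskEntry] = []

    def record(self, entry: RiskEntry) -> None:
        self._entries.append(entry)

    def open_risks(self) -> list[RiskEntry]:
        return [e for e in self._entries if not e.closed]
```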
Pre- and post-market regulation for providers
Risk management will be crucial in the implementation of the EU AI Act. Although the exact details are yet to be defined, Kevin Geis has indicated that there will be two distinct phases of risk management: pre-market and post-market for AI system providers.
Before placing an AI system on the market, providers must assess and address its risks to ensure the safe use of their technology. This involves ensuring the system's compliance with data protection, privacy, and security standards, as well as assessing its impact on fundamental rights.
Pre-market assessment
- Registration in a database
- AI quality management system
- Technical documentation
- Retention of generated logs (sketched below)

The evaluation is carried out as an “internal control” (self-certification) according to standardized criteria.
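What "keeping generated logs" will require in practice has not yet been specified. As a loose sketch only, assuming a Python-based service, a provider might persist a structured record of each inference for later audits; all names here are illustrative:

```python
import json
import logging
from datetime import datetime, timezone

# Append each inference record to a local file so it can be audited later.
logging.basicConfig(
    filename="ai_system_audit.log",
    level=logging.INFO,
    format="%(message)s",
)


def log_inference(model_version: str, input_summary: str, output_summary: str) -> None:
    """Persist one inference event as a single JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input": input_summary,
        "output": output_summary,
    }
    logging.info(json.dumps(record))


# Hypothetical example: a call-classification system logging one decision.
log_inference("demo-classifier-1.0", "customer call, 3 min", "category: sales")
```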
Post-market, providers must continuously monitor the performance and impact of their AI systems, updating risk assessments and mitigation measures as necessary. Continuous monitoring helps identify unexpected risks during use, ensuring AI systems stay safe, effective, and ethical throughout their lifespan.
Post-market enforcement
- Establish and document a post-market monitoring system
- Collection and versioning of relevant data
- Monitoring and reporting of new risks, serious incidents or malfunctions
- Mandatory reporting (a sketch of this chain follows below):
  - from user to provider
  - from provider to a national Market Surveillance Authority (MSA)
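How these reporting obligations will be operationalized is still open. The following is only a schematic sketch of the two-stage chain described above (user to provider, provider to MSA); the threshold for a "serious" incident and every name used are assumptions:

```python
from dataclasses import dataclass


@dataclass
class IncidentReport:
    """A malfunction or new risk observed while using the AI system."""
    system_id: str
    description: str
    serious: bool  # whether the incident crosses a 'serious' threshold


def report_to_provider(report: IncidentReport, provider_inbox: list) -> None:
    """Stage 1: the user files the incident with the provider."""
    provider_inbox.append(report)


def escalate_to_msa(provider_inbox: list) -> list:
    """Stage 2: the provider forwards serious incidents to the national
    Market Surveillance Authority. In reality this would be a formal
    submission process, not a function call."""
    return [r for r in provider_inbox if r.serious]


inbox: list = []
report_to_provider(
    IncidentReport("recorder-01", "calls repeatedly misclassified", serious=True),
    inbox,
)
for incident in escalate_to_msa(inbox):
    print(f"Report to MSA: {incident.system_id}: {incident.description}")
```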
Both phases are still in development and require further definition and elaboration. The information released so far gives an idea of the difficulties companies will encounter when using AI technology. The EU AI Act and its associated regulations not only present challenges but also provide guidelines and opportunities for companies.
From user to provider
The implementation method makes the difference
According to AI expert Kevin Geis, the transition from a user to a provider of AI systems occurs when an individual or organization goes beyond the mere application of existing AI technologies and begins to develop, adapt or provide its own AI-based solutions, e.g. customized GPT models.
This involves developing new AI models or adapting existing systems for specific tasks or services. When a company creates AI systems and makes them available to others through sale or licensing, it becomes a provider. This role brings additional responsibilities, including complying with regulatory requirements, ensuring transparency and assessing the risks associated with the provision of its AI systems.
However, Kevin Geis emphasizes that this point leaves a lot of room for interpretation and that more precise definitions are needed.
The role of technology in a regulated future:
An opportunity for innovation
The European AI Act marks a decisive step towards a responsible approach to AI. For companies, this set of regulations not only provides guidelines for compliance, but also a platform for innovation. By creating a clear framework, the EU AI Act encourages companies to develop trustworthy and ethical AI solutions.
“While ASC does not create its own AI but rather adapts existing AI solutions for our applications like Recording Insights, we are fully committed to ensuring customers’ security when utilizing AI-driven solutions,” says Product Manager Britta Chiaia from ASC.
ASC is dedicated to using responsible AI in its compliance recording solution, Recording Insights, and demonstrates this commitment by building on Microsoft Azure infrastructure and AI services.
“The security of customer data is part of Microsoft Azure AI policy, ensuring encrypted storage and a clear assurance that no training is done using client data,” clarifies Britta Chiaia, ASC product manager.
Technology plays a crucial role in creating a future where AI is used safely and effectively for society's benefit. ASC's solutions demonstrate how recording and analytics technologies can not only facilitate regulatory compliance, but also improve the customer experience.
On-Demand: Webinar recording with AI experts
In our latest webinar “How to Navigate the Challenges and Opportunities of the EU Artificial Intelligence Act” Britta Chiaia and Kevin Geis led a session that highlighted how AI technologies can be used to ensure compliance while minimising risk. By checking disclaimers, automatically categorising calls and identifying sensitive information, AI provides a robust solution for regulatory compliance.
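The webinar does not disclose how these functions are implemented, and the following is in no way ASC's method. It is only a toy sketch of the general idea of flagging sensitive information in a call transcript, using simple pattern matching where a production system would rely on trained models:

```python
import re

# Naive patterns standing in for what a production system would detect
# with trained models; both are simplified for illustration.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}


def flag_sensitive(transcript: str) -> list[str]:
    """Return the labels of sensitive data types found in a transcript."""
    return [
        label
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(transcript)
    ]


print(flag_sensitive("My card number is 4111 1111 1111 1111."))
# -> ['credit_card']
```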
Speaker and webinar topics:
- Responsible use of AI technologies
- Potential to effectively fulfil compliance requirements and seize new opportunities
- Importance of having the right tools and deep regulatory understanding to minimize risk and enhance competitiveness