By Elena Mora, Head of Privacy and Data Protection at MAPFRE
The evolution of artificial intelligence (AI) has transformed multiple sectors and promises unprecedented technological advances, but it has also raised serious questions about the privacy and security of personal data. AI's ability to process and analyze large volumes of information allows for mass data exploitation, which concerns both users and regulators. Against this backdrop, legislation seeks to establish a regulatory framework that protects the fundamental rights of individuals, including the right to privacy, without hindering the potential of AI. The European Union (EU) has been a pioneer in this area with the General Data Protection Regulation (GDPR), which establishes strict guidelines on the processing of personal data and grants citizens control over their personal information.
Furthermore, the use of personal data by governments has sparked an important ethical and legal debate. In the name of national security and the public interest, some governments have justified the extensive use of data surveillance and analysis technologies. This practice has raised concerns about the balance between security and privacy, as well as the potential for abuse of power. The introduction of new legislation aims to establish a specific regulatory framework for the application of AI.
The legislation establishes a regulatory framework by classifying risk levels
The new European Union AI Regulation is the world's first comprehensive legal framework to regulate this technology. Approved in Brussels by the European Parliament on March 14 with more than 500 votes in favor, and in force since August 1 (with different implementation deadlines for each obligation), the legislation ensures that AI systems deployed in the European market are secure and respect citizens' fundamental rights.
To this end, it introduces a classification of AI systems into four risk levels (a simplified code sketch follows the list):
• Unacceptable Risk: certain applications of AI will be prohibited outright because of their potential to violate fundamental rights, such as the untargeted scraping of facial images from the Internet.
• High Risk: systems with a significant impact on the safety and rights of citizens, which will require a thorough assessment of their impact on fundamental rights, along with transparency measures, before they are placed on the market.
• Limited Risk: applications that must comply with transparency obligations, such as conversational systems (e.g., ChatGPT), which must inform users that they are interacting with a machine.
• Minimal or No Risk: most AI systems fall into this category, for which the AI Regulation proposes minimal regulation to promote innovation and technological development.
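For teams building internal compliance tooling, this tiered model maps naturally onto a simple data structure. The following is a minimal, illustrative Python sketch, not taken from the regulation or from any MAPFRE system: the tier names, example comments, and the obligations mapping are hypothetical simplifications of the categories described above.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """Hypothetical model of the four risk tiers described above."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g., untargeted facial-image scraping)
    HIGH = "high"                  # allowed, subject to strict pre-market obligations
    LIMITED = "limited"            # allowed, subject to transparency obligations
    MINIMAL = "minimal"            # allowed, with minimal additional regulation

# Hypothetical mapping from tier to headline obligations; the real
# regulation is far more granular than this simplification.
OBLIGATIONS = {
    AIActRiskTier.UNACCEPTABLE: ["do not deploy in the EU market"],
    AIActRiskTier.HIGH: [
        "fundamental-rights impact assessment",
        "risk management system",
        "transparency documentation",
    ],
    AIActRiskTier.LIMITED: ["inform users they are interacting with a machine"],
    AIActRiskTier.MINIMAL: [],
}

def obligations_for(tier: AIActRiskTier) -> list[str]:
    """Return the headline obligations for a given (hypothetical) risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    # A chatbot falls under limited risk in this simplified model.
    print(obligations_for(AIActRiskTier.LIMITED))
```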
The legislation will have a significant impact on the use of biometrics-based technologies, especially those considered high risk or unacceptable: these applications will face strict controls or outright prohibitions to protect privacy and other fundamental rights. Compliance will be overseen by a new AI Office within the European Commission, together with the national supervisory authorities that must also be created, which will supervise the application of the regulation by the Member States and the European Union.
In Spain, a specific supervisory authority on artificial intelligence has already been created: the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), headquartered in A Coruña.
The insurance industry, particularly in the areas of life and health, is classified as high risk in Annex III of the regulation, except in exceptional cases where the development or deployment of an AI system does not pose a significant risk to health, safety, or fundamental rights and freedoms. Insurance companies must study the regulation to understand these exceptions and operate accordingly.
How the new legislation will affect organizations
The European Union's new AI Regulation will likely have a considerable influence on companies, demanding a rigorous classification of AI systems based on their risk level and the mandatory application of risk management systems for high-risk AI systems. This regulatory framework aims to ensure the protection of privacy and the other fundamental rights and freedoms of individuals, obliging organizations to adopt measures that guarantee data protection, transparency in data management, and management of the risks arising from the development and deployment of AI, as well as proactive accountability models in compliance with the regulations.
Its application will be a significant challenge, especially for startups and SMEs given the costs of adaptation. Under the new AI Regulation, companies that develop or use AI systems in the EU will need to (a rough sketch of these steps follows the list):
• Evaluate the risk level of their AI systems in accordance with the criteria established in the regulation.
• Implement appropriate mitigation measures for systems identified as high risk, such as techniques to ensure data quality, mechanisms to correct biases and ways to guarantee transparency and justification of decisions made by AI systems.
• Ensure ongoing compliance with the GDPR, especially with regard to processing personal data using AI-based systems.
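As a rough illustration of how these three steps might be wired together in an internal inventory tool, consider the sketch below. Everything here is hypothetical: the AISystemRecord fields, the mitigation names, and the checks are simplifications invented for this example, not requirements drawn verbatim from the regulation.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for an AI system under review."""
    name: str
    risk_tier: str                       # "unacceptable" | "high" | "limited" | "minimal"
    mitigations: set[str] = field(default_factory=set)
    gdpr_basis_documented: bool = False  # lawful basis for personal-data processing recorded?

# Hypothetical minimum mitigations for high-risk systems, echoing the list above.
HIGH_RISK_MITIGATIONS = {"data-quality controls", "bias correction", "decision transparency"}

def compliance_gaps(system: AISystemRecord) -> list[str]:
    """Return the open compliance actions for one system (illustrative only)."""
    gaps: list[str] = []
    if system.risk_tier == "unacceptable":
        gaps.append("prohibited use case: halt deployment")
    elif system.risk_tier == "high":
        # Step 2: check that each expected mitigation is in place.
        gaps.extend(f"missing mitigation: {m}"
                    for m in sorted(HIGH_RISK_MITIGATIONS - system.mitigations))
    # Step 3: GDPR compliance applies regardless of tier when personal data is processed.
    if not system.gdpr_basis_documented:
        gaps.append("document GDPR lawful basis for personal-data processing")
    return gaps

if __name__ == "__main__":
    record = AISystemRecord(name="health-claims triage", risk_tier="high",
                            mitigations={"bias correction"})
    for gap in compliance_gaps(record):
        print(gap)
```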
MAPFRE goes beyond the regulatory framework
At MAPFRE, we have adopted an anticipatory strategy toward future artificial intelligence regulations, taking advantage of the corporate privacy and data protection model and the synergies between data protection regulations and AI regulations. We adapted to the regulation in advance, analyzing the associated drafts as they became available. This proactivity is evident in the early identification of specific controls to mitigate the risks associated with AI systems. In addition, we have a multidisciplinary team with a particular focus on the classification and management of AI systems, as well as on the modification of existing procedures to incorporate the specific requirements associated with this technology.
At the same time, at MAPFRE we have made progress in risk assessment and in adopting responsible AI, even testing external tools for efficient and ethical management. We are exploring the adaptation and possible merging of specific risk management methodologies for AI systems with other corporate risk strategies, with the intention of fully integrating AI into the existing risk management system.
The initiative is led by the Privacy and Data Protection department, where we have created a working group that addresses the various applicable requirements and defines the user guides for this type of system.
The Corporate Security Division is convinced that anticipating regulations is what allows us to work toward better risk management, especially given the tight deadlines for regulatory application. This gives us the means to streamline the implementation of certain requirements and make it more efficient.