
Chile
  

New proposed bill seeks to regulate artificial intelligence systems in Chile

The initiative creates a regulatory framework for providers and implementers of AI systems in the country.

Bofill Mir Abogados - On May 7, a bill to regulate artificial intelligence (AI) systems was introduced in the Chilean Chamber of Deputies as a presidential initiative.

The purpose of the bill is to promote the creation, development, innovation and implementation of AI systems that serve human beings and respect democratic principles and people's fundamental rights, guarding against the harmful effects that certain uses may cause. To that end, it establishes the following regulatory framework:

Scope of application

It applies to providers and implementers of AI systems in Chile, as well as to those located abroad whose systems are used in the country. Excluded are systems developed for national defense, research activities prior to a system being placed on the market or put into service, and components provided under free and open source licenses.

Definitions

The bill defines relevant concepts such as "AI system", "risk", and the different types of identification systems. It also defines a serious incident as any incident that results in:

(a) death or serious damage to the health of a person,
(b) serious disruption to the country’s critical infrastructure,
(c) violation of constitutionally and legally protected fundamental rights, and
(d) damage to persons or property, including environmental damage.

Principles

Among the principles that operators must observe are the following:

(a) human intervention and supervision,
(b) technical soundness and security,
(c) privacy and data governance,
(d) transparency and explainability,
(e) diversity, non-discrimination and equity,
(f) social and environmental welfare,
(g) accountability and responsibility, and
(h) protection of consumer rights.

Classification of AI systems

Four classes of AI systems are identified according to the risk involved in their use:

- (a) Unacceptable risk. Systems that use subliminal manipulation, exploit vulnerabilities, biometrically categorize people, perform social scoring, carry out real-time remote biometric identification in public spaces, selectively extract facial images, or assess emotional states in specific domains. Their use is prohibited.
- (b) High risk. Systems that present a significant risk of harm to individuals, their rights or their security. For these systems the bill establishes compliance obligations covering risk management, data governance, technical documentation, record-keeping, transparency, human supervision, accuracy and cybersecurity.
- (c) Limited risk. Systems whose use presents no significant risk of manipulation, deception or error when interacting with natural persons. They are subject to transparency duties: the people who interact with them must be informed that they are doing so.
- (d) No evident risk. All other systems that do not fall into the above categories.

Institutional framework and governance

Oversight of compliance and the imposition of any sanctions under the law fall to the Personal Data Protection Agency, to which any person may report serious incidents. Additionally, the bill creates an Artificial Intelligence Technical Advisory Council to work with the Ministry of Science, Technology, Knowledge and Innovation on matters related to the development and improvement of AI systems.

Infringements and penalties

The confidentiality of the information and data obtained from an AI system must be safeguarded, with particular protection for intellectual and industrial property rights, personal data and its processing, the public interest and national security, and the integrity of criminal or administrative proceedings. Infringements and penalties are classified into three categories:

- (a) Very serious: Placing into service or using an unacceptable-risk AI system, punishable by a fine of up to 20,000 UTM (Chilean monthly tax units).
- (b) Serious: Non-compliance with the rules for high-risk AI systems, sanctioned with a fine of up to 10,000 UTM.
- (c) Minor: Non-compliance with the transparency duties for limited-risk AI systems, punishable by a fine of up to 5,000 UTM.

The bill also provides for a special sanctioning procedure before the Agency and a judicial challenge procedure against the Agency's final resolutions. It further establishes the civil liability of operators for damages caused by the use of AI systems.

Authors:

Manuel Bernet - Partner - IP, data and technology
Jorge Tisné - Senior Associate - IP, data and technology

Bofillmir.cl
 
