European Parliament calls for international dialogue to regulate Artificial Intelligence

Pilar del Castillo, co-chair of the European Parliament's Intergroup on Artificial Intelligence, calls for dialogue between the EU and the US on regulating the technology, at a seminar organized by Herbert Smith Freehills to analyze the opportunities and risks of generative AI in business.




October 31, 2023. On the regulation of generative Artificial Intelligence (AI), "an international dialogue is essential, starting with a transatlantic dialogue between the United States and the European Union because of the standards we share." So said Pilar del Castillo Vera, co-chair of the European Parliament's Intergroup on Artificial Intelligence and former Spanish Minister of Education, during the forum "Creative Artificial Intelligence. Business use: opportunities and risks", organized in Madrid by the law firm Herbert Smith Freehills. The event brought together several experts to address the main challenges posed by AI tools which, in recent months, have become accessible consumer products that companies around the world already use in their daily operations.

Del Castillo referred to the debate currently underway in the EU over what may end up being the world's first regulation in this area. She stated that "the technological development of AI profoundly affects society and all economic sectors," so "the debate on how to regulate it is taking place in different countries and international institutions, not only in Europe." She explained that the European regulation now being drafted "pursues a very specific objective: to protect the health, safety and fundamental rights of people, generating greater confidence in Artificial Intelligence, which in turn would enhance its development and implementation." And she concluded that "being the first to regulate this matter can position and help companies operating in the EU to have more tools to develop."

For his part, Eduardo Soler Tappa, managing partner of Herbert Smith Freehills, said that "AI is experiencing exponential growth and its origins are reminiscent of the birth of the Internet itself, achieving widespread household adoption in a short time," which explains "the great challenges we face." The experts agreed that, although these tools are extremely useful for simple, repetitive, and "mechanizable" tasks, the vast amounts of data that training these models requires gives rise to several risks, extending to labor and contractual issues, privacy and identity, and copyright and trademarks, among others. Companies that use these tools must therefore protect themselves against such risks with complete and clear governance guidelines.

In this regard, Iria Calviño, partner, and Miguel Ángel Barroso, senior associate at the firm, analyzed the ethical and regulatory foundations of generative AI. They explained that "we are facing both a challenge and an opportunity, and the determining factor is to use it within the legal and ethical limits of our society." They recalled that "the EU legal system already includes the ethical principles that regulate and bind us (the Charter of Fundamental Rights) and that should also govern the ethical framework of AI." Hence, in their view, "many of the debates these new tools seem to raise are already grounded in the legal world." They added that the applicable ethical principles can be grouped into four major blocks related to fundamental rights: respect for human autonomy, fairness and equality, prevention of harm, and, finally, transparency.

Corporate policies

Along the same lines, Pablo García Mexía, director of Digital Law at Herbert Smith Freehills, argued that through generative AI "we are going to create tremendously useful tools, but ones that also generate many risks we must plan for and regulate, without waiting for their impact on our organizations to become so great that it overwhelms us." For this reason, he said, "it is essential for companies to have an internal policy that sets out how these tools are used." He put forward a series of principles such a policy must respect: guidelines for use and for training people; adaptation to the specific needs of the company; a simple and clear structure; and an orientation toward security, based on transparency and international ethical standards. He also recommended that, given its importance and the risks involved, companies treat it as a policy in its own right, separate from the organization's other policies. The objective, he said, is "for companies to be able to use AI in an effective, safe and efficient way." And he called for developing AI tools within companies themselves, "because it will make it easier to control risks by making it possible to embed the desired principles from the start."

Ana Garmendia, an associate at the firm, discussed the contents such a corporate policy should establish and develop: who it is aimed at, what risks exist in the use of AI, what conduct is expected of those who use it, what constitutes a breach, and, in such cases, what the consequences are and to whom breaches should be reported.

Finally, Oriol Pujol, Professor of Computer Science and Artificial Intelligence and Dean of the Faculty of Mathematics and Computer Science at the University of Barcelona, stressed the importance of the machine learning behind these tools (they improve their performance with experience) being "supervised learning." He pointed out that AI learns from humans, acquiring not only inferential competencies but also referential ones, that is, competencies that connect what the tool produces with the real world. And he warned of the risk that "if too much content is generated with these tools in an unregulated manner, these latter competencies may be lost over time." He also cited other risks he sees in generative AI: ignorance of the implicit purpose of language models, which can give rise to stereotypes or social biases; misuse of the tools, with potential security, privacy, and intellectual property problems; the environmental impact; and the centralization of power because, he concluded, "only a few companies have control over these AI tools."
 
