
Artificial Intelligence And Ethics: AI Regulations to Come?

The piloting phase for the assessment list in the European Commission's High-Level Expert Group (HLEG) Ethics Guidelines for Trustworthy AI ended on December 1, 2019. The list sets out the key requirements an ethical AI system should meet and provides guidance for putting them into practice.

In the piloting phase, around 700 voluntary participants (both private and public organizations) provided feedback to the HLEG, which will now prepare a revised version to be proposed to the European Commission in early 2020. The objective is to develop a playbook that organizations can use to assess the level of compliance of their AI systems and applications with the ethics guidelines.

The questions included in the assessment list cover seven areas:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination, and fairness
  • Societal and environmental well-being
  • Accountability

While this list will likely require further fine-tuning to address certain already-anticipated problems (e.g., the need for tailoring to specific industry sectors), it does seem to constitute a foundation from which the EU Commission can provide practical guidance to industry, particularly on the design, development, and implementation of trustworthy AI systems.

Whether this sets the basis for some kind of "EU standard or certification" remains to be seen. What seems clear is that the EU Commission is extending its "soft law" approach to the AI arena, probably to test the waters before creating broad EU umbrella legislation. In a significant sign, EU Commission President Ursula von der Leyen has already promised to propose legislation on the human and ethical implications of AI during her first 100 days in office.

The effectiveness of these measures, and of EU efforts to frame a regulatory environment for AI, will depend on striking the right balance between regulation and innovation. Avoiding overregulation in an increasingly globalized and interconnected economy will be key to ensuring that Europe does not miss this train.

In the short term, rather than a completely new set of rules for AI, we are likely to see an exercise in updating and modifying existing legislation (GDPR, product liability, intellectual property, insurance, etc.), combined with a mix of international standards, which in some cases are more flexible and efficient for dealing with emerging, fast-evolving technologies. Some international organizations, including the Institute of Electrical and Electronics Engineers (IEEE), the Information Technology Industry Council (ITI), and the OECD, have already started producing their own rules.

Interesting times lie ahead, not only for the thousands of companies already working to get their piece of the AI pie, but especially for their lawyers, who must anticipate how to navigate an uncertain and fragmented regulatory landscape while providing efficient, yet flexible, risk-proof legal solutions.

Miguel Viedma is group legal vice president, Digital and Business Innovation, at Capgemini. Read more Capgemini blogs here.

Sponsored by Capgemini

With more than 180,000 people in over 40 countries, Capgemini is a global leader in consulting, technology and outsourcing services. The Group reported 2015 global revenues of EUR 11.9 billion. Together with its clients, Capgemini creates and delivers business, technology and digital solutions that fit their needs, enabling them to achieve innovation and competitiveness. A deeply multicultural organization, Capgemini has developed its own way of working, the Collaborative Business Experience™, and draws on Rightshore®, its worldwide delivery model.