
Artificial Intelligence: With Great Power Comes Great Responsibility

The constant stream of stories about the possibilities and dangers of artificial intelligence might give you the impression that we are dealing with a brand-new phenomenon. Nothing could be further from the truth. Artificial intelligence (AI) research was already underway in the middle of the twentieth century and subsequently went through what Gartner calls a hype cycle. After being hailed as the solution for almost everything, it failed to deliver on inflated expectations and a period known as the AI winter set in. With the actual results of AI proving less than convincing and disillusionment spreading, understandably few wanted to invest in its further development.

Fast forward to the present, where advances in technology and the declining cost of processing and storage have changed the scene. To be clear, there are already many applications of AI. Examples include (semi-)autonomous cars, speech recognition (Alexa, Siri), machine translation (Google Translate), and text and video analytics.

AI's Vast Application Potential

Paul van der Linden, principal consultant, Capgemini

It’s good to understand that it is not only the big tech companies – Google, Amazon, Facebook, and Apple (GAFA) – or their Asian counterparts – Baidu, Alibaba, Tencent, and Xiaomi (BATX) – that are exploring and implementing the many applications of AI. The Chicago Police Department has been using a combination of camera feeds, audio from microphones, and AI models to decide whether to send a police car or an ambulance to sites generating suspicious images and sounds. The Dutch Tax Agency uses sophisticated models – based on analysis of historical data – that distinguish between income tax statements that need to be investigated and those that don’t. Uber uses an algorithm to match drivers to people requesting a ride. In fact, it’s hard to find a company that would not benefit from implementing AI.

As Andrew Ng – who played a major role in transforming both Google and Baidu into AI-driven organizations – points out, most current applications fall into the category of input-output relations: you bulk-feed a lot of data into an algorithm and end up with some, hopefully relevant, output. Ng therefore argues that we are only at the beginning of what AI can be. He is more interested in artificial general intelligence (AGI), which would resemble the way children learn (not by being bulk-fed tagged information), something he is not sure will happen in his lifetime.
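To make that input-output pattern concrete, here is a minimal, purely illustrative sketch. It assumes Python and the scikit-learn library (the article names no tooling), and the tiny dataset is made up for illustration only:

    # A minimal sketch of the "input-output" pattern: bulk-feed labelled
    # examples to an algorithm and get a predictive mapping back.
    # The data below is hypothetical and exists only for illustration.
    from sklearn.linear_model import LogisticRegression

    X_train = [[0.2, 1.0], [0.9, 0.3], [0.4, 0.8], [0.7, 0.1]]  # inputs
    y_train = [0, 1, 0, 1]                                      # tagged outputs

    model = LogisticRegression()
    model.fit(X_train, y_train)         # learn the input-output relation

    print(model.predict([[0.5, 0.5]]))  # the hopefully relevant output

Everything the model "knows" comes from the tagged examples it was fed, which is exactly why Ng describes this as narrow AI rather than the way children learn.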

For now, however, there is still much to gain from applying this narrow form of AI. Andrew Ng’s comments make clear that AI is an evolution and that we are just at the beginning. They also make clear that whether to do AI is not really a choice; the choice is when and how to join that evolution.

With the ongoing adoption of AI in different fields, it has also become clear that the big expectations for AI come with big challenges. Despite the best intentions, AI applications have been known to worsen rather than limit racially discriminatory practices, and to weaken rather than strengthen workforce diversity. Self-driving cars have hit (and even killed) pedestrians, which raises the question of whether the responsibility lies with the designer of the AI algorithms, the manufacturer of the car, or the driver.

Organizations applying AI, or considering doing so, should therefore make sure that they have actively considered and addressed its implications for topics such as data trust, bias, ethics, and integrity. Trust, visibility, perception, and reputation are paramount in the digital era and can have a direct and substantial impact on your business.

Help with this important yet difficult matter comes from the Assessment List for Trustworthy Artificial Intelligence (ALTAI), provided by the High-Level Expert Group on Artificial Intelligence set up by the European Commission. It proposes the following seven requirements:

  1. Human agency and oversight: humans should be part of the process, to oversee and guide.
  2. Data privacy and governance: make sure privacy is guarded through proper governance.
  3. Technical robustness and safety: the system should keep delivering trustworthy results, even when circumstances change.
  4. Accountability: closely related to risk, this is about taking responsibility for the development, deployment, and use of the AI system.
  5. Transparency: can you trace and explain how a result came about, and discuss the limitations of the AI solution?
  6. Diversity, non-discrimination, and fairness: AI solutions should be available to all, be fair, and not discriminate as a result of, for instance, historical bias, incomplete data, or bad governance models.
  7. Societal and environmental well-being: the AI system is part of a broader context in which society and the environment are also stakeholders.

All of the above requirements must be duly considered and built into any AI approach, by default and by design. The question then becomes how to implement them in a practical way without it becoming a huge task.
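One way to start small is to turn individual requirements into simple, repeatable checks. The sketch below is a minimal, hypothetical illustration loosely inspired by requirement 6 (non-discrimination and fairness): it compares how often a model produces a favourable outcome for different groups. The predictions, group labels, and numbers are invented and are not part of ALTAI itself:

    # A minimal, hypothetical fairness check: compare the rate of
    # favourable model outcomes per group of a protected attribute.
    from collections import defaultdict

    predictions = [1, 0, 1, 1, 0, 1, 0, 0]                    # 1 = favourable outcome
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]          # hypothetical group labels

    totals, favourable = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        favourable[group] += pred

    rates = {g: favourable[g] / totals[g] for g in totals}
    print(rates)  # e.g. {'a': 0.75, 'b': 0.25}: a gap worth investigating

A check like this does not prove fairness on its own, but running it routinely makes the requirement concrete rather than leaving it as a statement of intent.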

How to Prosper With AI

Erwin Vorwerk, vice president, insights and data, Capgemini

In a subsequent article we will explore the factors that enable an organization to implement AI and prosper from it. For now, let’s briefly highlight these factors:

  1. Know why: the business case. AI should deliver real benefits within set timelines. This will avoid it becoming an expensive hobby.
  2. Implement: build or buy? AI is a process, and organizations need to decide whether they want to build it themselves or start fast by buying.
  3. Trust, transparency, and ethics: Great results coming out of a black box solution will not inspire trust and could trigger questions on the ethics of the organization.
  4. Go cloud: when requirements are still unclear, or scalability is a consideration (it should be!), it stands to reason to consider a cloud solution.
  5. Enterprise and third-party data: successful AI depends on vast amounts of good-quality data – both internal and contextual (third-party) data. AI is a process and needs to be handled as such; the technical part is important, but by no means the complete picture.
  6. A diverse team: AI thrives on a team of people with different backgrounds and interests. The key differentiator is asking the right question.
  7. Support from the top: continuing C-level support is needed to set up, continue, and scale AI. It helps in overcoming resistance to change and weathering the occasional road bump.

Artificial intelligence is truly a game changer. We now have the means to continue on the promising AI journey towards the artificial general intelligence outlined by Andrew Ng. However, if we do not address the topics mentioned above, the chances of another AI winter will increase: not because the possibilities are not there, but because we fail to address these topics in a responsible and structured manner and are overtaken by negative perception and sentiment. With great power comes great responsibility.


Author Paul van der Linden is an expert in data privacy and GDPR, and Erwin Vorwerk is vice president, insights and data practice, Capgemini. Read more from Capgemini here.

Sponsored by Capgemini

With more than 180,000 people in over 40 countries, Capgemini is a global leader in consulting, technology and outsourcing services. The Group reported 2015 global revenues of EUR 11.9 billion. Together with its clients, Capgemini creates and delivers business, technology and digital solutions that fit their needs, enabling them to achieve innovation and competitiveness. A deeply multicultural organization, Capgemini has developed its own way of working, the Collaborative Business Experience™, and draws on Rightshore®, its worldwide delivery model.
Learn more about us at www.capgemini.com.