
Consequences of AI: The Good, the Bad, the Ethically Responsible

Author: Chris McClean, global lead for digital ethics, Avanade

It’s hard to think of a single industry, or even an aspect of our personal lives, that is not in some way changing because of the technologies we collectively call “artificial intelligence” (AI). From education, health care, and finance to how we work, play, and engage with other people, advances in AI are increasingly pervasive and show no signs of slowing down.

So the questions are: What makes us so confident in AI technologies that we let them guide so many aspects of our lives? And given their influence, how do we make sure these systems reflect our ethical values?

At Avanade, we know our clients take digital ethics seriously when dealing with AI. In a recent survey, we found that 83% of business and tech decision-makers feel that digital ethics is a foundation for successful AI. And nearly all respondents (95%) think there could be negative consequences from not considering digital ethics when adopting AI.

But what exactly does it mean to have ethics as a foundation for success? Let’s look at a few cases in which organizations have missed the mark, then some that have successfully integrated ethical principles into their AI technology.

The bad: Credit gone wrong, biased incarceration, foul language

There are countless examples of AI mistakes from around the world, some mildly annoying, others extremely harmful. Here are three examples that stand out for me:

In November 2019, Apple and Goldman Sachs found themselves in hot water over claims of sexism when customers applying for the companies’ joint Apple Card discovered that women were offered significantly lower lines of credit than their male partners, even in cases where they shared assets evenly. A customer service rep blamed an automated algorithm.

In May 2016, ProPublica published a now widely referenced report that uncovered severe racial bias in a software application called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), which court judges across the U.S. had been using to assess convicted people’s risk of recidivism when assigning prison sentences.

On June 11, 2020, OpenAI announced a new limited-beta program for its GPT-3 language generation model. Amidst positive response and media coverage highlighting its impressive capabilities, TechCrunch and You The Data both detailed some of the ethical failures models like this continue to face. Notably, much like Google’s biased search and auto-fill problems or Microsoft’s Tay chatbot learning demeaning language from fellow Twitter users, certain prompts to GPT-3 have yielded wildly inappropriate responses.

There are several important lessons to take away from these examples:

  • The Apple Card story shows how a single mistake in an algorithm can put huge segments of a population at a significant disadvantage (in this case, financial), yet go completely unnoticed until someone decides to dig a little deeper.
  • The COMPAS failure teaches us how algorithms may perpetuate, exacerbate or even create prejudices without careful testing and oversight (one basic check of this kind is sketched after this list), and the potential for devastating human impact cannot be overstated.
  • The concerns about GPT-3 show how machine-generated content might wind its way into the countless, often mundane systems that feed us information every day; and without close scrutiny, this could mean a nearly imperceptible but extremely powerful reinforcement of harmful biases.
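To make “careful testing and oversight” concrete, here is a minimal, hypothetical sketch of one of the simplest fairness checks: comparing a model’s favorable-outcome rates across demographic groups and flagging large gaps for human review. The data file, column names and the 0.8 threshold (the common “four-fifths” rule of thumb) are illustrative assumptions, not a description of any system mentioned above; real audits use richer metrics, legal guidance and domain expertise.

```python
# Hypothetical bias-audit sketch: compare model outcome rates across groups.
# The file name, column names and 0.8 threshold are illustrative only.
import pandas as pd

# Assumed file with one row per applicant: the model's decision ("approved",
# 1 or 0) and a protected attribute recorded for auditing purposes.
df = pd.read_csv("model_decisions.csv")

# Rate of favorable outcomes per group.
rates = df.groupby("gender")["approved"].mean()
print(rates)

# Disparate-impact ratio: lowest group rate divided by highest group rate.
# Ratios well below 0.8 are a common trigger for deeper investigation.
ratio = rates.min() / rates.max()
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact -- escalate for human review.")
```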

The good: Interpretable AI and AI analysis for ALS

Of course, it’s not all bad. When we fully consider the impacts (on people, on society and on the environment) of AI technology as we design, develop, deploy and operate it, we can reduce risks like those described above. We can also go a step further to build trust and assure that AI makes a positive impact. Here’s an example of each:

In a recent blog post, Microsoft highlighted how Scandinavian Airlines is using a system built with Azure Machine Learning to identify potential fraud in its loyalty program, then using Microsoft’s InterpretML to understand why the system flagged certain transactions for investigation. The head of data analytics and artificial intelligence for the airline explained that this interpretability helps establish trust among users and customers, which is necessary for adoption and positive business impact.
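For readers curious what that kind of interpretability looks like in practice, below is a minimal sketch using Microsoft’s open-source InterpretML library, the tool named in that blog post. The fraud scenario, feature names and data file are invented for illustration; this is not Scandinavian Airlines’ actual implementation.

```python
# Hypothetical sketch: train a glass-box fraud model with InterpretML and
# inspect why individual transactions were flagged. Data and feature names
# are invented for illustration.
import pandas as pd
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier

# Assumed loyalty-program data: one row per transaction, with a historical
# investigator label in "is_fraud".
df = pd.read_csv("loyalty_transactions.csv")
X = df[["points_redeemed", "account_age_days", "redemptions_last_30d"]]
y = df["is_fraud"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Explainable Boosting Machines are glass-box models: accuracy comparable to
# boosted trees, with every prediction decomposable into per-feature terms.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global view: which features drive fraud flags overall.
global_explanation = ebm.explain_global()

# Local view: why specific transactions were scored the way they were --
# the kind of explanation an investigator (or customer) could be shown.
local_explanation = ebm.explain_local(X_test.iloc[:5], y_test.iloc[:5])

# In a notebook, InterpretML's dashboard can render these explanations:
# from interpret import show
# show(global_explanation)
# show(local_explanation)
```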

Showing how AI can be used for good, Avanade has worked with nonprofits such as the Answer ALS consortium, a group of research, industry and technology partners addressing the challenges of amyotrophic lateral sclerosis (ALS). Using Microsoft Azure, Avanade built a secure system where researchers can analyze genetic information from participating ALS patients. With this new set of technologies, scientists have already been able to identify a gene linked to ALS.

As we apply AI technologies to a vast range of use cases, these lessons are important to keep in mind. We need to pay close attention to how stakeholders, whether data subjects or users, interact with AI systems, and maintain their trust by ensuring security, privacy and transparency. We also need to understand, wherever possible, how algorithms generate their results, and maintain proper oversight so these systems operate appropriately.

Your next step: Creating a digital ethics framework for your organization

Your first step toward addressing ethical concerns in your organization’s AI efforts is to build an ethical framework that establishes a set of values, guiding principles and a governance structure. You may already be thinking about this: our research found that 66% of organizations implementing AI are also in the process of creating a digital ethics framework, even though only 28% have actually implemented one.

To help our clients on this journey, we’ve established digital ethics as one of the five pillars of our AI Maturity model. The model guides decision-makers through the strategy, culture, ethics, technology and data practices they need for their AI projects and programs to be successful. This isn’t an easy task, but we can all agree it’s important to take the time to make sure we get AI technology right.


Author Chris McClean is global lead for digital ethics at Avanade. Read more from Avanade here.