IBM Abandons Facial Recognition Technology Amid Racial Profiling Concerns
Amid concerns that artificial intelligence (AI) could be used for racial profiling, IBM no longer offers general-purpose facial recognition or analysis software, according to a letter from IBM CEO Arvind Krishna to Congress.
The letter, focused on Racial Justice Reform, raises concerns about how facial recognition software and related technologies could be used for mass surveillance and racial profiling.
In a portion of the letter, Krishna writes:
“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.
Artificial Intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.
Finally, national policy also should encourage and advance uses of technology that bring greater transparency and accountability to policing, such as body cameras and modern data analytics techniques.”
IBM’s letter to Congress reinforces the technology industry’s recent outcry against racism and social injustice. Many technology company CEOs have called for social justice and police reform following the police killings of George Floyd and Breonna Taylor and the killing of Ahmaud Arbery.
Artificial Intelligence Concerns
Concerns about the use of artificial intelligence for mass surveillance and potential racial profiling have grown in recent months. Many technology companies — including Apple, Google, IBM and Microsoft — have revised their AI strategies and/or called for government regulations to mitigate concerns about potential privacy, bias and security issues.