Artificial Intelligence & Facial Recognition: Regulations & Privacy Issues Updates
Artificial Intelligence (AI) technology promises to automate and reshape business, commerce and consumer activities worldwide. But AI also triggers concerns about privacy, potential racial bias, security and plenty more.
As a result, the AI industry and governments worldwide will likely blend innovations with AI regulations. That means channel partners will need to maintain a careful balancing act — driving AI innovations while carefully considering customer privacy, data protection and other issues.
AI-related facial recognition technology is particularly controversial: the underlying software exhibits widespread racial bias, according to a 2019 NIST study that evaluated the effects of race, age and sex on facial recognition software.
To help channel partners understand the issues, this regularly updated blog describes AI viewpoints, milestones, shortcomings, ethical issues, potential biases, and emerging industry regulations.
Article Updates: Originally published January 21, 2020. Article updated regularly with news updates & related analysis. Latest updates involve potential United States AI regulations and FDA recommendations.
Artificial Intelligence Regulations, Policies, Innovations & Viewpoints
Amazon AI Policies: Multiple updates…
- Amid concerns that AI and facial recognition technology may lead to racial profiling, Amazon will not make its Rekognition software available to police departments for the next year. Amazon is hoping Congress steps up to regulate the facial recognition industry. Source: ChannelE2E, June 10, 2020.
- Amazon has acquired seven AI companies since 2010. Source: PC Mag, July 24, 2020.
Apple AI Policies: Multiple updates…
- Apple recently acquired Xnor.ai and decided to terminate the startup's work on Project Maven, a U.S. Department of Defense effort to use AI software to analyze imagery captured by military drones. Source: The Information, January 29, 2020.
- Apple has acquired 20 AI companies since 2010. Source: PC Mag, July 24, 2020.
Australia AI Policies: Britain’s data watchdog and its Australian counterpart have opened a joint investigation into the personal information handling practices of facial recognition technology company Clearview AI. Source: Reuters, July 9, 2020.
Canada AI Policies: Canadian privacy authorities have launched an investigation into New York-based Clearview AI to determine whether the firm’s use of facial recognition technology complies with the country’s privacy laws, the agencies said. Source: Reuters, February 21, 2020.
Clearview AI: Britain’s data watchdog and its Australian counterpart have opened a joint investigation into the personal information handling practices of facial recognition technology company Clearview AI. Source: Reuters, July 9, 2020.
Consumer Activism: A coalition of more than 40 consumer, privacy, and civil liberties organizations released a sign-on letter supporting a campaign that urges administrators to keep facial recognition technology off of college and university campuses. Fight for the Future, an organization that drives online protests, is deeply involved in the effort. The signers include the ACLU, FreedomWorks, National Center for Transgender Equality, Liberty Coalition, Electronic Frontier Foundation, Color of Change, Jewish Voice for Peace, Jobs with Justice, Consumer Federation of America, Mijente, Campaign for a Commercial Free Childhood, and the National Immigration Law Center. Source: Fight for the Future, February 13, 2020.
European Union AI Policies: Multiple updates…
- The European Union is considering banning facial recognition technology in public areas for up to five years, to give it time to work out how to prevent abuses. Source: Reuters, January 16, 2020.
- What are the EU’s plans for regulating AI? Some perspectives are here. Source: SiliconRepublic, February 14, 2020.
- The European Union unveiled proposals to regulate artificial intelligence that call for strict rules and safeguards on risky applications of the rapidly developing technology. Source: New York Post and Associated Press, February 19, 2020.
- EU regulators unveiled plans aimed at placing more restrictions on machine learning-enabled technologies in fields ranging from public surveillance cameras to cancer scans and self-driving cars. Source: Wall Street Journal, February 19, 2020.
Facebook: Multiple updates…
- The social media giant has acquired eight AI companies since 2010. Source: PC Mag, July 24, 2020.
- Facebook Inc won preliminary approval from a federal court for settlement of a lawsuit that claimed it illegally collected and stored biometric data of millions of users without their consent. Source: Reuters, August 19, 2020.
Facial Recognition Technology, Race Detection Software and AI:
- IBM has stopped selling its facial recognition technology amid concerns about potential misuse of the technology. Somewhat similarly, Amazon and Microsoft have stopped selling facial recognition technology to police departments amid concerns about potential racial bias in the technology. On the flip side, NEC, Clearview AI Inc., and Ayonix Corp., which sell facial recognition products to police agencies in the U.S. and around the world, said they have no plans to change their sales strategies. Sources: ChannelE2E and The Wall Street Journal, June 12, 2020.
- Race Detection Software: More than a dozen companies offer artificial-intelligence programs that promise to identify a person’s race, but researchers and even some vendors worry it will fuel discrimination. Source: The Wall Street Journal, August 14, 2020.
Google AI Policies: Multiple updates…
- The head of Google and parent company Alphabet has called for artificial intelligence (AI) to be regulated. Writing in the Financial Times, Sundar Pichai said it was “too important not to” impose regulation but argued for “a sensible approach.” He said that individual areas of AI development, like self-driving cars and health tech, required tailored rules. Source: BBC, January 20, 2020.
- Google has acquired 14 AI companies since 2010. Source: PC Mag, July 24, 2020.
IBM AI Policy: Multiple updates including:
- The company called for rules aimed at eliminating bias in artificial intelligence to ease concerns that the technology relies on data that bakes in past discriminatory practices and could harm women, minorities, the disabled, older Americans and others. Source: Bloomberg, January 21, 2020.
- IBM formally announced the IBM Policy Lab — an initiative aimed at providing policymakers with recommendations for emerging problems in technology. IBM also outlined a set of priorities for AI regulation, including several aimed at compliance and explainability. Source: VentureBeat, January 21, 2020.
- IBM outlines five AI policy imperatives. Source: ChannelE2E, January 22, 2020.
- IBM no longer offers general purpose facial recognition or analysis software amid concerns about potential bias, according to a letter about racial justice reform from IBM CEO Arvind Krishna to Congress. Source: ChannelE2E, June 9, 2020.
- IBM said the U.S. Commerce Department should adopt new controls to limit the export of facial recognition systems to repressive regimes that can be used to commit human rights violations. Source: Reuters, September 11, 2020.
Microsoft AI Policy: Multiple updates including…
- Referring to facial recognition technology, Microsoft outlines the need for public regulation and corporate responsibility. Source: Microsoft, July 13, 2018.
- Microsoft outlines why it’s important for governments in 2019 to start adopting laws to regulate facial recognition technology. Source: Microsoft, December 6, 2018.
- Microsoft VP and Chief Legal Counsel Brad Smith cautions against the European Commission’s call for a temporary ban on AI facial recognition technologies. Source: ZDnet, January 21, 2020.
- Microsoft will not sell facial recognition technology to police departments in the United States until there is a national law in place, grounded in human rights, that will govern this technology. Source: The Washington Post, June 11, 2020.
- Microsoft has acquired 10 AI companies since 2010. Source: PC Mag, July 24, 2020.
Musk, Elon: The Tesla and SpaceX CEO is calling for regulation on organizations developing advanced artificial intelligence, including his companies. Elon Musk tweeted, “All orgs developing advanced AI should be regulated, including Tesla.” Source: NY Post, February 22, 2020.
New York City: Companies in New York City that use artificial intelligence and other technology to make hiring, compensation and other human-resources decisions would face tighter restrictions under a new bill. Source: The Wall Street Journal, February 27, 2020.
NIST – AI and Facial Recognition Concerns: This report suggests facial recognition technology may carry built-in AI biases. Source: NIST, February 19, 2020.
OpenAI: A new language model, OpenAI’s GPT-3, is making waves for its ability to mimic writing, but it falls short on common sense. Some experts think an emerging technique called neuro-symbolic AI is the answer. Source: The Wall Street Journal, August 11, 2020.
Oregon: The Portland, Oregon City Council passed two ordinances to ban the use of facial recognition by both public and private entities. This ban prohibits corporations from using facial recognition in “places of public accommodation.” Source: Fight for the Future, September 9, 2020.
The Pope: Vatican officials plan to release principles promoting the ethical use of artificial intelligence (AI), with the backing of Microsoft and IBM as the first two technology industry sponsors. The “Rome Call for AI Ethics” asserts that the technology should respect privacy, work reliably and without bias, consider “the needs of all human beings” and operate transparently – an area of ongoing research because AI systems’ decisions are often inscrutable. Source: Reuters, February 28, 2020.
Rite Aid: Over about eight years, the American drugstore chain Rite Aid Corp quietly added facial recognition systems to 200 stores across the United States, in one of the largest rollouts of such technology among retailers in the country, a Reuters investigation found. Source: Reuters, July 28, 2020.
United Kingdom AI Policy:
- Britain’s most senior police officer called on the government to create a legal framework for police use of new technologies such as artificial intelligence. Speaking about live facial recognition, which police in London started using in January 2020, London police chief Cressida Dick said that she welcomed the government’s 2019 manifesto pledge to create a legal framework for the police use of new technology like AI, biometrics and DNA. Source: Reuters, February 24, 2020.
- Here’s what British insurers are thinking about AI regulations: Source: National Law Review, March 9, 2020.
- Britain’s data watchdog and its Australian counterpart have opened a joint investigation into the personal information handling practices of facial recognition technology company Clearview AI. Source: Reuters, July 9, 2020.
United States AI Policy: Multiple updates…
- White House officials in January 2020 formally announced how the Office of Science and Technology Policy wants federal agencies to approach regulating new artificial intelligence-based tools and the industries that develop AI tech. In particular, federal agencies should avoid “overreach.” Sources: Recode and Vox, The Verge, January 7 and January 8, 2020.
- The White House on February 10, 2020 proposed roughly doubling nondefense research-and-development spending on artificial intelligence and quantum information sciences, citing fierce global competition, while cutting overall funding for R&D. Within the next two years, annual spending on AI would rise to more than $2 billion and funding for quantum computing would increase to $860 million, according to the White House plan. Source: The Wall Street Journal, February 10, 2020.
- It’s time for the U.S. federal government to create a federal agency to regulate artificial intelligence, according to this thesis from Rob Toews, a venture capitalist at Highland Capital Partners. Source: Forbes, June 28, 2020.
- An activist group called Ban Facial Recognition has launched a congressional “scorecard” to name and shame lawmakers who have not endorsed legislation to stop face-based surveillance. Source: Ban Facial Recognition activist group, July 22, 2020.
- The New York Senate and Assembly have passed the Biometric Surveillance in Schools Moratorium, Fight for the Future notes. The group is now calling on New York Governor Andrew Cuomo to sign the legislation into law, and for other states to similarly protect schools from this invasive tech. Source: Fight for the Future, July 22, 2020.
- The U.S. intelligence community, following in the footsteps of the Defense Department, has rolled out its own set of ethics policies for the use of artificial intelligence in its operations. The Principles of AI Ethics for the Intelligence Community and AI Ethics Framework draw inspiration from the DoD’s own set of AI ethics principles that Secretary Mark Esper approved in February 2020. Source: Federal News Network, July 23, 2020.
- Two U.S. senators are proposing legislation to prohibit private companies from collecting biometric data without consumers’ and employees’ consent. Democratic Senator Jeff Merkley of Oregon said this week he is introducing the reform measure along with independent Senator Bernie Sanders of Vermont. Source: Reuters, August 6, 2020.
- Reps. Will Hurd (R-TX) and Robin Kelly (D-IL) recently introduced a concurrent resolution calling for the creation of a national artificial intelligence (AI) strategy. The legislators and influencers identified 78 specific actions guiding America towards responsible AI innovation. Source: Homeland Preparedness News, September 18, 2020.
- The U.S. Food and Drug Administration (FDA) has proposed a new regulatory framework for artificial intelligence and machine learning technologies. Source: News-Medical.net, October 16, 2020.
Research: Multiple updates…
- AI and Management Tasks: Nearly 70 percent of managers’ routine work will be completely automated by 2024 thanks to artificial intelligence (AI) coupled with workflow automation, Gartner predicts. Source: ZDnet, January 23, 2020.
- AI and Crime Prediction Concerns: The Coalition for Critical Technology is raising concerns about artificial intelligence (AI) that attempts to predict the likelihood that an individual will commit a criminal act. In a blog, the coalition alleges that such AI technology and related research “reproduces injustices and causes real harm.” Source: Coalition for Critical Technology, June 23, 2020.
- EY Study: According to Bridging AI’s trust gaps, an EY study developed in collaboration with The Future Society, AI policy discrepancies exist in four key areas: fairness and avoiding bias; innovation; data access; and privacy and data rights.
Track all AI-related coverage on ChannelE2E here.