Artificial Intelligence: Ethics, Regulations, Policies & Privacy Issues Explained
Here are more updates, sorted alphabetically.
OpenAI: A new language model, OpenAI’s GPT-3, is making waves for its ability to mimic writing, but it falls short on common sense. Some experts think an emerging technique called neuro-symbolic AI is the answer. Source: The Wall Street Journal, August 11, 2020.
Oregon: The Portland, Oregon City Council passed two ordinances to ban the use of facial recognition by both public and private entities. This ban prohibits corporations from using facial recognition in “places of public accommodation.” Source: Fight for the Future, September 9, 2020.
The Pope: Vatican officials plan to release principles promoting the ethical use of artificial intelligence (AI), with the backing of Microsoft and IBM as the first two technology industry sponsors. The “Rome Call for AI Ethics” asserts that the technology should respect privacy, work reliably and without bias, consider “the needs of all human beings” and operate transparently – an area of ongoing research because AI systems’ decisions are often inscrutable. Source: Reuters, February 28, 2020.
Rite Aid: Over about eight years, the American drugstore chain Rite Aid Corp quietly added facial recognition systems to 200 stores across the United States, in one of the largest rollouts of such technology among retailers in the country, a Reuters investigation found. Source: Reuters, July 28, 2020.
United Kingdom AI Policy:
- Britain’s most senior police officer called on the government to create a legal framework for police use of new technologies such as artificial intelligence. Speaking about live facial recognition, which police in London started using in January 2020, London police chief Cressida Dick said that she welcomed the government’s 2019 manifesto pledge to create a legal framework for the police use of new technology like AI, biometrics and DNA. Source: Reuters, February 24, 2020.
- British insurers are weighing how proposed AI regulations could affect their industry. Source: National Law Review, March 9, 2020.
- Britain’s data watchdog and its Australian counterpart have opened a joint investigation into the personal information handling practices of facial recognition technology company Clearview AI. Source: Reuters, July 9, 2020.
United States AI Policy: Multiple updates…
- White House officials in January 2020 formally announced how the Office of Science and Technology Policy wants federal agencies to approach regulating new artificial intelligence-based tools and the industries that develop AI tech. In particular, federal agencies should avoid “overreach.” Sources: Recode and Vox, The Verge, January 7 and January 8, 2020.
- The White House on February 10, 2020 proposed roughly doubling nondefense research-and-development spending on artificial intelligence and quantum information sciences, citing fierce global competition, while cutting overall funding for R&D. Within the next two years, annual spending on AI would rise to more than $2 billion and funding for quantum computing would increase to $860 million, according to the White House plan. Source: The Wall Street Journal, February 10, 2020.
- It’s time for the U.S. federal government to create a federal agency to regulate artificial intelligence, according to this thesis from Rob Toews, a venture capitalist at Highland Capital Partners. Source: Forbes, June 28, 2020.
- An activist group called Ban Facial Recognition has launched a congressional “scorecard” to name and shame lawmakers who have not endorsed legislation to stop face-based surveillance. Source: Ban Facial Recognition activist group, July 22, 2020.
- The New York Senate and Assembly have passed the Biometric Surveillance in Schools Moratorium, Fight for the Future notes. The group is now calling on New York Governor Andrew Cuomo to sign the legislation into law, and for other states to similarly protect schools from this invasive tech. Source: Fight for the Future, July 22, 2020.
- The U.S. intelligence community, following in the footsteps of the Defense Department, has rolled out its own set of ethics policies for the use of artificial intelligence in its operations. The Principles of AI Ethics for the Intelligence Community and AI Ethics Framework draw inspiration from the DoD’s own set of AI ethics principles that Secretary Mark Esper approved in February 2020. Source: Federal News Network, July 23, 2020.
- Two U.S. senators are proposing legislation to prohibit private companies from collecting biometric data without consumers’ and employees’ consent. Democratic Senator Jeff Merkley of Oregon said this week he is introducing the reform measure along with independent Senator Bernie Sanders of Vermont. Source: Reuters, August 6, 2020.
- Reps. Will Hurd (R-TX) and Robin Kelly (D-IL) recently introduced a concurrent resolution calling for the creation of a national artificial intelligence (AI) strategy. The legislators and influencers identified 78 specific actions guiding America towards responsible AI innovation. Source: Homeland Preparedness News, September 18, 2020.
- The US Food and Drug Administration (FDA) has proposed a new regulatory framework for artificial intelligence and machine learning technologies. Source: News-Medical.net, October 16, 2020.
- The Trump administration is completing guidance for agencies on how to regulate artificial intelligence, according to senior technology official Michael Kratsios. Source: The Wall Street Journal, October 21, 2020.
- The ACLU (American Civil Liberties Union) has filed a Freedom of Information Act (FOIA) request seeking information about the types of AI tools that intelligence agencies are deploying, what rules constrain their use of AI, and what dangers these systems pose to equality, due process, privacy, and free expression. Source: ACLU, March 26, 2021.
- The White House Office of Science and Technology Policy (OSTP) and the National Science Foundation (NSF) on Thursday announced the launch of a National Artificial Intelligence Research Resource Task Force. The new task force will act as an advisory committee and is tasked with ensuring that AI researchers and students across all scientific disciplines receive the computational resources, high-quality data, educational tools and other user support they need. It will submit two reports to Congress that present a comprehensive AI strategy and implementation plan: an interim report in May 2022, and a final report in November 2022. Source: FedScoop, June 11, 2021.
- Three U.S. senators, including Democrat Amy Klobuchar who chairs the Senate Judiciary Committee’s antitrust panel, wrote a letter to Amazon.com to express concern about its palm print recognition system. Source: Reuters, August 13, 2021.
- Top science advisers to President Joe Biden are calling for a new “bill of rights” to guard against powerful new artificial intelligence (AI) technology. The White House’s Office of Science and Technology Policy launched a fact-finding mission to look at facial recognition and other biometric tools used to identify people or assess their emotional or mental states and character. Source: The Associated Press, October 8, 2021.
- As AI makes dramatic inroads in enterprises, the U.S. government has quietly started to regulate the use of AI in the consumer credit industry and other areas by banning the use of biased and unexplainable algorithms in decisions that affect consumers. Source: TechTarget, October 19, 2021.
- Two Senators introduced legislation that would form a working group charged with monitoring the security of AI data obtained by federal contractors. This body would also ensure that the data adequately protects national security and recognizes privacy rights, the lawmakers said. Source: GovInfoSecurity, October 22, 2021.
Research: Multiple updates…
- AI and Management Tasks: Nearly 70 percent of managers’ routine work will be completely automated by 2024 thanks to artificial intelligence (AI) coupled with workflow automation, Gartner predicts. Source: ZDNet, January 23, 2020.
- AI and Crime Prediction Concerns: The Coalition for Critical Technology is raising concerns about artificial intelligence (AI) that attempts to predict the likelihood that an individual will commit a criminal act. In a blog, the coalition alleges that such AI technology and related research “reproduces injustices and causes real harm.” Source: Coalition for Critical Technology, June 23, 2020.
- EY Study: According to “Bridging AI’s trust gaps,” an EY study developed in collaboration with The Future Society, AI discrepancies exist in four key areas: fairness and avoiding bias; innovation; data access; and privacy and data rights.
Track all AI-related coverage on ChannelE2E here.