As artificial intelligence spreads through technology, business, and social settings, the debate over proper AI policies for privacy, security, and social responsibility continues to intensify. Against that backdrop, IBM has issued a policy paper on AI regulation that outlines five policy imperatives for companies, whether they are providers or owners of AI systems. The policies, word for word from IBM, are:
- Designate a lead AI ethics official. To oversee compliance with these expectations, providers and owners should designate a person responsible for trustworthy AI, such as a lead AI ethics official.
- Different rules for different risks. All entities providing or owning an AI system should conduct an initial high-level assessment of the technology's potential for harm. And regulation should treat different use cases differently based on the possible inherent risk.
- Don't hide your AI. Transparency breeds trust; and the best way to promote transparency is through disclosure, making the purpose of an AI system clear to consumers and businesses. No one should be tricked into interacting with AI.
- Explain your AI. Any AI system on the market that is making determinations or recommendations with potentially significant implications for individuals should be able to explain and contextualize how and why it arrived at a particular conclusion.
- Test your AI for bias. All organizations in the AI developmental lifecycle have some level of shared responsibility in ensuring the AI systems they design and deploy are fair and secure. This requires testing for fairness, bias, robustness and security, and taking remedial actions as needed, both before sale or deployment and after it is operationalized. For higher risk use cases, this should be reinforced through "co-regulation", where companies implement testing and government conducts spot checks for compliance.
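To make the last two imperatives concrete, consider what they look like in code. The "Explain your AI" imperative is, in practice, a call for model interpretability tooling. The snippet below is a minimal sketch, using scikit-learn's permutation importance to surface which input features most influenced a classifier's predictions; the dataset and model are illustrative placeholders, not anything IBM's paper prescribes.

```python
# A minimal sketch of "explain your AI": report which inputs the model
# actually relies on. Permutation importance stands in for whatever
# explainability tooling a production system would use.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset and model, chosen only for reproducibility.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature hurt held-out accuracy?
# Large drops mark the features the model depends on most.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

An output like this gives an organization something it can actually disclose and contextualize when a decision is challenged.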
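Likewise, "Test your AI for bias" implies concrete pre-deployment checks. Below is a minimal sketch of one such check, demographic parity, computed by hand on synthetic data; the protected attribute, the skewed model output, and the 10% tolerance are all made-up stand-ins, and a real audit would cover many metrics beyond this one.

```python
# A minimal sketch of one fairness check: does the model's
# positive-outcome rate differ across a protected group?
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)  # hypothetical protected attribute (0/1)
# Synthetic, deliberately skewed "model predictions" for illustration.
y_pred = rng.random(1000) < (0.4 + 0.2 * group)

rate_a = y_pred[group == 0].mean()
rate_b = y_pred[group == 1].mean()
disparity = abs(rate_a - rate_b)

print(f"positive rate, group A: {rate_a:.2%}")
print(f"positive rate, group B: {rate_b:.2%}")
print(f"demographic parity gap: {disparity:.2%}")

# Flag for remediation if the gap exceeds a policy-chosen tolerance.
# The 0.10 here is an arbitrary placeholder, not a regulatory standard.
if disparity > 0.10:
    print("WARNING: disparity exceeds tolerance; remediate before deployment.")
```

Running checks like this both before deployment and on live traffic is what the paper's "co-regulation" model assumes: companies generate the test results, and regulators spot-check them.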