Generative artificial intelligence (AI) has emerged as a technology with the potential to reshape business practices and open new avenues for innovation across virtually every industry sector. Organizations have rushed to embrace it: in one recent survey, a vast majority of respondents (81%) indicated they planned to maintain or increase their AI investments in the near term.
However, along with new efficiencies, AI tools bring new risks to the forefront for those that use them. These risks will force enterprise risk management programs to incorporate strategies to manage them effectively, which will undoubtedly be a new process for most entities.
Top Risks Associated with Generative AI Use
While we’re still evaluating all the risks associated with generative AI, we’ve identified several high-risk categories that bear close attention from business leaders:
- Data bias and fairness: AI models and their underlying assumptions have the potential to infuse bias into decision-making and to perpetuate discriminatory practices.
- Privacy concerns: Several privacy laws related to collecting, storing and sharing personally identifiable information will likely apply to AI usage. Careful consideration of the resulting compliance obligations should be a priority.
- Data quality: Relying on incomplete or incorrect data can lead to flawed analyses and outputs.
- Intellectual property and data ownership: Ownership rights and trade secrets may not be properly protected, raising litigation risk related to consent and ownership rights.
- Regulatory risk: Multiple states have enacted bills centered on compliance requirements for those providing AI platforms and those using them. Global privacy regimes have already passed laws with a similar focus.
The Chief Artificial Intelligence Officer: A New Role
As modern, risk management-oriented organizations leverage AI to remain competitive, they may need to fill a new role: the chief artificial intelligence officer (CAIO). The role requires a keen ability to balance innovation with AI risk management, and its responsibilities may include:
- Strategic AI leadership: Creating and implementing the overall AI strategy, with an eye on improving operational efficiencies, improving customer experiences and identifying new revenue streams.
- Risk management and compliance: Establishing a framework for safe and responsible AI use that aligns with both the organization’s ethical standards and those that external parties generally expect. This should also extend to compliance with regulatory requirements as they evolve.
- Governance programs: Setting up formal structures to oversee AI initiatives and projects. These structures should help ensure the organization meets ethical and regulatory standards, with an emphasis on fairness, transparency, data security and preventing unintended consequences.
- Internal cross-collaboration: Close coordination with leaders across divisions and the C-suite. This should foster collaboration among various stakeholders, including but not limited to the legal, IT, privacy, operations, marketing, human resources, sales and risk management departments.
- Performance measurement and continuous improvement: Promoting a culture of continuous, AI-centered innovation through periodic evaluations of AI tools’ performance and the return on investment in AI resources, while staying current on new technologies that align with the organization’s current and future goals.
Where to Start: New AI Risk Management Guidelines
Several organizations have recently published suggested frameworks for risk-based standards in implementing AI programs:
- The AI Risk Management Framework from the National Institute of Standards and Technology (NIST)
- AI risk management guidance from the United States Department of State
- Guidance developed as part of the ISO’s ISO/IEC WD 38505 — Information technology — Governance of data
- The global AI principles of the Organization for Economic Co-operation and Development (OECD)
While not all organizations may be ready for a CAIO at this point, they should carefully consider such an investment. This role will become more important as AI becomes a standard requirement to stay competitive. Most businesses have already embraced AI in some form, and there’s every indication that AI use will continue increasing rapidly. Litigation and regulatory risks have risen in lockstep with the pace of AI engagement, driving the need for risk managers to be at the forefront of AI risk as it unfolds.