The full benefits of AI can only be realized by ensuring that AI is developed and used responsibly, fairly, securely, and transparently to establish and maintain public trust, according to Melissa MacGregor, Deputy General Counsel and Corporate Secretary at SIFMA.
“It is critical to find the right mix of high-level principles, concrete obligations, and governance commitments for effective AI regulation,” she wrote in the SIFMA Blog.
SIFMA has proposed a risk-based approach to regulating AI that contains strong accountability measures for high-risk AI uses, while providing flexibility to allow industry to innovate.
The SIFMA approach would require companies, under the supervision of their sectoral regulators, to (1) identify how AI is being used, (2) determine which AI uses pose the highest risks, (3) have qualified persons or committees at the company review high-risk AI applications and determine whether the risks are too high and, if so, (4) apply meaningful mitigation steps to reduce those risks to an acceptable level or require that the AI application be abandoned. (A schematic sketch of this workflow follows the component list below.)
To achieve these objectives, any AI regulation should include the following components:
- Scoping. Companies should determine which AI applications are in scope of the framework when building their governance programs.
- Inventory. Companies should prepare and maintain an inventory of their AI applications with sufficient detail to allow them to be risk rated.
- Risk Rating. Companies should have a process for identifying their highest-risk AI applications. The risks considered would include legal and regulatory risks as well as operational, reputational, contractual, discrimination, cybersecurity, privacy, consumer-harm, confidentiality, and transparency risks.
- Responsible Persons or Committees. Companies should designate one or more individuals or committees responsible for identifying and assessing their highest-risk AI applications and for deciding whether to accept those risks, mitigate them, or abandon the application because the risks are too high.
- Training. Companies should develop training programs to ensure that stakeholders are able to identify the risks associated with their AI use and the various options for reducing risk.
- Documentation. Companies should maintain documentation sufficient for an audit of the risk assessment program.
- Audit. Companies should conduct periodic audits that focus on the effectiveness of the risk assessment program, rather than on individual AI applications. Companies should be permitted to determine how and when audits should be conducted, and who can conduct those audits.
- Third-Party Risk Management. Companies should use the same risk-based principles that are applied to in-house AI applications to evaluate third-party AI applications, and mitigate those risks through diligence, audits, and contractual terms.
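To make the inventory, risk-rating, and review components above more concrete, the sketch below shows one hypothetical way a firm might record an AI application and its review decision. It is a minimal illustration only; the class names, risk categories, and the `review` helper are assumptions made for this example, not part of SIFMA's proposal.

```python
# Hypothetical sketch of an AI-application inventory entry with a risk rating
# and a review decision (accept / mitigate / abandon). All names, categories,
# and logic here are illustrative assumptions, not SIFMA's specification.
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


class Decision(Enum):
    ACCEPT = "accept"      # risks are acceptable as-is
    MITIGATE = "mitigate"  # reduce risks to an acceptable level
    ABANDON = "abandon"    # risks are too high to proceed


@dataclass
class AIApplication:
    name: str
    business_use: str
    risk_categories: list[str]        # e.g. privacy, discrimination, cybersecurity
    risk_level: RiskLevel
    mitigation_steps: list[str] = field(default_factory=list)
    decision: Decision | None = None  # set by the responsible person or committee


def review(app: AIApplication, mitigations: list[str] | None = None) -> AIApplication:
    """Record a review decision for an inventoried application (illustrative only)."""
    if app.risk_level is not RiskLevel.HIGH:
        app.decision = Decision.ACCEPT
    elif mitigations:
        app.mitigation_steps = mitigations
        app.decision = Decision.MITIGATE
    else:
        app.decision = Decision.ABANDON
    return app


# Example: a hypothetical credit-decisioning model flagged as high risk.
credit_model = AIApplication(
    name="retail-credit-scoring",
    business_use="consumer loan underwriting",
    risk_categories=["discrimination", "privacy", "lack of transparency"],
    risk_level=RiskLevel.HIGH,
)
review(credit_model, mitigations=["bias testing", "human review of adverse decisions"])
```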
According to MacGregor, this proposed framework could be incorporated into existing governance and compliance programs in related areas such as model risk, data governance, privacy, cybersecurity, vendor management, and product development, with further guidance from applicable sectoral regulators as needed.
“Having qualified persons identify, assess, and mitigate the risks associated with the highest-risk AI uses improves accountability, appropriate resource allocation, and employee buy-in through clearly defined and fair processes,” she said.
MacGregor said that given the rapid rate of AI adoption and its potential societal impact, policymakers are facing increased pressure to enact AI regulation.
“SIFMA’s risk-based approach would provide a valuable, flexible framework through which companies and their sectoral regulators can build tailored AI governance and compliance programs that ensure accountability and trust without stifling innovation or wasting time or resources on low-risk AI applications,” she said.