Want to develop an AI risk management framework? Treat it like a human.

Artificial intelligence (AI) technologies present significant strategic benefits and risks for global businesses and government agencies. One of AI’s greatest strengths is its ability to engage in behaviors typically associated with human intelligence, such as learning, planning, and problem-solving. AI, however, also brings new risks to organizations and individuals, and it manifests those risks in unfamiliar ways.
It is inevitable that AI will soon face increased regulation. Over the summer, a number of federal agencies issued guidance, opened reviews, and requested information on AI’s disruptive and, at times, unruly capabilities. Now is the time for organizations to prepare for the day when they will need to demonstrate that their own AI systems are accountable, transparent, trustworthy, non-discriminatory, and secure.
Managing the new risks of AI presents real and daunting challenges. Fortunately, organizations can use some recent agency initiatives as how-to guides for creating or improving AI risk management frameworks. Seen up close, these initiatives demonstrate that the new risks of AI can be managed in the same way as the risks arising from human intelligence. Below, we’ll outline a seven-step approach to bringing a human touch to an effective AI risk management framework. But before that, let’s take a look at the various related government activities over the summer.
A summer of AI initiatives
While summer has traditionally been a quiet time for agency action in Washington, DC, summer 2021 was anything but calm when it comes to AI. On August 27, 2021, the Securities and Exchange Commission (SEC) issued a request for information asking market participants to give the agency feedback on the use of “digital engagement practices,” or “DEPs.” The SEC’s response to the digital risks posed by FinTech firms could have major ramifications for investment advisers, retail brokers, and wealth managers, who increasingly use AI to create investment strategies and steer customers toward higher-revenue products. The SEC action follows a request for information on possible new AI standards for financial institutions, issued by a group of federal financial regulators, which closed earlier this summer.
As financial regulators assess the risks of AI guiding individuals’ economic decisions, the Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) announced on August 13, 2021, a preliminary evaluation to examine the safety of AI in steering vehicles. NHTSA will examine the causes of 11 Tesla crashes that have occurred since the start of 2018, in which Tesla vehicles struck scenes where first responders were active, often after dark, with either Autopilot or Traffic Aware Cruise Control engaged.
Meanwhile, other agencies have sought to standardize AI risk management. On July 29, 2021, the Department of Commerce’s National Institute of Standards and Technology released a request for information to help develop a voluntary AI risk management framework. In June 2021, the Government Accountability Office (GAO) issued an AI accountability framework that identifies key practices to help ensure accountability and responsible use of AI by federal agencies and other entities involved in the design, development, deployment, and ongoing monitoring of AI systems.
Using human risk management as a starting point
As the summer’s government activity portends, agencies and other key stakeholders are likely to formalize requirements for managing the risks AI poses to individuals, organizations, and society. Although AI presents new risks, organizations can effectively and efficiently extend aspects of their existing risk management frameworks to AI. Practical guidance from risk management frameworks developed by government entities, in particular the GAO’s framework, the intelligence community’s AI Ethics Framework, and the Ethics Guidelines for Trustworthy AI from the European Commission’s High-Level Expert Group on Artificial Intelligence, provides an outline for a seven-step approach organizations can use to extend their existing human risk management frameworks to AI.
First, the nature of how AI is created, trained, and deployed makes it imperative to build integrity into AI at the design stage. Just as employees need to be aligned with an organization’s values, so too does AI. Organizations need to set the right tone from the start on how they will responsibly develop, deploy, evaluate, and secure AI, consistent with their core values and a culture of integrity.
Second, before onboarding AI, organizations should perform a type of due diligence similar to that performed for new hires or third-party vendors. As with humans, this due diligence process should be risk-based. Organizations should verify the equivalent of the AI’s résumé and transcripts. For AI, this takes the form of ensuring the quality, reliability, and validity of the data sources used to train the AI. Organizations may also have to assess the risks of using AI products when service providers are unwilling to share details about their proprietary data. Since even good data can produce bad AI, this due diligence review should include checking the equivalent of performance benchmarks to identify potential biases or safety issues in the AI’s past performance. For particularly sensitive AI, this due diligence may also include a thorough background check to rule out any security issues or insider threats, which might require reviewing the AI’s source code with the vendor’s consent.
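To make the data portion of this due diligence concrete, the short Python sketch below shows one way an organization might profile a candidate training set before onboarding an AI system: it surfaces missing values, duplicate records, and differences in outcome rates across a sensitive attribute. The column names, the pandas-based approach, and the synthetic example are illustrative assumptions, not requirements drawn from any of the frameworks discussed above.

```python
# A minimal sketch of an automated data due-diligence check, assuming a tabular
# training set held in a pandas DataFrame. Column names such as
# "sensitive_attribute" and "label" are hypothetical placeholders.
import pandas as pd

def data_due_diligence(df: pd.DataFrame, sensitive_attribute: str, label: str) -> dict:
    """Summarize basic quality and balance signals before onboarding the data."""
    return {
        # Data quality: share of missing values per column and count of exact duplicates.
        "missing_share_per_column": df.isna().mean().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        # A coarse bias signal: the positive-label rate within each group of the
        # sensitive attribute. Large gaps warrant closer review.
        "positive_rate_by_group": df.groupby(sensitive_attribute)[label].mean().to_dict(),
    }

if __name__ == "__main__":
    # Tiny synthetic example purely for illustration.
    training_data = pd.DataFrame(
        {
            "income": [48_000, 52_000, None, 61_000, 39_000, 75_000],
            "sensitive_attribute": ["A", "A", "B", "B", "B", "A"],
            "label": [1, 0, 0, 1, 0, 1],
        }
    )
    print(data_due_diligence(training_data, "sensitive_attribute", "label"))
```

A real review would go well beyond this, but even a simple report like the one above gives the organization something documented to point to when asked how it vetted an AI system’s training data.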
Third, once onboarded, AI must be anchored in the organization’s culture before being deployed. Like other forms of intelligence, AI needs to understand the organization’s code of conduct and applicable legal limits, and then it needs to embrace and maintain them over time. AI must also learn to report alleged wrongdoing by itself and by others. Using AI risk and impact assessments, organizations can evaluate, among other things, the privacy, civil liberties, and civil rights implications of each new AI system.
Fourth, once deployed, AI must be managed, evaluated and held accountable. As with people, organizations should take a risk-based, conditional, and incremental approach to the responsibilities assigned to an AI. There should be an appropriate probation period for the AI, with advancement conditioned on achieving results consistent with program and organizational goals. Like humans, AI must be appropriately supervised, disciplined in cases of abuse, rewarded for its success, and able and willing to cooperate meaningfully in audits and investigations. Companies should systematically and regularly document the performance of an AI, including any corrective actions taken to ensure that it is producing the desired results.
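As an illustration of what such systematic documentation might look like in practice, the sketch below appends each evaluation of a deployed model to an append-only audit log and flags when results fall below an acceptance threshold. The log location, metric, threshold, and model identifier are hypothetical placeholders, not items prescribed by any regulator or framework.

```python
# A minimal sketch of ongoing performance documentation for a deployed model,
# using an append-only JSON Lines audit log. File name, metric, and threshold
# are illustrative assumptions.
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_performance_log.jsonl")  # hypothetical log location
ACCURACY_THRESHOLD = 0.90                     # hypothetical acceptance criterion

def record_evaluation(model_id: str, accuracy: float, notes: str = "") -> dict:
    """Append one evaluation record and flag whether corrective action is needed."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "accuracy": accuracy,
        "meets_threshold": accuracy >= ACCURACY_THRESHOLD,
        # Record the corrective action taken (or recommended) so auditors can
        # trace how shortfalls were handled.
        "corrective_action": None if accuracy >= ACCURACY_THRESHOLD
        else "escalate to model owner for retraining or rollback",
        "notes": notes,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

if __name__ == "__main__":
    # Example: document a quarterly evaluation that falls below the threshold.
    print(record_evaluation("credit-scoring-v2", accuracy=0.87, notes="Q3 review"))
```

The point is less the specific format than the habit: every evaluation, and every corrective action it triggers, leaves a record the organization can produce in an audit or investigation.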
Sixth, like people, AI needs to be protected against physical damage, insider threats, and cybersecurity risks. For particularly risky or valuable AI systems, security precautions may include insurance coverage, similar to the insurance companies purchase for key executives.
Seventh, like humans, not all AI systems will meet an organization’s core values and performance standards, and even those that do will eventually leave or have to retire. Organizations should define, develop, and implement transfer, termination, and retirement procedures for AI systems. For particularly high-risk AI systems, there should be clear mechanisms to, in effect, escort the AI out of the building by disengaging and deactivating it when things go wrong.
AI, like humans, poses oversight challenges because its inputs and decision-making processes are not always visible and change over time. By managing the new risks associated with AI the same way they manage the risks posed by people, organizations can make the seemingly daunting oversight challenges of AI more tractable and help ensure that AI is as reliable and accountable as their other forms of intelligence.
Michael K. Atkinson is a partner at the law firm Crowell & Moring in Washington, DC, and co-leads the firm’s national security practice. He was previously Inspector General of the Intelligence Community in the Office of the Director of National Intelligence.
Rukiya Mohamed is a partner in Crowell & Moring’s White Collar and Enforcement Group in Washington, DC.