The Director's Playbook for Generative AI
By Jim DeLoach, Former Andersen Partner and currently a Managing Director at Protiviti. He is the author of several books and a frequent NACD contributor.
Copyright 2023 National Association of Corporate Directors (NACD). This article originally appeared on NACD's BoardTalk blog; the original article may be found here. Reprinted with permission. No further reproduction is permitted without permission from NACD.
Without a doubt, the value proposition of generative
artificial intelligence (AI) is alluring. Opportunities for using it to enhance
customer experiences, increase process efficiencies, innovate products and
services, and improve productivity are immense—despite its risks and
limitations.
To contribute value in boardroom discussions about
implementing generative AI models, it is necessary to understand their
opportunities, limitations, and risks. Directors should immerse themselves in
learning about and getting hands-on with using this accessible technology. They
should learn from experts inside and outside the organization and from
published articles providing relevant content. Armed with a baseline
understanding, directors should consider the following questions when engaging
CEOs and their teams in strategic conversations regarding generative AI:
What is the business opportunity in deploying generative
AI? This critical “Why should we care?”
question should be considered strategically and tactically. Directors can ask
the following five high-level questions to advance the conversation:
- What are the implications of generative AI for our industry, and what are competitors doing with it?
- Do we have a strategy for why, where, how, and when we will deploy generative AI? What use cases are we considering, and how are we selecting and prioritizing these opportunities and measuring the value contributed?
- Are we organized appropriately to roll out our strategy? How are we empowering our people to build, train, and use generative AI?
- Have we documented our organization's guidelines and values for privacy, security, transparency, fairness, human versus machine responsibilities, and other matters related to our generative AI deployments? Do our policies account for the need to govern and manage this technology differently than nongenerative AI?
- How do we know we are adhering to our guidelines and values? For example, do we have a cross-functional ethics committee that vets all plans and actions and monitors for unintended outcomes and consequences?
Insights gained from this discussion enable the board to
understand how and why management intends to position generative AI in the
business.
What are the legal, regulatory, and ethical issues we need
to address? Generative AI is on the radar of regulators and policymakers at the national, state, and local levels, as well as other stakeholders, due to its potential cyber, privacy, and societal risks.
The environment varies by country and region. With legislative initiatives
already underway and risk frameworks emerging around the world, directors
should inquire how management keeps track of market developments.
How are we sourcing and managing the data used by our
generative AI model(s)? Directors should understand from management whether the organization is using (a)
publicly available models and domains, (b) foundation models that are
fine-tuned with internal proprietary data, or (c) fully customized models.
Whether a company uses its own data, third-party data, or data generally
available in the marketplace will influence a model’s risk profile.
Do we have the talent we need to implement generative AI? Finding and onboarding the requisite talent and expertise is
key to determining the mode of generative AI a company can deploy. While publicly
available tools such as ChatGPT require no specialized expertise, they are far
less secure, private, and reliable. That is why most companies will likely choose the middle road: fine-tuning a foundation model, which requires only modest data science expertise, often through a low-code interface.
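To make the middle road concrete, the sketch below shows roughly what fine-tuning a small open foundation model on internal text involves, using the Hugging Face transformers and datasets libraries. The base model, sample documents, and training settings here are illustrative assumptions only, not recommendations; low-code platforms wrap these same steps behind a guided interface.

```python
# Minimal sketch: fine-tuning a small open foundation model on internal text.
# The model name, sample documents, and hyperparameters are illustrative
# assumptions; real deployments involve vetted data and governance review.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import Dataset

base = "distilgpt2"  # assumed small base model, chosen only for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 models define no pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical proprietary documents; in practice these come from curated,
# rights-cleared internal sources.
docs = {"text": ["Internal policy summary ...", "Approved product FAQ ..."]}
dataset = Dataset.from_dict(docs).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()  # writes a tuned model checkpoint under ft-out/
```

The board-level takeaway is that the scarce expertise lies less in model internals and more in curating representative, rights-cleared internal data for the training step.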
Do we have a governance framework that enables
experimentation? The board should inquire
about the governance process and organizational structure for overseeing the
company’s generative AI innovations and monitoring industry developments.
Overall governance involves considerations relating to trust, ethical use, risk
management, the third-party ecosystem, legal and regulatory compliance, and
standards and controls. It entails a generative AI review and approval process.
An adaptable governance framework could function through a small
cross-functional, multidisciplinary team representing the data, engineering,
security, and operational aspects of generative AI models.
What monitoring mechanisms and accountabilities do we have
in place? Model owners (those responsible for a model's design, development, and operation) should be held accountable for its
proper functioning. Human oversight supported by automated alerts and controls
is an integral part of any generative AI solution, particularly when the model
is connected to hardware or software, or there is a significant impact on
sensitive decisions (e.g., employment matters). The board should inquire whether a process is in place to ensure that generative AI model outcomes align with intended results and comply with relevant regulatory requirements. Due to the
complexity of the correlations in the model, it may be necessary to embed
self-check mechanisms and conduct human reviews of AI-generated output.
Internal audit can also serve as a check and balance. Models should be
evaluated periodically for unreliable or biased content.
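To make "automated alerts and controls" concrete, the following is a minimal, self-contained sketch of an output gate that screens generated text and routes flagged items to a human reviewer. The patterns and threshold below are hypothetical placeholders, not a vetted control set.

```python
# Sketch of an automated output gate: generated text is checked against
# simple rules, and anything flagged is routed to human review.
import re
from dataclasses import dataclass, field

@dataclass
class ReviewResult:
    approved: bool
    reasons: list[str] = field(default_factory=list)

# Hypothetical rules; a real control set would be defined by the
# cross-functional governance team and tested before deployment.
BLOCKED_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",  # looks like a US Social Security number
    r"(?i)\bconfidential\b",   # possible internal-only material
]

def gate_output(text: str, max_chars: int = 2000) -> ReviewResult:
    """Screen generated text; collect the reason for every alert that fires."""
    reasons = []
    if len(text) > max_chars:
        reasons.append(f"output exceeds {max_chars} characters")
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, text):
            reasons.append(f"matched blocked pattern {pattern!r}")
    return ReviewResult(approved=not reasons, reasons=reasons)

result = gate_output("Summary: the confidential draft projects 4% growth.")
if not result.approved:
    # In production this would raise an alert and queue the item for the
    # designated human reviewer, consistent with model-owner accountability.
    print("Held for human review:", result.reasons)
```

In production, such a gate would sit between the model and any downstream system and log every alert, giving internal audit a record against which to evaluate the control's coverage.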
How do we manage the risks? Early implementations have exposed generative AI’s
shortcomings: content source and provenance are not always evident, ownership
rights are a major concern, bias and prejudice in text and images can be an
issue, and images or videos appearing realistic can be deceptively false (deepfakes).
Models can hallucinate or drift; that is, they can deliver results not backed
by the data to which they have access. These issues can lead to
misinformation—inaccurate and misleading content—and blatant plagiarism. They
can also lead to disinformation (e.g., fake news; mimicking people or specific
individuals through falsified photographs, videos, and voice scams). They open
the door to more sophisticated cyber threats and deceptive social engineering
strategies. Boards should ascertain how these issues are being addressed.
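One simple pattern for catching results "not backed by the data," as described above, is a grounding check that compares each generated sentence against the source documents supplied to the model. The word-overlap heuristic below is a deliberately crude stand-in for the retrieval- and entailment-based checks used in practice.

```python
# Crude sketch of a grounding check: flag generated sentences with little
# word overlap against the source documents the model was given.
import re

def support_score(sentence: str, sources: list[str]) -> float:
    """Fraction of a sentence's words found in the best-matching source."""
    words = set(re.findall(r"[a-z']+", sentence.lower()))
    if not words:
        return 1.0
    best = 0.0
    for src in sources:
        src_words = set(re.findall(r"[a-z']+", src.lower()))
        best = max(best, len(words & src_words) / len(words))
    return best

def flag_unsupported(answer: str, sources: list[str],
                     threshold: float = 0.5) -> list[str]:
    """Return sentences whose support score falls below the threshold."""
    sentences = re.split(r"(?<=[.!?])\s+", answer.strip())
    return [s for s in sentences if support_score(s, sources) < threshold]

sources = ["The 2023 report shows revenue of $10M and headcount of 80."]
answer = "Revenue was $10M in 2023. The company plans to acquire a rival."
print(flag_unsupported(answer, sources))
# Prints ['The company plans to acquire a rival.'] -- not backed by the source.
```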
What are the change management issues? With resistance to change a
formidable challenge for many organizations, management should
communicate the following:
- Generative AI technology's strengths and limitations
- The intention to deploy the technology thoughtfully, responsibly, and in accordance with applicable laws and regulations
- The initial use cases planned and how those use cases align with broader strategic efforts, such as environmental, social, and governance as well as diversity, equity, and inclusion initiatives
- The risks to be managed, including protection of the company's intellectual property (e.g., trade secrets and other confidential information)
Reskilling and upskilling will be necessary for employees
whose job functions are affected by generative AI.
The dawn of generative AI is yet another wake-up call for
boards, another disruptive force for business. In this digital world framed by
the Internet, digital devices, smart devices, the cloud, and ever-increasing
connectivity, mobility, and computing power, directors rooted in the analog age who are unable or unwilling to become technology-engaged in the boardroom need not apply.