10 Questions to Ask About Generative AI
By Jim DeLoach, former Andersen partner and currently a managing director at Protiviti. He is the author of several books and a frequent Forbes and NACD contributor.
Copyright 2024 Corporate Compliance Insights. This article originally appeared on Corporate Compliance Insights and can be found here. Reprinted with permission. No further reproduction is permitted without permission from Corporate Compliance Insights.
Generative AI is driving rapid change, but many questions remain about the associated risks. Protiviti’s Jim DeLoach talks about how board directors and senior executives can balance the opportunities and risks of this technology and guide their organizations as it proliferates.
So much has been written about generative artificial intelligence (genAI) that it seems like a constant buzz inspiring both wonder and fear. According to a McKinsey webinar last year, the value proposition is alluring:
- GenAI is poised to boost performance and unlock up to $4.4 trillion of value from sales and marketing, product R&D, customer operations, software engineering and other business functions.
- Productivity growth has declined over the last 20 years, but genAI and other technologies are expected to unleash a new wave of productivity growth over the next 20 years.
- Up to 70% of the global workforce has the potential to see up to half of their job functions automated, potentially freeing them up to perform more complex and interesting work.
Admittedly, such claims can be viewed as those of a hype machine in overdrive. But as many extol the possibilities around how AI will alter the way people work, learn, play and communicate with each other, a reality is steadily coming into focus within many C-suites and boardrooms: AI has the potential to harness data and analytics in ways that will enhance customer experiences, increase process efficiencies, innovate products and services and improve productivity across all industries.
But there are risks and limitations to consider. The risks of disinformation and misinformation span an ever-expanding list of potentially egregious abuses, fueling alarm and cries for regulation. As a result, lawmakers, regulators and social institutions are struggling to catch up with the pace of AI development.
But the cat’s out of the bag. Generative AI and large language models (LLMs) are here to stay. For senior executives and board directors, understanding the opportunities, limitations and risks of these models is now table stakes. Given the technology’s accessibility, executives and directors should get hands-on and immerse themselves in it. Armed with a baseline understanding, executive teams and directors should consider the following 10 questions when engaging in strategic conversations regarding genAI and LLMs:
What is the business opportunity?
This important “Why should we care?” question should be considered both strategically and tactically. It could also be a matter of survival for organizations choosing to sit and watch while competitors implement and scale AI. So much has been written about the speed of business, disruptive change and information overload. The prospect of AI offering a way to cut through the ever-expanding mass of data to identify what truly matters, helping organizations adapt at speed, drive innovation and establish sustainable competitive advantage in a rapidly changing global economy, is enticing.
When evaluating the value proposition of AI, decision-makers should focus on how applying it can facilitate learning from data and results in near-real time. They should seek to understand the implications of genAI and LLMs for the industry and what competitors are doing with the technology. Finally, they should assess more broadly where genAI, LLMs and advanced analytics could play a role in the business.
What is our strategy?
Scaling generative AI begins with a strategic view ranging from transformation of the business model for delivering value to the marketplace to tactical productivity improvements in back-office processes. Following are useful questions when evaluating the strategic vision for genAI and LLMs:
- Do we have a strategy for why, where, how and when to deploy? What use cases are we considering, and how are we selecting and prioritizing them? How are we measuring the value contributed?
- Are we organized appropriately to roll out our strategy? How are we empowering our people to build, train and use the technology? How are we scaling implementation across the organization?
- Have we documented our organization’s guidelines and values in deploying the technology regarding such matters as privacy, security, transparency and human versus machine responsibilities? Do our policies account for genAI needing to be governed and managed differently than “classical AI”?
- How do we know we are adhering to our values? For example, do we have a cross-functional ethics committee that vets all plans and actions and monitors for unintended outcomes and consequences?
Gaining insights on the above questions enables executive teams and the board to understand how and why genAI is being positioned in the business. Most important is the recognition that not all uses of generative AI are created equal from a criticality standpoint, nor do they require the same degree of oversight.
What are the legal, regulatory and ethical issues we need to address?
GenAI is on the radar of regulators and policymakers at the national, state and local levels, as well as other stakeholders, due to its potential cyber, privacy and societal risks. The legal and regulatory environment varies by country and region:
- With legislative initiatives already underway and risk frameworks emerging around the world, executive teams and the board should inquire as to how management is keeping track of these market developments.
- Applicable requirements and guidelines are likely to include, but are not limited to, transparency, data security, fairness and bias recognition, accountability, ethical considerations and continuous monitoring and improvement.
- These requirements and guidelines should be embedded into genAI solutions and the company’s internal policies supporting them.
Major players, including Microsoft, Google and NIST, are also weighing in on responsible generative AI practices and risk management.
How are we sourcing and managing the data used by our genAI model(s)?
Senior leaders and directors should obtain an understanding regarding the following:
- Whether the organization is using (a) publicly available models and domains, (b) foundation models that are fine-tuned with internal proprietary data or (c) fully customized models.
- The maturity and readiness of the existing data governance and data management infrastructure.
- The nature of the use cases and the interoperability and maturity of the supporting IT architecture and data ecosystem, which influence the selection or development of fit-for-purpose genAI technology and the supporting data core.
The real power of genAI will likely come from companies infusing it with internal proprietary data. Whether a company uses its own data, third-party data or data that is generally available in the marketplace influences the model’s risk profile.
Do we have the talent we need to implement the technology?
A key adoption challenge is finding and onboarding the right talent with the requisite expertise. Skilled technical practitioners in AI are scarce, especially those who understand how to incorporate the company’s business requirements into a generative AI model. It takes a team effort to run an end-to-end AI-embedded system or process, including business champions, data owners, senior program managers and developers, legal and compliance resources, and operational teams.
Available skillsets heavily influence the mode of genAI a company can deploy. While publicly available capabilities such as ChatGPT require no specialized expertise, they offer far less security, privacy and reliability. That is why many companies will most likely choose the middle road, i.e., fine-tune a foundation model, which requires lighter data science expertise through a low-code interface, as sketched below.
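For illustration only, here is a minimal sketch of that middle road: fine-tuning an open foundation model on internal text using the Hugging Face transformers and datasets libraries. The base model name, the internal_corpus.txt file path and the hyperparameters are assumptions chosen for brevity, not recommendations; a real deployment would wrap this step in the data governance, evaluation and access controls discussed throughout this article.

```python
# Minimal fine-tuning sketch; base model, file path and hyperparameters
# are illustrative assumptions, not recommendations.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "gpt2"  # stand-in for any open foundation model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Hypothetical internal corpus: plain text, one passage per line.
corpus = load_dataset("text", data_files={"train": "internal_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        num_train_epochs=1,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    # mlm=False configures the collator for causal (next-token) language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned-model")
```

Even at this scale, the exercise makes concrete why the middle road demands lighter expertise than building a model from scratch: the heavy lifting lives in the pretrained weights, and the organization’s effort shifts to curating the proprietary data.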
Do we have a governance framework that enables experimentation?
Executives and directors should inquire as to the governance process in place — including the organizational structure — for overseeing genAI advancements and experimentation across the industry and company. An adaptable generative AI governance framework ordinarily functions through a small cross-functional, multidisciplinary team representing the data, engineering, security and operational aspects of genAI models. Overall governance involves trust and ethical use, risk management, the third-party ecosystem, legal compliance and policies, and standards and controls. Each team responsible for a model should be accountable for its efficacy.
Innovation involving any technology entails experimentation: starting small, learning by doing, keeping track of innovation in the marketplace and responding to value-adding opportunities in an agile way. That is why a genAI review and approval process is imperative. As applications proliferate throughout the organization, the CEO should ask every direct report how this capability is being applied, where it is being used, which decisions and processes are affected most by it and who bears responsibility for the efficacy of its outcomes.
What monitoring mechanisms do we have in place?
Human oversight supported by automated alerts and controls is an integral part of any generative AI solution, particularly when the model is connected to hardware or software or there is a significant impact on sensitive decisions, e.g., employment, healthcare services access or protection of vulnerable parties. As models inevitably change over time, they should be evaluated periodically for unreliable or biased content.
Accordingly, the executive team and the board should ensure that management has implemented a process that provides assurance that genAI model outcomes are aligned with intended results and compliant with regulatory requirements. When this is not possible due to the complexity of the correlations in the model, embedding self-check mechanisms as well as human review of AI-generated output is an alternative. Internal audit can also serve as a check and balance. Taken together, these elements of effective monitoring provide decision-makers with real-time alerts of trends indicating the emergence of anomalies or errors and enable continuous model improvement.
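As a concrete illustration of the self-check-plus-human-review pattern, consider a minimal review gate in Python. The confidence score, blocklist terms and threshold below are placeholders invented for this sketch; in practice the checks would call the organization’s own moderation, logging and ticketing systems.

```python
# Minimal human-in-the-loop review gate (all thresholds and rules are
# illustrative placeholders, not recommendations).
import logging
from dataclasses import dataclass
from typing import List, Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai.monitor")

@dataclass
class ModelOutput:
    prompt: str
    text: str
    confidence: float  # assumed score from the model or a downstream classifier, 0..1

BLOCKLIST = {"ssn", "password"}  # stand-in policy rules
CONFIDENCE_FLOOR = 0.7           # assumed threshold for automatic release

def flag_output(out: ModelOutput) -> List[str]:
    """Return the reasons, if any, this output needs human review."""
    reasons = []
    if out.confidence < CONFIDENCE_FLOOR:
        reasons.append(f"low confidence ({out.confidence:.2f})")
    if any(term in out.text.lower() for term in BLOCKLIST):
        reasons.append("possible sensitive content")
    return reasons

def review_gate(out: ModelOutput) -> Optional[ModelOutput]:
    """Release the output, or hold it for a human reviewer (fails closed)."""
    reasons = flag_output(out)
    if reasons:
        log.warning("Held for human review: %s", "; ".join(reasons))
        return None  # routed to the review queue instead of released
    log.info("Released output for prompt %r", out.prompt)
    return out

# Usage example: a low-confidence output gets held rather than released.
held = review_gate(ModelOutput("Summarize Q3 results", "Draft summary...", 0.55))
assert held is None
```

The design choice worth noting is that the gate fails closed: anything it cannot clear is held for a person rather than released, which keeps accountability with people rather than the model.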
How do we set accountability?
Model owners, including those responsible for a model’s design, development and operation, should be held accountable for its proper functioning. Accountability in this context means that throughout their lifecycle, generative AI models perform in accordance with their prescribed, intended role and with applicable regulatory frameworks. The question for the board regarding these models is, how do we know they are working as intended? Thus, a model’s performance should be supervised as an employee’s performance would be. The aforementioned monitoring mechanisms and emphasis on sufficient transparency facilitate the supervisory process, along with appropriate enforcement policies.
How do we manage the risks?
GenAI and LLMs take the issues embedded in social media to a whole new level, and early implementations have already exposed their shortcomings. While the summaries provided by the models are easy to read, the source and provenance of content are not always evident. Ownership rights are a major concern as the legal and regulatory landscape shifts at lightning speed to address how original content is differentiated from proprietary content.
How can one be sure that the output received from generative AI is not infringing on the intellectual property (IP) of others? Or, conversely, how do we know that company IP is not being inadvertently fed into the public domain through the use of genAI models? Who owns the output from such models, and can that output be copyrighted? In addition, there are issues of bias and prejudice in text and images as well as deepfakes, or images or video that can appear realistic but are deceptively false, making it almost impossible to discern truth from fiction.
These issues can lead to misinformation — inaccurate and misleading content — as well as blatant plagiarism. They can also lead to disinformation — intentionally misleading content, including mimicking people or specific individuals through falsified photographs, videos and voice scams. They open the door to more sophisticated cyber threats and deceptive social engineering strategies.
If a company is using its own data in conjunction with an appropriate governance framework, this becomes less of an issue. Unless the data set is controlled, the misinformation, falsehoods, opinions and conjecture so prevalent in social media and elsewhere could become part of the written record from which generative AI draws. Lack of data control also triggers the aforementioned ownership and copyright issues.
Accordingly, executive teams and their boards should understand how these issues will be addressed. Using controlled datasets in a closed-source model and requiring attribution of genAI content can engender transparency and confidence. But this approach also introduces the risk of unintentional bias on the part of those who define the data sets and program the algorithms for creating content from the data. There is also the issue of managing data from external sources, e.g., strategic suppliers, third-party providers and channel partners.
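What attribution can look like in practice is simple to sketch. The snippet below, a hypothetical illustration rather than any standard, stamps each generated artifact with a provenance record identifying the model, the controlled dataset version, a timestamp and a content hash so reviewers can trace where a piece of content came from. All field and parameter names are invented for this example.

```python
# Hypothetical provenance stamp for genAI output (field names are illustrative).
import hashlib
import json
from datetime import datetime, timezone

def attribute(content: str, model_id: str, dataset_version: str) -> dict:
    """Package genAI output with a provenance record for attribution."""
    return {
        "content": content,
        "provenance": {
            "generated_by": model_id,
            "dataset_version": dataset_version,
            "created_at": datetime.now(timezone.utc).isoformat(),
            # The hash lets reviewers detect later tampering with the content.
            "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        },
    }

record = attribute("Draft marketing copy...", "internal-llm-v2", "corpus-2024-03")
print(json.dumps(record, indent=2))
```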
What are the change management issues?
With resistance to change a formidable risk for many organizations and deployment of generative AI proliferating, employees need to understand the ground rules for its responsible and ethical use. Management should communicate to the organization genAI’s strengths and limitations, the intention to deploy it thoughtfully with purpose, the initial use cases planned and how it aligns with broader strategic initiatives, such as those regarding environmental, social and governance (ESG) and diversity, equity and inclusion (DEI).
Reskilling and upskilling will be necessary for those employees with job functions that are affected. The policies and ground rules for genAI’s use should be aligned with applicable laws and regulations and the need to protect the company’s intellectual property (e.g., trade secrets and other confidential information). They should also address the related impact on cybersecurity and privacy risks and reinforce monitoring protocols and accountabilities.
GenAI adds yet another disruptive force for business, alongside supply chain disruptions, workplace upheavals, talent shortages and higher inflation and interest rates. The good news is that off-the-shelf interfaces to the technology are so readily available that adopting it is almost equivalent to consuming software-as-a-service.
The bottom line for senior executives and directors is clear: Prepare yourselves for the journey.
Jim DeLoach, a founding Protiviti managing director, has over 35 years of experience in advising boards and C-suite executives on a variety of matters, including the evaluation of responses to government mandates, shareholder demands and changing markets in a cost-effective and sustainable manner. He assists companies in integrating risk and risk management with strategy setting and performance management.