The rapid development of AI and generative AI is transforming business functions across industries, including private equity. We talk regularly with sponsor clients about their uptake of AI solutions, and internal adoption of AI tools by PE sponsors has accelerated dramatically in the last year.
Firms have adopted and deployed AI solutions with lightning speed. From enhanced financial models to content summarization and creation, and from diligence analysis to performance tracking, sponsors have traveled a long way from their early, hesitant experiments with ChatGPT. Some have even developed and deployed their own internal, proprietary models.
All of this forward motion is occurring against a backdrop of surprisingly fast legislative and regulatory activity, as well as a quickly developing set of best practice standards. Sponsors can lead on AI, but they should keep best practices and potential risks top of mind. Below are three guideposts for AI adoption and management.
1. Manage Risk Early and Often
AI is, fundamentally, a pie that is extremely difficult (and expensive) to un-bake. Training data, fine-tuning data, input data: once data goes in, for all intents and purposes, it won't come out. This makes it critical to get risk management right at the outset. Make sure you have the right stakeholders in your organization evaluating use cases and implementing controls, so that you can stand behind your AI activities.
AI risk management can be complex, but it doesn't have to be. Some organizations will find that their existing policies and procedures (around software approvals, information management and device security) comfortably address emerging AI use. Others will need to implement additional policies (such as an acceptable use policy). Still others will want a more comprehensive, dedicated approach (such as the AI Risk Management Framework released by the National Institute of Standards and Technology).
What risk management means can vary widely from one organization to the next, and there are multiple ways to do it. More important than the specific approach is being able to demonstrate that your organization has thought through the risks, has put appropriate guardrails in place for its stakeholders, and remains actively engaged.
2. Be Flexible and Curious
What does it mean to get risk management right? It does not have to mean stifling your team's enthusiasm for testing and adopting new AI solutions. Piloting new opportunities is, in fact, an important aspect of risk management. You can and should encourage your teams to be curious and ambitious and to pilot new projects. That culture thrives when a clear risk management approach sits underneath it.
Risk management done right puts scaffolding in place that allows the organization's uptake of AI to evolve over time. That means getting underlying principles and guardrails right, then building more specific policies on top of them: the principles and guardrails are your foundation, and the policies can develop as needs change. For example, one principle might be human oversight of AI; the associated guardrail would be a requirement that every AI use case in the organization have human oversight; and an AI acceptable use policy would articulate specifically how employees are expected to ensure that oversight in practice.
Organizations should develop a consistent cadence of revisiting policies and practices, and a willingness to make changes when and where they are called for. Changes and additions may be necessary for multiple reasons, including legal developments (with the EU AI Act now in force and numerous U.S. states adopting AI-specific legislation) and lessons learned about how specific AI use cases are working within the organization. AI risk management demands attention not only to risks to your organization, but also to individuals, other organizations and society at large. Organizations getting risk management right bake those interests into their approach.
3. Know Your Data and Do Your Diligence
Any AI use case can be defined by the data it demands, and data use carries risk.
An organization should understand and document each AI use case it is piloting or deploying more broadly and, within each use case, the types of data in play. Do your analysts want to use AI tools to analyze data that you received under an NDA? Do you have a vendor offering digital health solutions to your employees? In the deal context, does your target offer a solution that requires training on consumer information?
When you have a clear picture of your data reliance, you can assess the potential risks and make sure you're solving for them. For instance, to get comfortable with the tool your analysts want to use, you'll need to understand how the proposed tool handles data. Does the provider reserve rights to use input data for its own internal purposes? If so, using the tool may violate the terms of your NDA prohibiting the sharing of confidential information with third parties. Is your wellness vendor collecting sensitive health information from your employees? If so, take a hard look at that vendor's terms and at who is responsible for the security of the data. Does your target that processes consumer information have its own AI guardrails and a privacy program? Make that a priority in diligence, to ensure the target stays on the right side of the FTC's focus on algorithmic disgorgement as a remedy for alleged improper use of consumer information.
AI is becoming integrated into nearly all areas of our organizations, so awareness of AI opportunities and risks should be woven into all of our risk management functions. Sponsors should take concrete steps now. Gather stakeholders to think deeply about your organization's values and goals and which principles are foundational. Be curious about new tools and developments in AI. Consider what policies you already have in place to support AI adoption, and whether new or different policies are needed. Add AI-specific questions to your standard vendor risk assessment questionnaire. Dust off your NDA form and consider whether any provisions should be added or revised. Ask your teams what AI tools excite them and what support they need to launch a pilot.
As with any new opportunity, the risks and the rewards walk side by side. Establishing an AI risk management culture now, one that prioritizes both innovation and prudence, will pay dividends in the future.