An Interview with
Andy Sharma
AI, Risk, and Reward: How Private Equity Can Create Value Through Secure Innovation
About the Interviewer:
Matt Gordon
Matt is a Chicago-based cybersecurity talent and advisory partner who works closely with CISOs to help them design, scale, and mature high-impact security and infrastructure teams. He partners with security leaders navigating growth, transformation, and increased regulatory or investor scrutiny, supporting everything from early team design to executive-level hiring.
As a Principal Consultant at Intaso, Matt is part of a global cybersecurity practice with 30+ years of combined experience supporting security leaders and organizations. Intaso’s work is grounded in deep industry knowledge, long-term relationships, and a highly tailored approach to helping security leaders build teams that actually move the business forward.
Andy Sharma
Andy Sharma is a PE-proven CIO, CISO, and CTO who helps investors and management teams turn cybersecurity and compliance into enterprise value. He has led global teams of 650+ with $200M+ budgets across PE-backed and Fortune 500 companies, modernizing IT, scaling security, and enabling growth through cloud, AI, and automation. Andy has led cyber due diligence on 16+ transactions totaling $2.5B+, and is widely known for translating regulatory complexity into board-level strategy that accelerates revenue, strengthens retention, and de-risks exits.
He currently serves as CIO & CISO at Redwood Software. With a career spanning senior leadership roles at organizations including Charles Schwab, Santander Consumer USA, and Optiva, Andy brings deep expertise in cybersecurity, digital transformation, and enterprise governance.
“How do we capture the upside of AI without exposing the business to unacceptable risk?”
Why is this such a pivotal moment?
AS: It’s the right question, because AI adoption is not just a technology play. It’s a value-creation strategy with a new risk equation. The firms that will win in this next chapter are not just the ones that move fastest on AI, but the ones that integrate security, governance, and ethics into the design of innovation itself.
You’ve said AI has moved “from experimentation to expectation.” What does that shift look like in practice?
AS: Only a year ago, most portfolio companies were exploring AI use cases. Today, many are expected to operationalize them. PE sponsors are pressing for faster digital leverage — from intelligent customer engagement and automated analytics to generative tools that accelerate content, code, and decision-making.
But speed introduces exposure. AI expands the attack surface, data dependencies, and reputational risks. Sensitive data feeds new models. Algorithms make opaque decisions. And third-party AI integrations blur the boundary of accountability.
The challenge isn’t just technological; it’s one of governance.
In most organizations, AI innovation is sprinting ahead of the guardrails designed to contain it.
That leads to what you often describe as the “dual mandate” for PE-backed businesses. Can you explain that?
AS: For PE-backed businesses, this tension is especially acute. Portfolio companies are under constant pressure to deliver EBITDA improvements, integrate acquisitions, and scale efficiently, all while managing limited resources.
In this environment, AI can be transformative, if it’s deployed responsibly.
But when governance lags, AI risk can quickly erode value:
- A data privacy lapse tied to an AI model can trigger regulatory fines or customer attrition.
- Inaccurate or biased algorithms can damage brand trust.
- A compromised AI integration can become an entry point for cyberattackers.
As one CIO recently put it:
“AI without security is just automated risk at scale.”
That’s why leading PE firms are reframing AI adoption as a risk-adjusted growth opportunity, not a pure innovation race.
So where does governance actually begin? What does good AI governance look like?
AS: The solution starts with governance. AI strategy must evolve from “what can we automate?” to “what should we automate, and under what controls?”
Forward-leaning portfolio companies are embedding AI risk management into enterprise governance frameworks, treating AI like any other critical asset class.
Key elements include:
- Accountability Structures
- Data Provenance
- Transparency
- Third-Party Oversight
- Ethical Guidelines
These are not bureaucratic constraints; they are the trust architecture that allows innovation to scale safely.
Where does cybersecurity sit in this picture?
AS: AI doesn’t replace cybersecurity; it depends on it.
Every AI initiative increases the volume, variety, and velocity of data being handled. That data is the new attack surface.
Strong cybersecurity foundations (from identity management to threat detection and incident response) become the enabling layer for AI adoption.
Equally important is AI-enabled security:
- Predictive threat detection
- Generative AI for faster incident analysis
- Automated compliance reporting and anomaly detection
How should private equity look at AI from a value-creation perspective?
AS: For private equity, every investment thesis now intersects with AI. The firms that create differentiated value will build AI assurance models that quantify both upside and downside in financial terms.
This means going beyond cyber maturity assessments to evaluate:
- AI readiness
- Data integrity
- Operational resilience
- Reputational exposure
Integrating these dimensions into the investment and exit playbook creates a measurable edge:
- Lower diligence risk
- Higher valuation multiples
- Faster post-acquisition integration
- Stronger confidence with LPs and regulators
AI-driven value creation is no longer just about using the technology; it’s about governing it better than your competitors.
You’ve said AI governance is becoming a strategic differentiator. What are leading companies doing today that others aren’t?
AS: In the next two years, AI governance will move from a compliance exercise to a strategic differentiator. Investors, boards, and customers will favor companies that can prove their AI is secure, ethical, and reliable.
Forward-looking portfolio companies are already:
- Establishing AI risk committees under board oversight
- Building cross-functional councils for AI ethics, data strategy, and cybersecurity alignment
- Embedding risk quantification into digital KPIs
- Requiring vendors to meet shared security and transparency standards
These organizations understand that trust is the ultimate accelerant. When governance and security are built in, innovation becomes scalable and defensible.
And finally, what’s the way forward?
AS: AI is changing the rules of competition. But in this new era, speed without stewardship is a liability.
Private equity firms and their portfolio companies have an opportunity (and a responsibility) to define what responsible AI adoption looks like.
Those that lead will not only protect value, but create it: through faster decision-making, smarter automation, and enhanced trust from customers, investors, and regulators alike.
The next frontier of value creation won’t come from deploying AI faster. It will come from deploying it smarter and safer.