CTO + CPO = CPTO?

The Role Convergence Debate.

Should your CTO and CPO be one person or two people in distinct roles? There is no universal answer. The right structure depends on your product’s complexity, your team’s maturity, and how tightly your competitive advantage is bound to technical execution. What AI is changing is not the urgency of the question but the shape of it: when the boundaries between “what to build” and “how to build it” start dissolving, the traditional division of labor between these roles gets harder to maintain.

How should you think about CTO CPO CPTO roles and their convergence at your company? It is a question growth-stage companies have wrestled with for as long as these roles have existed. We saw this convergence happen with the SaaS boom over the past 15 years. But with AI, the CTO CPO CPTO role convergence question is playing out again in exec team meetings, board meetings, and across many sectors. It tends to generate strong opinions quickly, and those opinions are often shaped more by what worked at someone’s previous company than by what the current situation actually requires. What follows maps the real trade-offs, so the call you make is grounded in your context, not someone else’s.

At Hoola Hoop, we have watched this debate play out across our portfolio companies for the better part of a decade. For us, this is not a question AI invented. At our quarterly CTO and CPO leadership roundtables, we have had CPTOs speak directly to what the role demands, what the unique challenges are, and where it can break down. The patterns in what they describe, across different companies, team sizes, and sectors, are consistent enough to say that something meaningful is happening to these roles, not just to the individuals in them.

AI has changed the terms of the conversation, and speed is only part of it. The shift is more fundamental: the boundary between “what to build” and “how to build it” is dissolving. Product leaders are building functional prototypes using tools like Cursor and Claude Code and submitting pull requests directly to engineering repositories. Engineers are shaping UX decisions and product direction from day one, not after requirements land in a backlog. AI is not just making individuals faster; it is eliminating the handoff between roles, turning a sequential process into continuous, shared problem-solving. We have written in more depth about how AI is reshaping the CTO and CPO roles and what it means for how teams are structured. For the CTO CPO CPTO question, the implication is significant. The issue is not optimizing the handoff between product and engineering; it is whether the traditional separation still maps to how your work actually gets done.

How the CTO and CPO Roles Were Designed

The traditional separation between CTO and CPO made sense for a specific era of technology development. The CTO owned the technical foundation: the how, core architecture, infrastructure, engineering velocity, security, and the team capable of building and maintaining all of it. The CPO owned the user-facing strategy: the why, roadmap, prioritization, market fit, and the voice of the customer inside the organization.

Each role had a clean mandate. The CPO determined what mattered to users and, as a result, what the company should build. The CTO figured out how to build it. For a long time, those mandates were distinct enough to justify two separate leaders. Build cycles were long. The distance between an engineering decision and a customer outcome was wide enough that two people could stand on either side of it without stepping on each other’s toes.

That distance has collapsed. Product-led growth, platform business models, and full-stack engineering have been closing the gap for the better part of a decade. When the product itself becomes the primary acquisition and retention mechanism, every technical decision becomes a product decision. When engineers own the full-stack delivery, the leaders above them need to hold both lenses simultaneously. By the time most companies reach Series B or C, the CTO who cannot think in product outcomes and the CPO who cannot interrogate technical trade-offs are both becoming a liability.

How AI Is Accelerating the CTO CPO CPTO Role Convergence

AI has not created the underlying tension between product and technical leadership; it has removed the slack that allowed organizations to manage that tension through process. Three shifts stand out.

The collapse of the build cycle. Features that once required quarters to scope, design, build, and ship can now be prototyped in days and deployed in weeks or less. That compression means the handoff between product thinking and technical execution can no longer afford to be a formal process with its own calendar, gate reviews, and dedicated meeting cadences. The thinking has to happen simultaneously, between leaders who are in genuine dialogue about both domains at once.

The disappearance of clean ownership. When a company deploys a large language model as a core product feature, who owns it? The CPO, because it faces the user? The CTO, because it sits on the technical stack? Neither answer is sufficient. Someone needs to own the intersection, and that person needs to understand model behavior, inference costs, data quality, user expectations, safety considerations, and product positioning all at once.

Emergent product capability from the technical layer. AI is now generating product capability that neither the CTO nor the CPO put on the roadmap. These capabilities are not coming from a user story or a product brief. They surface from the technical layer and land in the product layer. A leader who can only see one side of that process is going to miss the opportunity, or deploy it without fully understanding the implications for users.

Five Signs Your CTO CPO Structure Has a Problem

The CTO CPO CPTO role convergence debate often stays abstract until someone recognizes their own organization in it. These five patterns tend to appear before the structural role issue becomes a crisis:

01
The Separate Document Problem
Your roadmap and your architecture decisions live in separate documents and are reviewed in separate meetings. By the time they meet, someone has already committed to something that constrains the other. The handoff becomes a renegotiation rather than a continuation.
02
The Parallel Team Problem
Your CTO and CPO have strong individual relationships with their respective teams, but the combined leadership group does not share a clear view of what the company is building or why. Engineering and product operate with different mental models of the same product.
03
The AI Handoff Problem
AI features are being prototyped and shipped by engineering before product has shaped the user experience. Or product is committing to AI capabilities, and even building via agents, before engineering has confirmed they are feasible at the required cost and quality. Neither direction produces good outcomes at speed.
04
The Unresolved Conversation Problem
The CPTO question has come up in two or more executive or board meetings without a clear decision. When a structural question keeps surfacing without resolution, the organization builds around it rather than through it.
05
The Rebuild Problem
You are rebuilding something your CPO shipped 12 months ago because the technical foundation was not designed to support what the product needed to become. This is the most expensive symptom and the one most often attributed to the wrong cause.

The Common Thread

Each of these patterns is a symptom of the same underlying condition: the open wound between product and technical leadership is visible, and the organization is building scar tissue around it rather than closing it. The CTO CPO CPTO role convergence question gets harder to avoid the more of these patterns are present simultaneously. The cost of leaving that wound open is painful and increases with every product cycle.

What the CPTO Structure Does Well

When it works, a single CPTO removes the most common failure mode at growth-stage companies: the structural gap between what is technically possible and what actually gets built. That gap is rarely caused by bad people. It is usually caused by two leaders with different mental models, different incentive structures, and different stakeholder relationships. Those structural conditions produce friction by design. In a CPTO model, the tensions between technical rigor and product velocity live inside one person. They get resolved faster.

The CPTO structure works best when the company’s product and technical strategy are genuinely inseparable. For an AI-native company, a developer tools company, or a company building on a fast-moving technical platform, separating product and technical leadership can create exactly the kind of schism that slows the decisions that matter most. In these contexts, the CPTO is not a compromise. It is the natural shape of the role. Leading voices in the field argue that in the AI era, the CPTO becomes one of the most critical executive roles a company can define well.

The companies that benefit most from a CPTO role tend to share a common characteristic: their competitive advantage lives at the intersection of technical and product capability, not on either side. When that is true, having two leaders who must negotiate across that intersection adds latency that a single leader eliminates.

What the CPTO Structure Gets Wrong

The counter-argument deserves equal attention. The CTO and CPO roles are each a genuine full-time job requiring different areas of expertise. Asking one person to hold both is often asking them to half-execute two strategies rather than fully commit to one. At scale, the gaps become critical. Security incidents, architectural decisions, and platform reliability require sustained CTO attention that a leader splitting their focus cannot always provide. Roadmap clarity, user research integration, and commercial alignment require sustained CPO attention that the same leader, pulled toward technical firefighting, will deprioritize.

There is also a governance risk that organizations consistently underestimate. When the CTO and CPO are two people, they provide the organization, and each other, with a structural counterweight. The CTO’s pragmatism checks the CPO’s ambition. The CPO’s user focus checks the CTO’s tendency to over-engineer. Removing that check requires the CPTO to internalize both perspectives and actively argue against their own instincts. This is possible, but it requires the right context and a specific kind of discipline that most leaders develop only after making costly mistakes on both sides.

The right CPTO is also genuinely rare. The number of leaders with real depth in both technical architecture and product strategy, combined with the operational experience to run both domains simultaneously, is very small. Promoting someone who is 80% CTO into a CPTO role does not create a CPTO. It can create a “CPO” gap that no one owns. Companies that have tried to merge the roles and found their CPTO struggling are not failing because the concept is wrong; they are usually failing because they did not have the right person for the full scope of the job.

There is a deeper leadership dimension worth naming here. Marc Maltz at Hoola Hoop writes about the authority-control paradox at the heart of senior leadership: as formal authority increases, actual control over outcomes often decreases. A CPTO sits in what Marc calls the Crisis Zone, where formal authority over product and technology is at its peak, but outcomes depend on two large teams, competing organizational dynamics, and market forces simultaneously. That gap does not close because the title does. It tends to widen. Add to this what Marc describes as inherited baggage: step into a combined role and you inherit not just a broader job description, but the historical friction between product and engineering, the cultural patterns each team built under the previous structure, and what psychoanalyst Christopher Bollas calls the “unthought known,” the unconscious patterns that everyone acts on but no one names. Courageous role-taking, in Marc’s framing, means entering a role with realistic expectations rather than the idealized version you were sold. That discipline is especially important for a CPTO, where the gap between the job description and the lived reality tends to be widest.

What Great Leaders Do Regardless of Title

The executives navigating the CTO CPO CPTO role convergence question best, regardless of their title, share a specific capability: they have invested in genuine fluency in the domain that is not their home ground. This is not about becoming a generalist. It is about building enough literacy in the adjacent domain that the conversation between product and technology happens at the right level of specificity.

πŸ“–
Build genuine domain literacy
The CTO who thinks in product outcomes builds technical strategy that anticipates where the company needs to go, not just where it has been. The CPO who can interrogate technical constraints does not overpromise, does not generate engineering debt through roadmap commitments, and does not get blindsided when a simple feature turns out to require a three-quarter rebuild. Both invest time in the other’s world.
🀝
Make ownership explicit
Don’t leave the ownership of product capability, AI or otherwise, to emerge by default. Identify the specific intersection where product and technical decisions collide, name who owns it, and build the accountability structure around that person. Ambiguity here creates exactly the kind of slow, expensive decisions that a poorly defined CTO CPO structure was always at risk of producing.
⚑
Compress the decision loop
The most effective CTO-CPO pairs operate with overlapping rather than sequential thinking. Product direction and technical feasibility are discussed in the same room, at the same time, early. The decision does not travel from product to engineering and back for approval. The goal is not to eliminate the distinction between the roles but to eliminate the latency between them.
πŸ“
Build shared measurement
If your CTO and CPO are tracking different success metrics with no shared layer, structural divergence is almost inevitable. You’re marching towards different north stars. Build at least one set of metrics that both leaders are accountable for and that connect technical execution to product outcomes. This is much harder than it sounds, but the discipline of attempting it surfaces misalignment before it becomes conflict.
πŸ”¬
Use structural friction as a diagnostic
When your CTO and CPO repeatedly disagree, that is information about your structure, not just your people. Before concluding that you need different leaders, ask whether the conflict is a symptom of a structural gap that a different design would resolve. The answer tells you whether you need a personnel decision or an organizational one.

The Underlying Principle

The leaders who develop genuine cross-domain fluency, regardless of whether they hold one title or two, consistently make better calls at the intersection of product and technology. This matters with or without AI in the picture. Building that literacy is not a response to a trend. It is a core leadership discipline for anyone running a technology organization.

The question is not whether to merge the roles. It is whether the leaders holding those roles are building the shared understanding that makes the seam between them invisible to the organization.

Questions to Sit With

If you are working through the CTO CPO CPTO role convergence question right now, these organizational questions are worth sitting with before you make a call. For anyone actually considering stepping into the CPTO role, Marc’s questions for courageous role-taking are an equally valuable companion: do you have the tolerance and temperament for this risk zone? What inherited baggage will you encounter? Can you negotiate for the authority and boundaries you will actually need? Both sets of questions matter.

  • When your CTO and CPO last disagreed on a significant, strategic decision, was it a productive tension or evidence of a structural gap?
  • Is your roadmap genuinely shaped by both technical possibility and customer need, or does one side consistently dominate?
  • If you had to hand a combined CPTO brief to one person on your current leadership team, who would it be? What does that answer tell you about where your biggest gap actually is?
  • How is AI adoption showing up right now in the space between your technical strategy and your product strategy? Who owns that space?
  • Are you designing your CTO and CPO roles for the jobs as they existed three years ago, or for what those roles require today and tomorrow?

A Final Thought

The CTO CPO CPTO role convergence debate will not be settled by an org chart. It gets resolved by the quality of the relationship between the leaders holding those roles, and by the individual investment each of them makes in understanding the domain that is not their primary one. AI has made that investment more urgent. The pace of change in the technical layer is now fast enough that a CPO who treats engineering as a black box is navigating with incomplete information. The pace of change in user expectations is fast enough that a CTO who treats product as someone else’s responsibility is building something that will need to be rebuilt.

Whether your company has two leaders, one combined leader, or is still working through the decision, the underlying work is the same: build genuine technical-product fluency at the senior leadership level. Create the conditions for fast, high-quality decisions at the intersection of both domains. And recognize that this intersection is where most of the highest-stakes choices in today’s technology company are being made.

This is exactly the kind of challenge that looks different from the inside than it does from the outside. The leaders who navigate it best are rarely the ones with the strongest view about how it should be structured. They are the ones who have built the kind of executive fluency that extends well beyond their primary domain. If you are doing that work in parallel on the relationship side of the house, the principles in Managing Up as a CTO or CPO are closely related. The same investment in understanding what the people across the table actually need from you applies in both directions, and the skills compound.

Ready to talk about CTO coaching with Leigh?

Book a 30-minute introductory call to explore whether coaching is right for you.

Book a meeting with Leigh β†’

Leigh Newsome

Partner, Hoola Hoop Β· CTO & CPO Coach

Leigh Newsome is a Partner at Hoola Hoop and a CTO & CPO coach with 25 years of experience scaling product and engineering teams. He has worked with a wide range of startups and global enterprises, including Avid, Digidesign, WPP, and Kantar/Millward Brown, and successfully led TargetSpot (backed by Union Square Ventures, Bain Capital Ventures, and CBS) through its acquisition to Radionomy Group (Vivendi). When he’s not coaching CTOs, you’ll find him teaching digital audio to graduate students at NYU, building audio and signal processing applications, or flying fixed-wing aircraft, but never all three at once.


Agentic AI Governance: What CTOs Need To Know

The Agentic AI Governance Framework Every CTO Needs in 2026.

Deploying AI agents has become the easy part. Most engineering organizations are doing it faster than they can govern it, and that gap is where the real risk accumulates.

Agentic AI governance has become a defining challenge for leaders in 2026. Dell Technologies recently changed its word of the year from “agentic” to “governance,” which is a small signal worth paying attention to. The industry’s most forward-thinking leaders are not debating whether AI agents are capable. They are asking whether organizations are capable of running them responsibly, at scale, without losing control of outcomes.

The numbers behind that shift are significant. Gartner predicts that over 40 percent of agentic AI projects will be canceled by the end of 2027 due to escalating costs, unclear business value, or inadequate risk controls. Accelirate’s analysis of the governance crisis points to poor data foundations and ungoverned deployments as the root causes. What the successful companies have in common is instructive: they built governance infrastructure before they needed it, not after.

A large part of what is making that gap worse is “agent sprawl.” Companies, especially startups, that are hungry to move fast and frustrated by slow procurement cycles, are independently spinning up AI agents outside of any centralized governance framework. Each team means well. In aggregate, what they create is a patchwork of ungoverned autonomous systems, each with its own tool access, its own cost exposure, and its own failure modes, with no CTO able to see the whole picture. Agent sprawl has moved from a technical concern to a board-level risk topic in the space of twelve months.

Agentic AI governance is not a compliance exercise. It is the leadership infrastructure that determines whether your agents remain useful tools or become autonomous sources of risk your organization cannot trace or contain.

Why This Is a Different Kind of Engineering Problem

What makes agentic AI so powerful is also what makes governing it fundamentally different. Traditional software systems do what they are programmed to do. When something goes wrong, there is a decision log, a code path, or a config change to point to. Agents operate differently. They reason, act on incomplete information, and chain decisions together in ways that are not always fully predictable from the inputs. That is what makes them capable of compressing days of engineering work into minutes. It is also what requires a different control architecture.

When an agent fails in production, it rarely presents as a crash or outage. Instead, it manifests as a confidently made but subtly incorrect decision that can be repeated across many transactions before it’s detected. By the time it surfaces, the blast radius is substantial and the causal chain is hard to untangle. Traditional audit trails weren’t built for this. Neither were escalation processes.

Take MCP (Model Context Protocol), a standard that connects AI agents to external tools and data sources: databases, repos, communication channels. Every MCP server you provision to an agent is both a capability expansion and a blast radius expansion, often without the access review you would apply to a human team member requesting the same permissions. I’ve seen platform teams find that MCP server sprawl is one of the primary mechanisms through which agent sprawl actually happens. Access accumulates informally until no one can tell you what your agents are actually capable of doing. Governing which agents access which MCP servers, under what conditions, and with what audit trail, is becoming a core platform engineering necessity.
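The deny-by-default access review described here can be made concrete. The sketch below is a minimal, illustrative Python model of least-privilege tool grants for an agent; all names (`ToolGrant`, `AgentPolicy`, the agent and tool identifiers) are hypothetical, not part of MCP or any real framework.

```python
from dataclasses import dataclass, field

@dataclass
class ToolGrant:
    """An explicit, reviewable grant of one tool to one agent."""
    tool: str
    granted_by: str   # who approved the access
    reason: str       # why the agent needs it

@dataclass
class AgentPolicy:
    """Least privilege: an agent may only call tools it was explicitly granted."""
    agent_id: str
    grants: dict[str, ToolGrant] = field(default_factory=dict)

    def allow(self, grant: ToolGrant) -> None:
        self.grants[grant.tool] = grant

    def check(self, tool: str) -> bool:
        # Anything not explicitly granted is denied by default.
        return tool in self.grants

policy = AgentPolicy("billing-agent")
policy.allow(ToolGrant("read_invoices", granted_by="platform-team", reason="core task"))

assert policy.check("read_invoices")        # explicitly granted
assert not policy.check("delete_customer")  # denied by default
```

The point of recording `granted_by` and `reason` alongside each grant is that access expansion becomes an auditable event rather than an informal accumulation, which is exactly the failure mode the paragraph above describes.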

There’s also a hidden governance trap. When IT responds too strictly to these risks, AI development doesn’t cease; it goes underground. Teams find workarounds, and ungoverned experiments run in personal cloud accounts. The organization ends up with the worst of both worlds: an official AI posture that appears controlled, and a shadow AI ecosystem that is invisible and entirely ungoverned. The CTO’s governance challenge isn’t just about preventing unsafe agents from running. It’s about designing a framework permissive enough that teams don’t bypass it entirely.

Here’s How Agentic AI Governance Breaks Down

Below are a few patterns that are starting to emerge more broadly, including themes from our Q1 2026 CTO Roundtables. These aren’t hypothetical; they’re already playing out in real-world systems.

01
No Accountability Chain
When an agent makes a consequential mistake, who owns it? In some organizations today, the answer may be “nobody” or “it’s not clear.” Accountability diffuses across teams, and the post-incident conversation becomes a search for a process rather than a decision. Governance requires a clear accountability chain before something goes wrong, not after.
02
Observability as an Afterthought
You cannot govern what you cannot see. Many organizations add observability to agentic systems post-deployment, when something has already broken. By that point, you are reconstructing a decision chain rather than monitoring one in real time. Provenance trails, records of what an agent decided, why, and exactly what information it acted on, are the foundational layer that makes every other governance control possible.
03
Ungoverned Tool Access
Agents operate by calling tools: APIs, databases, external services, communication channels. Without deliberate access controls, an agent’s blast radius grows with every tool you provision to it. MCP server sprawl is a specific version of this problem platform teams are wrestling with right now. The principle of least privilege, foundational in security for decades, applies to agents just as it does to human users. Most agentic systems are not designed this way yet.
04
The Human-in-the-Loop Trap
Many organizations respond to governance concerns by adding human review steps to every agentic workflow. This sounds safe, but it creates a different problem: the human review becomes a rubber stamp. Reviewers see dozens or hundreds of agent decisions per day, most of which look fine, and their attention degrades. Effective governance requires being deliberate about where human judgment adds real value, rather than inserting it everywhere and getting the illusion of control rather than the substance of it.
05
Cost Without Visibility
Usage-based billing means every agent invocation, every tool call, every token consumed is a cost event. Without governance controls on agent behavior, a poorly designed agent or an unexpected production edge case can generate AI spend that dwarfs anything in your infrastructure budget and often isn’t caught until the monthly reconciliation. Treating agentic usage as a first-class cost dimension, with the same attribution and alerting discipline you apply to cloud spend, is what makes the economics governable.
06
Agent Sprawl and Shadow AI
Teams under pressure to move fast will not wait for procurement cycles. When the official path is too slow or too restrictive, teams spin up their own agents using personal accounts, consumer tools, or ungoverned cloud environments. The CTO ends up with two parallel AI ecosystems: a visible one that is governed and an invisible one that is not. Governance frameworks that are too rigid to be usable create the very shadow AI problem they are trying to prevent.

What connects all of these is a version of the same challenge: governance infrastructure designed for systems that do exactly what they are told. Agents do not operate that way. They reason under uncertainty, take action based on incomplete context, and compound decisions in ways that could be hard to predict in advance. The organizations getting this right have accepted that difference and redesigned their controls accordingly, rather than assuming the old frameworks would carry over.

What Good Agentic AI Governance Looks Like

Governance is not a brake on AI capability. When done well, it is what makes capability sustainable. Five practices distinguish the organizations that are getting this right:

πŸ”
Observability before deployment
The most effective agentic governance starts with provenance trails baked into the system from day one. Every decision an agent makes should produce a log: what it was asked to do, what information it acted on, what tool it called, and what outcome it produced. This is not just for debugging. It is the foundation for accountability, cost analysis, and continuous improvement of agent behavior over time.
πŸ”
Least-privilege tool access
Every tool you give an agent is a surface for unintended consequences. Leading engineering organizations are applying the same least-privilege principles to agentic tool access that they apply to human user permissions. An agent should have access to the minimum set of tools required to complete its task, with explicit governance over any expansion of that access. This is the single most effective way to limit blast radius when something goes wrong.
🧠
Deliberate human checkpoints
Human-in-the-loop is not a binary choice between full autonomy and constant oversight. Effective agentic AI governance identifies the specific decision types that require human judgment, and routes only those decisions to humans, with the context needed to review them meaningfully. This requires real understanding of where the agent’s reasoning is reliable and where it is not.
πŸ“‹
Policy enforcement as code
Governance policies that live in documents are not governance. They are intentions. The organizations doing this well encode their agentic governance policies directly into the systems: rate limits, scope boundaries, approval gates, and cost controls that are enforced automatically rather than left to human discipline. This ensures the baseline controls hold even when no engineer is watching at 3am on a Saturday.
πŸ“Š
Governance metrics alongside engineering metrics
Most engineering teams instrument agentic systems the way they instrument any service: deployment velocity, P95 latency, workflow throughput. Those matter, but they miss where governance risk actually concentrates. Decision quality, escalation rate, cost per completed workflow, retry frequency, and how gracefully agents handle the boundary of their competence are the metrics that tell you whether your governance controls are holding.
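One way to see what “policy enforcement as code” means in practice: a rate or budget cap that denies an agent invocation in the hot path rather than flagging it in a monthly report. This is a minimal sketch; the class name and the sliding-window design are illustrative choices, not a prescribed mechanism.

```python
import time
from collections import deque

class BudgetGate:
    """Hard cap on agent invocations per rolling window.
    The policy lives in code, so the limit holds without human attention."""

    def __init__(self, max_calls: int, window_s: float):
        self.max_calls = max_calls
        self.window_s = window_s
        self.calls = deque()  # timestamps of recent permitted calls

    def permit(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] > self.window_s:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            return False  # deny: cap reached, caller must back off or escalate
        self.calls.append(now)
        return True

gate = BudgetGate(max_calls=2, window_s=60.0)
assert gate.permit(now=0.0)        # first call allowed
assert gate.permit(now=1.0)        # second call allowed
assert not gate.permit(now=2.0)    # third call within the window is denied
assert gate.permit(now=70.0)       # window has rolled; calls allowed again
```

The same shape extends naturally to token budgets or dollar caps: the gate denies, and an escalation path decides whether the cap should be raised, which keeps a runaway agent from turning into a surprise at monthly reconciliation.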

Good governance is not about limiting what agents can do. It is about creating the infrastructure that lets you trust what they do. Trust is not a feeling. It is a property of a system with provenance, accountability, and meaningful controls. Build those, and speed follows. Skip them, and you are not moving fast. You are accumulating a debt that arrives all at once when something goes wrong in production.

The organizations moving fastest with AI agents are not the ones that skipped governance. They are the ones that built it early enough to scale on top of it.

When an Agent Gets It Wrong: Accountability Frameworks for CTOs

One of the toughest questions in agentic AI governance is accountability: when an agent makes a mistake that harms a customer, exposes sensitive data, or disrupts a business process, who is responsible? This isn’t a philosophical debate. It is a practical challenge that CTOs will face increasingly as agent deployment scales.

The answer needs to be decided before the mistake happens. After the fact, accountability diffuses, timelines blur, and the post-incident conversation focuses on symptoms rather than the structural question of who has ownership. Agentic AI governance requires establishing accountability frameworks prospectively, as part of the development and deployment process, rather than reactively when something breaks.

A practical way to think about this is in three layers:

1. Ownership of the agent’s design: its scope, the tools it can access, and the constraints that shape its behavior;
2. Ownership of deployment: how it’s approved, tested, and rolled back; and
3. Ownership of outcomes: ongoing monitoring, escalation paths, and decisions about expanding or tightening autonomy based on what happens in production.

Without clearly defined owners at each layer, governance defaults to shared responsibility, which in practice means no actual ownership at all.
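The three layers above can even be enforced structurally: if an agent cannot be registered without a named owner at each layer, “shared responsibility” cannot creep in silently. A minimal sketch, assuming a simple in-process registry; the class and field names are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentOwnership:
    """The three accountability layers, made explicit per agent.
    Failing at construction time keeps ownership gaps from going unnoticed."""
    agent_id: str
    design_owner: str      # layer 1: scope, tool access, behavioral constraints
    deployment_owner: str  # layer 2: approval, testing, rollback
    outcome_owner: str     # layer 3: monitoring, escalation, autonomy decisions

    def __post_init__(self):
        for layer, owner in [("design", self.design_owner),
                             ("deployment", self.deployment_owner),
                             ("outcome", self.outcome_owner)]:
            if not owner:
                raise ValueError(f"{self.agent_id}: no owner for {layer} layer")

# A fully owned agent registers cleanly; a gap raises before deployment.
billing = AgentOwnership("billing-agent", "alice", "bob", "carol")
```

Whether this lives in code, a deployment manifest, or a spreadsheet matters less than the property it enforces: the accountability assignment exists, is written down, and was agreed before anything broke.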

A Governance Audit Worth Running

These are the questions I find most revealing when I work with CTOs on this. They are worth sitting with honestly rather than answering quickly:

  • If your most consequential agent made a significant error today, could you immediately reconstruct exactly what it decided, what information it acted on, and which tool calls it made? If not, you do not yet have the observability layer that governance requires.
  • Who in your organization is accountable for each agent’s design, deployment, and outcomes? Are those accountability assignments written down and agreed across engineering and product, or do they exist only as assumptions that have never been tested?
  • Have you applied least-privilege principles to your agents’ tool access? Does each agent have only the access it requires for its current task, or has tool access grown incrementally because restricting it felt like friction?
  • Where in your agentic workflows does human review add genuine value, as opposed to the appearance of oversight? Have you designed those checkpoints deliberately, or did they emerge as a default response to governance anxiety?
  • What governance metrics are you tracking alongside your engineering metrics? Do you know your agents’ decision quality, escalation rate, retry frequency, and cost per completed workflow? Or are you only looking at throughput and latency?

Governance Is Also a People Problem

One dimension of agentic AI governance that rarely shows up in formal frameworks is the human aspect. As agents assume more of the execution work, engineering roles are being redefined. Engineers who once wrote the code are now designing systems that orchestrate agents to do that work. That is a different role, one that demands stronger judgment, broader technical insight, and a clearer understanding of where risk actually concentrates.

This shift from writing code to AI orchestration does not always happen smoothly. Engineers who have built careers around hands-on implementation are being asked to think at a higher level of abstraction: defining guardrails, validating outputs, and designing exception paths rather than building logic from scratch. Some adapt quickly, but others find it disorienting. The governance framework a CTO builds needs to account for this transition, not assume that the team already knows how to think in these terms.

The CTOs navigating this well treat it as a team structure and culture challenge alongside a technical one. That means investing in how engineers develop judgment about agent behavior, not just skills in agent tooling. It means building review processes that help engineers develop intuition about where agents fail gracefully and where they fail badly. It also means making space for the more deliberate thinking that good governance design requires, even when shipping pressure pushes in the opposite direction.

The CTOs who get agentic AI governance right understand that governance and velocity are not in tension. Governance is what makes velocity sustainable. Provenance, accountability, and policy enforcement are not constraints on what agents can do. They are the infrastructure that allows organizations to trust what agents do well enough to give them more responsibility over time. If you are navigating this and finding that the standard playbooks don’t quite map to the complexity you’re dealing with in practice, you can explore how Hoola Hoop approaches these challenges in more depth here.

Ready to talk about CTO coaching with Leigh?

Book a 30-minute introductory call to explore whether coaching is right for you.

Book a meeting with Leigh →


Leigh Newsome

Partner, Hoola Hoop · CTO Coach & Advisor

Leigh Newsome is a Partner at Hoola Hoop and a CTO coach and advisor with 25 years of experience scaling product and engineering teams. He has worked with a wide range of startups and global enterprises, including Avid, Digidesign, WPP, and Kantar/Millward Brown, and successfully led TargetSpot (backed by Union Square Ventures, Bain Capital Ventures, and CBS) through its acquisition to Radionomy Group (Vivendi). When he’s not coaching CTOs, you’ll find him teaching digital audio to graduate students at NYU, building audio and signal processing applications, or flying fixed-wing aircraft, but never all three at once.


AI ROI Board Pressure: What Boards Want To Hear

The AI ROI Pressure Point.

The conversation has shifted. Most CTOs are not struggling to invest in AI; they are struggling to account for it. Boards that spent 2024 asking “what’s your AI strategy?” are now asking “what did it cost, what did it return, and how do you know?” Those are different questions, and most technology leaders are less prepared for them than they realize.

The AI ROI board conversation is now one of the defining pressure points for CTOs and CPOs in 2026. According to Kyndryl’s 2025 Readiness Report, which surveyed 3,700 senior business leaders, 61% say they feel more pressure to prove AI ROI now than they did a year ago. That number does not surprise me. What surprises me is how many technology leaders are still walking into that conversation underprepared, armed with metrics that felt compelling twelve months ago and now fall flat the moment a board member asks what it means in revenue terms. To help address this, we’ve guided CTOs and CPOs across our portfolio and hosted two CTO Roundtables in March 2026, titled “CTO: Off The Record – What We’re Not Saying About AI,” providing a forum to explore the challenges and best practices for framing AI ROI.

The frustration being voiced in CTO communities right now is specific: technology leaders who made thoughtful, responsible AI investments during the 2023-2025 buildout period are now being asked to retroactively justify those decisions in financial language they were never tracking. The goalposts moved. And for many, the conversation feels unfair because the technical work was genuinely good.

But boards are not wrong to push. The AI ROI board dynamic reflects a real and legitimate shift in expectations. AI is no longer a speculative bet on future capability. It’s a significant line item in operational budgets, and boards are right to expect that spending to connect to outcomes. The question is how CTOs and CPOs build that case, credibly and on their terms, rather than having the conversation forced on them in a format they didn’t design.

Why the Old Metrics No Longer Work For AI ROI

When AI adoption was in its early stages, organizations tracked what they could measure: the percentage of engineers using Copilot, the number of AI features shipped, the volume of user interactions with AI-powered capabilities. These metrics made sense at the time. They showed momentum. They demonstrated that the organization was moving.

Boards and investors are done with momentum metrics. The AI ROI board expectation in 2026 is near-term business outcomes: revenue impact, cost reduction, and cycle time compression. CFOs, not chief AI officers, are increasingly being positioned as the accountability layer for AI returns, which means CTOs and CPOs now find themselves in conversations that require a different kind of fluency. The shift is not cosmetic. It requires a fundamentally different way of thinking about how AI investment gets measured and reported.

The Hidden Cost Problem in AI ROI

There is a specific version of the AI ROI board challenge that is hitting CTOs right now, and it deserves its own treatment. The cost side of the AI ROI equation is moving. Not incrementally, but significantly and repeatedly. Tools like Cursor, Claude Code, and GitHub Copilot have all shifted pricing models in the past twelve months, and the shift from seat-based licensing to usage-based billing has caught a large number of engineering budgets off guard.

The dynamic is this: a CTO budgets for a fixed number of AI tool seat licenses, a predictable, defensible number to put in front of a CFO. Then agentic usage scales up. An engineer running a long Claude Code session or a Cursor agent working overnight isn’t consuming a seat license. They’re running up a consumption bill. The cost structure of these tools is fundamentally different from traditional software licensing, and many organizations discovered that difference in their quarterly cloud and SaaS reconciliation rather than during budget planning.

This creates a specific AI ROI board problem. You cannot build a credible ROI case when the cost baseline keeps moving. Boards and CFOs are reasonable to question a return calculation built on a cost denominator that looked very different six months ago and may look different again in six months. The CTOs who are navigating this well are doing two things: they’re tracking AI tooling costs at the consumption level, not just the license level, and they’re building their ROI narrative with explicit assumptions about cost trajectory rather than treating current spend as a stable baseline. Acknowledging the volatility, with a clear framework for monitoring it, builds more credibility than presenting a clean number that a CFO can easily challenge.
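A minimal way to make that volatility visible is to reconcile the seat-based budget against actual metered consumption each period. The sketch below assumes you can export per-session charges from each vendor's usage dashboard or billing API; the function name and fields are illustrative.

```python
def tooling_cost_variance(seat_budget_usd: float,
                          usage_events_usd: list[float]) -> dict[str, float]:
    """Compare budgeted seat spend against actual metered consumption.

    usage_events_usd: per-session metered charges for the period
    (hypothetical export from a vendor's billing data; formats vary).
    """
    actual = sum(usage_events_usd)
    return {
        "budgeted": seat_budget_usd,
        "actual": actual,
        "overrun_pct": (actual - seat_budget_usd) / seat_budget_usd * 100,
    }
```

Reviewing that overrun percentage monthly, per tool, is what turns "our AI spend surprised us at reconciliation" into an explicit cost-trajectory assumption you can put in front of a CFO.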

Five AI ROI Board Patterns That Are Holding CTOs Back

Across the CTOs and CPOs I coach, I see the same patterns emerging when the AI ROI board conversation goes poorly. Recognizing yours is the starting point:

01
The Vanity Metrics Trap
Adoption rates, AI feature usage, and “percentage of codebase generated by AI” are all inputs, not outcomes. Boards don’t buy inputs. When a board member asks what the AI investment returned, answering with an adoption number signals that you haven’t connected the investment to value creation. It doesn’t build confidence. It raises more questions.
02
The R&D Budget Burial
Many organizations buried AI investment inside generic R&D budgets during the buildout years. That worked when AI was exploratory. It makes the ROI conversation nearly impossible now, because the cost side is invisible and the attribution is murky. CTOs who cannot isolate AI spend from general R&D cannot demonstrate AI returns with any credibility.
03
The Retroactive Justification Problem
Many technology leaders are being asked to prove ROI on investments they made without setting up the measurement infrastructure to capture it. Real velocity improvements, cost savings, and cycle time reductions were never tracked in a way that connects to the P&L. Reconstructing the narrative after the fact is hard, and it shows.
04
The Three-Horizons Blind Spot
Boards are increasingly asking technology leaders to balance near-term AI efficiency gains with longer-term structural transformation, simultaneously, with the same team. Many CTOs frame AI ROI as either short-term productivity or long-term competitive positioning. The answer boards want is both, with a clear narrative connecting them.
05
The Missing Language Bridge
Technical leaders default to technical language: deployment frequency, model accuracy, engineering throughput. Board members think in revenue, margin, and market position. When there’s no bridge between those languages, even strong results get lost. The AI ROI board conversation lives or dies on whether the CTO or CPO can translate technical outcomes into financial ones without losing nuance.

The Common Thread

Every one of these patterns has the same root cause: AI investment was treated as a technology program when it needed to be treated as a business investment from day one. That doesn’t mean the technical work was wrong. It means the measurement and narrative infrastructure wasn’t built alongside it. The good news is that building that infrastructure now, even retroactively, is possible, and it changes the AI ROI board conversation significantly.

What Good AI ROI Board Preparation Looks Like

The CTOs I’ve seen handle the AI ROI board conversation well share a set of practices that distinguish them. None of these are complicated. All of them require doing the work before you walk into the room:

📊
Give AI investment its own P&L line
The single most important structural change you can make is isolating AI spend so it’s visible and attributable. This does not have to mean a separate budget process. It means tagging AI-related costs consistently, so that when you build the ROI narrative, the cost side is credible. Without a visible cost baseline, the return conversation is purely directional and boards will push back.
🎯
Lead with near-term business outcomes
Revenue impact, cost reduction, and cycle time compression are the metrics boards find credible in 2026. If you have data on any of these, lead with it. If you don’t have direct P&L attribution, proxy metrics that connect clearly to business outcomes, such as time-to-market for product features or support resolution rates, are far more credible than engagement or adoption numbers.
🤝
Partner with your CFO before the board does
The biggest mistake I see is CTOs preparing the AI ROI narrative in isolation and presenting it without CFO alignment. CFOs are now a primary accountability layer for AI returns. A board member who hears an ROI claim from a CTO and cannot verify it with the CFO will leave the room skeptical. Building the narrative together, with the CFO as a co-presenter or at minimum a visible supporter, changes the credibility dynamic entirely.
🔭
Own the three-horizons narrative explicitly
The most effective AI ROI board presentations I’ve seen name the three horizons deliberately: here is what AI returned in the last 12 months (efficiency), here is what it will return in the next 12 (structural improvement), and here is the longer-term competitive positioning story. Boards that receive all three, clearly separated and honestly framed, are significantly more comfortable than boards asked to accept a single undifferentiated “AI is working” narrative.
🗣️
Translate technical outcomes into financial language
Build an explicit translation layer between your technical metrics and the financial language your board uses. Engineering throughput up 40% is a technical metric. Translating that into reduced contractor spend, faster time-to-market for revenue-generating features, or reduced support load is a financial one. The translation is not always clean, but the discipline of attempting it, and being honest about where the connection is indirect, builds more credibility than leaving it implicit.

The Underlying Principle

The AI ROI board conversation is fundamentally a trust conversation. Boards trust technology leaders who demonstrate that they understand the business implications of their investment decisions, who track what matters to the business rather than what’s easy to measure, and who can hold uncertainty honestly while still giving the board a clear enough picture to make decisions. Technical credibility gets you into the room. Business fluency keeps you there.

Boards don’t need CTOs to be CFOs. They need CTOs who can speak both languages fluently enough to make the translation visible, the assumptions honest, and the direction clear.

The CFO Partnership: Your Most Underused Asset

One pattern stands out as particularly underused among the CTOs I coach. The CFO-CTO relationship around AI has historically been adversarial: the CTO advocates for investment, the CFO scrutinizes the returns, and the conversation happens at budget time. That dynamic is shifting, and the CTOs who recognize it early are building a significant advantage in the AI ROI board conversation.

CFOs are now being positioned, internally and by their boards, as the primary accountability owners for AI returns. That’s a significant shift. It means CFOs have both the mandate and the organizational standing to be genuine allies in building the ROI narrative. A CTO who treats the CFO as a gatekeeper to be managed is missing the opportunity. A CTO who brings the CFO in as a partner in designing the measurement framework, building the ROI narrative, and presenting to the board is creating shared accountability that benefits both parties.

The practical entry point is straightforward. Before your next board preparation cycle, set up a 90-minute working session with your CFO specifically focused on the AI ROI measurement question. Agree on what the relevant business outcomes are, which metrics you can attribute directly versus indirectly, and how you want to frame the three-horizons narrative together. Do this well before the board meeting, so you’re not negotiating the framing under time pressure. The CFO who walks into a board meeting having co-built the narrative is a very different ally than the CFO who sees the slides for the first time in the briefing.

Questions to Sit With

If you’re a CTO or CPO heading into an AI ROI board conversation in the next quarter, these are the questions worth working through honestly before you walk in:

  • Can you isolate your AI investment on its own cost line, clearly enough that a board member could verify the number with your CFO and get a consistent answer?
  • Are your current AI metrics connected to business outcomes, such as revenue, cost reduction, or cycle time, or are they primarily adoption and usage metrics that measure input rather than impact?
  • Does your CFO know your AI ROI narrative as well as you do? Could they present the financial side of it credibly without you in the room?
  • Have you separated the three horizons explicitly: near-term efficiency returns, structural improvement in the next 12 months, and longer-term competitive positioning? Or are you presenting them as a single undifferentiated story?
  • When a board member asks what the AI investment returned, can you answer in a way that a CFO would sign off on? If not, what would need to change in your measurement or framing to get there?

A Final Thought

The AI ROI board conversation is not going to get easier. As AI spending grows and board scrutiny increases, the expectation that technology leaders can account for their AI investments in business terms will only intensify. The CTOs who build that fluency now, before it becomes a crisis conversation, will have a significant advantage over those who continue to hope that momentum metrics will carry them through.

What I find most consistently true, coaching technology leaders through this, is that the ROI conversation is rarely the problem. The problem is the infrastructure behind it: the measurement systems, the CFO relationship, the ability to translate between technical and financial language. That infrastructure was never built, and building it is not a board-meeting sprint. It’s a discipline that develops over quarters.

The same clarity that helps you navigate the AI ROI board conversation also strengthens how you manage upward across the organization. If you’re thinking about how to build stronger executive relationships alongside your financial fluency, the principles in Managing Up as a CTO or CPO are directly relevant. The skills compound. And the CTOs who develop both tend to find the board conversation stops feeling like a threat and starts feeling like an opportunity.


Managing Up: How CTOs and CPOs Build Trust with Their CEO

What Your CEO Actually Needs From You.

Managing up is the skill most CTOs and CPOs never got taught. You’re good at building teams, shipping product, and navigating technical complexity. The relationship with your CEO is a different kind of problem, and quietly, it’s where some of the most capable technical leaders I coach and advise come unstuck.

For CTOs and CPOs, managing up to the CEO is often the leadership skill they were least prepared for. The problem rarely looks like a problem at first. You’re delivering. Your team is shipping. The CEO seems satisfied. And then, often suddenly, something shifts. You find yourself out of sync. Decisions begin to happen without your input, with your CEO working around or beneath your role to move things forward. Your strategic proposals don’t land the way you expected. The CEO seems to be operating from assumptions about your team that aren’t accurate, and you’re not sure how that happened.

What I’ve observed, across a lot of these relationships, is that the breakdown rarely starts with a single incident. It accumulates through a pattern of missed expectations, ones the CTO or CPO didn’t even know existed.

Why Managing Up to the CEO Is Hard for CTOs and CPOs

The CTO and CPO roles sit in a uniquely difficult position when it comes to managing up. Unlike a CFO or CMO, whose domains are reasonably legible to most CEOs, your world, whether that’s engineering architecture, technical debt, or product strategy, is opaque in ways that matter. The CEO can’t easily assess whether your team is performing well, whether the risks you’re describing are serious, or whether the timeline you’ve committed to is realistic.

That opacity creates a relationship that depends almost entirely on trust. And trust, in this context, relies on a very specific kind of communication: the ability to translate what’s happening in your world into language that connects to what the CEO cares about most.

Most CTOs and CPOs are never taught how to do that translation well. So they default to what they know: status updates, technical briefings, roadmap reviews. These feel thorough, but they often miss the point entirely.

Five Patterns That Erode the Relationship

These are the patterns I see most consistently when CTOs and CPOs struggle with managing up to their CEO:

01
The Translation Failure
You’re reporting on what your team is doing rather than what it means. Your CEO is left to connect the dots between “we’re refactoring the data layer” and what that implies for the product roadmap, the Q3 targets, or the board presentation. When they draw the wrong conclusions, it’s not because they’re not smart. It’s because the translation was your job, and it didn’t happen.
02
The Optimism Trap
The instinct to fix a problem before raising it feels responsible. To a CEO, it looks like concealment. When risk surfaces late, it’s almost always more expensive to address, and the CEO’s trust takes a hit that has nothing to do with the problem itself and everything to do with the timing of when they found out. They needed to know earlier, even without a solution in hand.
03
The Commitment Trap
You’re asked “when will it be done?” in a planning meeting. You give a number under pressure. That number becomes a commitment the CEO holds, often for much longer than you intended, and the CEO cites it in board conversations you weren’t part of. The skill isn’t refusing to answer. It’s giving a confident, bounded answer that reflects genuine uncertainty without sounding evasive.
04
The Reactive Cadence
You communicate upward when something goes wrong, when a decision is needed, or when a 1:1 is scheduled. Your CEO builds their mental model of your work from these irregular, often high-stakes interactions. You’re not managing the relationship. You’re responding to it. And a mental model formed from crises and deadlines will always look worse than the reality.
05
The Invisible Constraints
You know exactly why a decision made eighteen months ago is limiting your options today. You know where the technical debt lives and what it will cost to address. Your CEO doesn’t. And if you haven’t proactively shared that context, you’ll find yourself defending decisions that seem inexplicable from the outside, often at the worst possible moment.

The Common Thread

None of these patterns stem from incompetence or bad intent. They develop because most CTOs and CPOs are never given a map for managing up. The encouraging thing is that all five are addressable, not through dramatic behavior change, but through more deliberate communication habits applied consistently.

What Strong Managing Up Looks Like for CTOs and CPOs

The CTOs and CPOs who excel at managing up to their CEO aren’t necessarily the ones doing the best technical work. They’re the ones who’ve developed a specific set of communication habits that make their work legible, their risks visible, and their judgment trustworthy.

🎯
Lead with the decision, not the status
Every briefing with your CEO should start with one of two things: the decision you need them to make, or the business implication of what you’re sharing. Not the technical detail, not the team update, not the feature list. If you can’t articulate the decision or implication, ask yourself whether this meeting is necessary. The CEO’s job is to make good decisions, and your job is to make that easy.
“If your CEO has to ask ‘so what does that mean for us?’ after your update, the translation didn’t happen.”
⚠️
Surface risk before you have a solution
The rule I coach: if you know about a meaningful risk, your CEO should know within 24 hours, even if you don’t yet know how you’re going to address it. “I don’t know yet, but here’s what I’m doing to find out” is a legitimate update. “I was waiting until I had an answer” is not. CEOs can absorb uncertainty. What erodes trust is the feeling that you withheld information. It’s worth being honest about why CTOs and CPOs wait: the instinct often has a fear component, specifically fear of appearing unprepared, or of alarming the CEO unnecessarily. Recognizing and working through that instinct is part of what it means to lead with real courage. We explore this at length in our Courage to Lead series.
📣
Own the narrative before it owns you
Your CEO is forming a view of your organization from many sources: your direct conversations, what other executives say, what the board asks about, what they read. If you’re not actively shaping that narrative, others shape it for you. A brief weekly written update, two or three tight paragraphs, gives you a consistent stake in how your CEO understands your team’s work. It doesn’t need to be long. It needs to be regular.
🔄
Build a consistent rhythm
The strongest CTO/CPO-CEO relationships I’ve seen are built on predictable, structured cadences: a weekly written update, a monthly strategic conversation focused on direction rather than status, and a thoughtful contribution to the quarterly business review. The rhythm itself builds trust. When the CEO knows what to expect and when, your relationship becomes a source of stability rather than a variable they have to monitor.
🧭
Understand your CEO’s actual priorities
What are your CEO’s three biggest concerns right now? What did they promise the board last quarter? What keeps them up at night? Your job is to make the connection between your team’s work and those things visible and explicit, not to wait for the CEO to draw the line themselves. This is particularly important as agentic development changes what engineering teams can deliver, and the CEO needs to understand that shift in terms of business opportunity, not technical capability.

The Underlying Principle

Across all five habits, there’s a consistent thread. The CTO or CPO who manages up well isn’t trying to impress their CEO. They’re trying to make the CEO’s job easier. That shift in intent changes almost everything about how you communicate upward.

Your CEO will never fully understand your world. But you can fully understand theirs. That asymmetry, when you lean into it, is where the strongest CTO/CPO-CEO relationships take root.

A Note on Timelines and Commitments

The timeline question deserves its own attention because it’s where managing up most often breaks down for CTOs and CPOs. “When will it be done?” is one of the most loaded questions a CEO can ask, and almost nobody answers it well.

The wrong answer is false precision: a specific date given under pressure that becomes an anchor in the CEO’s mind, surfaces in board conversations, and lingers long after you’ve forgotten you ever said it. Evasion is no better: “it depends,” “we’re still scoping it,” or an answer so hedged it communicates nothing.

The right answer is confident uncertainty. Something like: “Based on what we know today, we’re targeting the end of Q2. The biggest risk to that is X, and here’s how we’re managing it. I’ll give you an updated view in three weeks when we know more.” This gives the CEO what they actually need: a planning horizon, the key risk, and a date when they’ll have better information. It also models the kind of thinking that builds credibility over time.

The same principle applies to scope. When a CEO asks “can we add this feature?” the answer almost always involves a trade-off. The CTO or CPO who says “yes, and here’s what moves” is far more useful than the one who says “it’s complicated” or, worse, says yes and quietly absorbs the cost into the team.

Questions to Sit With

If you’re a CTO or CPO thinking honestly about managing up to your CEO, these are worth working through:

  • If your CEO had to describe to the board what your team accomplished last quarter, what would they say? Is that what you would say? If there’s a gap, that’s a communication gap, not a performance gap.
  • When did you last raise a meaningful risk to your CEO before you had a plan to address it? If you struggle to find an example, consider what that pattern is costing you in terms of trust.
  • Do you have a consistent, self-initiated cadence for the CEO relationship, or does most of your upward communication happen reactively, when something is needed or something has gone wrong?
  • Does your CEO understand the key constraints your team is operating under, such as the architectural decisions, the accumulated technical debt, and the hiring gaps, or would those feel like surprises if they came up in a board conversation?
  • What does your CEO actually care most about right now? Not what they say in 1:1s, but the things driving their decisions. How explicitly does your work connect to those things in the way you communicate it?

A Final Thought

Managing up well doesn’t mean being political. It doesn’t mean softening hard truths or packaging bad news attractively. It means developing the discipline to communicate your world in terms that are useful to the person you’re communicating with. And it takes real courage to do it consistently: to raise the uncomfortable thing early, to push back on the unrealistic expectation, to say “I don’t know yet” to someone you want to inspire confidence in. If that’s a dimension you want to explore further, our Courage to Lead series goes deep on exactly that.

Your CEO is making decisions about the whole business, often with incomplete information and real time pressure. The more you can make your piece of the business legible to them, the better those decisions get. That’s good for them, good for your team, and ultimately good for you.

The CTOs and CPOs who build strong upward relationships aren’t the ones who have the smoothest delivery or the most polished presentations. They’re the ones who understand that their job doesn’t stop at the boundary of their own organization, and who invest in the relationship with the same seriousness they bring to everything else.

Ready to talk about CTO coaching with Leigh?

Book a 30-minute introductory call to explore whether coaching is right for you.

Book a meeting with Leigh β†’

Leigh Newsome - CTO Coach

Leigh Newsome

Partner, Hoola Hoop Β· CTO & CPO Coach

Leigh Newsome is a Partner at Hoola Hoop and a CTO & CPO coach with 25 years of experience scaling product and engineering teams. He has worked with a wide range of startups and global enterprises, including Avid, Digidesign, WPP, and Kantar/Millward Brown, and successfully led TargetSpot (backed by Union Square Ventures, Bain Capital Ventures, and CBS) through its acquisition to Radionomy Group (Vivendi). When he’s not coaching CTOs, you’ll find him teaching digital audio to graduate students at NYU, building audio and signal processing applications, or flying fixed-wing aircraft, but never all three at once.


The Agentic SDLC: A CTO’s Guide

From SDLC to Agentic SDLC.

I’ve lived through a lot of process evolutions. The move to agentic development is different in kind, not just degree. It’s changing what it means to lead an engineering organization altogether.

CTOs aren’t asking “should we use AI?” anymore. That debate is over. They’re asking: how do we rebuild our development process around it and how do I need to lead differently?

This article is my attempt to answer that.

What Traditional SDLC Was Built For

The Software Development Lifecycle (requirements, design, development, testing, deployment, feedback) was architected around a fundamental constraint: humans are the only ones who can do the work.

That constraint shaped everything. Sequential handoffs existed because one person or team needed to finish before the next could start. Sprints existed to timebox human capacity and velocity. QA came after development because writing tests and writing code at the same time was too expensive. Code reviews were async because engineers couldn’t be in two places at once.

We built an entire system of process around human throughput limitations. And for decades, it worked.

What’s Changed

Agentic AI tools like Claude Code, Cursor, Copilot, and Devin are changing more than individual developer speed. They’re collapsing and reinventing the handoffs between stages entirely. Here’s what I’m seeing as a CTO and in the organizations we advise:

01
Requirements
What used to take weeks of grooming sessions is being drafted, refined, and structured by AI in hours. Engineers are parsing business goals and generating user stories directly, rather than waiting for a PM to hand off a spec. PMs who embrace this shift move from writing specs to shaping strategy; those who don’t will find their role increasingly redundant.
02
Design & Architecture
What used to require a senior architect and days of whiteboarding can now produce multiple candidate architectures with trade-off analysis in a single session. The human job shifts from creating the design to evaluating and deciding between options.
03
Development
Developers are increasingly writing intent and describing what they want to build, while agents scaffold, implement, and iterate. The developer reviews, steers, and catches failure modes. They’re not gone from the process, but operating at a higher altitude. And frankly, I think that makes them more valuable, not less.
04
Testing
Testing is no longer a phase that happens after the code is written. Agents generate and run tests during development, flag regressions in real time, and maintain test coverage as the codebase evolves. The idea of a separate QA phase is becoming an artifact of the old model.
05
Deployment & Operations
Agents monitor, alert, and in some cases remediate failures without waking anyone up at 2am. The human role shifts from routine incident response to designing the guardrails that determine when and how agents escalate.

What This Means for CTOs

The process implications are real, but the leadership implications are even bigger.

πŸ—οΈ
Your org chart hasn’t caught up
The traditional split between engineers, PMs, and QA, or between engineers and architects, is eroding fast. If your team structure still reflects a handoff model, you’re adding coordination overhead that no longer serves a purpose. My prediction: Product, Engineering, and Design will ultimately converge into a single unified team.
🎯
The bottleneck has moved
In traditional SDLC, the bottleneck was development throughput: not enough engineers, not enough time. In agentic SDLC, the bottleneck is increasingly decision quality. Are we building the right thing? Is this architecture sound? The humans in the loop need to be excellent at judgment, not just execution.
⚠️
New failure modes require new oversight
Agents fail differently than humans: typically confidently, sometimes quietly, and at scale. They can produce code that looks right and passes tests, yet introduces errors or debt that surface weeks later.
πŸ”²
“Done” is harder to define
In a world where an agent can keep iterating indefinitely, knowing when to stop and ship is a real skill. Scope discipline becomes more important, not less.
πŸ“‹
The governance question is unavoidable
What decisions can an agent make autonomously? What requires human approval? What gets logged and audited? Who is the engineer responsible for the code in production? These aren’t theoretical questions but operational ones every engineering leader needs to answer before they can responsibly scale agentic workflows.
“Ever received a PR from your CEO who’s decided they’re now an engineer via Claude Code? That governance conversation is happening in more organizations than you’d think.”

What Doesn’t Change

I want to be direct about this, because I think there’s a real risk of CTOs either over-correcting or under-responding to this shift.

The human things, specifically the leadership things, DO NOT go away. They become more important.

Your job as a leader is to create the conditions where higher-level thinking can actually happen, which means protecting your best people from being buried in review queues of AI-generated code.

Knowing which problem to solve, and why now, is still yours. AI can accelerate execution with extraordinary speed. It cannot set direction. Strategic judgment, customer empathy, navigating organizational ambiguity, building a team that trusts each other. None of that is on the automation roadmap.

What is changing is the ratio. More of the execution layer is being handled by agents, which means the humans in the loop need to be operating at a higher level: not just reviewing code, but shaping outcomes. The exceptional engineers on your team will thrive in this environment. Those who relied primarily on execution throughput will find the transition harder.

The bottleneck in agentic SDLC isn’t development throughput; it’s decision quality. The humans in the loop need to be excellent at judgment, not just execution.

Questions to Sit With

If you’re a CTO thinking through what this means for your organization, these are the questions I’d encourage you to work through:

  • Does your development process still have handoffs that exist because of human throughput constraints rather than because the handoff itself adds value?
  • Where is your human review layer, and is it optimized for catching agent failure modes rather than traditional human ones?
  • Are your engineers spending more time on judgment, direction, and evaluation… or are they still primarily in execution mode?
  • How are you communicating the implications of agentic development to your CEO and board, and are you bringing them along before the organizational changes become visible?
  • What does “done” mean in your current process, and does that definition still hold in an environment where iteration is nearly free?

A Final Thought

The CTOs who navigate this transition well aren’t the ones who adopt every new tool fastest. They’re the ones who understand what the tools change (about process, about team structure, about where human judgment is irreplaceable) and who redesign their organizations around that understanding.

That’s not a tooling or process problem. It’s a leadership problem.


CTO Coaching: A Guide for Leaders

I’ve spent 25 years scaling product and engineering teams, and one thing I’ve learned is that the hardest part of being a CTO is not the technology. For most CTOs and engineering leaders I know and have worked with, it’s not technical competence that holds them back. It’s the leadership aspects of the job that challenge them. The role demands that you set technical vision, build and scale engineering teams, navigate AI adoption, manage board and investor relationships, and drive product strategy, all at once.

That’s exactly why I do what I do. And it’s what CTO coaching is for.

What is CTO Coaching?

CTO coaching is a structured working relationship between a technology executive and an experienced coach, designed to help the CTO grow as a leader, make better decisions, and perform more effectively in their role.

It’s very different from consulting. A consultant gives you answers. As a CTO coach, I help you develop the skills, self-awareness, and judgment to find better answers yourself.

My approach is grounded in 25 years of hands-on experience as a technology and product executive β€” having served in CTO, CPTO, and CEO roles across a range of growth-stage companies. What I value most about the team I work with at Hoola Hoop is that every partner and coach brings that same perspective. We’re all former operators who have navigated exactly the challenges our clients face. Not theorists. Practitioners. The guidance is concrete because the experience is real.

Who Needs CTO Coaching?

In my experience, CTO coaching is highly valuable at multiple career stages, but it tends to be especially impactful in a few specific situations.

01
The Transition to CTO
The first is the transition from engineer or VP of Engineering to CTO. This is one of the hardest professional shifts in tech, and I see most leaders struggle with it. The skills that made you a great engineer, such as technical problem-solving and individual execution, are not the same skills that make a highly impactful CTO. Coaching helps accelerate that transition, grow your leadership, and avoid common pitfalls.
02
Company Growth
The second is company growth. When a startup scales from 20 to 200 people, the CTO’s job changes dramatically. What worked at one stage breaks at the next. I’ve been through those inflection points myself; CTO coaching provides a sounding board for navigating them in real time.
03
Friction & Conflict
The third is when you are experiencing friction with the CEO, the product team, the board, or your own engineering organization. These dynamics are rarely just technical. As a CTO Coach, I help you understand what’s really going on and how to address it.

Areas CTO Coaching Addresses

Effective CTO coaching covers both the technical leadership dimension and the human side of the role. In my work with technology leaders, these are some areas that come up consistently:

🎯
Technical Strategy & Vision
Helping CTOs articulate a clear technical roadmap that aligns with business goals, make sound architecture decisions, and communicate technical trade-offs to non-technical stakeholders in a way that builds trust.
πŸ‘₯
Building & Scaling Engineering Teams
Hiring, developing, and retaining strong technical talent. I work with CTOs on how to build a high-performance culture, develop technical managers, and structure their engineering organization for scale.
πŸ€–
AI Adoption & Innovation
Today’s CTOs are under significant pressure to integrate AI into their products and processes. I help you think through AI strategy clearly β€” what to build, what to buy, and how to lead your teams through the change.
🎀
Executive Presence & Influence
CTOs frequently need to advocate for technical investments to a CEO, board, or investors who may not have a technical background. CTO coaching builds the communication skills and executive presence to do this effectively.
🀝
Cross-functional Leadership
The relationship between product and engineering is one of the most important β€” and most frequently strained β€” dynamics in a growth-stage company. CTO coaching helps leaders build stronger working relationships across the C-suite.
πŸ›οΈ
Managing Up & Board Relationships
As companies scale, CTOs increasingly interact with boards and investors. I prepare technology leaders for these conversations and help them navigate the dynamics involved, including the ones nobody warns you about.

What Makes A Good CTO Coach?

Not all coaches are created equal. Here’s what I’d tell any leader looking for a CTO coach:

  • Real operating experience in technology leadership First and most importantly, look for a coach with real operating experience in technology leadership. A coach who has never scaled an engineering team, managed technical debt under growth pressure, or navigated a difficult CTO-CEO dynamic will struggle to give you relevant, credible guidance. I’ve spent decades doing exactly that, and it’s the foundation of every coaching relationship I have.
  • Someone who asks great questions, not just dispenses advice Second, look for someone who asks good questions rather than just dispensing advice. The best coaching unlocks your own thinking. A coach who just tells you what to do creates dependency; you want someone who builds your capacity to think through hard problems independently.
  • Honest and willing to challenge you Third, look for a CTO coach who will be honest with you and challenge you. As a CTO, you often don’t get candid feedback from your teams or peers. A good CTO coach will tell you what you need to hear, not just what you want to hear.

What to Expect from CTO Coaching

My coaching engagements typically involve regular one-on-one sessions, usually weekly or bi-weekly, focused on whatever is most pressing for you at that moment and on the goals we have set together. Sessions are confidential, which creates the space for the kind of honest conversation that’s hard to have with a direct report, peer, or investor.

I often start with an interview-based 360 review, speaking directly with your peers, direct reports, and CEO. This helps build an honest, multi-perspective picture of where you are excelling and where the real development opportunities lie. Then we define 3 to 5 goals to work on. This gives our CTO coaching sessions a grounded starting point rather than relying solely on self-assessment.

From there, coaching sessions evolve to address real-time issues as they arise, such as team performance challenges, a board presentation, technology decisions, a team restructure, or a conflict with the CPO or CEO. I also facilitate CTO leadership roundtables, bringing together technology leaders from across our client base and portfolio to share experiences, challenge each other’s thinking, and learn from peers who are navigating similar inflection points. Many CTOs find these peer sessions as valuable as the one-on-one coaching itself.

A great CTO coach is the trusted advisor you can call when you’re facing tough decisions and need someone in your corner who has seen it before.


AI Is Reshaping the CTO and CPO Role: What Tech Leaders Need to Know

In 25 years of working in and around technology leadership, I’ve watched a lot of shifts and coached many CTOs and CPOs. But how AI is changing the CTO and CPO role feels different from anything I’ve seen before. It’s not just in how software gets built, but in what it means to lead a technology organization.

The boundary between product and engineering is dissolving fast, and for CTOs and CPOs, the more consequential change isn’t in the tooling. It’s in how you structure your teams, define roles, and make strategic decisions.

This is what I’m seeing on the ground with the technology leaders I coach, and what I think every CTO and CPO needs to be thinking about right now.

The Blurring Line Between Product and Engineering

For decades, software teams operated on a handoff model: product managers defined what to build, engineers figured out how to build it, and the two disciplines met somewhere in the middle, usually in a doc or a Jira backlog. That model is breaking down, and it’s happening now.

I’m seeing product leaders build functional prototypes using tools like Cursor, Lovable, and Claude Code, and submit pull requests directly to engineering repositories. Engineers, meanwhile, are shaping UX decisions, architectural strategy, and product direction from day one, not after requirements are handed over.

AI isn’t just making each individual faster. It’s eliminating the friction between roles entirely, turning sequential handoffs into continuous, shared problem-solving. As a CTO or CPO, this changes what you need to lead, how you design your organization, and what you hire for. The leaders who recognize this early will have a significant advantage.

What’s Emerging Right Now

Three structural shifts are already visible in forward-thinking tech organizations:

πŸ”€
Hybrid roles are becoming the norm.
The same person is defining the problem and implementing the solution, with AI serving as a force multiplier at every step. For CTOs and CPOs, this means rethinking job architecture, career ladders, and how performance is evaluated. I’m having this conversation with nearly every tech leader I coach right now.
🎯
Teams are organizing around outcomes, not titles.
The question is no longer “who writes the code?” versus “who writes the spec?” It’s about decision-making speed and customer impact, which puts new pressure on tech leaders to create clarity without rigid structure.
🀝
Stand-up culture is changing.
The most productive teams I see are already asking “What did you and your AI agents ship together?”, which is a fundamentally different question than “What did you do yesterday?” Leading these teams requires a different kind of presence and a different set of management skills.

What Won’t Change

Amid all this disruption, some things remain irreplaceable, and these are exactly the areas where exceptional CTOs and CPOs create disproportionate value. I want to be direct about this, because I think there’s a risk of tech leaders undervaluing what makes them most effective:

πŸ’‘
Deep customer empathy.
Understanding what users actually need, not just what they ask for, requires human judgment and genuine curiosity. This is a leadership quality, not a technical one.
🧭
Strategic decision-making.
Knowing which problem to solve, and why now, is still a deeply human skill. AI can accelerate execution. It can’t set direction.
βš–οΈ
Navigating complex tradeoffs.
Weighing competing priorities, managing ambiguity, and making calls with incomplete information: this is where great tech leaders earn their seat at the table.

AI handles implementation velocity. Tech leaders handle direction. In my experience coaching CTOs and CPOs, that distinction is becoming more important, not less.

πŸ’‘ The organizations that will win aren’t the ones that protect traditional role boundaries. They’re the ones led by CTOs and CPOs who know how to build adaptive, outcome-driven teams  and use AI to amplify what makes their people irreplaceable: judgment, creativity, and collaboration.

The Strategic Question Every CTO and CPO Is Facing

The question on every tech leader’s mind

In two years, will we still need separate product management and software engineering roles, or just “orchestrators” and “makers” who do both?

The honest answer is: it depends on the organization. But the direction of travel is clear. The most effective teams will likely look less like two distinct disciplines coordinating with each other, and more like a unified group of versatile builders with AI deeply embedded in how they work.

The transition won’t happen overnight, and not every company will move at the same pace. But the CTOs and CPOs who start designing for this reality now, especially in how they hire, structure teams, and evaluate performance, will be better positioned to scale, move faster, and build better products.

Key Questions for Tech Leaders Navigating This Shift

If you’re a CTO or CPO thinking through what this means for your organization, these are the questions I’d encourage you to sit with:

  • Are you hiring for adaptability and curiosity, or optimizing for role-specific credentials that may matter less in 18 months?
  • Do your team rituals (stand-ups, planning, retrospectives) still reflect a handoff model, or a genuinely collaborative one?
  • Are you creating the conditions for engineers to engage in product strategy, and for product leaders to get hands-on with prototypes?
  • How are you communicating this evolving organizational design to your CEO and board, and bringing them along on the journey?

Coaching CTOs and CPOs Through How AI Is Changing Their Role

These aren’t abstract future challenges. They’re decisions being made right now, in real organizations, under real pressure. And they’re exactly the kind of high-stakes, nuanced questions that CTO and CPO coaching is designed to help you work through.

At Hoola Hoop, my coaching work with CTOs and CPOs is built around the specific challenges of leading technology organizations at the intersection of AI, organizational design, and business strategy. I work with tech leaders at startups and growth-stage companies who are navigating ambiguity, scaling teams, and shaping the future of how their organizations build and ship.

Every coach at Hoola Hoop is a former operator, including CTOs, CPOs, and C-suite executives who have sat in the seat. We don’t offer generic leadership frameworks. We bring real-world experience from inside the roles you’re in, and help you develop the strategic clarity, executive presence, and organizational judgment to lead effectively through periods of rapid change.

Whether you’re rethinking your team structure, preparing for a board conversation about AI strategy, or working through how your role itself is evolving β€” I’d love to help.


Podcast: Optimizing Tech Teams & Strategy in EdTech

In this executive leadership episode of EdTech Elevated, Lisa March, President and Founder of Partner in Publishing, interviews Leigh Newsome, Partner at Hoola Hoop and New York University adjunct professor. The episode focuses on scaling EdTech companies by navigating the complexities of technology leadership. Drawing from his experience as both a Silicon Valley engineering leader and executive coach, Leigh shares methodologies for CTOs and CEOs, including specialized CTO coaching programs and technology team optimization, as well as how to prepare for due diligence and manage technical debt strategically.

The discussion also explores critical tech leadership challenges, particularly strategic outsourcing decisions, AI implementation and its impact in education technology, and conducting technical due diligence during mergers and acquisitions. Leigh reveals effective frameworks for CEO-CTO alignment and demonstrates how Hoola Hoop’s executive coaching and advisory services help EdTech leaders excel. Through targeted CTO coaching, leadership development, and strategic planning, Hoola Hoop supports education technology executives in building and scaling successful companies.

00:06 – Introduction to EdTech Elevated

00:22 – Leigh Newsome’s Background and Role

01:31 – Leigh’s journey to CTO Leadership… and CTO Coach.

05:09 – Challenges in Technology Leadership

07:42 – AI’s Impact on EdTech

10:18 – Supporting Pre-Revenue Companies

13:03 – Due Diligence for Investors and M&A

16:45 – Outsourcing and Tech Team Management

19:20 – CEO and CTO Collaboration

25:10 – Hoola Hoop Team Overview

27:12 – Closing Remarks


Beyond the Code: Executive Coaching for CTOs and CPOs

Chief Technology Officers (CTOs) and Chief Product Officers (CPOs) navigate the complex intersection of technology, product strategy, people leadership and business objectives. At Hoola Hoop, we offer specialized executive coaching tailored to the unique challenges faced by these tech leaders. Let’s start by dispelling some common myths about CTO and CPO coaching.

Common Myths About CTO and CPO Coaching

Myth “As a tech leader, I don’t need coaching. I only need to know how to build products.”
Reality The role of a CTO or CPO extends far beyond product development. Our coaching focuses on tech leadership at the executive level. We help you navigate board presentations, shape company-wide technology strategy, and make critical decisions on tech investments and acquisitions. Our sessions focus on developing your ability to lead and be accountable at an executive level, translating technical concepts for non-technical stakeholders, aligning technology initiatives with business goals, and building a high-performing technology culture.
Myth “CTO or CPO coaching is just about improving my coding skills or product management techniques.”
Reality While technical skills are important, our coaching focuses on strategic thinking, organizational design, and leadership skills crucial for top-level tech executives. We help you balance technical depth with the business acumen expected of a C-level role.
Myth “A coach can’t understand my specific technical challenges or product market.”
Reality Our coaches have extensive experience across various tech stacks, product domains, industry verticals and have led large product and engineering organizations. We provide insights that bridge your unique technical challenges with broader business goals.
Myth “As a CTO/CPO, I don’t have time for coaching. I’m too busy putting out fires!”
Reality Our coaching helps you shift from reactive to proactive leadership. We work on strategies to prioritize, reduce recurring issues, improve processes, and expand your capacity for strategic thinking and innovation.
Myth “Coaching will expose my technical or leadership weaknesses to my team or the board.”
Reality Our coaching is confidential and focused on your growth. We help you develop strategies to address challenges, enhancing your confidence and effectiveness as a leader. Additionally, we prepare you to effectively present your technology and product strategies to your board, other leaders and in public.
Myth “CTO/CPO coaching is just about better engineering or product management.”
Reality While optimizing team performance is important, our coaching goes far beyond technology management. We focus on elevating your strategic impact across the entire organization. This includes architecting scalable tech ecosystems, aligning technology and product roadmaps with overarching business goals, and amplifying your executive leadership presence. We guide you in effectively working with your CEO, board and team, influencing C-suite decisions and scaling your capabilities in tandem with your company’s growth.

Our Approach to CTO and CPO Coaching

πŸ—οΈ
Technical Leadership Expertise
We bring deep experience in CTO and CPO roles, offering advice rooted in firsthand knowledge of leading technology and product organizations. Our guidance spans critical areas such as architecting scalable engineering organizations, making high-stakes technology decisions, strategically managing technical debt, preparing for an M&A event, and harmonizing ambitious product visions with engineering realities.
🎯
Strategic Business Alignment
We help you navigate the crucial intersection of technology, product and business strategy. This includes translating business goals into actionable tech and product strategies, communicating technical concepts to non-technical executives, and influencing company-wide transformation initiatives.
πŸ’Ό
M&A and Investor Readiness
We guide you through comprehensive tech diligence and product strategy audits, ensuring you’re fully prepared for mergers, acquisitions, and investor scrutiny. This includes assessing technical risks, evaluating product scalability, and developing robust strategies for investor presentations.
πŸ“ˆ
Leadership Amplification
We enhance your effectiveness as a CTO or CPO, developing your leadership abilities, building high-performing engineering and product teams, managing diverse technical personalities, and preparing you for growth.
πŸ”’
Open Dialogue
We provide a confidential judgment-free space for honest conversations about sensitive issues, free from the constraints of office politics and internal biases. This includes navigating complex stakeholder relationships and addressing technical disagreements within your team or even conflicts with your CEO or C-suite teammates.
πŸ”„
Holistic Tech Leadership Development
Our coaching integrates leadership growth and technical excellence. We help you balance hands-on technical work with strategic leadership, stay technically relevant while focusing on high-level decision making, and develop your personal brand as a technology leader.
πŸ“Š
Measurable Outcomes
We emphasize the importance of measurement, tracking your leadership effectiveness and your impact on key engineering, product and financial metrics. This includes working with you to define the right type of metrics for your organization and measuring the ROI of your technology investments.

What to Expect from Hoola Hoop CTO/CPO Coaching

01
Illuminate Your Tech Leadership Blind Spots
Uncover the critical areas for improving your tech leadership, from inter-team communication to development pipeline bottlenecks. This includes surfacing feedback your team and other key stakeholders may be hesitant to share directly with you. We may also recommend advanced tools and metrics for a more granular view of engineering performance and product development efficiency.
02
Candid, Experienced Feedback
Receive honest, constructive feedback from seasoned tech leaders. Our feedback process includes interview-based 360° reviews and advanced leadership assessments designed specifically for tech executives.
03
Strategic Technology Alignment
Refine your long-term product and technology strategies, ensuring they align with business objectives and market dynamics, including evaluating emerging technologies and their potential impact on your roadmap.
04
Leadership Team Development
Gain insights into building and nurturing high-performing tech teams. We focus on helping you develop your direct reports and key talent, crucial for scaling your impact as a CTO or CPO.
05
Balanced Tech-Business Perspective
We offer both high-level strategic insights and detailed tactical guidance, ranging from optimizing development processes to crafting product roadmaps and quantifying the ROI of tech investments.
06
Peer Network Expansion
Beyond 1:1 coaching, engage with fellow tech leaders through our exclusive CTO and CPO roundtables, forums that foster peer learning and collaboration on industry-specific challenges.

Is Hoola Hoop Right for You?

Our coaching is ideal for tech leaders who:

  • Embrace constructive feedback and new perspectives on technical and strategic challenges.
  • Are dedicated to long-term personal growth and organizational technological advancement.
  • Are willing to be challenged, open to measuring their success through concrete tech and product outcomes, and ready to step outside their comfort zone.

If you're seeking a partnership that will elevate your tech leadership and enhance your company's technological performance, let's connect. Together, we'll drive innovation, streamline your tech operations, and position your products for market success.

Ready to talk about CTO coaching with Leigh?

Book a 30-minute introductory call to explore whether coaching is right for you.

Book a meeting with Leigh →
Leigh Newsome - CTO Coach
Leigh Newsome Partner, Hoola Hoop · CTO Coach

Leigh Newsome is a Partner at Hoola Hoop and a CTO coach with 25 years of experience scaling product and engineering teams. Leigh has worked with a wide range of startups and global enterprises, including Avid, Digidesign, WPP, and Kantar/Millward Brown. He successfully led TargetSpot, backed by Union Square Ventures, Bain Capital Ventures, and CBS, through its acquisition by Radionomy Group (Vivendi). When he's not coaching CTOs, you'll find him teaching digital audio to graduate students at NYU, building audio and signal processing applications, or flying fixed-wing aircraft, but never all three at once.


Product and Technology Due Diligence

In mergers, acquisitions, and investment decisions, comprehensive product and tech due diligence is crucial for informed decision-making and risk mitigation. This strategic evaluation process examines critical areas including technical debt assessment, architectural decisions, R&D investment analysis, and team capabilities evaluation. Beyond surface-level code review, it provides deep insights into a company's technological sustainability, product validation, and future scalability potential. Understanding these fundamental components helps stakeholders make confident investment decisions and identify promising opportunities while avoiding costly oversights.

Components of Product and Technology Due Diligence

01
Technical Debt Evaluation
Understanding a company's technological foundation is crucial. Technical debt, the accumulation of shortcuts taken in software development to accelerate time-to-market, can impact future growth and scalability. An evaluation of this debt helps investors and buyers determine whether the company's technology is sustainable or whether costly redevelopment will be necessary in the future.
02
R&D Expenditure Analysis
Research and Development (R&D) investment must align with the company's growth stage and product roadmap. The R&D spend of a startup will differ from that of a scaling company or one with a mature, late-stage product. Thorough due diligence uncovers whether the R&D budget is fueling genuine innovation or simply sustaining obsolete products. It also exposes any misallocation of resources, whether through excessive or insufficient spending in specific areas.
03
Product Validation
Verifying that a product delivers on its promises is essential. Product validation ensures that the technology performs as advertised; any misrepresentation can lead to investor disappointment or customer concerns. It's crucial to confirm that the product meets its claims in terms of functionality, scalability, and security. It's also important to examine the product roadmap, covering both short-term and long-term strategic plans, to understand the product's future direction and potential.
04
Team Quality Assessment
The quality of the technical team and their decision-making processes are important factors in a company's ability to execute its product vision and navigate future technological challenges and business demands. This assessment typically involves interviewing key technical leaders, conducting code walkthroughs, examining architectural decisions, and evaluating the team's ability to adapt to challenges and changes. A highly skilled, efficient team can tackle technical obstacles and drive innovation. In contrast, an underperforming team or poorly structured organization may find it challenging to sustain or advance the product.

Dispelling Common Myths

Several misconceptions surround product and technology due diligence:

Myth: It's solely about code review.
Reality: While code analysis is part of the process, due diligence encompasses a broader scope, including architecture assessment, team capabilities evaluation, product strategy analysis, and market fit determination.
Myth: Early-stage startups don't require it.
Reality: Product due diligence is critical even for early-stage companies. Identifying potential issues early, such as unscalable architecture or poorly planned R&D budgets, can prevent costly interventions down the line.
Myth: Investors focus exclusively on revenue.
Reality: Technical health often serves as an indicator of future revenue potential. A product with controlled technical debt, a solid R&D plan, and validated performance is more likely to scale successfully and thrive in the market.

Significance of Product and Technology Due Diligence

For investors and in M&A, understanding the true state of a company's technology is vital. It helps mitigate risks, ensure alignment with future growth projections, and provide insights into whether the product and team can deliver on their promises. Ultimately, product due diligence is a strategic tool for safeguarding long-term investment value.

By conducting thorough product and tech due diligence, stakeholders can make more informed decisions, potentially avoiding costly mistakes and identifying promising opportunities that might otherwise be overlooked.

Reach out if you would like to know more about Product & Tech Diligence at Hoola Hoop.

Ready to talk about tech diligence with Leigh?

Book a 30-minute introductory call to talk about your diligence needs.

Book a meeting with Leigh →

Technical Due Diligence: A CTO’s Guide

Preparing for Technical Due Diligence

Technical due diligence requests arrive at the worst possible time: mid-fundraise, mid-acquisition, mid-everything. The engineering leaders who handle them well aren't the ones who scramble. They're the ones who were already prepared.

It's common for engineering leaders to receive technical due diligence requests from an investor or as part of an M&A process. In some organizations, this leads to panic, last-minute scrambling, and poor results. It doesn't have to. The difference between leaders who sail through diligence and those who don't is almost always preparation, and knowing what's actually being evaluated.

In this article I'll walk through what investors and acquirers are really looking for, the most common mistakes CTOs make, and how to build a state of readiness before the request ever arrives. I also shared a detailed talk on this topic at ELC Annual 2020, including key excerpts from Hoola Hoop's engineering due diligence playbook.

Watch: Preparing for Investor & M&A Technical Due Diligence

ELC Annual 2020 · Leigh Newsome, Partner & CTO Coach, Hoola Hoop

Watch the Talk →

What Investors and Acquirers Are Actually Evaluating

Most CTOs assume technical due diligence is a code review. It isn't. It's a comprehensive assessment of whether your technology, your team, and your processes can deliver on the promises your business is making. Here's what's actually being examined:

01
Technical Debt
Investors want to understand the accumulation of shortcuts taken to accelerate time-to-market, and whether those shortcuts are manageable or a ticking clock. Uncontrolled technical debt signals future cost and risk. Controlled, well-documented debt signals engineering maturity. The difference matters enormously to how your valuation is perceived.
02
R&D Investment & Alignment
Is your R&D spend fueling genuine innovation or sustaining obsolete systems? Diligence teams look at whether your engineering investment aligns with your growth stage and product roadmap, and whether resources are being allocated effectively or scattered. Misalignment here is one of the most common red flags.
03
Product Validation
Does the product actually do what it claims? Investors verify that the technology performs as advertised in terms of functionality, scalability, and security. They'll also examine the product roadmap to assess strategic direction and whether the organization has a credible path to where it says it's going.
04
Team Quality & Decision-Making
The quality of your technical leadership and their decision-making processes is often the most important factor. Diligence teams interview key leaders, conduct code walkthroughs, and examine architectural decisions. A skilled, well-structured team can recover from technical problems. A weak team will struggle to sustain even a strong product.

Common Mistakes CTOs Make

In my experience conducting diligence and coaching engineering leaders through the process, the same failure modes come up repeatedly:

⚠️
Treating it as a code review
Code quality matters, but diligence is a much broader evaluation: architecture, team, process, strategy, and IP. CTOs who optimize for code alone miss what's actually being assessed.
⚠️
Waiting until the request arrives
Diligence requests come with tight timelines and high stakes. Engineering leaders who haven't documented their architecture, debt, and team capabilities before the request scramble, and it shows.
⚠️
Hiding problems instead of contextualizing them
Experienced diligence teams find everything. CTOs who try to obscure technical debt or team gaps destroy credibility. Those who present problems with context and a clear plan build trust instead.
⚠️
Underestimating the team assessment
Investors and acquirers are often betting as much on the team as the technology. CTOs who don't prepare their technical leaders for interviews, or can't articulate how decisions are made, leave value on the table.
⚠️
Assuming it only matters at late stage
Technical due diligence is critical even for early-stage companies. Identifying unscalable architecture or poorly planned R&D budgets early, whether by an investor or by the CTO themselves, prevents far more costly interventions later.
⚠️
No documentation of IP and architecture
Diligence teams want to see that your technology is documented, understood, and not locked in someone's head. Lack of documentation signals key-person risk and organizational immaturity, two things that directly affect valuation.

How to Prepare Before the Request Arrives

The best time to prepare for technical due diligence is well before you need it. Here's the framework I share with the CTOs I coach:

  • Audit and document your technical debt: Maintain a living document of known technical debt, covering what it is, why it exists, and what it would cost to address. Contextualized debt is manageable. Undocumented debt is a red flag.
  • Document your architecture decisions: Architecture Decision Records (ADRs) or a well-maintained architecture document show that your technology choices are intentional and understood by more than one person. This directly addresses key-person risk.
  • Align R&D spend to your roadmap: Be able to show clearly how engineering investment maps to product strategy and business goals. Investors want to see that spending is purposeful, not reactive.
  • Prepare your technical leaders: Your engineering managers and architects will be interviewed. Make sure they can articulate how decisions are made, how the team is structured, and how you manage quality and velocity. This isn't coaching people to perform; it's making sure your team can speak to what they actually do.
  • Know your IP and security posture: Document ownership of your intellectual property, your security practices, and any open source dependencies and their licenses. These are standard diligence questions that should never catch you off guard.
  • Run an internal diligence exercise: The most effective preparation is to conduct your own technical due diligence before an investor or acquirer does. Identify the gaps, address what you can, and build a clear narrative around the rest. This is exactly what Hoola Hoop's engineering due diligence playbook is designed to facilitate.
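One concrete input to an internal diligence exercise, the open source license inventory, is easy to automate. The sketch below is a minimal, non-authoritative example assuming a Python codebase: it lists each installed package with its declared license using the standard library's `importlib.metadata`. A real audit would also need to cover transitive and non-Python dependencies.

```python
from importlib import metadata


def license_inventory() -> dict[str, str]:
    """Map each installed Python package to its declared license.

    A minimal sketch for an internal diligence pass; packages that
    declare no license surface as "UNKNOWN" and deserve a closer look.
    """
    inventory: dict[str, str] = {}
    for dist in metadata.distributions():
        # Distribution metadata behaves like a mapping of core fields.
        name = dist.metadata.get("Name") or "unknown"
        inventory[name] = dist.metadata.get("License") or "UNKNOWN"
    return inventory


if __name__ == "__main__":
    # Print the inventory sorted by package name for easy review.
    for name, license_text in sorted(license_inventory().items()):
        print(f"{name}: {license_text}")
```

Running this against each deployable environment, and diffing the output over time, turns a standard diligence question into a report you can produce on demand.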

The engineering leaders who handle due diligence well aren't the ones who scramble when the request arrives. They're the ones who were already prepared, and who can tell a clear, honest story about their technology.


Need help preparing for technical due diligence?

Book a 30-minute introductory call to talk through your situation.

Book a meeting with Leigh →


CTO Leadership and Coaching: The Essential Pillars of Success

For a Chief Technology Officer (CTO) in today's dynamic tech landscape, mastering the core responsibilities of technology leadership is crucial for organizational success. Through years of CTO coaching and technology leadership experience at Hoola Hoop, we've identified four fundamental pillars that determine a technology executive's effectiveness and impact. Whether you're a new CTO or a seasoned technology leader, understanding these essential elements will help you drive both technical excellence and business growth.
🗺️
01

CTO Leadership Fundamentals: Developing Technical Vision & Strategy

The cornerstone of successful technology leadership lies in aligning technical decisions with business objectives. As a CTO, your primary responsibility is crafting and executing a technical strategy that directly supports your organization's growth trajectory. This means:

  • Carefully evaluating and selecting technology stacks that align with business goals
  • Making architecture decisions based on company and customer needs rather than following technology trends
  • Creating sustainable technical roadmaps that support long-term scalability
👥
02

Engineering Leadership: How Successful CTOs Build High-Performance Teams

A CTO's success is intrinsically linked to their team's performance. Building and nurturing high-performing engineering teams requires:

  • Creating an environment that promotes innovation and continuous learning
  • Developing strong technical leaders who can drive execution independently
  • Implementing clear accountability frameworks while making necessary strategic decisions
  • Fostering a culture of excellence and professional growth
🚀
03

Technology Delivery Excellence: A CTO’s Guide to Execution

While direct coding may not be a daily responsibility, ensuring efficient software delivery remains crucial. Key aspects include:

  • Implementing and maintaining scalable engineering practices
  • Developing strategies for managing technical debt effectively
  • Striking the optimal balance between development speed and product quality
  • Establishing robust delivery processes that support sustainable growth
🤝
04

CTO Coaching: The CTO as Strategic Business Partner

Modern CTOs must excel in business leadership as much as technical expertise. This involves:

  • Converting complex business challenges into implementable technical solutions
  • Building strong partnerships with other executive leaders
  • Effectively advocating for essential technical investments
  • Skillfully managing competing priorities across the organization

At Hoola Hoop, our experience working with organizations of various sizes has consistently reinforced the importance of these foundational elements. We understand that successful technology leadership requires a balanced approach that combines technical expertise with strategic business acumen.

Want to learn more about effective technology leadership and CTO coaching? Contact our team to discuss how we can support your organization's technical strategy and growth.

Ready to talk about CTO coaching with Leigh?

Book a 30-minute introductory call to explore whether coaching is right for you.

Book a meeting with Leigh →
Let's Talk

Thank you for your interest in Hoola Hoop's approach to executive coaching.

We're excited to help you unlock your and your organization's full potential. Please share a few details about yourself and your coaching needs. Let's start this transformative journey together.
