Human + AI: The New Managed Services Workforce Model

Let me start with a confession. When I first heard the phrase “AI will replace managed services jobs,” my instinct was to treat it with healthy scepticism. We’ve been here before: with cloud, with automation, with offshoring. But something feels different this time, and I’ve spent the last several months trying to understand exactly what that “different” is.

Here’s what I’ve concluded: AI isn’t coming for our workforce. It’s empowering them. And the service providers who figure that out first, and build for it deliberately, are the ones who will define what managed services looks like in the next decade.

The Ground Is Actually Moving

I don’t say this to be dramatic, but the February 2026 correction in Indian IT services stocks wasn’t just a market blip. It was a signal: investors finally pricing in something that many of us in the industry have been quietly watching for a while, namely that agentic AI can now execute the kind of complex, multi-step knowledge work that was once the exclusive domain of mid-to-senior professionals.

Autonomous AI platforms capable of executing software engineering, legal documentation, compliance analysis, and workflow management tasks are no longer experimental; they’re in deployment. The creative destruction of traditional delivery models has accelerated. Providers heavily reliant on linear, labor-and-location delivery are already seeing margin compression that isn’t going away.

And yet, here is the part that deserves more airtime: the answer isn’t to panic or pivot blindly. The answer is to think carefully about what humans are for in this new model.

The Old Workforce Triangle Is Breaking

For decades, the managed services industry operated on a familiar pyramid: a large base of execution talent at the bottom, a thinner layer of specialists in the middle, and a narrow band of strategic minds at the top. Volume was the business model. Scale drove margin. The more people you could deploy efficiently, the more profitable you were.

That pyramid is collapsing, not because humans are less valuable, but because the base layer is being claimed by machines.

Gartner’s research describes this transition as a shift from “pyramids to diamonds,” a workforce model where the broad base of task-execution roles shrinks, and value concentrates in experienced specialists who architect, govern, and supervise AI-enabled delivery. This isn’t a metaphor. It’s a fundamental redesign of what a managed services team looks like.

What does a “diamond” look like in practice?

The top of the diamond is your strategic and domain layer: people who understand the client’s business context, who can translate business risk into technical governance requirements, and who can have the hard conversations when an AI agent makes a wrong call. These roles are more critical than ever, not less.

The middle, and this is the growth area, is your orchestration and assurance layer. These are the engineers, analysts, and architects who design the AI workflows, monitor their outputs, set the guardrails, and intervene when needed. Think of them as air traffic controllers, not pilots. The planes fly themselves, but someone needs to manage the airspace.

The narrow base is what remains of execution, but it’s no longer commodity labor. It’s specialists brought in for tasks that genuinely resist automation: edge cases, high-stakes decisions, relationship moments, creative problem-solving.

The Governance Gap Is Real, and It’s Our Problem to Solve

Here’s something that doesn’t get enough attention: clients are scared. Not of AI specifically; most of them want it. They’re scared of black boxes. They’re scared of losing visibility into systems that are making decisions on their behalf.

Gartner’s research on service sovereignty captures this well. Their analysis finds that by 2029, half of all services contracts will include explicit sovereignty requirements, covering not just data residency, but the logic and automated decisions driving critical processes. That’s a massive shift from the roughly 5% of contracts that include such requirements today.

As a service provider, this hits us in three places simultaneously. First, there’s data and AI sovereignty, clients demanding that we can demonstrate where their data lives, how it’s processed, and that their information is never being fed into our training pipelines. Second, there’s operational sovereignty, the requirement that critical AI-driven services can continue to function under local control, without dependency on a distant centralized stack. And third, there’s technological sovereignty, clients wanting meaningful exit rights, portable licenses, and architectures that don’t lock them into our ecosystem forever.

This isn’t just a legal or compliance conversation. It’s a trust conversation. And trust, in our industry, is built by people, by the human professionals who can sit across from a CIO and say, with confidence: “Here’s exactly what our AI is doing. Here’s the audit trail. Here’s your kill switch.”

EY’s work on AI-enabled managed services makes a related point: success in this environment requires embedding agility and governance into every phase of the engagement lifecycle, not just during contract negotiation, but actively throughout delivery. That means governance teams, regular audits, and a human-in-the-loop mandate for critical decisions. It means treating AI as a managed actor, not an invisible one.

Rethinking What “Human-in-the-Loop” Actually Means

I want to push back on a lazy interpretation of “human oversight” that I hear too often. It doesn’t mean having a human rubber-stamp every AI output. That would defeat the purpose and create the worst of both worlds: AI’s speed and scale throttled by reviews that add no real scrutiny, plus the illusion of human accountability without the substance.

Real human-in-the-loop design is about deliberate architecture. For AI-enabled IT services, the framework should treat AI as a distinct actor within governance structures: explicitly defined in RACI matrices, with humans retaining the accountable role (not necessarily the responsible role) for AI-driven task outcomes. The AI does the work. The human owns the outcome. The contract specifies which is which.
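As a purely illustrative sketch of that split (the role names, agent identifiers, and confidence threshold below are hypothetical assumptions, not any specific framework or contract), a RACI entry with an AI agent in the responsible seat might look like this:

```python
# Hypothetical sketch: an AI agent is Responsible for a task,
# while a named human remains Accountable for the outcome.
from dataclasses import dataclass, field


@dataclass
class RaciEntry:
    task: str
    responsible: str  # actor that does the work (may be an AI agent)
    accountable: str  # human who owns the outcome
    consulted: list = field(default_factory=list)
    informed: list = field(default_factory=list)


def requires_human_signoff(entry: RaciEntry, confidence: float,
                           threshold: float = 0.85) -> bool:
    """Escalate to the accountable human when an AI actor's self-reported
    confidence falls below an (assumed) contractual threshold."""
    return entry.responsible.startswith("agent:") and confidence < threshold


ticket_triage = RaciEntry(
    task="L1 incident triage",
    responsible="agent:triage-bot",   # the AI does the work
    accountable="human:ops-lead",     # the human owns the outcome
    consulted=["human:sre-oncall"],
    informed=["client:service-desk"],
)

print(requires_human_signoff(ticket_triage, confidence=0.62))  # True
print(requires_human_signoff(ticket_triage, confidence=0.95))  # False
```

The point of encoding this at all is that “who owns the outcome” becomes machine-checkable, rather than something rediscovered in a post-incident review.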

This matters enormously for how we staff and train our teams. The skills we’re hiring for are shifting from “can you execute this task?” to “can you govern this process?”, from delivery skills to orchestration skills. The professionals who will thrive in managed services over the next five years are those who can read an AI output critically, identify when something doesn’t smell right, ask the right questions of the system, and escalate appropriately.

Forrester’s 2026 tech leadership predictions put it plainly: your workforce will be a mix of humans, bots, and gig workers, and managing that mix requires a new kind of leadership skill. A third of CIOs are already moving to adopt gig-worker protocols to support multi-role IT employees who work alongside AI agents. The workforce is already hybridizing; providers who don’t build for that reality will find themselves offering a model that clients no longer need.

The New Value Proposition Has to Be About Outcomes, Not Inputs

Here’s the commercial question that I think about most: if AI compresses the cost of execution, what are clients actually buying from us?

The honest answer is: judgment, accountability, and continuity.

Clients are not paying for hours anymore. They’re not even paying for outcomes in the traditional sense. They’re paying for trusted outcomes, results they can stand behind in a board meeting, a regulatory audit, or a post-incident review. That trust requires human professionals who can explain, defend, and be held responsible for what their AI-augmented service delivered.

This is why the shift from labor arbitrage to what Gartner calls “technology arbitrage” is so significant. Providers can no longer compete on who can deploy the most people for the least cost. They compete on who has the best AI orchestration capability, the deepest domain expertise to govern it, and the governance maturity to give clients confidence. Gartner predicts that through 2028, 60% of new IT services contracts will require explicit GenAI transparency, explainability, and auditability. That’s not a nice-to-have. That’s a procurement requirement. 

EY’s perspective on value-led AI adoption reinforces this: success in AI is not about the technology itself; it’s about whether you’ve integrated it into a workflow that delivers measurable, repeatable business impact. Clients are becoming more sophisticated buyers. Fewer than a third of decision makers can currently tie their AI investments to financial growth, according to Forrester’s research, and as a result, CFOs are tightening the purse strings. Providers who can demonstrate a clear chain from AI capability to client business outcome will win deals. Providers who can’t will lose them.

What We’re Doing About It

At Progressive Techserve, we’ve been building what we call a “hybrid delivery model,” and it’s less exotic than it sounds. The core idea is simple: AI agents handle the high-volume, structured, repeatable work. Our human professionals handle the contextual, relational, and governance-intensive work. And critically, we invest heavily in the connective tissue between them.

That connective tissue is not just a dashboard or a monitoring tool. It’s a set of escalation protocols, a set of skills for interpreting AI outputs, a governance cadence, and a commercial model that prices outcomes rather than inputs. Building that takes time, and it requires investing in the people who can operate at the boundary between machine execution and human judgment.
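As a rough sketch of what such an escalation protocol can look like in code (the tier names, risk levels, and confidence cutoff are assumptions for illustration, not our actual protocol), the routing logic is essentially a small decision table:

```python
# Illustrative escalation protocol: route an AI agent's output to a
# handling tier based on business risk and the agent's self-assessed
# confidence. All thresholds here are hypothetical.
from enum import Enum


class Tier(Enum):
    AUTO_APPROVE = "auto-approve"            # AI output ships as-is
    SPECIALIST_REVIEW = "specialist-review"  # orchestration layer checks it
    DOMAIN_ESCALATION = "domain-escalation"  # top-of-diamond decision


def route(risk: str, confidence: float) -> Tier:
    """Pick a handling tier for one AI-produced output."""
    if risk == "high":
        # Humans always own high-stakes calls, regardless of confidence.
        return Tier.DOMAIN_ESCALATION
    if risk == "low" and confidence >= 0.9:
        return Tier.AUTO_APPROVE
    return Tier.SPECIALIST_REVIEW


print(route("high", 0.99).value)    # domain-escalation
print(route("low", 0.95).value)     # auto-approve
print(route("medium", 0.95).value)  # specialist-review
```

The design choice worth noting is that high-risk work escalates unconditionally: confidence scores decide how fast low-stakes work moves, never whether a human sees the consequential decisions.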

We’ve also been thoughtful about sovereignty from day one. Clients, especially in regulated industries, want to know that their critical processes are not running through some black box in a jurisdiction they can’t influence. That means making deliberate choices about where our AI models are deployed, how client data is handled, and what controls clients have over the systems acting on their behalf. It means the ability to say clearly: your data doesn’t train our models, your processes run in your jurisdiction, and here’s the audit trail to prove it.
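To make the audit-trail point concrete, here is a minimal, hypothetical sketch of what one per-decision record might capture (the field names and values are illustrative assumptions, not a real schema): the jurisdiction the model ran in, references to inputs rather than raw client data, and an explicit no-training flag.

```python
# Hypothetical audit-trail record for a single AI-driven decision,
# logging the sovereignty facts a client would want to verify.
import json
from datetime import datetime, timezone


def audit_record(decision_id: str, agent: str, jurisdiction: str,
                 input_refs: list, outcome: str) -> str:
    """Serialize one decision's audit entry as JSON."""
    record = {
        "decision_id": decision_id,
        "agent": agent,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "jurisdiction": jurisdiction,    # where the model actually ran
        "inputs": input_refs,            # references, never raw client data
        "outcome": outcome,
        "used_for_training": False,      # contractual guarantee, logged per decision
    }
    return json.dumps(record, indent=2)


print(audit_record("dec-0042", "agent:compliance-check",
                   "EU/Frankfurt", ["doc:contract-17"], "flagged-for-review"))
```

Logging the guarantee per decision, rather than asserting it once in a contract appendix, is what turns “your data doesn’t train our models” from a claim into something a client’s auditors can sample and verify.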

The Talent Question Is the Hardest One

I’ll be honest: finding and keeping the right people for this model is the hardest part of the whole transition.

The skills gap is real. Forrester’s research notes that the time to fill senior developer positions is effectively doubling as demand for people who can combine deep system architecture expertise with AI fluency outstrips supply. The same dynamic is playing out across managed services roles. We’re not looking for generalists who dabble in AI. We’re looking for experienced domain specialists who can architect, govern, and supervise AI-enabled delivery, and there simply aren’t enough of them yet.

Our response has been to invest in internal upskilling more aggressively than we ever have before. Not checkbox AI training, but substantive programs that teach our people how to work with AI agents, how to design workflows around them, how to evaluate their outputs critically, how to catch the failure modes, and how to maintain accountability when something goes wrong. We’re also rethinking our talent acquisition strategy to stop competing on volume and start competing on depth. 

The “diamonds” we’re building aren’t just an organizational structure. They’re a culture shift from delivery as the measure of performance to governance and judgment as the measure of value.

A Closing Thought

The managed services industry has survived and adapted through every wave of technological disruption. We’re not in a different situation today; we’re in a harder version of a familiar situation. The technology is more powerful, the change is faster, and the governance stakes are higher. But the fundamental challenge is the same one we’ve always faced: how do we make ourselves genuinely indispensable to our clients in a world where the tools keep changing?

The answer, I think, has always been the same too. Not by owning the tools, as tools commoditize. But by owning the judgment, the accountability, and the trust that clients need when those tools are making decisions that affect their business.

Human + AI isn’t a transition we’re managing. It’s a model we’re building. And in my view, the providers who build it well, and deliberately, with governance at the center and human expertise at the top, will find this moment to be not a threat, but the biggest opportunity our industry has seen in a generation.

I wouldn’t claim that we’ve already arrived at this future. Like most service providers, we are still very much on the journey. A significant portion of our conversations today continue to be about traditional managed services. Some clients are comfortable with that model and prefer it to remain exactly as it is. And that’s perfectly fine; our role is to meet clients where they are, not force them into a model they’re not ready for.

What we are doing, however, is proactively introducing a different possibility. Alongside legacy delivery, we increasingly engage clients in conversations around outcome-based, AI-enabled managed services models where automation and AI handle the repetitive work, and our teams focus on governance, expertise, and business outcomes.

Sandeep Khanna is Director & COO at Progressive Techserve, where he leads business strategy, client partnerships, and the firm’s AI-enabled service delivery transformation.
