Our IT NPS was in the red.
I didn't fix it by upgrading the ticketing system.
That was the first instinct — something technical is broken, find the technical fix. It's the IT reflex. The score is low, so we look at the tools. Maybe the portal is clunky. Maybe ticket routing is wrong. Maybe we need a new ITSM platform that users can actually navigate. We started scoping tool changes.
Then we actually read the feedback.
What the data actually said
When we dug into the feedback — real comments, not just the scores — the pattern wasn't about tool performance. The portal worked. Ticket routing was reasonably functional. Resolution times were within SLA for the majority of categories.
The problem was communication. People didn't know when their issues were being worked on. They didn't know why something was taking longer than expected. They felt invisible to IT — like requests went into a void and something eventually came back out, but nobody explained what happened in between.
The specific comments that showed up repeatedly: "I didn't know if anyone was working on it." "I had to follow up myself just to find out the status." "It said resolved but my issue wasn't actually fixed." "I got a canned message saying it was closed and I didn't even know what happened."
None of that is a tool problem. All of it is a service behavior problem.
What we changed
We changed what we measured and what we reported. Not system metrics. User experience signals.
We tracked three things that we hadn't been systematically tracking before. First response time — not just resolution time. Most IT teams track how long it takes to close a ticket. Very few track how long it takes to acknowledge one. From the user's perspective, that acknowledgment is the first signal that something is actually happening. It determines whether they feel heard or ignored, before work on the issue has even begun.
Second: whether users were proactively updated mid-ticket on anything open beyond 24 hours. Even a short message — "we're investigating, expect an update by end of day" — changes how a user experiences the wait. It removes the uncertainty. That uncertainty is what generates the bad NPS scores, not the wait itself.
Third: how often our resolution actually fixed the problem versus just closed the ticket. We had too many tickets that were technically resolved but practically still broken from the user's perspective. We closed them, the auto-message fired, the user was frustrated. The score suffered. The ticket count looked clean.
Killing the auto-resolve message
The most specific change we made: we killed the auto-generated "Resolved — please let us know if you have further questions" message for anything that had been open longer than 48 hours.
That message is not a conversation. It's a system event. It tells the user that a system marked their ticket closed. It doesn't tell them anything about what was actually done, or whether the fix is expected to be permanent, or what to do if the issue comes back. For a password reset resolved in two minutes, it's fine. For an issue that took four days and involved three teams, it's inadequate. Users noticed the difference. We weren't acknowledging it.
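The rule itself is trivial to express — which is part of the point. A sketch of the gate, assuming a hypothetical hook in the ticketing workflow that runs before the close-out notification fires (the function name and hook are illustrative; the 48-hour cutoff is the one we used):

```python
from datetime import datetime, timedelta

# Threshold we chose: anything open longer than this gets a human
# close-out note instead of the canned "Resolved" message.
AUTO_CLOSE_CUTOFF = timedelta(hours=48)

def allow_auto_close_message(opened: datetime, resolved: datetime) -> bool:
    """Permit the auto-generated 'Resolved' message only for quick fixes.

    A two-minute password reset gets the canned message; a four-day,
    three-team issue gets a note written by the tech who worked it.
    """
    return resolved - opened < AUTO_CLOSE_CUTOFF
```

One boolean, but it encodes a service decision: the longer a user has waited, the less acceptable a system event is as the final word.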
We replaced auto-close messages on complex tickets with actual human follow-up. A note from the tech who worked the issue, in plain language: what we found, what we did, and what they should expect. Not long. Often one paragraph. But human.
The service desk team that made this real operated differently from day one. That's what drove the number — not a new tool, not a new process deck. Actual service behavior. The NPS score is a lagging indicator of something that's already happening at the interaction level. If you want to move the score, you have to change what happens in those interactions first.
Why IT NPS is the metric most IT teams don't track — but should
Most IT organizations track ticket volume, resolution time, SLA compliance, first-call resolution rate. These are input metrics. They tell you how the system is running. They don't tell you whether users trust the system or feel well-served by it.
IT NPS measures something different: willingness to recommend IT as a service. That sounds abstract until you realize what it actually represents in an enterprise context. An employee who gives IT a negative NPS score is an employee who routes around IT when they can — buys SaaS tools on the corporate card without involving IT, escalates directly to the CIO when they have a problem, builds shadow IT instead of using the standard stack. Low IT NPS isn't just a feelings problem. It has real operational and security consequences.
Conversely, high IT NPS means the organization trusts IT enough to come to it with problems, to adopt new tools IT recommends, to participate in programs like Zero Trust rollouts or device migrations without resistance. That trust is the foundation that makes large-scale IT initiatives possible. You can't mandate it. You earn it in the small interactions.
The AI era makes this more important, not less
AI is resolving more Tier 1 tickets automatically. That's the right direction. Routine requests — password resets, VPN access, common hardware troubleshooting — should be handled by automation. The efficiency case is clear.
But the tickets that reach a human now are more complex, come from more frustrated users, and involve more edge cases. The user who waited through automated triage before getting to a human is already slightly more impatient than they were five years ago. The issue they have wasn't solved by the bot — which means it's either unusual, ambiguous, or the user already tried the standard fix and it didn't work.
If your team's habits were built for easy, high-volume tickets, they'll struggle with the ones that need actual care. The skills that matter for the AI era of IT support are judgment, communication, and follow-through — not speed. Teams that understood this and invested in service quality before AI deflection kicked in are seeing their NPS numbers hold or improve as automation handles more volume. Teams that deployed AI and called it transformation are discovering that the remaining human interactions are harder than what came before.
The gap between "closed" and "actually resolved" matters more when every ticket that reaches a human is a complex one. If you're not measuring it, you're not managing it.
"The score is a lagging indicator of something that's already happening at the interaction level."
Where to start
If your IT NPS is below where it should be and you're not sure what's driving it, the fastest diagnostic is to read the free-text comments — not just the scores. The scores tell you there's a problem. The comments tell you what kind.
If the comments are about speed, it's a capacity or process issue. If the comments are about communication, it's a behavior issue. If the comments are about being routed to multiple people without resolution, it's a knowledge management issue. Different root causes. Different fixes. The number alone won't tell you which one you're dealing with.
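A crude first pass over the comments can show which failure mode dominates before anyone reads all of them. This is a sketch with made-up keyword buckets — real comments need a human read, and the patterns below are illustrative, not a validated taxonomy:

```python
import re

# Hypothetical keyword buckets mapping to the three root causes:
# speed -> capacity/process, communication -> behavior, routing -> knowledge mgmt.
THEMES = {
    "speed": r"\b(slow|weeks?|waiting|too long)\b",
    "communication": r"\b(no update|status|heard nothing|didn'?t know|follow up)\b",
    "routing": r"\b(transferred|bounced|passed around|another team|reassigned)\b",
}

def triage_comments(comments: list[str]) -> dict[str, int]:
    """Count how many comments touch each theme (a comment can hit several)."""
    counts = {theme: 0 for theme in THEMES}
    for comment in comments:
        text = comment.lower()
        for theme, pattern in THEMES.items():
            if re.search(pattern, text):
                counts[theme] += 1
    return counts
```

Whichever bucket dominates tells you which fix to scope first; a flat distribution tells you the problem is broader than any one of them.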
The changes we made weren't expensive. No new tools. No organizational restructuring. A tracking framework adjustment, a follow-up protocol on complex tickets, and a team that genuinely engaged with what the data was saying. Twenty-five points in two review cycles.
The tools aren't what users remember. The interaction is.