March 2020. Leadership called. The request: get the entire company onto remote work. 4,000+ employees. 12+ countries. Every function. 10 days.
No playbook. No precedent. The buildings were still technically open when we started.
Day by day: what it actually looked like
Day 1: Inventory. Before we could move, we had to understand what we were moving. What do people actually need to work? Voice, video, access, devices. We mapped the gaps by region — because what a sales team in Boston needs is different from what a support team in Budapest needs. You can't run a global deployment without a regional dependency map.
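For illustration, here's a minimal sketch of what a regional dependency map can boil down to in practice: the capabilities each region-and-function pair needs to work remotely, diffed against what's already remote-ready. Every region, function, and capability name below is hypothetical, not our actual inventory.

```python
# Minimal sketch of a regional dependency map: what each team needs to work
# remotely vs. what is already remote-ready. All names and data are illustrative.

REQUIREMENTS = {
    ("Boston", "Sales"):      {"softphone", "crm_access", "vpn", "laptop"},
    ("Budapest", "Support"):  {"contact_center_queue", "headset", "vpn", "laptop"},
    ("Singapore", "Finance"): {"erp_access", "vpn", "laptop"},
}

REMOTE_READY = {
    ("Boston", "Sales"):      {"softphone", "crm_access", "vpn"},
    ("Budapest", "Support"):  {"vpn"},
    ("Singapore", "Finance"): {"erp_access", "vpn", "laptop"},
}

def gap_report(requirements, ready):
    """Return the missing capabilities per (region, function)."""
    return {
        key: needed - ready.get(key, set())
        for key, needed in requirements.items()
        if needed - ready.get(key, set())
    }

if __name__ == "__main__":
    for (region, function), missing in gap_report(REQUIREMENTS, REMOTE_READY).items():
        print(f"{region}/{function}: missing {', '.join(sorted(missing))}")
```

The point of the exercise isn't the tooling; it's that the gaps are different in every region, so the remediation plan has to be too.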
Day 3: The contact center question. This was the hardest part. We had hundreds of support agents whose entire workflow was built around being in a building — physical headsets, dedicated call queues, local network paths. They weren't built for remote. We moved them anyway. A forklift migration while the offices were still open. In parallel with everything else.
Day 7: The call volume spike. Once we had everyone remote, demand exploded: 65,000+ sessions a month. The system wasn't designed for that load under those conditions. We identified the bottleneck using live telemetry dashboards — the same monitoring infrastructure our Enterprise Engineering team had spent the previous two years building. Then we did something nobody planned: we recruited 120 volunteers from across the company, trained them on the call queues, and deployed them within 48 hours. Not IT staff. Volunteers. From marketing, finance, operations.
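As a hedged illustration of the kind of check that surfaces a saturated queue from telemetry, here's a minimal sketch: compare offered call load against staffed agent capacity per queue and flag anything running hot. The metric names, thresholds, and numbers are assumptions for the example, not our actual dashboards.

```python
# Illustrative queue-saturation check: flag call queues where offered load
# exceeds available agent capacity. Metric names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class QueueSample:
    name: str
    offered_calls: int        # calls offered during the interval
    avg_handle_secs: float    # average handle time per call, in seconds
    agents_staffed: int
    interval_secs: int = 3600

    @property
    def utilization(self) -> float:
        capacity = self.agents_staffed * self.interval_secs
        return (self.offered_calls * self.avg_handle_secs) / capacity

def saturated(samples, threshold=0.85):
    """Queues running hotter than the threshold need more agents routed in."""
    return [s for s in samples if s.utilization > threshold]

samples = [
    QueueSample("billing", offered_calls=900, avg_handle_secs=420, agents_staffed=80),
    QueueSample("tech",    offered_calls=600, avg_handle_secs=300, agents_staffed=70),
]
for q in saturated(samples):
    print(f"{q.name}: utilization {q.utilization:.0%} -- route volunteers here")
```

That's the shape of the decision we were making in real time: which queues were drowning, and where the 120 volunteers should land first.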
Day 10: Done. No major SLA breach. The entire company working remotely. Infrastructure holding.
None of this happened because of decisions made in those 10 days. It happened because the infrastructure to support it had been built in the two years before anyone needed it.
What made it possible — and what we almost got wrong
The UC and network infrastructure had been architected not to depend on physical proximity. That sounds obvious in retrospect. In practice, most enterprise IT stacks in 2020 were built around the assumption that employees would be in buildings. VPN capacity sized for 20% remote usage. Contact center platforms requiring on-premises connectivity. Video conferencing licensed for meeting rooms, not home offices.
We had made different choices. Not because we predicted a pandemic — nobody did — but because the team had been pushing for infrastructure-independent architecture as a design principle for years. Cloud-first voice. Global SD-WAN over point-to-point MPLS. SaaS-native endpoints. Those decisions looked like over-engineering before March 2020. They looked like foresight after it.
What we almost got wrong: device inventory. We didn't have a complete, current picture of who had what. In the first 48 hours, employees in several regions couldn't work from home because they had desktop machines and no laptops. We solved it — procurement, logistics, emergency shipping — but it cost time and energy that should have been spent elsewhere. The lesson: endpoint management and asset visibility aren't IT operations checkboxes. They're business continuity capabilities.
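To make the gap concrete, here's a hedged sketch of the question a current asset inventory answers in seconds and a stale one can't answer at all: who is stranded on a desktop-only assignment, and where. The schema, field names, and records are hypothetical.

```python
# Illustrative asset-visibility check: who can't work from home because their
# only assigned device is a desktop? Schema and data are hypothetical.

from collections import defaultdict

inventory = [
    {"employee": "e1001", "region": "EMEA", "devices": ["desktop"]},
    {"employee": "e1002", "region": "EMEA", "devices": ["laptop"]},
    {"employee": "e1003", "region": "APAC", "devices": ["desktop", "thin_client"]},
    {"employee": "e1004", "region": "AMER", "devices": ["laptop", "desktop"]},
]

def desktop_only_by_region(records):
    """Group employees with no portable device by region."""
    stranded = defaultdict(list)
    for rec in records:
        if "laptop" not in rec["devices"]:
            stranded[rec["region"]].append(rec["employee"])
    return dict(stranded)

for region, employees in desktop_only_by_region(inventory).items():
    print(f"{region}: {len(employees)} employees need emergency laptop shipments")
```

If the underlying records are stale, the report is fiction, which is exactly why asset visibility belongs in the business continuity conversation rather than the operations backlog.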
"The infrastructure decisions you make in quiet times determine how you perform in loud ones."
The playbook that came out of it
After the deployment, we documented everything — not as an after-action report that would sit on a shelf, but as a living playbook. What worked. What broke. What we'd do differently. Which vendors held up and which ones struggled under load. What the escalation chain looked like when three simultaneous crises were happening in different time zones.
That document became the foundation of our Infrastructure Autonomy Playbook — the pre-built framework we later used to execute M&A separations, carve-outs, and other high-stakes infrastructure changes at speed. The 10-day COVID deployment taught us that the value of preparation isn't the document. It's the shared understanding of how decisions get made before the pressure hits.
Why this story matters right now
AI is the new forcing function.
In 2020, a global event forced IT organizations to rebuild how work happened — in days. Today, AI is forcing IT to rebuild its entire operating model on a similar timeline. The companies struggling with AI deployment in 2025 and 2026 are the ones that skipped the foundation. They're trying to bolt automation onto infrastructure that wasn't designed for it. They're deploying AI assistants on top of data models that weren't built for machine consumption. They're implementing agentic workflows on top of identity architectures that can't support the access patterns those workflows require.
The pattern is the same as March 2020. Different technology. Same underlying dynamic: the teams pulling ahead are the ones who built the foundation before the pressure arrived. The teams struggling are the ones who treated foundational work as optional — until it wasn't.
The lesson from 2020 still applies. You don't get to build the foundation when the pressure hits. You build it before. And whether that foundation is remote work infrastructure or an AI-ready data and identity layer, the principle is identical.
The companies that are ahead on AI right now didn't get there because they moved faster in 2025. They got there because they made different infrastructure decisions in 2022 and 2023 — just like we made different infrastructure decisions in 2018 and 2019.