
The Human Connection Blog

When the Lights Went Out at Heathrow: A Crisis That Was Never Meant to Be “Won”

JonPaulGabriele

In the early hours of March 21, 2025, a fire broke out at the North Hyde electrical substation in West London, just a few miles from Heathrow Airport. Within hours, a local infrastructure incident had triggered widespread disruption across the global aviation ecosystem.

Flights were grounded, operations were halted, passengers were stranded, and local residents were left without power. Suddenly, one of the most connected airports in the world found itself completely disconnected.

This wasn’t just a power failure; it was a systems failure.

The fire itself was severe yet containable, but what unfolded afterward exposed far deeper vulnerabilities. It has since been claimed that Heathrow had “enough power” from other substations, which now raises difficult but fair questions:

  1. If there was enough power, why shut the airport down completely?
  2. If there wasn’t, why wasn’t the site resilient enough to handle a failure like this?
  3. And most importantly, how did one single point of failure have this much impact on such a critical national and international asset?
These are the questions that will dominate the post-crisis scrutiny, but while many rush to applaud or condemn, I think the truth lies somewhere more uncomfortable.

Crisis leadership isn’t about perfect outcomes

Crisis response is never clean. It’s messy, fast-moving and incomplete. You make decisions with partial data, under pressure, in real time. And in the majority of cases, you choose between bad and worse – which is exactly what Heathrow’s leadership team faced:

  • Compromised infrastructure
  • Uncertainty about the integrity of power and systems
  • Thousands of passengers on site and mid-flight en route to the airport
  • Global operations and supply chain at risk

The common response is, “we need to tackle all of these problems” – and rightly so – but what people often forget is that in a crisis, you don’t have the resources, time, or information to tackle everything at once. Heathrow's leadership chose safety and containment, and in just under 24 hours, they were back online. That’s impressive. That’s recovery under pressure, and that’s business continuity in action.

But it doesn’t mean everything was done right, and it certainly doesn’t mean we shouldn’t ask hard questions.

“Enough power” means nothing without operational continuity

Having backup power doesn’t mean having functional operations. Power alone doesn’t run an airport – systems, processes, and people do. If the backup didn’t maintain critical systems like baggage handling, communications, lighting, or security, then the airport was right to shut down.

However, the next question is why those systems didn’t have their own layers of protection, and where the true resilience was. This leads us to the real issue: this wasn’t just about Heathrow; it was about the entire ecosystem.

Resilience isn’t just a plan – it’s a whole system of dependencies

The recent disruption is a real reminder that resilience doesn’t just live inside an organization. It lives across every partner, vendor, and hidden dependency. In critical services like aviation, the biggest vulnerabilities are often outside the walls of your own operation. There’s a web of partners involved in keeping an airport running:

  • Power providers
  • Facilities management
  • IT and communications vendors
  • Outsourced security
  • Maintenance crews
  • Air traffic systems
  • Second and third-tier subcontractors

Many of these providers sit outside the organization’s direct control, yet their failures become your crisis in an instant. True resilience requires more than internal readiness; it demands visibility across the whole supply and vendor chain, coordination protocols with external stakeholders, and clear ownership of critical functions.

When something breaks in the background, you won’t have time to figure out who’s responsible; you’ll only care about who can fix it. So identifying and (most importantly) testing and exercising your supply chain is paramount.
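One practical starting point is simply mapping those dependencies and flagging the ones with no redundancy. The sketch below is purely illustrative – the function names and dependency map are hypothetical, not drawn from Heathrow’s actual architecture – but it shows what a basic single-point-of-failure check over such a map might look like.

```python
# Hypothetical dependency map, for illustration only: each critical function
# is listed with the external providers that can keep it running.
dependencies = {
    "baggage_handling": ["north_hyde_substation"],                        # no redundancy
    "terminal_lighting": ["north_hyde_substation", "backup_generators"],
    "security_screening": ["outsourced_security_vendor"],                 # no redundancy
    "communications": ["it_vendor_a", "it_vendor_b"],
}

def single_points_of_failure(deps):
    """Return providers whose loss would leave at least one function with no provider."""
    spof = {}
    for function, providers in deps.items():
        if len(providers) == 1:
            spof.setdefault(providers[0], []).append(function)
    return spof

for provider, functions in single_points_of_failure(dependencies).items():
    print(f"Single point of failure: {provider} -> {', '.join(functions)}")
```

A list like this is only the first step; the point of the exercise is to then test what actually happens when each flagged provider fails.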

This wasn’t a “winnable” crisis – and that’s the point

I’ll discuss this concept further in my upcoming webinar, The Unwinnable Crisis: How to Create Exercises That Prepare Teams for Real-World Uncertainty, but the Heathrow disruption is a perfect case study.

This was never going to be a clean “win.” No plan could have delivered a flawless response, and no leader could have avoided disruption entirely. Instead, this crisis asked a different question:

When everything seems to be falling apart, can you contain the damage, protect your people, and recover quickly?

That’s the real test. It’s what separates the theoretical resilience plans from the operational reality. Heathrow passed parts of that test, but the system around it has questions to answer, and every other organization watching should be asking the same thing: “How many hidden dependencies are we one substation, one outage, one contractor failure away from exposing?”

The next crisis may not give you a warning, and it certainly won’t give you time to figure out who’s holding it all together. Crisis leadership isn’t about perfection; it’s about being ready for the moment when no perfect option exists. The question now is: what did this crisis reveal that we can’t afford to ignore?

Ready to build true crisis readiness?

Join me for the upcoming community webinar, The Unwinnable Crisis: How to Create Exercises That Prepare Teams for Real-World Uncertainty, on April 11. We’ll explore what true crisis readiness looks like and how to prepare your team to lead when there is no “win” – only choices.
