It’s an old, sad, zero-sum story: sharing data across organizations to inform plans and policies is a fundamental business need, but sharing is also an immovable corporate liability. To understand the zero-sum game and why it lives on like something out of a grade-B zombie flick, let’s consider a historical problem that looks like its mirror image.

In 1943, British codebreakers at Bletchley Park faced an agonizing paradox. They had cracked the Enigma cipher and could read U-boat communications, often within hours of interception. They knew where the wolf packs were hunting. They could have routed Allied convoys around every submarine in the Atlantic.

But they couldn’t act on what they knew—not fully. Every convoy mysteriously re-routed, every U-boat ambushed with suspicious precision, risked revealing that Enigma had been broken. The Germans would then certainly change their encryption, and the Allies would once again be in the dark. The result was a grim calculus: which convoys to save, which to sacrifice, and how much of this knowledge to use without exposing the secret. The Allies invented fictional spy networks to launder the intelligence, and flew decoy reconnaissance missions to “spot” what they already knew.

At one point, the Allies nearly went too far. Admiral Karl Dönitz received reports of “impossible” encounters—U-boats stumbling into British warships in locations where no patrol should have been. In one incident, three submarines rendezvoused at a tiny island in the Caribbean, and a British destroyer appeared almost immediately. Dönitz demanded a review of Enigma’s security, but the German High Command simply did not believe that it could be broken, and attributed the coincidences to bad luck and British radar. Still, by acting on the intelligence, the Allies nearly revealed its existence. Ultimately, the cost of protecting Ultra (the Allied codename for this signals intelligence) was measured in Allied ships sunk and sailors drowned, because the alternative—losing the ability to read Axis communications—would have been worse.

Now reverse the polarity. Instead of knowing things you cannot use, consider the problem of not being able to use data because you cannot be allowed to know it.

The Modern Inversion

Where Bletchley Park had data it couldn’t act upon, modern institutions could act on data they cannot access. The constraint has flipped, but the underlying logic remains: using information means exposing it, and exposure carries unacceptable risk.

Consider oncology research. Tumors don’t respect institutional boundaries, but patient records do. A researcher studying rare cancers might need data from dozens of hospital systems to build an accurate predictive model. Each hospital has only fragments of the picture. Together, they could accelerate treatments and save lives. Apart, they know too little to be effective. But “together” requires sharing patient data—detailed medical histories, genetic information, and treatment outcomes. No hospital administrator will risk a HIPAA violation, a breach, or a lawsuit by sharing in this way. And privacy regulations exist for good reasons: the 1940 U.S. Census taught us what happens when sensitive data collected for benign purposes is turned against the people who provided it, when records from that census helped locate Japanese Americans for internment.

This same zero-sum pattern repeats across sectors. Banks possess detailed but limited information about fraud patterns, but sharing with competitors to gain a rich picture—even to catch criminals—means exposing customer behavior and proprietary risk models. Intelligence agencies have learned hard lessons about compartmentalization. Even close allies limit what they share, because every additional pair of eyes multiplies the risk of compromise. Pharmaceutical companies running clinical trials cannot pool safety data across studies without revealing competitive intelligence about their pipelines.

In each case, the logic is the same as that which constrained Bletchley Park. Information has value only if you can use it. Using it means exposing it. Exposure creates risk. Therefore, don’t use it—or use it so sparingly that most of its value goes unrealized.

As a result, we have spent eighty years refining the art of keeping secrets. Encryption protects data at rest and in transit. Access controls limit who can see what. Anonymization attempts (and routinely fails) to strip identifying information while preserving analytical utility. All of these techniques share a fatal assumption: that at some point, someone must decrypt the data to compute on it.

What if that assumption were wrong?

Computing in the Dark

Fully Homomorphic Encryption (FHE) represents a fundamental break from the security paradigm we’ve inherited. Where traditional encryption protects data only until you need to use it, FHE allows computation on encrypted data directly—without ever decrypting it.

The mathematics of FHE sounds like science fiction, complete with high-dimensional lattices and polynomial rings, but the intuition is straightforward. FHE exploits a mathematical property: certain operations on encrypted data produce encrypted results that, when decrypted, match what the same operations would have produced on the raw data. Encrypt two values, compute on them, decrypt the result, and you get the correct answer “in the clear.” The computation doesn’t require access to the data—only to its encrypted form.
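The homomorphic property is easiest to see in a simpler, additive-only scheme. Here is a toy sketch of Paillier encryption in Python: additively homomorphic, not full FHE, with absurdly small hard-coded primes chosen for readability, purely as an illustration rather than production cryptography. Multiplying two ciphertexts yields a ciphertext that decrypts to the sum of the two plaintexts: computation without decryption.

```python
import math
import secrets

# Toy Paillier cryptosystem (additively homomorphic, NOT full FHE).
# Multiplying ciphertexts modulo n^2 corresponds to adding plaintexts.

def keygen(p: int = 1789, q: int = 2003):
    """Tiny demo primes only; real deployments use primes of ~1024+ bits."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)        # Carmichael's function of n = p*q
    mu = pow(lam, -1, n)                # valid because we fix g = n + 1
    return (n,), (lam, mu, n)           # (public key, private key)

def encrypt(pub, m: int) -> int:
    (n,) = pub
    n2 = n * n
    r = secrets.randbelow(n - 1) + 1    # random blinding factor, coprime to n
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c: int) -> int:
    lam, mu, n = priv
    n2 = n * n
    u = pow(c, lam, n2)
    return ((u - 1) // n * mu) % n      # L(u) = (u - 1) / n, then scale by mu

pub, priv = keygen()
c1, c2 = encrypt(pub, 5), encrypt(pub, 7)
c_sum = (c1 * c2) % (pub[0] ** 2)       # homomorphic addition of 5 and 7
print(decrypt(priv, c_sum))             # 12
```

Note what never happens here: the party combining `c1` and `c2` never sees 5 or 7. FHE extends this idea from addition alone to arbitrary computation, at the cost of far heavier machinery.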

FHE isn’t theoretical, but for most of its history it has been too slow to matter in practice—a mathematical curiosity rather than a business tool. Algorithmic improvements and specialized hardware are now collapsing that performance gap, carrying FHE across the threshold from research prototype to production deployment and dissolving tradeoffs that have constrained data sharing since Bletchley Park and before.

Return to the oncology researcher. With FHE, she can train a predictive diagnostic model on data from many hospitals, secure in the knowledge that none of that data leaves any hospital unencrypted. No patient record ever leaves institutional control. No privacy regulation is violated. The math guarantees what policy and promises cannot: the data was never exposed because it was never decrypted.

The same architecture transforms financial fraud detection. A consortium of banks can pool transaction patterns, identify coordinated criminal networks, and share threat intelligence—all without any bank revealing its customers’ behavior or its proprietary risk algorithms. The computation happens on encrypted data; only the security-relevant outputs emerge.

The End of Zero-Sum

We are still living with the information security paradigms of 1943, accepting as inevitable a tradeoff that technology has now rendered obsolete. Every medical insight not discovered because hospitals couldn’t share data, every fraud ring that persisted because banks couldn’t collaborate, every policy that failed because agencies worked from incomplete pictures—these are the costs of a zero-sum game we no longer need to play.

The organizations that recognize this inevitable sea change first—that understand FHE not as a cryptographic curiosity but as a fundamental change in what collaboration can mean—will define the next era of data-driven decision making. 

The secret that couldn’t be used is becoming the secret that never needs to be kept.
