Security is PvPvE
Like much of the gaming world lately, I’ve been playing a lot of Arc Raiders. The game is
described as PvPvE: Player vs. Player vs. Environment. Players explore a shared
post-apocalyptic landscape to scavenge rare materials. They can cooperate or attack
each other, stealing whatever loot their opponents have collected. But patrolling these
abandoned spaceports and half-buried cities are killer robots that attack everyone regardless of alliance or intent.
The field of computer security works the same way, but most people think of it as
exclusively Player vs. Player: will the security engineer outsmart the hacker, or vice versa?
We define our philosophies as white hat or black hat and describe our activity as
red or blue teaming.
But security work also has a PvE element. When securing systems, we struggle not just against
agents with opposed intentions, but against the environment itself. The computers are trying to
kill us. We don’t need to ascribe intent here; the hostility is observable in their behavior.
This makes sense when you realize that computer security is, for the most part, a subset of software quality. It shouldn’t be
possible to hack software of the highest quality. (Social engineering and wrench attacks
sidestep the software entirely, so they don’t count.)
Problems like denial of service, technical debt, or poorly designed software are indistinguishable
from “real” security bugs. Unreliable, calcified systems are extremely difficult to secure because
it’s not always possible to test their behavior and prove their security properties.
The history of computing is full of these environmental failures. Early networks weren’t encrypted
because they assumed trusted users on trusted infrastructure. Multi-user operating systems shipped
without password protection because designers couldn’t imagine adversarial access. The environment
changed, but the systems couldn’t adapt without heavy intervention.
The same pattern plays out in modern systems. Flash loan attacks on Solidity contracts weren’t possible until DeFi protocols composed in ways their original developers never anticipated. Invariant-based design could have prevented this: not by predicting flash loans specifically, but by encoding constraints that hold regardless of the execution environment.
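To make that concrete, here’s a minimal sketch of the idea in Solidity. Everything in it is hypothetical: the contract, its share model, and the specific invariant are invented for this post rather than drawn from any real protocol. The technique is simply to assert a global constraint after every state-changing entry point, so that it holds no matter which sequence of calls produced the state.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Toy vault for illustration only. The invariant: every issued
// share must be backed by ether the contract actually holds.
contract InvariantVault {
    mapping(address => uint256) public shares;
    uint256 public totalShares;

    // Re-check the constraint after the function body runs. The check
    // says nothing about how callers compose with this contract; it
    // only constrains the state they can leave behind.
    modifier holdsInvariant() {
        _;
        require(address(this).balance >= totalShares, "invariant: shares unbacked");
    }

    function deposit() external payable holdsInvariant {
        shares[msg.sender] += msg.value;
        totalShares += msg.value;
    }

    function withdraw(uint256 amount) external holdsInvariant {
        shares[msg.sender] -= amount; // Solidity >= 0.8 reverts on underflow
        totalShares -= amount;
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
    }
}
```

A flash loan can still bounce between deposit and withdraw in orders the author never imagined, but no sequence of calls can end a transaction with the vault owing more than it holds: the invariant check reverts the whole attempt.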
Why do these environmental failures keep happening? Because the environment actively fights you.
Brittle code, legacy systems, poor abstractions, missing tests: a codebase without meaningful test coverage or proactive invariant assertions can’t be refactored safely. A system built on fragile assumptions breaks under novel inputs. A service with ten layers of dependencies fails in ways no one can predict. These aren’t just obstacles
to security work. They are security problems.
Yet this isn’t how most people think about security, because environmental issues are less obvious and less sexy than a major compromise. They also fit poorly into the offensive security mindset, which is oriented toward time-boxed evaluations: pentests, code audits, contests, and bug bounties. Environmental problems are more like a disease that spreads slowly and only becomes visible once it’s already lethal.
The PvPvE frame lacks the narrative drama of hacker-versus-hacker showdowns, but it better
explains the actual work. You’re not just fighting attackers. You’re fighting the computers themselves.
And if you ignore the environment, you’re already halfway to losing the game.