Slop In A Bucket
LLM-generated code is everywhere. The speed is seductive, the YOLO mode irresistible. Despite the horrible security problems
this entails, I also like to live --dangerously.
I’ve been thinking about whether you can vibe code complex applications and still ship secure code. Here are some lines of
thought I’ve been exploring.
Fragment 1: Linters as LLM Guardrails
The main defense is linters and static analysis tools. They’re deterministic, which matters enormously. You can integrate them into CI or commit hooks to block code that doesn’t meet standards. Better yet, high-quality tools often include suggested fixes or even auto-fix flags that can be applied directly to the codebase. With reasonably high-signal information like that, you can run the linter, have the LLM look at the output, and watch it fix its own mistakes.
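The determinism point can be made concrete with a toy sketch: a single lint rule written against Python's standard-library `ast` module. Real tools like ruff, Bandit, or Semgrep ship hundreds of rules like this, many with suggested or automatic fixes; this is only an illustration of why the check is deterministic.

```python
# Toy illustration of a deterministic lint rule: flag calls to eval()/exec().
# Same input always produces the same findings, which is what makes the
# run-linter -> show-LLM -> fix loop reliable.
import ast

DANGEROUS_CALLS = {"eval", "exec"}

def lint(source: str) -> list[str]:
    """Return 'line:col message' findings for dangerous call sites."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id in DANGEROUS_CALLS
        ):
            findings.append(f"{node.lineno}:{node.col_offset} avoid {node.func.id}()")
    return findings

snippet = "result = eval(user_input)\nprint(result)\n"
for finding in lint(snippet):
    print(finding)  # -> 1:9 avoid eval()
```

The output is exactly the kind of high-signal, machine-readable report you can paste back into the model's context and ask it to resolve.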
Ask an LLM to code an app from scratch and run it against common linting tools. You’ll find dozens of errors. The LLM can look at those reports and fix the code it just made. Suddenly, even though it used dangerous patterns, it’s able to correct the ones we can lint for. It’s like having a sloppy chef who’ll clean up after themselves if you point at the spills.
SlopSquatting is interesting here: LLMs hallucinate plausible but nonexistent package names, and attackers can register those names on public registries to serve malware to anyone who installs them. It’s a new bug class introduced by LLMs, and also one we can address via linting, since a rule that checks imports against a list of known dependencies catches the hallucination mechanically.
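Here’s one way such a check could look, sketched under the assumption that you maintain an allowlist of dependencies you’ve actually vetted. The package names below are made up for illustration, not real findings.

```python
# Hedged sketch of a slopsquatting check: compare the modules an LLM
# imported against vetted dependencies plus the stdlib. Anything left
# over is either a new dependency to review or a hallucination.
import ast

APPROVED = {"requests", "numpy", "flask"}  # hypothetical vetted dependency set
STDLIB = {"os", "sys", "json", "ast"}      # stdlib imports are fine

def unknown_imports(source: str) -> set[str]:
    """Return top-level imported module names not in the stdlib or allowlist."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found - APPROVED - STDLIB

generated = "import requests\nimport requezts_toolkit\n"  # hallucinated name
print(unknown_imports(generated))  # -> {'requezts_toolkit'}
```

In practice you’d resolve the leftovers against the actual registry rather than a hardcoded set, but the shape of the rule is the same: a deterministic diff between what was imported and what you trust.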
Fragment 2: The Limits of Linting
We need lint rules with high signal-to-noise ratios and suggested fixes. The more patterns we can detect and auto-correct, the better we can constrain generated code. But we get no security guarantees for unlintable problems. Logic errors, for instance, or authentication flows that span multiple files.
Linters raise the floor of quality on a codebase generated by an LLM, but vibe coding is going to remain an insecure process. The main problem is keeping context and intent across several distinct files. It’s easy to mess up an authentication system in a web app because you have to track state across user endpoints, authorization rules, routes, database schemas, refresh tokens, lockouts, and so on. It’s a bunch of complicated mechanisms glued together, and it’s easy to miss one line or make changes in one set of commits that undo work in another set.
Fragment 3: The Problem of the Missing Mind
During a security review of a complex codebase, it’s common to go in and not understand how things work. But you can usually piece together the complex features with enough time and with the implicit trust that someone got this to work well at some point. There’s some sense of order lurking behind even a chaotic codebase. It may be that no single person on the team knows how it all works, but you get the blind men touching the elephant thing. Talk to enough people, read enough documentation, and you can usually assemble the whole from the parts, even if no one person sees the complete picture.
This won’t be the case with LLM-generated code. There is no intent creating a through-line in the implementation. No original architect who made deliberate choices, no accumulated wisdom from code reviews, no debugging sessions that revealed why something had to be done a particular way. Just slop in a bucket, tokens arranged in plausible patterns. The linters catch what they can, but the ghost in the machine isn’t haunted by understanding—just by training data.
Fragment 4: Everything is Perl Now
I have a soft spot for Perl despite its shortcomings. Maybe I love it because of what other people consider its shortcomings.
It’s fun to develop your own little dialect and completely forget about it ten minutes later. It’s infamous as a write-only language because no one can read its arcane symbols or figure out what the hell anyone else is doing.
But now a ton of new code is getting generated this way. The harder you vibe, the less you read the code. I have
entire applications where I haven’t yet opened all of the files. Code gets written, not read. Vibe-coding is a continuum,
and I expect the ratio of vibe-to-artisanal code to increase as the LLMs get better and as more people get comfortable with
the tools. Everything is Perl now.