Does AI make tech debt more expensive?

Yes, AI makes existing tech debt more expensive.
When you try to manage tech debt with AI, what you get back is… garbage. Confident, well-formatted, but utterly wrong automated garbage.
Sound familiar? I found a discussion on Hacker News that laid bare a simple yet uncomfortable truth: AI makes tech debt more expensive. It doesn’t just struggle with messy legacy code; it actively punishes you for having it.
The good, the bad, and the ugly of AI in your codebase
Let’s get one thing straight: AI coding tools are genuinely useful. At Pragmatic Coders, we’re all-in on them – in fact, 83% of our developers use AI daily for writing code. But we’ve learned that they have a very specific comfort zone.
The good: AI loves yak shaving
When you’re working on a modern, clean codebase, AI assistants are fantastic. They’re like a super-powered intern who never gets tired of the boring stuff. Developers on Hacker News praised AI for:
- Automating the grunt work: cloverich said it best: “LLM saves me time by being excellent at yak shaving, letting me focus on the things that truly need my attention”.
- Writing unit tests: “It’s a huge motivator,” said MarcelOlsz, to write a piece of code knowing you can “send it to the LLM to create some tests and then see a nice stream of green checkmarks”.
- Getting you started: The most effective users of AI treat it not as an infallible oracle but as an intelligent, yet sometimes naive, junior partner. The common workflow is to give the AI a clear interface and let it “take the first crack at it”. The key is human review. “I almost never end up taking the LLM’s autocompletion at face value,” cheald clarified, “but having it written out to review and tweak does save substantial amounts of time”.
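To make that hand-off concrete, here’s a minimal sketch of what a “clear interface” can look like before the assistant takes its first crack. The Invoice type and summarize_invoices function are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass


@dataclass
class Invoice:
    """Hypothetical domain object, used only for illustration."""
    customer_id: str
    amount_cents: int


def summarize_invoices(invoices: list[Invoice]) -> dict[str, int]:
    """Return the total amount per customer, in cents.

    The signature and this docstring are the "clear interface" handed to the
    assistant; the body below is the kind of first draft it might produce,
    which a human then reviews and tweaks.
    """
    totals: dict[str, int] = {}
    for invoice in invoices:
        totals[invoice.customer_id] = totals.get(invoice.customer_id, 0) + invoice.amount_cents
    return totals
```

The human defines the contract and reviews the result; the model only fills in the middle.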
The bad: The part where AI goes off the rails
I let AI write the parsing and hoooo boy do I regret it
The second you step outside that clean, predictable world, things get ugly. As perrygeo put it, “LLMs make the easy stuff easier, but royally screws up the hard stuff”.
- The “Weirdness” Wall and the Hallucination Trap: An LLM’s competence is bounded by its training data. “As soon as your codebase gets a little bit ‘weird’ (i.e. trying to do anything novel and interesting), the model chokes, starts hallucinating, and makes your job considerably harder”. This is because, at their core, LLMs don’t understand code; they are statistical models predicting the next most likely word or token. “The fundamental problem with LLMs is that they follow patterns, rather than doing any actual reasoning” (perrygeo). When faced with new or complex problems, they lack patterns to solve them. Instead, they just give you confident nonsense.
- Subtle Bugs and Debt Generation: The failures of AI are not always obvious. leptons wrote about asking an AI for code to list all objects in an S3 bucket. The generated code worked in testing but had a critical flaw: it didn’t handle pagination, so it would fail in production as soon as the bucket held more than 1,000 objects. The code AI produces is often “irregular, inconsistent” and laser-focused on solving the immediate prompt – creating a “shortest path rather than the most forward-looking or comprehensive” solution. The result is a new layer of badly architected code. As one commenter in the Hacker News thread put it: “I let AI write the parsing and hoooo boy do I regret it”.
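To see how subtle that failure mode is, here’s a hedged sketch using boto3 (AWS credentials are assumed to be configured; the bucket name is whatever you pass in). The first function mirrors the naive code an assistant tends to hand back; the second is the paginated version that keeps working past 1,000 objects:

```python
import boto3

s3 = boto3.client("s3")


def list_keys_naive(bucket: str) -> list[str]:
    # The kind of code an assistant tends to produce: it works in testing,
    # but list_objects_v2 returns at most 1,000 keys per call.
    response = s3.list_objects_v2(Bucket=bucket)
    return [obj["Key"] for obj in response.get("Contents", [])]


def list_keys_paginated(bucket: str) -> list[str]:
    # The production-safe version: the paginator keeps fetching pages
    # until every object in the bucket has been returned.
    keys: list[str] = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
    return keys
```

Both versions pass a quick manual test on a small bucket; only one of them survives contact with production.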
“I think of LLMs as a really smart junior developer full of answers, half correct, with zero wisdom but 100% confidence.” – perrygeo
Is it tech debt or code maturity?
But what actually is “tech debt”? What looks like a mess is often business knowledge encoded in software. dkdbejwi383 argued much of it is a “sign of maturity,” the “scars” and “little patches of weirdness” that exist because business rules rarely fit clean patterns.
This phenomenon is described by the principle of Chesterton’s Fence: don’t tear down a fence until you understand why it was put up in the first place.
“I recently watched a team speedrun this phenomenon in rather dramatic fashion. They released a ground-up rewrite of an existing service to much fanfare, talking about how much simpler it was than the old version. Only to spend the next year systematically restoring most of those pieces of complexity as whoever was on pager duty that week got to experience a high-pressure object lesson in why some design quirk of the original existed in the first place.
Fast forward to now and we’re basically back to where we started. Only now they’re working on code that was written in a different language, which I suppose is (to misappropriate a Royce quote) ‘worth something, but not much.’” – bunderbunder
An LLM is like a junior engineer: it sees the fence, calls it waste, and suggests removal, but it’s unaware of the bug fixes and domain logic behind it. Joel Spolsky warned in “Things You Should Never Do” that throwing away old code also discards years of fixes and knowledge. AI cannot tell accidental complexity (real tech debt) from essential complexity (business scars).
The real cost of tech debt isn’t what you think – it’s worse
The true cost of your tech debt in the AI era isn’t just slower development or more bugs. It’s the opportunity cost. Your competitors with clean code are attaching an AI rocket booster to their development. While they’re blasting off, your team is stuck in the mud, trying to explain the complexities of your legacy system to a confused algorithm.
That’s the real price of your tech debt: getting left in the dust.
A pragmatic plan to get back in the race
If you’re waiting for a model to magically understand your unique, complex problem, “you will be waiting until the heat death of the universe” (bob1029). Here’s a pragmatic plan for putting AI to work on your technical debt now.
- Stop waiting for a magic bullet. The AI isn’t going to save you. You need a plan led by human experts who understand your business and can make smart, strategic decisions about what to fix and what to leave alone.
- Clean your house, but do it smart. Don’t fall into the “big bang rewrite” trap; the team in bunderbunder’s story learned that lesson the hard way, and so did TSB Bank. Instead, refactor iteratively: target the most painful parts of your system first and untangle them piece by piece. Approaches like the Strangler Fig pattern, mentioned in the thread, modernize a system by gradually replacing old components with new, clean services (there’s a minimal sketch of the idea after this list).
- Remember who the code is for. Good code isn’t for AIs; it’s for people. The best strategy for collaborating with AI “turns out to be the same as for collaborating with humans”. If your own developers can’t understand a piece of code, you can’t expect a machine to. The data backs this up: on average, AI generates 100 suggestions per developer daily, but only 30% are accepted. That human filter is everything.
- Build a “scaffold of quality” first. Before you can refactor effectively, you need confidence, and confidence comes from a robust foundation of quality. As NitpickLawyer pointed out, a mature codebase with “strong test coverage, both in unit-testing and comprehensive integration testing” can be refactored with confidence. Paradoxically, AI can be genuinely useful here: once you define the testing strategy, an LLM can quickly build out the test suites that make refactoring safer.
- Document the why. That weird piece of code that handles a bizarre edge case for your most important client? It needs a comment. A clear, human-readable explanation of why it exists is the most valuable gift you can give to future developers – and their AI assistants.
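For the Strangler Fig pattern mentioned above, here’s a minimal sketch of the core idea, with hypothetical service URLs and route names: a thin facade sends the handful of migrated endpoints to the new service and leaves everything else on the legacy system, so the old code is strangled one route at a time:

```python
# Hypothetical base URLs; in practice these would come from configuration.
LEGACY_BASE = "https://legacy.internal.example.com"
NEW_BASE = "https://billing-v2.internal.example.com"

# Grows one entry at a time as pieces of the old system are replaced.
MIGRATED_ROUTES = {"/invoices", "/invoices/export"}


def resolve_backend(path: str) -> str:
    """Return the base URL that should serve this request."""
    return NEW_BASE if path in MIGRATED_ROUTES else LEGACY_BASE


assert resolve_backend("/invoices") == NEW_BASE              # already migrated
assert resolve_backend("/reports/quarterly") == LEGACY_BASE  # still on legacy
```

Once a legacy route stops receiving traffic, it can be deleted with far less drama than a big-bang rewrite.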
Need help?
AI is a mirror of your old code. It’s showing you exactly where your problems lie. The good news is, fixing that tech debt is your single biggest opportunity to unlock a new level of innovation and competitiveness.
If your AI assistant is struggling with your codebase, that’s a sign. Reach out to us and we’ll help you plan the cleanup.
FAQ
What are the types of tech debt?
Technical debt can be categorized by its origin, nature, and business risk. It can be inherited from legacy systems or caused by rapid growth, and it can be deliberate or inadvertent. There is even technical debt specific to a given industry. In fintech, particular business risks introduce unique types of technical debt, such as regulatory and compliance debt, security debt, and integration debt, as detailed in this article: 16 Types of Technical Debt in FinTech.
Can AI tools handle legacy codebases effectively?
AI tools struggle with legacy codebases due to inconsistent patterns, poor documentation, and domain-specific logic. Since most AI coding assistants rely on pattern recognition, they often misinterpret or hallucinate solutions in non-standard environments, leading to unreliable or broken code.
What is Chesterton’s Fence in software engineering?
Chesterton’s Fence is a principle that advises against removing existing code or structures without understanding their purpose. In software engineering, it means you shouldn’t refactor or delete “weird” or complex code until you understand why it was written that way—often it encodes valuable business rules or handles critical edge cases.
Why do AI coding assistants struggle with tech debt?
AI assistants struggle with tech debt because they are trained on clean, conventional code patterns. Legacy systems often contain irregularities, implicit knowledge, or undocumented logic. Without context, AI generates superficial fixes or incorrect code that can worsen the problem rather than solve it.
How can I refactor legacy code with AI safely?
To refactor legacy code safely using AI:
– Establish strong test coverage first (unit + integration tests); a minimal characterization-test sketch follows this list.
– Use AI for low-risk tasks like writing tests or simple refactors.
– Apply the Strangler Fig pattern to incrementally replace old code.
– Ensure human review of all AI-generated code.
– Document code decisions to aid both humans and AI.
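For that first step, one practical way to build confidence is a characterization (golden-master) test that pins down today’s behavior before anyone, human or AI, touches the code. The sketch below uses pytest; calculate_discount stands in for a real legacy function, and the expected values are made up for illustration:

```python
import pytest


def calculate_discount(order_total: float, loyalty_years: int) -> float:
    """Stand-in for the legacy function you would normally import."""
    if loyalty_years >= 10:
        return order_total * 0.125  # the "weird" tier nobody remembers adding
    if loyalty_years >= 1:
        return order_total * 0.05
    return 0.0


@pytest.mark.parametrize(
    "order_total, loyalty_years, expected",
    [
        (100.0, 0, 0.0),
        (100.0, 3, 5.0),
        (100.0, 10, 12.5),
    ],
)
def test_discount_matches_current_behavior(order_total, loyalty_years, expected):
    # Expectations are captured from what the code does today,
    # not from what anyone thinks it "should" do.
    assert calculate_discount(order_total, loyalty_years) == pytest.approx(expected)
```

With tests like this in place, an AI-assisted refactor that silently changes behavior fails fast instead of shipping.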
What are the risks of using AI in legacy codebases?
– Hallucinations: AI generates incorrect but confident answers.
– Subtle bugs: Issues like missing pagination or edge-case handling go undetected.
– New tech debt: AI may favor quick fixes over sustainable design.
– Misinterpretation of logic: Without context, AI often removes or rewrites valuable code structures.
What are the benefits and limitations of AI coding assistants?
Benefits:
– Automate repetitive tasks (e.g., boilerplate code, unit tests).
– Speed up prototyping and refactoring.
– Help junior developers explore solutions.
Limitations:
– Struggle with complex, undocumented, or legacy code.
– Generate incorrect code without understanding context.
– Require human oversight to ensure quality and accuracy.
What’s the difference between tech debt and code maturity?
Tech debt refers to code that needs improvement due to shortcuts, poor design, or lack of refactoring. Code maturity, on the other hand, includes intentional complexity built up over time to address real-world edge cases and business needs. Not all “messy” code is debt—some is essential for system stability and correctness.