10 changes: 8 additions & 2 deletions src/components/CaptionedImage.astro
@@ -3,7 +3,7 @@ import { Image } from 'astro:assets'
 import type { ImageMetadata } from 'astro'
 
 interface Props {
-  src: ImageMetadata
+  src: ImageMetadata | string
   alt: string
   caption: string
   class?: string
@@ -18,6 +18,12 @@ const {
 ---
 
 <figure>
-  <Image {src} {alt} class={className} />
+  {
+    typeof src === 'string' ? (
+      <img src={src} alt={alt} class={className} />
+    ) : (
+      <Image {src} {alt} class={className} />
+    )
+  }
   <figcaption>{caption}</figcaption>
 </figure>
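
For context, the union type means callers can now pass either a bundled asset or a raw URL string. A minimal usage sketch in MDX, assuming the same import style the pt-1 post uses below (the raster-image path is borrowed from the frontmatter placeholder and is illustrative):

import CaptionedImage from '@/components/CaptionedImage.astro'
// an SVG imported with ?url resolves to a plain string, so it takes the img branch
import aiCycle from '../../assets/blog_posts/ais-compounding-disinterest-pt-1/ai-recommendation-cycle.svg?url'
// a raster import resolves to ImageMetadata, so it keeps going through astro:assets' Image
import hero from '../../assets/images/your-image.jpg'

<CaptionedImage src={aiCycle} alt="Feedback loop diagram" caption="String src renders via a plain img tag" />
<CaptionedImage src={hero} alt="Hero photo" caption="ImageMetadata src stays on the optimized Image path" />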
34 changes: 34 additions & 0 deletions src/content/blog/.ais-compounding-disinterest-pt-2.mdx
@@ -0,0 +1,34 @@
---
title: "AI's Compounding Disinterest: Pt 2"
description: ''
pubDate: '2026-04-08'
# updatedDate: '2026-04-08'
# heroImage: '../../assets/images/your-image.jpg'
---

## The Broken Cycle

Software development used to have the same talent cycle as any other field. Your company has junior and senior devs.

- Devs leave the company via attrition, retirement, etc.
- You hire new juniors
- Juniors grow into seniors
- Seniors help level up the juniors
- Repeat

With the advent of more capable models and tools like Claude Code, many companies have completely stopped hiring juniors and shifted their focus to having senior devs work with AI as if the AI were the juniors. The senior devs are tasked with more orchestration, architectural decision-making, requirement-gathering, PR reviews, etc, and the models handle writing most of the code.

But thinking even one step further reveals how we're trading short-term gains for long-term losses. When we stop planting seeds, we rob ourselves of next year's harvest. We're borrowing money from the future with no way to pay it back. If we don't hire and train juniors now, especially while we still have senior devs who learned how to code BEFORE the advent of AI coding assistants, we will have no one to take the reins when those seniors leave the industry. Not only that, but even if we realize our mistake years from now and try to start hiring juniors again, we'll face two major issues.

1. There will be far fewer juniors to hire. Who would study and train to enter a career that is not hiring?
2. At that point, we may only have AI-native devs left to train the juniors.

If we only train juniors on how to work with Claude Code and not on the underlying fundamentals, they'll be extremely reliant on AI tooling to do their everyday work. What happens when there's an outage? What happens if it becomes financially impractical to purchase these tools for every dev, especially when they're using them for ALL of their work? What happens when they encounter bugs that only the keen eye of a seasoned dev who has written the code by hand would catch?

> "And you, the human in the loop — the reverse centaur — you have to spot this subtle, hard to find error, this bug that is literally statistically indistinguishable from correct code. Now, maybe a senior coder could catch this, because they've been around the block a few times, and they know about this tripwire.

> But guess who tech bosses want to preferentially fire and replace with AI? Senior coders."

- Cory Doctorow in [Reverse Centaur](https://doctorow.medium.com/https-pluralistic-net-2025-12-05-pop-that-bubble-u-washington-8b6b75abc28e)

To make things worse, not only are we putting a halt to planting the seeds of new devs, but we're simultaneously ripping out the fully-grown senior devs in the name of "doing more with less". On top of that, we're relying on Claude too much, which prevents us from practicing our craft and keeping our skills sharp.
46 changes: 46 additions & 0 deletions src/content/blog/ais-compounding-disinterest-pt-1.mdx
@@ -0,0 +1,46 @@
---
title: "AI's Compounding Disinterest, Pt 1: The Echo Chamber of the Status Quo"
description: "AI coding assistants default to what dominated their training data, creating a feedback loop that compounds over time. Here's what that means for technical innovation — and how to break out of it."
pubDate: '2026-04-08'
heroImage: '../../assets/blog_posts/ais-compounding-disinterest-pt-1/won-young-park-zn7rpVRDjIY-unsplash.jpg'
---

import CaptionedImage from '@/components/CaptionedImage.astro'
import aiCycle from '../../assets/blog_posts/ais-compounding-disinterest-pt-1/ai-recommendation-cycle.svg?url'

_Photo by <a href="https://unsplash.com/@pefont?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Won Young Park</a> on <a href="https://unsplash.com/photos/a-worn-spiral-staircase-with-dark-wood-and-faded-designs-zn7rpVRDjIY?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>_

I can't stop thinking about an AI feedback loop lately. You ask AI to create a website. It picks React - not because it weighed every option, but because React dominated its training data. As more people use AI coding assistants, they'll build more, which means more React apps. When we train new models, what will their training data contain? React apps as far as the eye can see. So when someone asks those new models what to build with, guess which framework will sit at the top of the most statistically probable responses.

<CaptionedImage
src={aiCycle}
alt="The AI recommendation cycle: popular tools dominate training data, AI recommends what it knows, developers build with AI's choices, more of the same enters the training pool — and the bias strengthens each time around."
caption="An infinite feedback loop of recommendations"
/>

Don't get me wrong, I love React, but it's just a stand-in here for any technology that was popular in, say, 2023/2024, when many of the current models were trained. Kubernetes everywhere. Python scripts when bash will do. Sticking with what you know and never branching out has always been a problem, but AI exacerbates and compounds it when future models are trained on past models' decisions and output.

The focus around AI has largely been how it'll enable more **product innovation**. How we can build quicker, ship faster, and turn ideas into reality in days instead of weeks or months. That _may_ be true, but what about **technical** innovation? New products will need new frameworks, technologies, and paradigms that enable them to get built. Without technical innovation, product innovation will eventually be stifled. Right now, <mark>why would anyone build the next big frontend framework if AI will never recommend it?</mark>

Even if a dev knew they wanted to use some new framework or technology, they may just comply in advance, knowing that Claude will say another framework has better support, or that Claude simply knows how to work with it better. Plenty of devs are already saying we shouldn't care about these kinds of choices at all: guide the architecture, but leave these decisions and the actual code-writing to Claude and tools like [Gastown](https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16dd04) — an orchestration platform for running dozens of AI agents in parallel with minimal human involvement. This isn't a bug; it's a feature of the stage of AI-assisted dev we're in, which makes it scarier. If you let AI make those decisions, it will surely use what it "knows" best.

Where will new frameworks, tools, stacks, etc. come from? AI won't choose them unless prompted to do more research, and we (humans) may see that recursive feedback loop looming and decide it's not even worth building that new tool if AI will never choose it.

And yes, we can always tell Claude to research the best tools for the job or even suggest the tools we want it to use, but:

1. I don't believe that the majority of AI users will do that. The path of least resistance will win at scale - especially for all the people who can suddenly create software that they wouldn't have been able to on their own.
2. To get AI to understand your new tooling and pick it up, the documentation needs to be easily readable by AI, which, [as we've seen with TailwindCSS](https://github.com/tailwindlabs/tailwindcss.com/pull/2388#issuecomment-3717222957), isn't great for current business models. Many rely on advertising (a business model that should die off IMO, but that's a separate discussion), selling premium support, or selling extra products, like Tailwind's UI Kit. All of those depend on getting human eyes on a web page, which goes away if AI is looking up documentation for us or building all our components on its own.

So what do we do?

Right now, I'm not entirely sure how we fix the systemic issues that I see looming, but there are some things we can do to break out of the loop ourselves.

1. Use the [Research, Plan, Implement](https://www.alexkurkin.com/guides/claude-code-framework) framework when working with AI code assistants.

   Being intentional about how we use these tools makes a big difference. The research phase is where the loop breaks — instead of letting AI reach for what it already knows, you're forcing it to evaluate actual options for your specific problem. It might still land on React. But at least you made a choice instead of inheriting one.

2. If you're building a new tool or framework, be stubborn about getting it visible. Share it, write about it, get it into real codebases. Think of it as a new kind of SEO — instead of optimizing for search engines, you're optimizing for agents: AI-readable documentation, MCP servers that expose your library's functionality, and things like agents.txt so AI tools know your project exists (a rough sketch follows this list). That said, don't cater exclusively to agents. Humans still need to read your docs, understand your API, and choose to adopt your tool in the first place. The goal is both. Discoverability for agents is its own discipline now, and nobody's cracked the monetization side of it yet — premium MCP tiers might be part of the answer.

3. Talk about the loop in the open. We need more people aware of these feedback cycles that are happening with agents and between agents and humans. The more input you give an agent, the more you can pull it back from returning the statistically average response and get it to do better quality work.
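
To make the agent-facing piece of item 2 concrete, here's a rough sketch of an index file you might serve from your project's root. This is hypothetical: conventions like agents.txt / llms.txt are still informal, so every field name below is illustrative and the URLs are placeholders.

# agents.txt (hypothetical; no formal spec exists yet)
name: my-framework
summary: A reactive UI framework with no virtual DOM
docs: https://example.com/docs/llms-full.txt   # single-file, AI-readable docs dump
mcp: https://example.com/mcp                   # MCP server exposing the library's functionality
repo: https://github.com/example/my-framework
humans: https://example.com/docs               # human-facing docs stay first-class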

There's another loop I'm seeing that's related to this — one that's less about the code we write and more about who we're training to write it. I'll get into that in part 2.
5 changes: 3 additions & 2 deletions src/styles/global.css
@@ -115,6 +115,7 @@ h5 {
 strong,
 b {
   font-weight: 700;
+  @apply text-accent-primary;
 }
 p {
   margin-bottom: 1em;
@@ -259,11 +260,11 @@ hr {
 
 /* Link styling */
 a {
-  @apply text-accent-primary decoration-accent-muted underline transition-all duration-200;
+  @apply text-accent-primary decoration-accent-primary underline decoration-2 transition-all duration-200;
 }
 
 a:hover {
-  @apply text-accent-hover decoration-accent-primary;
+  @apply text-accent-hover decoration-accent-hover decoration-4;
 }
 
 /* Selection color */
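
For anyone not fluent in Tailwind, the new link rules expand to roughly the plain CSS below. This is a sketch, not compiled output: the accent-* utilities are theme tokens defined elsewhere in this repo, shown here as placeholder custom properties.

/* approximate expansion of the new @apply rules */
a {
  color: var(--color-accent-primary);                 /* text-accent-primary */
  text-decoration-line: underline;                    /* underline */
  text-decoration-color: var(--color-accent-primary); /* decoration-accent-primary */
  text-decoration-thickness: 2px;                     /* decoration-2 */
  transition-property: all;                           /* transition-all */
  transition-duration: 200ms;                         /* duration-200 */
}

a:hover {
  color: var(--color-accent-hover);                   /* text-accent-hover */
  text-decoration-color: var(--color-accent-hover);   /* decoration-accent-hover */
  text-decoration-thickness: 4px;                     /* decoration-4 */
}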
@@ -0,0 +1 @@
Every time we build with AI's recommendations, we feed the next model's training data. The bias doesn't stay flat. It compounds. I wrote about why that's a problem for technical innovation, and what we can do about it. [link]
@@ -0,0 +1,7 @@
Every time you ask an AI coding assistant what framework to use, it reaches for what it already knows.

That seemed fine at first. But the more I thought about it, the more unsettling it got.

We keep building with AI's choices. Those choices end up in the next model's training data. The bias doesn't stay flat. It compounds. At some point you have to ask: who's going to build the next big framework if AI will never recommend it?

I wrote about this loop and what we can actually do about it. Link in the comments.