Self-conscious Hammers
There is a lot of excitement — and anxiety — around the idea that AI models are now “training their own successors.” This framing makes for good headlines but muddled discussion. Let’s break it down.
AI models are not “training their own successors.” Yes, they are now good enough at coding to be used as tools to write and optimize training code for future models. That is an impressive development and an important step. I am a fan. But it’s not self-replication.
You can use a hammer to forge a new hammer. But we wouldn’t say the hammer became self-replicating. Today’s models are being used as tools in human-designed pipelines. The goals, architectures, data selection, and evaluation criteria are still defined by humans and institutions. Those choices matter. These systems are still socio-technical, not self-directed.
The real story here is not autonomous reproduction. It’s who makes these socio-technical decisions: Who holds the hammer? That raises serious questions about power, accountability, and institutional responsibility.
Science fiction metaphors won’t help us answer them.