ta.fo Journal

Scale Is Not Essence

I launched this journal and immediately posted three science pieces in a row, front-loading the heavy topics under the pretext of balance. My actual interests these days are ordinary life, parenting, music, and photography, and I plan to write about them plenty. Still, the trilogy left something open, so let us close the technical series with one more.

In the previous parts, I laid out my skeptical view of simulation cosmology and the AGI discourse. I looked at the "Singularity Is Coming" scene in Korea as a social phenomenon while using physics to push back on the fantasy of a fully immersive virtual world. However, I have not yet treated AGI itself technically.

I debated whether to write this final part at all. Physics is a cleaner fight because it has laws, constants, and consequences. Once you cross into computer science, especially AI, the conversation turns evasive. Everyone wants to talk about AGI, yet no one can explain what it is without smuggling in faith. There is no shared definition, no measurement, and no agreed target. This is not an accident. Right now, AGI is not a scientific term, but a marketing term.

Let us start with what we actually have in our hands. Large Language Models are mathematical structures trained to assign probabilities to the next token in a sequence. Whether they have a billion parameters or a trillion does not change what the system fundamentally is. Scale adds a layer of plausibility: the output gets smoother and more convincing, yet the underlying mechanism stays the same. Anyone paying attention sees the gap immediately. This architecture does not become intelligence simply by increasing its size.
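
To make that concrete, here is a minimal sketch of what "predict the next token" means: a softmax turns raw scores into a probability distribution, and generation is weighted sampling from it. The tokens and scores below are invented for illustration.

```python
import math
import random

def softmax(logits):
    # Turn raw scores into a probability distribution over tokens.
    m = max(logits.values())
    exps = {tok: math.exp(s - m) for tok, s in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores a trained model might emit after "The sky is".
logits = {"blue": 5.1, "clear": 3.8, "falling": 1.2, "soup": -2.0}
probs = softmax(logits)

# Generation is nothing more than repeated weighted sampling.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_token)
```

Everything a model "says" is drawn this way, one token at a time. Scale changes the quality of the scoring function; it does not change the loop.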

The LLM is the constant. The user is the variable.

Put the same model in front of different people, and their reactions split. Some mistake fluency for intelligence, while others recognize it as a trap. Some are seduced by the confident tone, while others check first whether any of it is actually tethered to the truth. Before you talk about AGI, you must understand what an LLM is and what it is not. Skip that work, and you are no longer evaluating the system. You are simply projecting your own belief.

The gap becomes obvious the moment you look at what the model is actually trained to do: continue text. This objective rewards plausibility, not correctness. It does not care about truth, logic, or causality. If those things appear in the output, they are statistical byproducts of the training data. If they vanished tomorrow, nothing inside the system would even notice.
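
A one-step sketch of that objective, with invented probabilities: cross-entropy measures how much probability the model put on whatever token the corpus contains next, and nothing else.

```python
import math

def step_loss(predicted_probs, corpus_next_token):
    # Cross-entropy for a single step: the only target is the token
    # that actually follows in the training text. Truth never enters.
    return -math.log(predicted_probs[corpus_next_token])

probs = {"round": 0.7, "flat": 0.3}

# The objective rewards matching the document, whatever it says.
print(step_loss(probs, "round"))  # ~0.36: good fit if the text says "round"
print(step_loss(probs, "flat"))   # ~1.20: the gradient now pushes toward "flat"
```

Nothing in this quantity distinguishes an encyclopedia from a conspiracy forum. The model minimizes it over both.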

Furthermore, even that limited competence does not compound the way true intelligence does. Training happens once and then stops; the weights are frozen. A conversation can feel like learning, but it is never a durable update, only a performance inside a temporary context window. Close that window, and the lesson evaporates entirely. A system that cannot reliably carry experience forward cannot accumulate a world model the way a human mind does.
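
A toy sketch of that distinction, with a stand-in for a frozen model; all names here are illustrative. The only mutable state is the prompt, and it dies with the session.

```python
# Why in-context "learning" is not a durable update, in miniature.

FROZEN_WEIGHTS = {"w": 0.42}  # fixed at training time, read-only thereafter

def generate(weights, prompt):
    # Inference is a pure function of (weights, prompt): a forward
    # pass reads the weights but never writes them.
    return f"reply(saw {len(prompt)} chars, w={weights['w']})"

session_context = []  # the only place a "lesson" can live

def chat(message):
    session_context.append(message)
    reply = generate(FROZEN_WEIGHTS, "\n".join(session_context))
    session_context.append(reply)
    return reply

chat("Please remember: my name is Lee.")
print(chat("What is my name?"))  # works only while session_context exists

session_context.clear()          # close the window
print(chat("What is my name?"))  # the "lesson" is gone; weights unchanged
```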

Also, the text used to teach an LLM is not the actual world. It is a compressed record of it, cleaned up for humans. The constraints that matter most for intelligence are learned through physical contact: perception, action, failure, and consequences. Without that continuous feedback loop, fluency can climb forever while real-world correctness stays flat.

The situation gets worse as the flow of training data changes. High-quality human text is finite; synthetic text spat out by AI is exploding. Models will increasingly learn from their own outputs. As errors and biases compound and diversity shrinks, the system structurally sinks into a swamp of self-reference. The surface may look polished, but the core inevitably erodes.
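
The literature calls this dynamic model collapse. The toy simulation below, where a Gaussian stands in for a model's output distribution, shows the mechanism: fit to your own samples, sample again, refit, and the tails quietly disappear. It is an illustration of the feedback loop, not a model of any real training pipeline.

```python
import random
import statistics

mu, sigma = 0.0, 1.0          # generation 0: fitted to "human" data
for generation in range(1, 101):
    # Each generation's "corpus" is sampled from the previous model.
    samples = [random.gauss(mu, sigma) for _ in range(50)]
    mu = statistics.mean(samples)       # refit on the synthetic corpus
    sigma = statistics.pstdev(samples)
    if generation % 25 == 0:
        print(f"generation {generation}: sigma = {sigma:.3f}")
```

The drift is structural: every finite refit underestimates the spread a little, and there is no fresh human data to pull the distribution back out.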

Physics does not negotiate with hype. Scaling buys its gains at exponentially escalating cost, not at a linear rate. Compute, memory, bandwidth, power, cooling, and hardware supply all climb much faster than the actual performance gains. You can call that progress if you want, but you cannot call it a paved road to a new kind of intelligence. "Possible" is fundamentally different from "sustainable".
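
Some back-of-envelope arithmetic makes the shape of that curve visible. Published scaling fits typically take a power-law form, loss proportional to compute raised to a small negative exponent; the 0.05 below is an assumption for illustration, not a measurement.

```python
# Assumed power law: loss ~ C**(-ALPHA). The exponent is illustrative;
# published fits report values in this small-exponent regime.
ALPHA = 0.05

def compute_multiplier(loss_cut):
    # Factor by which compute must grow to cut loss by this fraction.
    return (1.0 / (1.0 - loss_cut)) ** (1.0 / ALPHA)

for cut in (0.05, 0.10, 0.20):
    print(f"{cut:.0%} lower loss -> {compute_multiplier(cut):,.0f}x the compute")
# 5% -> ~3x, 10% -> ~8x, 20% -> ~87x
```

Each additional slice of quality costs an order of magnitude more of everything on that list. That is the opposite of a paved road.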

Let us return to the basics. If AGI is a scientific claim, it needs a strict definition. It needs a test, proper measurements, and falsification conditions that spell out what failure would look like. Without those, saying it will happen soon is not research; it is promotion. The current discourse skips that step entirely and repeats the word "soon" without any definition or test.

That is not science. That is faith.

#Critique #Dev #Science