Armchair Theorizing: Good and Bad

Daniil Gorbatenko
10 min read · Jun 12, 2023

What Mises and Searle can teach Eliezer Yudkowsky and other humans tempted by erroneous abstraction.

Ancient Greek philosophers came up with the theory that the whole world (even human bodies) consisted of four elements: water, air, fire and earth. They never rigorously explained how blood is like air beyond both supposedly being hot and wet, but nonetheless, this hopelessly metaphorical thinking dominated Western medicine until the 17th century.

To most people, the theory of four elements is just an ancient curiosity, but to me, it was always fascinating that people could consider such a ridiculous set of ideas the height of human intellectual achievement. How could even the Ancient Greek thinkers who came up with incredible insights about the world with the rudimentary tools they had believe in it?

The answer, of course, is that human minds (even the greatest ones) are often tempted to succumb to the false obviousness of pseudo-insights based on superficial similarity, especially when they form their theories through unaided chains of reasoning, or armchair theorizing.

Armchair theorizing and AI

And if you thought that armchair theorizing as a tool for constructing influential arguments was dead and buried, think again. Its most recent influential incarnation is the doomerist side of the AI safety debate, whose greatest champion is Eliezer Yudkowsky.

Yudkowsky spent decades formulating and refining his case in debates within a small circle of the so-called ‘rationalist’ community. He is completely convinced that AI development on the current trajectory will eventually result in a super-intelligence that will inevitably and quickly eliminate humanity.

The recent dramatic public unveiling of GPT3.5 and GPT4 and of competing consumer-facing large language models galvanized Yudkowsky. The quick, major improvement of GPT4 over GPT3.5 made him certain that, barring drastic measures, humans are already toast. And the emergence of a host of other people concerned about AI trends, including AI industry leaders like Bengio and Hinton, gave Yudkowsky's views a veneer of respectability.

Seizing his chance, Yudkowsky called for nothing less than a complete ban on AI research. He even went so far as to call for bombing major data centers into the ground if certain countries were to deviate from such a ban.

Is armchair theorizing always bad, though?

Someone could tell me, though: “Wait, not all armchair theorizing is bad. I know one clear example of it that you will recognize as correct and awesome: Mises’s economic calculation argument against socialism.”

The gist of the argument is as follows. Think of all the resources in a modern economy: all the humans with their different capabilities, all the machines, raw materials, etc. There is a potentially infinite number of ways one could combine these resources to produce stuff. How does one decide which resources to use for what? Worse yet, how does a socialist central planning board decide it for the economy as a whole?
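
To get a sense of the scale (a toy illustration of my own, not from Mises): even in a comically small “economy”, the number of possible allocation plans explodes beyond anything a planning board could survey.

```python
# Toy illustration (mine, not Mises's): if each of m distinct resources
# can be put to any one of n alternative uses, there are n**m possible
# allocation plans. Even a miniature "economy" is beyond enumeration.

def allocation_count(resources: int, uses: int) -> int:
    """Each resource is independently assigned to one of `uses` uses."""
    return uses ** resources

print(allocation_count(50, 20))  # 20**50, roughly 1.1e65 possible plans
```

The point is not the exact number, but that some common unit of comparison is indispensable for choosing among even a handful of these alternatives.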

Mises’s profound realization was that even if prices for consumer goods were preserved, in the absence of private property in the means of production and, consequently, a free market for capital goods, there would be no way to rationally compare resource uses. The central planning board would be running blind.

Even the USSR eventually wasn’t an example of the socialism Mises was discussing, because it could copy Western technology and see which goods people in the West wanted to consume. And still, the USSR was an economic disaster, producing tons of stuff no one needed and misallocating resources on a massive scale in other ways. It eventually shocked even Western economists by collapsing rapidly like a house of cards. It shocked my parents even more by turning their supposed savings (which weren’t backed by any real productive capacity) into vapor, when sums of money supposedly sufficient to buy a car turned out to barely suffice for a pack of chewing gum. Both Western economists and my parents could have seen it coming if they had read Mises.

Why is Mises’s argument an example of armchair theorizing, though? Well, Mises had no empirical evidence of how socialist economies operate in practice. When he published the book, let alone started writing it, the USSR hadn’t really implemented a planned economy. It only attempted price controls on grain.

And detailed information about what exactly was happening in the USSR was probably not available to Mises anyway.

And yet, Mises turned out to be amazingly prescient, even though it took many decades to prove him right, and even though most economists still don’t try to understand his argument and believe that the USSR failed economically merely because of poor incentives. Why can’t Yudkowsky be the ignored Mises of our days?

What makes Mises different from Yudkowsky

The difference between Mises and Yudkowsky lies at the level of concept construction. The concepts Mises uses for his argument — capital goods, consumer goods, prices, profits and losses, production plans, etc. — are all generalizations from phenomena with which one can gain familiarity without special tools or quantitative data.

It is a great example of abstraction done right, in the way already discussed by Aristotle, who, ironically, himself failed to adhere to it all too often.

What does Yudkowsky do? The whole question of what the development of AI could lead to and how dangerous it might be boils down to the nature of intelligence. Perhaps the questions of sentience and intentionality also matter for considering whether a potential superintelligent AI would want to kill humans, but they only matter if a superintelligent AI is actually possible.

However, Yudkowsky does not really seem to care about what intelligence (or the mind) is. In fact, for him, everything from a dog to the chess program Stockfish to GPT4 to humans is a mind.

It [ChatGPT — D.G.] is significantly more general than the previous generation of artificial minds. Humans were significantly more general than the previous generation of chimpanzees, or rather Australopithecus or last common ancestor.

Humans are not fully general. If humans were fully general, we’d be as good at coding as we are at football, throwing things, or running. Some of us are okay at programming, but we’re not spec’d for it. We’re not fully general minds.

So, what is the superintelligence that is about to eliminate us by turning our atoms into paper clips or creating rapidly and universally fatal microbes (per Yudkowsky)? Simple: it’s just a universal Stockfish that outplays humans on every task. Says Yudkowsky:

When it makes a chess move, you can’t do better than that chess move. It may not be the optimal chess move, but if you pick a different chess move, you’ll do worse. That you’d call a kind of efficiency of action. Given its goal of winning the game, once you know its move — unless you consult some more powerful AI than Stockfish — you can’t figure out a better move than that.

A superintelligence is like that with respect to everything, with respect to all of humanity. It is relatively efficient to humanity. It has the best estimates — not perfect estimates, but the best estimates — and its estimates contain all the information that you’ve got about it. Its actions are the most efficient actions for accomplishing its goals. If you think you see a better way to accomplish its goals, you’re mistaken.

But the vision is only seemingly simple and compelling, because it sweeps the nature of intelligence under the rug. What dogs, Stockfish and GPT4 do looks superficially similar to what humans do: they solve problems within their constraints.
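
To make the contrast concrete, here is a minimal sketch (my own toy example, nothing Yudkowsky endorses) of the kind of problem-solving a chess engine does: exhaustive game-tree search. It plays single-pile Nim perfectly, yet nothing in it has any concept of a game, a move, or winning; it just maximizes a score over enumerated states.

```python
from functools import lru_cache

# Toy sketch (my example): perfect play at single-pile Nim (take 1-3
# stones per turn; whoever takes the last stone wins) via brute-force
# search. The program is unbeatable at this game while having no concept
# of "stones", "turns" or "winning" -- it only maximizes a number.

@lru_cache(maxsize=None)
def best_move(stones: int) -> tuple[int, int]:
    """Return (score, take): score is +1 if the side to move can win."""
    for take in (1, 2, 3):
        if take == stones:            # taking the last stone wins outright
            return (1, take)
        if take < stones and best_move(stones - take)[0] == -1:
            return (1, take)          # leaves the opponent in a lost position
    return (-1, min(3, stones))       # every legal move loses

print(best_move(10))  # (1, 2): take 2, leaving the opponent a lost position
```

Scaling this kind of search up, with a learned evaluation function, is, roughly speaking, what Stockfish does; nowhere does conceptual understanding enter the picture.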

But genuine intelligence is not primarily about that. It is about a conceptual grasp of the world: of what things like the everyday objects of our experience or numbers are, which children learn from just a few examples, or of what prices and capital goods are, which we may learn from economic theory.

Of course, most modern philosophers (just like most of their ancient counterparts) will argue (wrongly) that we don’t really have good concepts, because we can’t define what a cat is, nor can the definition of a cat preclude the possibility that one day a cat could give birth to a puppy. In fact, a Greek Cynic beat them to it by thousands of years by ridiculing Plato for defining a human as a featherless biped. Or some could say that overly zealous fans of Mises are an example of what happens when one tries to reach all conclusions deductively.

But as the modern Aristotelian philosopher Doug Rasmussen rightly countered,* that would be a misunderstanding of what definitions are. Definitions are there to enable us to differentiate instances of a particular concept, to be able to say that this lazy, purring fluff ball with whiskers and retractable claws is a cat and not a dog or a rabbit.

Do AI’s have conceptual grasp of the world?

Setting aside the question of whether dogs or even chimpanzees can form concepts, do the (in my view, misnamed) AIs have a conceptual grasp of the world? Already, when one learns how they are trained, one may start to doubt it. They need millions upon millions of examples of a given category or activity to learn it. And then you can change the scenario (by, say, adopting a bizarre Go strategy), and AIs are completely lost, sometimes with potentially tragic consequences.

Why is ChatGPT so impressive at some tasks, though? Why can it correctly answer millions of questions that no single human alive could? Because it has ingested an enormous chunk of the literature created by humans who do understand the world, however imperfectly.

Ask GPT4 what the consequences could be if Boris Johnson were to run for US president in 2024, and it will correctly respond that he renounced the US citizenship he had acquired by birth and is thus ineligible.

One could stop there and decide that humans are doomed, or at least that whole professions will soon be rendered unnecessary. But one can also wonder where GPT4 could have gotten the inspiration, and find this BBC article from 2014.

But GPT4 certainly has no understanding of what citizenships or a presidency are. It just reproduces a pattern it was trained on.

AIs can’t escape the Chinese Room

More generally, even though there may seem to be a world of difference between AIs like GPT4 that are based on massive statistical models and the computer programs we are more familiar with, let alone those that existed in 1980, AIs are still a perfect illustration of an argument first presented in a 1980 paper. Ironically, that paper also involved (good) armchair theorizing.

I am, of course, talking about the most famous argument from Searle’s Minds, Brains, and Programs. Searle asks us to imagine a person locked in a room but able to communicate with Chinese speakers outside using instructions and Chinese characters. What if they succeed in duping the speakers into believing they understand Chinese, even though they don’t know a single Chinese word? Can they be said to understand Chinese?

Searle rightly claims that they can’t be, and that no other system (whatever its computational power) that only manipulates symbols (that has “syntax but no semantics”) can understand anything, either. Later, Searle would realize that computers don’t even have syntax, because only humans can see syntax in them, but I digress.
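
For a crude illustration of “syntax but no semantics” (my sketch, not Searle’s), consider a program that converses by pure symbol-matching against a rulebook:

```python
# Crude sketch (mine, not Searle's): a "Chinese room" as a lookup table.
# The program may briefly fool an interlocutor, yet no step in it
# involves meaning -- just matching one string of symbols to another.

RULEBOOK = {
    "你好": "你好！很高兴见到你。",            # "hello" -> "hello, nice to meet you"
    "你会说中文吗？": "当然，我的中文很好。",  # "do you speak Chinese?" -> "of course"
}

def chinese_room(symbols: str) -> str:
    # Pure symbol manipulation: look the input up, emit the output.
    return RULEBOOK.get(symbols, "对不起，请再说一遍。")  # "sorry, say that again"

print(chinese_room("你好"))
```

A large language model is, of course, vastly more sophisticated than a lookup table, but on Searle’s argument the difference is one of scale, not of kind: both only manipulate symbols.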

Does it really matter that AIs don’t understand anything?

I imagine someone like Yudkowsky saying, though, that whether AIs have or can ever have a genuine conceptual grasp of the world doesn’t really matter all that much. Stockfish probably doesn’t know what pawns and rooks are, but no human can beat it, so the argument about a universal Stockfish still stands, doesn’t it?

It doesn’t. First, whether AI’s can actually understand matters from the economic or adoption standpoint. If something doesn’t really understand anything, it will be subject to (often unpredictable) failure in edge cases or adversarial scenarios, and that may hurt or even preclude its adoption where edge cases matter. Just ask companies working on driverless cars. And given the massive computational costs powerful AIs require, their suppliers will need significant revenue-generating adoption pretty soon to maintain the AI improvement efforts.

But more importantly, as the recent humiliation of the top Go AI KataGo by an amateur player showed, if an AI doesn’t really understand an activity, it can be outsmarted by humans who actually do. And if an AI specifically trained to excel at Go can be beaten by an amateur player, the universal-task-oriented AI… you get the idea.

The world badly needs more abstraction done right

The upshot of this piece is that, however impressive the response you got from GPT4 or the dance by those Boston Dynamics robots, everyone should stop worrying that a super-intelligent AI could kill all of us (or our descendants). Everyone should realize that the idea that computer programs (or, for that matter, dogs) can be intelligent in the human sense is just a prominent example of the bad associative thinking to which humans are so susceptible.

And that thinking is still pervasive, even in 2023. Hundreds of millions of people still believe in immaterial beings because they can imagine something like a smoky ghost materializing to talk to them and because they don’t feel their thoughts as physical.

Most economists believe that it’s OK to use wildly distortionary models that assume agents who know all the combinations of all the possible goods, or that make similar assumptions.

And even academic physicists believe that it is sensible to talk about the shrinkage of space or time because they conflate space with an idealized sheet of rubber and time with what a clock shows.**

The world would be a much better place if way more people, especially of intellectual and academic disposition, embraced the correct approach to abstraction.

*Can’t find the link, unfortunately, but I am pretty sure it was from one of Rasmussen’s papers.

**This is not to say that the equations of the relativity theory are nonsense, just the underlying explanation of what they imply.


Daniil Gorbatenko

PhD, economics (2018) from Aix-Marseille University, independent blockchain adoption consultant based in Aix-en-Provence, France, Email: daniilgor2004@gmail.com