The man who got me into writing was a history professor. Though he’s now internationally celebrated for his discoveries (and has long since fallen out of touch with me), before I was born he was a rock’n’roll star. He speaks a dozen languages, can play two separate wind instruments at once—one with each nostril, each running a different harmonic line of a Bach composition (I’ve watched him do it)—is a gifted sculptor, has a comprehensive command of geopolitics (another interest of mine he originally sparked), and can do third-order integrations in his head.
He’s well-respected in his field, has a measured IQ north of 180, can accurately extrapolate arguments and patterns of thought after reading the first page of an academic book—without knowing its subject or title beforehand—and is one of the few thousand smartest humans ever to walk the earth.
And he’s not the smartest person I’ve ever known. Not by a long shot.
Between spending the bulk of my childhood on a college campus (hanging out in faculty lounges and hustling pool with grad students), living for thirty years a stone’s throw from Silicon Valley in circles frequented by tech people and defense contractors, and getting involved in the occasional startup myself, I have spent most of my life surrounded by big-brained intellectual (and, often, financial) giants.
You see a lot of shit when you’re in that kind of world. One of the things you see, again and again, is how intelligence works—and how it doesn’t.
What is Intelligence?
Intelligence.
Loaded term, right? I’d better define it.
Intelligence, in the grossest sense, is patternicity: the ability to apprehend patterns in the world and bring them into focus. Language, music, math, creativity, business, finance, social engineering: all are simply different manifestations of the recognition and manipulation of patterns. Think of intelligence as the engine, and the flavor or orientation of that intelligence as the transmission, which takes the engine’s raw power and translates it into useful action.
Intelligence and Artifice
We are, of course, no longer limited to our native intelligence. The current Big Deal Trend™ in the tech world is Artificial Intelligence, which, we are told, is going to supplant humans in industry after industry as it becomes more powerful than human intelligence—and just imagine what uses it might be put to!
Douglas Adams suggested that the ultimate use of AI might be to uncover the big answers that humanity has always searched for (and he was not the first science fiction author to do so). In The Hitchhiker’s Guide to the Galaxy, an AI is tasked with finding the answer to the ultimate question of life, the universe, and everything. After much deliberation, the AI announces that the answer is “42.”
The users, it seems, never actually asked the ultimate question, so a bigger AI had to be built to reverse-engineer the question that would make the answer meaningful.
Users are the gods of computing. When you conduct a Google search—or any other database query—you’re telling the computer to find a particular pattern in the data. It’s your obedient slave, subservient to your will. Your intelligence is directing the inquiry towards patterns that you understand.
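To make that concrete, here’s a minimal sketch of a query as a pattern-hunt (the little book table and its contents are invented for illustration):

```python
import sqlite3

# A toy database. The machine stores; it does not wonder.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, subject TEXT)")
conn.executemany(
    "INSERT INTO books VALUES (?, ?)",
    [("De Motu Cordis", "physiology"),
     ("Principia", "physics"),
     ("Almagest", "astronomy")],
)

# The query IS the value judgment: the user decides which pattern
# is worth finding, and the computer obediently goes and finds it.
for (title,) in conn.execute(
        "SELECT title FROM books WHERE subject LIKE ?", ("phys%",)):
    print(title)  # -> De Motu Cordis, Principia
```

Every ounce of judgment lives in that WHERE clause; the machine contributes nothing but obedience.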
But, as Hitchhiker’s so delightfully shows, there are limits to human intelligence. It’s quite possible that we are already at or near the limits of our biological capacity for raw intelligence (unless we find a way to reliably induce savant syndrome).
Even if we could significantly raise human intelligence with neurohacking or selective breeding, we would still be limited in the attention we can give any given thing.
To formulate a question requires a value judgment:
You must be curious enough about something to find it worthy of your attention.
Seems like a trivial quirk, but it isn’t. The “is it worth it” filter is the most basic value judgment, and is foundational to our evolution as life forms. The world is cruel, food is hard to get, so constructive and strategic laziness is an amazing competitive advantage.
One of the great secrets of those whom history remembers as “geniuses” isn’t that their IQ is staggeringly high (sometimes it is, and sometimes it’s not); it’s that they found ways to ask unusual questions that nobody before them had thought to ask. This suggests that insufficient curiosity, rather than insufficient intelligence, is a major impediment to human scientific and material (and, perhaps, social?) progress.
The railroad and the steam engine both existed in ancient Rome, but nobody thought to combine them (or, indeed, to put the steam engine to productive use). The Gutenberg press simply combined a wine press with wood-block stamp technology. Both of these inventions, and the social change they brought, could have arrived thousands of years earlier, but nobody was curious enough about the right things to ask the right questions that would have brought that alternate world about.
What amazing insights are we missing right now that are right under our noses, simply because the relevant questions don’t occur to us?
The Promise of AI
Whether you dress it up in the garb of “Generative AI” or “Large Language Models” or “Crime Prediction” or “Weather Modeling,” the great hope of artificial intelligence is that it will solve this problem, because computers lack two things that humans possess in abundance:
Values
and
Curiosity
An intelligence with no values won’t prejudicially discard “uninteresting” patterns. Humans can’t help doing so: every brain makes multiple sorting passes for “relevance” on all incoming information before it ever reaches the conscious centers of the brain. Careful thinkers can learn not to add further prejudicial sorting during conscious thought, and some top-notch thinkers can sometimes even retrain what their lower brains consider relevant, but nothing a human does could ever approach the indiscriminate patternicity that a value-free computer might achieve.
And, since computers can’t be curious, they also can’t be bored.
Thus, the defining feature of Artificial Intelligence programs is that they can, to some degree, find patterns in large data sets without being told what patterns to look for.
And if the AI is “generative,” it can also use those patterns as algorithms to break down information into arbitrarily small bits, then reorganize that information to put together something that looks “creative.” When this is done at the request of a user, the AI is using the user’s input as a value set to govern what information to find, reconfigure, and present (as well as how to present it).
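For a toy illustration of that distinction, consider off-the-shelf clustering, a far simpler cousin of the current systems (the data below is synthetic, invented for the example):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Two hidden groups of points. Crucially, nobody tells the
# algorithm where they are or what they mean.
data = np.vstack([
    rng.normal(0.0, 0.5, size=(50, 2)),   # one clump near (0, 0)
    rng.normal(5.0, 0.5, size=(50, 2)),   # another clump near (5, 5)
])

# The only instruction given is "find two groups"; the structure
# itself is extracted from the data, not dictated by the user.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(data)
print(labels[:5], labels[-5:])  # the two clumps come back with different labels
```

Notice, though, that the user still supplied the values: the decision to look at all, and the judgment that two groups were worth finding.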
Pretty impressive, isn’t it?
Flies in the Artificial Ointment
Impressive, sure.
But it isn’t “thought.”
It is, instead, a simulation of the information sorting that precedes thought, including some of the feedback loops that happen during information processing. In some ways, AI is not a very great technological leap beyond what computers already do (processing information according to rules dictated by the user); the only real difference is that AI can also, to some extent, process information according to rules it has independently extracted from its dataset.
“Intelligence,” in the sense of “thinking, evaluating, and making independent decisions,” is not in the cards for AI. Any thinking tools, values, and decision constraints must be input, directly or indirectly, by the user, though a good AI system makes it a lot easier to give the computer those instructions.
But even with this more limited notion of what kind of intelligence AI is, and what kinds of things it can do (even in theory), a few problems reveal themselves.
First, humans don’t actually want unfiltered patternicity. If we did, engineers wouldn’t already have rushed to insert censorship protocols into every public-facing Large Language Model. In other words, one of the chief advantages of AI—its value-neutrality—is already being sacrificed.
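In practice the sacrifice looks something like the crude sketch below. Real moderation layers are classifier-based and far more elaborate, and every name here is invented, but the architecture (a value filter bolted onto a nominally value-free engine) is the point:

```python
# A cartoon of the architecture, not any vendor's actual implementation.
BLOCKLIST = {"forbidden_topic"}  # hypothetical: human values, hard-coded back in

def raw_engine(prompt: str) -> str:
    """Stand-in for the underlying pattern engine, which answers anything."""
    return f"Here is everything the training data suggests about {prompt}."

def public_model(prompt: str) -> str:
    """The public-facing wrapper: a value judgment sits in front of the engine."""
    if any(term in prompt.lower() for term in BLOCKLIST):
        return "I can't help with that."
    return raw_engine(prompt)

print(public_model("the weather"))      # passes through untouched
print(public_model("forbidden_topic"))  # intercepted before the engine sees it
```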
Second, the ability of generative AIs to produce plain-language (or human-readable image) responses to queries, combined with their habit of treating such queries as imperatives, means that they are prone to simply making shit up. “Making shit up” is essentially their job, and, being relatively value-free agents, they are unable to tell the difference between confabulating conversation and confabulating information. For an interesting illustration of how this works, take a look at this video from Rob Ager.
Third, the inability of such models to think makes them trivially easy to hack. Humans are hard to confuse and deceive, so much so that the deception of human cognition is called artistry. Con artists, fiction writers, advertisers, propagandists, painters, visual-effects artists, photographers, filmmakers, magicians, and dramatists are all in the business of manipulating human judgment, and it’s hard to get good at any of those trades because humans, being social creatures, have evolved to detect error and deception unless those errors and deceptions are presented just so. AI models have no such evolved defenses; a well-crafted prompt walks right past them, and that hackability makes their reliability suspect from the get-go.
But all of these issues pale in comparison to the granddaddy of all problems looming over the current efforts at so-called Artificial Intelligence.
The Oldest Problem in Computing
I opened this post talking about some of the terrifyingly smart people I’ve known. I neglected to tell you what I consider the most interesting thing about them:
All of them, without exception, believe and can convincingly defend propositions that are absolutely, and without a doubt, entirely incorrect. One of them believes (and can prove through argument) that the physical fabric of the universe is held together by the sexual behavior of living beings. Another is deeply devoted to a very rigid religion (which he can prove, through argument, dates back to the foundation of the human race) because he once saw a particular, named spirit-being laughing at him when he was on LSD; the fact that this spirit-being was never named or dreamed of until the reign of Alexander the Great, and that the “ancient” religion from which it hails was created much later than that, nearly out of whole cloth, is irrelevant to this man’s apparently bullet-proof reasoning. Still another is convinced that the apple was created by God and placed in the Garden of Eden in order to reveal the shape of the universe (a hypersphere) to physicists. Yet another is convinced that the Electric Universe is conclusively proved, while simultaneously holding that several of the necessary correlates of that paradigm are irrefutably false (don’t ask; it would take several pages to explain).
I have met some pretty smart people in my day. And if you were to ask me about the smartest people in history, I’d be hard-pressed to point you to anyone smarter than the great mystics.
And yet many of those mystics turned out to be entirely mistaken once their models were tested against experience.
Computer programmers have a word for this phenomenon:
GIGO
Garbage in, garbage out.
In other words, even the best thinking process is only ever as good as the data it’s operating on.
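A minimal demonstration (the numbers are made up): the fitting procedure below is mathematically impeccable both times it runs; only the data changes.

```python
import numpy as np

x = np.arange(10, dtype=float)
true_y = 2.0 * x + 1.0                 # the real pattern: y = 2x + 1

# Good data in: least-squares fitting recovers the truth.
slope, intercept = np.polyfit(x, true_y, 1)
print(round(slope, 2), round(intercept, 2))   # 2.0 1.0

# The same flawless procedure, but three measurements are fabricated.
garbage_y = true_y.copy()
garbage_y[[2, 5, 8]] = [40.0, -15.0, 60.0]

slope, intercept = np.polyfit(x, garbage_y, 1)
print(round(slope, 2), round(intercept, 2))   # ~2.61 ~3.47: precise, confident, wrong
```

The second answer arrives with exactly the same precision and confidence as the first.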
Galen was a genius of anatomy and physiology. His medical thinking dominated Europe for well over a thousand years...and it was almost entirely incorrect.
Now, consider one of the great selling points of modern AI models:
They are trained on the greatest repository of human knowledge and communication in existence—in other words, the Internet.
The Internet, ninety-odd percent of which is composed of marketing, spam, porn, and streaming video.
That’s a hell of a lot of “garbage in.”
But somebody is bound to notice that eventually, right? If I noticed it, and I haven’t been in the tech world for over a decade, then it must have occurred to someone else. Surely you can fix the problem by giving your AI a better data set.
So where are you going to find one?
Academic journals? Those bastions of truth and integrity that have a documented history of publishing fraud, bullshit, and corporate propaganda? The very same venues whose dataset is admitted to be between seventy and ninety percent false? Based on papers by scientists sponsored by corporations that bury unfavorable experimental results before they ever get near publication?
Okay, so we train it on the raw datasets that scientists and statisticians are using for their work, right?
Well, given how much of such data has been revealed as fabricated, and how academic incentives are skewed in favor of those who cheat, maybe not.
Our civilization is awash in garbage data to the point where we humans can’t even tell what garbage looks like. And it is we who must feed the AI that information upon which to train and operate.
Garbage in, garbage out.
The Uses and Limitations of Intelligence
Assuming it isn’t killed in its crib by censoriousness, carelessness, or paranoia, AI will doubtless enhance and extend the tech revolution in ways both congenial and terrifying. I expect its most effective uses to emerge in the fields of astronomy, tyranny, and diagnostic medicine. I also expect the inherent limitations outlined above, and the product liability issues they raise, to hamper its general dispersal. Expect mission-specific uses for the foreseeable future. Expect, also, massive legal pushback as its inherently prejudicial nature creates what humans perceive as massive injustices at every scale (computers, after all, deal in datasets, not individuals). No computer can administer justice (or its precursors), either in the criminal sense or in the sense of “business fairness,” in a society where the unit of moral concern is the individual.
AI might destroy individualism entirely (technology often has the effect of re-shaping culture in its own image), or it might be little more than a drop in the ocean, a minor dot on the curve of gradually advancing automation that has been a dependable part of life since the seventeenth century.
But of all the aspects of the social obsession with intelligence, natural and artificial, one continues to elude the people thus obsessed.
Intelligence is like a powerful engine: on its own, it’s useless. It’s only useful when you fuel it well and hook it up to something. And, just as you can hook a Ferrari engine up to an air horn and spend all that glorious horsepower to annoy your neighbors, so too you can have the most magnificent brain in human history and still accomplish absolutely nothing of value—or, worse, a tremendous amount that has negative value.
For intelligence (of any sort) to be of any value at all, it must be directed towards useful ends.
What counts as useful?
That is a question that humans have never been able to answer a priori. We must guess, and then see how our guesses pan out over time.
Hey! I got an idea! What if we used AI to help us answer that question...
...anyone got a good dataset handy?
If you found this essay helpful or interesting, you may enjoy the Reconnecting with History installment on Understanding Before Thinking, and this essay on learning to think through language and story: Are You Fluent in English?
When not haunting your Substack client, I write novels, literary studies, and how-to books. You can find everything currently in print here, and if you’re feeling adventurous click here to find a ridiculous number of fiction and nonfiction podcasts for which I will eventually have to accept responsibility.