AI Ain’t

“Shall we play a game?”

Science fiction has been talking about artificial intelligence for a very long time, and it has manifested in every pop culture medium you can think of, including some well known and much beloved movies. “WarGames” is a gem of a flick featuring a baby Matthew Broderick and a defense computer (unfortunately named WOPR) that decides it has a better plan for nuclear war and sets out to try it. The “Terminator” franchise goes a little darker, with the Skynet system “getting smart,” straight up deciding it had no use for mankind, and setting out to wipe us out. Yikes. Good thing time travel gives us a chance to take a do-over. There was even a really bad Spielberg project called “A.I.” that took a more romantic view, applying the Pinocchio theme – “I wanna be a real boy!” – to an AI robot. Even Spielberg lays an egg once in a while. But here’s the thing. While I love sci-fi a lot, it occasionally leads us to create false dichotomies around what is and what isn’t, especially when it enters the popular canon. What we are now blithely and universally referring to as AI is no such thing.

Take the Large Language Model. This is a rabbit hole of epic proportions and I don’t pretend to have more than a fundamental grasp of all of its ins and outs, but what I do know is that LLMs are not thinking and they certainly aren’t intelligent. They are enormous data sets being churned through by insanely powerful processors that have – and here is a crucial point – been programmed to do so. This has been theoretically possible for a very long time and has only come to fruition recently as the capacities and speeds of computers reached a tipping point. Look at it this way. For about as long as we have been reading and writing, the people who were best at it tended to be the people who did the most of it. We casually throw around the idea of literacy as a binary concept when in fact it falls along a massive spectrum. The more you read, the better you get at it. Your vocabulary expands, your understanding of syntax and structure grows, and if you do enough of it you will eventually be capable of reproducing it. I once had an in-class writing prompt in high school English to produce a single short scene of period-piece fiction. I inherited a deep love of Louis L’Amour from my grandfather and had read dozens of his books, so it was very easy for me to essentially recreate a scene from one of them. It wasn’t quite plagiarism because it wasn’t direct reproduction, but it was close. I could do it because I had read so much L’Amour that I knew what it was supposed to look and sound like and could mimic it. Strictly speaking I “created” it, but only because I had consumed and processed so much of what I was being asked to create.

When you prompt an LLM to create whatever, that is what it is doing, just on an unimaginable scale. Because it has read everything. And it has been programmed to identify patterns, predict syntax, and project semantics. But that is not thinking. I invite you to read up on Alan Turing and his famous test. This cat was smarter than all of us put together, and if he wouldn’t call this intelligence, I’m gonna side with him. The man only basically invented the computer. That is not to say this is not very cool, and it is the first step towards what has the potential to be the most consequential advancement of civilization since fire. But it is only the first step, and it came first because it was the easiest, essentially being brute data collection and processing power. And what are we using it for? To produce prodigious amounts of garbage. It is handy for simplifying tedious tasks and, maybe, for facilitating communication. Another way of looking at it is that it is useful for getting out of doing the work at all, or just straight up cheating. And what did our teachers always tell us about cheating? That you were only cheating yourself. Every time a college student asks AI to write an essay for them, they lose the opportunity to learn how to do it and the knowledge that would have come from doing it. There was a great piece in the Atlantic by Nicholas Carr that predicted – back in 2008! – that the internet was making us dumb (I love the title: “Is Google Making Us Stupid?”). We now know this to be true on both a cognitive and a physiological level, and I think it is manifested in our usage of LLMs. I think you have probably figured out I am not a fan. The LLM is only one element of what we claim as AI, but it is an instructive example. One of my biggest fears is that because we are impressed by this narrow manifestation of “AI” capabilities, we are rushing to expand it in terrifying ways. Back to WOPR and Skynet.

Anthropic recently got into a row with our Secretary of Defense (still a frontrunner for worst cabinet member despite stiff competition from Noem and Kennedy) because they were uncomfortable with the DoD’s stated aims for their technology. Turns out Petey wanted to put Anthropic to work on two things: unfettered privacy invasion of American citizens and, wait for it, autonomous weapons. This did not sit well with Anthropic. Perhaps they know the not-so-distant history of pressing ahead with technological advances before adequately assessing their risks and potential long-term consequences. Maybe they read “American Prometheus” or saw “Jurassic Park”, but good on ‘em for remembering that it isn’t just whether or not you can, but whether you should. In any event, they told Hegseth to piss off – they sure as shit don’t need a government contract to stay afloat. He responded with a “you can’t quit, you’re fired” moment, surprising exactly no one. Unfortunately, I am confident Anthropic will be the exception rather than the rule and someone will gladly pursue weapons that fire themselves. (On a topical note, there is some speculation that the US jets recently shot down by Kuwaiti air defense – friendly fire has to be one of the dumbest phrases ever – were hit because the SAMs had been released from their failsafes, letting them fire at any radar contact rather than waiting for confirmation from an actual human.)

Then there is the physical problem with this headlong rush to release the latest genie. Data centers are environmental disasters. Here are a couple of sobering factoids. Every ChatGPT query uses about 10 times the electricity of a Google search. Considering that there are 2.5 billion queries per day, you are talking more than a couple of terawatt-hours a year. In fact, at current development rates data centers will consume more electricity than Japan by 2030. And since Trump doesn’t want to use anything but fossil fuel for electricity, we can’t even calculate with any degree of accuracy how much carbon that will pump into the air. Then there is the water. It is estimated AI will demand between 4 and 6.5 billion cubic meters of water by 2027. If you’re like me, numbers that big have no meaning, but that equals about half of ALL the water used by the United Kingdom annually. To keep AI data centers cool. And of course the most likely places to build data centers are places that are already facing water problems. Think Arizona.
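If you want to check that math, here is a quick back-of-the-envelope sketch. The ~0.3 Wh per Google search figure is a commonly cited estimate (an assumption on my part, not a measurement), so treat the whole thing as ballpark.

```python
# Back-of-the-envelope estimate of ChatGPT query energy.
# Per-query figures are rough, commonly cited estimates, not measurements.

GOOGLE_WH_PER_QUERY = 0.3                         # assumed ~0.3 Wh per Google search
CHATGPT_WH_PER_QUERY = GOOGLE_WH_PER_QUERY * 10   # the "10 times" figure above
QUERIES_PER_DAY = 2.5e9                           # 2.5 billion queries per day

daily_wh = CHATGPT_WH_PER_QUERY * QUERIES_PER_DAY
annual_twh = daily_wh * 365 / 1e12                # Wh -> TWh

print(f"~{daily_wh / 1e9:.1f} GWh per day, ~{annual_twh:.1f} TWh per year")
# -> ~7.5 GWh per day, ~2.7 TWh per year
```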

So by all means, let us continue down this path. Destroy the environment so we can get dumber, chase dubious outcomes, and flood our lives with slop. Good thing we have intelligent, thoughtful, sober-thinking people keeping an eye on all of this for us. Thanks for reading.
