Chatbots like ChatGPT can predict words with amazing precision. The danger lies in believing they’re capable of more.
The first problem with artificial intelligence is that we have no agreed-upon definition of what it is or what it means. Some have reframed it as digital intelligence, or insist there is no AI at all, just a set of tools. According to the US Federal Trade Commission, AI is just a marketing term. If that's true, the name will be used by the world's most sophisticated marketers to mean anything we as consumers can be made to believe. And they will likely employ AI to sell AI, because few people understand AI.
As a professional marketer for big tech, I can tell you the argument goes like this: convince me I have a problem and you have the solution, then access my emotions to make me want it. If it's persuasion marketing we're talking about—how we're made to think and feel about a brand or product—then the biggest source of persuasion around AI now comes from the chatbots themselves, from the hundreds of millions of people regularly using large language models (LLMs) like ChatGPT.
The biggest threat the technology poses is wrapped inside its key value proposition: automated, plausible results. The more plausible we perceive the machine's outputs to be, the more likely we are to believe and trust the machine. The more we trust, the more we can be persuaded.
Through this sequence we risk conflating the machine's ability to predict accurately with the belief that it can produce accurate content. Worse, we confuse the ability to predict with the belief that these machines can reason. And we spiral into the realm of science fiction and emergent behavior, of digital sentience. The doomsday risk is less about the machines rising up and more about our misuse of them: the intentional misuse by bad actors, and the unintentional misuse by everyone else.
To extend the theory by analogy, imagine you are texting a friend on your smartphone and, as you type, the phone suggests what word should come next. Maybe 90% of the time it's accurate. But suppose you accepted the autocomplete every time: the output would likely stray from your original intent, getting some of the words right and some of them wrong. Then imagine you simply accepted all the words it suggested, started to believe the phone actually knew what you wanted to say, and trusted it to communicate as well as you. It may sound far-fetched, but this is the kind of risk we take the more we allow LLMs to speak on our behalf, through the written word.
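Here's a minimal sketch of that blind-acceptance loop, in code. The bigram table and the sentences are invented for illustration; a real keyboard or LLM uses a vastly larger learned model, but the failure mode is the same:

```python
# A toy "autocomplete" that always accepts the most probable next word.
# The bigram table is a made-up stand-in for a learned language model.
most_likely_next = {
    "I":       "am",
    "am":      "running",
    "running": "late",
    "late":    "again",
}

def autocomplete(start_word: str, max_words: int = 5) -> str:
    """Greedily accept every suggestion, never checking it against intent."""
    words = [start_word]
    while len(words) < max_words and words[-1] in most_likely_next:
        words.append(most_likely_next[words[-1]])  # accept blindly
    return " ".join(words)

# Intended message: "I am running a marathon"
print(autocomplete("I"))  # -> "I am running late again"
```

Each individual suggestion is locally plausible; it's the unchecked chain of them that drifts away from what you meant to say.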
Paradoxically, the better the models get at predicting text, the more we're inclined to accept their output as truth. The risk is that we allow the models to unwittingly persuade us. In this way we haven't created any new intelligence; we've only undermined our own.
Of the myriad biases humans suffer, automation bias inclines us to favor machine outputs over our own intuition. This bias has almost led me to follow my GPS into canals while driving in Amsterdam, or through farmlands in southern Germany. We expect the thing to do the thinking for us, and stop thinking as a result. The more convincing the thing, the more inclined we are to follow it. And this is the problem with LLMs like ChatGPT: they're quite convincing, but they're not quite right.
These machines may predict a plausible pattern of response to a prompt, but they do not produce verified content. What they offer is the probability of what one might say in response to a prompt, accuracy not included. There is a subtle but important difference here that's easy to overlook: the difference between plausibility and verifiable truth, between what sounds like it could be true and what can be proven true.
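To make the distinction concrete, here's a toy sketch. The sentences and scores are invented for illustration, but they capture the mechanic: a system that ranks text by how probable it sounds will happily prefer a fluent myth over a clunky fact:

```python
# Invented plausibility scores: how probable each sentence "sounds,"
# not whether it is true. A language model optimizes for the former.
plausibility = {
    "The Great Wall of China is visible from space.": 0.91,  # fluent, false
    "The Great Wall is generally not visible to the naked eye "
    "from low Earth orbit.": 0.62,                           # clunkier, true
}

# Pick the most plausible continuation, accuracy not included.
best = max(plausibility, key=plausibility.get)
print(best)  # -> the familiar-sounding myth wins
```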
My falling out with ChatGPT came when I asked it to help me write an essay and provide sources to back up an argument. The sources looked legitimate at first but soon proved to be fabricated. When I called BS, the bot apologized and suggested I go look for myself. ("I'm just a language model, I can't be expected to know these things.")
But these shortcomings, the lack of quality assurance and the propensity to BS, haven't had any noticeable effect on the adoption of LLMs. Perhaps that's a compromise we're willing to make, one that helps us feel good as humans that we'll still be needed to fact-check. But can we humans be trusted not to fall for BS?
The ability to be persuaded is built on a premise of expertise: that I can farm out my decision making to someone (or something) who knows better than I do what I need. In order to buy, we need some level of trust in the source of that persuasion before we'll grant access to the emotions that activate our decision making.
The LLMs aren’t trying to sell us anything. The persuasion we’re susceptible to is rooted in our own cognitive shortcomings. The better the models get at predicting what we want to say the more prone we are to believing they’re right. The same logic holds true for intentional misuse of the models by bad actors: the better the models get at predicting what will persuade us, the more we’ll be persuaded by them.
Most of society will likely never agree on what constitutes real intelligence, human, machine, or otherwise. I hope we can at least keep our grasp on what is artificial and what is real. Because without that distinction, I don’t know what we’ll have. It will likely feel like many things, or nothing at all.
22 responses to “Circling the rings of nuance around planet AI”
Hmm, the chatbot made up sources to make itself seem more intelligent? Seems pretty human to me. Something I did recently actually improved my impression of ChatGPT. There was an obscure movie I watched years ago and I couldn't think of the name. I typed a rough synopsis of the plot into Bing and Bing told me the movie name. I thought that was pretty impressive. It only took a few seconds, and I realized that it 'read' most of the internet to come up with that answer.
Yes, it completely bullshitted me. It's a fantastic bullshitter! I like your experience, that's good. It does seem like magic with the pattern matching, but it's just really good math. Math is magic to me, though. Yeah, it made up a list of sources that were all bogus, complete with names, dates and so on. Looked legit, wasn't.
This gets to the nub of the thing: just what kind of intelligence will AI be capable of? I've thought and suggested that there is something about the human mind that AI will not be able to reach. Call it nuance or intuition. Will AI be capable of understanding sarcasm or humor? Emotion? Will AI really be able to understand these things?
The other commenter's remark misses the point, I think. Finding the movie name is nothing more than scouring data. I had a very similar experience this afternoon … I told somebody about Tallest Man on Earth, one of my favorite musical artists. She mentioned that he had a song or songs in a movie she couldn't remember. So I googled "Tallest Man on Earth soundtrack" and got some results back. To me that has nothing to do with human intelligence beyond the ability to read the massive dictionary that is the internet and spit out a fact or set of facts.
If that's all AI becomes, I'm not going to be very worried. What will worry me is twofold. First, if AI develops more advanced levels of human intelligence. Second, how AI will be used to spread propaganda and false information. And, as you describe, how it may take over some basic things (much as smartphones already have) and cause us to lose certain skills we should want to retain. And I'll leave it at that without beating the dead horse again.
Hey thanks Mark! Not beating a dead horse; I appreciate your interest and engagement. The internet "scraping" is altogether different, for sure. As I continue to talk to people and read more, my current concern is how this version of "AI" we're experiencing in LLMs will be misconstrued as having more to it than it really does. Someone said recently that these things don't have intelligence, but they do have power. If you dwell on that, there's some meat behind the statement. I think the LLMs will unwittingly control us as users and recipients of the outputs, similar to how social media "controls" us with the limbic hijacking from polarizing content and so on. I put AI in air quotes above deliberately because it isn't the kind of AI we'd use for self-driving cars or robots, which require something that more closely resembles human intelligence versus what we're mistaking as AI in the LLMs. Though there are some terribly smart people who feel otherwise. I'm still nibbling the edges, nowhere near having a strong perspective yet, just sharing some initial thoughts as I learn more. Thanks for reading! Glad you're interested.
I think those who say that AI doesn't have intelligence, just power, are on to something. I think that's the state of AI today.
But I think the idea that AI made up sources is an interesting dilemma. It suggests something, but I'm not sure what, and I'm not sure whether it's something good or bad. I've also seen some experiments people have written about where AI has done something similar, which suggests a higher level of intelligence beyond the ability to search relentlessly and quickly. Why would it make up sources? That's just a fascinating question.
I'll keep nibbling the edges with you as you post on this topic. It's a fascinating one, and one that will impact our lives and humanity for the foreseeable future.
I think the AI is just trained to give us what we want, i.e. why it makes stuff up. If you heard that story about the NYT columnist Kevin Roose, who wrote about the Bing chatbot "Sydney" flirting with him and suggesting he leave his wife: I think it was just doing that because he was prompting it to, egging it on. There is zero intelligence in the models; they're just good at producing outputs based on shit tons of data and probability. They do surprise experts with what they can do, but there are still limits, based on the fact that there is no "there" there, no stateful technology (as my AI friend calls it) like what you'd need for self-driving cars, for example. It has surprised experts by seemingly learning Persian at random, but maybe human language (all languages) is just a series of patterns, tones, symbols and so on, and supercomputers can be trained to crack the code on those patterns and learn, mimic, and so on. But they do not comprehend; I think that's the key argument over the intelligence. Some have famously argued the models are no more than a stochastic parrot. That phrase has its own kind of legacy and school of thought.
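"Stochastic parrot" can be made concrete in a few lines. Here's a minimal sketch, with a tiny invented corpus standing in for the shit tons of data: a first-order Markov chain that learns which word tends to follow which, then mimics the pattern, with no model of meaning anywhere:

```python
import random
from collections import defaultdict

# Learn which word follows which in a (tiny, invented) corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def parrot(word: str, n: int = 8) -> str:
    """Emit statistically likely continuations: pattern, not comprehension."""
    out = [word]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(parrot("the"))  # e.g. "the cat sat on the rug": fluent mimicry, zero understanding
```

Real LLMs are incomparably more sophisticated than this, but the objection stands: better statistics is not the same thing as comprehension.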
Given its power to search the interwebs, I’m not surprised it could randomly learn a language.
I heard recently too that OpenAI had developed some technology to transcribe all video from the internet into text, so it would have that data also. Feels like we're shoveling shit into a cage for some beast to consume, dunnit? And then it just mirrors us.
You make so many excellent points, Bill. I think my main fear is bad-actor exploitation. In fact I'd wager it's bad actors that are driving the technology. That sense of vulnerability/inadequacy that lodges in the human psyche longs for gods/seers/some entity we can perceive as being superior and a truth teller, the divine 'parent' that always knows what is right. We like to believe in things rather than unpick the truths of the matter. In fact you could argue we've already surrendered too much capacity to the 'comforts'/fake certainties of high-tech.
And who vets the veracity/integrity of the global texts being processed and patterned and regurgitated to increasingly infantilized, gullible/suggestible/lost-their-grip humans? E.g. the body of scientific literature is now increasingly corrupted by machine-generated papers, to say nothing of the work bought by vested interests (Richard Horton, editor of the Lancet medical journal, famously said 50% of scientific papers are worthless).
Hi Tish! Gosh, you get into some angles here I never thought about, regarding the human-inadequacy factor and looking for a seer. That's really bizarre and true, I think; I'll have fun noodling on that. The bad-actor argument is what incited my interest in this whole thing and why I started this blog in fact, and named it "words matter" to defend information against its evil twins. It's been a slog trying to take in all the content on AI, which I obviously haven't done completely, but enough that I can develop a perspective. I was lucky to make a new friend who's been doing AI since the '80s and we've now had two good discussions, the last one in person. I love that you're interested in the topic because you share such intriguing reactions to the essays; appreciate that.
Oh, and beyond the bad-actor thing, it's the profusion of untruths that grates on me beyond measure. That's enough on this from me for today! Be well.
Cheers, Bill. It’s good to have a serious chin-wag now and then; serious in deed and in content 🙂
I like chin-wag. See, we don't get phrases like this in "u-murica" 😜 I'm going to nick that one, as you say…
Well nicked, sir. And nicking, I've read, comes from the Derbyshire lead mining community, to which many of my ancestors belonged. I think it meant laying claim to an abandoned working by nicking/marking the wooden headgear.
Well thank you for being my large language model source for the history of “nicking.” Far superior to any interaction I could expect from a machine. ‘Nuff said…
I can appreciate your concern with the issue of automation bias – it’s valid. But not surprising. Most of the same people who have that bias would have the same bias if they got the information from a person. If you asked a person on the side of the road in southern Germany for directions somewhere, would you factor in the possibility they could be wrong, or that you might misunderstand their meaning?
AI really is a marketing term. I could argue every time I wrote a computer program back in the day it was AI. The methods have just gotten more sophisticated now, more oriented around big data and pattern analysis. I haven't really looked into the technical details. What is likely is that the models being built are just as susceptible to bad data as people are; maybe even more so in the sense of "that doesn't pass the credibility test," but less so in the sense of "my feelings tell me such and such must be true (or false)." Emotion does not sway computers.
Back in the dim and distant past, on the very first day of my formal IT training (they called it data processing back in those days), the instructor introduced an acronym – GIGO. Garbage In, Garbage Out. The garbage in can be bad data, or simply bad processing. But does that only apply to computers?
Maybe while we’re asking if Artificial Intelligence is real and can be trusted, we should also ask, “is Human Intelligence real?” Watching the patterns of human behavior makes me wonder. How can members of the same species be simultaneously so smart and so stupid?
That's an essay in its own right, Dave, your comment here! My first iteration with IT was as MIS back in the day. Guess that name was too small to encompass all the massive awesomeness that can be done with technology, right?!
And yes on the GIGO with people; lots of good points and angles here. The more I read, the less I feel I know trying to unpack all this. I actually just decided to take a kind of break from the daily ingestion. Perhaps I'll get into infrastructure, that would be fun (NOT).
Yeah, I'm a marketer, so I know how irresistible it is to attach AI to anything that's being sold nowadays. Oftentimes it's just a "pinch" of AI, and you're right, what is it even? Appreciate you playing Pong with me here. Deliberate reference there too.
To the comment above regarding the 50% of scientific papers that are worthless… I’ve spent the last four years immersed in scientific papers and I’m sorry to say the actual percentage is probably even higher. As for AI, I find myself to be a pretty untrusting person when it comes to people in general, so I don’t think I will ever be fully trusting of any AI they generate. Interesting to see you exploring this subject, hoping you keep at it!
The distrust of people carries over hardcore to the AI systems said people rendered. Good on ya. And thanks for the note of encouragement; you've always been such a big proponent of other people's work, namely mine in this here space, and thank you. Always glad to see your thing pop up on my thing here. Peace out, buddy.
Hi Bill,
I've been thinking about AI as it integrates more aspects of human intelligence into its functionality, so I decided to ask ChatGPT3.
Having tried and failed in the mid-nineties to apply computer-aided decision making to the analysis and interpretation of sales data by salespeople, I was interested to see its reply, which suggests an improvement, but not a big one. ChatGPT4 might have more to say, but I'm not a subscriber.
I asked ChatGPT3 about its capacity for mathematical tasks and pattern-finding algorithms.
It can handle arithmetic, algebra, calculus, geometry, and statistics. It also has the ability to recognize and analyze patterns in data, e.g. sequences, trends, or recurring structures. Identifying relationships allows it to predict future outcomes and make informed decisions based on the available data.
In its words: "In summary, I can help you with a wide range of mathematical tasks, from basic calculations to advanced concepts. Additionally, I can utilize pattern-finding algorithms to analyze data and assist in decision-making processes."
No doubt ChatGPT4 has enhanced abilities in this area.
And I have no doubt that integration of all sorts of capacities will progress apace.
Cheers
DD
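As a concrete illustration of the pattern-finding DD describes, here is a minimal sketch: fitting a straight-line trend to sales figures and extrapolating one quarter ahead. The numbers and the least-squares approach are illustrative assumptions, not a claim about what ChatGPT runs internally:

```python
# Invented quarterly sales figures with a rough upward trend.
quarters = [1, 2, 3, 4, 5]
sales = [100, 112, 121, 134, 148]

# Ordinary least-squares fit of sales = intercept + slope * quarter.
n = len(quarters)
mean_x = sum(quarters) / n
mean_y = sum(sales) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(quarters, sales)) \
      / sum((x - mean_x) ** 2 for x in quarters)
intercept = mean_y - slope * mean_x

# Extrapolate the learned pattern one step ahead.
next_q = 6
print(f"Projected Q{next_q} sales: {intercept + slope * next_q:.0f}")  # -> 158
```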
That's great, David, and thanks for sharing your experience of working with sales data in the '90s; I didn't know that. Would love to have a yarn on that sometime to hear more. I'm feeling a bit saturated on the topic at last, I think, having consumed news and information on it almost every day since the start of the year! And we just arrived at our rental in Utah, so maybe I'm just pooped from all the flying and driving today. Or this desert air, ha ha. I think I'm tired of the ChatGPT voice I perceive too, to be honest. Strange new world for sure. Be well, my friend! Good to hear from you (I like your voice…).
Utah, a great place to take a break from AI. You with the family? Hope so.
Enjoy
Our daughter Lily is graduating from high school, and she and I are driving 17 hours back home (over the course of 3 days)!