Words Matter

  • Circling the rings of nuance around planet AI

    May 7th, 2023

    Chatbots like ChatGPT can predict words with amazing precision. The danger lies in believing they’re capable of more.

    The first problem with artificial intelligence is that we have no agreed-upon definition for what it is or what it means. Some have reframed it as digital intelligence; others insist there is no AI at all, only a set of tools. According to the US Federal Trade Commission, AI is just a marketing term. If that’s true, the name will be used by the world’s most sophisticated marketers to mean anything we as consumers believe. And they will likely employ AI to sell AI, because few people understand AI.

    As a professional marketer for big tech, I know how the argument goes: convince me I have a problem and you have the solution, then access my emotions to make me want it. If it’s persuasion marketing we’re talking about—how we’re made to think and feel about a brand or product—then the biggest source of persuasion for AI is now coming from the chatbots themselves, from the hundreds of millions of people regularly using large language models (LLMs) like ChatGPT.

    The biggest threat the technology poses is wrapped inside its key value proposition: automated, plausible results. The more plausible we perceive the machine’s outputs, the more likely we are to believe and trust the machine. The more we trust, the more we can be persuaded.

    Through this sequence we risk conflating the machine’s ability to accurately predict with the belief it can produce accurate content. Worse, we confuse the ability to predict with the belief that these machines can reason. And we spiral into the realm of science fiction and emergent behavior, of digital sentience. The doomsday risk is less about the machines rising up and more about our misuse of the machines—the intentional misuse from bad actors, and the unintentional misuse from everyone else.

    To extend the theory as an analogy, imagine you are texting a friend on your smartphone and, as you type, the phone suggests what words should come next. Maybe 90% of the time it’s accurate. But suppose you accepted the autocomplete every time—the output would likely stray from your original intent. It would get some of the words right but some of them wrong. Then imagine you just accepted all the words it suggested, started to believe the phone actually knew what you wanted to say, and trusted it to communicate as well as you. It may sound far-fetched, but this is the kind of risk we take the more we allow LLMs to speak on our behalf, through the written word.
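
    A minimal sketch of that compounding drift, using the hypothetical 90% figure from the analogy above (not a measured property of any real keyboard): if each accepted suggestion matches your intent nine times out of ten, the odds that a whole message survives intact fall off fast.

        # Toy model of the autocomplete analogy. The 90% per-word accuracy
        # is the hypothetical figure from the text, not real keyboard data.
        per_word_accuracy = 0.9

        for length in (5, 10, 20, 40):
            p_intact = per_word_accuracy ** length
            print(f"{length:>2}-word message fully faithful: {p_intact:.1%}")

        # Prints roughly 59%, 35%, 12%, and 1.5%: the longer you let the
        # machine speak for you, the less the message remains yours.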

    Paradoxically, the better the models get at predicting text, the more we’re inclined to accept it as truth. And the risk is that we let the models persuade us without our noticing. In this way we haven’t created any new intelligence; we’ve only undermined our own.

    Of the myriad biases humans suffer, automation bias inclines us to favor machine outputs over our own intuition. This fallacy has almost led me to follow my GPS into canals while driving in Amsterdam, or through farmlands in southern Germany. We expect the thing to do the thinking for us and stop thinking as a result. The more convincing the thing, the more inclined we are to follow it. And this is the problem with LLMs like ChatGPT: they’re quite convincing, but they’re not quite right.

    These machines may predict a correct pattern of responses to a prompt, but they do not produce correct content. What they produce is the probability of what one might say in response to a prompt, accuracy not included. And there is a subtle but important difference that’s easy to overlook, the difference between plausibility and verifiable truth: what sounds like it could be true versus what can be proven true.
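
    To see the gap in miniature, here is a toy illustration with invented numbers (no real LLM works from a lookup table like this): a system that only knows which continuations are probable will happily return the plausible answer, with no mechanism for checking the true one.

        # Toy example of plausibility versus truth. The probabilities are
        # invented for illustration; they come from no real model.
        import random

        # Imagined continuations for the prompt "The capital of Australia is..."
        next_word_probs = {"Sydney": 0.55, "Canberra": 0.35, "Melbourne": 0.10}

        words = list(next_word_probs)
        weights = list(next_word_probs.values())
        print(random.choices(words, weights=weights)[0])
        # More often than not: "Sydney" -- fluent, plausible, and wrong.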

    My falling out with ChatGPT came when I asked it to help me write an essay and provide sources to help me back up an argument. The sources looked legitimate at first but soon proved to be false. When I called BS the bot apologized and suggested I go look myself. (“I’m just a language model, I can’t be expected to know these things.”)

    But these shortcomings, the lack of QA and the propensity to BS, haven’t had any noticeable effect on the adoption of LLMs. Perhaps that is a compromise we’re willing to make, one that helps us feel good as humans: we’ll still be needed to fact-check. But can we humans be trusted not to fall for BS?

    The ability to be persuaded is built on a premise of expertise: that I can farm out my decision making to someone (or something) who knows better than me what I need. In order to buy, we need some level of trust in the source of that persuasion before we’ll grant access to the emotions that activate our decision making.

    The LLMs aren’t trying to sell us anything. The persuasion we’re susceptible to is rooted in our own cognitive shortcomings. The better the models get at predicting what we want to say the more prone we are to believing they’re right. The same logic holds true for intentional misuse of the models by bad actors: the better the models get at predicting what will persuade us, the more we’ll be persuaded by them.

    Most of society will likely never agree on what constitutes real intelligence, human, machine, or otherwise. I hope we can at least keep our grasp on what is artificial and what is real. Because without that distinction, I don’t know what we’ll have. It will likely feel like many things, or nothing at all.

  • Through a monolith darkly: AI in the time of Kubrick

    April 10th, 2023

    It’s hard to know what was going on in the mind of Stanley Kubrick at the time he made 2001: A Space Odyssey. But if films echo the time in which they’re made, in the mid-1960s technology would have played a key role. America was engaged in two races with the Soviets—one over space, the other nuclear arms—and had just signed a treaty agreeing not to use space for military purposes. IBM was advancing its mainframe computer systems, lowering their cost and size, with some models offering as much as 8MB of memory. The term “artificial intelligence” had debuted 13 years prior, and Marvin Minsky, one of the leading voices in the field, would soon predict that within the next three to eight years “we will have a machine with the general intelligence of an average human being.” All this had to have weighed heavily on the minds of Kubrick and his creative partner, Arthur C. Clarke.

    The temptation for Kubrick to anthropomorphize the film’s key antagonist, the AI named HAL, must have been irresistible. As if landing on the moon wasn’t mind-blowing enough in the 1960s (or destroying the planet in a nuclear war), now we had reasoning robots, machines with artificial neurons like the Perceptron, and programs you could chat with like ELIZA. No wonder we thought machines were mere extensions of humans. Many of us still do.

    An equally tempting theme for Kubrick would have been the book Frankenstein; or, The Modern Prometheus and the 1931 film starring Boris Karloff, which gives us the first instance of the monster’s extended arms and zombie-like pose. Kubrick couldn’t resist a hat-tip to Frankenstein in the 2001 scene where the robotic pod’s mechanical arms stretch out menacingly to cast astronaut Frank Poole into the depths of outer space. To drive home the horror, the scene cuts between the snatching mechanical arms, the ominous red eye of HAL, and the silent twirl of the drifting, dead astronaut.

    HAL’s prideful claims of a zero-error rate, coupled with the accusation that humans are often to blame for mistakes, set up a conflict in which HAL is proven wrong for the first time by misdiagnosing a failed part on the spacecraft. In one of the most chilling scenes, the two astronauts seal themselves off in a pod, out of “earshot” of HAL, to discuss what to do about the machine’s performance. At the end of the scene the camera cuts back to the wide-angle POV of HAL, revealing the machine has read their lips.

    It is here that Kubrick really has some fun with us, as the scene cuts to black and just reads INTERMISSION.

    Rewatching the film recently, I had to wonder if there was a problem with my Roku, because after the caption fades, the screen remains dark for an extended period. But then I connected what Kubrick was doing with a theory by theologian and religious scholar Gerard Loughlin, one that concerns another iconic element of 2001: the alien monolith.

    In 2003 Loughlin wrote that the monolith is Kubrick’s representation of the cinema screen itself: “it is a cinematic conceit, for turn the monolith on its side and one has…the blank rectangle on which the star-child appears, as does the entirety of Kubrick’s film.” Loughlin’s theory was carried significantly further five years later.

    Imagine what it would have felt like to sit in a dark movie theater in 1968 as the screen cut to black, and remained so, for a cinematic eternity. (In fact, 2001 was presented in the ultra-widescreen Cinerama format. Compared to my measly Roku, the effect would have been profound.)

    What was Kubrick doing here? Was he just having fun, suggesting the screen itself was a kind of monolith, a portal into the infinite? That interpretation takes on added depth in 2023, considering how much time we spend on our devices. And Kubrick would have a good, deep laugh at that. Caption: The Infinite Scrolls of the Internet.

    What I like most about the film is that Kubrick and Clarke draw on the influences of their time to offer a point of view on how technology could play a role in our evolution, without providing all the answers; it’s more like they’re sharing the question with us, along with some possible outcomes. The monolith represents a tool for extraterrestrial life to communicate with Earthlings, and HAL, a warning against putting too much stock in technology. Whether that winds up being good or bad for humankind is unclear.

    It is a celebration of art imitating life because, guess what? The film’s full meaning remains a mystery today. Instead, we are moved to feel the possibilities of ‘what if?’ without the satisfaction of really knowing.

    And perhaps this is the key lesson of 2001 we can revisit in 2023 as AI becomes more “HAL-like” in its possibilities: to recognize how little we really understand about the universe and to just embrace—and enjoy—the mystery.

    Perhaps there’s an Easter egg for computer scientists too: to mind how much control we concede to the black box.

  • Calls for AI slow-down

    March 29th, 2023

    Three news stories caught my eye today, two reported by Wired. In case you don’t subscribe, I lifted a few quotes for quick reference.


    In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT

    Tech luminaries, renowned scientists, and Elon Musk warn of an “out-of-control race” to develop and deploy ever-more-powerful AI systems.

    The story centers on an open letter prepared by the Future of Life Institute calling for an immediate six-month pause by all AI labs on “the training of AI systems more powerful than GPT-4.”

    As a crude distinction between ChatGPT and GPT-4: ChatGPT resembles an advanced form of autocomplete technology, whereas GPT-4 “exhibits human-level performance on various professional and academic benchmarks.” GPT-4 is a more powerful model; recently it tricked a human into solving a CAPTCHA test on its behalf.

    I’m glad to see this call for a pause and it will be interesting to see how Microsoft, OpenAI and Google respond.


    Now That ChatGPT Is Plugged In, Things Could Get Weird

    Letting the chatbot interact with the live internet will make it more useful—and more problematic, too.

    Last week, OpenAI, the company behind ChatGPT, announced that a slew of companies including Expedia, OpenTable, and Instacart have created plugins to let the chatbot access their services. Once a user activates a plugin, they will be able to ask ChatGPT to perform tasks that would normally require using the web or opening an app, and hopefully see the dutiful bot scurry off to do it.

    Will Knight, Wired

    What this could signal is more monetization for OpenAI and the companies positioned to profit from the move, but also more risk of the technology being misused. Previously, ChatGPT and GPT-4 were cut off from the live internet, unable to interact with current information and websites; with these announcements, the technology will be able to conduct its work live on the web.

    Should we panic? I think we should keep following this story.


    Voice deepfakes are calling – here’s what they are and how to avoid getting scammed

    Most of us have likely been spammed by bots for years now. What’s new with ChatGPT is that the voice sounds more realistic and interactive, and can thereby dupe more people with scams or elicit private information from the elderly and the unsuspecting.

    Chatbots like ChatGPT are starting to generate realistic scripts with adaptive real-time responses. By combining these technologies with voice generation, a deepfake goes from being a static recording to a live, lifelike avatar that can convincingly have a phone conversation.

    Matthew Wright, Christopher Schwartz – The Conversation

    Related: I was surprised recently to learn that Spotify is using generative AI powered by OpenAI for their new AI DJ feature. Surprised, because I didn’t know there was a link between the two companies or the technology, and also by how lifelike the DJ sounds. Spotify explains how they trained the AI DJ by modeling the voice of one of their executives.

    And recently I called my local car dealership to schedule service, and did so by interacting with an AI assistant. Not bot-like, more real.


    This technology is so cool and powerful but man, I’m ready for a slow-down. What’s the rush? What do you think?

  • Warnings from Shakespeare on AI

    March 24th, 2023

    No one can predict the extent of AI’s disruption on society. Are there clues in the myth of Prometheus, the tragedy of Macbeth, or the advent of smartphones?

    I remember the first time I saw a bunch of kids waiting for the bus with each of them on their phones. I wrote a poem about it, because it looked dystopic. It was the time of year we start to lose light, almost Halloween, and with their faces glowing in the dark they reminded me of jack o’ lanterns. The scene felt instructive, eerily so.

  • The age of AI has begun

    March 21st, 2023

    Are you ready for some good news about AI? I was glad to see this essay today by Bill Gates, co-founder of Microsoft, Harvard drop-out, business magnate and philanthropist. It put the downsides of AI into perspective for me and pulled the camera back to include the broader world of people who could benefit from AI beyond the “get-rich-quick” crowd. Here’s a sample from the essay, which he posted today on LinkedIn:

    In my lifetime, I’ve seen two demonstrations of technology that struck me as revolutionary. The first time was in 1980, when I was introduced to a graphical user interface—the forerunner of every modern operating system, including Windows. The second big surprise came just last year. I’d been meeting with the team from OpenAI since 2016 and was impressed by their steady progress. In mid-2022, I was so excited about their work that I gave them a challenge: train an artificial intelligence to pass an Advanced Placement biology exam. Make it capable of answering questions that it hasn’t been specifically trained for. If you can do that, I said, then you’ll have made a true breakthrough. I thought the challenge would keep them busy for two or three years. They finished it in just a few months.

    Bill Gates

    I’ve been trying to keep an open mind about AI by taking in content from a variety of sources. Bill Gates paints a vision for how AI could help lower child mortality rates in poor countries, help healthcare workers spend more time on patients than paperwork, and address the world’s worst inequities. They’re big visions, but more compelling to me than generating images of Daliesque cats.

    If you’re interested in how machines actually learn, from the time AI began in earnest in the 1950s to the present, check out Brian Christian’s book The Alignment Problem. It’s very well researched and balanced, and it gave me a real appreciation for the incredible challenges associated with training machines. For example, if we’re to imbue machines with human values, how do we express those values? How do we agree on what those values should be? And how do we train machines so those values can change over time, as ours do? It’s mind-boggling stuff.

  • ‘A’ stands for ambiguous in AI

    March 17th, 2023

    Does ChatGPT give us the power to mimic the truth or ruin it for good?

    Every day, AI is becoming more real to more people, and we are just waking to what it means. Or could mean. Because a) no one knows, and b) the narrative keeps changing. Some fear they’ll be replaced, others want to be augmented. Anyone could be right, and that’s ambiguous.

    Ambiguity, like so many reclaimed corporate power words, has been foisted upon white-collar workers to make them feel better than they should about a condition that basically sucks.

    Navigate through ambiguity. Deal with ambiguity. Thrive on ambiguity.

    I write marketing for a living, for big tech. I’ve learned to position concepts like ambiguity to appeal to a broad audience—to frame the story with a balance of facts and glitter. But the ambiguity I feel about ChatGPT isn’t the corporate go-getter kind; it’s more like a multitude of ways we could be screwed.

    That’s because ChatGPT has dazzled millions with its ability to sound plausible—to give the illusion it knows what it’s talking about when it doesn’t. That matters if you’re using it to source an essay, but matters less if you’re writing a slanted news article or a Facebook post designed to mislead. While it’s exciting to think we could automate writing, are we just automating ambiguity—further blurring the truth?

    OpenAI recently published a white paper on how large language models like ChatGPT could fuel disinformation campaigns across the ABCs of disinformation, a framework in which ‘A’ stands for manipulative actors, ‘B’ for deceptive behavior, and ‘C’ for harmful content. The upshot of the paper is that the risks of disinformation will likely increase, and that we own the response plan: mitigating the threat will require a “whole of society” approach.

    “We believe that generative models will improve the content, reduce the cost, and increase the scale of campaigns; that they will introduce new forms of deception like tailored propaganda; and that they will widen the aperture for political actors who consider waging these campaigns.”

    Josh A. Goldstein et al. 2023: 22-23

    I was angered by this because it reminded me of the struggles we’ve had with our children through the advent of smartphones and social media. The technology gets released, but then it’s on us to self-regulate, or on policymakers to do so on our behalf.

    The white paper authors make it clear that here in the US, our current response to disinformation is “fractured among technology companies, fractured among academic researchers, fractured between multiple government agencies, and fractured on the level of collaboration between these groups.” (Goldstein et al: 64)

    The Biden administration has released a Declaration for the Future of the Internet (which includes aligning the use of AI with democratic principles and civil liberties), but can policymakers really keep pace with big tech? On the other hand, if it’s up to us, will the whole of society care enough to act?

    It reminds me of a project management tool called the RACI chart (Responsible, Accountable, Consulted, Informed) and the beef I have with it: it’s ambiguous. We trip over the distinction between ‘responsible’ and ‘accountable’ because the meanings are too alike, and not everyone knows the difference. But if everyone is responsible for mitigating the threats of disinformation, is anyone? If no one is accountable for the outcomes, how will we control misuse?

    I don’t see how the benefit of using ChatGPT to draft emails or meeting notes outweighs the risk of further eroding truth and trust in what anyone says. Or the risk of handing would-be dictators such power when truth and trust are already fragile for so many people around the world. Truth is not trending.


    To me the origin of the word ambiguous feels suited to our time, tracing back to the Latin ‘to drive on both sides.’ I picture the Romans who first uttered the word at an intersection where either way is plausible and thus, ambiguous. For us it’s more like one of those hellish Italian roundabouts with multiple exits and road signs you can’t see, names your navigator can’t pronounce but still tries to. But if it’s really a roundabout with multiple exits, do we know who’s leading the way?

    As a word lover, I own Eric Partridge’s classic Origins: A Short Etymological Dictionary of Modern English. I use it to find hidden meanings in the history of words and did so for this essay, beginning with ambiguity and expanding my search further.

    I started with legend, stemming from the Latin word ‘to gather,’ a word that begets pages of blood relatives ranging from analogy to apology, dialect to diligent, intellect to intelligence, negligee to negligent. It got me thinking, do legends need to be true, or just hold the potential for truth?

    I thought the word knowledge, which sounds like ‘legend,’ could also be derived from the same word, only to learn knowledge stems from can. Perhaps what we know isn’t limited by what did happen—but extends to what could have happened, too.

    This made me think that knowledge and legends aren’t true so much as they are persistent. And to me that sounds like good marketing. Because the underpinning is belief, and the point of legends isn’t to teach as much as it is to spread the word. That’s where propaganda comes in: from the Latin propagare (“to propagate,” a relative of pact) and the Catholic Church’s Congregation of Propaganda, “a council dealing with propagating the faith.”

    This is how stories advance from myth to legend, to what’s taught in schools, to what we accept as knowledge. Put another way, the facts are less important than the fact they remain. Faith doesn’t demand evidence, it demands adherence. Truth originates from those who control the narrative, through ‘make believe.’

    It is not much different from what Orwell laid out in 1984—what OpenAI refers to in their paper as “eroding community trust in the information environment overall…creating the perception that any given message might be inauthentic or manipulative…(which) may lead people to question whether the content they see from credible sources is in fact real…” (Goldstein et al: 11)

    Disinformation is nothing new—only now it can be automated at scale, with better results.

    The zeal to advance despite these risks—to whet our appetites for AI with ChatGPT in a march toward artificial general intelligence—is propelled by a vision held by the few who are betting their efforts will benefit all of humanity in the name of AI safety. In other words, a safe version of an artificial human brain that won’t harm humanity.

    Whether we’re on board or not, we are all bound to that vision.


    We can feel several ways about this, but one way I hope we won’t feel is ambivalent—because ambivalence is the death knell for our values.

    The origin of the word value is related to ambivalent; value denotes power and strength. We put our values into words like valiant and valor; value is our power to prevail. But by being ambivalent we nullify our values with the attitude that we don’t care. By choosing ambivalence we leave the collection of facts and lies to others. And we remove ourselves from history.

    Whether hopeful or frightful for AI, I think we must learn more about what it can do so we can form opinions about what it should do. We cannot allow automated ambiguity to ruin the vast good that can be done with AI—or to further ruin the truth.

    Because regulation will be extremely difficult, OpenAI is probably right: mitigating the risks of disinformation will require a whole of society approach. As one expert has said, “the more we increase the population-level literacy in AI, the better we are protected against bad faith actors.” Reading OpenAI’s white paper is a great place to start. The Center for Humane Technology offers free training and resources too.

    As with any search engine or model designed to support humans, I believe the more balanced and varied the inputs, the richer the outputs—the more ‘what we get from AI’ will reflect all of us by what we give to AI—ideally, the best in humanity.

    Whenever there is hype around new technology, I think what people care about most is, will it make my life better or worse? Right now, it’s ambiguous.
