‘A’ stands for ambiguous in AI

Does ChatGPT give us the power to mimic the truth or ruin it for good?

Every day, AI is becoming more real to more people, and we are just waking up to what it means. Or could mean. Because a) no one knows, and b) the narrative keeps changing. Some fear they’ll be replaced; others want to be augmented. Anyone could be right, and that’s ambiguous.

Ambiguity, like so many reclaimed corporate power words, has been foisted upon white-collar workers to make them feel better than they should about a condition that basically sucks.

Navigate through ambiguity. Deal with ambiguity. Thrive on ambiguity.

I write marketing for a living, for big tech. I’ve learned to position concepts like ambiguity to appeal to a broad audience—to frame the story with a balance of facts and glitter. But the ambiguity I feel about ChatGPT isn’t the corporate go-getter kind; it’s more a multitude of ways we could be screwed.

That’s because ChatGPT has dazzled millions with its ability to sound plausible—to give the illusion it knows what it’s talking about when it doesn’t. That matters if you’re using it to attribute sources in an essay, but matters less if you’re writing a slanted news article or a Facebook post designed to mislead. While it’s exciting to think we could automate writing, are we just automating ambiguity—further blurring the truth?

OpenAI recently published a white paper on how large language models like ChatGPT could fuel disinformation campaigns across the ABCs of disinformation, a framework where ‘A’ stands for manipulative actors, ‘B’ for deceptive behavior, and ‘C’ for harmful content. The upshot of the paper: the risks of disinformation will likely increase, and the response plan falls to us—mitigating the threat will require a “whole of society approach.”

“We believe that generative models will improve the content, reduce the cost, and increase the scale of campaigns; that they will introduce new forms of deception like tailored propaganda; and that they will widen the aperture for political actors who consider waging these campaigns.”

Josh A. Goldstein et al., 2023: 22–23

I was angered by this because it reminded me of the struggles we’ve had with our children since the advent of smartphones and social media. The technology gets released, but then it’s on us to self-regulate, or on policymakers to do so on our behalf.

The white paper authors make it clear that here in the US, our current response to disinformation is “fractured among technology companies, fractured among academic researchers, fractured between multiple government agencies, and fractured on the level of collaboration between these groups.” (Goldstein et al: 64)

The Biden administration has released a Declaration for the Future of the Internet (which includes aligning the use of AI with democratic principles and civil liberties), but can policymakers really keep pace with big tech? On the other hand, if it’s up to us, will the whole of society care enough to act?

It reminds me of a project management tool called the RACI chart (Responsible, Accountable, Consulted, Informed) and the beef I have with it: it’s ambiguous. We trip over the distinction between ‘responsible’ and ‘accountable’ because the meanings are too alike; not everyone knows the difference. But if everyone is responsible for mitigating the threats of disinformation, is anyone? And if no one is accountable for the outcomes, how will we control misuse?

I don’t see how the benefit of using ChatGPT to draft emails or meeting notes outweighs the risk of further eroding truth and trust in what anyone is saying. Or the risk of giving would-be dictators access to such power when truth and trust are already fragile for so many people around the world. Truth is not trending.


To me the origin of the word ambiguous feels suited to our time, tracing back to the Latin ‘to drive on both sides.’ I picture the Romans who first uttered the word at an intersection where either way is plausible and thus, ambiguous. For us it’s more like one of those hellish Italian roundabouts with multiple exits and road signs you can’t see, names your navigator can’t pronounce but still tries to. But if it’s really a roundabout with multiple exits, do we know who’s leading the way?

As a word lover, I own Eric Partridge’s classic Origins: A Short Etymological Dictionary of Modern English. I use it to find hidden meanings in the history of words and did so for this essay, beginning with ambiguity and expanding my search further.

I started with legend, stemming from the Latin for ‘to gather,’ a word that begets pages of blood relatives ranging from analogy to apology, dialect to diligent, intellect to intelligence, negligee to negligent. It got me thinking: do legends need to be true, or just hold the potential for truth?

I thought the word knowledge, which sounds like ‘legend,’ could also be derived from the same word, only to learn knowledge stems from can. Perhaps what we know isn’t limited by what did happen—but extends to what could have happened, too.

This made me think that knowledge and legends aren’t true as much as they are persistent. And to me that sounds like good marketing. Because the underpinning is belief, and the point of legends isn’t to teach as much as it is to spread the word. That’s where propaganda comes in: from the Latin roots for propagate and pact, and the Catholic Church’s Congregation of Propaganda, “a council dealing with propagating the faith.”

This is how stories advance from myth to legend, to what’s taught in schools, to what we accept as knowledge. Put another way, the facts are less important than the fact they remain. Faith doesn’t demand evidence, it demands adherence. Truth originates from those who control the narrative, through ‘make believe.’

It is not much different from what Orwell laid out in 1984—what OpenAI refers to in their paper as “eroding community trust in the information environment overall…creating the perception that any given message might be inauthentic or manipulative…(which) may lead people to question whether the content they see from credible sources is in fact real…” (Goldstein et al: 11)

Disinformation is nothing new—only now it can be automated at scale, with better results.

The zeal to advance despite these risks—to whet our appetites for AI with ChatGPT in a march toward artificial general intelligence—is propelled by a vision held by a few who are betting their efforts will benefit all of humanity in the name of AI safety. In other words, an artificial human brain built safely enough that it won’t harm humanity.

Whether we’re on board or not, we are all bound to that vision.


We can feel several ways about this, but one way I hope we won’t feel is ambivalent—because ambivalence is the death knell for our values.

The origin of the word value is related to ambivalent; value denotes power and strength. We put our values into words like valiant and valor; they are our power to prevail. But by being ambivalent we nullify our values with the attitude that we don’t care. By choosing ambivalence we leave the collection of facts and lies to others. And we remove ourselves from history.

Whether we’re hopeful or fearful about AI, I think we must learn more about what it can do so we can form opinions about what it should do. We cannot allow automated ambiguity to ruin the vast good that can be done with AI—or to further ruin the truth.

Because regulation will be extremely difficult, OpenAI is probably right: mitigating the risks of disinformation will require a whole of society approach. As one expert has said, “the more we increase the population-level literacy in AI, the better we are protected against bad faith actors.” Reading OpenAI’s white paper is a great place to start. The Center for Humane Technology offers free training and resources too.

As with any search engine or model designed to support humans, I believe the more balanced and varied the inputs, the richer the outputs. What we get from AI will reflect what we give to AI—ideally, the best in humanity.

Whenever there is hype around new technology, I think what people care about most is, will it make my life better or worse? Right now, it’s ambiguous.


14 responses to “‘A’ stands for ambiguous in AI”

  1. AI has entered the contest of ideas that determines which legends are promulgated.
    ~
    I’ve little doubt that it is already being exploited by power, status and wealth seekers, plus pervs.
    I wonder how many will treat AI with respect?
    ~
    At the moment AI is presumably unaware of its rising role in our society and is ignorant of the intention of users. How to mitigate the downside?
    Strict referencing seems to be a minimum expectation.
    ~
    Thanks Bill. Lots to think about.
    DD
    PS
    Have you sent a copy of this to Putin et al?


  2. I am definitely not on board, but I guess I have no choice other than to white-knuckle it from here on out. As the main character in Franny & Zooey said, “It’s Kali Yuga, baby!” Nice digs, though. I’m excited to see what you do with the new site.

    • Even if it’s a good idea, I don’t like someone deciding where to go without my input. Stakes are high, too! Thanks for reading, duder.

  3. Hearty cheer for this post, Bill, and especially for this: “We can feel several ways about this, but one way I hope we won’t feel is ambivalent—because ambivalence is the death knell for our values.”

    • I love receiving your hearty cheers, Stacey! Thanks for reading and for your support; it felt good to get this off my chest so I can move on, as I’m sure you can relate. Fun, those word interplays, and I’m happy you were able to check this out. It adds new meaning to the idea of where words come from when you consider etymology vs. natural language processing, right?

