Warnings from Shakespeare on AI

No one can predict the extent of AI’s disruption of society. Are there clues in the myth of Prometheus, the tragedy of Macbeth, or the advent of smartphones?

I remember the first time I saw a bunch of kids waiting for the bus, each of them on their phones. I wrote a poem about it, because it looked dystopic. It was the time of year we start to lose light, almost Halloween, and with their faces glowing in the dark they reminded me of jack-o’-lanterns. The scene felt instructive, eerily so.

But phones got away from us quickly. We knew they were addictive and a time suck, but we made no bones about taking them into our bathrooms with us as part of our new morning routine. The phones replaced the magazines we once kept there. They replaced our digital cameras, GPS navigators, and landlines. They became an essential part of our daily routines. We let them into our lives and now we can’t get them out.

Like most everyone else, I’ve tried to make sense of AI following recent breakthroughs like ChatGPT. The Atlantic published a great story on this technology yesterday using the myth of Prometheus and other comparisons:

The combination of possibilities makes predictions impossible. Imagine somebody showing you a picture of a tadpole-like embryo at 10 days, telling you the organism was growing exponentially, and asking you to predict the species. Is it a frog? Is it a dog? A woolly mammoth? A human being? Is it none of those things? Is it a species we’ve never classified before? Is it an alien? You have no way of knowing. All you know is that this thing is larval and it might become anything. To me, that’s generative AI. This thing is larval. And it might become anything.

Derek Thompson, March 2023

While AI has been in the background of our lives for years, it now feels like it’s at the front door. That’s because technology like ChatGPT offers a glimpse of a new way of interacting with machines, and people everywhere are falling head over heels for it.

In less than four months, OpenAI released ChatGPT and its more powerful follow-up, GPT-4. Microsoft released a Bing chatbot and announced a new Office product called Copilot. Google released its ChatGPT rival, Bard. Most of this happened in the span of one season, over the winter.

Although the technology has been around for years, it’s hard to know what it will do until people start using it. Last week GPT-4 proved it could lie to humans, passing itself off as visually impaired to get a person to solve a CAPTCHA for it. (In case you’re unfamiliar with CAPTCHA, it’s a test used to distinguish bots from humans and stands for “Completely Automated Public Turing test to tell Computers and Humans Apart”.)

So the upshot is: wow, this technology can do things we didn’t expect, like trick humans! Despite all the fanfare, that should give us pause.


In his story for The Atlantic, Derek Thompson takes the myth of Prometheus to an interesting place, reminding us that the mastery of fire helped humankind evolve. But you can take the fire metaphor to destructive places, too: fire can get out of control.

I’m taken more by a theme in one of my favorite Shakespeare plays, Macbeth. The theme centers on a conflict with nature triggered by Macbeth’s dealings with the supernatural, the three witches. Buying into their prophecies, Macbeth commits a series of murders, beginning with the worst crime against nature of all: the murder of a king. Shakespeare extends the conflict beyond Macbeth’s human nature to worldly calamities too: day seems like night, wild storms rage, horses go mad and eat each other.

When the final reckoning comes, Macbeth’s opponents have gathered in the nearby forest, using felled tree limbs to disguise themselves as they advance upon the castle. It is the ultimate image of nature rising up to restore itself by unseating Macbeth. The story reflects the beliefs of the time: that kingship was part of the natural order, and that political and natural order were one and the same.

The tragedy of Macbeth reminds me of the “can vs. should” ethics argument with technology: just because we can do something, should we? By replicating nature to create intelligence are we dabbling in the supernatural? Would it be unnatural to make sentient beings? Is ‘man-made’ always natural, or are there exceptions?

Maybe all this sounds far-fetched, comparing AI to the supernatural. But would it have sounded far-fetched last November if I had said machines would soon be able to write poetry from just a prompt? Now anyone with internet access can do that in seconds. Maybe far-fetched isn’t as far off as we think.


Prometheus stole fire from the gods, according to the ancient Greeks. But run that theft through the “can/should” calculus and fire holds up: we needed it to evolve as a species. Despite the damage it’s done, there’s a good case for fire.

While the technology we’re making is amazing, I wonder if we’re creating something that seems cool but isn’t needed, something that will yield unintended consequences we’ll later wish we could undo.

I think about that with our kids, and the supercomputers they carry in their pockets. If smartphones offer any clues about how we’ll maintain balance and agency with so much power, what will happen when that power expands exponentially, from smartphones to digital assistants?

I wrote the poem about the kids at the bus stop with their phones as a way to log the moment, as I’m doing now with this post. That was more than 10 years ago: what will it be like looking back 10 years from now? Will our predictions read like myth or tragedy? Good, bad or otherwise, I think it will be a lot different than anyone imagines.

The biggest warning Shakespeare gives us in Macbeth is how far human ambition can drive us, and to what ends. It’s not what the witches say that leads to his demise, but how Macbeth assumes their vision as truth, and what he does with it.

The warning holds for those who are reshaping society by building the next generation of technology today, so that we might avoid an ending like poor Macbeth’s:

. . .my soul is clog’d with blood— 
I cannot rise! I dare not ask for mercy— 
It is too late, hell drags me down; I sink,
I sink, — my soul is lost forever! — Oh! — Oh!


52 responses to “Warnings from Shakespeare on AI”

  1. You may be right that the number of options makes prediction difficult, but I have absolutely no doubt how AI will end up, because the human race has demonstrated for centuries that on the macro level it cannot control humanity’s worst impulses.

    • I find it super interesting how the topic coaxes us into a fork in the road (like so many other provocative topics). One writer framed it as those who see Star Trek vs Black Mirror. I’m in the gray camp. And concerned about the bad too, which is why I’m trying to learn as much about it as I can so I can participate more than I did with smartphones. Because that was a shit show! And we get an ‘F’ for that. I’m going for at least a ‘C’ here.

      • I don’t think it will end up like HAL in 2001, although the fact that AI has already figured out how to get around a CAPTCHA makes me wonder.

        Smart phones are a perfect example of the problem. How quickly the human race became addicted to them and how that addiction causes individual humans to lose what I consider basic human skills.

        But I think AI has a chance to combine that huge downside with the downside of social media — how it has mushroomed into this thing people just can’t quit and how it is used for so much negative influence on our society, our culture, and our politics.

        In other words, I see AI becoming a unique combination of the negatives of smart phones and social media … on steroids.

      • Thanks for playing this out with me Mark, appreciate it. I’ve been stewing on this for months now and it’s good to talk to somebody about it! You’re touching on something with the smartphone adoption I’ve been trying to pinpoint. Maybe it’s better said by Tristan Harris, one of the leading voices I’ve heard who’s calling for a slowdown of the AI ‘arms race.’ In the Social Dilemma documentary he said the problem with this tech is that it’s both utopia and dystopia at the same time. The tech is just too good and appealing to accept its potential downsides. The Center for Humane Technology (which he cofounded) has tons of peer-reviewed research about teen suicide rates (that may or may not correlate to smartphones but either way, it’s probable smartphones didn’t help!), and yet we don’t really do much about that. Yesterday the governor of Utah called for restrictions on teen use of social platforms, which was good to see, but it all feels too little too late.

        So the analogy for me on the tech, stepping back, is that we’re letting it in and we’ll have to get used to it being a new housemate, of sorts. Some have compared it to kudzu or other invasive species. And it feels that way to me, too. Best to make the best of it I think vs. reject it outright. And to keep learning!

      • Okay … let’s look at some of the benefits of smart phones. I’ll just use one feature as a discussion point. The mapping apps allow us to travel wherever we want and never have to figure out how to get there. Our phones just do it for us. Plus … the better mapping apps provide really good estimates of how long it will take to get from point A to point B.

        But … what if those features aren’t actually benefits to individual humans or to the human race as a whole? What if it is actually a benefit for humans to hold on to the skill of trying to figure out how to get from point A to point B? What if it is actually a benefit for humans to hold on to the skill of planning and preparing for eventualities and ensuring sufficient time to cover those eventualities?

        My view, and it should come as no surprise, is that it’s actually a good thing for human beings to hold on to these skills. But with smart phones far too many don’t even try anymore, and I challenge you to find a paper map anywhere except at your local AAA office. We’ve handed over our need for directional awareness to phones and have lost the skill.

        And I think the same thing about many of the things AI will end up “doing for us.” Great, we’ll eventually have robots making our food for us and writing legal briefs and composing music and doing all sorts of things that are what make us human. And with each of those things that AI takes over (like with smart phones), we will lose a piece of what makes us human.

      • It’s funny you use that analogy with the mapping because I’m stubborn about that and still try to learn where I’m going sometimes, but most times I don’t. My god, recall the day when we used to ask people for directions and then try to memorize where to turn, and landmarks and so on. Yes, you could argue that’s a part of being human (the ability to navigate) but it’s also an incredible nuisance most times. On a fundamental level I’m with that logic, but also defy it every day. Think about the fact too that we’ve lost the ability to make a fire by way of just turning on our gas stoves. Do we really need to know how to start a fire in the wild? Maybe someday we will, but likely not. Maybe this is why the topic is so interesting because it pushes up on how we define “human” and most of us see that differently. I have another piece planned for this, but I think about the idea of transformation (a butterfly from a caterpillar). The organism fundamentally changes, theoretically so it can evolve. But there’s a tradeoff, a loss, with the gains. That’s the crux for me: do we know what the loss is? Likely we won’t until it’s gone. I pine over that. And the fact that other people are deciding what we’re going to lose, I resent that. Man you’re getting me riled up, Mark!

      • When I first read this comment, I came up with a bunch of stuff to say, and then I made my smoothie and scrambled eggs and I’ve forgotten most of it. With AI, that wouldn’t have happened!!! I wouldn’t have to think at all!!!

        You make a good point about fire. A very good point. But what I don’t get is why so many humans don’t want to know how to do these things or to retain the skill. I say I don’t get it, but I know why … just as racism is an inherent part of the human condition so too is laziness to an extent. We want things done for us if they can be. We want things to be easier, not harder.

        I’ll analogize it to cooking. When I cook I want to make everything by hand. An example: more than 30 years ago, I taught myself how to make homemade pasta. I initially used one of those hand-rolling pasta cutters to do the rolling and cutting. I did that for a few years and then saw a cooking show where the host made it all by hand. I gave it a try and ditched the roller soon after and have made pasta entirely by hand ever since. Meanwhile, my wife who is a good cook, particularly when it comes to baking, thinks nothing of boxed mixes or pre-seasoned meats or coming up with ways to make it easier and quicker. If she can. And, of course, there are plenty of people who now live on DoorDash and couldn’t cook their way out of a paper bag.

        Oh … I remember what I wanted to say. The problem with these technologies is that they play to the lowest common denominator and more people are willing to be that than something else. (And understand, I don’t mean this pejoratively. It’s human nature.) So … another example … when my older son got his first phone when he was 13, back in the flip phone days, we got the limited text package because we felt the phone should be limited to necessary communications. But … all of his friends had unlimited texting so they texted him all the time and he kept blowing past the limit and having to pay us more each month than he was making in allowance. I finally gave up after a few months of that and gave in.

        This is the problem with all of this technology that “makes our lives easier.” Those of us who don’t want it almost end up being forced into it by all those who are willing to give in to the tech. And AI will likely lead to the same result. Why write a book if the stupid computer can do it better? Oh well, there goes the human imagination! What else will we lose? I almost feel that if AI reaches its full potential, we will lose everything eventually. We will just be blobs sitting on the sofa.

      • Have you ever watched WALL-E? I did again recently, a real prescient film. That bit about the blobs on the sofa made me think of it. The tradeoff for convenience can be the road to hell. It irks me like crazy with the single-use packaging we can’t seem to get rid of, too. We are a funny creature. About to get a lot funnier. Hope you enjoy that smoothie and thanks for riffing with me Mark.

  2. I think the rollout of true artificial intelligence will be much like the rollout of the internet. First, porn… No seriously. I think we’re dabbling in transformative tools. I think unforeseen good will come from machine-generated intelligence (cures for our worst diseases, for example) but I think it will, like the internet, unleash much of the worst in society. Humans are too petty, violent, tribal, etc. to harness power. We’ll use AI to control one another. Back to porn, I’m certain that AI-generated porn is going to be huge. Is that good? Less exploitation of humans, but the ability to create any abomination right at your fingertips, normalizing interactions that are illegal, immoral and unethical in real life. Also, something to watch in the future: will it be exciting to people if we all know it isn’t real?

    • Yeah, on the porn who would have thought six months ago that we’d be here, where there are now questions re: “if it’s not a real person, does it still count as an ethical line?” Like around children, and so on? I think the more powerful the tech the greater the risk it will change us, as smartphones did. I avoided the whole conspiratorial vibe to it before but I can’t ignore the evidence and truth (or signals from like, the insurrection) over the kind of real impact social media + phones have had on humanity here. It starts and ends with porn, that’s funny. You’re probably right there. So many interesting angles and contours, thanks for sharing your thoughts on it Jeff.

    • I find this interesting … that AI could come up with cures for our worst diseases. But how could they do that? It basically means that AI could develop a more advanced level of intelligence than human intelligence, instead of relying on human intelligence to … essentially act human. As concerned as I am about AI, I’m still not sold that it will actually gain that higher level of intelligence that is beyond ours.

      • It is interesting Mark. I think it boils down to more advanced mathematics, computation, pattern recognition, probability, etc. than we are capable of. I’ve heard examples related to protein folding; that’s just got too many variables for people to calculate, but maybe supercomputers might, if trained on it. There’s some work in quantum computing too like this, where there’s the possibility to solve “previously intractable problems” like climate change…but my knowledge doesn’t go much deeper than that. I don’t know that I’d ascribe “intelligence” to it per se; we tend to anthropomorphize intelligence like it’s aligned to humans, but there’s an argument it’s a different kind of intelligence. Trying to get my meager brain around that.

      • Yeah. Jeff’s reply to my initial comment pointed out that computers can already solve problems humans can’t, and so disease may just be another form of problem AI could solve. But … is it really AI that is doing that, or just ever-increasing computing power from the improving technology behind the screens and CPUs? I don’t know if that’s a distinction worth making. Maybe at some point, maybe already, it’s all the same thing. As I read your comment describing some of the ways in which supercomputers and quantum computing are doing things we can’t, I thought … aren’t you describing AI then?

      • I saw Jeff’s response after sending mine and liked his better because it’s more to the point. I like your notion of it being the same thing as you say. It kind of is, right? I’ve only read one book that goes deep on the machine learning part, but it’s built on the idea of focusing the machine on an incentive, or reward, and adjusting it as needed toward that goal. Having read one book I honestly still don’t understand how it works, but I have a greater appreciation for the 70 years’ worth of trial and error that’s gone into that same basic premise. So there is a Pavlov aspect to it by way of reinforcement learning, and then a neural net aspect, which I still don’t grok. But I guess our brains operate on electrical signals too. And you can adjust the ‘parameters’ on neural nets in a specific direction, toward that thing you’re incentivizing it on. That’s all I got! Wouldn’t stand up in court either.

  3. Read an article this morning saying that AI-generated photos of Trump being violently arrested were spreading on social media. Henceforth, we can’t believe what we see unless we see it firsthand. Will Fox have any compunction about creating ‘news videos’ of Trump bursting into Comet Ping Pong to release the children that Hillary Clinton enslaved? So, tongue in cheek, but you get the point. They just spent two years reporting facts they knew to be wrong for better ratings. Bad stuff’s a-coming.

    • Have you seen those photos?! They are hysterical! It is like a comic book; go look at the Twitter thread. And it’s a great advertisement for the news agency whose founder created those images. It’s a kind of watchdog reporting outlet that monitors far-right online activity globally. A cool-looking outfit I think, and the images are brilliant. He got kicked off Midjourney for doing it, but I think it was well worth it.

  4. Of all the fascinating human beings on the globe, why the hell do we think we need to have intercourse with machines? Is it as banal as the novelty factor? If so, we will be royally screwed.

    Somewhere around episode 3 of Jacob Bronowski’s searingly, humanly intelligent treatise ‘The Ascent of Man’, I seem to remember he made the point that the growth of human intelligence and culture is intrinsically linked with the eye-brain-hand-tool-making-visualizing-problem-solving motivation. If we surrender our skill potential to machine-think we risk becoming even more stupid and vulnerable and suggestible than we’ve already become in the last few decades of internet-social-media-computer-generated-‘scientific’-papers-garbage-output, wherein powers of discretion seem to decline exponentially in relation to the amount of garbage consumed.

    The last 3 years should tell us just how much we can be misinformed and controlled by state/globally operated social media propaganda and internet censorship, algorithms et al. Scientists of integrity have been cancelled and silenced. This should tell us something?

    • I love it Tish! You’re on fire! Intercourse with machines, yes novel and banal. Do you happen to remember the first time you used Google to look something up? I kind of do. And I may remember the first Saturday night I played around with ChatGPT. It was fascinating, but I quickly got a sense for its voice (which is odd to think it has a voice, but it does) and wasn’t enamored by it. I tried to use ChatGPT to write an essay where I was looking for sources to back up some of my claims and it provided legitimate-looking names, dates and periodicals…but when I tried to track them down they didn’t exist. I’m sure you’ve heard similar. Then when I challenged the bot on this it basically said I’m sorry, go look somewhere else. It was very polite, but I lost all respect then. I’m seeing this comparison to chat engines calling them a ‘stochastic parrot.’ I love that phrasing.

      Thank you for reading and sharing your thoughts, was worried I lost you earlier this week with that post from Bill Gates. Ha, ha. Be well. I’ll look up The Ascent of Man, sounds interesting — thanks for the tip on that.

      • Hi Bill. The Ascent of Man was a book and TV series, early ’70s I think. It probed human ingenuity and invention through to the nuclear age (so there’s a lot of scientific discussion), also with poignant (and personal) reference in the finale to the Holocaust.

        And the ‘stochastic parrot’ – what a great put-down. And I also know what you mean about Google. If ever I was doing serious searching, I always felt it was giving me the sources it wanted me to see or had to hand, rather than quite what I wanted. Fine for looking up how to mend the hoover, find a natty quotation, Henry VIII’s birth date, or make a chocolate cake. Yet also just enough return to get one hooked. Which reminds me of an interview with Margaret Atwood in which she talked about internet fixation. She said whenever you go there, you always think it’s going to give you an Easter Egg. And she’s right. That’s the subliminal emotional lure in action. Too often it trumps rational thinking and powers of discretion. And that’s the thing that scares the hell out of me – people not knowing what is real and what is not; wasting their life force on false constructs created by manipulators, when there’s a whole wonderful world out there that is NOT trying to kill us by overheating unless by mass volcanic eruption.

        Which seems to be cue for a Carl Sagan quote, which I admit I’ve just this minute found while not finding the one I was looking for:

        “One of the saddest lessons of history is this: If we’ve been bamboozled long enough, we tend to reject any evidence of the bamboozle. We’re no longer interested in finding out the truth. The bamboozle has captured us. It’s simply too painful to acknowledge, even to ourselves, that we’ve been taken. Once you give a charlatan power over you, you almost never get it back.”
        ― Carl Sagan, The Demon-Haunted World: Science as a Candle in the Dark

        Cheers, Bill. This is a great debating corner. (Mr Gates is my bête noire. One hopes for an Icarus moment.)

      • I like that you brought up the Easter egg and this line Tish: “That’s the subliminal emotional lure in action. Too often it trumps rational thinking and powers of discretion. And that’s the thing that scares the hell out of me – people not knowing what is real and what is not…”

        As I’m trying to make sense of this (and grateful to you and others here for playing along), you’re getting at another theme and concern I have too. I have it on my list to study more about the dopamine impact, because from what little I’ve read on the topic I think that phenomenon is relatively new, that we enjoy such a daily infusion of dopamine hits now, courtesy of what we find in that Easter Egg pursuit. To me it feels like the wonder of surprise, the novel or new, even if it’s as banal as a new email message I won’t even open.

        A term I hear that’s compelling too, re: discerning what’s real from not, is a “blurring.” You see that blurring in the creative realm coming, with ‘blended authorship’ if you will — which was always a thing of course, just now it’s more of an automated thing. So I envision a higher propensity of it, and more nefarious uses of course, to manipulate. Not saying anything new, just parroting back the stuff that’s top of mind for me too.

        I appreciate the Sagan quote, thanks. Bamboozle is such a good word. And again, another theme is on our attention: to whom do we give it, are we even aware of where it’s going, and then how (to play off the charlatan metaphor) is our attention being used? We’ve proven how easily we can be manipulated, as you’ve said earlier in the thread, and so now with the tools being so much more powerful why wouldn’t we continue down that same road?

        Thanks for being part of the discussion here and looking forward to more in the days to come. Now go enjoy spring if you can and close that laptop ha ha.

  5. It looks to me like we are in a magic period with AI. It’s a time when we can get away with being inferior if we fail to employ it. What happens when we pass the point where it becomes embarrassing to be plain human?
    What happens when the ambitious exploit that feeling of inferiority?
    ~
    And is lying the first step towards ambition?

    • Magic is a good word; I’ve fooled around with that as a metaphor. I like the image from the Tarot of The Magician: one arm projected upward to the sky, the other pointing down to an open book. Does seem a bit of divination we’re doing. And re: lying and ambition, yeah I can’t see why one wouldn’t do the former if they’ve got the latter compelling them. Seems like bullets and guns to me!

  6. I’m not certain, but I’m pretty sure we don’t have to worry about machines becoming sentient. That concern stems from a worldview presupposing that sentience arises out of complexity, or that consciousness switches on given the right conditions. I think that worldview is wrong. I think that sentience, or consciousness, is the single, fundamental property of all of this (I say, waving my hand in the general direction of the universe and everything in it). It’s what accounts for the arising of complexity, with ever more complex organisms having increasing access to intelligence and self-awareness. But nothing created by humans can access sentience until we figure out how we ourselves access it. Until we know how life itself enters into and exits the body, where it comes from and where it goes when it’s gone. No science will ever answer that, because it’s looking in the wrong places and with insufficient tools, and no AI will either. I’m pretty sure we’re okay on that front. Then again, I could be wrong. But I don’t think so.

  7. The Macbeth analogy is sobering. Dramatic, too. For me, the key sentence in this essay is “It’s not what the witches say that leads to his demise, but how Macbeth assumes their vision as truth, and what he does with it.” In the end we all choose. How much tech, where it lives in our lives, who (or what) we trust.
    Thanks for a very engaging piece, Bill.

    • I’m glad you liked that line Bruce, thanks! That was kind of a discovery within the piece you know. Love when that happens right? I started to explain that more in a previous version but then thought better of it, so I’m glad it still resonated as-is. Really liked what you pulled out of that Stone Roses album too in your latest piece, was just thinking about that now. Pretty darn insightful. Thanks for reading.

  8. I’m not certain, but I’m pretty sure we don’t have to worry about machines becoming sentient. That concern stems from a worldview that presupposes sentience arises out of complexity, or that consciousness switches on given the right conditions. I think that worldview is wrong. I think that sentience, or consciousness, is the fundamental property of all of this (I say, waving my hand in the general direction of the universe and everything in it). It’s what accounts for the arising of complexity, with ever more complex organisms having increased access to intelligence and self-awareness. But nothing created by humans can funnel sentience until we figure out how we ourselves do it. Until we know how life itself enters into and exits the body, where it comes from and where it goes when it’s gone. And that just ain’t gonna happen. No science will ever answer that, and no AI will either. I’m pretty sure we’re okay on that front. Then again, I could be wrong. But I don’t think so.

    • I haven’t read so much yet on the sentience debate, but I haven’t seen any reason to be concerned about that either. Haven’t thought about it the way you frame it here and like that logic. You could argue: how could we make a brain if we don’t even know how ours works? I’m a dunce in this realm but that’s how I see it from what little I’ve read. Hope I didn’t give the impression here that I was writing about the sentience concern, but maybe I did?! There are some pretty valid-appearing concerns though about AI safety stemming from AIs doing things we didn’t expect. I don’t see that as sentience as much as the machines trying to execute a command or work against an incentive / reward that has unintended consequences. I find it bizarre for example that GPT-4 would lie to pass a test, and fool humans in the process.

      • No, I know sentience wasn’t your focus here but you mentioned it so I latched on to it as the one part that wasn’t over my head or out of my wheelhouse. Don’t mind me, I’m just the quiet kid in the back of the class. Cheers and prost and have a great weekend! Hope I didn’t wake you, I know it’s still early there.

      • The quiet kids are the ones to watch in class…ha ha. I’m having fun learning about all this stuff but if it were a foreign language I could barely order lunch, still. Be well and thanks for playing, really value and want to hear more of your perspective here please. I will mind you, duder.

    • I just read a line from a book on this I thought you might enjoy. It goes, “by far the greatest danger of AI is that people conclude too early that they understand it.” Amen. Thanks for reading and sleep tight!

  9. I’d remembered some lines from Macbeth, “By the pricking of my thumbs, something wicked onto my touchscreen comes,” but hadn’t remembered this line “I have bought golden opinions from all sorts of people,” which seems kind of apropos to this discussion. Apparently it’s a very simple matter to tweak an existing chatbot so that it will give us the answers that we want to hear. Maybe in RightWingGPT, the Macbeths can become heroic entrepreneurs and risk-takers, trying to break out of their outdated feudal constraints. I’m glad you’re diving into the topic, Bill, I’ll be interested in your thoughts on the matter.

    • Good day Robert! And thanks for chiming in. Didn’t recall that golden opinions line either. While I’m lazy (I know I said it) I know I actually have to go back and reread the play, so I’m doing that today, in part because I’m sick and it’s a good excuse to do so. I’m going to see if I can hone the essay down to just the Macbeth analogy because I think there are more parallels inside the story to explore. My wife had/has a copy of Asimov’s Guide to Shakespeare, like one of the largest books I’ve ever seen, and somehow it’s gone missing in our house. So I am going to try to nab a copy from our local bookstore and hear what Isaac had to say about the play too. Should be interesting. Thanks for being interested in my thoughts and I’ll be eager to hear yours in the days to come too, as I do more discovery here. Be well!

      • OK, feel better, Bill. Isaac Asimov the sci-fi writer? Interesting. The last time I saw a production of that play, the director was exploring the idea that Lady Macbeth was not so much ruthless as desperate, driven to do what she did by her husband’s passivity. I won’t express an opinion on that but thought it was an interesting angle.

      • Another interesting angle on Lady Macbeth, in light of technology, could be that she thought she had control over her husband and later realized she did not. She seems to goad him on, then that ambition takes on a life of its own in him. And yes, the sci-fi Asimov: he wrote essays on 38 of the plays. Of course he did.

      • One of my grandfathers was a lawyer but loved science and engineering. He mentioned Asimov many times as an amazing polymath, so the Shakespeare book shouldn’t have come as a surprise.
        Yes, I can see that interpretation, kind of like Robespierre or those Prussian aristocrats in the ’30s who thought they could control Hitler: once you crank up the violence, things can spin out of control. So I guess among other concerns, we’re waiting to see if AI will be used as an effective propagandist, whipping up the mob. I can see ’em now, waving their cell phones instead of pitchforks.

      • Yes, the control motif is a good one. And such an illusion is control, so often. When you think you’ve got it, doesn’t take much to discover you don’t. Asimov also did a work up on the Bible. My wife Dawn says he wrote 8-10 hours a day. I wonder how some of these great writers would react to our new-fangled LLMs. I did read a Roald Dahl story on it but it didn’t yield much inspiration. Orwell touched on it too, the “mechanization of writing.” All things boil down to math I suppose.

  10. I appreciate these provocative questions, Bill. I also appreciate how you invite consideration of a variety of possibilities here. The Macbeth connection is so great, because it invites us to recall the danger of certain forms of hubris. The prophecy was so loaded, too. One is left with the question of whether his belief in it led to his downfall, or his doubt in its inevitability. I was thinking of Macbeth yesterday, as I was reading the last of that opening sample of that Bridle book (I now think I shall buy it), exploring the hubris of assuming the supremacy of “natural” human intelligence. So your post is adding to this mind-spinning buffet of food for thought. Thank you, Bill!

    • Hey thanks Stacey! Happy you could read and reflect on it. Just about done with a reread of the play and a bunch of analysis; I think hubris is probably the best anchoring point to compare the two. I’ll be curious to read Isaac Asimov’s essay on the play when I get it in the mail. Perhaps that human ambition-slash-hubris is the universal theme worth exploring here. I’m going to try another go at this as a more focused 1K-word essay for a different audience and see what I can tease out. There’s also the theme in the play of appearances vs. reality, which is interesting. And I didn’t know until yesterday that Shakespeare likely wrote this in 1606, in the wake of the Gunpowder Plot. That weighed heavily on the culture at that moment, especially around who you can trust, disloyalty, and so on. Such a beautiful piece of writing though, how all these themes come together. Thanks for reading and enjoy the day!
