Writers - Foz Meadows
Season 2 · Episode 4
Foz Meadows joins us to discuss his "Against AI" polemic, written in response to Erin Underwood's open letter to SFWA and the SFF community in the wake of the Science Fiction and Fantasy Writers Association's announcement that works written with LLM tools, and works for which LLM tools were used at any point in the writing process, would not be eligible for Nebula Award consideration. The conversation touches on not only Foz's piece, but also the sheer stupidity of thinking we can outsource the act of thinking to machines that do not think, what the introduction of LLMs is doing to education, and the role that writers—especially science fiction and fantasy writers—have to play in resisting the inevitability narrative.
Mentioned in this episode:
- Against AI - Foz's response to Erin Underwood's open letter to SFWA and the SFF community [permalink]
- Nebula Awards Rules - section 9.11 deals with LLM tools [permalink]
- Open Letter to the Science Fiction Writers Association and Community - by Erin Underwood [permalink]
- xkcd: Duty Calls - someone is wrong on the internet
- Dodge v. Ford Motor Co. - case in which the Michigan Supreme Court held that Henry Ford had to operate the Ford Motor Company in the interests of its shareholders, rather than for the benefit of his employees or customers
- No Take, Only Throw - a webcomic strip in which a dog wants to play fetch with its owner [permalink]
Transcript
Foz:Nothing is happening here. This is just a spinning of wheels. Nobody's learning, nobody's teaching. You've automated away the entire point of the thing; nothing's really being automated here, because you can't automate the human experience. You can't automate the point at which you as a person actually have to do something. And everyone's looking at it going, "Oh, this is so revolutionary." No it's not. It's just stupid. It's a naked emperor with his dick out and you are so hypnotised that you've convinced yourself it's something else.
[Theme music begins]
Josh:Welcome to Politechs. A show where two software developers break the fourth wall and talk about the interlacing of technology and politics. I'm Josh...
Ray:...and I'm Ray. We see technologies like crypto, AI and surveillance infecting our politics. Meanwhile, governments and the media are making out that all this is inevitable. We can't leave our future in the hands of scheming technology leaders and their pet politicians.
Josh:We're here to raise our voice and open up the space for honest, good-faith discussions. To help us organise and sharpen our ideas, we have invited some interesting and highly respected thinkers to share their understanding and their research with us, and of course, with you.
Ray:Politechs is where we smash the myths and get to the truth.
Josh:Ray, we need to talk about writers.
Ray:There's so much slop these days. Let's hope that we can discuss writing in a meaningful way, and the fact that writing is really about human communication, from human to human; how we sit in our thoughts. To me writing—well, reading, more than writing—is about sitting in your thoughts and being pensive about the text and thinking about what the writer meant. Living in your own world, one that they've created for you, but that you're also creating yourself. It's a real beautiful thing, but I'm obviously worried about writing now because, is it real? Is anything going to be real again? So it would be wonderful if we had a writer to discuss this with.
Josh:Ray, I have just such a writer in mind. A real human being, not a slop machine.
Ray:That's good. That's good.
Josh:[Laughs] We have Foz Meadows, who has graciously agreed to join us today. So Foz, thank you so much for coming on the show, and hello. Do you want to introduce yourself to our audience of three people? [Laughter]
Foz:Hello, hi, I'm Foz Meadows. Thank you very much for having me. I am a science fiction and fantasy writer. My most recent books were a queer romantic fantasy duology, "A Strange and Stubborn Endurance" and "All the Hidden Paths". I've got another book, "The Weight and the Measure", coming out at the end of 2026. I also have a Hugo Award for yelling on the internet, AKA fan writing. [Laughter]
Josh:That's awesome. I'm actually a big fan of science fiction and fantasy. I was kind of out of date with my science fiction reading especially, so over the last few years I've had a friend recommend a lot of new authors to me, and whilst I haven't read anything of yours yet, I will be sure to remedy that. It is really cool to see what some of the newer science fiction and fantasy writers are doing, especially around queer identities and so on. It's amazing. It's very different from Isaac Asimov, let me tell you, in a good way. [Laughter] You mentioned that you have a Hugo Award. First of all, congratulations. That's very cool.
Mad respect. The reason that you popped onto our radar, however, was to do not with the Hugo Awards, but with the Nebula Awards, I believe?
You had written a piece, which I believe was called "Against AI" [1]—nice and pithy—and that was in response to what another writer had written about something that the Nebula Awards Committee had written. So maybe I'll stop talking and you can explain all of this to us. What happened? What did you write?
Foz:OK, so the context for this is that you have the SFWA, which is the Science Fiction and Fantasy Writers Association, who are in charge of, among other things, setting the rules and conditions for the Nebula Awards, which is one of the premier awards in science fiction and fantasy writing. And essentially, out of nowhere, the SFWA suddenly dropped this rule amendment about the role of AI writing when it comes to the awards. [2] And it was basically two rule changes. The first of which said that no generative AI works would be allowed into consideration for the Nebulas—very good.
And then the second one was, if any AI tools are used in the creation of a work, that must be disclosed on the final ballot. And it was this that people immediately within the science fiction writing community objected to and said, "Well, hang on, saying that it'll be disclosed on the ballot implies that you will allow works that have been authored in part with the use of AI tools to be in consideration. If they're going to be on the ballot, then that means you're not disqualifying them."
There was a lot of very immediate pushback to this and, to its credit, very very quickly the SFWA said, "We did not intend for this. This is bad wording on our part. We have now amended the wording to say that if you use these AI tools in the creation of a work, you must disclose that to the awards and thereafter you will not be eligible." So that was the context for what happened.
And then, I think it was the next day or within a couple of days of this, there is a prominent sort of news and commentary site within science-fiction-fantasy-landia called File 770, which is run by Mike Glyer and which is frequently on the Hugo ballot. I think it has won in the past in the fanzine, prozine, or semiprozine category; I forget which one specifically, but the point is it is very well known. And onto File 770 was posted a long piece by a woman named Erin Underwood saying, "Hey, I think we need to have a more nuanced conversation about AI tools and whether or not we are just going to blanket disqualify works that use them". [3] And she used—by her own admission—she used AI to help her write this piece.
Charitably, the one point I think she was making that had some kind of relevance to this was, "Well look, AI tools are now so common across various platforms within the publishing backend that, even if they're not being overtly used by the publisher, the fact that the tools come bundled with software that the publisher might use, or that the author might use, means that potentially there is a situation where, say, someone wasn't aware that Copilot in Microsoft Word was doing their spell checking for them. And if they use that auto-suggest, as you might with a text message, to choose their next word, does that count as having used an AI tool? Therefore, we need a nuanced approach rather than just outright banning the use of AI tools—but of course, generative AI is wrong".
And it was, in my opinion, a very self-contradictory and not particularly helpful piece. Partly, I think, because it was written with AI and was—I don't mean to be bitchy with this—not written particularly well. If this was the work of AI, it was not a great advertisement for AI, because it was very repetitive and overlong.
Josh:[Laughs] With an amazingly long bullet list right in the middle. That was spectacular.
Foz:Yeah. But the point that she was making, the key point, was, "How do we define the use of AI tools if this is going to be the thing?" And my response to that was simply: look, if that's your concern, then the Nebulas—sorry, the SFWA—very clearly responded quickly to this, and they can work on the wording to specifically say, "If you've used something on the level of a spell check, we're not mad about that. We're mad specifically about generative AI. And if there is substantial text or a whole sentence, say, in the work that you yourself did not write, that the AI tool, whatever it was, produced, that's the thing that we are considering generative, and that's what we are trying to crack down on. And maybe we spiritually disagree if you want to use ChatGPT for research or for this or for that, but you're a grown adult. We can't stop you doing that. The point is that we are not wanting to consider works that have AI-generated text within them". And so I wrote a response to Erin Underwood's piece.
Josh:Thanks for giving us the background there, Foz. And obviously, we read your piece, we thought it was fire. What was it specifically that you felt like, "No, I gotta write something about this"? What triggered you to throw your hat in the ring, as it were?
Foz:My problem is that I am the personification, firstly, of that xkcd comic about someone being wrong on the internet. [4] [Laughter] And also of Adam Scott from Parks and Rec[reation] being like, "I don't have time to explain to you how wrong you are. Actually, it's going to bug me if I don't." [Laughter] Sadly, I am chronically incapable of shutting the fuck up.
Ray:Well that's good. You're in the right place. [Laughter]
Foz:The specific thing that got under my skin was this idea—I think the phrase she used was, "We can't put the genie back in the bottle". That AI just exists now and therefore we need to roll with it, we need to accept it, we need to take it into account. And so we shouldn't be punishing authors for the decision that a publisher might make to use AI tools on the backend when editing their work. Oh, but of course we should respect the fact that AI tools are built on plagiarism, and we want to respect the rights of authors. And it was just this maddening contradiction, where it's like: look, you're acknowledging that AI is foundationally unethical and built on plagiarism, is environmentally disastrous, is doing all of these other terrible things, is immoral and fucked up in all of these other kinds of ways.
But specifically for this conversation, the plagiarism is the sticking point with a lot of people—not the sticking point as in this is the one line you've transgressed—it's more—and I said this in my piece—if the plagiarism with which the tool was built was the only thing that was bad about AI, that would still be grounds for not wanting to use it. The fact that it is terrible in all of these other ways, in addition to that, just compounds the offence. But for the purposes of this conversation, to say as a writer and somebody in that community, "Oh, of course we can be mad about generative AI and we want to respect the rights of authors. We want to make sure things aren't done to their work without their consent. But we should also be planning for a world and accepting a world in which AI tools will be used by publishers or used by third parties on the works of writers without their knowledge or consent." Like that's a fundamental contradiction to me.
And the fact that she didn't seem to realise this activated the part of my brain that has to have an opinion, which frankly is not very hard to do. But that was the key thing. That was the key point that made me go, "Hang on, I'm mad now and I need to sit down and write something about it."
Ray:She admitted it was written by AI, but in that part where she says, "Oh, I admit the AI helped me to write it," what she said was, "I read it like three or four times. I revised it. And basically I wanted to make sure that what the AI wrote was aligned with my vision". Now, this seemed to me to be a strong justification for using AI, because it was labour saving: it outsourced the thinking to this machine. But then she was trying to claim ownership of the output of that machine. In other words, "I didn't have to do the thinking, but the thinking that it produced was kind of like the vision or the thinking that I had, and then I went back and I edited it and I changed it and it lined up." That's the slippery slope to me in that essay. She was essentially saying that the AI wrote it, but that she owns it because she reread it and made some changes to it. That was a very annoying argument. That was the thing that really got me.
Foz:Yeah, I think that is, to me, a foundational concern with AI. It's such an irritating thing to have to be arguing in favour of ownership, because creatively I like the idea of a sandbox. I like the idea that we are intellectually and thematically and creatively riffing on each other. However, because we live in a capitalist hellscape, and also because attribution matters, it does matter whose words you are citing in a given moment; you can't pretend that somebody else's work is your own. This is why we have a concept of plagiarism. And what I don't like, foundationally, with AI is this idea that the words don't matter.
Most authors that I know have, at some point or another, had the experience of having a person who is not an author come up to them at a party or a friend's gathering or at a convention and say, "Hey, I've got this fantastic idea, but I'm not a writer. And I'll be generous and I'll split it 50-50 with you. [Laughter] I'll give you the idea, and you do the writing, and then we will split the profits". I've had this happen to me multiple times. Other authors I know have had this happen to them multiple times. You'll get cold emails about it periodically. It's usually some guy in his late 60s. I don't know what it is, but I've never had a woman do it to me. It's only ever been dudes. I don't know if that's just my narrow experience, or if it's a particular type of demographic that is prone to underestimating a particular kind of labour.
But the use of AI—and I am not the first person to say this—often feels like it exists to cater to this guy specifically: the kind of guy who foundationally devalues the labour of writing. And it's maddening, because it's very much an instance where the map is the territory. And you can collaborate; I'm not saying that a collaboration between two parties, where one comes up with an idea and the other does the material work of putting it onto paper, can never be something that works. There are many instances in which it does. But that is a collaboration where those two parties know each other, or they work together, or they have some reason to be doing this. It doesn't begin from a place of, "Well, clearly the idea is the most important thing and you're just gonna do the menial work and I'm gonna be so generous and allow you to benefit from my intellectual prowess when I haven't actually done anything and I don't understand that putting the words on the page is the material act of making this thing real".
And I say all that because I think at the heart of a lot of this use of AI is this idea of outsourcing thinking. And so it matters that the words on the page do not come from the person, in the sense of being not authored by that person, or being plagiarised—because you don't know whose words you might be getting, because it's all a soup; you don't know at any given moment if the text an AI spits out is stolen wholesale from some other person—but it also matters in the sense that the intellectual process you go through to come up with that string of words, in that order, to express that sentiment, is part of what establishes the thought as your own. And when you don't do that work, and when you cannot see the physical human being to whom you might have outsourced it or with whom you might be working in a collaboration, you haven't really done anything.
Any more than, once upon a time, a patron of the arts who commissioned a sculptor and said, "Hey, I want you to make a sculpture of me looking heroic, sitting on a horse," actually made the sculpture. They came up with that idea. They commissioned someone to do it. But we do not say, therefore, that the patron made the work. They paid for it. They got somebody else to do it, but they didn't do it themselves. And this idea that coming up with a prompt, that coming up with an idea, is equivalent to the labour of making that thing real, is maddening to me.
Ray:I completely agree, because I think there's this concept of the idea—the idea that was most famously done from the programming perspective was the Facebook idea. Zuckerberg had this relationship with some twins. I can't remember their names now.
The Winklevoss twins. The Winklevoss twins were not programmers, but they had this idea about the social network—and I think there was eventually a settlement about this. The argument was, "Well, just because you had the idea doesn't mean that you produced the thing." And producing the thing is, like you say, the hard part. That's the tricky part. Reifying an idea is a fundamentally challenging activity. The problem with this slop thing is that they're degrading this reification process into an outsourcing machine that just extrudes stuff. But the stuff that's being extruded is sufficiently plausible for a lot of people to consider it good enough.
And this is where things get problematic, I reckon, because we can argue about the logic, the ethics, and all that fundamental negativity and horror, but a lot of people, I would say, have this idea and they say, "Write it for me," and they're actually quite happy with the slop that comes out. How do you feel about that? Because that's super worrying to me, that people will consider this to be good enough.
Foz:I'm not tactful about AI. [Laughter] I loathe and despise it with a fiery vengeance. I'm aware, therefore, that I make some strong claims when I talk about it. Sometimes that is for exaggerative effect; mostly it's not. But I genuinely do think that in the vast majority of cases, AI is embraced by people who lack the skills to do the thing otherwise. That's why they're using it. And it's not that they are necessarily incapable of developing those skills. We know that something like ChatGPT is overwhelmingly used by students. Which I think is a tragedy in and of itself, because rather than learning to think, which is the purpose of the education they are undertaking, they are outsourcing that because they just want the result.
And to be fair, a lot of them have been trained by inadequate education systems and inadequate methods of teaching—and this is not a knock on teachers; I value and respect teachers—to believe that the outcome is the only thing that matters. That education consists of nothing but number go up, grade go up; good grade good, bad grade bad. And so they don't think that there's an intellectual process. It's just producing the result. Just handing an assignment in. Just completing the task. Because it is not emphasised to them, I think, in a lot of instances, that no, this is actually the mental process that you go through. You are training your brain. You are learning how to do something. And so they just think, "Well, all I have to do is have this information pass through my hands in some way, shape or form, and I turn it in, and then magically I level up. Magically I've done the thing. That's all that's required. I just have to sit through the process."
And I think it is an indictment, particularly, of America's education system that so many people within it have been trained to think of it that way. And I could go on a long tangent as to why I think that's the case. But essentially you just have this situation where people are not valuing their own thoughts. They're not valuing the work. And because of that, a lot of them are not willing to put in the hard work that it takes to get good at something. And then along comes AI, and it plays to the secret vanity of so many people, particularly with generative AI: people who've said, "Well, I see people getting acclaim and respect for being an artist or for being a writer, and I want that acclaim. I want to be acclaimed as a writer and an artist too, but I don't want to have to undergo the mortifying ordeal of being bad at something in order to get good at something. I just want to be good at it right away."
It's like every six-year-old at their first piano lesson. [Laughter] "I just want to be good at it right away. And if I'm not good at it right away, then I'm never going to be good at it and this is a waste of my time. I just want to jump ahead to the part where I'm immediately a prodigy at this thing." And that's the appeal to vanity that generative AI has for so many people. And so when it comes to something like writing—particularly on the internet in the modern day and age, because you basically can't be online without typing or texting or writing in some way, shape, or form—I think it has led to this situation where people think that writing is easy to do. It is easy to type, if you have learned how to type, and you can communicate in a written way, in an effective way, in an email, in a text message, in whatever. But writing as something where you are trying to communicate with intent, if you are trying to tell a story or write an essay or undertake a piece of criticism, is actually a skill.
And a lot of people, I think, over the past 15–20 years have either lost that sense that it's a skill or have never been raised with it in the first place. They think that all writing is more or less fundamentally the same until they sit down and try and do it themselves and realise, "Oh, this is actually difficult." [Laughter] And so there is this devaluing—just by virtue of what the modern internet is—of writing as a skilled form of communication.
And so people do not necessarily themselves have the skill, because they've not taken the time to develop it, to recognise whether what they get out of ChatGPT is a good piece of writing or not. They can't assess it. And that I think is the great danger in all of this—one of the many great dangers—is that when you use AI, it's not teaching you how to be a better writer. It is actively making you worse, because it is outsourcing your thinking to something that doesn't think. It's not teaching you craft, it's not teaching you skill. And so you do not have the ability to assess the skill level of the output unless you already had that skill. So it becomes this paradoxical thing. And people don't recognise that, because they've fallen out of the habit of recognising that writing requires skill in the first place.
Josh:There's so much good stuff in what you just said. It's interesting, because I just read a Cory Doctorow piece that he put out—I think one or two days ago as of this recording—on his blog, where he's talking specifically about the Intro to Composition class that a lot of universities teach for basic writing. And he's talking about exactly what you said about the education system becoming very output-oriented. It's the grade, not the process. So we're not teaching you how to think, we're teaching you how to puke out a five-paragraph essay on a test.
So what he was getting at there is that the way you get good at writing is by engaging with other writers—he's talking about seminars where you criticise each other's work—but he says, "What's really going on here is you're thinking a lot about your own work. So the feedback you're getting from the other writers almost doesn't matter. It's the process you're putting your own mind through; this is what's doing the sharpening." And you're right: if you're just typing a prompt into ChatGPT, you're losing all of that. You're not doing any of that, so you're not going to get better.
But I think there's another interesting angle to this too, which is something we've talked about before on this show. There is this desire to reduce human beings to our output. And if you can say that the only worth of a human is what they produce, then it kind of doesn't matter how they've produced it, and now why don't we just automate away that human? And the last thing that you said that really resonated with me—and I'm paraphrasing wildly, but let me put it as what I heard you say—was that you lose the ability to tell whether something being extruded from the slop machine is any good. And I've had this experience myself where, with something I've written, I walk away from it and I haven't thought about it, and I come back to the text and I read it and I'm kind of stuck on the thing that I've already written, and I don't let myself engage with the thinking behind it. And that's with something I've written myself; just imagine if it's something that's been spewed out of ChatGPT. I'm really anchored on the words and I've totally lost the ability to now think about this. And I find that concerning.
Foz:I do think that part of what has led to this particular cultural inflection point with regard to AI is this online rise in puritanism—particularly among Gen Z, but not exclusive to Gen Z—because one of the hallmarks of it is a discomfort with thought, with this idea of thought crime, with the idea that what you think in your head, or something that you enjoy in fiction, is inherently a measure of your morality in terms of the real world. And I think that this genuinely does create a discomfort with a certain kind of necessary thought, because if you are going to think about politics, if you are going to think about morality, if you're going to think about any remotely difficult topic, you are going to have to have difficult thoughts. You are going to have to contemplate things that you don't want to contemplate. You are going to have to think about different perspectives.
And a lot of people are so unwilling to do that, are unwilling to think about difficult things, because of the inherent discomfort it involves and their fear that if they have this thought, that's going to make them a bad person. And I do blame Christian cultural influence for this to a great degree, because there is that idea that if you think a sin, then that's as sinful as doing it. And it bleeds into this wider culture, this idea that God is watching you, and if you think something, then he knows, and that's bad. So you have to be minding your thoughts. You have to be not thinking, or only thinking the right things, and that means you can only talk about the right things. You can only express the right ideas. You can't muddle your way through. You can't make a mistake. You can't contemplate ideas and moral problems. You can't think about things from different angles, because if you are engaging in that act of discomfort, that act of thinking, then you are being bad in some way.
And I think when people—particularly younger people—are caught up in that sort of resurgent puritanism, in that mindset, then using something like ChatGPT becomes very tempting, because then they don't have to grapple with the idea at all. They're outsourcing the discomfort. They can just ask what appears to be an authority, "What do you think about this? What is the answer here?" And I say this unironically: I do think that there is a real parallel to be drawn between the way that some people use tools like ChatGPT and the way that some people interact with religion. Where there is a black box: here is a holy text, or here is an AI, and I have decided that this thing has all of the answers. I have decided that this is the thing that will do my thinking for me. This is the only medium through which I think. And whatever answer this spits out, that is what I absorb and I repeat uncritically.
And there is such a danger in that, I feel, because people then don't question. And in a very profound sense, I think that there is nothing more dangerous to human beings, nothing that causes us harm quite like a premise that cannot be questioned in good faith. And religion tends to fall under those auspices, and AI for a lot of people now falls under those auspices, where you cannot question the validity of the technology, you cannot question the fact that it doesn't think; it's, "Oh, it's meant to be this repository of wisdom". And even just by virtue of the name—which is itself a misnomer; it's not intelligent, it's algorithms, it's a large language model, it doesn't think—people are predisposed, in particular when they don't understand the difference, to think, "Oh, it must be objective. It must know. If the AI says it, then it must be so." And that's not how anything works. But the more people treat it as though it does, the more harm I think bleeds into the world because of that. And it makes me incandescently angry. [Laughter]
Ray:I think you're right that AI serves a similar purpose, which is that it's a trusted thing—and we've talked on this show before about trust—and I think this is one of those incredibly important things: computers are trusted by people. And it's been building up ever since computers could do numbers correctly, whereas humans felt like they weren't as good at numbers as computers were. A computer can add four big numbers together, it can multiply them together, it can do it quicker than we can, so it's smarter than we are.
And I heard this guy—bringing us back on point—the CEO of DeepMind, being interviewed recently about looking forward to 2026 and looking back over the last few years of AI, and he was saying, "Yeah, the public are ahead of these researchers and people like us who are sceptics, because the public realised that if it can answer questions in a smart way, then it's already intelligent. AGI is already here, because ChatGPT is able to answer questions plausibly. It can answer, it can do maths for me. Computers can do maths. Computers can search documents very quickly." And I'm thinking, holy shit, you're just a plain liar. You're just an absolute liar—to make money; I get it—but how the hell can you stand up there and say this kind of nonsense?
Foz:Well, they say that sociopaths are overrepresented among CEOs.
Ray:[Laughs] Yes. Just to bring us back a little bit towards the writing, reading thing. When Josh was saying to me at the beginning about talking about writers, one of the things that interests me—and I'm not a writer. I mean, I'd like to be a writer. I am humble enough to realise that my attempts are paltry. Maybe I can get better, and one of these days I will try and do that. But I do like to read. And one of the things that worries me with this authority stuff, and the AI stuff especially, is that people are looking for quick fixes, quick information, quick snippets. They want summaries. They don't want to read a book. They want to find out what happened in the book. And actually, oftentimes, what happens in a book isn't interesting.
The plot is really just kind of—Shakespeare, for example, used plots from many many different plays. Plots can be plagiarised, plots can be moved around, and characters can be stereotypes. But what's interesting is how—the summary isn't interesting—what's interesting is the thing. The book is interesting. The process of reading this thing is interesting. So I was wondering—maybe I'm just having a moral panic—but it feels to me like the next generation—and I see it in my own kids—don't read, won't read. And reading is going to be confined either to a very small elite or something else. So, we can talk about writers, but writers need readers. So how do you feel about that evolution? Or am I being a bit too dramatic?
Foz:I don't think you're being dramatic. I think that we're undergoing a period of testing a premise, shall we say. And, as I said earlier, I think there's nothing quite as dangerous for people as a premise that cannot be questioned in good faith; I don't think the premise that we're testing here is being tested in good faith. Nonetheless, I think it can withstand the testing. And the thing that's being tested here is essentially the validity of written expression, the validity of writing as a form of communication. And I think we're already seeing that people are playing with this at the moment, because they have the technology to do so.
For instance, to your point about summaries, there was an app, an AI quote unquote "reading" app, that I saw promoted on TikTok a couple of months back. Within the TikTok ecosystem, there is the phenomenon of "BookTok", where people talk about books, they review them, all of this stuff. And it was in the context of "BookTok" that somebody, who I think was either a BookToker or just using the tag, was talking about setting her reading goals for the year: "Look how many books I'm getting through, because I adopted this app where you tell it a book and it produces a digest of the book. And you read the digest, and that's basically like reading the book. So I've read 60 books in the past two weeks, because I've read these digests."
And everybody unanimously responded to it and went, "Are you insane? No. You haven't read shit." That's like me saying I read 15 blurbs today, so I've read 15 books. No I fucking haven't. What are you talking about? Words mean things, actually. And you see this happening. The problem, to me at least, is that there's a sense in which this conversation around writing, and around the use of AI to substitute for thought in writing, is a real where-the-rubber-meets-the-road moment for this redaction of process. Because you can't take the thinking out of thinking, right? It's definitionally the thing that you cannot do. And that's what people are discovering. They're not necessarily discovering it at the speed that I would like them to discover it, but we are seeing this. Because it's an immovable fact; you cannot get around it.
One of the comparisons that I make with this, when it comes to people trying to use AI to write for them or think for them, is that it's like sending a robot to the gym to exercise for you and then claiming that you are getting gains. You haven't done shit. You came up with the workout routine that you told the robot to do, and then it came back. But the robot's not getting fitter. It's a fucking robot. Nobody is getting... all that's happening is the gym equipment is getting used. You're not getting fitter. The robot's not getting fitter, because it fucking can't, because it's a robot. And you're sitting there, and then you're going to go and do something that requires the extra muscle you think you've been gaining, and you're going to be surprised when you can't do it. And so, there is a real sense... people talk about learned helplessness with the use of AI, right? Because it is training people to not be able to do things that they were already perfectly capable of doing. And it is this sort of loss of intellectual muscle.
I think there are all different kinds of metaphors you can use for this. Another one is people who put Splenda, a non-calorie sugar substitute, in a hummingbird feeder. And then the hummingbirds die, because they actually do need the calories. A hummingbird doesn't need to be slimming down, right? But the hummingbird thinks it's eaten something, because it's a hummingbird. It doesn't know that you put fucking Splenda in its feeder. It thinks it's getting sugar. It thinks it's getting food. And then the poor little thing keels over and dies. There is an intellectual equivalent of that here, where people think—because they are using AI—they think that they are thinking, and they are not. And then they are going to meet an intellectual task that requires them to do something, and they will not be able to do it.
And I think Gen Z really is bearing the brunt of this. They've gotten—particularly younger Gen Z—the worst of both worlds here, where they've grown up very online, but then they were also stuck inside during COVID, and now a lot of them are under-socialised and academically behind, frankly, because the quality of their education dropped over that period. They've started at university, ChatGPT comes along, and they're like, "Well, great, this is a way for me to catch up. Because I was stuck inside. I didn't get to do things. Now I want to be outside. I'm behind. I haven't learned how to do this level of work because I was just in front of a computer screen." And it's making the problem worse.
And at the same time, you've got companies who are saying, "Oh, we're going to fire our existing coding staff and we're going to hire vibe coders," and then five minutes later very quickly hiring the real people back, because it turned out that as soon as something went wrong with the vibe-coded code, nobody who created it knew how to fix it. And just all of this stuff where you can lie about things to a certain extent. You can pretend that what you are doing is building intellectual muscle—or real muscle, for that matter—but at a certain point that gets tested. You have to actually do the thing that you've claimed you're capable of doing. And in one way or another, people are finding out that they cannot do these things. And either they then just become dependent on the AI to do it for them, which is very very sad, or they realise, "Shit, I'm not actually doing what I thought I was doing."
Josh:Yeah, so you mentioned programming there. And what I have seen that is really strange to me as a programmer, is I have seen my fellow programmers embracing these LLM coding tools. And to me, it seems so obvious that if I were to do that, my actual ability to produce good code would atrophy. If I'm outsourcing the thinking, like you said, I can't take the thinking out of thinking and still maintain the ability to think. And I'm just wondering—I mean, programmers are doing this and programmers who I have in the past respected—I'm just wondering, are you seeing writers doing this to themselves? Are you seeing this self-inflicted gunshot wound of an LLM in your community?
Foz:I mean, I think there are absolutely people who have been using it, and because the community is so broadly opposed to it, a lot of them have not been saying out loud that they've been doing it. Nonetheless, I would not be surprised if it was happening. And I think that there is hubris on the one hand, and on the other hand, the terrible, inevitable folly of human nature, which is that sometimes you have to learn things the hard way in order to learn them. You can be intellectually told something, but until you experience the consequences, it doesn't really sink in.
An example of this, I think: if you have always been competent in a certain kind of way—or if you've been competent for a long time, if you've been skilled for a long time—it is very very easy to assume that this is now a baseline facet of who you are, that it cannot degrade, that you cannot get worse. And my personal example of this is my level of fitness. So, for those listening, I have a physique that at present is somewhere between dungeon master and barbecue uncle. [Laughter]
And I did not used to have this physique. I used to be incredibly fit. When I was a teenager, I was basically half my present body weight and I was muscular and I exercised. The thing was I had a very active childhood. I was constantly running, I was constantly climbing, going through the bush, I was swimming, I was playing tennis, I was horse riding. It was a baseline component of my youth that I just did a lot of physical activity. And as a result, surprise surprise, I was very fit. And as such, I did not have any concept of what it was like to not be fit because I'd never been anything else.
And so when I started at university and discovered—well, I'd already discovered drinking by that point—but drinking openly and going to parties and socialising, suddenly I was not being shepherded to various athletic events by a school calendar or a parent or just the same routine that I'd had through high school. I stopped being as active—even though I was walking around campus every day, I was not at the level of activity that I'd previously been at—and I was shocked, genuinely, viscerally shocked, the first time that I went swimming after having not swum, for the first time in my life, for the better part of a year.
And I could tell that I wasn't as fast as I had been. I wasn't as strong. I was tiring. And it was jarring. I was like, "How is this even possible? I'm a good swimmer." And I had to sit and confront the fact that, yes, you can lose the muscle. You can lose the ability. It can atrophy. You haven't been in a pool; of course your ability to swim has gone away. But even though I could have logically intuited that, because I'd never been in that situation before, I just hadn't expected it. And I think that there is something similar here with people who are good at their field, coders that you respect. You see this with academics in various fields, and in fields of law and medicine, who are getting caught using ChatGPT to generate papers or to generate citations that don't exist. People who you would think would know better.
But they are so used—I suspect in many of these cases—they are so used to a certain base level of competence from themselves. They are so used to just this just being who they are, that it does not occur to them that they can get worse. That you can fail to do something you've always been able to do. That you can degrade your own capability by ceasing to practice it, by getting lazy, by getting sloppy, by outsourcing it to something. And they think, "Oh, but because I'm an intelligent person, because I'm a knowledgeable person doing this, obviously, there's going to be no problem. I'm not like those idiots who are using it to substitute for expertise. I've already got the expertise." And they don't see that expertise can go away, that it can degrade, that this is an active muscle that they are using, doing all of this work. And as soon as they stop, the muscle's going to shrink.
Ray:I think in a lot of these cases, places like law firms or whatever, where they're outsourcing to AI and ending up citing fictitious case law, it's because, again, they trust the computer. It's a trust thing. And the reason why they trust it is because they've had clerks before. The computer is basically substituting for the clerk. Now, if the clerk got something wrong, you could fire the clerk or you could retrain the clerk. You could discuss with the clerk where things went wrong. With the AI, it turns out there are disclaimers on these things: AIs get things wrong, please check your work. But people don't. So I think there's a muscle thing, but there's definitely a trust thing as well.
Then the question is: if there are so many people out there who are going to default to being lazy—and I don't like the word lazy, because it implies a certain projection—the question really is, what incentives do we have to alter, to some extent, to change this mode of behaviour? Because people will default to convenience. If they assume that they can trust this thing, they will default to it, because the incentives are to output something. In a lot of work, that's the incentive: do an output, do a thing, produce an opinion in case law, make a diagnosis in medicine, write a program in code.
Make a book. [Laughs]
Foz:I mean, in certain contexts—and I'm thinking specifically here of students using ChatGPT within the American education system—the American secondary education system versus the tertiary education system is, I think, a fascinating toxic dichotomy between failure meaning nothing and failure meaning everything. And you put those two things back to back. Thanks to George W. Bush's "No Child Left Behind" bullshit, it is functionally impossible to fail in American secondary education, because the framework that exists is that you have to, at all costs, try and pass the child, regardless of whether they know the material. And it's something that has materially contributed to this reframing of education as not a thought process, not a learning process, not actually having to know the material, but just a box-checking exercise.
Which has had this knock-on effect of teachers not assigning full books for people to read anymore; it's just the excerpts that'll be on the test. This constant streamlining, where the system has forgotten its own purpose. But it means that in many instances you have these students for whom the idea of having to learn is meaningless in some sense, because failure is meaningless: you can get Fs in everything and still the school will bend over backwards, because it has to, to try and pass you, to make sure you get up to the next grade. And I think it materially degrades the whole idea of education, because you're not actually learning. It just becomes a hoop to jump through. It's not about why you're doing it in the first place.
But then you have these students go on to the American tertiary education system, which is deeply, prohibitively, criminally expensive, and where the consequences of failing are therefore literally life-ruining. Because if you fuck up and you have student loan debt, that debt doesn't go away. You're just saddled with that debt forever. And so you have these two extremes, where you go from failure means nothing to failure can literally ruin the rest of your life. Both of these are inimical to actually learning. Because under one, nothing means anything; and under the other, you can't afford to fail at all. And you have to be able to fail at times in order to learn. Failure is a component of learning.
But if the stakes are so high, and you've got no experience of actually trying before, no concept of what it means to have a meaningful consequence for failure, and then suddenly the only consequence you've got is a literal, all-or-nothing, the-rest-of-your-life-is-ruined one, it creates this perfect storm where people are tempted to use ChatGPT. Because they've not actually learned how to do the thing that they are ostensibly there to do. They've got no meaningful experience of failure in this respect. And then suddenly they're told, if you fuck this up, you've fucked up forever. And so, "Oh, OK. I don't know how to do this. I don't know anything. I'm gonna use the machine." But there is gonna be a reckoning down the line with this, because you are going to end up with graduates across multiple fields who simply cannot do the work for which they have ostensibly received a qualification.
Josh:Which is nothing new, but now we've just automated it and...
Foz:We've expedited it.
Josh:Yeah, exactly. So now anybody can do it. Cheating is not hard anymore.
Foz:No. And it becomes this thing—it's comical when you look at the way AI is being pushed into all of these fields, and in some cases adopted by people in these fields, when it doesn't really need to be. If you've got students who are using AI to do assignments that teachers have generated using AI and that are being marked using AI, nothing is happening here. This is just a spinning of wheels. Nobody's learning. Nobody's teaching. You've automated away the entire point of the thing. Nothing's really being automated here, because you can't automate the human experience. You can't automate the point at which you as a person actually have to do something. And everyone's looking at it going, "Oh, this is so revolutionary." No, it's not. It's just stupid. It's a naked emperor with his dick out and you are so hypnotised that you've convinced yourself it's something else. It does make you feel insane in this moment to be the metaphorical small boy repeatedly pointing out that the emperor is naked. And everyone's like, "No he's not. He's not naked. The fact that you can see his dick is completely coincidental."
Ray:I think theatre requires you to suspend your disbelief, and I think that's what we're talking about here, isn't it? That everything is kind of becoming theatrical. Everything is becoming made up. Like you say, if everything is made up, then I guess the question is: if you're going to do a job that's all made up, where everything is essentially automated, at what point do we feel the pain? That maybe is the interesting question. I mean, we individually might feel some atrophy of our thought processes or our ability to do certain things. But what are the bigger consequences, do you think, for us?
Foz:I think there's an end game here of something that has actually been going on since the early 1900s. And there was a court case I only found out about recently. It was Henry Ford. There was a legal ruling in the United States that said that Henry Ford did not have a responsibility to run his company for the benefit of the employees or for the benefit of his customers, so much as he had the responsibility to run it for the benefit of the shareholders. [5] Such that he could screw up the quality, he could underpay and overwork the workers, so long as the shareholders were benefiting. There was a legal ruling to this effect in the United States, in about 1910, I think. That's probably the wrong date. But early 1900s.
And genuinely, I think that logic, this idea of the shareholder, is a cancer within business, a cancer within tech, a cancer within all fields. Because the shareholder is functionally—and I mean this with my full chest—a parasite. The reason that we have a business, say, that is a restaurant is because people need to eat food and the food needs to come from somewhere. And if you make the food worse in quality, if you underpay and overwork the servers, if you do everything to make being in the restaurant a progressively worse and shittier experience, it's for the benefit of the people who are parasitic on this: the shareholders, who are just making money out of nowhere, right? It is like some sort of sci-fi / fantasy soul-sucking concept, where the life force is being drained from the thing in order to feed this external entity that has no material bearing on why the thing exists in the first place.
And it is parasitic, because eventually that business fails and the shareholders cash in and move on to the next thing. That's basically what private equity is. But there's that mindset that a business is successful on paper, and thus successful in the stock market, if it is run for the benefit of shareholders. You've got this entire business ecosystem that is bewitched by the concept of number go up, and you have these private equity firms buying up companies, effectively looting them, leaving them hollowed out, desiccated corpses, and then moving on to the next. People lose jobs, the quality of the products goes down, everything gets worse for everybody else, except for these fucking ticks drinking the blood out of everything.
AI, I think, is the end game of that. Where it doesn't matter why we're actually doing this in the first place. It doesn't matter why people write in the first place, or why people read. It doesn't matter that this is a form of communication. It doesn't matter that the reason we have human beings doing these jobs in the first place is because you need a human being to do the job. It's underpants gnomes logic—if you remember the underpants gnomes from very early South Park. Where it was step one: collect underpants; step two: mumble mumble; step three: profit. And there's no connection between those things. That is what AI does. It's this underpants gnomes logic of, "Of course we're going to profit by putting AI in literally everything, because we think it's cool and the stock market is happy about it." And it's, comically, sawing off the same branch that you're sitting on. Not you two, obviously, but the companies that are doing this.
They're so completely removed from the reality of anything, because they're bewitched by number go up, that they do not see that they are destroying themselves. At a certain point, there's a base level of stuff that needs to happen for the world to function. You need people in jobs. You need food. You need literature. You need all of these things. Things have to actually tangibly exist. People interact with them. We have all of these things for a reason. And the reason isn't so shareholders can get rich. If you just make everything for the shareholders, everything else will starve. Everything else will die. And we've been accelerating and accelerating that process, and AI is the end game of that acceleration.
And so at a certain point, it is just going to become apparent that no, actually, you can't successfully run something through AI alone. It just doesn't work. I don't know if you know it, but there's a very old, famous viral cartoon of a dog with a ball in its mouth, and it's two panels—it's a ball or a stick; I can't remember—but it's "No take, only throw." [6] That difficulty when you've got a dog that wants you to throw the ball for it, but it won't let you take the ball out of its mouth.
Josh:Yeah, I had one of those.
Foz:No take, only throw. Companies are like this: no spend, only earn. Where it's like, "We want to make money. We want to make money, but we're not willing to pay anybody. And we don't want to have any expenditure that we don't have to have, and we want to cut..." You know, you only get money when people have money to fucking spend.
Josh:But enter universal basic income. So problem solved, you know? Next. [Laughter]
Foz:This siloification of everything, this belief that nothing is interconnected... like, you have money, and that's great for you, but if you want things to spend the money on, other people need to be able to succeed also. This is not Highlander, where there can be only one, right? As if it's you, and then somehow magically all of this stuff exists for you to buy and other people exist for you to interact with. People just seem to think that if a hundred people are richer than god, the rest of society will continue to function in a way that makes that remotely worthwhile. It won't. And we're already seeing that, but we seemingly just have to go through this hubristic phase of very sociopathic, very detached, terrible terrible people ruining a bunch of shit before common sense reasserts itself.
Josh:I think that's some really sharp analysis. To bring it back to the beginning, to that phrase, "You can't put the genie back in the bottle": there is this idea of inevitability. And I would say the three of us don't buy into the inevitability narrative, but I think that we also need to actively work to resist this. So I'm wondering, looking at your community, do you believe there are things that writers are uniquely set up to do to resist this bullshit, to dispel the inevitability myth, and to save us all from this horrendous hellscape that you just outlined so beautifully?
Foz:I think just to think, and to have opinions, and to be able to put those opinions in our own words. The essence of creative work—of writing in particular, but all creative work—is communication. It's meaningful because it is people doing it and people receiving it. You can't cut the people out of that equation and still have something. It's like trying to take the milk out of milk and still pretend you've got a drink. It's like serving somebody an empty plate and claiming it's a meal. The substance of the thing is the human interaction. And so as you try and shortcut that, there's nothing left.
But I think because our works have been stolen and plagiarised to build these technologies, we are, at a base level, intimately aware of why this technology is bad in most instances. And because we are the ones they're trying to replace in so many instances: Hollywood going, "Oh, we won't need to pay for scriptwriters anymore because we can just have AI generate stuff, and we won't need to pay for extras in movies anymore because we can just scan people and AI-generate them, and we'll be able to make money without having to spend money: number go up," all of this kind of stuff. And we're sitting there going, "You're insane. You don't actually understand what you're doing."
But also, particularly from a science fiction and fantasy perspective, what we do is think about things that aren't real, or that might be real, or that are dystopian. We deal in hypotheticals. And I know that fantasy can be a great escapist genre, rocket ships and dragons, and I love that about it. I really do. But we're also quite philosophical, because we're constantly dealing in what-ifs. We're constantly exploring things that are not real. We're dealing in hypotheticals and metaphors and ideas. And that is a really important language when it comes to rejecting fascism, when it comes to rejecting ideas like this. And OK, "You can't put the genie back in the bottle": I think the only extent to which that is true is that you can't take away the concept. Yes, we now know what this technology can look like, but that doesn't mean we have to use it.
And the example I used in my essay is that AI is a technological innovation the way meth is a chemical innovation. The fact that we make something doesn't mean that it's good. But we have been on this humanity-wide technological kick for the past couple of thousand years, and so we have built up this fairly significant head of steam behind the idea that all technological progress is good progress. But we do have precedent for a technological innovation proving to be a bad idea—or if not a bad idea, something that needs to be used in a very specific way, where you say, "Actually, not everybody can have this," or "We have to be very, very careful." Like nuclear power, for instance.
When the Curies were first discovering and working with radiation, and you had radium and all of these things that looked like this great shiny new thing, people didn't realise about radiation sickness. They didn't realise this was going to kill people. And as soon as they did, the responsibility became, "Oh shit, we actually just can't have this lying around anymore. We can't have radium paint. We can't be painting radium onto our teeth." We have precedent for innovations that are dangerous, for technology that is dangerous and needs to be restricted in its use.
But increasingly, you have this separation—certainly at an academic level—of tech from everything else, as though it's not interconnected. As though tech is over here and the rest of the world is over there, and we can just keep making that graph get bigger and bigger, we can just do more and more over here without any of these pesky moral and social and political and environmental considerations. You can't. It's all interconnected.
And that, I think, is something that writers are—not uniquely placed, but very well placed—to point out. Particularly science fiction and fantasy writers, because when we sit down to create a new world for a story, we know that everything's connected. You're suddenly like, "Oh, I've got to come up with an architecture and a currency and a political system and a history and all of these things, and they all interact. And if I've done this, then how does that work?" We're used to having all of these balls in the air.
But there is that great paradox where, on the one hand, you've got tech companies saying, "Well, your work isn't worth paying for. We stole it all because it would have been too expensive for us to pay for it. Really, you can't expect us to have paid for it. It would have been too expensive. We couldn't have made our technology if we paid for everything. So naturally we had to steal it. But also it's worthless. It's worthless, so we shouldn't have had to pay for it in the first place, because individually all of your works are worthless. And yet we are now seeking to monetise our copying of them, because what we do is valuable. Our copying of your work is valuable, but your work itself—even though we couldn't have done this without it—is individually worthless."
And that is literally the argument of tech companies when presented with plagiarism lawsuits. It's like, "No, individually, your works are worthless. It's just that we want to profit from them, because they're not worthless when we do it." And so we're all sitting here going, "Dude..." [laughs]
Ray:You're lying to my face.
Foz:I wanna hit you with a hammer. [Laughs]
Ray:I think that's really well said, Foz. Our podcast is called "Politechs" because we see the confluence between those two things, and you're right, obviously: everything is connected.
Josh:I know we've been talking for quite a while, and we don't want to hold you hostage all day, though we're really enjoying the fire that's coming out of your mouth here. So I just want to thank you once again for being so generous with your time, and thank you for writing that piece in the first place. We'll put it in the show notes and encourage everyone to read it. Before we go, I'm just wondering if you would say one more time what you're working on right now and where people can find you, because I really want people to engage with your fiction as well.
Foz:Sure. So I am a queer science fiction and fantasy writer. The kind of stuff that I have published most recently is queer romantic fantasy, so very different to the kind of thing that I'm talking about here, but still exploring notions of identity and culture and belonging and personal history. I am findable basically everywhere online as Foz Meadows. That's where I am on Bluesky and TikTok and Instagram and Tumblr, still, and Substack, all of these places. But yeah, I do just periodically yell about things in various formats.
Josh:Wonderful. Well, thank you for coming on and yelling about things here. I've really enjoyed this conversation and we'll have to have you back on to yell about more things. This has been a lot of fun.
Ray:Thanks a lot, Foz.
Foz:Thank you. I've had a great time.
Josh:Before we go, I would just like to say thank you to Catsup4 for transcribing this episode. If you would like to support the show by volunteering to transcribe an episode, please reach out to us by sending an email to politechs@politechs.dev or a message on Bluesky or Mastodon. OK Ray, take it away!
[Theme music begins]
Ray:This is that bit at the end where we thank you for listening to this episode of Politechs. First, we hope you enjoyed the show, and we really appreciate the time you have given to this subject. If you want to follow up with us, please send an email to podcast@politechs.dev, because that's very direct and doesn't have any algorithms! If you prefer social media, we're @politechs on Bluesky and @politechs@mastodon.social. We're not on Twitter, for obvious reasons.
To help the show reach a wider audience, please share it with your friends, family and colleagues—there's nothing as good as a real human recommendation. You can find out more about the show on the website, politechs.dev. As you should expect, there are no cookies or tracking of any kind. All the show notes, transcripts and contact details are over there. If you want to show a deeper level of commitment, there's also an option to donate. We're passionate about free and open access to the episodes, so there is no obligation, and there are no benefits other than a warm feeling in your heart.
Finally, we would really appreciate a 5-star review on your podcast platform. Reading positive reviews makes us feel good, and it can help others find the show and make them feel like it's worth their time. OK, that's all the admin done, and we hope to see you next week!
[Theme music ends]