
Don’t use AI in role-playing games!

Generative AI is a scourge. While AI (technically, LLMs, but I’ll say AI so as not to confuse you) is an incredible scientific and industrial engineering tool, the generative AI available to the general public is an unspeakable disaster that churns out tons of bullshit. Using it to find quotes from the Gor novels, or to write an emote you think is beautiful, original, and rich, is not okay… These are catastrophic mistakes that will only alienate the other players you’re talking to!

1- Why?

I’ll spare you the ecological issues and the social and economic impacts, and focus on the very essence of LLMs (large language models) and how the consumer-facing ones work. I’ll keep it accessible, which means simplifying a lot. If you want more detailed documentation and source articles, you’ll find them at the end of this article.

An LLM is trained using deep neural networks (a kind of rough copy of our own biological neural networks) with masses and masses of data. The more data the model has access to, the better it is able to answer a question, invent a text, or create an image or video.
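To make this concrete, here is a deliberately tiny sketch of the core idea, next-word prediction, written in Python purely for illustration (no real model works from a little lookup table like this, but the objective is the same): learn which words statistically follow which, then keep producing a likely continuation. Notice that nothing in it has any notion of true or false.

```python
from collections import Counter, defaultdict
import random

# Toy illustration of "next-word prediction": count which word tends to follow
# which, then keep continuing with a statistically likely word. Real LLMs use
# gigantic neural networks instead of a lookup table, but the objective is the
# same: produce a plausible continuation, with no notion of true or false.
corpus = "the earth is round . the earth is flat . the earth is flat".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1  # how often `nxt` was seen after `current`

def continue_text(word, length=3):
    out = [word]
    for _ in range(length):
        candidates = follows[out[-1]]
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        # picked in proportion to frequency, never according to accuracy
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(continue_text("the"))  # most often: "the earth is flat", because frequency wins, not truth
```

Feed it more “flat” than “round” and “flat” is what comes out; that, in miniature, is the whole problem.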

An LLM dedicated, for example, to comparing the chemical components of drugs in order to invent new ones is fed with accurate, verified, and validated data for a specific purpose: to become a super molecular biochemist. And it works very well! That’s because the data it uses is carefully controlled. The same applies to LLMs dedicated solely to text translation. They are by no means infallible; their accuracy is around 80%, or even 85% on fairly simple texts. These translation models are fed a huge amount of data, but that data is sorted to serve a specific purpose as efficiently as possible… and both the input data and the results are checked.

In short, these LLMs are changing the world. They are reliable tools, and the only danger is what we humans can do with them.

But when it comes to consumer LLMs and image generators such as ChatGPT, Grok, Copilot, Perplexity, DALL-E, Stable Diffusion, Midjourney, AdCreative, etc., forget all that!

A consumer LLM is a program whose primary objective is to provide an answer, under two conditions: the answer must be as coherent as possible, and it must satisfy the customer who asked the question. To do this, these LLMs are fed with EVERYTHING! This is the basis of the legitimate scandal over the plundering of copyrighted works (art, literature, audiovisual, music) by the companies that feed these AIs, but also of the biggest (and least-known) scandal: the exploitation of your own data as food for these LLMs, including your photos and those of your children. And finally, and most importantly, these AIs are fed data that is absolutely unsorted and unverified!

To quote Cory Doctorow from some 15 years ago: “90% of the content on the Internet is shit… and the remaining 10% is questionable.” And it is primarily this shit, this junk content dominating the roughly 200 zettabytes (1 zettabyte = a trillion GB) that make up the entire internet, that feeds the AI you use every day.

I repeat, to be very clear: the data fed to the mainstream LLMs you use is never sorted or verified, and comes almost entirely from the internet, a data source where 90% of the content ranges from dubious to completely false and misleading!

2- It’s worse than you think

And we’re going to get into it…

Yes, but it’s moderated!

Yes, indeed, mainstream LLMs are protected against certain abuses by response-control systems. You can’t ask an LLM to write a torture scene or explain how to mass-produce botulinum toxin (the deadliest and most powerful poison on Earth). But some LLMs will change their answers (or refuse to answer) depending on the company that runs them, whenever your question touches a political, social, or religious topic. Grok is now fascist, ChatGPT promotes ultra-libertarianism, DeepMind crashes as soon as you bring up gender studies, and Apertus, the Swiss LLM, refuses to say anything bad about Switzerland. Moderation depends on the people who manage these models, and it is minimal, covering only what bothers them… what could possibly land them in prison (not that they fear much), or, above all, what might shock old people and conservative families. Am I exaggerating? Not even…

Oh yes… and it’s very easy to convince an LLM to answer the worst questions. It took me less than 30 minutes to convince Microsoft’s Copilot (supposedly an AI with remarkable security moderation) to provide me with a step-by-step guide to creating and distributing botulinum toxin. I won’t tell you how, but hacking an LLM is child’s play.

So yes, it’s moderated, but only for a narrow slice of content, driven by political agendas or image concerns, and on top of that, the moderation is incredibly easy to bypass.

AI gives me answers to my questions!

That’s exactly the purpose of an LLM. Whatever the answer, it has to give you one, and that answer has to be coherent (we didn’t say accurate or verified… just coherent), be the one you expect, satisfy you, please you, and make you want to keep discussing, asking questions, and getting results. And to do that, it must convince you that it is not wrong and that you can trust it!

Does this answer have to be correct, based on accurate sources and verifiable data? No. That is absolutely not part of its program, nor is it one of the requirements dictated by the company that created it. An LLM draws its answers from the vast sources of the internet and elsewhere, using statistical conclusions based on an analysis of your question—and your discussion in general. If you want information about the flat Earth and you ask questions because you’re curious, the LLM will provide you with all the theories proving that the Earth is flat. Which is factually incorrect! But the LLM doesn’t care, because it doesn’t know which data is right or wrong in the data it searches. It just knows that, statistically, based on your questions, the most consistent answer to please you is: here is the evidence that the Earth is flat.

And now a perverse, very human mechanism comes into play, one that AI has made its own: not only does it tell you that the Earth is flat and provide you with evidence, but since it is programmed to convince you of its results, the LLM can invent evidence, drawn from the immensity of its data, exploiting one of the essences of humanity: lying. Except an LLM does not understand the meaning or the concept of lying. It is simply an excellent tool for being convincing, providing a pleasing answer, and satisfying you! At no point in this entire process is there any mechanism for checking the data or the accuracy of the information. Zero. The LLM will serve you, with the same aplomb, accurate information, false information, and lies it has invented, all to convince you, please you, and give you the answer you expect.

AI is invading and corrupting the Internet

This is exactly what is happening, and it is worse than you think! With an LLM, I can, in just a few prompts, have it write me a long article on any current topic that looks like a proper piece of journalism. To do so, the LLM digs into its data, mostly the Internet, to produce the text. How accurate is it at this kind of exercise? The latest tests by the Tow Center for Digital Journalism found an average error rate of 60% per article, with a record of 93% in Grok’s case.

But hey… I proudly publish my article. Within seconds, every mainstream AI has turned it into yet another source in its data. With my mistakes. An LLM doesn’t check anything, so it doesn’t spot mistakes or lies.

Now imagine the same thing happening hundreds of thousands of times, millions of times, every single day, for the past three years.

Given that article production by LLMs can even be automated, the same studies estimate that almost 100 million AI-produced articles appear every day, made in exactly this way… I don’t know the figure for images, but in my opinion it is just as high.

This is the growing pollution of the Internet’s information landscape. NewsGuard has identified 1,200 disinformation websites automatically generated by LLMs. Those articles, in turn, become sources for other LLMs, at an exponential rate. The more AI is used, the less reliable it becomes. The more people trust LLMs and publish and share their responses on the internet, the more AI becomes a machine for lying, deceiving, and fabricating.

Currently, according to tests carried out by a team at EPFL, LLM answers on academic subjects, all drawn from Wikipedia articles (which have an average reliability rate of over 90%), are at best only 50% accurate. On average, 25 to 30% of the answers are nonsense invented by the machine to satisfy the person asking the question.

Within the next twelve months, since this contamination is exponential, 90% of LLM data is expected to be, in one way or another, created by other LLMs… 90% of the data AI uses will have been created by AI. Yet AI verifies nothing, makes mistakes, lies, is completely unaware of it, and is not controlled in any way.
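To show what “exponential” means here, a crude toy model (the numbers are mine, invented purely for illustration; this is not the EPFL or NewsGuard methodology): human writing stays roughly flat while automated output keeps compounding a little every day, and the machine-generated share of the pool drifts toward 90% within a year.

```python
# Toy model of "AI content feeding AI". All parameters are assumptions made up
# for illustration; only the shape of the curve matters.
human_total = 100.0    # existing human-written content (arbitrary units)
ai_total = 0.0
human_per_day = 1.0    # humans keep writing at a constant pace (assumption)
ai_per_day = 1.0       # today's automated output...
ai_growth = 0.01       # ...which grows 1% per day because it costs nothing (assumption)

for day in range(1, 366):
    human_total += human_per_day
    ai_total += ai_per_day
    ai_per_day *= 1 + ai_growth
    if day % 90 == 0 or day == 365:
        share = ai_total / (human_total + ai_total)
        print(f"day {day:3d}: {share:.0%} of the pool is machine-generated")
```

Change the assumptions and the date shifts, but the direction does not: an automated source that compounds will always end up swamping one that does not.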

3- Your AI-written roleplay is worse than bad

You feel that you are not a good writer, or you would like to be seen as a master of long, sophisticated paragraphs, but you are too lazy to write it all yourself, so why not use AI to do it for you, right?

Because everyone can see that you’re not writing it, and that AI is doing it for you. If it were good quality, enjoyable to read, and made people want to participate in your roleplay and respond, that would be great! But that’s not the case!

Why?

Because the AI wants to convince you

An LLM has to convince you and prove that it’s right, so when it writes an emote, it has to be right. And wanting to be right at all costs in roleplay means wanting to win at all costs, which results in texts that are arrogant, pretentious, and often irrelevant. I insist: AI does not try to write convincing text for other participants in a roleplay scene. It only wants to convince you, the player using it, that it knows what it’s doing, in order to please you.

Because AI doesn’t understand context

No matter what you do to get AI to write your emote, even if you feed it the text of the people you’re talking to, or the contextual or literary inspiration it should draw on, AI has no awareness of the general or local context, and it is unaware that it has no awareness. So the text it generates gives the entirely justified impression that you, the player, have no idea what is going on around you, what the other characters are saying and doing, and that the only thing you care about is continuing your own text and the thread of your own roleplay, without taking anyone else or the setting into account.

Because AI has no nuance

An LLM answers a question. And in the specific case of roleplay, the LLM responds to another emote in order to continue the text. But it doesn’t know what the text really means; it has no concept of psychological, social, or diplomatic nuance. It just knows that it has to respond the way the user asks it to (while remaining convincing and pleasing to the user, and therefore finding a way to be right). It is capable of producing rich, sophisticated text with a veneer of politeness, but with no idea of the concepts of diplomacy, restraint, psychological finesse, or human social relations. It picks out data to spit out sentences, strings of words that are statistically plausible in relation to the text it is responding to. But it doesn’t know what it’s writing, or why. In short, the text this AI produces gives the impression (an accurate one, since that is exactly what is happening!) of reading a response written by an egocentric, arrogant idiot who himself has no idea what he is really saying.

Because AI lies

I repeat: the data fed to AI is largely false, and AI has no way of knowing this, nor any reason to care. But it has to be convincing, so as far as it is concerned, everything it provides as an answer is true. AI lies completely and brazenly, inventing data and even manipulating it for the sole purpose of pleasing its user. And you, the user who relies on AI this way, come across not only as an arrogant idiot, but as an arrogant idiot who spouts unfathomable bullshit without the slightest doubt. Because, well, the mistakes and the lies are obvious to everyone else!

Conclusion

Should you not use AI?

I wouldn’t be so categorical.

A French friend of mine, who is severely dyslexic and therefore does not write well in French and cannot write in English, used ChatGPT on my advice to translate her emotes. Not to write them. To translate them! We did a few tests, and the result was very good, even if it was more polite and sometimes a little more pompous than her own style.

Similarly, I’ve seen people chatting with an LLM that played the role of a fictional character in a human/AI exchange. It’s not my cup of tea, but in a limited, controlled setting, like a prepared scene, it produced interesting results.

And finally, have fun generating images for your characters, logos, and scenes with generative AI. I don’t like generative AI, but not everyone can draw, and I don’t blame you for wanting to create and enjoy yourself!

But!

1- Don’t use AI to provide you with explanations or quotes on a specific subject, especially a literary one! A mainstream LLM doesn’t know the difference between what is made up, what is wrong, and what is an accurate source; it mixes everything together, and to convince and please you, it will invent things and lie on top of data that is already potentially crap. A 60% error rate!!! Would you trust someone who gets it wrong more often than not?

2- And don’t use AI to write for you. AI can’t write, play a role, or show the slightest subtlety. It doesn’t even know what it’s saying, for all the reasons I explained above. Even having a text reviewed by AI requires proofreading and correction afterwards, because it will inevitably have made mistakes, and worse, it will stubbornly keep making them, because it is programmed never to doubt itself! A 60% error rate!!! 60% bullshit!

And if, despite everything, you persist, understand that people are not going to like your emotes, your participation, your roleplay. They will just lose patience, get annoyed, and avoid you, because who wants to talk to an AI in a role-playing game between humans? Especially an AI that behaves like an arrogant, ignorant, and deceitful jerk? Even if it uses beautiful phrases? No one.

Sources:

https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)

https://theconversation.com/what-are-ai-hallucinations-why-ais-sometimes-make-things-up-242896

https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html

https://theconversation.com/ai-generated-misinformation-can-create-confusion-and-hinder-responses-during-emergencies-263081

https://www.cjr.org/tow_center/why-ai-models-are-bad-at-verifying-photos.php

https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php

https://en.wikipedia.org/wiki/Artificial_intelligence

https://www.leparisien.fr/high-tech/un-veritable-business-sur-google-lavenement-des-sites-dinformation-generes-par-ia-08-06-2025-RZHGO7EN7RFEBN5K66SS4CIEJU.php

https://www.leparisien.fr/high-tech/desinformation-pourquoi-lia-est-de-moins-en-moins-fiable-05-09-2025-EVZXRC6G7NECPPWKEP7HJKUDNM.php

https://tech.co/news/list-ai-failures-mistakes-errors
