Zaleramancer

(They/Them) I like TTRPGs, history, (audio and written) horror and the history of occultism.

Posts: 1 · Comments: 63 · Joined: 3 mo. ago
  • Hi, once more, I'm happy to have a discussion about this. I have very firm views on it, and enjoy getting a chance to discuss them and work towards an ever greater understanding of the world.

    I completely understand the desire to push back against certain kinds of "understandings" people have about LLMs, due to their potentially harmful inaccuracy and the misunderstandings they could create. I have had to deal with very weird, like, existentialist takes on AI art lacking a quintessential humanity that all human art is magically endowed with- which, come on, there are very detailed technical art reasons why they're different, visually! It's a very complicated phenomenon, but it's not an inexplicable cosmic mystery! Take an art critique class!

    Anyway, I get it- I have appreciated your obvious desire to have a discussion.

    On the subject of understanding, I guess what I mean is this: Based on everything I know about an LLM, their "information processing" happens primarily in their training. This is why you can run an LLM instance on, like, a laptop but it takes data centers to train them. They do not actually process new information, because if they did, you wouldn't need to train them, would you- you'd just have them learn and grow over time. An LLM breaks its training data down into patterns and shapes and forms, and uses very advanced techniques to generate the most likely continuation of a collection of words. You're right in that they must answer, but that's because their training data is filled with that pattern of answering the question. The natural continuation of a question is, always, an answer-shaped thing. Because of the miracles of science, we can get a very accurate and high fidelity simulation of what that answer would look like!
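    Just to make the "most likely continuation" idea concrete, here's a toy sketch in plain Python- a bigram model built on a few made-up sentences (the corpus, the names, everything here is just my illustration, nothing like a real transformer). It continues text purely by following the patterns in its "training data," with no notion of what any word means:

        import random
        from collections import defaultdict

        # Made-up miniature "training data"- the model can only ever echo
        # patterns that appear in here; it has no idea what any word means.
        corpus = (
            "what is a rabbit ? a rabbit is a small burrowing mammal . "
            "what is a citation ? a citation is a reference to a source . "
            "what is a rabbit ? a rabbit is a small mammal with long ears ."
        ).split()

        # Count which word tends to follow which word (a bigram table).
        following = defaultdict(list)
        for current_word, next_word in zip(corpus, corpus[1:]):
            following[current_word].append(next_word)

        def continue_text(prompt, length=10):
            """Continue the prompt by repeatedly sampling a likely next word."""
            words = prompt.split()
            for _ in range(length):
                candidates = following.get(words[-1])
                if not candidates:
                    break  # no learned pattern left to continue
                words.append(random.choice(candidates))
            return " ".join(words)

        # A question gets continued with an answer-shaped string of words,
        # because that's the pattern in the data- it can also wander into
        # nonsense, since it has no idea what it's saying.
        print(continue_text("what is a rabbit ?"))

    Nothing in there "knows" what a rabbit is; the question is just followed by an answer-shaped thing because that's what the data does.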

    Understanding, to me, implies a real processing of new information and a synthesis of prior and new knowledge to create a concept. I don't think it's impossible for us to achieve this technologically- humans manage it, and I'm positive that we could eventually figure out a synthetic method of replicating it. I do not think an LLM does this. The behavior they exhibit and the methods they use seem radically inconsistent with that end. After all, the ultimate goal was not to create a thinking thing, but to create something that's able to make human-like speech that's coherent, reliable and conversational. They totally did that! It's incredibly good at that. If it were not for their political, environmental and economic context, I would be so psyched about using them! I would have been trying to create templates to get an LLM to be an amazing TTRPG oracle if it weren't for the horrors of the world.

    It's incredible that we were able to have a synthetic method of doing that! I just wish it was being used responsibly.

    An LLM, based on how it works, cannot understand what it is saying, or what you are saying, or what anything means. It can continue text in a conversational and coherent way, and do so reliably. The size, depth and careful curation of its training data mean that those responses are probably as close to an appropriate response as they can be. This is why, for questions of common knowledge, or anything you'd do a light google for, they're fine. They will provide you with an appropriate response because the question probably exists hundreds of thousands of times in the training data; and the information you are looking for also exists in huge redundancies across the internet that got poured into that data. If I ask an LLM which of the characters of My Little Pony has a Southern accent, it will probably answer correctly because that information has been repeated so much online that it probably dwarfs the human written record of all things from 1400 and earlier.

    The problem becomes evident when you ask something that is absolutely part of a structured system in the English language, but which has a highly variable element to it. This is why I use the "citation problem" when discussing them, because they're perfect for this: A citation is part of formal and informal essays, which are deeply structured and information dense, making them great subjects for training data. Their structure includes a series of regular, repeating elements in particular orders: name, date, book name, year, etc- these are present and repeated with such regularity that the pattern must be quite established for the LLM as a correct form of speech. The names of academic books are often also highly patterned, and an LLM is great at creating human names, so there's no problem there.

    The issue is this: How can an LLM tell if a citation it makes is real? It gets a pattern that says, "The citation for this information is:" and it continues that pattern by putting a name, date, book title, etc in that slot. However, this isn't like asking what a rabbit is- the pattern of citations leads into an endless warren of hundreds of thousands of names, book titles, dates, and publishing companies. It generates them, but it cannot understand what a citation really means, just that there is a pattern it must continue- so it does.

    To quote you: "Let me also ask you a counter question: do you think a flat-earther understands the idea of truth? After all, they will blatantly hallucinate incorrect information about the Earth's shape and related topics. They might even tell you internally inconsistent statements or change their mind upon further questioning. And yet I don't think this proves that they have no understanding about what truth is, they just don't recognize some facts as true."

    A flat-earther has some understanding of what truth is, even if their definition is divergent from the norm. The things they say are deeply inaccurate, but you can tell that they are the result of a chain of logic that you can ask about and follow. It's possible to trace flat-earth ideas down to sources. They're incorrect, but they're arrived at because of an understanding of prior (incorrect) information. A flat-earther does not always invent their entire argument and the basis for their beliefs on the spot; they are presenting things they know about from prior events- they can show the links. An LLM cannot tell you how it arrived at a conclusion, because if you ask it, you are just receiving a new continuation of your prior text. Whatever it says is accurate only when probability and data set size are on its side.

  • And, yes, I can prove that a human can understand things when I ask: Hey, go find some books on a subject, then read them and summarize them. If I ask for that, and they understood it, they can then tell me the names of those books because their summary is based on actually taking in the information, analyzing it and reorganizing it by apprehending it as actual information.

    They do not immediately tell me about the hypothetical summaries of fake books and then state with full confidence that those books are real. The LLM does not understand what I am asking for, but it knows what the shape is. It knows what an academic essay looks like and it can emulate that shape, and if you're just using an LLM for entertainment that's really all you need. The shape of a conversation for a D&D NPC is the same as the actual content of it, but the shape of an essay is not the same as the content of that essay. Essays are too diverse, they contain critical information, and they are about that information. The LLM does not understand the information, which is why it makes up citations- it knows that a citation fits in the pattern, and that citations are structured with a book name and author and all the other relevant details. None of those are assured to be real, because it doesn't understand what a citation is for or why it's there, only that one should exist. It is not analyzing the books and reporting on them.

  • Hello again! So, I am interested in engaging with this question, but I have to say: My initial post is about how an LLM cannot provide actual, real citations with any degree of academic rigor for a random esoteric topic. This is because it cannot understand what a citation is, only what it is shaped like.

    An LLM deals with context over content. They create structures that are legible to humans, and they are quite good at that. An LLM can totally create an entire conversation with a fictional character in their style and voice- that doesn't mean it knows what that character is. Consider how AI art can have problems that arise from the fact that the model understands the shape of something but doesn't know what it actually is- that's why early AI art had a lot of problems with objects ambiguously becoming other objects. The fidelity of these creations has improved with the technology, but that doesn't imply understanding of the content.

    Do you think an LLM understands the idea of truth? Do you think if you ask it to say a truthful thing, and be very sure of itself and think it over, it will produce something that's actually more accurate or truthful- or just something that has the linguistic hallmarks of being truthful? I know that an LLM will produce complete fabrications that distort the truth if you expect a baseline level of rigor from it, and I demonstrated that above, in that the LLM couldn't even accurately report the name of a book it was supposedly using as a source.

    What is understanding, if the LLM can make up an entire author, book and bibliography if you ask it to tell you about the real world?

  • What's yours? I'm stating that LLMs are not capable of understanding the actual content of any words they arrange into patterns. This is why they create false information, especially in places like my examples with citations- they are purely the result of it creating "academic citation" sounding sets of words. It doesn't know what a citation actually is.

    Can you prove otherwise? In my sense of "understanding," it's actually knowing the content and context of something- being able to subject it to analysis and explain it accurately and completely. An LLM cannot do this. It's not designed to- there are neural-network AIs built on similar foundational principles toward divergent goals that can produce remarkable results in terms of data analysis, but not ChatGPT. It doesn't understand anything, which is why you can repeatedly ask it about a book only to look it up and discover it doesn't exist.

  • Let me try again: In the literal sense of it matching patterns to patterns without actually understanding them.

  • As I understand it, most LLMs are almost literally the Chinese room thought experiment. They have a massive collection of data, strong algorithms for matching letters to letters in a productive order, and sufficiently advanced processing power to make use of that. An LLM is very good at presenting conversation; completing sentences, paragraphs or thoughts; or answering questions of very simple fact- they're not good at analysis, because that's not what they were optimized for.

    This can be seen when people discovered that if you ask them to do things like tell you how many times a letter shows up in a word, or do simple math that's presented in a weird way, or write a document with citations- they will hallucinate information, because they are just doing what they were made to do: complete sentences, expanding words along a probability curve that produces legible, intelligible text.

    I opened up ChatGPT and asked it to provide me with a short description of how Medieval European banking worked, with citations, and it provided me with what I asked for. However, the citations it made were fake.

    The minute I asked it about them, I assume a bit of sleight of hand happened: it's been set up so that if someone asks a question like that, it's forwarded to a search engine that verifies whether the book exists, probably using WorldCat or something. Then I assume another search is made to provide the prompt for the LLM to present the fact that the author does exist, and possibly accurately name some of their books.

    I say sleight of hand because this presents the idea that the model is capable of understanding it made a mistake, but I don't think it does- if it knew that the book wasn't real, why would it have mentioned it in the first place?

    I tested each of the citations it made. In one case, I asked it to tell me more about one of them and it ended up supplying an ISBN without me asking, which I dutifully checked. It was for a book that exists, but that book didn't share a title or author with the citation, because those were made up. The book itself was about the correct subject, but the LLM can't even tell me what the name of the book is correctly; and I'm expected to believe what it says about the book itself?

  • It's complicated. The current state of the internet is dominated by corporate interests towards maximal profit, and that's driving the way websites and services are structured towards very toxic and addictive patterns. This is bigger than just "social media."

    However, as a queer person, I will say that if I didn't have the ability to access the Internet and talk to other queer people without my parents knowing, I would be dead. There are lots of abused kids who lack any other outlets to seek help, talk to people and realize their problems, or otherwise find relief for the crushing weight of familial abuse.

    Navigating this issue will require grace, awareness and a willingness to actually address core problems and not just symptoms. It doesn't help that there is an increasing amount of purity culture and "for the children" legislation that will curtail people's privacy and ability to use the internet, and be used to push queer people and their art or narratives off of the stage.

    Requiring age verification reduces anonymity and makes it certain that some people will be unable to use the internet safely. Yes, it's important in some cases, but there's also a cost to it.

    There's also the fact that western society has systemically ruined all the third spaces and other places for children to exist in that aren't their home or school. It used to be that it was possible for kids and teens to spend time at malls, or just wandering around a neighborhood. There were lots of places where they were implicitly allowed to be- but those are overwhelmingly being closed, commercialized or subjected to the rising tide of moral panic and paranoia that drives people to call the cops on any group of unknown children they see on their street.

    Police violence and severity of response has also heightened, so things that used to be minor, almost expected misdemeanors for children wandering around now carry the literal risk of death.

    So children are increasingly isolated, locked down in a context where they cannot explore the world or their own sense of self outside the hovering presence of authority- so they turn to the internet. Cutting that off will have repercussions. Social media wouldn't be so addictive for kids if they had other venues to engage with other people their age that weren't subject to the constant scrutiny of adults.

    Without those spaces, they have to turn to the only remaining outlet. This article is woefully inadequate to answer the fundamental, core problems that produce the symptoms we are seeing; and its implementation will not rectify the actual problem. It will only add additional stress to the system and produce a greater need to seek out even less safe locations for the people it ostensibly wishes to protect.

  • The classic conflict of automation. Due to the structure of our economic system, the benefits of reducing labor are not that we all have more time to pursue art, philosophy, joy or love- instead talented, interesting people are forced out of jobs they can do well (and enjoy) and into financial stress and confusion.

  • I think it's very telling that the medication's success metrics, in part of the longer article linked in this one, are about whether the meds made them perform better academically, as opposed to making them have a happier and better experience of their own lives.

    They really focus on it, like, "Oh, sure, they seem like they're doing better but it's only how they feel, they don't suddenly get smarter."

    And, yeah, my ADHD drugs don't make me smarter, but they drastically reduce the strain of dealing with tedious, pointless and unstimulating tasks.

  • I wonder which sci-fi novels it's mimicking here.

  • Yeah! Also, sometimes I use emulators that work well on phones to play older games, I had fun playing Final Fantasy Legends 2 with RetroArch.

  • I enjoy the way this game plays with dice- it's nice to see a designer who's thinking about them as physical objects and trying out novel ways of employing them.

    Overall, I think this game is very cute- I like the way wounds/stress is represented and I think the variable dice sizes are fun. They put me in mind of the Devil City and its 77 Vicious Princes. While I admire the creative and thoughtful exploration of dice as a tool, I do feel like this project seems a bit aimless. I think as a project it feels more like a personal thought experiment than a game, not because of a lack of complexity, but because of an unclear intention.

    I would be pleased to see other things they make, because I think their ideas show promise.

  • My suggestion is to either change the context you play games in, or pick games that are very cognitively different from what you normally do at work.

    You can change your context with a new console, but I think it may be cheaper to do something like buying a controller and playing games while standing up, or on your couch/armchair, or playing games while sitting on a yoga ball. The point is to trick your brain, because it's associated sitting at a desk in front of a computer with boring tedium. Change the presentation and your subconscious will interpret it differently.

    You can also achieve this by identifying the things that you have to do in your job that mirror videogame genres you enjoy and picking a game that shares few of those qualities.

    I worked at the post office for years, doing mail processing, and my enjoyment of management and resource distribution style games went down sharply during that time because of the cognitive overlap- I played more roguelikes and RPGs as a consequence.

  • Thank you, I am trying to be less abrasive online, especially about LLM/gen-AI stuff. I have come to terms with the fact that my desire for accuracy and truthfulness in things skews way past the median to the point that it's almost pathological, which is why I ended up studying history in college, probably. To me, the idea of using an LLM to get information seems like a bad use of my time- I would methodically check everything it says, and the total time spent would vastly exceed any amount saved, but that's because I'm weird.

    Like, it's probably fine for anything you'd rely on skimming a Wikipedia article for. I wouldn't use them for recipes or cooking, because that could give you food poisoning if something goes wrong, but if you're just like, "Hey, what's Ice-IV?" then the answer it gives is probably equivalent in 98% of cases to checking a few websites. People should invest their energy where they need it, or where they have to, and it's less effort for me to not use the technology, but I know there are people who can benefit from it and have a good use case for it.

    My main point of caution for people reading this is that you shouldn't rely on an LLM for important information- whatever that means to you, because if you want to be absolutely sure about something, then you shouldn't risk an AI hallucination, even if it's unlikely.

  • I'm not a frequent user of LLM, but this was pretty intuitive to me after using them for a few hours. However, I recognize that I'm a weirdo and so will pick up on the idea that the prompt leads the style.

    It's not like the LLM actually understands that you are asking questions, it's just that it's generating a procedural response to the last statement given.

    Saying please and thank you isn't the important part.

    Just preface your use with, like,

    "You are a helpful and enthusiastic with excellent communication skills. You are polite, informative and concise. A summary of follows in the style of your voice, explained in clearly and without technical jargon."

    And you'll probably get promising results, depending on the exact model. You may have to massage it a bit before you get consistent good results, but experimentation will show you the most reliable way to get the desired results.
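    If you're doing this through an API rather than the chat box, the usual way to supply a preface like that is as a "system" message. Here's a minimal sketch with the OpenAI Python client- the model name and the exact wording are just my assumptions, swap in whatever you actually use:

        from openai import OpenAI

        client = OpenAI()  # assumes an OPENAI_API_KEY is set in your environment

        # The preface from above, trimmed down; the bracketed bits are slots to fill in.
        preface = (
            "You are a helpful and enthusiastic [assistant] with excellent "
            "communication skills. You are polite, informative and concise."
        )

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name, pick whichever you have access to
            messages=[
                {"role": "system", "content": preface},  # the preface leads the style
                {"role": "user", "content": "Summarize how medieval European banking worked."},
            ],
        )

        print(response.choices[0].message.content)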

    Now, I only trust LLMs as a tool for amusing yourself by asking them to talk in the style of your favorite fictional characters about bizarre hypotheticals, but at this point I accept there's nothing I can do to discourage people from putting their trust in them.

  • Intellectual labor is hard and humans don't like doing difficult things; pair that with a culture that's increasingly hostile to education and a government that wants you ignorant, and it's easy to see how this happens in the US.

  • Hopefully everything goes smoothly. Based on my experience, once you get to specialists they can pretty quickly arrive at a diagnosis if they're not being purposefully obtuse. After all, the signs are pretty clear once they've been laid out in front of you and you've had personal experience with identifying them.

  • This is really heartening to see. Thanks for sharing it.

  • I wish you the best of luck. If it doesn't go well, I suggest looking for people who specialize in ADHD/Autism to go to if you can. Hopefully, though, everything goes great! It went very well for me and my partner when we sought a diagnosis, and I hope you get similar fortune. : )

  • Tabletop Gaming @beehaw.org
    Zaleramancer @beehaw.org

    Tell me about your favorite TTRPG!

    Hey all,

    I'm new here and I wanted to start a discussion about TTRPGs that people enjoy. I really like seeing people talk passionately about those sorts of things. I'm, personally, a big fan of games by Jenna Moran (Nobilis, Glitch, etc) and have a lot of positive experience with certain Powered by the Apocalypse Games. I'm also fascinated by a lot of OSR content because I feel like it has the potential to capture some of the more interesting parts of early D&D (my partner and I discuss AD&D quite a bit).

    I used to write homebrew for Exalted and I've played some WoD games, but it's not my true passion, except maybe for Changeling the Lost. I had a long-running game of Ars Magica 5e a while ago, which was a tremendous experience and really ignited my interest in more dense, bean-countery games that I had previously been discounting in favor of lighter, more narrative-drama experiences.

    I'm not as big a fan of D&D as some, but my partner and I did meet because of 3.5e h