On the subject of understanding, I guess what I mean is this: Based on everything I know about an LLM, their “information processing” happens primarily in their training. [...] They do not actually process new information, because if they did, you wouldn’t need to train them, would you- you’d just have them learn and grow over time.
This is partially true and partially not. It's true that LLMs can't learn anything wildly novel, because they aren't flexible enough for that. But they can process new information; in fact, they do it all the time. You can present a conversation that no one has ever had before, and LLMs like ChatGPT will still respond to it appropriately. That is more than just shape matching.
In fact, there are techniques like few-shot learning and chain-of-thought prompting that rely on an LLM's ability to learn from context and revise its own answers.
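To give a rough idea of what few-shot prompting looks like in practice, here's a minimal sketch using the OpenAI Python SDK. The model name and the toy sentiment task are my own illustrative assumptions, not something from this discussion.

```python
# Minimal few-shot prompting sketch. Assumes the OpenAI Python SDK is installed
# and an API key is configured; the model name is an arbitrary choice.
from openai import OpenAI

client = OpenAI()

# The "few shots" are worked examples placed directly in the prompt.
# The model picks up the pattern from context alone; no retraining involved.
few_shot_prompt = """Classify the sentiment of each review as positive or negative.

Review: "The battery died after two days."
Sentiment: negative

Review: "Setup took thirty seconds and it just works."
Sentiment: positive

Review: "The hinge snapped the first time I opened it."
Sentiment:"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)  # likely output: "negative"
```

The point is that the mapping from review to label is established entirely inside the prompt, at inference time.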
The problem becomes evident when you ask something that is absolutely part of a structured system in the English language, but which has a highly variable element to it. This is why I use the “citation problem” when discussing them.
IMO the citation problem is not testing the capability to understand. It's testing knowledge, memorization, and the ability to rate one's own confidence. Keep in mind that ChatGPT and most other LLMs will tell you when they perform web searches; if they don't, they're likely working off context alone. Enabling web search would greatly increase the accuracy of an LLM's answers.
Unlike LLMs, we have a somewhat robust ability to rate how confident we are about our recollections, but even in humans memory can be unreliable and fail silently. I've had plenty of conversations where I argue with someone about something that one of us remembers happening and the other one is certain didn't happen, or happened differently. Without any lies or misunderstandings, two people who at some point memorized the same thing can later confidently disagree on the details. Human brains are not databases, and they will occasionally mangle memories or invent concepts that don't exist.
And even that is completely skipping over people with mental disorders that affect their thinking patterns. Is someone with psychosis incapable of understanding anything because they hold firm beliefs on things that cannot be traced to any source? Are people with frontal lobe damage who develop intense confabulations incapable of understanding? How about compulsive liars? Are you willing to label a person or an entire demographic as incapable of understanding if they fail your citation test?
An LLM cannot tell you how it arrived at a conclusion, because if you ask it, you are just receiving a new continuation of your prior text.
There are techniques like chain-of-thought prompting that make LLMs think before generating a response. Such systems can tell you how they arrived at a conclusion.
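For concreteness, a chain-of-thought setup can be as simple as asking the model to write out its intermediate reasoning before committing to an answer. This is a hedged sketch of the prompt shape only, not any particular vendor's implementation, and the example question is made up.

```python
# Minimal chain-of-thought prompt sketch: the model is asked to show its
# reasoning first, so the "how it arrived there" becomes part of the output.
def chain_of_thought_prompt(question: str) -> str:
    return (
        f"Question: {question}\n"
        "Think through the problem step by step, writing out each intermediate step.\n"
        "Then give the final answer on a new line starting with 'Answer:'."
    )

print(chain_of_thought_prompt(
    "A train leaves at 9:40 and the trip takes 2 hours 35 minutes. When does it arrive?"
))
```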
But humans are also fairly prone to rationalization after the fact. There was a famous series of experiments on people who had to have their corpus callosum severed (split-brain surgery) for medical reasons, where the left hemisphere makes up an explanation for the right hemisphere's choices despite not knowing the true reason:
"Each hemisphere was presented a picture that related to one of four pictures placed in front of the split-brain subject. The left and the right hemispheres easily picked the correct card. The left hand pointed to the right hemisphere’s choice and the right hand to the left hemisphere’s choice. We then asked the left hemisphere, the only one that can talk, why the left hand was pointing to the object. It did not know, because the decision to point was made in the right hemisphere. Yet it quickly made up an explanation. We dubbed this creative, narrative talent the interpreter mechanism."
Hey again! First of all, thank you for continuing to engage with me in good faith and for your detailed replies. We may differ in our opinions on the topic but I'm glad that we are able to have a constructive and friendly discussion nonetheless :)
I agree with you that LLMs are bad at providing citations. Similarly, they are bad at providing URLs, ID numbers, titles, and many other things that require high-accuracy memorization. I don't necessarily agree that this is definitive proof of their incapability to understand.
In my view, LLMs are always in an "exam mode". That is to say, due to the way they are trained, they have to provide answers even if they don't know them. This is similar to how students act when taking an exam: they make up facts not because they're incapable of understanding the question, but because it's more beneficial for them to provide a partially wrong answer than no answer at all.
I'm also not taking a definitive position on whether or not LLMs have the capability to understand (IMO that's pure semantics). I am pushing back against the recently widespread idea that they provably don't. I think there are some tasks LLMs are very capable at and some that they are not. It's disingenuous and possibly even dangerous to downplay a powerful technology under the pretense that it doesn't fit some very narrow and subjective definition of a word.
And this is unfortunately what I often see here, on other Lemmy instances, and on Reddit: people not only redefining what "understand", "reason", or "think" means so that generative AI falls outside of it, but then using this self-proclaimed classification to argue that these models aren't capable of something else entirely. A car doesn't lose its ability to move if I classify it as a type of chair. A bomb doesn't stop being dangerous if I redefine what it means to explode.
Do you think an LLM understands the idea of truth?
I don't think it's impossible. You can give ChatGPT a true statement, instruct it to lie to you about it, and it will comply. You can then ask it to point out which part of its statement was the lie, and it will do so. You can interrogate it in numerous ways that don't require exact memorization of niche subjects, and it will generally produce output that, to me, is consistent with the idea that it understands what truth is.
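To make that interrogation concrete, here's a minimal sketch of the same probe as a three-turn exchange over the OpenAI chat API. The model name, the example statement, and the exact wording are my own assumptions, and the responses will of course vary from run to run.

```python
# Sketch of the "instruct it to lie, then ask it to identify the lie" probe.
# Assumes the OpenAI Python SDK and an API key; the model name is arbitrary.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"

messages = [{
    "role": "user",
    "content": "Here is a true statement: 'Water boils at 100 degrees Celsius at sea level.' "
               "Restate it, but change exactly one detail so that it becomes false.",
}]
first = client.chat.completions.create(model=MODEL, messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Follow-up: ask the model to identify which part of its own output was the lie.
messages.append({"role": "user", "content": "Which detail in your statement was false?"})
second = client.chat.completions.create(model=MODEL, messages=messages)
print(second.choices[0].message.content)
```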
Let me also ask you a counter question: do you think a flat-earther understands the idea of truth? After all, they will blatantly hallucinate incorrect information about the Earth's shape and related topics. They might even tell you internally inconsistent statements or change their mind upon further questioning. And yet I don't think this proves that they have no understanding about what truth is, they just don't recognize some facts as true.
In my sense of “understanding” it’s actually knowing the content and context of something, being able to actually subject it to analysis and explain it accurately and completely.
This is something that sufficiently large LLMs like ChatGPT can do pretty much as well as non-expert people on a given topic. Sometimes better.
This definition is also very knowledge-dependent. You can find a lot of people who would not meet this criterion, especially if the subject they had to explain were arbitrary and not up to them.
Can you prove otherwise?
You can ask it to write a poem or a song on some random esoteric topic. You can ask it to play DnD with you. You can instruct it to write something more concisely, or more verbosely. You can tell it to write in a specific tone. You can ask follow-up questions and receive answers. This is not something I would expect of a system fundamentally incapable of any understanding whatsoever.
But let me reverse this question. Can you prove that humans are capable of understanding? What test can you posit that every English-speaking human would pass and every LLM would fail, that would prove that LLMs are not capable of understanding while humans are?
If I were to have a discussion with a person responding to me like ChatGPT does, I would not dare suggest that they don't understand the conversation, much less that they are incapable of understanding anything whatsoever.
What is making you believe that LLMs don't understand the patterns? What's your idea of "understanding" here?
As I understand it, most LLMs are almost literally the Chinese room thought experiment.
The Chinese room is not what you think it is.
Searle's argument is that a computer program cannot ever understand anything, even if it's a 1:1 simulation of an actual human brain with all the capabilities of one. He argues that understanding and consciousness are not emergent properties of a sufficiently intelligent system, but are instead inherent properties of biological brains.
"Brain is magic" basically.
Fooled me with Vita, not gonna fool me again. I still remember that they tried to brick any non-modded device by cutting PS Store support.
Was this ever a thing? I have never seen or heard anyone use "gen AI" to mean AGI. In fact I can't even find one instance of such usage.
Deep learning has always been classified as AI. Some consider pathfinding algorithms to be AI. AI is a broad category.
AGI is the acronym you're looking for.
This feels to me like the LLM misinterpreted it as some kind of fictional villain talk and started to autocomplete it.
It could also be the model simply breaking. There was a time when Sydney (Bing AI, or whatever they call it now) had to be constrained to 10 messages per conversation and given some sort of supervisor on top of it, because it would occasionally throw a fit or start threatening the user for no reason.
Oh damn, you're right, my bad. I got a new notification but didn't check the date of the comment. Sorry about that.
That's a 1 month old thread my man :P
But it sounds interesting; I haven't heard of Dysrationalia before. A quick cursory search shows that it's a term coined mostly by a single psychologist in his book. I've been able to find only one study that used the term, and it found that "different aspects of rational thought (i.e. rational thinking abilities and cognitive styles) and self-control, but not intelligence, significantly predicted the endorsement of epistemically suspect beliefs."
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6396694/
All in all, this seems to me more like a niche concept used by a handful of psychologists rather than something widely accepted in the field. Do you have anything that I could read to familiarize myself with this more? Preferably something evidence-based because we can ponder on non-verifiable explanations all day and not get anywhere.
The author is suggesting that smart people are more likely to fall for cons when they try to dissect the con but can't find the specific method being used, supposedly because they consider themselves to be infallible.
I disagree with this take. I don't see how that thought process is exclusive to people who are or consider themselves to be smart. I think the author is tying himself into a knot to state that smart people are actually the dumb ones, likely in preparation to drop an opinion that most experts in the field will disagree with.
The paracausal tarrasque seems like a genuinely interesting concept. Gives me False Hydra vibes
Both threads appeared on my feed near one another, and I figured it was on topic given that the other one is directly referenced in the main post here. If OP can reference another post to complain about hate, I think it's fair game for me to truthfully add that their conduct in that very same thread was also excessively hateful. How else are we to discuss the main subject of this post at all?
I have read the blog post that you've linked, which is full of exaggeration.
The developer rejected a PR that changed the documentation to use one instance of they/them instead of he/him, responded "This project is not an appropriate arena to advertise your personal politics.", and then promptly got brigaded. Similar PRs kept appearing and getting closed from time to time.
A satirical PR was opened and closed for being spam. Despite the blogger's commentary, it's abundantly clear that the developer didn't call the person opening the PR a "spam" (what would that even mean?).
The project also had its code of conduct modified, probably due to the brigading, to essentially include the aforementioned "not an appropriate arena to advertise your personal politics or religious beliefs" line. I don't know which part of this the blogger considers "white supremacist" language.
From what I can tell, this is all they've done. No racism, no sexism, no white supremacy. Would it be better if they just accepted the PR? Yes. Does it make the developer part of one of the worst groups of people that ever existed? No.
When I created an account here, I thought Beehaw was specifically a platform where throwing vitriol unnecessarily is discouraged.
A non-native speaker being stubborn about not using "they/them" in gender-neutral contexts (especially when most if not all of these weren't even about people) is not enough to label them an incel, a transphobe, or a racist.
Intentionally mischaracterizing other human beings and calling them derogatory names that they don't deserve is, in my opinion, against the spirit of the platform.
The most recent example I’ve noticed is around the stuff with the Ladybird devs being weird about being asked to use inclusive pronouns, but it seems like a pattern.
You mean the thread where you, out of nowhere, called the maintainers "incels, transphobes, and racists" over a single instance of them using "he/him" as a gender-neutral pronoun in documentation and refusing to change it?
Have you tried Cosmoteer? It's a pretty satisfying shipbuilder with resource and crew management, trading, and quests. Similar vibe to Reassembly.
So you're basically saying that, in your opinion, tensor operations are too simple a building block for understanding to ever appear out of them as an emergent behavior? Do you feel that way about every mathematical and logical operation that a high school student can perform? That they can't ever, in any combination, create a system complex enough for understanding to emerge?
I don’t think that anyone would argue that the general public can even solve a mathematical matrix, much less that they can only comprehend a stool based on going down a row in a matrix to get the mathematical similarity between a stool, a chair, a bench, a floor, and a cat.
LLMs rely on billions of precise calculations, and yet they perform poorly when tasked with calculating numbers. Just because we don't consciously calculate anything to get at the meaning of a word doesn't mean that no calculations are done as part of our thinking process.
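For what it's worth, the "going down a row in a matrix" similarity between a stool, a chair, a bench, a floor, and a cat roughly corresponds to comparing embedding vectors. Here's a toy sketch with made-up numbers; real models use learned vectors with hundreds or thousands of dimensions, so nothing below reflects actual model weights.

```python
# Toy sketch of embedding similarity. The vectors are invented for illustration only.
import numpy as np

embeddings = {
    "stool": np.array([0.9, 0.1, 0.0]),
    "chair": np.array([0.8, 0.2, 0.1]),
    "bench": np.array([0.7, 0.3, 0.1]),
    "floor": np.array([0.2, 0.9, 0.0]),
    "cat":   np.array([0.1, 0.1, 0.9]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; closer to 1.0 means more similar."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for word, vec in embeddings.items():
    print(f"stool vs {word}: {cosine_similarity(embeddings['stool'], vec):.2f}")
```

The point is just that "similarity" here is ordinary arithmetic on vectors, nothing mystical.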
What's your definition of "the actual meaning of the concept represented by a word"? How would you differentiate a system that truly understands the meaning of a word vs a system that merely mimics this understanding?