Even if someone's inaccurately using "AI" as a synonym for LLMs, that claim would still be false - because LLMs work. You can use one right now.
An LLM spitting out false information isn't a sign it's not working. That's not what LLMs are designed for. They're chatbots - not generally intelligent systems. They don't think - they talk.
I'm not really interested in engaging in discussions about what you or anyone else thinks my underlying motives are. You're free to point out any factual inaccuracies in my responses, but there's no need to make it personal and start accusing me of being dishonest.
Right... You want to elaborate on that? This is the first time I've encountered such opposition to the practice of sitting still and learning to pay attention to one's mind. Are you sure you know what, for example, mindfulness meditation actually entails?
My sister's husband has a 400hp Audi, and in my opinion it's a much more fun car than an electric one with double the power. Objectively better isn't always what people actually desire. A digital watch, for example, is less desirable than a mechanical one despite being better at accurately and reliably telling the time.
Something being useful doesn't imply it's good or beneficial. Those terms are not synonymous. Usefulness describes whether a thing achieves a particular goal or serves a specific purpose effectively.
A torture device is useful for extracting information. A landmine is useful for denying an area to enemy troops.
In computer science Artificial Intelligence refers to any system designed to perform tasks that would typically require human intelligence. That includes everything from playing chess to recognizing patterns, translating languages, or generating text.
The first ever AI system was Logic Theorist, written by Allen Newell, Herbert Simon, and Cliff Shaw in 1956.
Trying to redefine terms is not helpful. GenAI is AI. It's not misuse of the term.
It's a Large Language Model. It doesn't "know" anything, doesn't think, and has zero metacognition. It generates language based on patterns and probabilities. Its only goal is to produce linguistically coherent output - not a factually correct one.
It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it's saying.
So no, it doesn't "guess." It doesn't even know it's answering a question. It just talks.
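To make the "patterns and probabilities" point concrete, here's a deliberately tiny sketch - a toy bigram sampler, nothing like a real LLM - showing how a model can produce fluent-looking word sequences with no notion of truth at all. The corpus and function names are my own invention for illustration.

```python
import random

# Toy corpus containing both true and false statements.
corpus = "the sky is blue . the sky is green . the grass is green .".split()

# Build bigram counts: which words were observed following which.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, n=4, seed=0):
    """Pick each next word purely from observed frequencies.

    The sampler optimizes for "what usually follows" - fluency -
    never for factual correctness.
    """
    random.seed(seed)
    out = [start]
    for _ in range(n):
        out.append(random.choice(follows[out[-1]]))
    return " ".join(out)

# Produces a grammatical-looking sequence; whether it happens to be
# true ("the sky is blue") or false ("the sky is green") is an
# accident of the training data, not something the model checks.
print(generate("the"))
```

Real LLMs replace the bigram table with a neural network over far longer contexts, but the objective is the same shape: predict a plausible next token, not a true one.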
Is cruise control useless because it doesn't drive you to the grocery store? No. It's not supposed to. It's designed to maintain a steady speed - not to steer.
Large Language Models, as the name suggests, are designed to generate natural-sounding language - not to reason. They're not useless - we're just using them off-label and then complaining when they fail at something they were never built to do.
The hostility just seems unnecessary and unproductive from my point of view. Unless of course your intention is to hurt - but I'll give you the benefit of the doubt and assume you'd rather change minds instead.
It's a nuanced discussion, which is why I don't think either fanaticism or militant opposition is going to get us anywhere. This is a technology community - people should be free to have civil discussions about technology. Criticism is just as valid without the jabs and insults.