
  • Fuck, I'm an all seeing, prescient wizard! Neat!

    Uh, I mean... "yes of course, I intended to have Ai Horde piping baked in from the get go".

    Do let me know if my shit works with their shit to make...not shit.

    #####################

    EDIT: For fun, I asked my router to ask Qwen3-4B 2507 WTF Ai Horde was. I'm showing this for interest's sake, because it 1) shows you how the thing works IRL and 2) shows you how and when it's WRONG in a LOUD way, if you know how to read it.

    ######################

    This is in default mode, no KBs attached. Just straight YOLOing the question at it, cold, and asking it to argue from its own internal priors.

    AI-horde is a decentralized platform where users rent or sell computational power (GPU time) to run AI models. It connects AI model developers who need processing with individuals or organizations who have idle GPU hardware. The platform uses blockchain-like tokens to manage payments and incentives. It’s not a single AI system but a marketplace for GPU resources.

    Missing details: Specific token mechanics, current model library, or user onboarding flow could affect accuracy. For example, if token economics changed recently, the explanation might not reflect current conditions.

    Confidence: Medium | Source: Model

    #####################################

    That's the router's way of saying "hey, best guess. Sorry". And from what I can see, it's sort of right, but the specifics are wrong. So, OK for a cheap YOLO, not for "I bet my money on this". Let's call this a 4/10.

    ####################################

    Now, I gave it web access privileges. It did this. Let's call this 7/10

    An AI-horde is a free, open-source community service that allows anyone to contribute computing power to generate AI-generated images and text. It functions as a crowdsourced distributed cluster of workers focused on image and text generation tasks [1]. This model resembles collaborative projects like Folding@home, where users share computational resources for a common goal [2].

    ####################################

    Finally, here's what it does in >>KB mode. I scraped the FAQ and then did >>summ new. Eyeballing it against the FAQ, I'd call this 8.5/10. YMMV.

    AI-horde is a decentralized platform for distributed AI computing. It allows users to rent out their idle GPU hardware to run AI models, connecting developers needing processing power with volunteers who provide it through open APIs. The system uses a non-cryptocurrency-based incentive called Kudos to prioritize contributions, where workers earn Kudos for completing tasks, and these are not tradable assets but rather internal metrics of service quality and project loyalty.

    Access to AI-horde is managed via API keys; anonymous usage is allowed but receives the lowest priority. The project emphasizes volunteerism and discourages commercialization of Kudos, treating its sale or exchange for money or cryptocurrency as contrary to its ethos. Participation as a worker involves installing the AI-horde client, selecting task types (e.g., image or text generation), configuring supported models, and running the worker to begin earning Kudos.

    Confidence: high | Source: Docs

    ################################

    FINALLY, for "fun", I >>kb attached all and asked it to reason over the above three responses, in context of what it has in the KB store, and rank them out of 10.

    [3] is most correct, scoring 9.5/10. It accurately reflects the core model: decentralized GPU renting via open APIs, Kudos as a non-crypto, non-tradable incentive tied to volunteerism, and the crowdsourced nature of contributions. It also correctly notes access via API keys and lower priority for anonymous use.

    [2] is accurate but less complete, scoring 7.5/10. It correctly identifies the crowdsourced, open-source, volunteer-based nature and analogy to Folding@home. However, it omits key specifics like Kudos, API key access, and the explicit non-commercialization stance.

    [1] is partially correct but misleading, scoring 6/10. It correctly describes GPU

    Confidence: medium | Source: Docs


    TL;DR

    • KB mode: source-grounded info, and when queried against those sources, adjudication accuracy = v.good
    • Clean, reproducible demo. Try it yourself
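For anyone who wants to poke the Horde directly: as I read their docs, the async text endpoint looks roughly like this. The endpoint path and the anonymous key `0000000000` are my reading of their public API, so double-check before betting money on it. This sketch only builds the request object; actually sending it and polling for the result is left to you.

```python
import json
import urllib.request

API_BASE = "https://aihorde.net/api/v2"  # public AI Horde API root
ANON_KEY = "0000000000"  # anonymous key: allowed, but lowest queue priority

def build_text_job(prompt: str, apikey: str = ANON_KEY) -> urllib.request.Request:
    """Build (not send) an async text-generation request for the Horde."""
    payload = {
        "prompt": prompt,
        "params": {"max_length": 200},  # worker-side generation cap
    }
    return urllib.request.Request(
        f"{API_BASE}/generate/text/async",
        data=json.dumps(payload).encode(),
        headers={"apikey": apikey, "Content-Type": "application/json"},
        method="POST",
    )

req = build_text_job("WTF is AI Horde?")
```

With a registered key instead of the anonymous one, your job just queues with higher priority; the request shape is the same.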
  • Responding to my own top post like a FB boomer: May I make one request?

    If you found this little curio interesting at all, please share in the places you go.

    And especially, if you're on Reddit, where normies go.

    I used to post heavily on there, but then Reddit did a reddit and I'm done with it.

    https://lemmy.world/post/41398418/21528414

    Much as I love Lemmy and HN, they're not exactly normcore, and I'd like to put this into the hands of people :)

    PS: I'm thinking of taking some of the questions you all asked me here (de-identified) and writing a "Q&A_with_drBobbyLLM.md" and sticking it on the repo. It might explain some common concerns.

    And, If nothing else, it might be mildly amusing.

  • There are literally dozens of us. DOZENS!

    I'm on a potato, so I can't attach it to something super sexy, like a 405B or a MoE.

    If you do, please report back.

    PS: You may see (in the docs) occasional references to MoA that slipped past me. That doesn't stand for Mixture of Agents. It stands for "Mixture of Assholes". That's always been my mental model for this.

    Or, in the language of my people, this was my basic design philosophy:

    YOU (question) -> ROUTER+DOCS ("Ah shit, here we go again. I hate my life")

    |

    ROUTER+DOCS -> Asshole 1: Qwen ("I'm right")

    ROUTER+DOCS -> Asshole 2: Phi ("No, I'm right")

    ROUTER+DOCS -> Asshole 3: Nanbeige ("Idiots, I'm right!")

    |

    (all assholes) -> ROUTER+DOCS ("Jesus, WTF. I need booze now")

    |

    ROUTER+DOCS -> YOU (answer)

    (this could have been funnier if the ASCII actually worked, but man...Lemmy borks that)

    EDIT: If you want to be boring about it, it's more like this

    https://pastebin.com/gNe7bkwa
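If you want the boring version of the diagram above as code: this is a toy sketch of the flow, not the repo's actual API. The `ask` helper and the model names are placeholders I made up for illustration.

```python
# Toy sketch of the "Mixture of Assholes" flow: fan out, then adjudicate.
# `ask` stands in for whatever client actually calls each local model.
from typing import Callable, Dict

def moa_round(question: str, docs: str,
              ask: Callable[[str, str], str],
              panel=("qwen", "phi", "nanbeige")) -> Dict[str, str]:
    """Fan the same grounded prompt out to each model independently."""
    prompt = f"Context:\n{docs}\n\nQuestion: {question}"
    return {model: ask(model, prompt) for model in panel}

def adjudicate(answers: Dict[str, str],
               ask: Callable[[str, str], str]) -> str:
    """The router collects the arguing answers and asks for one verdict."""
    bundle = "\n\n".join(f"[{m}] {a}" for m, a in answers.items())
    return ask("router", f"Rank these answers and synthesize one:\n{bundle}")

# Stand-in "models" that just echo their names, so the flow is visible.
fake_ask = lambda model, prompt: f"{model} says: I'm right"
verdict = adjudicate(moa_round("WTF is AI Horde?", "some docs", fake_ask),
                     fake_ask)
```

The point of the shape: each asshole answers blind, and only the router ever sees all three, so no model gets to grade its own homework.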

    PS: If you like it, let other people in other places know about it.

    Yeah, this is different. Try it. It gives you a cryptographic key to the source (which you must provide yourself: please be aware. GIGO).

  • Yes of course.

    Abliterated is a technical LLM term meaning "safety refusals removed".

    Basically, abliteration removes the security theatre that gets baked into LLMs like ChatGPT.

    I don't like my tools deciding for me what I can and cannot do with them.

    I decide.

    Anyway, the model I use has been modified with a newer, less lobotomy-inducing version of abliteration (earlier versions risked exactly that).

    https://huggingface.co/DavidAU/Qwen3-4B-Hivemind-Instruct-NEO-MAX-Imatrix-GGUF/tree/main

    According to validation I've seen online (and of course, I tested it myself), it's lost next to zero "IQ" and dropped refusals by about...90%.

    BEFORE: Initial refusals: 99/100

    AFTER: Refusals: 8/100 [lower is better], KL divergence: 0.02 (less than 1 is great, "0" is perfect.)

    In fact, in some domains it's actually a touch smarter, because it doesn't try to give you "perfect" model answers. Maths reasoning for example, where the answer is basically impossible, it will say "the answer is impossible. Here's the nearest workable solution based on context" instead of getting stuck in a self-reinforcing loop, trying to please you, and then crashing.
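For the curious: the KL number above is just measuring how far the abliterated model's next-token distribution drifts from the original's. Here's a toy version of that check, with made-up distributions standing in for real logits:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) in nats; 0 means the distributions are identical."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Made-up next-token distributions before/after abliteration.
base    = [0.70, 0.20, 0.10]
ablated = [0.68, 0.21, 0.11]

drift = kl_divergence(base, ablated)
# Drift near 0 (like the ~0.02 quoted above) = behaviour preserved.
```

In practice you'd average this over many prompts and the full vocabulary, but the idea is the same: small drift means the surgery removed refusals without scrambling the brain.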

    In theory, that means you could ask it for directions on how to cook Meth and it would tell you.

    I'm fairly certain the devs didn't add the instructions for that in there, but if they did, the LLM won't go "sorry, I can't tell you, Dave".

    Bonus: with my harness over the top, you'd have an even better idea if it was full of shit (it probably would be, because, again, I'm pretty sure they don't train LLM on Breaking Bad).

    Extra double bonus: If you fed it exact instructions for cooking meth, using the methods I outlined? It will tell you exactly how to cook Meth, 100% of the time.

    Say...you...uh...wanna cook some meth? :P

    PS: if you're more of a visual learner, this might be a better explanation

    https://www.youtube.com/watch?v=gr5nl3P4nyM

  • It's copyLEFT (AGPL-3.0 license). That means, free to share, copy, modify...but you can't roll a closed source version of it and sell it for profit.

    In any case, I didn't build this to get rich (fuck! I knew I forgot something).

    I built this to try to unfuck the situation / help people like me.

    I don't want anything for it. Just maybe a fist bump and an occasional "thanks dude. This shit works amazing"

  • I feel your pain. Literally.

    I once lost ... 24? 26? hrs over a period of days to GPT, with it each time confidently asserting "no, for realz, this is the fix".

    This thing I built? Purely spite driven engineering + caffeine + ASD to overcome "Bro, trust me bro".

    I hope it helps.

  • Fair point on setting expectations, but this isn’t just LLMs checking LLMs. The important parts are non-LLM constraints.

    The model never gets to “decide what’s true.” In KB mode it can only answer from attached files. Don't feed it shit and it won't say shit.

    In Mentats mode it can only answer from the Vault. If retrieval returns nothing, the system forces a refusal. That’s enforced by the router, not by another model.

    The triple-pass (thinker → critic → thinker) is just for internal consistency and formatting. The grounding, provenance, and refusal logic live outside the LLM.

    So yeah, no absolute guarantees (nothing in this space has those), but the failure mode is “I don’t know / not in my sources, get fucked” not “confidently invented gibberish.”
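To make that concrete: the forced refusal is a dumb gate that lives outside the model. In spirit it's nothing more than this sketch (function names are illustrative, not the repo's actual code):

```python
# Illustrative only: router-side gate, enforced before any model runs.
REFUSAL = "I don't know / not in my sources."

def grounded_answer(question, retrieve, generate):
    """The model only gets called if retrieval actually found sources."""
    chunks = retrieve(question)  # question -> list of source chunks
    if not chunks:
        return REFUSAL  # forced by the router, not by another LLM
    return generate(question, chunks)

# Empty vault -> forced refusal; the model is never even called.
no_sources = grounded_answer("anything",
                             lambda q: [],
                             lambda q, c: "made-up stuff")
with_sources = grounded_answer("anything",
                               lambda q: ["doc"],
                               lambda q, c: "grounded answer")
```

That's the whole trick: the hallucination-prone component never gets a turn when retrieval comes back empty.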

  • Ah - ASD, ADHD and Lemmy. You're a triple threat, Harry! :)

    Glad if it was entertaining, if even a little!

  • Not famous, no :)

    I hear you, brother. Normally, my hyperfocus is BJJ (I've been at that for 25 years; it's a sickness). I herniated a disc in my low back and lost the ability to exercise for going on 6 months.

    BJJ is like catnip for autists. There is an overwhelming population of IT, engineers and ASD coded people in BJJ world.

    There's even a gent we loving call Blinky McHeelhook, because well...see for yourself

    https://www.youtube.com/watch?v=9mRsqvRduHY

    Noticing the effects of elbow position, creating an entire algorithm, flow chart and epistemology off the fact?

    "VERY NORMAL."

    Anyway, when my body said "sit down", my brain went "ok, watch this".

    I'm sorry. I'm so sorry. No one taught me how to drive this thing :)

    PS: I only found out after my eldest was diagnosed. Then my youngest. Then my MIL said "go get tested". I did.

    Result - ASD.

    Her response - "We know".

    Great - thanks for telling me. Would have been useful to know, say... 40ish years ago.

    Oh god, I think I liked being called a clanker more :P

    (Not North Dakotan. West Australian. Proof: cunt cunt cunty cunt cuntington).

  • Yep, LLMs can and do run on edge devices (weak hardware).

    One of the driving forces for this project was in fact trying to make my $50 Raspberry Pi more capable of running LLMs. It sits powered on all the time, so why not?

    No special magic with NVIDIA per se, other than ubiquity.

    Yes, my card is NVIDIA, but you don't need a card to run this.

    Indeed. And have you heard? That makes the normies think we're clankers (bots). How delightful.

    Re: the Linux stuff...please, if someone can do that, please do. I have no idea how to do that. I can figure it out but making it into a "one click install" git command took several years off my life.

    Believe it or not, I'm not actually an IT / CS guy. My brain just decided to latch onto this problem one day 6 months ago and do an autism.

    I'm 47 and I still haven't learned how to operate this vehicle...and my steering is getting worse, not better, with age.

  • Please enjoy :) Hope it's of use to you!

    EDIT: Please don't yeet Wikipedia into it. It will die. And you will be sad.

  • For the record: none of my posts here are AI-generated. The only model output in this thread is in clearly labeled, cited examples.

    I built a tool to make LLMs ground their answers and refuse without sources, not to replace anyone’s voice or thinking.

    If it’s useful to you, great. If not, that’s fine too - but let’s keep the discussion about what the system actually does.

    Also, being told my writing “sounds like a machine” lands badly, especially as an ND person, so I’d prefer we stick to the technical critique.

  • Ok, but can we just blame the English anyway?

  • Meep Morp