
Keyoxide: aspe:keyoxide.org:MWU7IK7RMUTL3AP6U6UWCF4LHY

Posts: 6 · Comments: 129 · Joined: 2 yr. ago
  • The problem is that while LLMs can translate, it's still machine translation and isn't always accurate. It also won't stop at translation: "AI" will get applied to everything that looks like it might vaguely fit, and it'll stifle productivity.

  • It is a full Linux stack. It is not Android. It has its own set of apps. Written in Qt with C++ (mostly) and their own UI framework, Silica. It can run Android apps through a layer similar to Waydroid.

  • AI Generated Images @sh.itjust.works
    projectmoon @lemm.ee

    A rainbow journey

    cross-posted from: https://lemm.ee/post/56114125

    I wanna fly away, on a unicorn, to discover a land of freedom and light...

    This image was made using ComfyUI with Stable Diffusion 3.5 Medium. I can't tell you the exact prompt, because I asked OLMo2 via Open WebUI to make me a picture of a rainbow unicorn galloping through outer space.
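
    For anyone curious how that kind of prompt hand-off works, here's a minimal sketch (not the exact setup used for this image): it asks a local model through Open WebUI's OpenAI-compatible endpoint to write a Stable Diffusion prompt, which you'd then paste into a ComfyUI text-to-image workflow. The base URL, API key, and model tag are placeholders to adjust for your own instance.

    ```python
    # Sketch: ask a locally hosted LLM (served by Open WebUI) to write an image prompt.
    # The URL, API key, and model tag below are placeholders for illustration.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:3000/api",  # adjust to your Open WebUI instance
        api_key="sk-local-placeholder",
    )

    response = client.chat.completions.create(
        model="olmo2",  # whatever tag the model has in your instance
        messages=[{
            "role": "user",
            "content": "Write a detailed Stable Diffusion prompt for a rainbow "
                       "unicorn galloping through outer space.",
        }],
    )

    print(response.choices[0].message.content)  # paste this into ComfyUI's prompt node
    ```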

  • Lol, there are smaller versions of Deepseek-r1. These aren't the "real" Deepseek model; they're distillations of it onto other foundation models (Qwen2.5 and Llama3 in this case).

    For the 671b parameter model, the medium-quality quant weighs in at 404 GB. That means you need 404 GB of RAM/VRAM just to load the thing, and preferably ALL of it in VRAM (i.e. GPU memory) if you want it to generate anything fast.

    For comparison, I have 16 GB of VRAM and 64 GB of RAM on my desktop. If I run the 70b parameter version of Llama3 at Q4 quant (medium quality-ish), it's a 40 GB file. It'll run, but mostly on the CPU. It generates ~0.85 tokens per second. So a good response will take 10-30 minutes. Which is fine if you have time to wait, but not if you want an immediate response. If I had two beefy GPUs with 24 GB VRAM each, that'd be 48 total GB and I could run the whole model in VRAM and it'd be very fast.
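
    The rough math behind those sizes is just parameter count × bits per weight. A back-of-the-envelope sketch (the bits-per-weight values are approximate averages for those quant levels, not exact figures):

    ```python
    # Back-of-the-envelope GGUF file size estimate: parameters * bits per weight / 8.
    # The bits-per-weight values are rough averages for the quant levels mentioned above.

    def file_size_gb(params_billions: float, bits_per_weight: float) -> float:
        return params_billions * 1e9 * bits_per_weight / 8 / 1e9

    # 70b at Q4 (~4.6 bits/weight on average) -> ~40 GB, close to the file above
    print(f"70b @ ~4.6 bpw: {file_size_gb(70, 4.6):.0f} GB")

    # 671b at a medium quant (~4.8 bits/weight) -> ~403 GB, close to the 404 GB figure
    print(f"671b @ ~4.8 bpw: {file_size_gb(671, 4.8):.0f} GB")
    ```

    Loading and context add some overhead on top of the raw file size, which is roughly why you want a bit more RAM/VRAM than the file itself.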

  • They're probably referring to the 671b parameter version of Deepseek. You can indeed self-host it. But unless you've got a server rack full of data-center-class GPUs, you'll probably set your house on fire before it generates a single token.

    If you want a fully open source model, I recommend Qwen 2.5 or maybe Deepseek v2. There's also OLMo2, but I haven't really tested it.

    Mistral Small 24b also just came out and is Apache licensed. I'm testing it now.
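
    If you want to poke at any of those locally, the ollama Python client is one low-friction way to do it. A minimal sketch, assuming a local ollama server and that the model tag exists in its library (check the exact tag for whichever model you pick):

    ```python
    # Minimal sketch: chat with a locally hosted open-weight model via the ollama Python client.
    # Assumes a local ollama server; the model tag is an example and may differ on your setup.
    import ollama

    ollama.pull("qwen2.5")  # downloads the model if it isn't already present

    reply = ollama.chat(
        model="qwen2.5",
        messages=[{"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."}],
    )
    print(reply["message"]["content"])
    ```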

  • Most open/local models require a fraction of the resources of ChatGPT, but they're usually not AS good in a general sense. They're often good enough, though, and can sometimes surpass ChatGPT in specific domains.

  • Don't know about "always." In recent years, like the past 10 years, definitely. But I remember a time when Nvidia was the only reasonable recommendation for a graphics card on Linux, because Radeon was so bad. This was before Wayland, and probably even before AMD bought ATI. And it was certainly long before the amdgpu drivers existed.

  • Had a team lead who kept requesting nitpicky changes, going in FULL CIRCLES about what we should or shouldn't change, to the point that changes would take weeks to get merged. Then he had the gall to say that changes were taking too long to be merged and that we couldn't just leave code lying around in PRs.

    Jesus fucking Christ.

    There's a reason that team imploded....

  • LocalLLaMA @sh.itjust.works
    projectmoon @lemm.ee

    OpenWebUI OpenStreetMap Tool 2.1.0

    I've been working on keeping the OSM tool up to date with OpenWebUI's rapid development pace. And now I've added better-looking citations, with fancy styling. Just a small announcement post!

    Update: when this was originally posted, the tool was on 1.3. Now it's updated to 2.1.0, with a navigation feature (beta) and more fixes for robustness.
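
    For anyone who hasn't written an Open WebUI tool before, the general shape is a Python file with a `Tools` class whose typed, docstringed methods get exposed to the model as callable functions. The sketch below only illustrates that shape against OSM's public Nominatim geocoder; it is not the actual tool from this post.

    ```python
    # Sketch of the general shape of an Open WebUI tool (not the actual OSM tool from the post):
    # a Tools class whose typed, docstringed methods the model can call as functions.
    import requests


    class Tools:
        def find_place(self, query: str) -> str:
            """
            Look up a place name on OpenStreetMap's Nominatim geocoder and
            return its display name and coordinates.
            :param query: free-form place name, e.g. "Helsinki Cathedral"
            """
            resp = requests.get(
                "https://nominatim.openstreetmap.org/search",
                params={"q": query, "format": "json", "limit": 1},
                headers={"User-Agent": "openwebui-osm-sketch/0.1"},  # Nominatim requires a UA
                timeout=10,
            )
            resp.raise_for_status()
            results = resp.json()
            if not results:
                return f"No results for '{query}'."
            place = results[0]
            return f"{place['display_name']} ({place['lat']}, {place['lon']})"
    ```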

  • Yeah, it was something along those lines. I don't remember the exact specifics. I don't really understand why that is. I guess it's because they're copying and pasting nutritional information from the tubs where it's more properly measured by volume. But one would think that regulations would require the same units for serving size and nutritional information. Or at least the same type of unit (mass/volume).

  • ChatGPT @lemmy.world
    projectmoon @lemm.ee

    What Happened to GPT-4o Censorship This Weekend?

    Over the weekend (this past Saturday, specifically), GPT-4o seems to have gone from capable and rather permissive at generating creative writing to refusing to generate basically anything due to alleged content policy violations. It'll just say "can't assist with that" or "can't continue." But 80% of the time, if you regenerate the response, it'll happily continue on its way.

    It's like someone updated some policy configuration over the weekend and accidentally put an extra 0 in a field for censorship.

    GPT-4 and GPT-3.5 seem unaffected by this, which makes it even weirder. Switching to GPT-4 shows none of the issues that 4o is having.

    I noticed this happening literally in the middle of generating text.

    See also: https://old.reddit.com/r/ChatGPT/comments/1droujl/ladies_gentlemen_this_is_how_annoying_kiddie/

    https://old.reddit.com/r/ChatGPT/comments/1dr3axv/anyone_elses_ai_refusing_to_do_literally_anything/

    LocalLLaMA @sh.itjust.works
    projectmoon @lemm.ee

    Best Upgrade Path for my Desktop

    Current situation: I've got a desktop with 16 GB of DDR4 RAM, a 1st gen Ryzen CPU from 2017, and an AMD RX 6800 XT GPU with 16 GB VRAM. I can run 7-13b models extremely quickly using ollama with ROCm (19+ tokens/sec). I can run Beyonder 4x7b Q6 at around 3 tokens/second.

    I want to get to a point where I can run Mixtral 8x7b at Q4 quant at an acceptable token speed (5+/sec). I can run Mixtral Q3 quant at about 2 to 3 tokens per second. Q4 takes an hour to load, and assuming I don't run out of memory, it also runs at about 2 tokens per second.

    What's the easiest/cheapest way to get my system to run the higher quants of Mixtral effectively? I know that I need more RAM; another 16 GB should help. Should I upgrade the CPU?

    As an aside, I also have an older Nvidia GTX 970 lying around that I might be able to stick in the machine. Not sure if ollama can split across different brand GPUs yet, but I know this capability is in llama.cpp now.

    Thanks for any pointers!
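
    For reference on the offloading side of this, here's a rough llama-cpp-python sketch of what splitting a model between VRAM and system RAM looks like; the model path and layer count are placeholders to tune for your hardware.

    ```python
    # Rough sketch of partial GPU offload with llama-cpp-python: as many layers as fit go
    # to VRAM, the rest stay in system RAM on the CPU. Path and layer count are placeholders.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./mixtral-8x7b-instruct-q4_k_m.gguf",  # hypothetical local file
        n_gpu_layers=20,   # raise until VRAM is full; -1 tries to offload every layer
        n_ctx=4096,
    )

    out = llm("Explain mixture-of-experts models in two sentences.", max_tokens=128)
    print(out["choices"][0]["text"])
    ```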

    Meta (lemm.ee) @lemm.ee
    projectmoon @lemm.ee

    Why do startrek.website pictures/avatars not show up?

    Not sure if this has been asked before or not. I tried searching and couldn't find anything. I have an issue where any pictures from startrek.website do not show up on the homepage. It seems to only affect startrek.website. Going to the link directly loads the image just fine. Is this something wrong with lemm.ee?