Posts 41 · Comments 401 · Joined 12 mo. ago

  • You can install a browser extension (I think it is just called Kiwix) which can load offline ZIM files. The problem is that the UX is very bad (you need to load a ZIM file each time, or switch manually). On desktop, my answer was to manually unpack all ZIM files (using zim-tools), arrange them in a controlled dir structure, recompress them into a mountable file format, and separately maintain a list of all the files inside. While using it, I have something hand-rolled to mount the archive, select a suitable file, and open it in the browser (rough sketch below) - yes, it is a lot of work, but I do kinda have an offline search engine now.
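
    Roughly what my hand-rolled glue looks like, as a minimal sketch - assuming squashfs as the mountable format and squashfuse for user-space mounting; every path here and the index layout are made up:

    ```python
    """Sketch of the mount -> search -> open flow for repacked ZIM dumps.

    Assumes the ZIM files were already unpacked with zimdump (zim-tools) and
    repacked into a squashfs image, plus a plain-text index listing one
    extracted HTML path per line. All paths are hypothetical.
    """
    import subprocess
    import webbrowser
    from pathlib import Path

    ARCHIVE = Path("~/offline/wiki.squashfs").expanduser()  # repacked dump
    MOUNT = Path("/tmp/offline-wiki")                       # mount point
    INDEX = Path("~/offline/filelist.txt").expanduser()     # one path per line

    def mount_archive() -> None:
        # squashfuse mounts a squashfs image without root (fusermount -u undoes it)
        MOUNT.mkdir(exist_ok=True)
        subprocess.run(["squashfuse", str(ARCHIVE), str(MOUNT)], check=True)

    def search(term: str) -> list[str]:
        # dumb substring match over the path list; crude but works offline
        term = term.lower()
        return [p for p in INDEX.read_text().splitlines() if term in p.lower()]

    if __name__ == "__main__":
        mount_archive()
        hits = search("telescope")
        if hits:
            webbrowser.open((MOUNT / hits[0]).as_uri())  # open in default browser
    ```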

  • As a mod - that is too demeaning of others.

    As a general person - (agrees)

    (having an internal conflict over whether I should upvote or not)

  • Gives me a good reason to say why I do not use bash.

  • I am interested in it, because I have 3 package managers - arch's, uv, and cargo (binstall) - so this covers me well, hopefully.

  • I rarely use it, mostly to do sentiment/grammar analysis for some formal stuff/legalese. I kinda rarely use LLMs at all (1 or 2 times a month) (I just do not have a use case). As for how good they are: tiny models are not good in general, but that is because they do not have enough capacity to store knowledge, so my use case is often purely language processing. Though I have previously used one for a work demo to generate structured data from unstructured data. Basically, if you provide the info, they can perform well (so you can potentially build something to fetch web search results, feed them into the context, and use that - many such projects are available, basically something like Perplexity but open). A sketch of that pattern is below.
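
    A minimal sketch of that pattern, assuming a local llama.cpp server (llama-server) running on its default port 8080 and exposing its OpenAI-compatible API; the prompt and fields are purely illustrative:

    ```python
    """Feed context in, get structured data out of a small local model."""
    import json
    import urllib.request

    PROMPT = """Extract the fields as JSON with keys "name", "date", "amount":

    Invoice from ACME Corp, dated 2024-03-01, total due: $1,250.
    """

    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=json.dumps({
            "model": "local",  # llama.cpp serves whatever model it was started with
            "messages": [{"role": "user", "content": PROMPT}],
            "temperature": 0,  # keep extraction output deterministic-ish
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
    ```

    The same request shape works for the "fetch web search results, feed into context" idea: stuff the search snippets into the user message before the question.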

  • I saw a parrot a few days ago. Not my first time, but I do get happy seeing them. Practically all animals, honestly.

  • Adding to this comment: the best way we currently know to extract this energy is using spinning black holes, with a theoretical efficiency of ~42% (the answer to the universe) (src: a MinutePhysics video precisely on this). The naive solution of just touching them gets like 0.01-0.1% of the total energy, so in the bad case we need a trillion years.
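
    If I am remembering that video right, the ~42% is the binding energy released by matter spiraling down to the innermost stable circular orbit of a maximally spinning (extremal Kerr) black hole:

    ```latex
    % accretion efficiency for an extremal Kerr black hole:
    % energy radiated per unit rest mass falling to the ISCO
    \eta_{\max} = 1 - \frac{E_{\mathrm{ISCO}}}{mc^{2}}
                = 1 - \frac{1}{\sqrt{3}}
                \approx 0.423
    ```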

  • Technically, it uses a lot of energy (depending on how much the blade weighs). It is not electrical energy, but gravitational potential energy.
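
    For scale, a tiny worked example with completely made-up numbers (say a 40 kg blade lifted 2 m):

    ```latex
    % gravitational potential energy stored by raising the blade
    E_p = mgh \approx 40\,\mathrm{kg} \times 9.8\,\mathrm{m/s^{2}} \times 2\,\mathrm{m}
        \approx 784\,\mathrm{J}
    ```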

  • I agree with the YT commenter - the video aged very well.

  • Pretty much this. I use SmolLM (a 3B-param model, trained only on openly available datasets).

  • Further clarification - Ollama is a distribution of llama.cpp (and it is a bit commercial in some sense). Basically, in ye olde days of 2023-24 (decades ago in LLM space, as they say), llama.cpp was a server/CLI-only thing. It would provide output in the terminal (that is how I used to use it back then) or via an API (an OpenAI-compatible one, so if you had used OpenAI stuff before, you could easily swap over). Many people wanted a GUI (a web-based chat interface), so Ollama back then was a wrapper around llama.cpp (there were several others, but Ollama was relatively mainstream).

    Then, as time progressed, Ollama "allegedly enshittified", while llama.cpp kept getting features (a web UI, the ability to swap models at run time (back then that required a separate llama-swap), etc.). Also, the llama.cpp stack is a bit "lighter" (not really - they are both web tech, so as light as JS can get) and first-party(ish - the interface was done by the community, but it is in the same git repo), so more and more local llama folk kept switching to a llama.cpp-only setup (you could use llama.cpp with Ollama, but at that point Ollama was just a web UI, and not a great one; some people preferred ComfyUI, etc.). Some old-timers (like me) never even tried Ollama, as plain llama.cpp was sufficient for us.

    As the above commenter said, you can do very fancy things with llama.cpp. The best thing about it is that it works with both CPU and GPU - you can use both simultaneously, as opposed to vLLM or transformers, where you almost always need a GPU. This simultaneous thing is called offloading: some of the layers are kept in system memory instead of VRAM, hence the VRAM-poor population uses RAM (this also kinda led to RAM inflation, but do not blame llama.cpp for it, blame people). A minimal sketch of it is below. You can do some of this on Ollama too (as Ollama is a wrapper around llama.cpp), but that requires the Ollama folks to keep their fork up to date with the parent, as well as expose said features in the UI.
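
    A minimal sketch of offloading via the llama-cpp-python bindings (the model path is made up; n_gpu_layers is the same knob as -ngl on the llama.cpp CLI):

    ```python
    """Split a model between GPU (VRAM) and CPU (system RAM)."""
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/some-model-Q4_K_M.gguf",  # hypothetical GGUF file
        n_gpu_layers=20,  # first 20 layers go to VRAM, the rest stay in RAM
        n_ctx=4096,       # context window
    )
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Say hi in one sentence."}]
    )
    print(out["choices"][0]["message"]["content"])
    ```

    Set n_gpu_layers=-1 to push everything to the GPU, or 0 for pure CPU.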

  • In some sense, it is not small: the gate requires 1 or 2 people to look over it, so they are looking at the cost of paying 2 people. Not saying it is good, and "evil" is too hard a word for me, but they certainly want to push us to not use the gate.

  • Try it with llama.cpp if you folks are interested in running local LLMs - https://github.com/ggml-org/llama.cpp/issues/9181

    The issue is closed, but not because it is solved. Check it out, find the link for your relevant hardware (AMD or Intel or something else), and see if your particular piece is listed. If so, you have hope.

    In case it is not, try to find the first-party stuff (Intel OpenVINO, Intel oneAPI, or the AMD ROCm stack) and use that with Python transformers, or see if vLLM has support (rough sketch at the end of this comment).

    Also, try checking r/LocalLLaMA on the forbidden website for your particular hardware - there is likely someone who has done something with it.
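
    And in case llama.cpp does not cover your device at all, a rough transformers fallback sketch - the model id is just an example of a small open model, swap in whatever your first-party stack (ROCm, oneAPI, ...) can actually run:

    ```python
    """Plain transformers pipeline as a fallback for unsupported hardware."""
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="HuggingFaceTB/SmolLM2-1.7B-Instruct",  # example small open model
        device_map="auto",  # let accelerate place it on whatever device it finds
    )
    print(pipe("The point of offloading is", max_new_tokens=40)[0]["generated_text"])
    ```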

  • Not me personally, but at my uni, a small gate which leads to the nearest subway station and saves about 2-3 mins over the longer route recently started requiring a registered entry at exit. A recorded entry at arrival kinda makes sense, because uni premises are generally treated as a "safe space" (I am not speaking of the surveillance nature of this right now; that is a separate discussion).

    But they require a registered entry at exit, which does not make sense for many reasons: almost everyone inside is there for some reason, so what is the point of asking for this? Students/staff have a pass to get in without registering, just by showing the pass from afar, but they are not even accepting that. Their reason: apparently admin wants to close that door (because it is a door, someone needs to keep watch of it), so they want a record of how many people use it. And instead of doing a simple count, or using the nearby CCTV cams to get a rough headcount, they want people to register themselves going out - defeating the main point of that door: it was quicker. Now it takes just as long (there is a queue, as people have to wait for others to finish registering themselves). I timed myself going the longer route, and if I had stood in that queue instead, it would have taken longer. To me it is absurd.

  • But did Euler really try hard enough? My conjecture leaves just enough room in its current form to become unfalsifiable.

    On a more serious note, at least one part of my reasoning was that someone would give examples of instances where Euler was not successful; the fact that we had to come up with a solution only relatively recently really puts a feather in Euler's cap.

  • If I am not wrong, it is the amount of time a PieFed tab stays in focus. I tried to find it in cookies but could not, though it is stored only on the user's end (not on the server). I have a few problems with it (I can go do something else while the PieFed tab stays focused (and the browser does not unload it), and it is still counting usage), but since they are not invasive, and not tracking when and how I am interacting, they can not do any better.

  • Recently got an upgrade. My provider updated their end of the connection: they had fiber for the long range, but for the last mile / to the home they used copper coax, which they have now upgraded to fiber. They also updated our router from 2.4 GHz to 5 GHz, and changed our plan from 50 Mbit to ~200 Mbit. There is also a price increase, but it was not that big, so net it was a positive.

    I do not have backup internet, though I do have a cellular connection (currently my data pack is exhausted). If my home wifi goes down for a short term (less than a week, more like 2-3 days), then it is fine - an inconvenience for sure, but I personally would be OK. But if it goes out for longer, then it is a big issue.

  • If that is the case, then it is great. I personally am a Rust fan and use a Smithay-based WM (niri), and that is basically a one-man project, but with active community support. XFCE can pull more manpower, but it still feels like wasted effort. If the language alone was the deciding factor, they could have considered the COSMIC WM; it is heavier than XFCE needs, but they would probably have had an easier time.

  • Did Lennart leave Microsoft? Probably a good thing in general for Linux (it definitely was weird that the lead for systemd was working at Microsoft).

  • I personally do not think it is a great decision. XFCE is not really large enough to afford making a Wayland compositor. Smithay lets you start from 10 or 20 instead of 0, but you still need to get to 100. They should probably have chosen something like Wayfire/labwc or some other floating Wayland WM. Though I wish them good luck - I used to use XFCE, and loved it.

  • PieFed Meta @piefed.social

    RSS feeds for notifications, user's posts/comments, and user's saved posts/comments
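
    For anyone wanting to consume these programmatically, a minimal sketch using feedparser - the URL is a placeholder, use whichever feed link the PieFed UI actually exposes:

    ```python
    """Print the latest entries from a PieFed RSS feed."""
    import feedparser  # pip install feedparser

    feed = feedparser.parse("https://piefed.social/example/feed.rss")  # placeholder URL
    for entry in feed.entries[:5]:
        print(entry.title, "->", entry.link)
    ```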