Free Open-Source Artificial Intelligence @lemmy.world
yo_scottie_oh @lemmy.ml

In GPT4All settings, selecting AMD graphics card yields no performance improvement over CPU

Background: This Nomic blog article from September 2023 promises better performance in GPT4All for AMD graphics card owners.

Run LLMs on Any GPU: GPT4All Universal GPU Support

Likewise on GPT4All's GitHub page.

September 18th, 2023: Nomic Vulkan launches supporting local LLM inference on NVIDIA and AMD GPUs.

Problem: In GPT4All, under Settings > Application Settings > Device, I've selected my AMD graphics card, but I'm seeing no improvement over CPU performance. In both cases (AMD graphics card or CPU), it crawls along at about 4-5 tokens per second. The interaction in the screenshot below took 174 seconds to generate the response.
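As a quick sanity check on those numbers (my arithmetic, not from the post): at 4-5 tokens per second, a 174-second generation works out to roughly 700-870 tokens.

```python
# Back-of-the-envelope check on the quoted throughput figures.
def tokens_generated(rate_tok_per_s: float, elapsed_s: float) -> float:
    """Estimate how many tokens a run produced at a given throughput."""
    return rate_tok_per_s * elapsed_s

low = tokens_generated(4.0, 174.0)   # lower bound at 4 tok/s
high = tokens_generated(5.0, 174.0)  # upper bound at 5 tok/s
print(low, high)  # 696.0 870.0
```

So either the response was quite long, or the effective rate was even lower than quoted; either way, identical CPU and GPU numbers suggest the GPU isn't actually being used for inference.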

[Screenshot omitted: the 174-second interaction described above]

Free Open-Source Artificial Intelligence @lemmy.world
smorty/maria [she/her] @lemmy.blahaj.zone

How does one interact with MCP servers without MCP libraries?

I'm building some dumb little FOSS LLM thingy for Godot, and now I'm interested in letting users implement their own MCP servers.

So, like - okay, the Model Context Protocol page says that most servers use stdio for every interaction. The request format can be seen here; it's apparently a JSON-RPC thing.
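For what it's worth, the stdio transport is newline-delimited JSON-RPC: one JSON object per line on the server's stdin/stdout. A minimal sketch of the first message a client sends, assuming my reading of the spec is right (the protocol version string and capability fields here are assumptions - check the current MCP docs for the real values):

```python
import json

# Sketch of an MCP "initialize" request framed for the stdio transport.
# The protocol version and capability layout are assumptions, not verified.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",   # assumed version string
        "capabilities": {},                 # what the *client* supports
        "clientInfo": {"name": "my-godot-client", "version": "0.1"},
    },
}

# One JSON object per line, newline-terminated.
line = json.dumps(initialize_request) + "\n"
print(line, end="")
```

Notably, the server's capabilities come back in the `initialize` response (under `result.capabilities`) rather than via a dedicated "list capabilities" command, which may be why no such command shows up in the docs.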

So - the first thing I want to do is retrieve all the capabilities the server has.

I looked through all the tabs in the latest docs but could not find the command for listing all the capabilities. So I installed a filesystem MCP server, which runs well, and tried this:

```bash
PS C:\Users\praktikant> npx -y @modelc
```
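For the "list everything" goal, my understanding of the spec is a three-step handshake: `initialize`, then a `notifications/initialized` notification, then `tools/list` (and similarly `resources/list`, `prompts/list`). A hypothetical sketch of that message sequence, without any MCP library - the method names and fields follow the spec as I understand it, so verify them against the docs:

```python
import json

# Hand-rolled message sequence for a stdio MCP server (no MCP library).
def frame(msg: dict) -> bytes:
    """Serialize one JSON-RPC message as a newline-delimited line."""
    return (json.dumps(msg) + "\n").encode()

handshake = [
    # 1. initialize: the server replies with *its* capabilities
    #    in result.capabilities.
    {"jsonrpc": "2.0", "id": 1, "method": "initialize",
     "params": {"protocolVersion": "2025-06-18", "capabilities": {},
                "clientInfo": {"name": "godot-client", "version": "0.1"}}},
    # 2. initialized: a notification (no "id"), sent after the reply arrives.
    {"jsonrpc": "2.0", "method": "notifications/initialized"},
    # 3. tools/list: ask for the tools the server exposes.
    {"jsonrpc": "2.0", "id": 2, "method": "tools/list"},
]

for msg in handshake:
    print(frame(msg).decode(), end="")
```

In practice you'd pipe these lines into the server process's stdin (e.g. the `npx` command above launched via `subprocess.Popen` with `stdin=PIPE, stdout=PIPE`) and read one JSON object per line back from its stdout.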