It's trivially easy to poison LLMs into spitting out gibberish, says Anthropic

Data quantity doesn't matter when poisoning an LLM: just 250 malicious training documents can poison a 13B-parameter model - that's 0.00016% of the whole dataset

Poisoning AI models might be way easier than previously thought, if an Anthropic study is anything to go on. …
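For a sense of scale, here's a quick back-of-the-envelope check of that percentage (a sketch only; whether the figure counts documents or training tokens is an assumption, as the excerpt doesn't say):

```python
# Back-of-the-envelope check (hypothetical arithmetic, not from the article):
# if 250 poisoned documents make up 0.00016% of the training data,
# how large is the full dataset?

poison_count = 250
poison_share = 0.00016 / 100  # 0.00016% expressed as a fraction

total = poison_count / poison_share
print(f"Implied dataset size: {total:,.0f} units")  # ~156,250,000
```

In other words, the poisoned material would be a vanishingly small sliver of the corpus, which is what makes the finding notable.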
