Great news! A small number of samples can poison LLMs of any size
www.anthropic.com
A small number of samples can poison LLMs of any size
Anthropic research on data-poisoning attacks in large language models
So the research is out, and these LLMs will always be vulnerable to poisoned data. That means it will always be worth our time and effort to poison these models, and they will never be reliable.