It Takes Only 250 Documents to Poison Any AI Model

Researchers find it takes far less to manipulate a large language model's (LLM) behavior than anyone previously assumed.

October 22, 2025