Training data poisoning lets attackers corrupt AI models by injecting small amounts of malicious data, producing harmful behavior that persists long after training. Learn how these attacks work, see real-world examples, and explore practical defenses for protecting your LLMs.