As artificial intelligence platforms become increasingly embedded in our daily digital interactions, the burden of safety monitoring has shifted from simple content moderation to identifying genuine threats to physical safety. The recent incident involving OpenAI, in which the company failed to alert law enforcement about a user's behavior that preceded a fatal shooting in Canada, highlights the immense, often invisible pressure on these systems. When an AI model acts as a communication channel, distinguishing hyperbolic rhetoric from a credible, actionable threat is an extraordinary challenge for even the most sophisticated abuse detection pipelines.
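To make the difficulty concrete, consider a minimal sketch of how such a pipeline might combine a classifier's risk score with contextual signals before deciding whether to escalate. Everything here is hypothetical: the signal names, weights, thresholds, and the `assess_threat` function are illustrative assumptions, not a description of OpenAI's or any vendor's actual system.

```python
from dataclasses import dataclass

# Hypothetical contextual signals a pipeline might weigh when separating
# hyperbole from a credible threat. All names and weights are illustrative
# assumptions, not any real vendor's implementation.
@dataclass
class ThreatSignals:
    model_risk_score: float     # 0.0-1.0 score from an upstream classifier
    names_specific_target: bool # a real, identifiable person or place
    states_means: bool          # mentions access to a weapon or method
    states_timeframe: bool      # "tonight", "on Friday", a concrete date
    prior_flags: int            # earlier conversations already flagged

def assess_threat(s: ThreatSignals) -> str:
    """Return a triage decision: 'ignore', 'review', or 'escalate'.

    Specificity, means, and timing are the classic markers that move
    speech from venting toward an actionable threat, so each one adds
    to the base classifier score.
    """
    score = s.model_risk_score
    score += 0.15 if s.names_specific_target else 0.0
    score += 0.15 if s.states_means else 0.0
    score += 0.10 if s.states_timeframe else 0.0
    score += min(s.prior_flags, 3) * 0.05  # repeated flags compound

    if score >= 0.85:
        return "escalate"  # route to human reviewers, possible referral
    if score >= 0.50:
        return "review"    # queue for human review, no external action
    return "ignore"

if __name__ == "__main__":
    # Hyperbole: a high model score with no specifics stays below escalation.
    venting = ThreatSignals(0.55, False, False, False, 0)
    # A specific target, means, and timeframe push a moderate score over.
    credible = ThreatSignals(0.55, True, True, True, 1)
    print(assess_threat(venting))   # review
    print(assess_threat(credible))  # escalate
```

Even this toy version illustrates why the problem is so hard: the decisive signals live in context and conversation history, not in any single message, and every threshold trades false alarms against missed threats.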