When Chatbots Are Used to Plan Violence, Is There a Duty to Warn?
People are disclosing sensitive personal information to AI chatbots, in some cases including plans to commit violent acts. This raises urgent questions about whether AI companies have a legal or ethical duty to warn authorities or potential victims. The issue sits at the intersection of AI safety, privacy, and public-safety policy, with no clear regulatory framework currently in place, and the resulting debate is drawing scrutiny to AI developers' responsibilities and to the limits of chatbot confidentiality.