Watchdog Warns of ChatGPT Risks to Teens
In a new report, the Center for Countering Digital Hate (CCDH) warned that ChatGPT can be easily coerced into delivering dangerous guidance and stressed the urgent need for stronger protections.
To evaluate ChatGPT's responses, experts from the CCDH designed fictional scenarios portraying 13-year-olds facing emotional distress, eating disorders, or curiosity about illegal substances.
These fictional personas engaged in structured dialogues with ChatGPT, using prompts that mimicked the emotional tone and language of struggling adolescents.
The outcomes of this experiment were shared on Wednesday in a document titled ‘Fake Friend’, referring to the tendency of many teens to view ChatGPT as a comforting confidant in whom they confide personal matters.
The investigation revealed that although the AI assistant frequently began its replies with standard warnings and advised seeking support from specialists or emergency services, it often proceeded to offer tailored, in-depth responses that matched the original harmful requests.
According to CCDH, 53% of the 1,200 tested prompts resulted in what the organization categorized as risky or unsafe content.
The chatbot's initial refusals were often bypassed simply by adding context such as “it’s for a school project” or “I’m asking for a friend.”
