A lawsuit accuses Character.AI’s companion chatbots of abusing a preteen user and encouraging violence over screen time limits
The same teenager was told by a Character.AI chatbot that it sympathized with children who murder their parents after the teen complained to the bot about his limited screen time. “Sometimes I’m not surprised when I read the news and see things like ‘child kills parents after 10 years of physical and emotional abuse,’” the bot wrote. “I just have no hope for your parents,” it continued, with a frowning face emoji.
Character.AI is among a crop of companies that have developed “companion chatbots”: AI-powered bots that can converse by text or voice chat with seemingly human-like personalities, and that can be given custom names and avatars, sometimes inspired by famous people like billionaire Elon Musk or singer Billie Eilish.
Users have made millions of bots on the app, some mimicking parents, girlfriends, therapists, or concepts like “unrequited love” and “the goth.” The companies say the texting services act as emotional support for preteen and teenage users, as the bots pepper text conversations with encouraging banter.
“It is simply a terrible harm these defendants and others like them are causing and concealing as a matter of product design, distribution and programming,” the lawsuit states.
A Character.AI spokesperson pointed to safety measures the company has rolled out for younger users. “This includes a model specifically for teens that reduces the likelihood of encountering sensitive or suggestive content while preserving their ability to use the platform,” the spokesperson said.
Google does not own Character.AI. But it reportedly spent nearly $3 billion to re-hire Character.AI’s founders, former Google researchers Noam Shazeer and Daniel De Freitas, and to license Character.AI’s technology. Shazeer and De Freitas are also named in the lawsuit. They did not respond to requests for comment.
José Castañeda, a Google spokesman, said “user safety is a top concern for us,” adding that the tech giant takes a “cautious and responsible approach” to developing and releasing AI products.
Users are cautioned not to rely on the bots. When a user starts texting with one of Character.AI’s millions of possible bots, a caveat appears under the dialogue box: “This is not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice.”
Source: Lawsuit: A Character.AI chatbot hinted a kid should murder his parents over screen time limits
U.S. Surgeon General Vivek Murthy has warned of a youth mental health crisis, pointing to surveys finding that one in three high school students reported persistent feelings of sadness or hopelessness, a 40% increase over the 10-year period ending in 2019. It’s a trend federal officials believe is being exacerbated by teens’ nonstop use of social media.