TikTok Executives Knew About the App’s Effects on Teens, State Lawsuit Documents Allege
The documents show that TikTok was aware that compulsive use of the app “interferes with essential personal responsibilities like sufficient sleep, work/school responsibilities, and connecting with loved ones.”
In each of the separate lawsuits state regulators filed, dozens of internal communications, documents and research data were redacted, blacked out from public view, because authorities entered into confidentiality agreements with TikTok.
A state judge sealed the entire complaint after the attorney general asked that all settlement documents be kept secret to prevent their dissemination.
Separately, under a new law, TikTok’s Chinese parent company, ByteDance, has until January to divest the app or TikTok will face a nationwide ban. TikTok is fighting the looming crackdown. Meanwhile, the new lawsuits from state authorities call into question the company’s ability to counter content that harms young people.
In an understated assessment, one TikTok official concluded: “[O]ne of our key discoveries during this project that has turned into a major challenge with Live business is that the content that gets the highest engagement may not be the content we want on our platform.”
It was during that project that TikTok officials realized there was “a high” number of underage streamers receiving digital currency on the app in exchange for stripping. The currency comes in the form of a “gift” or “coin”: real money converted into a digital item, often a plush toy or a flower.
The company is aware that some underage users have accounts but does little to remove them, even after complaints from parents and teachers, according to previously redacted portions of the suit.
TikTok offers a service for kids under 13, but it does not include the same level of safety features as a standard account.
The documents describe TikTok’s moderation process: artificial intelligence is typically used to flag pornographic, violent or political content in a first round of review. Subsequent rounds use human moderators, but only if a video reaches a certain number of views, and those rounds often fail to account for certain types of content or age-specific rules. According to TikTok’s own studies, cited in the unredacted filing, some suicide and self-harm content escaped those rounds of moderation; one study found that self-harm videos had accumulated more than 30,000 views before TikTok removed them.
Despite these findings, TikTok’s algorithm still puts users into filter bubbles. According to an internal document, users can be placed into a bubble after 30 minutes of use in one sitting. The company wrote that adding more human moderators to label content is possible, but “requires large human efforts.”
Negative Filter Bubbles and Suicide Content on TikTok
The documents cite numerous videos discussing suicide, including one that asks: “If you could kill yourself without hurting anyone, would you?”
“After following several ‘painhub’ and ‘sadnotes’ accounts, it took me 20 mins to drop into ‘negative’ filter bubble,” one employee wrote. The employee said the intensive density of negative content left them feeling sad even though they were otherwise in high spirits.
Yet internal documents paint a very different picture, citing statements from top company executives who appear well aware of the harmful effects of the app without taking significant steps to address them.
TikTok’s beauty filters, which users can apply to make themselves look thinner and younger, were also a subject of the multistate litigation. One popular feature, known as the Bold Glamour filter, uses artificial intelligence to rework people’s faces to resemble models with high cheekbones and strong jawlines. One proposal in the documents: adding a video or banner to the filters as an awareness statement about filters and the importance of positive body image and mental health.
The app lets parents set time limits for their kids, ranging from 40 minutes to two hours per day, and TikTok created a tool that set the default time prompt at 60 minutes per day to combat excessive and compulsive use of the app. But the documents show the tool’s project manager saying the goal was not to shorten time spent on the app; in a chat message echoing that sentiment, another employee said the goal was to “contribute to DAU [daily active users] and retention” of users. TikTok also runs “break” videos meant to get people to stop scrolling and take a break. Internally, however, the company didn’t appear to think the videos amounted to much. One executive said they are “useful in a good talking point” with policymakers, but “they’re not altogether effective.”
Another internal document shows the company was aware that its many features designed to keep young people on the app led to a constant and irresistible urge to keep opening it.
A TikTok spokesperson explained: “We have robust safeguards, which include being proactive in removing suspected underage users and launching safety features such as default screentime limits, family pairing, and privacy by default for children under the age of 16.”
Underage Users’ Bios and TikTok’s Response to NPR
The information emerged from a two-year investigation by 14 attorneys general that led to the lawsuits against the company.
Underage users who have specifically stated in their bios that they are 13 or younger are not flagged to the moderators, according to the documents.
Alex Haurek, a spokesman for TikTok, criticized NPR for reporting on information that is under a court seal, saying the reporting cherry-picks misleading quotes and takes outdated documents out of context.