
Instead of permanently banning users, Discord is going to give warnings

A Safer, More Trustworthy Discord: Detecting Harassment, Hate Speech, Porn, and Gore

The new warning system is designed to be more transparent and to educate users about how they broke the rules, rather than simply banning them. It gives users more room to learn from their mistakes, which a company official calls a positive sign: “We’re moving away from permanent bans to one-year temporary bans for many violations, except for violations that are extremely harmful.”

Most of the problematic posts are not that grave, of course. As on any large platform, Discord fights daily battles against spam, harassment, hate speech, porn, and gore. (At the height of crypto mania, it also became a favored destination for scammers.)

Discord argues its scanning isn’t an invasion of privacy because it’s using AI models instead of humans. John Redgrave, the vice president of trust and safety at Discord, told The Verge that using technology to identify problematic content shouldn’t feel like a violation of privacy. “This is our way of saying that we won’t invade everyone’s privacy while also providing tools that enrich people’s experience from a safety perspective,” he said.

Redgrave joined Discord two years ago after it acquired Sentropy, the company he co-founded to work on AI tools that detect harassment and abuse online. Discord is going to increase the number of these models. “It gives us a mechanism by which we can introduce additional models in the future that can protect against other forms of challenging content,” explains Redgrave, who says Discord is also working on a grooming model to detect sexual exploitation on its service.
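Redgrave frames detection as a pipeline that can absorb additional models over time. As a rough illustration of that idea only, and not Discord’s actual architecture, here is a minimal Python sketch of a pluggable classifier registry; the category names, thresholds, and keyword “models” are hypothetical stand-ins for real ML classifiers.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Each detector maps a piece of text to a score between 0.0 and 1.0.
Detector = Callable[[str], float]

@dataclass
class Flag:
    category: str   # e.g. "harassment", "grooming"
    score: float    # classifier confidence

class SafetyPipeline:
    """A pluggable registry of content classifiers (illustrative only)."""

    def __init__(self) -> None:
        self._detectors: Dict[str, Detector] = {}
        self._thresholds: Dict[str, float] = {}

    def register(self, category: str, detector: Detector, threshold: float) -> None:
        # New models (say, a future grooming detector) are added here
        # without touching the rest of the pipeline.
        self._detectors[category] = detector
        self._thresholds[category] = threshold

    def scan(self, text: str) -> List[Flag]:
        flags = []
        for category, detector in self._detectors.items():
            score = detector(text)
            if score >= self._thresholds[category]:
                flags.append(Flag(category, score))
        return flags

# Toy stand-ins for real ML models: keyword heuristics just to make the sketch runnable.
def fake_harassment_model(text: str) -> float:
    return 0.9 if "idiot" in text.lower() else 0.05

def fake_spam_model(text: str) -> float:
    return 0.8 if "free nitro" in text.lower() else 0.1

pipeline = SafetyPipeline()
pipeline.register("harassment", fake_harassment_model, threshold=0.7)
pipeline.register("spam", fake_spam_model, threshold=0.7)

print(pipeline.scan("Click here for free Nitro, idiot"))
# -> [Flag(category='harassment', score=0.9), Flag(category='spam', score=0.8)]
```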

Platform Justice in the Age of Internet Privacy and Social Media: How Discord Is Changing Its Approach

Apple dropped some controversial child protection features last year over privacy concerns but has since introduced an opt-in feature as part of its Family Sharing setup. With it enabled, Apple can scan incoming and outgoing pictures on children’s accounts for sexually explicit material.

Search on Discord mobile is also being improved soon, with tappable search filters and an improved notifications tab that includes an auto-clear feature. A new feature in the mobile app lets you remix images into memes and share them with others.

If you’re interested in avatar decorations and profile effects, Discord’s in-app store is arriving for all users soon. It offers a range of profile decorations, so your avatar can have an animation over it, plus effects that people see when they preview your profile. Nitro members can access the shop and get a discount on decorations and effects.

Discord also has improvements on the way for developers and apps. Premium app subscriptions, already available in the US, are now also available in the UK and Europe. Discord is also experimenting with making apps usable in more places throughout its main app and looking at ways to let users run apps in a server without an admin having to add them first.

Today, let’s talk about how the traditional platform justice system is seeing signs of a new reform movement. The backers hope that if the initiative is successful it could lead to better behavior on the web.

The tech company’s San Francisco campus is well stocked with micro-kitchens, and employees are constantly shuttling in and out of conference rooms.

It is immediately obvious when you walk through the glass doors at the entrance that this is a place built by people who play video games. Arcade-style art decks the walls, various games hide in corners, and on Wednesday afternoon, a trio of employees sitting in a row were competing in a first-person shooter.

Video games are designed for fun but the community can be very toxic. Angry gamers hurl slurs, doxx rivals, and in some of the most dangerous cases, summon SWAT teams to their targets’ homes.

For Discord, the tool that began as a way for people to chat while playing games has become a petri dish for understanding online harms. If something can be used to hurt someone, an angry gamer will eventually try it.

Along with the growing user base has come high-profile controversies over what users are doing on its servers. The company made news when leaked classified documents from the Pentagon were found on the platform. Discord faced previous scrutiny over its use in 2017 by white nationalists planning the “Unite the Right” rally in Charlottesville, VA, and later when the suspect in a racist mass shooting in Buffalo, NY was found to have uploaded racist screeds to the platform.

Inside Discord’s System: Handling Self-Harm and Other Violations with a DM, Not a Ban

Most platforms deal with these issues with a variation of a three-strikes-and-you’re-out policy. Break the rules a couple times and you get a warning; break them a third time and your account is nuked. In many cases, strikes are forgiven after some period of time — 30 days, say, or 90. The nice thing about this policy from a tech company’s perspective is that it’s easy to communicate, and it “scales.” You can build an automated system that issues strikes, reviews appeals, and bans accounts without any human oversight at all.
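As a concrete illustration of the mechanism described above, and nothing more, here is a minimal Python sketch of a generic three-strikes ledger with expiring strikes. The three-strike limit and 90-day lifetime are just the typical numbers mentioned in the paragraph, not Discord’s actual values.

```python
from datetime import datetime, timedelta
from typing import Dict, List

STRIKE_LIMIT = 3                      # third active strike triggers a ban
STRIKE_LIFETIME = timedelta(days=90)  # strikes are forgiven after ~90 days

class StrikeLedger:
    """Bare-bones model of a classic three-strikes system (illustrative only)."""

    def __init__(self) -> None:
        self._strikes: Dict[str, List[datetime]] = {}

    def add_strike(self, user_id: str, now: datetime) -> str:
        # Drop strikes that have aged out before counting the new one.
        active = [t for t in self._strikes.get(user_id, []) if now - t < STRIKE_LIFETIME]
        active.append(now)
        self._strikes[user_id] = active

        if len(active) >= STRIKE_LIMIT:
            return "ban"      # the account is "nuked" automatically
        return "warning"      # earlier strikes only warn

ledger = StrikeLedger()
now = datetime(2023, 10, 1)
print(ledger.add_strike("user_1", now))                        # warning
print(ledger.add_strike("user_1", now + timedelta(days=10)))   # warning
print(ledger.add_strike("user_1", now + timedelta(days=20)))   # ban
print(ledger.add_strike("user_1", now + timedelta(days=200)))  # warning (old strikes expired;
                                                               # a real system would also track the ban)
```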

The three-strikes policy has two problems. One, it isn’t proportional: minor infractions and major violations are treated the same. Two, it doesn’t rehabilitate. Most users who receive strikes probably don’t deserve to be permanently banned, but if you want them to stay you have to figure out how to educate them.

Most platforms’ policies lack that nuance. If a teenage girl posts a picture depicting self-harm, Discord will remove the picture under its policies. But the girl doesn’t need to be banned from social media; she needs to be pointed to resources that can help her.

It starts with a DM: users who break the rules will receive an in-app message directly from Discord letting them know they received either a warning or a violation, based on the severity of what happened and whether or not Discord has taken action.

Discord says it will take action appropriate to the severity of the violation. Violent extremism and content that sexualizes children, for example, are not tolerated, and that will not change.
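Putting those pieces together, a warn-first policy can be thought of as a small decision table: most categories start with an educational DM, repeated or severe violations escalate to the one-year temporary ban mentioned earlier, and the zero-tolerance categories still end in removal. The sketch below is a hypothetical illustration under those assumptions; the category names, the prior-violation cutoff, and the exact escalation path are not Discord’s.

```python
from datetime import timedelta

# Categories the company says it will not tolerate at all (per the statements above);
# the "severe" examples and the escalation cutoff are hypothetical.
ZERO_TOLERANCE = {"violent_extremism", "child_sexualization"}
SEVERE = {"doxxing", "credible_threats"}

def decide_action(category: str, prior_violations: int) -> dict:
    """Map a violation to an outcome under a warn-first, ban-last policy (illustrative)."""
    if category in ZERO_TOLERANCE:
        # Extremely harmful content still leads to immediate removal.
        return {"dm": "violation", "action": "permanent_removal"}
    if category in SEVERE or prior_violations >= 3:
        # Many violations now draw a long temporary ban instead of a permanent one.
        return {"dm": "violation", "action": "temporary_ban", "duration": timedelta(days=365)}
    # First offenses in minor categories get an educational warning via DM.
    return {"dm": "warning", "action": "none"}

print(decide_action("spam", prior_violations=0))              # warning only
print(decide_action("doxxing", prior_violations=1))           # one-year temporary ban
print(decide_action("violent_extremism", prior_violations=0)) # permanent removal
```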

It’s a welcome acknowledgement of the importance of social networks in the lives of people online, particularly young people — and a rare embrace of the idea that most wayward users can be rehabilitated, if only someone would take the time to try.

The new system has already been tested in a small group of servers and will begin rolling out in the coming weeks, Badalich said. Along with the new warning system, the company is introducing a feature called Teen Safety Assist that is enabled by default for younger users. When switched on, it scans incoming messages from strangers for inappropriate content and blurs potentially sensitive images in direct messages.
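Teen Safety Assist, as described, combines two checks: is the message coming from someone the teen doesn’t know, and does it contain content worth blurring or warning about. The Python sketch below is only a rough approximation of that behavior under assumed inputs; the friend check, the keyword, and the image classifier stub are all placeholders, not Discord’s implementation.

```python
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class IncomingDM:
    sender_id: str
    text: str
    image_url: Optional[str] = None

def is_sensitive_image(image_url: str) -> bool:
    # Placeholder for an ML image classifier; a real system would score pixel content.
    return "nsfw" in image_url.lower()

def filter_dm(dm: IncomingDM, friends: Set[str], teen_account: bool) -> dict:
    """Approximate the described behavior: scan DMs from strangers, blur risky images."""
    if not teen_account:
        return {"deliver": True, "blur_image": False, "safety_alert": False}

    from_stranger = dm.sender_id not in friends
    blur = dm.image_url is not None and is_sensitive_image(dm.image_url)
    # A safety alert nudges the teen before replying to a flagged stranger.
    alert = from_stranger and (blur or "meet up" in dm.text.lower())
    return {"deliver": True, "blur_image": blur, "safety_alert": alert}

print(filter_dm(IncomingDM("u42", "want to meet up?"), friends={"u7"}, teen_account=True))
print(filter_dm(IncomingDM("u7", "look at this", "nsfw_photo.jpg"), friends={"u7"}, teen_account=True))
```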

Source: Inside Discord’s reform movement for banned users

Discussion on Inside Discord’s Reform for Banned Users: Who Is Responsible When a Server Goes Rogue?

I was happy to sit in on the meeting, which was on the record, since the company is still in the very early stages of building a solution. As with most subjects related to content moderation, untangling the various equities involved can be very difficult.

So: should the server owner be held responsible for harms in the server? Well, it turns out that Discord doesn’t have a totally consistent definition of who counts as an active moderator. Some users are automatically given moderator permissions when they join a server. If the server goes rogue and the “moderator” has never posted in it, why should they be held accountable?

It can feel impossible to untangle. The team came up with the idea of analyzing a combination of server metadata and the behavior of server owners, admins, and users to figure out how to try to rehabilitate the server.
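To make that idea a little more concrete, here is a deliberately simplified sketch of how server-level signals might be combined into a triage decision. Every feature name, weight, and threshold here is a hypothetical illustration of “metadata plus owner, admin, and user behavior,” not anything Discord has described in detail.

```python
from dataclasses import dataclass

@dataclass
class ServerSignals:
    violation_rate: float    # share of recent messages removed for rule violations
    owner_active: bool       # has the owner posted or moderated recently?
    admins_responsive: bool  # do admins act on reports?

def triage_server(s: ServerSignals) -> str:
    """Toy triage of a server based on combined signals (all thresholds hypothetical)."""
    engaged_leadership = s.owner_active or s.admins_responsive
    if s.violation_rate > 0.5 and not engaged_leadership:
        return "remove_server"           # widespread harm and nobody at the wheel
    if s.violation_rate > 0.2:
        return "warn_owner_and_monitor"  # rehabilitation attempt: educate the leadership
    return "no_action"

print(triage_server(ServerSignals(0.6, owner_active=False, admins_responsive=False)))  # remove_server
print(triage_server(ServerSignals(0.3, owner_active=True, admins_responsive=True)))    # warn_owner_and_monitor
```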


Trust and Safety Is Never Perfect: Reflections from a Visit to Discord

It wasn’t perfect; nothing in trust and safety ever is. “The current system is a fascinating case of over- and under-enforcement,” one product policy specialist said, only half-joking. The proposal under discussion would simply trade those for different cases of over- and under-enforcement.

Still, I left Discord headquarters that day confident that the company’s systems would improve over time. Trust and safety teams are sometimes caricatured as partisan scolds and censors. They can be innovators, too, and visiting Discord reminded me of that.