
Running an online platform is only getting harder from here under the UK’s new rules

How does Ofcom plan to tackle the online spread of illegal content? Ofcom’s Whitehead answers questions about Musk, encryption, and the Online Safety Act

For the first time, the act places a duty of care on tech firms toward their users. They have to conduct risk assessments to understand the specific risks their services pose, and take down illegal content if they become aware of it on their platforms.

The goal is to make sites alert to the spread of illegal content rather than merely aware of it after the fact, encouraging a switch from a reactive approach to a proactive one, according to a lawyer specializing in tech, media, telecoms, and data.

Large tech platforms already follow many of these practices, but Ofcom hopes to see them implemented more consistently. “We think they represent best practice of what’s out there, but it’s not necessarily applied across the board,” says Gill Whitehead, Ofcom’s group director for online safety. “Some firms are applying it sporadically but not necessarily systematically, and so we think there is a great benefit for a more wholesale, widespread adoption.”

The platform known as X is one of the big outliers. The UK’s efforts with the legislation long predate Elon Musk’s acquisition of Twitter, but the act was passed as he fired large swaths of the company’s trust and safety teams and presided over a loosening of moderation standards, which could put X at odds with regulators. Ofcom’s guidelines, for example, specify that users should be able to easily block other users, but Musk has publicly stated his intention to remove X’s block feature. He has also clashed with the EU over similar rules and was reportedly considering pulling out of the European market to avoid them. When I asked whether X has been cooperative in its talks with Ofcom, Whitehead declined to say.

One controversial section of the act allows Ofcom to require online platforms to use so-called “accredited technology” to detect child sexual abuse material (CSAM). But WhatsApp, other encrypted messaging services, and digital rights groups say this kind of scanning would require breaking apps’ encryption and invading user privacy. The act’s full impact on encrypted messaging remains unclear, as Ofcom plans to consult on these powers next year.

There’s another technology not emphasized in today’s consultation: artificial intelligence. But that doesn’t mean AI-generated content won’t fall under the rules. The Online Safety Act tries to address online harms in a technology-neutral manner, regardless of how the content was created. AI-generated CSAM would be in scope because it is CSAM, and a deepfake used to commit fraud would be in scope because it is fraud. The act regulates the harm, not the technology behind it.

MacKinnon says the law is problematic for a nonprofit whose every hour of work is zero-sum

“We agree as a platform that we have responsibilities,” MacKinnon says, but “when you’re a nonprofit and every hour of work is zero sum, that’s problematic.”

The act, which became law in October, covers a wide spectrum of issues, from how technology platforms should protect children from abuse to scam advertising and terrorist content. The first set of proposals for how the act will be implemented was released today.