OpenAI wants GPT-4 to solve the content moderation dilemma

OpenAI says GPT-4 could take over content moderation work that currently exposes human reviewers to harassment and graphic violence.

OpenAI sees three major benefits compared to traditional approaches to content moderation. First, it claims that people interpret policies differently, while machines are consistent in their judgments. Moderation guidelines can be as long as a book and change constantly; while humans need lengthy training to learn and adapt, OpenAI argues that large language models could implement new policies in a matter of seconds.

Second, GPT-4 can allegedly help develop a new policy within hours, whereas the usual process of drafting, labeling, gathering feedback, and refining takes weeks or several months. Third, OpenAI points to the well-being of workers who are continually exposed to harmful content, such as videos of child abuse or torture.
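The approach OpenAI describes amounts to feeding the written policy to the model as a prompt and asking it to label content. The following is a minimal sketch of that idea using the OpenAI Python SDK; the policy text, label set, and model choice here are illustrative assumptions, not OpenAI's actual moderation taxonomy or pipeline.

```python
# Minimal sketch: policy-as-prompt content moderation with an LLM.
# Assumptions: OpenAI Python SDK v1.x installed, OPENAI_API_KEY set;
# the policy wording and labels below are invented for illustration.
from openai import OpenAI

client = OpenAI()

POLICY = """You are a content moderator. Apply this policy:
- V1: direct threats of violence or praise of violence against a person or group
- V0: violent content that is documentary, educational, or fictional
Respond with exactly one label: V1, V0, or SAFE."""

def moderate(text: str) -> str:
    """Ask the model to label one piece of content under the policy."""
    response = client.chat.completions.create(
        model="gpt-4",   # hypothetical choice; any capable chat model works
        temperature=0,   # deterministic judgments, per the consistency claim
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()

print(moderate("Example post to classify."))
```

Updating the rules then means editing the POLICY string rather than retraining human reviewers, which is the sense in which a new policy could take effect "in seconds" rather than weeks.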

Source: OpenAI wants GPT-4 to solve the content moderation dilemma

How to Protect Copyrights in Generative AI: A High-Profile Lawsuit Threat from The Times

After nearly two decades of modern social media and even more years of online communities, content moderation remains one of the most difficult challenges for online platforms. Meta, TikTok, and many others rely on armies of moderators who have to look through distressing and often traumatic content. Most of them are located in developing countries with lower wages, work for outsourcing firms, and struggle with their mental health while receiving only minimal psychological support.

OpenAI itself relies heavily on clickworkers and human labor: thousands of people, many of them in African countries such as Kenya, annotate and label content. The work is poorly paid, and the texts can be disturbing.

The gray area of content that isn't necessarily illegal poses a great challenge for automated systems. Even human experts struggle to label such posts, and machines frequently get it wrong. Satire, for instance, or images and videos that document crimes or police brutality are easily misclassified.
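One common mitigation for such gray areas, not attributed to any specific platform here, is to auto-action only high-confidence machine judgments and route the rest to human reviewers. The sketch below is a hypothetical illustration of that routing logic; the classifier output, threshold, and queue are all assumptions.

```python
# Illustrative human-in-the-loop routing for gray-area content.
# The Judgment structure, threshold value, and queue are hypothetical;
# real moderation pipelines are far more involved.
from dataclasses import dataclass

@dataclass
class Judgment:
    label: str         # e.g. "violates" or "allowed"
    confidence: float  # classifier confidence in [0, 1]

REVIEW_THRESHOLD = 0.85  # assumed cutoff; tuned per policy in practice

def route(post_id: str, judgment: Judgment, human_queue: list) -> str:
    """Auto-action confident calls; escalate gray areas to people."""
    if judgment.confidence >= REVIEW_THRESHOLD:
        return f"auto:{judgment.label}"
    human_queue.append(post_id)  # satire, documentation, etc. go to humans
    return "escalated"

queue: list = []
print(route("post-1", Judgment("violates", 0.97), queue))  # auto:violates
print(route("post-2", Judgment("violates", 0.55), queue))  # escalated
```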

A lawsuit from The Times against OpenAI would set up what could be the most high-profile legal tussle yet over copyright protection in the age of generative AI.

The Times is concerned that ChatGPT could compete with the paper by creating text that answers questions based on its original reporting.

If, when someone searches online, they are served a paragraph-long answer from an AI tool that relies on reporting from The Times, the need to visit the publisher’s site is greatly diminished, said one person involved in the talks.

In other words, if a federal judge finds that OpenAI illegally copied The Times' articles, the court could order the company to destroy ChatGPT's dataset, forcing it to recreate the dataset using only work it is authorized to use.

Publishers fear for their rights and ask how to ensure that companies using generative AI respect their intellectual property, brands, reader relationships, and investments.

Fair Use in the Age of Artificial Intelligence: A Class Action Against OpenAI and the Supreme Court's Warhol Ruling

In June, Times CEO Meredith Kopit Levien said at the Cannes Lions Festival that it is time for tech companies to pay their fair share for tapping the paper’s vast archives.

In the same month, Alex Hardiman, the paper’s Chief Product Officer, and Sam Dolnick, a deputy Managing Editor, described to employees an internal initiative designed to capture the potential benefits of artificial intelligence.

Comedian Sarah Silverman joined a class-action suit against OpenAI, alleging that she never gave ChatGPT permission to ingest a digital version of her 2010 memoir "The Bedwetter," which she says the company swallowed up from an illegal online "shadow library."

The fair use doctrine is a defense that AI companies may invoke if they are accused of using work without permission.

In 2015, a federal appeals court ruled that Google's scanning of millions of books for its Google Books library was a legally permissible "fair use."

In the recent Warhol decision, however, the Supreme Court found that Andy Warhol was not protected by the fair use doctrine when he altered a photograph of Prince taken by Lynn Goldsmith, in part because both images were being licensed to magazines.

The court reasoned that when the original and the copying work are distributed in the same market, there is a risk the copy will substitute for the original or its licensed derivatives.

What Journalists Can Expect When Using ChatGPT: AP's Rules for AI-Generated Content

Journalists at AP may experiment with ChatGPT but are asked to exercise caution and not use the tool to create publishable content. Output from a generative AI platform should be treated as unvetted source material, subject to AP's usual standards. The publication will not use AI-generated images unless they are themselves the subject of a news story, and in that event AP said it would label the photos as AI-generated in their captions.