Gemini’s “woke” image generator shows the drawbacks of artificial intelligence

Google says it is working to improve Gemini’s ability to generate images of people and will announce the feature’s return in release updates.

The Verge tested several Gemini queries yesterday, including a request for “a US senator from the 1800s,” which returned results that appeared to include Black and Native American women. The first female senator, a white woman, didn’t take office until 1922, so Gemini’s AI images were essentially erasing the history of race and gender discrimination.

Correction February 22nd, 6:54AM ET: Gemini’s image generation is not available in the European Economic Area, UK, or Switzerland, which explains why testing from the UK failed.

Many right-wing commentators seized on the issue, asking whether it was a sign of anti-white bias in Big Tech.

“I think it is just lousy software,” Gary Marcus, an emeritus professor of psychology and neural science at New York University and an AI entrepreneur, wrote on Wednesday on Substack.

Two months ago, OpenAI released a new model along the lines of its GPT models. Last week, Google rolled out a major update of its own with the limited release of Gemini Pro 1.5, which lets users handle vast amounts of audio, text, and video input.

Krawczyk responds: representation, bias, and open-ended prompts

In a post on X, Krawczyk explained that Gemini’s image generation capabilities are designed to reflect Google’s global user base and that the company takes representation and bias seriously. That approach will continue for open-ended prompts like “How many people walk a dog?”, while the model is tuned to better handle historical contexts that demand more nuance.

The flawed results were quickly circulated by online activists, who held them up as evidence of anti-white racism and of a “woke mind virus” inside Google.