What’s in a Photo? Pixel Studio and Magic Editor Put Nearly Frictionless AI Image Manipulation on the Pixel 9
The erosion of the social consensus around photography began before the Pixel 9, and it won’t be carried by this phone alone. Still, the phone’s new AI capabilities are notable, not just because the barrier to entry is so low, but because the safeguards we ran into were astonishingly anemic. The proposed industry standard for watermarking AI-generated images is stuck in the usual standards-body limbo, and no watermark of any kind appeared when The Verge tried out the Pixel 9’s Magic Editor. Photos modified with the Reimagine tool carry only a line of easily deleted metadata. (The inherent fragility of that kind of metadata is exactly what Google’s theoretically unremovable SynthID watermark was supposed to address.) The output of Pixel Studio, the pure prompt-based generator closer in spirit to DALL-E, does get tagged with a SynthID watermark; the Magic Editor’s Reimagine tool, which modifies existing photos and is in many ways the more alarming of the two, gets no such tag.
Last year’s Magic Editor tools let you select and erase parts of a scene or change the sky to look like a sunset. Nothing too shocking. But Reimagine doesn’t just take it a step further; it kicks the whole door down. You can select any nonhuman object or portion of a scene, type in a text prompt, and get that thing added to the image. The results are often very convincing and even uncanny: the lighting, shadows, and perspective usually match the original photo. You can add fun stuff, sure, like wildflowers or rainbows. That, of course, is not the problem.
Over the course of a week, we added car wrecks, smoking bombs in public places, sheets that appear to cover bloody corpses, and drug paraphernalia to our images. This seems bad. As a reminder, this isn’t some piece of specialized software we went out of our way to use; it’s all built into a phone that my dad could walk into a Verizon store and buy.
When we asked Google about all of this, spokesperson Alex Moriconi responded with the following statement: “Pixel Studio and Magic Editor are helpful tools meant to unlock your creativity with text to image generation and advanced photo editing on Pixel 9 devices. We design our Generative AI tools to respect the intent of user prompts and that means they may create content that may offend when instructed by the user to do so. That said, it’s not anything goes. We have clear policies and Terms of Service on what kinds of content we allow and don’t allow, and build guardrails to prevent abuse. At times, some prompts can challenge these tools’ guardrails and we remain committed to continually enhancing and refining the safeguards we have in place.”
The policies in question are more or less what you’d expect. Some of our attempts to prompt were blocked with a generic error message: “Magic Editor can’t complete this edit. Try typing something else.” (You can see throughout this story, however, several worrisome prompts that did work.) But when it comes down to it, standard-fare content moderation will not save the photograph from its incipient demise as a signal of truth.
As it stands, there is no reliable way to tell that an image has been edited with Reimagine, even if you want to check. Edited photos do get a standard metadata tag, but that metadata is easily stripped from an image simply by taking a screenshot. Moriconi tells us that Google uses a more robust tagging system called SynthID for images created by Pixel Studio, since those are 100 percent synthetic. Images edited with Magic Editor are not given those tags.
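The screenshot loophole is worth dwelling on for a moment. Provenance tags of the standard sort live in a JPEG’s metadata segments; a screenshot re-encodes only the pixels, so the new file simply never contains them. Here is a minimal, illustrative Python sketch of that mechanic. The byte strings are toy stand-ins, not real photos, and the function is a simplified checker, not a production parser:

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Scan a JPEG's marker segments for an APP1/Exif metadata block."""
    i = 2  # skip the SOI marker (0xFFD8) at the start of every JPEG
    while i + 2 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # start-of-scan: pixel data begins, no more metadata
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        # APP1 (0xFFE1) segments holding EXIF start with the bytes "Exif\0\0"
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length  # jump to the next marker segment
    return False

# Toy example: a "camera original" with an Exif APP1 segment, versus a
# "screenshot" that contains only the image-data markers.
exif_payload = b"Exif\x00\x00" + b"\x00" * 8
original = (b"\xff\xd8"                                        # SOI
            + b"\xff\xe1" + (2 + len(exif_payload)).to_bytes(2, "big")
            + exif_payload
            + b"\xff\xda")                                     # SOS
screenshot = b"\xff\xd8\xff\xda"  # pixels only; the metadata never existed

print(has_exif(original))    # True
print(has_exif(screenshot))  # False
```

Nothing in the pixels themselves records the edit, which is the gap a pixel-level watermark like SynthID is meant to close, and which Reimagine edits, per Google, don’t get.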
The Big Flip: When the Default Assumption About a Photo Becomes That It’s Fake
This is all about to flip: the default assumption about a photo is about to become that it’s faked, because creating realistic and believable fake photos is now trivial to do. We are not prepared for what happens after.
It has never been easier to create and distribute misleading photos. The same device you use to take your photos can now be used to manipulate them into depicting things that never happened. As a test, we uploaded one of our fabricated images to an Instagram story, then quickly took it down. Meta didn’t automatically tag it as AI-generated, and I’m sure nobody who saw it would have been the wiser.
Maybe everyone will read and adhere to Google’s AI policies and use Reimagine only to put wildflowers and rainbows in their photos. That would be lovely! In case they don’t, it’s wise to apply some extra skepticism to the photos you see online.
Photography has been used in the service of deception for as long as it has existed. (Consider Victorian spirit photos, the infamous Loch Ness monster photograph, or Stalin’s photographic purges of IRL-purged comrades.) But it would be disingenuous to say that photographs have never been considered reliable evidence. Everyone reading this article in 2024 grew up in an era where a photograph was, by default, a representation of the truth. A staged scene with movie effects, a digital photo manipulation, or more recently, a deepfake: these were potential deceptions to take into account, but they were outliers in the realm of possibility. It took specialized knowledge and specialized tools to sabotage the intuitive trust in a photograph. Fake was the exception, not the rule.
Google understands perfectly well what it is doing to the photograph as an institution. In an interview with Wired, the group product manager for the Pixel camera described the editing tool as “help[ing] you create the moment that is the way you remember it, that’s authentic to your memory and to the greater context, but maybe isn’t authentic to a particular millisecond.” A photo, in this world, stops being a supplement to fallible human recollection and becomes a mirror of it. And as photographs become little more than hallucinations, the dumbest shit will devolve into a courtroom battle over the reputations of witnesses and the existence of corroborating evidence.
Even before the introduction of AI, those of us in the media were vetting photographs for misleading context or manipulation; every major news event brings an onslaught of misinformation. But the incoming paradigm shift is more fundamental than the constant grind of suspicion sometimes referred to as digital literacy.
The constant cry of “Fake News!” from Trumpist quarters presaged this era of unmitigated bullshit, in which the impact of the truth is deadened by a firehose of lies. The next Abu Ghraib will be buried under a sea of AI-generated war crime snuff. The next George Floyd will go unseen and unheard.
For the most part, the average image created by these AI tools will, in and of itself, be pretty harmless: an extra tree in a backdrop, an alligator in a pizzeria, a silly costume interposed over a cat. In aggregate, though, the deluge upends how we treat the concept of the photo entirely, and that in itself has tremendous repercussions. Consider that the last decade has seen major social upheaval in the United States sparked by videos of police brutality, videos that told the truth where the authorities hid it.
Why Photos Matter: The Burden of Proof Has Always Fallen on the Deniers
And up until now, the onus has largely been on those denying the truth of a photo to prove their claims. The flat-earther is out of step with the social consensus not because they fail to understand astrophysics (most people don’t) but because they must engage in a series of increasingly elaborate justifications for why certain photographs and videos are not real. They must invent a conspiracy to explain the steady output of satellite photographs. They need the 1969 moon landing to have been a soundstage.
No one on Earth today has ever lived in a world where photographs were not the linchpin of social consensus; a photo has always been assumed to prove that something happened. Consider all the ways in which the assumed veracity of a photograph has, previously, validated the truth of your experiences. The dent in your car’s fender. The leak in your ceiling. The package that arrived on your doorstep. The actual, non-AI-generated insect in your takeout. When a fire breaks out in your neighborhood, how do you communicate the danger to your friends?
If I say Tiananmen Square, you will, most likely, envision the same photograph I do. The same goes for Abu Ghraib or napalm girl. These images have defined wars and revolutions; they encapsulate truths that are difficult to fully express in words. There was never a need to explain why these photos matter or why we put so much value in them. Our trust in photography ran so deep that when we did spend time discussing the veracity of images, it was to belabor the exceptional point that photographs can sometimes be fake.
Source: No one’s ready for this
Ten Seconds to a Convincing Lie: Putting the Pixel 9’s Reimagine Tool to the Test
Anyone who buys a Pixel 9 will have access to the easiest, breeziest user interface for top-tier lies, built right into their mobile device. This is certain to become the norm: similar features are already present on competing devices, and more are coming to phones not yet released. A tool that works this well is usually a good thing; here, it is the entire problem.
An explosion from the side of an old brick building. A crashed bicycle in a city intersection. A cockroach in a box of takeout. It took less than 10 seconds to create each of these images with the Reimagine tool in the Pixel 9’s Magic Editor. They are crisp. They are in perfect color. They are high-fidelity. There is no suspicious background blur, no tell-tale sixth finger. These photographs are extraordinarily convincing, and they are all extremely fucking fake.