Listen to the Nature Podcast: two cities buried high in the mountains of Central Asia, and a digital watermark for AI-generated text
Don’t miss an episode. Subscribe to the Nature Podcast on Apple Podcasts, Spotify, YouTube Music or your favourite podcast app. An RSS feed for the Nature Podcast is available too.
Also in this episode: how a researcher’s own brain reacted to birth-control pills, and how high-altitude tree planting could benefit a species of butterfly.
A watermark subtly changes the way an LLM selects its tokens as it draws on its huge training set of billions of words, and an analyser can spot this statistical signal. But someone who wants AI-generated text to pass as human-written can remove the signal — by paraphrasing or translating the LLM’s output, for example, or by asking another LLM to rewrite it. And a watermark that can be removed is not really a watermark at all.
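The idea of biasing token selection can be sketched with a toy "green list" watermark in the spirit of the University of Maryland work. This is a simplified illustration, not DeepMind's SynthID-Text: the vocabulary size, the green-list fraction and all function names here are invented for the example.

```python
# Toy "green list" watermark sketch (illustrative only, not SynthID-Text).
# Embedding: at each step, the previous token seeds an RNG that marks half
# the vocabulary "green", and the generator prefers green tokens.
# Detection: count how often consecutive tokens land on the green list.
import hashlib
import random

VOCAB = list(range(1000))  # toy vocabulary of token IDs


def green_list(prev_token: int, fraction: float = 0.5) -> set:
    """Deterministically mark a fraction of the vocabulary 'green',
    seeded by the previous token."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))


def generate(prev_token: int, candidates: list[int]) -> int:
    """Pick the first candidate token on the green list; fall back to the
    model's top choice. (A real LLM would nudge logits instead.)"""
    green = green_list(prev_token)
    for tok in candidates:
        if tok in green:
            return tok
    return candidates[0]


def detect(tokens: list[int], fraction: float = 0.5) -> float:
    """Fraction of tokens drawn from the green list: near `fraction` for
    unwatermarked text, well above it for watermarked text."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, fraction))
    return hits / max(len(tokens) - 1, 1)
```

Because the detector only re-derives the green lists from the token sequence itself, paraphrasing or translating the text produces new tokens that no longer favour the green lists, which is why such edits can erase the signal.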
In this episode: two remote cities buried high in the mountains of Central Asia, and how a digital watermark can help to identify text generated by artificial intelligence.
Watermarking AI-Generated Text: What DeepMind’s SynthID-Text Can Do, and What It Can Teach Us About AI
In a welcome move, DeepMind has made the model and underlying code for SynthID-Text free for anyone to use. The work is an important step forwards, but the technique itself is in its infancy. We need it to grow up fast.
There is an urgent need for improved technological capabilities to combat the misuse of generative AI, and a need to understand the way people interact with these tools — how malicious actors use AI, whether users trust watermarking and what a trustworthy information environment looks like in the realm of generative AI. Researchers need to study these questions.
Watermarking will only be useful if it is acceptable to companies and users. Although regulation is likely, to some extent, to force companies to take action in the next few years, whether users will trust watermarking and similar technologies is another matter.
In contrast to the United States, the European Union has adopted a legislative approach, with the passage in March of the EU Artificial Intelligence Act, and the establishment of an AI Office to enforce it. China has introduced mandatory watermarking and the state of California wants to do the same.
Researchers have been exploring ways to watermark LLM outputs for several years, and a version of the approach is being tested by OpenAI, the company in San Francisco, California, behind ChatGPT. But there is little published literature on how the technology works, or on its strengths and limitations. One of the most important contributions came from Scott Aaronson, a computer scientist at the University of Texas at Austin, who spoke influentially about how watermarking can be achieved. John Kirchenbauer and his colleagues at the University of Maryland in College Park have also made valuable contributions.