
Powerful computing efforts are being launched to boost research

Predicting the Future With Artificial Intelligence: What Is The Entity? When Should We Worry? And What Will President Biden's Executive Order Do About It?

Predicting the future and warning against it has been the subject of science fiction for many years. Even as Star Trek envisioned the wonders of flip phones and iPads, Neal Stephenson’s Snow Crash warned of the dystopian nature of the metaverse.

Too often, it seems like the minds pushing AI watched too much Trek and not enough Kubrick. Throughout Silicon Valley, the hype is often focused on all the wonderful things AI can create, from art to music to term papers. Meanwhile, others are left to warn that AI might be using other people’s work without authorization, regurgitating racist stereotypes, or just evolving too quickly. Never before have optimism and pessimism coexisted so uncomfortably.

Given that, it almost seems petty to nitpick about the actual content. But I will anyway. Biden issued an executive order. I read all 19,811 words of government-speak so you won't have to; I was ready for Dramamine by the end. How will the president encourage the benefits of the technology while reining in its dangers? By unleashing a human wave of bureaucracy. The document wantonly calls for the creation of new committees, working groups, boards, and task forces, staffed with civil servants and political appointees who will supervise the use of artificial intelligence.

There is a reason why WIRED chose Dead Reckoning as the perfect artificial-intelligence panic movie. In it, an AI known as The Entity becomes fully sentient and threatens to use its all-knowing intelligence to control military superpowers all over the world. It's the ideal paranoia litmus test: when something rises to the level of Big Bad in a summer blockbuster, you know it's the thing people are most freaked out by right now. And for someone like President Biden, who is aware of the artificial-intelligence brinkmanship happening around the world, The Entity must seem terrifying. Did no one watch the movie?

As ChatGPT’s first birthday approaches, presents are rolling in for the large language model that rocked the world. From President Joe Biden comes an oversized “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” And UK prime minister Rishi Sunak threw a party with a cool extinction-of-the-human-race theme, wrapped up with a 28-country agreement (counting the EU as a single country) promising international cooperation to develop AI responsibly. Happy birthday!

Before anyone gets too excited, let’s remember that it has been over half a century since credible studies predicted disastrous climate change. Now that the water is literally lapping at our feet and heat is making whole chunks of civilization uninhabitable, the international order has hardly made a dent in the gigatons of fossil fuel carbon dioxide spewing into the atmosphere. The United States has just installed a climate denier as the second in line to the presidency. Will the regulation progress any better?

Among the things the document lacks is a firm legal backing for all the regulations and mandates that may result from the plan: executive orders are often overturned by the courts or superseded by Congress, which is contemplating its own AI regulation. Don't hold your breath, as a government shutdown looms. And many of Biden's solutions depend on self-regulation by the industry that's under examination—whose big players had substantial input into the initiative.

Both nations have committed to developing national AI "research resources," which aim to provide AI researchers with cloud access to heavy-hitting computing power. Russell Wald, leader of the policy and society initiative at the Stanford Institute for Human-Centered Artificial Intelligence in California, says that the UK has made a massive investment.

The director of the Isambard National Research Facility at the University of Bristol says that the system will be one of the world's top five AI-capable systems. With it, he says, UK researchers will be able to train even the largest frontier models now being conceived in a reasonable amount of time.

“It’s a good thing,” says Bengio. “Right now, all of the capabilities to work with these systems is in the hands of companies that want to make money from them. We need academics and government-funded organizations that are really working to protect the public to be able to understand these systems better.”

The executive order directs agencies that fund life-sciences research to establish standards to protect against using AI to engineer dangerous biological materials.

Agencies are urged to help skilled immigrants stay, study, and work in the United States. The National Science Foundation must launch and fund a regional innovation engine focused on artificial intelligence, and establish four national AI research institutes within the next 18 months.

In 2021, Wald and colleagues at Stanford published a white paper with a blueprint of what such a resource might look like. In January, a NAIRR task force report called for a budget of $2.6 billion over an initial period of six years. "That's peanuts," says Wald, who believes it should be substantially larger. Funding for a full-scale NAIRR, he says, will have to be passed by lawmakers, who took up legislation on it in the summer of 2023. "Congress needs to step up and invest in this," Wald says. "If they don't, we're just leaving it to the companies."

Plans for the UK AIRR were announced in March. At the summit, the government said it would triple the AIRR's funding to £300 million in order to transform UK computing capacity. Given the country's population and gross domestic product, the UK investment is much more substantial than the US proposal, says Wald.

The plan is backed by two new supercomputers: Dawn in Cambridge, which aims to be running in the next two months; and the Isambard-AI cluster in Bristol, which is expected to come online next summer.


Yoshua Bengio, director of the Quebec Artificial Intelligence Institute, notes that no one yet knows what the systems set for release next year will be capable of. He warns that developers are on a path to building potentially dangerous systems. "We already ask pharma to spend a huge chunk of their money to prove that their drugs aren't toxic. We should do the same."