The Impact of Open Access on Artificial Intelligence Development: A Study of Industry and Academic Research
The current artificial-intelligence boom would probably not exist were it not for work that began in academia. Many of the techniques now in everyday use, such as machine learning and natural-language processing, are underpinned by academic research into artificial neural networks that dates back decades. Yet much of the latest cutting-edge, high-profile AI research is being done not in university labs, but behind the closed doors of private companies.
Whatever approach is taken, keeping publicly funded, independent academic researchers at the forefront of AI progress is crucial for the safe development of the technology, says Vallor. If the technology is not developed responsibly, she says, it could be very dangerous, especially when commercial incentives are the only ones driving the bus on an automated system.
She suggests tax incentives: companies that create and deploy artificial intelligence responsibly could see their tax burden go down. “Those that don’t want to adopt responsible-AI standards should pay to compensate the public who they endanger and whose livelihoods are being sacrificed,” says Vallor.
For that scrutiny to happen, however, it is imperative that academics have open access to the technology and code that underpin commercial AI models. “Nobody, not even the best experts, can just look at a complex neural network and figure out exactly how it works,” says Hoos. We know little about these systems’ capabilities and limitations, so it is essential that we know how they are created.
Many companies are moving towards open access for their models, in order to get more people working with them. “It’s a core interest for industry to have people trained on their tools,” he says. Meta, the parent company of Facebook, is pushing for more open models as it tries to compete with the likes of OpenAI and Google. Giving people access to its models allows an inflow of new, creative ideas, says Daniel Acuña, a computer scientist at the University of Colorado Boulder.
Academics are free to pursue their own projects, but they can gain insights and support from industry to help solve interesting and tricky problems. Industry experience is a common requirement for new hires in Theis’s labs. The relationship between the two is very much a two-way street.
The group has studied the different approaches of industry and academic researchers, analysing how the composition of a research team relates to the novelty of its work and to its impact, as measured by citations and by the models it creates.
Financing a large facility for artificial-intelligence research in Europe: from particle physics to large language models
Funding is necessary to make the most of that freedom. Theis says that investment in basic research should not be confined to a few select places.
An ambitious plan has been put forward by CLAIRE, the Confederation of Laboratories for Artificial Intelligence Research in Europe, which Hoos co-founded in 2018. The plan is inspired by the physical sciences’ approach of sharing large, expensive facilities across institutions and even countries. “Our friends the particle physicists have the right idea,” says Hoos. “They build big machines funded by public money.”
CLAIRE’s researchers propose a facility that would allow academic scientists to keep up with industry’s AI infrastructure. Financing it would cost the European Union over 100 billion dollars over the next six years, less than the cost of the original moonshot, but worth it, Hoos argues. The facility would serve the public, rather than private company labs. And just like the Apollo programme and CERN, it would bring great benefits to both society and industry, he says.
Companies also have vast amounts of data with which to train their models, produced by their platforms as users interact with them. “When it comes to training state-of-the-art large language models for natural-language processing, academia is going to be hard-pressed to keep up,” says Fabian Theis, a computational biologist at Helmholtz Munich in Germany.
Science-policy advisers shape programmes that solve real-world problems: using machine learning at Canada’s Natural Sciences and Engineering Research Council
I started talking directly with the fishing crews and their families, and attended their meetings to hear about the drought’s impacts. Listening to the community made me a fierce advocate for scientists whose work can be applied directly to society. I make sure they keep in touch with the intended beneficiaries from the early days of their research.
She is the assistant director for climate resilience and chief of staff in the White House Office of Science and Technology Policy.
We’re also working with government agencies to weave in nature-based solutions, and I know the science behind that: a restored marsh can protect nearby communities, buildings and roads against storm surges or sea-level rise. Such initiatives strengthen nature and benefit people.
I became interested in policy when I was pursuing my master’s degree in astronomy and astrophysics at the University of Victoria, Canada. A good friend encouraged me to be vice-president of the graduate student association, through which I eventually helped to negotiate dental coverage for members. During negotiations with the university’s provosts, I realized how much needed to be done: so many problems needed solving through policy change. And I realized that I really enjoy this work.
I now work at the Natural Sciences and Engineering Research Council (NSERC), which distributes government funds to university researchers throughout Canada. We collect data to see whether our policies are working.
I work under the chief data officer, overseeing all data-related infrastructure for NSERC, including data stewardship, analysis and coordination with other government agencies and ministries, so that data become a core driver of public service.
Canada is investing money in electric vehicles, their batteries and the recycling of those batteries. If someone in parliament asks, “Electric vehicles are the future. Do we know if we’ve supported this?”, then we have to look through reports for related NSERC-funded projects. It’s time-consuming. We want machines to be part of our processes, so that nobody has to read every report looking for a specific phrase. With machine learning, you can work with a much larger data set, and using those data, we can help officials to craft policy more efficiently.
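Machine-assisted search over funded-project reports could start with something as simple as TF-IDF ranking. Here is a minimal sketch in Python; the report IDs and snippets are hypothetical and do not reflect any real NSERC data or tooling:

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase, strip basic punctuation, split on whitespace."""
    return [w.strip(".,;:!?").lower() for w in text.split()]

def rank_reports(reports, query):
    """Rank report texts by a simple TF-IDF score against a query phrase.

    reports: dict mapping report ID -> report text
    query: phrase a policy analyst is searching for
    """
    docs = {rid: Counter(tokenize(text)) for rid, text in reports.items()}
    n = len(docs)
    df = Counter()                      # document frequency per term
    for counts in docs.values():
        df.update(counts.keys())

    def score(counts):
        # term frequency in the report, weighted by inverse document frequency
        return sum(
            counts[term] * math.log((n + 1) / (1 + df[term]))
            for term in set(tokenize(query))
        )

    return sorted(docs, key=lambda rid: score(docs[rid]), reverse=True)

# Hypothetical report snippets, for illustration only.
reports = {
    "RPT-001": "Pilot project on electric vehicle battery recycling and reuse.",
    "RPT-002": "Survey of coastal marine ecosystems and oyster populations.",
    "RPT-003": "New algorithms for quantum computing hardware.",
}
ranked = rank_reports(reports, "electric vehicle battery recycling")
# ranked[0] is the battery-recycling report
```

A production system would use a proper NLP pipeline rather than this toy scorer, but the principle is the same: surface the relevant reports so that nobody has to read all of them.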
Source: Science-policy advisers shape programmes that solve real-world problems
Making EDI policies more effective in science: the diversity of people who apply for NSERC grants
The areas we’re working on include equity, diversity and inclusion (EDI). Making EDI policies more effective is important to me because I’m mixed race (my father is Black Jamaican and my mother is white Canadian) and neurodiverse: I have attention deficit hyperactivity disorder, am on the autism spectrum and have mental-health challenges. All of these things worked against me in my science education. But I know I’m lucky to have made it to where I am.
Our office put forward the idea that we need to collect data on people who apply for grants to better understand EDI. Data had been collected here and there, but in silos. An NSERC colleague and I pulled the data together and analysed them.
There are areas where the council is doing well, and others where it needs to improve. For example, we found that the diversity of people applying for funding is not yet representative of the Canadian population as a whole. We used census data to work out how many applications from specific groups would be needed for award recipients to be representative. So, for example, we might expect 50 Black people to apply for a student fellowship if the applicant pool mirrored the population, but instead we received only 30 applications.
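The census-based expectation described above is simple arithmetic: multiply the total applicant pool by a group's share of the population. A sketch with hypothetical numbers (the 4.3% share and the pool size below are invented for illustration; only the 50-expected and 30-received figures come from the text):

```python
def expected_applications(total_applicants, census_share):
    """Applications expected from a group if the applicant pool
    mirrored the general population."""
    return round(total_applicants * census_share)

# Hypothetical: a group making up 4.3% of the census, and a
# fellowship applicant pool of 1,163 people.
expected = expected_applications(1163, 0.043)  # about 50, as in the example
received = 30                                  # applications actually received
shortfall = expected - received
```

Comparing the expected and received counts per group gives a concrete number to raise with institutions.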
Because of the data, we know where the gaps are. For example, NSERC can say to a university, “We see that you didn’t report having any Black applicants to your PhD programmes last year. Are you OK with that?” Or, “Why might that be?”
Science-policy advisers shape programmes that solve real-world problems: making a difference in government and non-governmental organizations
We put together science-based evidence to inform policymakers, whether they are government stakeholders at the national or state level, or at a non-governmental organization that is implementing a project. I manage a team of about 30 people who work on various issues, including long-term decarbonization scenarios that aim to decrease companies’ carbon footprints without compromising development, an important constraint because India is a lower-middle-income country.
The hardest thing is getting an audience with policymakers. We really need to follow up several times. It’s not as though I say something and they lap it up; that never happens. It takes many conversations.
To have the greatest impact, you have to speak their language. For a legislative member who works closely with people, that might mean social impacts; for policymakers at the highest levels of government, it might mean economic impacts and investments. These are the big-ticket items that resonate. At the end of the day, they want to make a difference.
Science-policy advisers shape programmes that solve real-world problems: forming policies that shape the future of science
I worked for 25 years as a forest ecologist at the Indian Institute of Science in Bengaluru, spending much of that time monitoring the Western Ghats mountain range.
I thought I would become a faculty member at a research university. But while working at the Coastal and Marine Laboratory at Florida State University, I was doing research for the community: investigating local drought conditions, which were leading to declines in oyster populations.
Scientists believe that developing, advising on or advocating for policies can contribute to social progress. The need for science-policy advisers is ever increasing as the world reels from COVID-19 and looks ahead to upheaval caused by climate change and artificial intelligence.
Over the past few years, I’ve volunteered at organizations such as the World Economic Forum and the International Union of Pure and Applied Chemistry (IUPAC), helping global leaders create more effective, evidence-backed public policies.
Our reports have identified emerging technologies that will shape the future of science, and some of our predictions have come true. For example, our 2015 report identified the gene-editing tool CRISPR–Cas9 as a transformative technology. Five years later, the scientists who developed it won a Nobel prize.
In the report, we also highlighted mRNA vaccines. At the time, we did not foresee COVID-19. We simply thought that people weren’t paying enough attention to the technology, and that governments should put more resources into developing it.
Artificial intelligence is useful for chemistry, and science policy can help it along. To do that, we need an innovative chemistry ‘language’ that can be ‘read’ by machines. IUPAC is creating a standard way of storing chemical-substance data through a textual identifier. This will accelerate the implementation of AI in scientific discovery.
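The textual identifier IUPAC maintains for this purpose is the InChI (International Chemical Identifier), a layered string format that machines can parse directly. A minimal sketch of splitting one into its layers; the parsing logic below is an illustration, not an official InChI library:

```python
def inchi_layers(inchi):
    """Split an InChI string into its named layers.

    After the version and molecular formula, each layer starts with
    a one-letter prefix: c = atom connections, h = hydrogen positions.
    """
    prefix = "InChI="
    if not inchi.startswith(prefix):
        raise ValueError("not an InChI string")
    parts = inchi[len(prefix):].split("/")
    layers = {"version": parts[0], "formula": parts[1]}
    for part in parts[2:]:
        layers[part[0]] = part[1:]
    return layers

# Ethanol's standard InChI
layers = inchi_layers("InChI=1S/C2H6O/c1-2-3/h3H,1-2H3")
```

Because every layer has a fixed meaning, identifiers like this can be indexed, compared and fed to machine-learning models without human interpretation.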
Systems thinking also involves connecting various disciplines. In the case of chemistry, it means connecting the molecular description of compounds and their reactivity with their role in health, the economy and the environment, for instance — always putting people at the centre.
The use of artificial intelligence in legislation: a worldwide view of the EU AI Act and beyond
We’re working with governments worldwide to provide guidelines, teaching tools and training workshops. Over the past five years we have conducted workshops in South Africa, the United States and Egypt to train secondary-school teachers to use this approach.
This requires producing high-quality reports that stand up to the most rigorous scrutiny. Beyond the report, trust needs to be built through active listening and empathy. It starts on the day you are asked to advise on public policy, and it continues until the day that legislation is implemented.
Scientific advice is mainly about providing the best current knowledge in context. Decision makers have responsibilities, similar to those of lawyers, to ensure that their decisions are sound, and they should work with scientific advisers to update their policies with the latest knowledge.
There is limited evidence that states are following the EU in drafting legislation for artificial intelligence. There is, however, strong evidence of lobbying of state legislators by the tech industry, which does not seem keen on adopting the EU’s rules, instead pressing for less stringent legislation that minimizes compliance costs but which, ultimately, is less protective of individuals. Among others, there are bills in Colorado, Utah, Oklahoma and Connecticut.
The US debate on the use of artificial intelligence is more advanced than in other countries, another reason for the lack of a Brussels effect. It includes a policy roadmap from the Senate, and active input from industry players and lobbyists. Hesitancy among governors is another explanation: states fear that strong legislation could cause a local tech exodus to states with milder regulations, a concern that carried less weight for data-protection legislation.
The tech industry’s influence goes beyond passive inspiration. The Connecticut draft bill did contain a section on generative AI inspired by part of the AI Act, but it was removed after concerted lobbying from industry. Although the bill received some support, it remains in limbo: Connecticut’s governor, Ned Lamont, threatened to veto it after industry associations argued that it would stifle innovation. Its progress is frozen, as is that of many of the other, more comprehensive AI bills being considered by various states. The Colorado bill is expected to be amended to ensure that it does not hamper innovation.
A major difference between the state bills and the AI Act is their scope. The AI Act puts in place a risk-based system to protect fundamental rights: some uses, such as social-scoring systems that rank people by characteristics such as family ties or education, are banned outright. High-risk AI applications, such as those used in law enforcement, are subject to the most stringent requirements, and lower-risk systems have fewer or no obligations.