A government commission is currently considering an innovation that could be as transformational for artificial intelligence (AI) as a hadron collider is for physics. It’s called a National Research Cloud, and right now the federal government’s National AI Research Resource (NAIRR) task force is determining how to develop such a cloud to expand access to computing power and data and to stimulate basic and non-commercial research in AI. What is at stake may be rates of investment in basic scientific research not seen since the days of the Cold War.
The concept is simple. The federal government would provide access to the computing and data resources needed for AI research, resources that are becoming increasingly inaccessible to academics. The best way to expand access is federal funding that lets researchers use existing commercial cloud computing in the short term while public cloud computing options are built for the long term.
Yet critics attack this idea from both sides, endangering the potential for substantial innovation.
Some big tech skeptics reject the very idea of a National Research Cloud, fearing it would increase the power of big tech companies and the damage AI is doing to vulnerable communities. On the other hand, free-market scholars believe the government is playing too interventionist a role and that we should rely exclusively on private tech companies – which offer a wide range of cloud services – to provide commercial cloud credits to academic researchers.
These objections are wrong. The Stanford Institute for Human-Centered AI, where three of us have appointments, led the call for the NAIRR in 2020. Over the past 10 months, we’ve assembled a large research team to undertake an in-depth study of how to design the NAIRR and published our findings in a report. Our research strongly contradicts these objections.
To those worried about the concentration of AI research power in private tech companies: we are already there. Failure to create a National Research Cloud would only further consolidate the power of the private sector in AI.
Ten years ago, PhDs with expertise in AI were just as likely to go into academia as into industry. Now they are twice as likely to join industry. This has led to problems. First, private-sector research is subject to the direction, oversight and veto of tech companies, leading to research myopia. Facebook, for example, buried its internal research on Instagram’s toxicity to teenage girls. Second, private-sector research tends to be directed toward a narrow set of commercial ends. As data scientist Jeff Hammerbacher noted, “The best minds of my generation are thinking about how to get people to click on ads.”
A National Research Cloud is a compelling way to address both of these problems, as it would expand access to AI resources outside the corporate context. It would expand the number of people capable of developing, interrogating or auditing AI systems, going beyond narrow technical fields to include the physical sciences, social sciences and humanities. Unlocking data – in privacy-protecting ways – about Earth observation, labor markets, and our justice system, now often accessible only to a privileged few, would also steer AI toward a more diverse set of pressing social problems.
To those on the other side who fear government involvement more than the influence of the private sector: as described above, relying on the private sector alone greatly hinders innovation, and the public sector plays a vital role in stimulating basic research and doing so cost-effectively. Take the example of satellite imagery. Until 2008, the United States Geological Survey charged about $600 per satellite image. When it made imagery free, it fueled the use of computer vision to study global warming, habitat change, poverty, and urban sprawl, generating an estimated $3 billion to $4 billion in annual benefits.
In the computing context, the federal government has extensive experience in building state-of-the-art facilities, from Oak Ridge National Laboratory’s Summit system, the world’s most powerful supercomputer from 2018 to 2020, to the National Science Foundation’s investment in a high-performance computing network that has contributed, alongside the private sector, to major efforts such as the COVID-19 response. Many universities have also found that relying on commercial cloud services can be three to eight times more expensive than building their own systems. Such a public system cannot be built overnight, but it can be worth the initial effort and expense.
Not only that, but America also desperately needs to build a public-sector workforce ready to use, monitor and regulate AI systems – and a National Research Cloud can make that possible. Legacy computer systems continue to plague the government. As the Government Accountability Office noted, the Department of Defense as of 2016 “was still using 8-inch floppy disks in a legacy system that coordinates the operational functions of the United States nuclear forces.” The NAIRR is an opportunity for the federal government to reset and rebuild itself, and it should not be seen as a zero-sum game between the tech skeptics and the free-market crowd.
On its current trajectory, our future in AI will increasingly rest in the hands of a small number of industry players. A National Research Cloud can correct this imbalance, expanding the range of voices with access to one of the most important technological developments of our time.
Daniel E. Ho, JD, PhD, is the William Benjamin Scott and Luna M. Scott Professor of Law and Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence at Stanford University. Jennifer King, PhD, is a privacy and data policy researcher at the Stanford Institute for Human-Centered Artificial Intelligence. Russell C. Wald is director of policy at the Stanford Institute for Human-Centered Artificial Intelligence. Christopher Wan is a JD/MBA candidate at Stanford University; he co-authored the “Building a National AI Research Resource” report and provided research and writing assistance for this article.