Many Looming Unknowns About AI Policies of Incoming Administration
By Deborah Borfitz
January 9, 2025 | Looming uncertainties tied to the upcoming U.S. administration change will affect many key areas of artificial intelligence (AI) in 2025, according to Mark Dredze, interim deputy director for the Data Science and AI Institute at Johns Hopkins University. While a focus on deregulation appears likely, exactly how things will play out for AI governance and policy is anyone’s guess.
“I actually have no idea what to expect in 2025,” Dredze says. “I am super-confused.”
He can name at least five reasons he is baffled—each tied to the political agenda of President-elect Donald Trump and seemingly contradictory moves by his billionaire pal Elon Musk, whom he has chosen to lead a newly constituted Department of Government Efficiency. Dredze’s focus is on the safety and limitations of AI development, federal workforce issues, changes that might happen to an executive order issued by the Biden administration, the availability of research funding, and AI competition.
Much has happened over the past two years that could potentially be undone, he says, beginning with Biden’s Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence issued in October 2023. EO 14110, as it is known, is “the most significant AI regulation in the United States” and stipulates that every federal agency needs an in-house chief AI officer.
The order also set up the AI Safety Institute Consortium (AISIC), of which Hopkins is a member, and catalyzed the launch of many major AI-focused research initiatives at the National Institute of Standards and Technology (NIST), National Science Foundation (NSF), National Institutes of Health (NIH), and many agencies within the Department of Defense (DoD), he adds.
One of the major components of EO 14110 centers on AI safety and placing limits on the development of core AI technologies that might cause harm to individuals and society, continues Dredze. Concerns about AI safety, bias, fairness, and transparency were also major themes of congressional hearings, leading to proposed limitations and reporting requirements for explaining AI models.
How AI safety is defined also seems to be in flux, Dredze says. Here he points to Musk, a high-profile player in the AI space who also “seems to have the ear of the incoming president.” Musk has strongly advocated for awareness about the existential risk AI poses to society and humanity and more recently came out in support of a controversial California bill codifying the first safety guardrails on AI in the U.S., “whereas almost every other tech company was against it.”
On the other hand, he is the founder of a company called xAI, “which stands out in the AI space for not doing any of the normal safety stuff,” says Dredze. Its chatbot was purposefully designed to have fewer guardrails than its major competitors.
It is unclear which of these opposing views might win out in 2025, he says. More certain is a rollback of the limitations on AI companies, in line with deregulation sentiments of the incoming administration. “But how does that square with the concern about existential risk?”
The ‘Wild Card’
A changing of the guard in Washington also comes with “a big push for reforming the federal workforce,” notably shrinking its size and ending remote work across agencies, which has left many current employees nervous, says Dredze. “The problem is those efforts are going to have implications for recruiting talent in the [super-competitive] AI space,” meaning a loss of in-house expertise.
“I think it is safe to assume that the new administration is going to revoke [EO 14110],” he adds, the unknown being what will take its place—or if only parts of the policy will be rolled back. “We don’t know because so much AI policy is made under executive order and subject to disappear at the whim of the incoming administration.”
As a professor, Dredze says, he is particularly concerned about a potential drop in federal research funding for studies at Hopkins and other academic institutions. “The NSF has funded large-scale AI centers, [and] the NIH is making a massive investment in AI across all 27 institutes and centers.” NIST and the Advanced Research Projects Agency for Health (ARPA-H), a relatively new federal agency, also have multiple projects underway that were launched in response to the Biden administration’s interest in seizing the promise and managing the risks of AI.
“What is going to happen under the new administration, especially in the climate of cutting spending at the federal level, I don’t know,” says Dredze. “A lot of AI funding does come from the DoD [and] DARPA [Defense Advanced Research Projects Agency]... so maybe we’ll see increases there.”
Competition between the U.S. and other countries—critically, China—is the “wild card,” Dredze says. Both nations have been investing heavily in building and deploying the best AI.
The U.S. government has responded by placing limitations on the export of cutting-edge GPU chips to China and making investments in the domestic manufacturing market, he continues, both of which seem to align with sentiments of the incoming administration. But how this would interact with the potential loss of research funding and a clear regulatory framework for the development of AI technologies is “super up in the air.”
Long-term Consequences
People “often underestimate” the impact of the federal government on industry and academia, says Dredze, referencing funding he personally received from DARPA as a Ph.D. student. Years ago, the agency financially supported the development of AI agents used to help with workflow tasks such as answering emails and making PowerPoint slides.
The leadership of one of those teams launched a startup called Siri that was eventually acquired by Apple, he points out. Newer versions of voice-activated digital assistants, notably Alexa and Google Assistant, have since emerged.
What happens over the next year or two on the federal stage—most critically, “who is in the federal government, what kind of talent we have there, and where they’re directing their resources”—could well affect the course of AI development over the next 20 years in the U.S., says Dredze.
He remains skeptical of anyone suggesting that AI will ever hit a “scaling wall,” a theoretical point where increasing the size and complexity of an AI model starts yielding diminishing returns. AI models will be trained differently in another few years, just as the training methods used today are different from those employed a few years ago, Dredze says. The same evolution was true of computing and the building of faster processors, and it will likewise be true of AI.