As Congress continues its investigation into the role AI should play in government, members of the House Oversight Committee are questioning the uses and procurement of AI tools for government work, as well as the privacy concerns the technology poses when unregulated.
Expert witnesses testified at a June 5 hearing that the federal government is currently using AI in some capacities, but several factors, such as the government’s outdated technology systems and data practices, could keep federal workers from using the technology to its full potential. They also said the government’s
current procurement practices
for AI technology slow its adoption, which risks other countries taking the lead in the global AI race.
“The question before us is not whether the federal government should adopt AI, it’s whether we will lead or follow,” said Bhavin Shah, founder and CEO of Moveworks, an AI company that contracts with both private companies and local governments.
Many federal departments are already using AI approved under current procurement standards. The Department of Health and Human Services uses AI in medical research and to track disease outbreaks, and the Department of Veterans Affairs uses AI to help analyze medical records and data to predict suicide risk.
Shah said in his testimony that federal government workers “deserve” the same quick access to AI-powered tools that the private sector uses to maintain efficiency and cost savings. He said Moveworks spent three and a half years and $8.5 million to achieve FedRAMP status, the standardized approach the federal government uses to assess the security of cloud-based tools.
“This is a prohibitive barrier for smaller AI innovators, the companies that often develop the most cutting-edge solutions,” Shah said.
Ylli Bajraktari, president and CEO of tech-focused think tank Special Competitive Studies Project, said that while the U.S. still leads in AI research, the slow adoption of AI in government is the country’s “primary disadvantage” against countries like China.
“We are hampered by bureaucratic inertia, outdated IT infrastructure and a lack of AI literacy in the workforce,” Bajraktari said. “Overcoming these barriers is the key to winning the global tech competition.”
Bajraktari’s suggestions to the committee included establishing a “space race” style council for AI in the White House, increasing non-defense AI research and development spending to $32 billion and launching a targeted AI talent strategy that would promote AI literacy and attract top STEM talent from abroad. He also suggested overhauling the procurement process to more quickly integrate AI into government systems and strengthening global alliances in AI and cybersecurity.
But the oversimplified, speedy adoption of AI in government is a major security and data privacy risk, testified Bruce Schneier, a fellow and lecturer at the Harvard Kennedy School. Schneier warned members of Congress that recent actions by Elon Musk — who just stepped down as the leader of the Department of Government Efficiency, or DOGE, this week — highlight the dangers of unfettered AI use.
In February, Musk said the DOGE team was using AI to help make decisions about federal workers’ jobs. DOGE had also reportedly gained access to sensitive data from various federal departments, including the Treasury Department’s payment system, Social Security records and other demographic data. In April, 48 lawmakers expressed their concerns about Musk using unauthorized AI systems on these and other datasets.
Musk’s actions run counter to what the committee aims to do, Schneier said: release AI responsibly and protect the interests and rights of Americans. Americans’ data has been consolidated and fed to unvetted AI models, he said, which poses national security risks.
“You all need to assume that our adversaries have copies of all the data DOGE has exfiltrated and have established access into all the networks that DOGE has removed security controls from,” Schneier said. “And your data can be used against you.”
Musk’s actions and access to Americans’ data, along with his reported drug use, were the center of a 40-minute debate before the June 5 hearing, in which Republicans, in a 21-20 vote, once again blocked a Democratic attempt to force Musk to testify before Congress.
“He’s dismantled our government, endangered Americans and weaponized public service for his own financial gain,” said Rep. Stephen Lynch, a Democrat from Massachusetts and the acting ranking member of the House Oversight Committee.
Linda Miller, founder and chief growth officer at TrackLight, a platform that performs fraud detection for government programs, said in her testimony that DOGE’s actions are an example of why it’s challenging to impose private sector innovation on governments.
“My fellow panelists have made some very smart suggestions about turbocharging the rip and replace efforts of legacy IT systems and removing procurement barriers, but we have to be realistic about how capable government is to enact these rapid changes, no matter how badly they’re needed,” she said.
Miller said the best uses of AI in government work today are automating routine processes and repetitive tasks, which would free federal workers for higher-level work. To speed AI adoption, she recommended that Congress consider supporting “regulatory sandboxes” — controlled environments where AI systems can be tried and tested under the supervision of regulators before they are released at scale.
“Wholesale changes to legacy IT systems and the federal acquisition system will take years, but we can create innovative laboratories where AI projects can operate in proof of concept regulatory sandboxes, in carefully controlled environments, to start to show the art of the possible,” Miller said.