Commentary

How DOGE’s modernization mission may have inadvertently undermined federal AI adoption

January 21, 2026


  • DOGE promised to modernize federal technology, maximize productivity, and eliminate waste, fraud, and abuse across the executive branch, but it may have instead made it more difficult for agencies to adopt AI, despite a top-down mandate.
  • DOGE’s explicit framing of AI adoption around efficiency and workforce reduction did little to catalyze cultural change and made an already challenging environment more difficult.
  • The federal government can rebuild trust by reframing AI adoption around mission improvement and service delivery rather than workforce reduction, and by rewarding responsible risk-taking.
Leader of the Department of Government Efficiency Elon Musk wears a shirt that says "Tech Support" as he speaks during the first cabinet meeting of U.S. President Donald Trump's second term in the Cabinet Room at the White House in Washington, D.C., on Feb. 26, 2025. Jim Watson/AFP via Getty Images

Over the past year, the Trump administration has emphasized the importance of artificial intelligence (AI) adoption to both “deliver the highly responsive government the American people expect and deserve” and help the Department of Defense “maintain its global military preeminence.”  This effort builds on initiatives from the Biden administration and the first Trump administration, which encouraged agencies to experiment with AI in their workflows, endeavored to build a more AI-savvy workforce, and sought to break down bottlenecks to AI adoption.  

The current administration has operationalized these efforts through various vehicles. These include the U.S. Tech Force, a new talent recruitment process; the General Services Administration’s (GSA) OneGov pooled-procurement strategy; and the Department of Government Efficiency (DOGE), which was established by executive order in January 2025, though its current status remains unclear. The initiative, led by Elon Musk for its first four months, promised to modernize federal technology, maximize productivity, and eliminate waste, fraud, and abuse across the executive branch. One year later, it is increasingly evident that this effort—and its frenetic activity during the first half of 2025—may have instead made it more difficult for agencies to adopt AI, despite a top-down mandate. 

DOGE’s modernization legacy 

Although DOGE identified some legitimate and well-known challenges related to data silos, overlapping or convoluted regulations, and talent recruitment and retention, my forthcoming report on AI adoption in the federal government finds that DOGE did little to address these bottlenecks and, in some cases, may have undermined the very modernization efforts it aimed to advance. Rather than addressing these issues systematically, DOGE’s full-court press into the executive branch depleted technical capacity, further entrenched a risk-averse culture that rewards the status quo over responsible experimentation, and fostered mistrust toward future data-sharing initiatives. 

An inadvertent talent vacuum 

At the heart of federal tech modernization is talent to oversee and implement change. For more than a decade, agencies have worked to update processes designed to recruit and retain technologists, including through direct hiring authorities, pooled certifications of applicants, flexible candidate rankings, and fixed-term appointments that are both part of centralized services and embedded within agencies. An October 2023 executive order from the Biden administration announced a hiring surge for “AI and AI-enabling talent,” including those who could develop, manage, and monitor AI systems; upskill the federal workforce; and understand the effects of AI on society, including legal issues related to misuse.  

An April 2024 status update from the AI and Tech Talent Task Force reported a significant increase in hiring and planned hiring, including 750 projected new AI and AI-enabling hires by September 2025. According to federal job listing data analyzed in my forthcoming report, the share of technical job listings specifying AI capabilities rose from zero in 2016 to around 9% in 2024 (318 roles). Approximately 25% of the AI-specific job listings were posted from 2024 onward, after Biden’s executive order. This means that by February 2025—when reductions in force began across the federal government—many of these recent hires were still probationary employees, making them much easier to dismiss as part of the mass firings and resignations that shrank the federal workforce by approximately 250,000 employees. 

In addition to probationary employees, the mass firings also included technical talent that had been working for years to break down resistance to modernization efforts and foster a culture of responsible experimentation within the federal government. The “deletion” of 18F, a small digital services agency within the GSA, and the tumult at the U.S. Digital Service—the White House technology team repurposed to house DOGE—further stymied tech modernization efforts and halted projects that could have genuinely improved how the federal government serves citizens, including a long-awaited direct file project for taxes.  

For an administration that made federal AI adoption central to its agenda, the haphazard culling of the federal workforce inadvertently removed the people working to achieve this objective. Efforts to course correct with a new fixed-term tech talent recruitment surge are helpful but ultimately insufficient and will face their own distinct set of challenges. 

A culture of extreme risk aversion remains intact 

In a risk-averse culture where innovation is not rewarded in performance reviews, a powerful preference toward the status quo makes technology modernization challenging even in the most optimal environments. Technologists have identified artificial “third rails” or “fiefdoms” around processes or data flows; areas ripe for tech intervention, such as spreadsheet-based reporting, are declared off limits without explanation, even though leveraging technology could offload tedious tasks or improve how government serves its citizens. 

DOGE’s explicit framing of AI adoption around efficiency and workforce reduction did little to catalyze cultural change and made an already challenging environment more difficult. Federal employees who might otherwise embrace AI tools instead perceived them as an existential threat to their livelihoods. And warranted concerns about surveillance and job displacement among staff who occupy critical chokepoints may have led them to drag their feet, quietly block pilots, or resist scaling beneficial systems. With fixed-term technology talent primed to swoop in for a few years, it is easy enough to wait them out rather than experiment. The harder challenge is enacting real cultural change that prioritizes and rewards responsible experimentation. 

Data sharing takes a hit 

For years, federal agencies—through their chief data officers—have recognized the importance of high-quality datasets for evidence-based decision-making and worked to address bottlenecks in data sharing both within and across agencies. DOGE’s approach to data may have undermined ongoing efforts to address data storage, quality, and governance issues, creating new problems that will take time to repair.  

As part of its effort to build a more efficient government, DOGE sought unprecedented access to sensitive data across the federal government. For all of its limitations, the Privacy Act of 1974 was designed to restrict access to data based on a “need-to-know” basis and prevent the kind of centralized database that DOGE appeared to be constructing. Multiple courts found that DOGE affiliates accessed private information from agencies without being able to articulate a “legitimate need” for such access.  

Adding to these concerns, high-profile incidents, such as DOGE affiliates accessing systems without proper security credentials and releasing sensitive but unclassified information about intelligence agencies on the public-facing DOGE website, may have confirmed fears about improper data stewardship, amplified distrust, and fostered more risk aversion toward future data linkages.  

The road forward after DOGE 

One year on, DOGE’s status remains uncertain, but the legacy of its first few months of operation persists. Despite an explicit mandate from the White House to accelerate AI adoption across federal agencies, DOGE’s approach may have made it more difficult for federal agencies to fulfill this aspiration. While technical talent in the federal government is a prerequisite for both building bespoke AI systems and managing contractor-developed ones, fixed-term hiring will not solve the tech talent gap that DOGE exacerbated. Tours of duty for technologists are a valuable addition to the broader workforce, but federal technology modernization must be recognized as a long-term institutional project requiring sustained investment in permanent capacity. This includes improving the technical career pathway to create more opportunities for advancement and reforming hiring processes to ensure the right talent gets in, even when candidates are unfamiliar with the particularities of federal hiring.  

The federal government should also work to rebuild trust within its existing workforce by reframing AI adoption around mission improvement and service delivery rather than workforce reduction. It must also reward responsible risk-taking as part of performance evaluations and encourage continued learning and skills development across the workforce.  

Finally, data-sharing initiatives need to be rebuilt around legitimate processes, including proper security vetting and transparent governance. The shortcuts DOGE took demonstrated that expedited data sharing across agencies was possible; however, they may have also undermined interagency collaboration. Improving this collaboration will require investments in shared infrastructure, like APIs or data brokers, clear access restrictions, and regular audits for quality control, among other possibilities. 

DOGE’s activities over the past year demonstrate what happens when federal government modernization meets a “move fast and break things” mentality. Rather than achieving its mission—which even Musk acknowledged was only “somewhat successful”—the rapid flurry of activity catalyzed by DOGE may have done more to undermine technology adoption than facilitate it. Moving forward, a more deliberate and systematic process that addresses myriad issues related to talent, funding, data access, outdated regulations, and trust will help the federal government leverage AI capabilities to better deliver for its citizens. 

  • Footnotes
    1. The report, which will be released in February, explores existing bottlenecks and solutions to better facilitate AI adoption across the federal government. It draws on interviews with current and former technologists across executive agencies and data from several sources, including AI Use Case Inventories, OMB memoranda submissions, federal jobs data, and request for information submissions. 

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).