Assessing the state of AI adoption across the federal government

April 15, 2026


  • The past three administrations have made adoption of AI within the federal government a priority, yet clear bottlenecks remain.
  • AI use case inventories, federal jobs data, OMB memoranda, and interviews with federal technologists reveal the pace and scope of AI adoption has accelerated in recent years, though use remains concentrated among a handful of large agencies.
  • Expanding support for AI literacy across agencies and fostering public trust through stronger transparency practices can help bolster responsible AI adoption across the federal government.

Executive summary

Three consecutive administrations have made adoption of artificial intelligence (AI) across the U.S. federal government a priority. Most recently, the Trump administration’s AI Action Plan highlighted AI’s potential to “help deliver the highly responsive government the American people expect and deserve.” To assess the current state of AI adoption across the federal government, this report draws on AI use case inventories from 2023 to 2025, federal jobs data, OMB memoranda, request for information submissions, and interviews with current and former federal technologists across eight agencies.

While the scope and pace of AI adoption accelerated significantly over the past three years, AI use across the federal government remains concentrated among a handful of large agencies. Workforce capacity constraints, a risk-averse culture, procurement and funding challenges, and low public trust in AI systems slow adoption efforts.

To bolster responsible AI adoption, the federal government could expand support for technical talent and AI literacy across agencies; continue to address the structural barriers in procurement, regulation, and budgeting that hinder technology modernization more broadly; and foster public trust through stronger transparency practices, improved use case inventories, and a focus on high-impact, positive applications that demonstrably improve how government serves the American people.

AI adoption within the federal government has been a bipartisan priority, but the scale and pace of its use varies significantly by agency

The Trump and Biden administrations both emphasized the importance of the federal government adopting AI to improve service delivery, foster data-driven decisionmaking, promote national competitiveness, and bolster national security. The first Trump administration elevated such adoption as a focus in 2020 through an executive order that required agencies to inventory their use cases, set governance processes, and experiment with AI adoption. The Biden administration continued these efforts in a 2023 executive order, introducing expanded requirements for risk assessments, additional reporting obligations for “rights‑impacting” and “safety-impacting” systems, and a hiring surge for AI and AI-enabling talent, among other changes.

Since returning to office in January 2025, the Trump administration has renewed efforts to embed AI across the executive branch. The president’s first AI-related action revoked Biden’s 2023 executive order, but his Office of Management and Budget (OMB) memoranda left in place many of the structures laid out or expanded on by his predecessor, including chief AI officers, procurement reforms, and use case inventories. Perhaps most controversially, the administration launched the Department of Government Efficiency (DOGE) in February 2025, which aimed to “use modern technology” to tackle “waste, fraud, and abuse” in federal programs and operations.

In July 2025, the administration released America’s AI Action Plan, highlighting AI’s potential to “deliver the highly responsive government the American people expect and deserve.” Together, these executive orders, OMB memos, and the action plan encourage federal agencies to accelerate AI adoption, with an emphasis on partnering with the private sector and streamlining acquisition processes.

Uses of AI in federal agencies 

AI utilization across federal agencies has grown significantly since the first Trump administration mandated reporting on AI use cases. Some of this growth is likely due to clarifications in reporting guidance from OMB in both 2024 and 2025. However, it likely also reflects real expansions in AI use across government. 

It is important to note a few significant limitations to this data. Despite some improvements, inventories are self-reported, and some offices are unsure of what to report and in how much detail. Inventories are never fully consistent, making comparison and standardization difficult across agencies and over time. For some agencies, use cases lack unique identifiers that would enable tracking over time, and the consolidated list of use cases released by OMB in 2023 and 2024, which forms the basis of most analyses by researchers and reporting by journalists, excludes these identifiers. The inventories also exclude any AI deployed through DOGE.

Nevertheless, the reporting provides some insight into the executive branch’s experiments with AI. In 2025, 41 agencies documented more than 3,600 individual use cases, 69% above the total number reported in 2024 and five times the number reported in 2023. While many reported use cases in 2025 are designed to streamline operations and facilitate back-office processes, others are facilitating mission-critical functions, such as benefits delivery, health and medical services, and law enforcement. For example, 52% (17) of use cases reported by the Social Security Administration support service delivery and government benefits processing. Thirty-six percent (86) of the Department of Homeland Security’s (DHS) and 54% (170) of the Department of Justice’s (DOJ) AI use case inventories support law enforcement efforts. And 20% (89) of the Department of Health and Human Services’ (HHS) and 45% (166) of the Department of Veterans Affairs’ (VA) use case inventories facilitate health and medical services.

With continued encouragement to experiment with AI in their operations, more agencies have begun to report use cases. In 2023, 21 agencies submitted AI use cases to the federal inventory, including 13 large agencies and eight midsize agencies, as classified by the number of employees. No small agencies participated. By 2025, participation had expanded to 41 agencies (13 large, 17 midsize, and 11 small), suggesting growing awareness of AI’s potential across government. Notably, nine agencies reported use cases for the first time in 2025, and 26 agencies increased their use case reporting from 2024 to 2025. Reported use cases are, of course, limited by what agencies can share publicly.

Figure 1

However, agencies’ inventories reveal significant disparities in the scale and pace of AI adoption, as some agencies report far more usage than others. For the past three years, five large agencies (classified as those with more than 15,000 employees) have accounted for over half of the total reported use cases in the AI inventories (Figure 1). Large agencies contributed 69% of all reported AI use cases in 2024 and increased their share to 76% in 2025. Meanwhile, midsize (1,000-14,999 employees) and small agencies (under 1,000 employees) combined saw their share decline from 31% to 24% over the same period, even as the number of participating agencies remained relatively steady. As Figure 2 shows, on average, each large agency reported 211 use cases in 2025, compared to just 48 per midsize agency and five per small agency, up from 114, 32, and four use cases on average, respectively, in 2024. This indicates that while more small and midsize agencies are beginning to experiment with AI, large agencies are scaling their efforts more aggressively. The 11 small agencies that reported in 2025 collectively submitted 60 use cases, representing only 2% of the total inventory. Some of this divergence is due to the differing missions of each agency, but some of it may also be due to uneven capacity and resources to devote toward AI experimentation and learning.

Figure 2
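The size-band tabulations behind Figures 1 and 2 are straightforward to reproduce. Below is a minimal sketch in Python, assuming a consolidated inventory file with one row per reported use case; the file name and columns ("agency", "year", "employees") are hypothetical stand-ins, as the published OMB files use different layouts.

```python
import pandas as pd

inventory = pd.read_csv("ai_use_case_inventory.csv")  # hypothetical file

def size_band(employees: int) -> str:
    # Size cutoffs used in this report: large (more than 15,000 employees),
    # midsize (1,000-14,999), and small (under 1,000).
    if employees > 15_000:
        return "large"
    if employees >= 1_000:
        return "midsize"
    return "small"

inventory["size"] = inventory["employees"].map(size_band)

for year in (2024, 2025):
    snapshot = inventory[inventory["year"] == year]
    stats = snapshot.groupby("size")["agency"].agg(
        use_cases="size", agencies="nunique"
    )
    # Share of all reported use cases, and average use cases per agency,
    # within each size band for the year.
    stats["share"] = (stats["use_cases"] / stats["use_cases"].sum()).round(2)
    stats["avg_per_agency"] = (stats["use_cases"] / stats["agencies"]).round(1)
    print(year, stats, sep="\n")
```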

Both the 2024 and 2025 inventories detailed information about the risks associated with each use case. In 2024, the Biden administration required agencies to detail rights- or safety-impacting use cases that “control or significantly influence” outcomes. The Trump administration changed this requirement to cover “high-impact” use cases that are used as “the principal basis” for outcomes. In 2024, rights- or safety-impacting use cases made up 16.5% of the total inventory (351 use cases). In 2025, high-impact use cases accounted for 12.3% of the total inventory (445 use cases). While the percentage declined, the absolute number of flagged use cases grew.

The lack of a common use case ID in some cases makes it difficult to compare whether agencies have simply reclassified their rights- and safety-impacting use cases as high impact, or if they have taken a different approach. Of the more than 1,000 use cases that could be matched across the 2024 and 2025 inventories by agency and name, 85% of those previously flagged as rights- or safety-impacting retained a high-impact designation under the new framework (175 total use cases), suggesting some consistency across administrations when it comes to AI use.

However, 32 cases—including nine DOJ use cases used for law enforcement—were downgraded between 2024 and 2025. These include use cases that leverage machine learning to triage threats at FBI field offices and support investigations, among others. In more than half of all cases, the rationale for this downgrade was that they did not form “the principal basis” for a decision. Eight use cases moved in the opposite direction, gaining a high-impact designation for the first time. These include use cases related to health care billing and diagnostic decisionmaking, among others.
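For readers interested in replicating this kind of cross-year comparison, the sketch below illustrates the general approach in Python: joining the two inventories on a normalized agency-plus-name key in the absence of shared IDs. File and column names are hypothetical, and, as footnote 8 notes, use cases renamed between years will not match.

```python
import pandas as pd

inv24 = pd.read_csv("inventory_2024.csv")  # hypothetical files and columns
inv25 = pd.read_csv("inventory_2025.csv")

def norm(s: pd.Series) -> pd.Series:
    # Lowercase, trim, and collapse whitespace so near-identical names match.
    return s.str.lower().str.strip().str.replace(r"\s+", " ", regex=True)

for df in (inv24, inv25):
    df["key"] = norm(df["agency"]) + "|" + norm(df["use_case_name"])

matched = inv24.merge(inv25, on="key", suffixes=("_2024", "_2025"))

# Share of use cases flagged rights- or safety-impacting in 2024 (hypothetical
# boolean column) that kept a "high-impact" designation in 2025.
flagged = matched[matched["rights_or_safety"]]
retained = flagged["high_impact"].mean()
print(f"{len(matched)} matched; {retained:.0%} of flagged cases stayed high-impact")
```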

In theory, if a use case meets the “high-impact” classification, agencies are required to provide information about pre-deployment testing, impact assessments, monitoring, or appeal processes in place. In practice, information about high-impact applications of AI remains incomplete. More than 85% of all high-impact deployed use cases in 2025 lack some required information about risk mitigation measures in place, despite explicit requirements from OMB.
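A completeness check of this kind can be scripted directly against the inventory. The sketch below is illustrative only: the column names standing in for OMB’s required risk-mitigation fields are assumptions, and, per footnote 9, a deployed system whose mitigations are still listed as “in process” counts as incomplete.

```python
import pandas as pd

inv25 = pd.read_csv("inventory_2025.csv")  # hypothetical file
required = ["pre_deployment_testing", "impact_assessment",
            "monitoring", "appeal_process"]  # hypothetical field names

deployed_hi = inv25[inv25["high_impact"] & (inv25["stage"] == "deployed")]

# A row counts as incomplete if any required field is blank or still marked
# "in process" even though the system is already deployed.
incomplete = (
    deployed_hi[required].isna()
    | deployed_hi[required].isin(["", "In process"])
).any(axis=1)
print(f"{incomplete.mean():.0%} of deployed high-impact use cases lack detail")
```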

There are several bottlenecks to adoption, including insufficient talent and a risk-averse culture

As the federal government embeds AI across agencies, there are several challenges that could hinder adoption and erode public confidence in its use. Some apply only to certain agencies (e.g., those that handle sensitive health data), which face different risk profiles, data security practices, and viability for AI solutions. Many issues are not new and stem from structural bottlenecks that have hindered the adoption of technology in the federal government more broadly; others are unique to the nature of AI and its societal implications.

A significant amount of research has focused on the challenges of tech modernization in the federal government and how those extend to the deployment of AI-enabled solutions. Specific areas include outdated technology infrastructure and data governance challenges, among others. However, there are also some unique features of AI systems that compound existing challenges and create new ones. Namely, they require a different technical skillset to develop, pose greater uncertainty about future capabilities, remain somewhat inscrutable, can evolve rapidly, and face a public skeptical of their benefits.

To identify some of these obstacles, this report draws on several sources of information, including interviews with current and former technologists across executive agencies, data from AI use case inventories between 2023 and 2025, OMB memoranda from the Trump and Biden administrations, federal jobs data, and submissions to the Office of Science and Technology Policy’s (OSTP) Request for Information on Regulatory Reform on Artificial Intelligence (OSTP-TECH-2025-0067). Findings and recommendations are detailed below.

AI and AI-enabling talent still represent just a small fraction of overall technical talent in the federal workforce

For over a decade, the federal government has worked to increase technical talent, recognizing it as a necessary component for overseeing data-driven solutions and building in-house capacity. However, hiring challenges remain a persistent obstacle to successfully integrating technology—including AI systems—into federal agencies. Known barriers include slow hiring timelines (particularly for roles requiring security clearances), the need for insider knowledge about the nuances of federal hiring, limited potential for career advancement, and salaries that are relatively low compared to the private sector.

Various initiatives have aimed to streamline the hiring process and bring technology expertise into government. These include pooled certification efforts, direct hiring authorities, more flexible candidate ranking processes, restrictions on resume length, clearer job titles, and different types of term-limited appointments, among other changes. Recently, a bipartisan bill also aimed to revamp government hiring processes using a dedicated team devoted to reviewing AI-focused job applicants. The Trump administration also announced the fixed-term U.S. Tech Force, which is designed to bring talent in from industry and elsewhere for two years of federal government service.

Since 2016, the federal government has posted more than 56,000 technical job listings. As Figure 3 demonstrates, technical job listings grew significantly during the first Trump administration and the Biden administration, with posts spiking in 2020, likely in response to the Foundations for Evidence-Based Policymaking Act of 2018. The act led to a hiring surge for technical talent capable of executing new statutory mandates. However, the number of listings specifying AI and AI-enabling capabilities represents just a small fraction of these positions: of the technical job listings identified, a little more than 1,600 (just below 3%) explicitly reference AI capabilities.

Figure 3
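For those who want to approximate this pull (footnote 12 details the full methodology), the sketch below queries the public USAJobs search API for the seven occupational series and applies a simple keyword filter. The endpoint, headers, and response fields follow the public API documentation but should be verified against the current version; the keyword set is illustrative rather than the report’s actual dictionary, and the live search endpoint returns only open postings, so a historical pull like the one behind Figure 3 would require archived data.

```python
import requests

SERIES = ["1560", "1550", "2210", "1515", "0343", "0854", "1530"]
AI_TERMS = {"artificial intelligence", "machine learning", "neural network"}

headers = {
    "User-Agent": "you@example.com",      # email registered with USAJobs
    "Authorization-Key": "YOUR_API_KEY",  # issued at developer.usajobs.gov
}

ai_flagged = []
for series in SERIES:
    resp = requests.get(
        "https://data.usajobs.gov/api/search",
        headers=headers,
        params={"JobCategoryCode": series, "ResultsPerPage": 500},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json()["SearchResult"]["SearchResultItems"]:
        desc = item["MatchedObjectDescriptor"]
        text = " ".join(
            str(desc.get(f) or "")
            for f in ("PositionTitle", "QualificationSummary")
        ).lower()
        # Flag a listing as AI-related if any keyword appears in the title
        # or qualification summary.
        if any(term in text for term in AI_TERMS):
            ai_flagged.append(desc.get("PositionTitle"))

print(f"{len(ai_flagged)} AI-flagged open listings across {len(SERIES)} series")
```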

The Biden administration recognized the need to hire explicitly for “AI and AI-enabling” roles and launched a talent surge in response to Executive Order 14110. The new roles sought technical talent to build, train, and test AI models, understand the impact of AI on society, manage and monitor AI projects, train an AI-enabled workforce, and understand the legal implications of AI misuse, among other areas. Early data from the AI and Tech Talent Task Force showed the talent surge was expected to bring around 250 people to the federal government by summer 2024, with 500 more planned hires between September 2024 and September 2025 (Table 1).

Figure 4

The share of technical job listings specifying AI capabilities has steadily risen over time, from zero in 2016 to around 8% of all technical jobs in 2024. The number of posts grew from 184 in 2021 to 318 in 2024, reflecting this hiring surge, though such roles still represent just a small fraction of the civilian federal workforce. Since the culling of the federal workforce under the second Trump administration, however, the number has dropped alongside an overall decline in hiring (Figure 3).

These AI and AI-enabling job listings were also more likely to reference expedited hiring, consistent with authorization from OPM. Nearly 33% of all AI-specific job posts took advantage of expedited hiring pathways, compared to just 17% for other technical jobs. This suggests a desire to get this talent into the federal government more quickly.

The goal of attracting an AI-savvy federal workforce is not a partisan objective. The Biden-era executive order explicitly built on an executive order from the first Trump administration that called for more AI talent in government. The second Trump administration further articulated this goal through its AI Action Plan and subsequent U.S. Tech Force program, as well as through one DOGE official, who said, “We need to hire and empower great talent in government. There’s not enough tech talent here. We need more of it.”

To address this challenge and resolve issues related to limited potential for career advancement, fixed-term hiring programs—including the Presidential Innovation Fellows, the Digital Corps, and the newly announced U.S. Tech Force—are appealing solutions that undoubtedly play an important role in the federal innovation ecosystem. However, additional investments in long-term capabilities and workforce stability are necessary to both enhance the effectiveness of fixed-term appointments and boost institutional capacity in the medium to long run. Since the start of 2025, the federal government has posted only 160 job listings for full-time roles that explicitly require AI expertise. Some recent job postings that are designed to accelerate AI adoption make no mention of AI or technology. This may narrow the applicant pool and deter qualified candidates from applying.

Additionally, actions designed to reduce the federal workforce in early 2025 may have undermined efforts to recruit more forward-thinking AI experts, many of whom had been hired less than a year prior. Depending on the agency, probationary periods can last up to three years, and employees are easier to dismiss before those periods end. As Figure 4 demonstrates, at least 25% of the AI-specific job listings were posted from 2024 onward, after an Oct. 30, 2023, executive order designed to surge AI and AI-enabling talent. Public reporting on departures from the U.S. Digital Service and cuts to 18F underscores the loss of at least some technical talent, and it is unclear how much of this newly hired AI and AI-enabling talent was lost simply because they were easier to remove.

A culture of risk aversion makes AI adoption more challenging

When technical talent joins the federal government in either a career role or fixed-term appointment, they enter a historically risk-averse, hierarchical environment. In this context, senior leadership can significantly impact the success of AI adoption. Several technologists who successfully scaled pilot projects emphasized the importance of being given explicit space to experiment by a supportive supervisor. In many cases, directives from the White House, during both the Biden and Trump administrations, also provided cover that helped technologists successfully innovate.

However, leadership buy-in is not always a given. In some cases, senior government officials might be interested but do not have funding available to divert away from more urgent priorities. In others, they may not understand the technology or have time to devote to learning more. Without confidence that leadership will support experimentation, even talented technologists may default to safer, more conventional approaches.

This limited capacity for experimentation can also reverberate across parts of the federal government. The current administration has attempted to curb hesitancy across agencies through education and comprehensive testing, but adopting and scaling AI solutions remains a challenge. The administration’s explicit linkage of AI deployment to federal workforce cuts through DOGE may also reinforce this hesitancy to adopt solutions that could theoretically free up time to focus on higher-order, less mundane work.

Based on current data where deployment information is available, nearly 60% of all use cases are either in the pilot or pre-deployment stage, suggesting the federal AI landscape is still in a rapid growth phase. Encouraging adoption requires investments in (and explicit time devoted to) education and continued experimentation.

The hype cycle around AI is also particularly challenging in a federal procurement environment dominated by external contractors. Although a majority of 2025 use cases provide no details about how they were developed, of the more than 1,600 that do, around 63% were developed by contractors exclusively or in combination with in-house resources. For projects that have yet to be deployed, in-house development and contracting are roughly evenly split. By the time use cases reach full deployment, however, contracting is involved in 72% of use cases, while purely in-house development drops to 28%. This imbalance is consistent regardless of agency size and could be interpreted in two ways: It could show that newer projects are increasingly relying on in-house capabilities due to greater capacity, or it could suggest difficulty in scaling projects. Agencies may have sufficient internal capacity to prototype and test AI applications, but projects that are successfully operationalized and deployed tend to rely on external vendors. In either case, adequate training and in-house, longer-term technical expertise to oversee vendor-driven projects and develop in-house ones are critical.
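The stage-by-developer breakdown described above amounts to a simple cross-tabulation. A minimal sketch, with hypothetical file, column, and category names:

```python
import pandas as pd

inv25 = pd.read_csv("inventory_2025.csv")           # hypothetical file
has_info = inv25.dropna(subset=["developer_type"])  # cases reporting a developer

shares = pd.crosstab(
    has_info["stage"],           # e.g., "pre-deployment", "deployed"
    has_info["developer_type"],  # e.g., "in-house", "contractor", "mixed"
    normalize="index",           # row shares within each development stage
)
print(shares.round(2))
```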

AI’s dynamic nature poses accountability, funding, statutory, and regulatory challenges

While earlier technologies—from static software adoption to cloud computing—required organizational change, they operated in ways that were ultimately comprehensible and auditable. Despite progress in AI explainability, AI systems can introduce “black box problems,” where decisionmaking processes may not be entirely clear even to their developers. This opacity can fracture accountability in new ways. When a traditional database query produces an unexpected result, administrators can trace the logic step by step; when an AI system suggests an individual be denied a benefit or flagged for an investigation, the reasoning may be harder to explain or reproduce, complicating audits and increasing the importance of AI and AI-enabling talent in federal roles.

The pace of AI developments compounds these challenges. In the past few years, the number of notable models, patents, and publications has grown rapidly. Previous technology adoptions allowed more runway for agencies to plan, pilot, and scale over multiyear horizons with reasonable confidence that the underlying technology would remain stable. AI capabilities are advancing much faster, and their trajectory remains so uncertain that a pilot program launched today may be obsolete—or superseded by dramatically more capable systems—before it reaches scale.

The cost of AI innovation also remains unpredictable and requires flexible budget allocations for both staffing and the development or procurement of tools. However, the current budgeting process, which begins a year and a half before the start of the fiscal year, requires agencies to forecast AI capabilities in a way that is difficult—if not impossible—for a technology evolving so rapidly. Absent unlikely changes to federal budgeting cycles, innovation funds, such as the Technology Modernization Fund (TMF) run by the General Services Administration (GSA) and OMB, have provided more flexible funding opportunities to agencies for innovation projects. The TMF received $1 billion in additional appropriations from the American Rescue Plan—on top of an initial $250 million authorized at its inception in 2017—and has funded more than 60 projects across 34 agencies. The fund has received some criticism for its difficult application process, which requires agencies to produce extensive documentation and detailed multiyear implementation plans. Projects are also generally expected to repay awards from future savings or appropriations, but uncertainty about repayment terms and future budgets may discourage agency leaders from pursuing or fully scoping TMF proposals.

Despite these limitations, two years ago, the TMF announced a call for proposals related to AI projects and supported $51 million in AI-enabling projects, including a $10 million award to the Department of Commerce’s AI Safety Institute and $18.2 million to the Department of State to use generative AI for diplomacy. After a two-month lapse in authorization, Congress recently reauthorized this spending. The short-term absence of this effort highlighted the critical need for more flexible, pooled funding opportunities better equipped to support AI-related projects than traditional pathways.

The rapid pace of AI development can also pose challenges for legal and regulatory frameworks that were largely designed around static software with less frequent, more predictable updates. In comments to a recent OSTP request for information, respondents identified several different requirements that struggled to keep pace with AI developments. Among them were: (1) FedRAMP, a process that is built around a one‑time, front‑loaded authorization package for cloud services; (2) authorization to operate (ATO), which is granted only after a point‑in‑time security assessment, with major changes typically triggering reassessment; and (3) Federal Acquisition Regulation (FAR), which governs how agencies procure information technology and other goods and services through contracting rules and procedures that can slow rapid iteration. In theory, each requirement serves a distinct and important purpose, but in practice, they can pose challenges to the more dynamic adoption and maintenance of AI systems. Learning based on user feedback, a common practice for AI systems, is also more difficult due to the Paperwork Reduction Act (PRA), a law designed to control and minimize the burden of federal information collection on the public by requiring advance OMB approval and public notice and comment for some surveys, forms, and other data‑gathering activities. The timeline for approval can take anywhere from six to nine months.

Public skepticism of AI poses a challenge for federal adoption

Another challenge to leveraging AI in federal operations stems from overarching public skepticism about the technology itself. According to recent Pew Research Center data, about half of Americans now say they are more concerned than excited about the growing prominence of AI, up from 37% four years prior. Just 17% of the American public believes AI will positively impact the U.S. in the next two decades. While these numbers will likely improve as more people experiment with AI capabilities, any high-profile attempts to use AI to improve federal operations—particularly for high-impact services or punitive uses—may face an uphill battle convincing a skeptical public of their merits.

The regulatory environment—or lack thereof—may also further undermine public confidence. The absence of comprehensive federal AI legislation, combined with the Trump administration’s efforts to preempt state-level regulations and concerns about potential human extinction from superintelligent systems, may signal to the public that AI development is proceeding without adequate guardrails. Whether or not such fears are warranted, a public primed to worry about AI catastrophe is unlikely to trust government assurances that a new automated system will serve their interests without binding statutory requirements. They may be even less inclined to do so when agencies deploy high-risk AI systems in areas such as law enforcement or health care but provide limited information about their risk mitigation practices, as was the case for some agencies in the 2025 AI use case inventories and subsequent reporting to OMB.

The growing politicization of large language models (LLMs) represents another challenge that could further fuel public skepticism. LLMs offer tremendous potential to support federal employees in their work and are already being leveraged through customized tools like State Chat and USAi. At the same time, their perceived political biases are increasingly a focus for the Trump administration. A recent executive order threatens federal procurement for models with built-in “ideological biases or social agendas.” And the Pentagon recently designated the frontier model developer Anthropic a “supply chain risk”—a first for a U.S. company—in a highly public dust-up over perceived “woke AI,” surveillance, and autonomous uses of AI in warfighting.

Recognition of political bias in LLMs is not without merit, but it is difficult—if not impossible—to build systems that are truly neutral. Developers can endeavor to approximate neutrality, and the aspiration of neutrality in AI systems is not an unreasonable objective, particularly given their growing ubiquity across search engines and on cell phones. However, recent partnerships between the Pentagon and xAI, as well as efforts by xAI to gain FedRAMP High authorization, may undermine the spirit of the executive order, given that the Elon Musk-owned company has a well-documented history of updating its models to reflect the political preferences of its founder and, more recently, of generating sexualized images of children, raising concerns across agencies. This partnership may also reinforce skepticism—at least for a subset of the population—about how the government is approaching the adoption of AI across federal agencies, particularly within classified environments.

There is also significant public concern about AI automating human interactions. Ample research shows that people prefer engaging with other people as opposed to automated systems, particularly for consequential decisions and those that are more subjective in nature. This preference is also apparent in the provision of federal government services. A survey of more than 13,000 airline travelers found that perceptions of the Transportation Security Administration (TSA) had improved not due to the widespread role of facial recognition technology, which has helped to decrease wait times, but rather due to improved “interpersonal communication” by TSA agents. In this context, technology may have helped offload some of the tedium of the airport security workload, allowing agents to devote more personal attention to passengers, though privacy concerns persist and opt-out options remain hard to find.

Further adoption of AI can be facilitated through expanded support for talent, a focus on the fundamental building blocks thwarting adoption, and efforts to foster public trust in AI's use in government services

To address some of these challenges and facilitate the adoption of AI into federal agencies in a way that enhances their mission, government officials should consider:

Expanding support for technical talent and AI literacy to bolster responsible innovation

  • Clarify the scope and purpose of fixed-term federal tech talent programs. There are several programs that offer structured “tours of duty” for technologists, including the recently launched U.S. Tech Force, Presidential Innovation Fellows, U.S. DOGE Service (formerly the U.S. Digital Service), Digital Corps, and the now-defunct 18F. Better articulating their differences will be important to ensure that the right type of talent enters the application pipeline for each opportunity. It is also critical to address any real or perceived conflicts of interest for short-term hires, particularly with a direct pipeline to and from the private sector.
  • Continue reforming hiring processes for civil service technical talent. While fixed-term technical talent programs play an important role in technology modernization efforts, career civil‑service technologists are vital to the success of these appointments. In partnership with short-term hires, they can tether innovation to agency realities, including procurement challenges. They can also ensure that knowledge, practices, and relationships remain in government after each cohort rotates out. As a result, agencies should prioritize improving these hiring processes, for example by creating dedicated teams designed to assess the capabilities of applicants to these roles. Qualification standards could be updated to reduce gatekeeping based on narrow degree requirements and misaligned job classifications, focusing instead on demonstrated skills and experience. Agencies could expand and institutionalize pooled certifications through skills‑based assessments, such as coding exercises and technical problem‑solving. Given the pace of AI development, agencies should continue to leverage, if not expand, direct hiring authorities for AI and AI-enabling roles. Job descriptions designed to facilitate AI adoption should also be written with explicit reference to AI or technology; otherwise, prospective applicants who are well-equipped for the role may not apply.
  • Create meaningful career pathways for civil-service technologists. While retention bonuses can help keep talent in the federal government temporarily, they are no substitute for meaningful career advancement. To build institutional memory and retain talent, technologists should have career ladders comparable in promotion potential to other specialized federal positions. There is also potential to expand hybrid tech policy or “bridge” roles for experienced technologists who can sit inside policy, oversight, and implementation teams while remaining rooted in hands‑on practice. These roles could help improve career pathways for federal technologists and expand the pool of people writing and enforcing regulations to include more of those who understand real‑world technical constraints and risks.
  • Invest in tiered AI literacy and treat it as a core job requirement rewarded in evaluation processes. Given the rapid pace of AI development and the hesitancy of federal employees to adopt AI tools, expanding AI literacy efforts into a tiered program with tailored training scoped to different roles will help employees identify and test AI applications in their own work contexts, understand where AI may be well-suited to a specific task, or simply be aware of the way AI is reshaping work more broadly. Technical specialists could engage in regular continued learning in their area of expertise, while senior leadership builds a solid grasp of AI capabilities, limitations, and trade‑offs for decisionmaking. All public servants require a basic understanding of AI ethics, risks, and appropriate use cases. Drawing on the Department of Labor’s AI Literacy Framework, this type of learning should be sustainable and ongoing, with dedicated staff time and incentives to participate. It should focus not only on new hires but also on upskilling the current workforce, and it should be rewarded in the annual performance evaluation process.
  • Create a systematic practice of documenting and sharing AI success stories and lessons learned across the federal government and with the public. As agencies accumulate both success stories and lessons learned from their efforts to integrate AI into their operations, opportunities to share knowledge across agencies will both help facilitate adoption and compile valuable insights into the practical challenges employees have faced and, in some cases, resolved. Events like the Partnership for Public Service’s AI Use Case Showcase, which brought together practitioners from federal, state, and local governments to share lessons from specific AI deployments, demonstrate both the appetite for and value of peer-to-peer learning across levels of government. When staff see their efforts celebrated and used to form foundational knowledge, it may reinforce experimentation, responsible risk‑taking, and continuous improvement. It may also help to facilitate adoption across smaller agencies with less capacity for trial and error. Separately, a public-facing repository hosted on AI.gov that spotlights novel applications across agencies can help demonstrate some of the value AI adoption is bringing to the public, counter perceptions that government writ large is not a place for innovation and experimentation, and fuel recruitment interest by creating a positive narrative around the scale and impact of innovation.
  • Develop more centralized resources and standardized processes. Efforts to build more centralized resources have alleviated some of the challenges related to capacity and procurement, particularly for smaller agencies. For example, GSA’s OneGov Strategy—which streamlines pricing for software so that agencies do not separately negotiate the same contracts—is helpful. GSA’s AI training series is also valuable for building capacity, though there remain limited incentives for staff to participate in these upskilling efforts given existing demands on their time. Continued investments in shared AI toolkits, standardized procurement vehicles, and cross-agency technical assistance can help ensure AI capabilities are distributed across government, including in agencies with less capacity to develop their own resources.

Continuing to focus on the fundamental building blocks that thwart tech modernization more broadly

  • Continue processes for procurement reform and revisit rules and regulations related to tech adoption. Although security remains the top priority for any federal technology system, the government could make it more straightforward—procedurally and technically—to test, deploy, and maintain AI systems. This type of reform is already ongoing, including through FedRAMP 20x, which aims to accelerate the FedRAMP approval process and eliminate some of its more cumbersome elements, including agency sponsorship and government-specific offerings. Other rules and regulations, including FAR, ATOs, and PRA requirements, also represent reported barriers to AI adoption and were designed with more static software offerings in mind. Revisiting these requirements (in partnership with Congress, when necessary) to assess where they are still fit for purpose and where they need to be refined will be important for adapting to the more dynamic, faster-paced development of current AI systems.
  • Make technology innovation budgeting more stable and predictable. The unpredictable nature of AI developments and the need for flexible funding to pilot and scale innovation may not align with formal budget cycles. Working with Congress, it may be useful for agencies to identify more flexible appropriation strategies to account for this uncertainty and reduce the timeline between planning and funding allocations, while simultaneously incorporating strategies to guard against misuse. To encourage federal agencies to innovate, it is also critical for Congress to support opportunities for flexible, pooled funding of technology-related projects. To this end, continued support for the Technology Modernization Fund, or another type of flexible spending, is vital. Making the application less challenging and the selection process more transparent could also encourage greater participation in the TMF.

Fostering public trust to improve confidence in how AI is used in government services

  • Strengthen and harmonize current transparency practices. AI.gov previously served as a hub for AI activity across the federal government, including an agency‑by‑agency catalogue of chief AI officers, compliance plans, and AI use cases. The website has since been reframed to include remarks, executive actions, fact sheets, and articles, with no consolidation of AI activities across agencies. To foster public trust and accountability, it should be expanded to show how agencies are implementing federal AI guidance in practice. AI.gov could host current and historical documents, offering a clear record of how federal AI oversight and adoption has evolved. The goal of this type of effort would not be to increase the reporting burden on agencies or compromise security and sensitivity, but rather to consolidate existing agency-level transparency efforts that are already required into a single, easy-to-access location for Congress and the public.
  • Address privacy-related gaps in AI guidance. A recent report by the Government Accountability Office (GAO) found that OMB’s government-wide AI guidance fails to fully address privacy-related issues associated with agency use of AI, including those related to sensitive data. The GAO report recommends that OMB identify and share known privacy risks that agencies should address in their AI policies and facilitate information sharing on privacy-related topics. These include best practices, metrics, and guidance on how to evaluate and audit AI models for privacy-related considerations, store and protect sensitive data, establish clear privacy norms for internally developed AI solutions, and measure privacy-related impacts, among other topics. Adopting these recommendations would give agencies clearer direction on managing privacy risks and facilitate public confidence in AI adoption across the federal government.
  • Maintain and improve consolidated federal AI use case inventories. AI use case inventories, introduced during the first Trump administration, provide an important window into how different agencies think about and prioritize the adoption of AI. At minimum, when use cases are consolidated and shared by OMB, they should contain consistent identifiers for individual use cases to allow for the comparison of projects over time. Some agencies provide this in their individual reports, but others do not, and the consolidated use case inventory has dropped this column in the past. Agencies should be encouraged to be as specific as possible, recognizing that variation in agency missions and risk profiles may limit the details they are able to provide. OMB should also encourage agencies to fully detail their risk mitigation practices for high-risk use cases, as these are the ones most likely to concern the public and are often absent from inventories or noted only as “in progress,” even for AI systems that are already operational. To complement the raw dataset releases, OMB could also encourage agencies to write a public-facing overarching assessment of how AI adoption has changed over the past year and how it is supporting agencies in their efforts to serve the public.
  • Minimize overt politicization of LLMs in government applications. If the federal government tries to enforce an executive order designed to minimize political bias in LLMs procured by agencies, OMB and GSA should ensure that these requirements are applied evenly, rather than in a way that shifts any perceived bias from one end of the political spectrum to the other. Failure to do so may further undermine confidence in federal usage of AI systems.
  • Focus federal AI investment on high‑impact, positive use cases that clearly improve people’s lives and build public trust. In tandem with rolling out lower‑risk, back‑office systems that improve efficiency, agencies should prioritize high-impact, positive opportunities that simplify citizen interactions with government—such as tax services, benefits navigation, job and training matching, extreme weather prediction, and outbreak surveillance—or in areas where even the most efficient human is unable to fully complete the task (due to, for example, an overwhelming amount of data). These use cases (which may, in some cases but not always, align with high-impact service providers) could explore areas where AI can reduce friction, save time, or address common problems. To build trust, it is also important to avoid punitive or high‑stakes deployments unless there is a clear, evidence‑based public benefit and strong protections against harm are in place. In these cases, disclosures and transparency around risk mitigation practices and guardrails are critical.

Conclusion

Adoption of AI across the federal government is accelerating. From 710 use cases in 2023 to more than 3,600 in 2025, agencies are increasingly experimenting with AI to streamline operations, improve service delivery, and support mission-critical functions. Recent initiatives, such as FedRAMP 20x and USAi, have helped to reduce authorization barriers and provide agencies with a common platform for AI experimentation, while new partnerships and the development of in-house systems aim to accelerate access to cutting-edge AI capabilities.

Yet, clear bottlenecks remain. These include a shortage of AI-specialized talent, a cultural risk aversion that dampens experimentation, procurement and regulatory challenges, and growing public skepticism about the technology itself. Addressing these challenges will be important for building the institutional capacity, workforce expertise, and public confidence needed to deploy AI systems that genuinely improve how government serves the American people.

The stakes for getting AI adoption right are considerable. Highly publicized failures in the Netherlands and Australia caused significant financial and psychological hardships; the former even contributed to the collapse of the Dutch government. In the U.S., public trust in the federal government remains near historic lows, with recent data showing only 16% of Americans saying they trust Washington to do what is right most of the time or just about always. Against this backdrop, AI applications that violate rights or produce harmful outcomes could be extremely damaging. However, research shows that satisfaction with the provision of key public services remains a critical driver of trust in government, suggesting that well-executed AI deployments focused on tangible service improvements could help rebuild confidence in democratic institutions’ ability to serve their constituents.

  • Acknowledgements and disclosures

    The author would like to thank Elham Tabassi, Derek Belle, Nitya Nadgir, Enkhjin Munkhbayar, Selina Kao, Dominique Lanfear, Adam Lammon, Nicol Turner Lee, Darrell West, and Josie Stewart for feedback and support throughout the process of writing this report. She is also grateful to the current and former federal employees and federal technology adoption observers who offered insights that feature throughout this report in both the findings and recommendations. 

  • Footnotes
    1. Notably, the Department of Defense and certain intelligence agencies are exempt from reporting. Agencies also do not have to report national security or research use cases.
    2. Drawn from interviews with current federal employees. For example, some agencies listed statistical programming languages like R or Stata as discrete use cases, occasionally multiple times, while others did not. Some agencies listed the same tools multiple times for different offices but had different risk or AI classifications for the same tools. The level of detail also varies significantly from agency to agency, and sometimes from office to office within agencies.
    3. For example, in the 2025 inventories, several offices—sometimes within a single agency—made different determinations about what constituted a high-impact use case. Some of this variation may be due to divergent security considerations, but some of it is also likely due to differing interpretations of guidelines. For example, within DHS, Immigration and Customs Enforcement marked any use case as “Presumed high-impact but determined not high-impact” when the output was not the “principal basis” for a decision. However, Customs and Border Protection marked these as “high-impact” while still noting, “The output is not used as the sole basis for action or decision making.” In another case, two different offices within the Department of Justice listed what appears to be the same use case with different risk classifications. The tool, known as the Prisoner Assessment Tool Targeting Estimated Risk and Needs (PATTERN), predicts recidivism risk of inmates and uses personally identifiable information; however, it is listed by one office as “high-impact” and by another as “not high-impact.” This tool has previously been linked to determining eligibility for early release from prison. Despite the potential high impact of this tool, the inventory provides no additional details on risk mitigation practices.
    4. For example, in 2025, the Department of Education, Department of Health and Human Services, Department of State, and General Services Administration, among other agencies, did not include unique use case IDs. The Department of the Treasury’s use case ID naming conventions vary from office to office.
    5. Agency size classifications follow the Partnership for Public Service’s 2024 Best Places to Work rankings: large (more than 15,000 employees), midsize (1,000–14,999), and small (100-999). I extend the small category to include agencies with fewer than 100 employees. Seven agencies not ranked by the partnership were classified using 2024 federal employee data.
    6. In 2023, use cases by the Department of Energy (DOE), HHS, Department of Commerce (DOC), DHS, and the VA represented 65.5% (465) of the total AI inventory. In 2024, this concentration dropped to 51.7% (1,103) of the total inventory, with use cases from HHS, DOJ, VA, DHS, and the Department of the Interior (DOI) among the most prominent adopters of AI systems. In 2025, the concentration rose slightly to 52.5% (1,893), with use cases from HHS, NASA, VA, DOE, and DOJ among the most prominent.
    7. Drawn from interview with former technologist.
    8. Importantly, some use cases that appear in the 2024 inventory also appear in the 2025 inventory under a different name, which means that these use cases would not be matched for this analysis.
    9. This includes when these mitigation measures are listed as “in process,” but the use case has already been deployed.
    10. I conducted 11 interviews with current and former employees across eight different agencies. I am extremely grateful for their time and insights.
    11. Although this is a known limitation, the pay for technology talent is comparatively high by federal government standards. Of the nearly 35,000 posts for which I could identify the relevant pay scale, I found 60% of job listings advertised a GS-12 or higher. Although pay varies by location and year, GS-12 jobs begin at around $100,000 and GS-15 jobs max out at just under $190,000. A bigger challenge I heard from interviews is related to limited potential for promotions in these technical roles. Where promotion opportunities do exist, they are highly competitive, making it difficult for technical talent to find a long-term home. Instead, technologists are limited to making a lateral move into a different agency or leaving the government entirely for private sector roles that offer clearer pathways for advancement.
    12. I pulled historic data from Jan. 1, 2016, to March 10, 2026, from the USAJobs API related to technical jobs using seven different job series where AI talent would most likely reside. These series include ones related to data science (1560), computer science (1550), information technology management (2210), operations research (1515), management and program analysis (0343), computer engineering (0854), and statistics (1530). With the exception of 1530, these job series were highlighted as relevant to AI and AI-enabling roles in OPM’s hiring surge memorandum in support of Executive Order 14110. In total, I collected over 165,000 jobs, which I then filtered to include those that were technical in nature. In order to remove pure IT roles and non-technical jobs, I asked an LLM to generate a dictionary of keywords related to AI, data science, statistics, and other technical terms, which I then reviewed and modified based on subject-matter knowledge.
    13. The act required agencies to designate a chief data officer, develop comprehensive data governance frameworks, create learning agendas and evaluation plans, and establish processes for accessing and analyzing data to support evidence-based decisionmaking, among other requirements. To improve implementation, Congress—in its National Defense Authorization Act of 2022—created a new data science job series, which helped to establish consistent competencies, qualifications, and pay structures for personnel performing data science work.
    14. For an example of this in practice, see recent research on the GAMECHANGER application at the Department of Defense. Drawn from interviews with current and former technologists.
    15. Importantly, this does not include parts of agencies where frontier-pushing research is the main objective, such as the Defense Advanced Research Projects Agency (DARPA) or the Intelligence Advanced Research Projects Activity (IARPA).
    16. One example of this type of framework is the Innovation Adoption Kit, developed by the Department of the Navy.
    17. For more information on the shifting business models of AI developers, see Bogen and Maréchal, “Risky Business: Advanced AI Companies’ Race for Revenue.”
    18. Motivations for this skepticism are varied, including concerns about job displacement, environmental considerations, and a decline in cognitive capabilities and human-to-human interactions, among others.
