Commentary

Hybrid jobs: How AI is rewriting work in finance

Artificial intelligence (AI) is not destroying jobs in finance; it is rewriting them. As models begin to handle underwriting, compliance, and asset allocation, the traditional architecture of financial work is undergoing a fundamental shift.

This is not about coders replacing bankers. It is about a sector where knowing how the model works—what it sees and how it reasons—becomes the difference between making decisions and automating them. It is also about the decline of traditional credentials and the rise of practical experience and critical judgment as key assets in a narrowing workforce.

In what follows, we explore how the rise of generative AI and autonomous systems is reshaping the financial workforce: Which roles are fading, which ones are emerging, and how institutions—and policymakers—can bridge the looming talent divide.

The cognitive turn in finance

For decades, financial expertise was measured in credentials such as the MBA (Master of Business Administration) and the CFA (Chartered Financial Analyst) charter. But AI is shifting the terrain. Models now read earnings reports, classify regulatory filings, flag suspicious transactions, and even propose investment strategies. And they are getting better—faster, cheaper, and more scalable than any human team.

This transformation is not just a matter of tasks being automated; it is about the cognitive displacement of middle-office work. Where human judgment once shaped workflows, we now see black-box logic making calls. The financial worker is not gone, but their job has changed. Instead of crunching numbers, they are interpreting outputs. Instead of producing reports, they are validating the ones AI generates.

The result is a new division of labor—one that rewards hybrid capabilities over siloed specialization. In this environment, the most valuable professionals are not those with perfect models, but those who know when not to trust them.
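
What "knowing when not to trust them" can look like in practice is easy to sketch. The toy routing rule below auto-applies high-confidence model calls and escalates everything else to a human reviewer; the threshold, labels, and field names are illustrative assumptions, not any institution's actual system.

```python
# A minimal sketch of confidence-based routing: high-confidence model
# calls are applied automatically, the rest go to a human reviewer.
# The threshold, labels, and fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModelDecision:
    case_id: str
    label: str         # e.g., "approve" or "flag"
    confidence: float  # model-reported probability, 0.0 to 1.0

REVIEW_THRESHOLD = 0.90  # hypothetical risk-appetite setting

def route(decision: ModelDecision) -> str:
    """Auto-apply confident calls; escalate the rest for human judgment."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return f"{decision.case_id}: auto-{decision.label}"
    return f"{decision.case_id}: escalate to human review"

if __name__ == "__main__":
    for d in (ModelDecision("loan-001", "approve", 0.97),
              ModelDecision("loan-002", "flag", 0.62)):
        print(route(d))
```

In real deployments the escalation rule would be richer, but the division of labor is the point: The model handles the routine volume, and human judgment is reserved for the cases where its own signal is weak.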

Market signals

This shift is no longer speculative. Industry surveys and early adoption data point to a fast-moving frontier.

  • McKinsey (2025) reports that while only 1% of organizations describe their generative AI deployments as mature, 92% plan to increase their investments over the next three years.
  • The World Economic Forum emphasizes that AI is already reshaping core business functions in financial services—from compliance to customer interaction to risk modeling.
  • Brynjolfsson et al. (2025) demonstrate that generative AI narrows performance gaps between junior and senior workers on cognitively demanding tasks. This has direct implications for talent hierarchies, onboarding, and promotion pipelines in financial institutions.

Leading financial institutions are advancing from experimental to operational deployment of generative AI. Goldman Sachs has introduced its GS AI Assistant across the firm, supporting employees in tasks such as summarizing complex documents, drafting content, and performing data analysis. This internal tool reflects the firm’s confidence in GenAI’s capability to enhance productivity in high-stakes, regulated environments. Meanwhile, JPMorgan Chase has filed a trademark application for “IndexGPT,” a generative AI tool designed to assist in selecting financial securities and assets tailored to customer needs.

These examples are part of a broader wave of experimentation. According to IBM’s 2024 Global Banking and Financial Markets study, 80% of financial institutions have implemented generative AI in at least one use case, with higher adoption rates observed in customer engagement, risk management, and compliance functions.

The human factor

These shifts are not confined to efficiency gains or operational tinkering. They are already changing how careers in finance are built and valued. Traditional markers of expertise—like time on desk or mastery of rote processes—are giving way to model fluency, critical reasoning, and the ability to collaborate with AI systems. In a growing number of roles, being good at your job increasingly means knowing how and when to override the model.

Klarna offers a telling example of what this transition looks like in practice. By 2024, the Swedish fintech reported that 87% of its employees were using generative AI in daily tasks across domains like compliance, customer support, and legal operations. However, this broad adoption was not purely additive: The company had previously laid off 700 employees due to automation but subsequently rehired staff into redesigned hybrid roles that require oversight, interpretation, and contextual judgment. The episode highlights not just the efficiency gains of AI, but also its limits—and the enduring need for human input where nuance, ethics, or ambiguity are involved.

The bottom line? AI does not eliminate human input—it changes where it is needed and how it adds value.

New roles, new skills

As job descriptions evolve, so does the definition of financial talent. Excel is no longer a differentiator; Python is fast becoming the new Excel. But technical skills alone will not cut it. The most in-demand profiles today are those that speak both AI and finance and can move between legal, operational, and data contexts without losing the plot.
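
To make the "new Excel" claim concrete, here is the sort of everyday task that has migrated from spreadsheets to code: a pivot-table-style summary built with pandas. The desks and notionals are made-up illustrative data.

```python
# A toy example of spreadsheet-style work now routinely done in Python.
# The desks and notional amounts below are made-up illustrative data.
import pandas as pd

trades = pd.DataFrame({
    "desk":         ["rates", "rates", "fx", "credit", "fx"],
    "notional_usd": [5_000_000, 2_500_000, 1_200_000, 3_000_000, 800_000],
})

# Pivot-table-style summary per desk: total, average, and trade count,
# the kind of aggregation an analyst once built in Excel.
summary = (
    trades.groupby("desk")["notional_usd"]
          .agg(total="sum", average="mean", trades="size")
          .sort_values("total", ascending=False)
)
print(summary)
```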

Emerging roles reflect this shift: model risk officers who audit AI decisions; conversational system trainers who fine-tune the behavior of large language models (LLMs); product managers who orchestrate AI pipelines for advisory services; and compliance leads fluent in prompt engineering.
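
To give the first of these roles some texture, here is a minimal sketch of one check a model risk officer might run: the rate at which human reviewers override the model on a sampled set of decisions. The data and the tolerance are assumptions for illustration, not a standard.

```python
# An illustrative model-audit metric: the human override rate on a
# sample of model decisions. All data and names here are hypothetical.
import pandas as pd

sample = pd.DataFrame({
    "model_label": ["approve", "approve", "deny", "approve", "deny"],
    "human_label": ["approve", "deny",    "deny", "approve", "approve"],
})

override_rate = (sample["model_label"] != sample["human_label"]).mean()
print(f"Override rate: {override_rate:.0%} on {len(sample)} sampled decisions")

# A high or rising override rate in one segment is a prompt for deeper
# review, not proof of failure; the tolerance is a governance choice.
if override_rate > 0.10:  # hypothetical tolerance
    print("Flag for model risk committee review")
```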

For many institutions, the bigger challenge is not hiring this new talent—it is retraining the workforce they already have. Middle-office staff, operations teams, even some front-office professionals now face a stark reality: Reskill or risk being functionally sidelined.

But reinvention is possible—and already underway. Forward-looking institutions are investing in internal AI academies, pairing domain experts with technical mentors, and embedding cross-functional teams that blur the lines between business, compliance, and data science.

At Morgan Stanley, financial advisors are learning to work alongside GPT-4-powered copilots trained on proprietary knowledge. At BNP Paribas, Environmental, Social, and Governance (ESG) analysts use GenAI to synthesize sprawling unstructured data. At Klarna, multilingual support agents have been replaced not by AI alone, but by hybrid teams that supervise and retrain it.

Non-technological barriers to automation: The human frontier

Despite the rapid pace of automation, there remain important limits to what AI can displace—and they are not just technical. Much of the critical decisionmaking in finance depends on tacit knowledge: The unspoken, experience-based intuition that professionals accumulate over years. This kind of knowledge is hard to codify and even harder to replicate in generative systems trained on static data.

Tacit knowledge is not simply a nice-to-have. It is often the glue that binds together fragmented signals, the judgment that corrects for outliers, the intuition that warns when something “doesn’t feel right.” This expertise lives in memory, not in manuals. As such, AI systems that rely on past data to generate probabilistic predictions may lack precisely the cognitive friction—the hesitations, corrections, and exceptions—that make human decisionmaking robust in complex environments like finance.

Moreover, non-technological barriers to automation range from cultural resistance to ethical concerns, from regulatory ambiguity to the deeply embedded trust networks on which financial decisions still depend. For example, clients may resist decisions made solely by an AI model, particularly in areas like wealth management or risk assessment.

These structural frictions offer not just constraints but breathing room: A window of opportunity to rethink education and training in finance. Instead of doubling down on technical specialization alone, institutions should be building interdisciplinary fluency—where practical judgment, ethical reasoning, and model fluency are taught in tandem.

Policy implications: Avoid a two-tier financial workforce

Without coordinated action, the rise of AI could bifurcate the financial labor market into two castes: Those who build, interpret, and oversee intelligent systems, and those who merely execute what those systems dictate. The first group thrives. The second stagnates.

To avoid this divide, policymakers and institutions must act early by:

  • Promoting baseline AI fluency across the financial workforce, not just in specialist roles.
  • Supporting mid-career re-skilling with targeted tax incentives or public-private training programs.
  • Auditing AI systems used in HR to ensure fair hiring and avoid algorithmic entrenchment of bias (a minimal sketch of one such check follows this list).
  • Incentivizing hybrid education programs that bridge finance, data science, and regulatory knowledge.
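
As referenced above, here is a minimal sketch of one check such an HR audit might include: comparing selection rates across candidate groups, in the spirit of the conventional "four-fifths" screen. The data, groups, and threshold are illustrative assumptions, not a prescribed methodology.

```python
# A minimal sketch of a fairness screen for an AI-assisted hiring tool:
# compare selection rates across groups. All data here are made up.
import pandas as pd

candidates = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   0,   0],
})

rates = candidates.groupby("group")["selected"].mean()
ratio = rates.min() / rates.max()
print(rates)
print(f"Selection-rate ratio: {ratio:.2f}")

# A ratio below roughly 0.8 is a conventional trigger for closer scrutiny
# of the model and its training data; it is a screen, not a verdict.
if ratio < 0.8:  # illustrative threshold
    print("Disparity flagged for human audit")
```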

The goal is not to slow down AI; rather, it is to ensure that the people inside financial institutions are ready for the systems they are building.

The future of finance is not a contest between humans and machines. It is a contest between institutions that adapt to a hybrid cognitive environment and those that cling to legacy hierarchies while outsourcing judgment to systems they cannot explain.

In this new reality, cognitive arbitrage is the new alpha. The edge does not come from knowing the answers; it comes from knowing how the model got them and when it is wrong.

The next generation of financial professionals will not just speak the language of money. They will speak the language of models, ethics, uncertainty, and systems.

And if they do not, someone—or something else—will.

Acknowledgements and disclosures

The Workforce for the Future initiative is grateful for the support of Walmart, Inc. The findings, interpretations, and conclusions in this report are solely those of the authors and do not represent positions or policies of Walmart. Brookings is committed to quality, independence, and impact in all of its work. Activities supported by its donors reflect this commitment, and the analysis and recommendations are solely determined by the scholars.
