Highlights
Anthropic launched Claude for Financial Services, an enterprise-only AI platform designed to help financial institutions conduct research, financial modeling and compliance work, backed by verified data sources and audit trails.
The solution integrates with providers like Snowflake, S&P Global and Morningstar, enabling users to verify financial data directly from the source and reduce hallucinations in AI-generated outputs.
Clients including Bridgewater and the Norwegian sovereign wealth fund are adopting the platform, which offers expanded context windows, Claude Code for custom modeling, and six weeks of training.
Anthropic on Tuesday (July 15) introduced Claude for Financial Services, an out-of-the-box generative artificial intelligence solution tailored to analysts, portfolio managers and underwriters at large financial institutions.
The service, which is only available in enterprise plans, is powered by Claude 4, Anthropic’s most advanced AI model family. It is integrated with Box, Snowflake, Morningstar, FactSet, S&P Global and other data sources so users can access internal and external data in one dashboard.
This is the first industry-specific service that Anthropic has formally introduced, according to Jonathan “JP” Pelosi, head of financial services industry (FSI) at Anthropic.
“Where we saw a lot of traction early on was with these high-trust industries,” Pelosi told PYMNTS in an interview. “Our models, our solutions, are just very well positioned to help these firms.”
The platform enables financial professionals to conduct research, generate investment reports, and perform financial modeling with audit trails and verified source data. It can be used to modernize trading, automate compliance and, using Claude Code, run complex analyses including Monte Carlo simulations and risk modeling.
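Anthropic has not published sample output from these workflows, but as a rough sketch, the kind of Monte Carlo risk estimate an analyst might ask Claude Code to generate could look like the following; the normally distributed returns assumption, figures and function names are illustrative, not part of the product.

```python
# Illustrative sketch only: a simple Monte Carlo value-at-risk estimate,
# assuming normally distributed daily returns. Not Anthropic's implementation.
import numpy as np

def monte_carlo_var(portfolio_value, mean_return, volatility,
                    horizon_days=10, n_sims=100_000, confidence=0.99):
    """Estimate value-at-risk over a horizon from simulated return paths."""
    rng = np.random.default_rng(seed=42)
    # Simulate daily returns for each scenario across the horizon.
    daily_returns = rng.normal(mean_return, volatility,
                               size=(n_sims, horizon_days))
    terminal_values = portfolio_value * np.prod(1 + daily_returns, axis=1)
    losses = portfolio_value - terminal_values
    # VaR is the loss exceeded in only (1 - confidence) of scenarios.
    return np.percentile(losses, confidence * 100)

if __name__ == "__main__":
    var_99 = monte_carlo_var(portfolio_value=10_000_000,
                             mean_return=0.0003, volatility=0.012)
    print(f"10-day 99% VaR: ${var_99:,.0f}")
```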
The service comes with six weeks of hands-on training, Pelosi said. Anthropic also partnered with consulting firms such as Deloitte and KPMG to provide implementation help.
The product expands Claude’s context window and usage limits to support large-document analysis, a necessity for hedge funds, banks and insurers conducting due diligence or modeling transactions. Users can “look at hundreds of pages of financial documents” without running into rate limits or losing continuity, Pelosi said.
Anthropic said client data is not used for AI model training.
However, the AI industry has a lot to prove before chief financial officers become comfortable with the technology. The PYMNTS Intelligence report “The Agentic Trust Gap: Enterprise CFOs Push Pause on Agentic AI” found that a nagging concern is hallucinations, where an AI agent can go off script and expose firms to cascading payment errors and other inaccuracies.
Even as financial services companies know that AI brings a speed advantage at a time of tighter spreads and market volatility, they are hesitating.
“For finance leaders, the message is stark: Harness AI’s momentum now, but build the guardrails before the next quarterly call — or risk owning the fallout,” the report said.
For highly regulated industries like financial services, the accuracy of generative AI responses is critical, and Pelosi said the solution delivers on that front.
The concern over hallucinations has “significantly stymied meaningful adoption in the financial industry,” Pelosi said. “If you and I are in the business of making very large investments or analysis on very high-stakes transactions, we don’t have the luxury of saying, ‘Hopefully that [calculation is] right.’”
The solution’s integration with data and service providers should enable users to verify the data against original sources, Pelosi said. Moreover, it can handle not only text, but also audio and images — for slides, graphs and the like.
But Pelosi stopped short of saying Anthropic has solved hallucinations completely.
Anthropic isn’t claiming that “Claude would never hallucinate again,” he said. However, “it’s making it easier and easier to validate the numbers that you’re making very big decisions on.”
To that end, Claude can not only cite sources but also express uncertainty and respond with “humility,” according to Pelosi. Giving a large language model this avenue when it can’t find an answer is one way to prevent hallucinations, which occur when a model doesn’t know the answer but, in trying to fulfill the user’s request, makes something up.
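The product’s internal configuration isn’t public, but a minimal sketch of how the same “humility” behavior can be encouraged through Anthropic’s standard Python SDK is below; the model ID and prompt wording are assumptions for illustration, not the product’s settings.

```python
# Illustrative only: steering a model toward admitting uncertainty rather than
# guessing, via Anthropic's public Python SDK (pip install anthropic).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # example model ID, an assumption
    max_tokens=500,
    system=(
        "You are a financial research assistant. Cite the source document for "
        "every figure you report. If a figure cannot be found in the provided "
        "sources, say 'I could not verify this' instead of estimating."
    ),
    messages=[{
        "role": "user",
        "content": "What was the issuer's Q3 net interest margin, per the filings provided?",
    }],
)
print(response.content[0].text)
```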
The product also includes Claude Code for analysts who need to go beyond standard capabilities, enabling users to “write and debug code” for custom modeling. It can also tap deep research, which pulls data from external and internal data libraries, Pelosi said.
Pelosi said the solution has been adopted by the likes of Bridgewater, which uses it for analysis, and the Norwegian sovereign wealth fund, among others.
The focus on financial services comes as Anthropic’s revenue reportedly hit an annual run rate of $4 billion, even as two leaders of its Claude Code product left for a competitor, Anysphere, the maker of the viral coding assistant Cursor. Coding is one of the most popular use cases for generative AI and brings business to AI companies.
Claude for Financial Services is available on the AWS Marketplace, and it is expected to be available on the Google Cloud Marketplace soon.