Salesforce pitches AI agents as the government sheds staff

Crowds enter the Dreamforce conference at the Moscone Center on Howard Street in San Francisco, California Tuesday, Oct. 14, 2025. San Francisco Chronicle/Hearst Newspapers / Contributor

By Natalie Alms, Senior Correspondent, Nextgov/FCW

Amid the White House’s ongoing push to reduce the federal workforce, “government will be the largest users of agentic technologies of any industry,” predicted one Salesforce exec.

SAN FRANCISCO — As the Trump administration shrinks the federal workforce, the private sector is developing artificial intelligence that could fill in gaps — specifically, agentic AI, technology that can take action without precise instructions. 

The “agentic enterprise,” where humans work alongside AI agents, was the theme of Salesforce’s annual conference last week.

Currently, it’s largely a hypothetical in the federal context. Agencies don’t have Salesforce-powered agents in production, and what exactly counts as an AI agent is debatable, as the term is used to describe systems with varying degrees of autonomy. 

Still, “I think government will be the largest users of agentic technologies of any industry,” Paul Tatum, executive vice president of Salesforce’s global public sector division, told Nextgov/FCW.

Dozens of federal customers are piloting Salesforce agents, he said, predicting deployments within three to six months. 

Last week, the software company announced its new platform for agentic AI, which includes voice-enabled agents that can automate customer service interactions like phone calls using natural language processing, offering more conversational interactions instead of rigid phone trees.

The Trump administration may be willing to use agents, given its pro-AI stance and the cuts to staffing and budgets across much of the federal government.

The exact effects of agents, including the potential for displacement, remain to be seen. But the federal government’s chief information officer, Gregory Barbaccia, has already said that AI “is the number one thing that is going to help people mitigate the staffing shortages,” pointing to automation as the “holy grail” for the federal government.

The federal workforce will be 300,000 employees smaller by the end of the year, the head of the Office of Personnel Management, Scott Kupor, has said.

“Governments never have enough resources and they never have enough budget. It’s actually the perfect use case where you can put an agent in place to help you,” Nadia Hansen, public sector industry advisor at Salesforce, told Nextgov/FCW.

Tatum called benefits and claims adjudication a “very easy use case” for agencies to start with, suggesting that agents could comb through backlogs and offer federal employees recommendations on each case.

Asked about the risk of using the technology for adjudications, Tatum emphasized human oversight and said that agents could be applied first to the easiest cases. As for evaluation, he said customers can use AI to test agents at scale, grading them against policy documents.

The risks of agents are similar to those of generative AI, said Suresh Venkatasubramanian, director of the Center for Technological Responsibility, Reimagination and Redesign at Brown University, although those risks are compounded by the fact that agents have autonomy to take actions, which introduces additional security concerns and oversight challenges. 

During its annual conference, Salesforce introduced “hybrid reasoning” into its Agentforce platform, an approach the company said combines deterministic workflows with the flexibility of large language models. Tatum said that combination can also help address the risks that stem from LLMs’ non-deterministic nature.

Still, critics warn that AI-powered tools can and have automated bias and mistakes at scale. An automated fraud detection system in Michigan, for example, wrongly accused about 40,000 individuals of unemployment fraud between 2013 and 2015. 

Others have cautioned that governments shouldn’t wait to adopt agentic AI and risk widening any gaps between the technology people experience in the private versus the public sector.

What’s important, said Venkatasubramanian, is evaluating agents in context, where the technology can be judged by those using it based on whether it accomplishes what they need done.

Outside of Austin, Texas, the city of Kyle deployed an AI agent with Salesforce in March as part of a customer service reorganization for the city’s non-emergency phone line, 311. The city has been struggling to keep up with rapid population growth over the last decade.

Individuals with service requests, like alerting the city to a new pothole, can use the agent on the city’s website or mobile app to get information and request city services.

“It’s not like the old school chatbots where it just provides you a link,” said Joshua Chronley, the city’s assistant director for finance. “It can collect the information [and] submit it for you.”

Those who opt in can also get progress updates as their requests are resolved.

The agent has access to the city’s entire knowledge base, including city council sessions, codes, ordinances, calendars and more, said Chronley. He said he’s confident the agent is working well because the city has seen a 10% reduction in call volume but no change in the number of service requests created.

Eventually, the city wants to use Salesforce’s voice technology agent on its phone lines, he said. Currently, the 311 line is still manned by employees, although those employees have access to the agent on the backend.

The city isn’t expecting immediate workforce impacts from adding the voice technology, said Chronley, although if it proves reliable long-term, Kyle may “elevate” some of its call center positions so that those employees essentially manage the agents answering the phones, stepping in when callers want to talk to a person.

“It is all about augmentation,” Mia Jordan, a public sector industry advisor at Salesforce, told Nextgov/FCW about how agents work in relation to people. 

The company isn’t promoting that its agents should be used to replace human employees, she said, although “I do believe that if this administration does not have a desire to bring more people on board, they can use it for that purpose too.”