When “AI-First” Meets Reality
What Salesforce’s layoffs and strategic reversal reveal about the limits of automation
I wrote about this a few weeks ago in a piece titled AI Isn’t Cutting Developer Jobs, It’s Just Changing What We Do (Nov 25, 2025: https://nazanwrites.beehiiv.com/p/ai-isn-t-cutting-developer-jobs-it-s-just-changing-what-we-do). At the time, the dominant narrative was already clear: AI would automate knowledge work at scale, companies would trim teams, and productivity gains would neatly replace human labor. Since then, reality has been less tidy. Yes, there have been job cuts, and yes, AI is deeply reshaping how work gets done, but not in the clean, linear way many tech companies seemed to expect.
In September 2025, Salesforce became the poster child for the “AI will replace humans” narrative. CEO Marc Benioff casually confirmed that the company’s customer support organization had been reduced from roughly 9,000 people to about 5,000. The justification was straightforward: AI agents were already handling around half of all customer conversations, and according to Benioff, that simply meant he “needed less heads.” He framed the moment as a breakthrough, an overdue rebalancing enabled by generative AI, and described the period as the most exciting phase of his career.
That confidence did not survive the year.
By December, Salesforce executives were publicly walking back earlier assumptions. Sanjna Parulekar, the company’s Senior Vice President of Product Marketing, admitted that leadership had been far more optimistic about large language models a year earlier. What changed wasn’t ideology, but exposure. As these systems were pushed into more complex, real-world workflows, their limitations became difficult to ignore. LLMs turned out to be inherently probabilistic, prone to drifting off task, skipping instructions, or behaving unpredictably once the problem space became even moderately complicated.
Salesforce’s own engineers confirmed the issue. CTO Muralidhar Krishnaprasad explained that once an AI agent receives more than a handful of explicit instructions, often as few as eight, it begins to fail silently, omitting steps or ignoring constraints. This wasn’t a theoretical concern. Customers like Vivint discovered that AI agents built on Salesforce tooling were failing to perform basic actions, such as sending customer satisfaction surveys, despite being clearly instructed to do so. Fixing these failures required the introduction of “deterministic” safeguards: rigid, rule-based triggers layered on top of the AI to ensure it actually did what it was supposed to do.
By the end of 2025, the company’s strategy had quietly shifted. The loud “AI-first” positioning gave way to something more restrained: a “data-first” approach focused on foundations, controls, and predictability rather than model intelligence. Benioff himself began emphasizing the need to reduce hallucinations and uncertainty by prioritizing structured data over autonomous reasoning. And while thousands of support roles had been cut earlier in the year, Salesforce spent the closing months of 2025 ramping up hiring on the sales side, acknowledging, somewhat sheepishly, that AI still lacks the human judgment, trust, and relational depth required for complex business interactions.
What this gap between expectations and reality makes clear is that AI didn’t fail; it was oversold, and then over-deployed. The expectation was replacement: fewer people, flatter teams, cleaner processes. The reality has been messier. Today’s models are powerful but fragile, impressive in demos and unreliable at the edges where real work lives. They accelerate some tasks while creating new ones: monitoring, correcting, scaffolding, and building guardrails around systems that cannot be trusted unattended. We’re no longer asking whether AI works, but where it breaks and how costly those failures are. That’s where caution becomes necessary, not as resistance to progress, but as an acknowledgment that probabilistic systems don’t behave like conventional software, and pretending they do leads to brittle organizations and expensive reversals.
The Salesforce example brings the reality into sharp focus. The expectation was that AI would allow companies to do the same work with fewer people, cleanly and permanently. The reality has been closer to a reshuffle than a reduction. Headcount was cut where AI looked impressive on paper, then quietly rebuilt elsewhere once the limits became obvious. Automation handled the surface-level interactions, but humans were still needed to define boundaries, resolve ambiguity, repair failures, and carry relationships that models couldn’t sustain. What looked like efficiency at the executive level often translated into fragility on the ground. The lesson isn’t that AI has no place in tech organizations (it clearly does) but that treating it as a substitute for human judgment is a category error. Salesforce didn’t eliminate complexity; it displaced it, and then had to pay attention to where it landed.
Sources / Further Reading
– Slashdot (Sept 1, 2025): Coverage of Marc Benioff’s comments on reducing Salesforce support headcount following the rollout of AI agents.
– The Economic Times (Dec 23, 2025): Reporting on Salesforce executives acknowledging declining confidence in large language models and a shift toward more predictable automation.
– Salesforce Newsroom (Nov–Dec 2025): Official posts outlining Salesforce’s transition toward deterministic, data-first AI systems.

Comments

The part that resonates is the displacement of complexity. In most orgs, that complexity doesn’t disappear; it migrates into governance, monitoring, and exception handling.
AI-first often looks efficient at the executive layer because failure costs aren’t visible yet. But once probabilistic systems hit real workflows, the oversight surface expands slowly.
The org doesn’t get smaller. It gets differently structured.
Thank you for such a balanced take on the Salesforce situation. I absolutely think that a data-first approach is sustainable and effective. I think the problem with Salesforce is that they got rid of a lot of their institutional knowledge, which would be very difficult for them to buy back.