Scaling Enterprise AI: Insights from Franny Hsiao at Salesforce

Scaling enterprise AI is a venture full of promise, yet it is often marred by architectural missteps that can hinder progress before projects even take flight. The initial allure of generative AI prototypes can quickly fade when the challenge shifts from simple model development to the intricate realities of data engineering and governance. As we approach the AI & Big Data Global 2026 in London, Franny Hsiao, the EMEA Leader of AI Architects at Salesforce, sheds light on common pitfalls and the architecture strategies successful organizations employ to ensure resilience in real-world scenarios.

The ‘Pristine Island’ Problem in Enterprise AI Scaling

Many setbacks in scaling enterprise AI can be traced back to the environments where pilot programs are launched. These initiatives often thrive in controlled settings, blissfully unaware of the complexities waiting in the broader landscape.

Hsiao notes, “The single most common architectural oversight that prevents AI pilots from scaling is the failure to architect a production-grade data infrastructure with built-in end-to-end governance from the start.”

While pilots might begin on immaculate “pristine islands” with curated datasets, this approach overlooks the chaotic reality of enterprise data. Effective integration, normalization, and transformation are essential for managing real-world volume and variability. Neglecting these factors often leads to systems breaking under the strain.

“Data gaps and performance issues, such as inference latency, render the AI systems unusable—and worse, untrustworthy,” Hsiao cautions. Companies that successfully navigate these challenges are those that incorporate comprehensive observability and guardrails throughout the entire lifecycle of their systems. This ensures visibility and control over how effectively AI systems are operating and how users are engaging with them.
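Observability of the kind Hsiao describes starts with something simple: measuring every call an AI system makes and flagging the ones that blow the latency budget. The sketch below is an illustrative, generic instrument, not Salesforce's tooling; the names `observed` and `max_latency_s` are hypothetical.

```python
import functools
import time

def observed(max_latency_s: float, log: list):
    """Decorator adding basic observability: record the latency of every
    call and flag responses that exceed the latency budget."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed = time.perf_counter() - start
            log.append({
                "fn": fn.__name__,
                "latency_s": elapsed,
                "over_budget": elapsed > max_latency_s,
            })
            return result
        return inner
    return wrap

calls: list = []

@observed(max_latency_s=0.5, log=calls)
def answer(question: str) -> str:
    # stands in for a model inference call
    return f"answer to: {question}"

answer("What is my renewal date?")
print(calls[0]["fn"], calls[0]["over_budget"])
```

In production the log would feed a monitoring pipeline rather than an in-memory list, but the principle is the same: visibility is built into the call path, not bolted on afterwards.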

Engineering for Perceived Responsiveness

The deployment of large reasoning models, like Salesforce’s Atlas Reasoning Engine, requires balancing depth of computation against user satisfaction. Heavy computation introduces latency that tests users’ patience.

To counteract this, Salesforce emphasizes “perceived responsiveness through Agentforce Streaming.” This innovative approach allows for the progressive delivery of AI-generated responses while intensive computations are handled in the background.

Hsiao elaborates, “This strategy dramatically reduces perceived latency, which is often a stumbling block in production AI.”

Transparency also plays a crucial role. By providing progress indicators or designing intuitive visual cues like spinners and progress bars, organizations can keep users engaged and foster trust.

Combining visibility with mindful model selection—such as using smaller models for faster computations—creates an experience that feels both intentional and responsive.
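The core idea of progressive delivery can be illustrated with a token stream: the UI renders partial output as it arrives rather than waiting for the complete answer. This is a minimal generic sketch, not the Agentforce Streaming implementation, and the function names are hypothetical.

```python
import time
from typing import Iterator

def stream_response(tokens: list, delay: float = 0.0) -> Iterator[str]:
    """Yield response tokens as they become available, so a UI can
    render partial output instead of blocking on the full answer."""
    for token in tokens:
        time.sleep(delay)  # stands in for per-token model latency
        yield token

def render_progressively(token_stream: Iterator[str]) -> str:
    """Consume the stream, appending each chunk to the visible text."""
    shown = ""
    for chunk in token_stream:
        shown += chunk  # a real UI would repaint the view here
    return shown

answer = render_progressively(
    stream_response(["Checking ", "the ", "order ", "status..."])
)
print(answer)
```

Total compute time is unchanged; what improves is perceived latency, because the user sees the first words almost immediately.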

Embracing Offline Intelligence at the Edge

In industries such as utilities and logistics, uninterrupted cloud connectivity is often infeasible. Hsiao emphasizes that many enterprise clients seek offline functionality.

The trend toward on-device intelligence is particularly critical in field services. For instance, technicians can capture images of faulty parts while offline, with an on-device LLM (Large Language Model) providing immediate troubleshooting guidance from a cached knowledge base.

Once connectivity is restored, data synchronization occurs seamlessly, ensuring a unified source of truth without compromising workflow. Hsiao anticipates that continued innovation in edge AI will yield benefits like reduced latency, improved privacy, and greater cost-efficiency.
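The offline-then-sync pattern can be sketched as a simple append-only buffer: records captured while disconnected are replayed, in order, against the system of record once connectivity returns. This is an illustrative stand-in, assuming nothing about Salesforce's actual sync machinery; `OfflineQueue` and its methods are hypothetical names.

```python
from dataclasses import dataclass, field

@dataclass
class OfflineQueue:
    """Buffer records captured while disconnected; flush them when
    connectivity is restored."""
    pending: list = field(default_factory=list)

    def capture(self, record: dict) -> None:
        """Store a record locally while offline."""
        self.pending.append(record)

    def sync(self, upload) -> int:
        """Replay pending records in capture order against the server;
        return how many were synced."""
        synced = 0
        for record in list(self.pending):
            upload(record)  # would be a network call in practice
            self.pending.remove(record)
            synced += 1
        return synced

# offline: a technician captures a photo plus the on-device model's diagnosis
queue = OfflineQueue()
queue.capture({"part": "valve-7", "diagnosis": "seal wear", "photo": "img_041.jpg"})

# back online: replay the buffer into the single source of truth
server: list = []
count = queue.sync(server.append)
print(count, len(queue.pending))
```

A production version would also handle conflicts and partial failures, but the ordering guarantee is what preserves the unified source of truth Hsiao describes.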

Navigating High-Stakes Gateways

Scaling enterprise AI deployments requires thoughtful governance. As Hsiao details, determining when human oversight is needed is essential—not as a crutch but as a strategy for accountability and continuous improvement.

Salesforce’s approach includes mandating a “human-in-the-loop” system for critical actions, especially those that involve creating, uploading, or deleting vital data. This creates a constructive feedback loop where agents learn from human expertise, cultivating a model of collaborative intelligence rather than unchecked automation.
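A human-in-the-loop gate for critical actions can be expressed as a routing rule: anything that creates, uploads, or deletes vital data must pass an approver before it runs, while low-risk actions go straight through. This is a minimal sketch of the pattern, not Salesforce's implementation; all names here are hypothetical.

```python
from typing import Callable

# actions that must never run without human sign-off
CRITICAL_ACTIONS = {"create", "upload", "delete"}

def execute_with_oversight(action: str, payload: dict,
                           approve: Callable[[str, dict], bool],
                           run: Callable[[str, dict], str]) -> str:
    """Route critical actions through a human approver before execution;
    non-critical actions run directly."""
    if action in CRITICAL_ACTIONS and not approve(action, payload):
        return "rejected: awaiting human review"
    return run(action, payload)

# a reviewer who blocks deletes but allows everything else
reviewer = lambda action, payload: action != "delete"
runner = lambda action, payload: f"executed {action}"

print(execute_with_oversight("read", {}, reviewer, runner))
print(execute_with_oversight("delete", {}, reviewer, runner))
```

The rejections themselves are valuable: each one is a labeled example of what the agent got wrong, which is the feedback loop that turns oversight into collaborative intelligence rather than a bottleneck.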

To foster trust in these agents, Salesforce has developed a Session Tracing Data Model (STDM), offering meticulous visibility into interactions. This model tracks every component, including user questions and system responses, enabling organizations to conduct comprehensive analyses for optimization and health monitoring.
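The article does not publish the STDM schema, but the underlying idea — an append-only log of every turn in a session, queryable later for analysis — can be sketched as follows. The class and field names are hypothetical stand-ins, not the actual data model.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TraceEvent:
    """One recorded turn in an agent session."""
    session_id: str
    role: str        # "user" or "agent"
    content: str
    timestamp: float

@dataclass
class SessionTrace:
    """Append-only log of every session turn, queryable later for
    optimization and health monitoring."""
    events: list = field(default_factory=list)

    def record(self, session_id: str, role: str, content: str) -> None:
        self.events.append(TraceEvent(session_id, role, content, time.time()))

    def turns(self, session_id: str) -> list:
        """Return the contents of one session's turns, in order."""
        return [e.content for e in self.events if e.session_id == session_id]

trace = SessionTrace()
trace.record("s1", "user", "Where is my order?")
trace.record("s1", "agent", "It shipped yesterday.")
print(trace.turns("s1"))
```

Because every user question and system response is captured with a timestamp, the same log supports both debugging a single conversation and aggregate health monitoring across sessions.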

Standardizing Communication Across Agents

As organizations deploy agents from various providers, establishing a common language is critical for multi-agent orchestration. Hsiao advocates for two key layers of standardization: orchestration and meaning.

Salesforce is moving towards open-source standards, such as MCP (Model Context Protocol) and A2A (Agent to Agent Protocol), to facilitate interoperability and mitigate vendor lock-in. Moreover, unifying semantics through initiatives like Open Semantic Interchange ensures that agents across diverse systems can understand each other’s intents.
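What a shared layer of "orchestration and meaning" buys can be shown with a toy message envelope: if every agent, regardless of vendor, wraps its messages with the same fields, any other agent can parse them. This is a deliberately generic illustration of the interoperability idea, not the MCP or A2A wire formats, and every field name is hypothetical.

```python
import json

def make_envelope(sender: str, recipient: str, intent: str, body: dict) -> str:
    """Wrap an inter-agent message in a shared envelope so agents from
    different vendors read the same fields the same way."""
    return json.dumps({
        "sender": sender,
        "recipient": recipient,
        "intent": intent,   # the shared-semantics layer: what is being asked
        "body": body,
    })

def read_envelope(raw: str):
    """Parse an envelope back into its intent and payload."""
    msg = json.loads(raw)
    return msg["intent"], msg["body"]

raw = make_envelope("billing-agent", "support-agent",
                    "lookup_invoice", {"invoice_id": "INV-42"})
intent, body = read_envelope(raw)
print(intent, body["invoice_id"])
```

The real protocols go much further (capability discovery, tool invocation, auth), but the lock-in argument is already visible here: agreement on the envelope is what lets either agent be swapped for another vendor's.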

Preparing for the Future: Agent-Ready Data

Looking ahead, the focus is shifting from raw model capability to the accessibility of data. Many organizations grapple with outdated infrastructures in which crucial data remains difficult to navigate and reuse.

Hsiao foresees that the next significant challenge—and opportunity—will revolve around making enterprise data agent-ready. This involves developing architectures that are searchable and context-aware, moving beyond traditional ETL (Extract, Transform, Load) systems to enable enriching, personalized user experiences.

Ultimately, the coming year is not about racing to build more advanced models; it’s about fortifying the orchestration and data structures that will allow production-grade systems to flourish.

Salesforce is proud to sponsor this year’s AI & Big Data Global event in London, and Hsiao, along with other experts, will share deeper insights at the Salesforce booth located at stand #163. Don’t miss out on the chance to connect with industry leaders and explore the future of enterprise AI!
