At this year’s Snowflake Summit, all eyes were on AI. From new agentic AI features to the acquisition of CrunchyData, Snowflake continues its rapid evolution into a full-stack data platform. But while the product roadmap dazzled, the real story was playing out in hallway conversations, security breakouts, and urgent questions at the booth.
Customers aren’t just excited—they’re overwhelmed. The AI era is here, but most organizations are still trying to get their access controls, governance, and data architecture in order. If one sentiment echoed across the conference, it was this: “We want AI—but we need control first.”
Here’s our deep dive into what was announced, what customers are saying, and what it all means.
Foundational Moves: What Snowflake Announced
1. PostgreSQL Comes to Snowflake via CrunchyData
Snowflake’s acquisition of CrunchyData brings enterprise-grade PostgreSQL into the fold. This move marks a significant expansion beyond analytical workloads, acknowledging that transactional systems aren’t going away anytime soon.
The insight:
Snowflake isn’t just absorbing analytics—it’s aiming to become the one platform to run operational and analytical workloads side by side. This matters because it simplifies data pipelines, supports real-time applications, and opens the door for agentic AI to act directly on live transactions. It also helps organizations consolidate their data stack, reducing the need for separate systems and connectors, cutting operational overhead, and making governance and observability more consistent and manageable across the board.
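To make the consolidation argument concrete, here is a rough sketch of the two-stack pattern many teams run today. This is not Snowflake’s announced Postgres interface; the hosts, credentials, and table names are placeholders.

```python
# The "two stacks" pattern the CrunchyData acquisition aims to collapse:
# operational reads from PostgreSQL, analytical reads from Snowflake,
# stitched together in application code.
import psycopg2                  # pip install psycopg2-binary
import snowflake.connector       # pip install snowflake-connector-python

# Operational lookup: a single live order from the transactional Postgres system.
pg = psycopg2.connect(host="pg.internal", dbname="orders", user="app", password="...")
pg_cur = pg.cursor()
pg_cur.execute("SELECT order_id, status, total FROM orders WHERE order_id = %s", (42,))
live_order = pg_cur.fetchone()

# Analytical lookup: trailing spend for the same customer from the warehouse.
sf = snowflake.connector.connect(account="myorg-myacct", user="app", password="...",
                                 warehouse="ANALYTICS_WH", database="ANALYTICS",
                                 schema="PUBLIC")
sf_cur = sf.cursor()
sf_cur.execute("SELECT SUM(total) FROM fact_orders WHERE customer_id = %s", (42,))
trailing_spend = sf_cur.fetchone()[0]

# Today: two engines, two credential sets, two governance surfaces.
# The pitch behind the acquisition is that both queries eventually run
# against one platform, under one set of access policies.
print(live_order, trailing_spend)
```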
2. Agentic AI: From Insight to Action
Snowflake unveiled Snowflake Intelligence, a native framework for building and deploying AI agents that understand context, automate workflows, and take action on behalf of users. This includes AI copilots, data science agents, and natural language-driven model generation.
The insight:
This isn’t about LLMs answering questions—it’s about AI agents acting autonomously. For businesses, that introduces exciting opportunities and serious governance questions. Who controls the agent? What data can it see? And how do you audit its actions?
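As a sketch of what those guardrails could look like, independent of Snowflake Intelligence itself, here is a minimal wrapper that vets agent-generated SQL against an allowlist and audit-logs every decision. The table names, role model, and logging sink are illustrative.

```python
# Minimal governance wrapper for agent-issued SQL: every statement an agent
# proposes is checked against an allowlist of objects and verbs, and every
# decision is written to an audit log.
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent_audit")

ALLOWED_TABLES = {"ANALYTICS.PUBLIC.FACT_ORDERS", "ANALYTICS.PUBLIC.DIM_CUSTOMER"}
ALLOWED_VERBS = {"SELECT"}  # read-only agent: no INSERT/UPDATE/DELETE/DDL

def vet_agent_sql(agent_id: str, sql: str) -> bool:
    """Return True only if the statement is read-only and touches allowed tables."""
    verb = sql.strip().split()[0].upper()
    referenced = set(re.findall(r"[A-Z_]+\.[A-Z_]+\.[A-Z_]+", sql.upper()))
    allowed = verb in ALLOWED_VERBS and referenced <= ALLOWED_TABLES
    audit_log.info("agent=%s verb=%s objects=%s allowed=%s at=%s",
                   agent_id, verb, sorted(referenced), allowed,
                   datetime.now(timezone.utc).isoformat())
    return allowed

# Usage: only hand the statement to the warehouse if the guardrail approves it.
proposed = "SELECT customer_id, SUM(total) FROM ANALYTICS.PUBLIC.FACT_ORDERS GROUP BY 1"
if vet_agent_sql("copilot-7", proposed):
    pass  # execute via the warehouse connection here
```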
3. OpenFlow: Snowflake’s Native ETL Evolution
With OpenFlow, Snowflake introduced a managed ingestion and transformation engine built on Apache NiFi, integrated with dbt, and optimized for Apache Iceberg. It’s designed for real-time, event-driven pipelines—all within Snowflake.
The insight:
This move challenges traditional ETL vendors and signals that Snowflake intends to become your data engineering stack, not just your warehouse. For existing tools in the ecosystem, the message is clear: evolve or risk being absorbed.
4. Semantic Metadata for AI
New semantic views and metadata tagging features were introduced to help AI agents understand context, lineage, and sensitivity. These are designed to support data classification, policy enforcement, and AI explainability.
The insight:
AI readiness is now a metadata game. Without clear definitions and relationships, AI becomes a liability. Snowflake is laying the groundwork for secure, interpretable AI—something every enterprise will need to prioritize.
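Some of that groundwork can be laid today with Snowflake’s existing object tags (the new semantic-view syntax is not shown here). A minimal sketch via the Python connector, with placeholder names:

```python
# Metadata-driven sensitivity tagging in Snowflake, issued through the Python
# connector. CREATE TAG and SET TAG are existing Snowflake features; database,
# schema, and column names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(account="myorg-myacct", user="admin",
                                   password="...", role="GOVERNANCE_ADMIN",
                                   database="ANALYTICS", schema="PUBLIC")
cur = conn.cursor()

# Declare a sensitivity tag with a fixed set of allowed values.
cur.execute("""
    CREATE TAG IF NOT EXISTS sensitivity
      ALLOWED_VALUES 'public', 'internal', 'pii'
""")

# Attach it to a column so downstream tools (and AI agents) can read the
# classification, and so policies can key off it.
cur.execute("""
    ALTER TABLE dim_customer
      MODIFY COLUMN email SET TAG sensitivity = 'pii'
""")
```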
Real Talk from the Booth: What Customers Actually Said
Amid the product buzz, real pain points surfaced. Here’s what prospects and customers told us face-to-face:
DIY Access Management Is Failing
Even large enterprises admitted their access management is duct-taped together. From manual permissions to role bloat and shadow access, few have confidence in their current setup. There’s growing urgency to replace brittle DIY solutions with scalable, policy-driven models.
Everyone Wants Visibility Before Action
Before solving access and governance problems, companies need to see them. Visualization tools—especially those that show data access hierarchies and policy gaps—were a breakout hit. It’s not enough to enforce policies; organizations want to understand them in real time.
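One lightweight way to get that visibility is to pull role-to-role grants from the ACCOUNT_USAGE share and graph the hierarchy. A minimal sketch follows, with placeholder account details; a real tool would extend it with object grants and user assignments.

```python
# Build a role-hierarchy graph from Snowflake's ACCOUNT_USAGE.GRANTS_TO_ROLES
# view, then flag roles with unusually wide fan-out as candidates for cleanup.
import snowflake.connector
import networkx as nx            # pip install networkx

conn = snowflake.connector.connect(account="myorg-myacct", user="auditor",
                                   password="...", role="SECURITYADMIN",
                                   warehouse="ADMIN_WH")
cur = conn.cursor()
cur.execute("""
    SELECT grantee_name, name
    FROM snowflake.account_usage.grants_to_roles
    WHERE granted_on = 'ROLE'
      AND privilege = 'USAGE'
      AND deleted_on IS NULL
""")

g = nx.DiGraph()
for parent_role, child_role in cur.fetchall():
    g.add_edge(parent_role, child_role)   # parent role inherits the child role

# Quick smell test: roles granted a very large number of other roles
# often signal role bloat.
fan_out = sorted(g.out_degree(), key=lambda kv: kv[1], reverse=True)[:10]
print("Widest roles:", fan_out)
```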
AI Without Access Control Is a Disaster Waiting to Happen
Customers love the idea of embedding AI in workflows—but not if it means losing control of sensitive data. The recurring request: “Help us secure the data before we unleash the AI.” That includes tokenization, masking, right-to-forget compliance, and clear auditability.
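As one piece of that request, here is a hedged sketch of dynamic data masking applied before any agent or analyst touches the column. The policy logic, roles, and object names are placeholders; tokenization and right-to-forget flows are out of scope here.

```python
# Dynamic data masking via the Python connector. CREATE MASKING POLICY and
# SET MASKING POLICY are existing Snowflake features; role and object names
# are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(account="myorg-myacct", user="admin",
                                   password="...", role="GOVERNANCE_ADMIN",
                                   database="ANALYTICS", schema="PUBLIC")
cur = conn.cursor()

# Only privileged analysts see the raw value; everyone else (including an AI
# agent running under a service role) gets a redacted form.
cur.execute("""
    CREATE MASKING POLICY IF NOT EXISTS email_mask AS (val STRING) RETURNS STRING ->
      CASE
        WHEN CURRENT_ROLE() IN ('PII_ANALYST') THEN val
        ELSE REGEXP_REPLACE(val, '.+@', '*****@')
      END
""")
cur.execute("""
    ALTER TABLE dim_customer
      MODIFY COLUMN email SET MASKING POLICY email_mask
""")
```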
Session Themes: The Rise of Secure AI & Open Data Architectures
Across the breakout rooms and technical sessions, three consistent themes emerged:
Security Is Now Strategic
Sessions on authentication, access control, and auditability were packed. Snowflake’s enhancements to the Trust Center and authentication policies resonated with security-conscious organizations. Event-driven scans and tighter login controls were especially well received.
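One concrete example of tightening login controls with existing features is a network policy enforced account-wide. The IP ranges and policy name below are placeholders.

```python
# Restrict where logins can originate from using a Snowflake network policy,
# then enforce it for the whole account. CREATE NETWORK POLICY and
# ALTER ACCOUNT SET NETWORK_POLICY are existing Snowflake statements.
import snowflake.connector

conn = snowflake.connector.connect(account="myorg-myacct", user="secadmin",
                                   password="...", role="SECURITYADMIN")
cur = conn.cursor()

cur.execute("""
    CREATE NETWORK POLICY corp_only
      ALLOWED_IP_LIST = ('203.0.113.0/24', '198.51.100.0/24')
""")

# Enforce it for every login to the account (requires a role with the
# appropriate account-level privilege).
cur.execute("ALTER ACCOUNT SET NETWORK_POLICY = corp_only")
```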
Agentic AI Is Cool—But Customers Want Guardrails
Agentic AI—autonomous, task-performing agents—was a highlight. But attendees had concerns about how to monitor and govern their behavior. There’s a growing appetite for explainable AI, especially when agents act across business-critical systems.
Iceberg Adoption Is Heating Up
Apache Iceberg isn’t just a compatibility feature—it’s becoming a strategic pillar. Organizations want to future-proof their data with open table formats, and Snowflake’s enhanced Iceberg support (including linked catalogs and fast metadata ops) helps make that vision real.
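The practical payoff of the open format is that the same Iceberg table can be read by any engine that speaks it. A minimal sketch with the open-source pyiceberg client against a hypothetical REST catalog:

```python
# Read an Iceberg table directly with pyiceberg. The catalog URI, credentials,
# and table name are placeholders; any engine that understands Iceberg could
# read the same table Snowflake manages or links.
from pyiceberg.catalog import load_catalog   # pip install "pyiceberg[pyarrow]"

catalog = load_catalog(
    "lake",
    **{
        "uri": "https://catalog.example.com",   # hypothetical REST catalog endpoint
        "token": "...",
    },
)

table = catalog.load_table("analytics.fact_orders")

# Prune columns and rows at scan time, then materialize to Arrow.
batch = (
    table.scan(
        row_filter="order_date >= '2025-01-01'",
        selected_fields=("order_id", "customer_id", "total"),
    )
    .to_arrow()
)
print(batch.num_rows)
```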
Breadth vs. Depth: Where the Market Is Headed
Snowflake is clearly moving to own more of the data lifecycle—from ingestion and storage to AI-driven insights. But with that expansion comes complexity. And that’s where the opportunity lies.
Customers don’t just want more tools. They want clarity, control, and cross-platform flexibility. Snowflake’s strength is its breadth—but it’s leaving gaps in depth, especially around governance, data policy management, and multi-cloud interoperability.
The takeaway:
As Snowflake builds horizontally, there’s still massive value in vertical expertise—especially for companies that need security, access, and governance to span more than just Snowflake.
Wrapping Up
Snowflake Summit 2025 showed us the future of data is smarter, faster, and more automated than ever. But as the AI wave rises, it’s clear that control, compliance, and governance aren’t optional—they’re the foundation.
AI won’t replace your team—but lack of access control might.
Now is the time to help organizations secure their data, visualize their risks, and operationalize their policies—before the agents start running the show.
Key Takeaways
- Format-Preserving Encryption (FPE) secures sensitive data without altering its structure, making it ideal for systems that depend on specific data formats—like phone numbers, emails, or customer IDs.
- FPE is particularly valuable in non-production environments (e.g., development, QA, analytics), where teams need realistic data without exposing raw PII.
- Deterministic FPE is essential when supporting joins, filters, and searches on encrypted fields—especially in environments where data needs to behave consistently (see the sketch after this list).
- External data sharing benefits from FPE by preserving readability and enabling fine-grained access control, ensuring only trusted users can see decrypted values.
- In multi-tenant SaaS environments, FPE helps maintain tenant isolation within shared tables, supporting compliance and trust without duplicating schema or infrastructure.
- Effective implementation requires careful policy design, key management, and enforcement, including per-tenant encryption logic and session-aware decryption rules.
- FPE should not be used universally—some fields are better served by masking or tokenization. Selecting the right technique for each data type is critical to balancing security and performance.
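To ground the deterministic-FPE point, here is a minimal sketch using the open-source ff3 package (an FF3/FF3-1 implementation). The key and tweak are throwaway placeholders, not a key-management recommendation.

```python
# Minimal format-preserving encryption sketch with the open-source ff3 package
# (pip install ff3). The key and tweak below are hard-coded placeholders for
# illustration only; real deployments would pull them from a key management
# service and rotate them per policy (or per tenant).
from ff3 import FF3Cipher

KEY = "EF4359D8D580AA4F7F036D6F04FC6A94"   # 128-bit key, hex (placeholder)
TWEAK = "CBD09280979564"                   # 56-bit tweak, hex (placeholder)

cipher = FF3Cipher(KEY, TWEAK)             # default radix 10: digits in, digits out

customer_id = "4000001234567899"           # 16 digits in -> 16 digits out
token = cipher.encrypt(customer_id)

# Deterministic: the same key + tweak + plaintext always yields the same
# ciphertext, so encrypted IDs still join and filter consistently across tables.
assert token == cipher.encrypt(customer_id)
assert cipher.decrypt(token) == customer_id
print(customer_id, "->", token)
```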