Announcing the TimeXtender MCP Server for governed AI queries, expanded Snowflake-first architecture options, and XPilot rule recommendations and error insights.
Most business teams are running into the same hard limit with AI: it can produce an answer in seconds, but it cannot prove that the answer accurately matches how your company measures performance.
Ask, “How did revenue perform last quarter?” and the model might pick the wrong revenue field, miss a Finance exclusion, treat bookings as revenue, or join customer records in a way that double-counts renewals. The output looks polished, but it is not anchored to the definitions your dashboards, board reports, and forecasts depend on.
That is the AI context gap. It turns AI into a reliability problem, not a productivity boost.
Governed context matters more than raw speed. When AI is forced to use the same approved business definitions, relationships, and metrics that analysts already rely on, its answers become consistent with what the organization has agreed is true. That consistency is what turns AI from “interesting output” into something a business leader can use to make reliable decisions.
When AI can only see raw database schemas, it makes silent assumptions about what things like “revenue” mean, how to attribute customers, which exclusions apply, and which filters must always be present.
When AI instead queries through governed semantic models, those assumptions are replaced by approved logic already in use across dashboards, board reporting, forecasting, and performance reviews.
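To make the contrast concrete, here is a minimal sketch of what "approved logic" means in practice. This is not TimeXtender's actual API; every name below is invented for illustration. The idea is a metric registry that resolves a business term to exactly one agreed definition, and refuses to guess when no definition exists.

```python
# Illustrative only: a toy "semantic layer" mapping business terms to
# approved SQL expressions. All names here are hypothetical.

APPROVED_METRICS = {
    # One agreed meaning of "revenue": recognized amounts only, with a
    # hypothetical Finance exclusion applied (test accounts never count).
    "revenue": "SUM(recognized_amount) FILTER (WHERE NOT is_test_account)",
}

def resolve_metric(term: str) -> str:
    """Return the approved expression for a term, or fail loudly instead of guessing."""
    if term not in APPROVED_METRICS:
        raise ValueError(f"no approved definition for '{term}'; refusing to guess")
    return APPROVED_METRICS[term]
```

Pointed at a raw schema, an AI might silently pick a bookings column instead; behind a registry like this, "bookings as revenue" is impossible by construction, because the only path to a metric is through its approved definition.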
The biggest change in this release is the TimeXtender MCP Server as a Deliver endpoint, which connects AI clients and AI agents directly to your governed semantic models.
Instead of inferring meaning from raw table names and ad hoc joins, the TimeXtender MCP Server routes AI queries through the semantic layer your analysts already trust for dashboards, reporting, and planning.
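As a rough sketch of what wiring this up could look like on the client side: MCP clients such as Claude Desktop read a JSON config listing the servers they may call. Everything below is a placeholder assumption (the server name, endpoint URL, and `mcp-remote` bridge command are illustrative; follow TimeXtender's documentation for the real connection details):

```json
{
  "mcpServers": {
    "timextender-semantic": {
      "command": "npx",
      "args": ["mcp-remote", "https://example.invalid/your-timextender-mcp-endpoint"]
    }
  }
}
```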
The TimeXtender MCP Server gives you a practical way to expose the governed business context in your semantic models to AI tools people already use, including chat interfaces such as Claude, along with any AI agents you build. Your AI tools and agents now query through the semantic model’s validated entities, relationships, and metrics, so results align with approved business logic.

What this enables in practice:
Natural language queries that map to approved definitions: Users can ask questions like “How did revenue perform last quarter by region?” and the AI can answer using the semantic model’s definitions, relationships, and metrics, rather than choosing a random revenue field or inventing joins. That is the difference between quick answers and decision-grade answers.
Faster iteration for analysts: Analysts can use chat to explore a question, refine filters, compare time periods, and drill into segments without rewriting queries from scratch. Because the semantic model defines the business logic, analysts spend their time on analysis, not on re-validating whether a query matched the approved definition.
Dashboard and reporting acceleration: When the AI can reliably retrieve the right result set, it can also help generate downstream outputs such as draft charts and dashboard layouts. The time saved comes from eliminating the back-and-forth needed to reconcile definitions: with the semantic model already in place, the workflow can move from question to a usable dashboard in seconds.
Auditability and governance for AI-driven analysis: Because queries are constrained to what the semantic model exposes, and because the MCP Server supports audit logging and read-only query validation, teams have a clearer governance posture than “an AI tool pointed at the database.” That matters when business users start relying on AI outputs in recurring decision processes.
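The governance claim above can be pictured with a small sketch. This is not TimeXtender's implementation, just an assumed shape for "read-only query validation plus audit logging": reject any statement that is not a plain read, and record who asked what before anything executes.

```python
import datetime

AUDIT_LOG = []  # stand-in for durable audit storage

def validate_and_log(sql: str, user: str) -> str:
    """Allow only read statements, and append an audit record for each query."""
    first_word = sql.lstrip().split(None, 1)[0].upper() if sql.strip() else ""
    if first_word not in ("SELECT", "WITH"):
        raise PermissionError(f"read-only endpoint rejected: {first_word or '<empty>'}")
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "sql": sql,
    })
    return sql
```

Checking only the first keyword is deliberately naive; a production validator would parse the full statement, since some dialects allow writes inside CTEs. The point is the shape: every AI-issued query passes one choke point that both constrains and records it.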
Once you can trust the answers, the next question is whether your data architecture can support AI and analytics at scale without extra staging, extra copies, and extra operational work. This release expands Snowflake-first patterns by adding native Snowflake storage in Ingest and broadening what’s supported for Prepare on Snowflake storage.
What’s new for Snowflake in this release:
The practical outcome is flexibility. Teams standardizing on Snowflake can simplify their end-to-end flow, and teams operating mixed environments can keep moving while Snowflake coverage continues to expand.
Governed semantics are necessary for reliable answers, but they do not remove the day-to-day work of keeping data dependable and pipelines healthy. This release introduces two XPilot capabilities designed to reduce manual effort where teams spend real time today.
What’s in Preview:
Many teams are standardizing their semantic layer for consistency, then distributing that logic into the tools business users live in every day. This release adds a Qlik Cloud Deliver endpoint so you can push TimeXtender semantic models into Qlik Cloud using recommended APIs and patterns, reducing custom work and ongoing maintenance.
What this enables:
If your organization uses Qlik Cloud for analysis and dashboards, this is the simplest path to keep governed semantics central while still meeting users where they work.
If your team builds multiple data products or serves several teams from the same prepared datasets, this release adds a simpler way to reuse work without duplicating logic.
This announcement includes two types of updates:
General Availability (GA) in this release
Preview in this release (Early Access)
Next steps