Dataface Tasks

Expand dashboard search to return variable-scoped deep links

ID: MCP_ANALYST_AGENT-EXPAND_DASHBOARD_SEARCH_TO_RETURN_VARIABLE_SCOPED_DEEP_LINKS
Status: not_started
Priority: p1
Milestone: m5-v1-2-launch
Owner: data-ai-engineer-architect

Problem

search_dashboards can help an agent find the right dashboard, but the handoff is still too weak. Today the result is effectively "here is the dashboard," not "here is the dashboard already scoped to what the user asked for."

That is a gap for workflows like:

  1. "Take me to the Acme renewal dashboard for Q4."
  2. "Show me the support dashboard filtered to customer = Stripe and severity = P1."
  3. "Open the weekly revenue dashboard for EMEA for last month."

In all of those cases, the useful outcome is not merely identifying a dashboard with matching semantics. The useful outcome is a link or route target that opens the dashboard with the relevant variable state already applied.

We need dashboard search to expand from discovery to actionable navigation:

  • search should rank dashboards partly by whether they can satisfy the requested filters through existing variables
  • the agent should be able to infer candidate variable bindings from the request
  • the system should return a concrete navigation target with those bindings encoded safely
  • the receiving dashboard surface should apply those bindings predictably instead of silently falling back to defaults

This matters both for agent UX and human UX. If a user asks for "the Acme dashboard" and lands on an unfiltered default dashboard, the system feels dumb even if it technically found the right artifact.

Context

Relevant current surfaces and files:

  • dataface/ai/mcp/search.py — current search_dashboards implementation
  • dataface/ai/mcp/tools.py — MCP tool wrappers and return shapes
  • dataface/ai/tools.py — shared tool dispatch used by Cloud, Playground, terminal, and MCP consumers
  • apps/cloud/ dashboard routes/templates — receiving web surface for dashboard navigation
  • variable/URL handling docs under docs/docs/variables/ and interactive HTML notes in ai_notes/considerations/INTERACTIVE_VARIABLES_HTML.md
  • tasks/workstreams/mcp-analyst-agent/tasks/add-eval-loop-for-dashboard-search-and-variable-scoped-navigation.md — companion eval task that should become the regression harness for this work

Current behavior gap:

  • search_dashboards is useful for retrieval, but not for opening a dashboard in a user-specific or context-specific state
  • existing links generally assume default variables or manual post-click filtering
  • there is no explicit contract for "search result + resolved variable bindings + open URL"

Why this came up now:

  • Ridge's recent linked-interactivity announcement highlights that agent value increases when conversational context can hand off into an already-scoped analytical surface
  • we should support the simpler but high-value version of that idea now: when search finds a dashboard, it should be able to send the user directly into the right dashboard state

Constraints:

  • keep YAML and declared variables as the source of truth; do not invent ad hoc hidden filters
  • no magic fallback: if a requested filter cannot map cleanly onto declared dashboard variables, the result should say so clearly
  • support safe serialization for the variable types we actually own (string/select, number, date/date-range, boolean, maybe multi-select) rather than shoving arbitrary JSON into URLs
  • the result contract must work across MCP/agent use and Cloud web navigation
  • deep links should be stable enough to copy and share, not just ephemeral local UI state

Desired end state:

  • a user or agent can ask for an existing dashboard in a specific scope
  • search returns the best matching dashboard plus an open_url or route payload with concrete variable bindings
  • Cloud can open that dashboard with those bindings applied and visible

Possible Solutions

Option A: extend search_dashboards in place

Extend search_dashboards to:

  1. inspect candidate dashboards for declared variables
  2. infer requested variable bindings from the query and optional structured context
  3. score results higher when variable coverage is strong
  4. return an open_url plus structured variable_bindings

Example shape:

{
  "dashboard_id": "...",
  "title": "Renewal Risk Dashboard",
  "reason": "Matches renewal analysis and supports customer + quarter filters",
  "variable_bindings": {
    "customer": "Acme",
    "quarter": "2025-Q4"
  },
  "open_url": "/org/acme/project/revenue/dashboards/renewal-risk?customer=Acme&quarter=2025-Q4"
}
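As a sketch, this contract could be a typed structure on the Python side. The field names mirror the JSON example, and the class name plus the shape of binding_status (mentioned in the contract step of the plan) are illustrative assumptions, not a settled design:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the proposed result contract.
# Field names follow the JSON example; binding_status carries the
# per-variable resolution notes the contract step calls for.
@dataclass
class DashboardSearchResult:
    dashboard_id: str
    title: str
    reason: str
    variable_bindings: dict[str, str] = field(default_factory=dict)
    # e.g. {"customer": "resolved", "region": "unsupported"}
    binding_status: dict[str, str] = field(default_factory=dict)
    open_url: str = ""

result = DashboardSearchResult(
    dashboard_id="dash-123",
    title="Renewal Risk Dashboard",
    reason="Matches renewal analysis and supports customer + quarter filters",
    variable_bindings={"customer": "Acme", "quarter": "2025-Q4"},
    binding_status={"customer": "resolved", "quarter": "resolved"},
    open_url="/org/acme/project/revenue/dashboards/renewal-risk?customer=Acme&quarter=2025-Q4",
)
```

Keeping binding_status separate from variable_bindings lets a consumer render "Open dashboard" confidently while still surfacing which requested filters did not map.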

Pros: Smallest concept count. Keeps search as the main entry point. Easy for chat agents and UI surfaces to consume.

Cons: Requires variable inference and URL-contract work in the same task.

Option B: separate deep-link resolution tool

Flow:

  1. search_dashboards finds likely dashboards
  2. a second tool resolves one chosen dashboard into a deep link with explicit variable bindings

Pros: Cleaner separation between retrieval and navigation.

Cons: Worse UX for common flows, more tool choreography, and the ranking layer still cannot prefer dashboards with stronger variable fit unless some of that logic leaks back into search.

Option C: UI-only patch in Cloud

Keep MCP/tool responses unchanged. Let Cloud chat parse search results and build variableized URLs locally in the browser or Django view layer.

Pros: Lower immediate MCP surface churn.

Cons: Wrong abstraction boundary. Terminal/MCP/IDE users would not get the improvement, and Cloud would grow bespoke logic for behavior that belongs in the shared search/navigation layer.

Plan

Recommended approach: make dashboard search return actionable deep links with explicit variable bindings.

Implementation steps

  1. Define the contract

    • Decide the response shape for navigable results:
      • variable_bindings
      • binding_status or per-variable resolution notes
      • open_url or an equivalent route payload
    • Decide what counts as a resolvable binding vs an unsupported request
  2. Add dashboard-variable awareness to search

    • Enrich search indexing/scoring with variable metadata from dashboard YAML
    • Prefer dashboards whose declared variables can express the requested scope
    • Keep pure semantic match, but add a "can this dashboard actually open in the requested state?" factor
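One way to fold variable coverage into ranking: boost the semantic score by the fraction of requested filters the dashboard's declared variables can express. The function name and the 0.3 boost weight are assumptions for discussion, not the real ranking implementation:

```python
# Sketch: combine semantic relevance with variable coverage.
# `semantic_score`, `declared_variables`, and the 0.3 weight are
# illustrative assumptions.
def rank_score(semantic_score: float,
               declared_variables: set[str],
               requested_filters: dict[str, str]) -> float:
    """Boost dashboards whose declared variables can express the request."""
    if not requested_filters:
        return semantic_score
    covered = sum(1 for name in requested_filters if name in declared_variables)
    coverage = covered / len(requested_filters)
    return semantic_score * (1.0 + 0.3 * coverage)

# At equal semantic relevance, a dashboard that covers both requested
# filters outranks one that covers neither.
full = rank_score(0.8, {"customer", "quarter"}, {"customer": "Acme", "quarter": "2025-Q4"})
none = rank_score(0.8, {"region"}, {"customer": "Acme", "quarter": "2025-Q4"})
```

A multiplicative boost (rather than replacing the semantic score) keeps pure retrieval quality dominant while still preferring dashboards that can actually open in the requested state.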

  3. Implement binding inference

    • Parse likely variable values from the user request and/or structured context
    • Map those values only onto declared dashboard variables
    • Reject ambiguous or lossy mappings rather than guessing silently
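The "map only onto declared variables, never guess" rule could look like the sketch below. The schema shape (variable name to allowed values, None for free-form) and the status labels are hypothetical:

```python
from __future__ import annotations

# Sketch of binding resolution against declared dashboard variables.
# `declared` maps variable names to allowed values (None = free-form);
# names, shapes, and status labels are illustrative assumptions.
def resolve_bindings(requested: dict[str, str],
                     declared: dict[str, list[str] | None]) -> tuple[dict[str, str], dict[str, str]]:
    bindings: dict[str, str] = {}
    status: dict[str, str] = {}
    for name, value in requested.items():
        if name not in declared:
            # No declared variable can carry this filter: report, don't drop.
            status[name] = "unsupported"
            continue
        allowed = declared[name]
        if allowed is not None and value not in allowed:
            # Do not silently substitute a close match.
            status[name] = "unresolved"
            continue
        bindings[name] = value
        status[name] = "resolved"
    return bindings, status

bindings, status = resolve_bindings(
    {"customer": "Acme", "severity": "P1"},
    {"customer": ["Acme", "Stripe"], "quarter": None},
)
# customer resolves; severity has no declared variable and is flagged.
```

Returning the status map alongside the bindings is what makes the "no magic fallback" constraint enforceable at the contract level.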

  4. Implement safe deep-link serialization

    • Standardize URL/querystring encoding for supported variable types
    • Ensure round-trip behavior on the dashboard view side
    • Validate that receiving routes apply values consistently and visibly
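A minimal serialization sketch for the supported variable kinds, using stdlib URL encoding. The per-type rules (ISO dates, comma-joined multi-select, lowercase booleans) are assumptions to be settled in the contract, not an existing convention:

```python
import datetime
from urllib.parse import urlencode

# Sketch of typed querystring serialization for the variable types we
# plan to support; per-type rules here are assumptions for discussion.
def serialize_value(value) -> str:
    if isinstance(value, bool):          # check bool before int: bool is an int subclass
        return "true" if value else "false"
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, datetime.date):
        return value.isoformat()
    if isinstance(value, (list, tuple)): # multi-select: comma-joined values
        return ",".join(serialize_value(v) for v in value)
    if isinstance(value, str):
        return value
    raise TypeError(f"unsupported variable type: {type(value).__name__}")

def build_open_url(base: str, bindings: dict) -> str:
    # urlencode percent-escapes values, so arbitrary strings stay URL-safe.
    query = urlencode({k: serialize_value(v) for k, v in bindings.items()})
    return f"{base}?{query}" if query else base

url = build_open_url(
    "/org/acme/project/revenue/dashboards/renewal-risk",
    {"customer": "Acme", "quarter": "2025-Q4"},
)
```

Raising on unknown types, rather than falling back to repr or JSON, is what keeps arbitrary payloads out of shareable URLs.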

  5. Wire agent/UI surfaces

    • Update MCP/openai tool output contracts if needed
    • Ensure chat/search consumers can present "Open dashboard" affordances using the returned deep link
    • Prefer explicit messaging when only partial bindings were applied

  6. Add regression coverage

    • Search result includes open_url when bindings resolve cleanly
    • Search result prefers dashboards with matching variable coverage
    • Unsupported variable types or ambiguous mappings surface clear explanations
    • Dashboard route applies encoded variables correctly
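The round-trip part of that coverage can be checked with a stdlib-only helper: the encoded querystring should parse back into exactly the bindings that were applied. The URL and variable names below are illustrative:

```python
from urllib.parse import parse_qs, urlsplit

# Sketch of a round-trip regression check for the receiving route:
# decode the querystring back into the bindings it should apply.
def applied_bindings(url: str) -> dict[str, str]:
    query = urlsplit(url).query
    # parse_qs returns lists; dashboard variables here are single-valued.
    return {k: v[0] for k, v in parse_qs(query).items()}

url = "/org/acme/project/revenue/dashboards/renewal-risk?customer=Acme&quarter=2025-Q4"
assert applied_bindings(url) == {"customer": "Acme", "quarter": "2025-Q4"}
```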

Files likely involved

  • dataface/ai/mcp/search.py
  • dataface/ai/mcp/tools.py
  • dataface/ai/tools.py
  • Cloud dashboard routing / variable-application code under apps/cloud/
  • tests around MCP search and dashboard view routing

Open questions

  • Should search_dashboards accept optional structured filters in addition to free text?
  • Do we want a distinction between shareable canonical links and transient "open now" route state?
  • How do we represent partial success: matched dashboard, but only 1 of 3 requested filters mapped cleanly?

Implementation Progress

Not started.

QA Exploration

  • [ ] QA exploration completed (or N/A for non-UI tasks)

Expected QA once implemented:

  • verify chat/search can open an existing dashboard already filtered to the requested customer/time scope
  • verify the receiving dashboard visibly reflects applied variables
  • verify unsupported or partial bindings are explained instead of silently ignored

Review Feedback

  • [ ] Review cleared