Looker-to-Dataface migration skill via Looker API or CLI
Problem
Teams migrating from Looker to Dataface need a repeatable way to pull dashboard definitions (queries, tiles, filters) out of Looker and land them as git-friendly YAML. Ad-hoc copy-paste loses structure, duplicates query logic across tiles, and ignores how Looker actually namespaces work (models, explores, views, fields). Without a documented skill and optional automation, every migration reinvents discovery, auth, and mapping—and dbt-native query ownership stays misaligned with how explores grouped fields in Looker.
Context
- Looker exposes dashboards, looks, and underlying query specs via the Looker API; some orgs also use the `gzr` CLI / project workflows for scripted export—the two have different auth and completeness trade-offs.
- Dataface faces reference named queries (often dbt-backed); migrated content must not silently invent SQL—gaps should be explicit (see core “no magic” rules).
- Repo pattern for agent workflows: `.codex/skills/` (and registration in `AGENTS.md` / sync as needed). Any Python tool likely lives under `dataface/cli/` or a small `scripts/` entry with tests in `tests/`.
- Prior art in-repo: the quickstart dashboard research skill and YAML layout docs under the dataface examples; no Looker importer today.
Possible Solutions
- Looker API–first Python CLI — Authenticate with API keys, list dashboards, fetch dashboard elements and merged query JSON, emit a folder tree. Pros: rich metadata; no local Looker dev env required. Cons: rate limits; must map API shapes to Dataface types yourself.
- Looker CLI / project export–first — Export LookML, or use the CLI where the org already standardizes on it. Pros: aligns with version-controlled LookML workflows. Cons: not all tile state lives in LookML; dashboard JSON may still need the API.
- Hybrid (recommended) — Use the API for dashboard and tile/query payloads; optionally cross-reference LookML from the repo for field labels, descriptions, and `sql_table_name` hints. Document when operators must run API-only vs repo+API.
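The API-first fetch step above can be sketched with the official Looker Python SDK (`looker_sdk`). The SDK calls (`init40`, `dashboard`, `dashboard_elements`) are real, but the payload shape and helper name below are assumptions for illustration, not the final CLI:

```python
def fetch_dashboard_payload(sdk, dashboard_id: str) -> dict:
    """Pull one dashboard plus the query spec behind each tile.

    `sdk` is the client returned by `looker_sdk.init40()` (reads
    `looker.ini` or `LOOKERSDK_*` env vars); it is passed in as a
    parameter so this sketch stays testable without network access.
    """
    dash = sdk.dashboard(dashboard_id=dashboard_id)
    tiles = []
    for element in dash.dashboard_elements or []:
        # A tile's query lives either on the element itself or on the
        # look it embeds; text tiles have neither.
        query = element.query or (element.look.query if element.look else None)
        tiles.append({"title": element.title, "type": element.type, "query": query})
    return {"slug": dash.slug, "title": dash.title, "tiles": tiles}
```

Recording this raw shape during the spike (Plan step 1) gives the mapping table something concrete to point at.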
Output layout (recommended direction): Organize by explore (or view when tiles are view-scoped) so shared fields and filters collect once:
- `faces/imports/looker/<model>/<explore>.yml` (or `.yaml`) — query fragments / named queries derived from Looker queries (dimensions, measures, filters), with comments linking Looker field IDs to intended dbt models.
- `faces/<dashboard_slug>.yml` — layout and charts that import or reference those named queries, mirroring how multiple tiles reuse the same explore in Looker.
Alternative: one file per Looker dashboard only—simpler at first, but maximizes duplication when many tiles share an explore; call it out as a conscious shortcut for small migrations.
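To make the layout concrete, a fragment of what an explore-scoped import file might look like — the key names (`queries:`, `dimensions:`, `measures:`) are hypothetical placeholders, not the actual Dataface schema; consult the YAML layout docs before committing to a shape:

```yaml
# faces/imports/looker/ecommerce/orders.yml — illustrative only
queries:
  orders_by_day:
    # Looker: ecommerce::orders.created_date, orders.count → dbt model TBD
    dimensions: [created_date]
    measures: [order_count]
    filters:
      status: "-cancelled"
```

Tiles in `faces/<dashboard_slug>.yml` would then reference `orders_by_day` by name instead of each carrying a copy of the field list.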
Plan
- Spike: authenticate and fetch one dashboard + all tile queries via API; record raw JSON shapes and a minimal mapping table to Dataface query/chart concepts.
- Decide official transport for M2: document API as default, CLI as optional supplement for LookML-heavy orgs.
- Author
.codex/skills/<name>/SKILL.md: prerequisites (credentials, base URL), step-by-step migration, folder convention above, and an explicit gap list (drill menus, liquid, merged results, custom extensions). - Implement minimal generator (script or CLI subcommand): input = dashboard id or slug; output = proposed directory tree with stub queries and a face skeleton; fail loudly on unsupported tile types.
- Add one fixture-based or golden-file test if the generator is code; otherwise skill-only, with a checked-in example export snippet in `ai_notes/` or docs—pick the smallest artifact that proves repeatability.
- Validate task frontmatter after edits: `just task validate <this-file>`.
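The generator step in the plan above reduces to a small pure function: group tile queries by model/explore (Looker's query object calls the explore `view`) and propose file paths under the recommended layout, raising instead of guessing on unknown tile types. A minimal sketch, with the supported-type set and function name as assumptions:

```python
SUPPORTED_TYPES = {"vis", "table"}  # assumption: extend as the mapping matures


def plan_output_tree(dashboard_slug: str, elements: list[dict]) -> dict[str, list[str]]:
    """Propose the faces/imports tree for one dashboard.

    Raises on any tile type we do not know how to map — fail loudly
    rather than silently inventing SQL ("no magic").
    """
    tree: dict[str, list[str]] = {f"faces/{dashboard_slug}.yml": []}
    for el in elements:
        if el.get("type") not in SUPPORTED_TYPES:
            raise ValueError(f"unsupported tile type {el.get('type')!r}: {el.get('title')}")
        q = el["query"]
        # Looker's query payload names the explore "view".
        path = f"faces/imports/looker/{q['model']}/{q['view']}.yml"
        tree.setdefault(path, []).append(el.get("title") or "untitled")
    return tree
```

Because it takes plain dicts, the same function is trivially covered by the fixture/golden-file test the plan calls for.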
Implementation Progress
QA Exploration
N/A — skill/CLI and YAML generation; no product UI change. Spot-check generated YAML with `dft` compile/render on a sample migration.
- [x] QA exploration completed (or N/A for non-UI tasks)
Review Feedback
- [ ] Review cleared