Three frameworks.
88 scoring units.
One diagnostic.
Team Operations measures how your team operates. Development Lifecycle measures how you build. Product Assessment measures what you ship. The diagnostic is the divergence between them.
A strong team doesn't guarantee a strong product. A disciplined build doesn't guarantee what you ship is AI-native. An AI-native product doesn't guarantee the team can sustain it. The diagnostic is the gap.
Each framework answers one question.
Is your team operating at the level it needs to?
Is your team building AI products the way they should be built?
Is your product actually AI-native, or is it AI theater?
The answer to each is a stage from 1 (Foundation) to 5 (Compounding). The three answers together are the diagnostic.
One framework is a scorecard.
Three frameworks, read together, become a diagnostic.
The tension engine reads all three frameworks and names the pattern. Every scored team lands in one of three patterns, or in an aligned state below Stage 3.
Your team is more mature than your product. Capability isn't translating into outcomes.
Triggered when Team Operations stage is higher than Product Assessment stage by one or more stages. The Development Lifecycle axis tells you whether the gap sits in how you build or in what you decide to build.
Your product is more AI-native than your team can sustain. Ship velocity may exceed operational maturity.
Triggered when Product Assessment stage is higher than Team Operations stage. The Development Lifecycle axis tells you whether you're shipping fast with discipline, or fast without it.
Your team, your build, and your product are all at Stage 3 or above and aligned. Compounding advantage available.
Triggered when all three frameworks score Stage 3 or higher and no gap exceeds one stage. The rarest pattern. The one every team is trying to reach.
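In code, the trigger logic reduces to a few comparisons. A minimal sketch in TypeScript; the names are illustrative rather than Dacard's actual API, and the precedence (the aligned check runs first, so a one-stage gap at Stage 3+ reads as aligned rather than tension) is an assumption.

```typescript
// Stage is an integer from 1 (Foundation) to 5 (Compounding).
type Stage = 1 | 2 | 3 | 4 | 5;

interface Scores {
  teamOps: Stage;      // Team Operations: how your team operates
  devLifecycle: Stage; // Development Lifecycle: how you build
  product: Stage;      // Product Assessment: what you ship
}

type Pattern = "team-ahead" | "product-ahead" | "aligned" | "aligned-below-3";

function namePattern({ teamOps, devLifecycle, product }: Scores): Pattern {
  const stages = [teamOps, devLifecycle, product];
  const maxGap = Math.max(...stages) - Math.min(...stages);

  // Aligned: all three at Stage 3 or higher, no gap over one stage.
  // Checked first (an assumed precedence).
  if (stages.every((s) => s >= 3) && maxGap <= 1) return "aligned";

  if (teamOps > product) return "team-ahead";    // capability not translating
  if (product > teamOps) return "product-ahead"; // velocity outrunning ops
  return "aligned-below-3"; // no team/product gap, not yet Stage 3 across the board
}
```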
The pattern is the diagnosis.
The ranked gap list is the prescription.
The next improvement is the action.
How scoring works.
Dacard reads signals from 50+ tools your team already uses. GitHub commits, Linear issues, Slack channels, Figma files, Notion docs, PostHog events, Sentry errors, website content. Each signal maps to one or more framework units via a calibrated inference map.
Each of the 88 units is scored 1 to 5 by an LLM reading the signals against a rubric. The rubric for each unit is public. Every score surfaces the evidence it's based on and the confidence level. Low-confidence units prompt the user to add context.
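One way to picture the per-unit output, as a sketch: the field names here are assumptions, but each score carries its stage, the evidence behind it, and the confidence level that gates the prompt for more context.

```typescript
// Hypothetical shape of one scored unit; field names are illustrative.
interface UnitScore {
  unitId: string;                                      // one of the 88 units
  framework: "team-ops" | "dev-lifecycle" | "product";
  stage: 1 | 2 | 3 | 4 | 5;  // scored against the unit's public rubric
  evidence: string[];        // the signals the score is based on
  confidence: "low" | "medium" | "high";
}

// Low-confidence units are the ones that prompt the user for context.
const needsContext = (scores: UnitScore[]): UnitScore[] =>
  scores.filter((s) => s.confidence === "low");
```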
Scoring is anchored by five named archetypes per framework. The archetypes are benchmark teams (AI-First Studio, Eng-Heavy Series A, Pre-AI SaaS, AI Wrapper, Compounding Ops) that define what each stage looks like in practice. New scores calibrate against the archetype distribution.
Each framework has theater patterns that cap scores when claims exceed evidence. Eval theater, context theater, emergence theater, cost theater, spec theater, harness theater. Twelve patterns across three frameworks. The theater check is automatic. You can't score your way into Stage 4 by claiming what the code doesn't show.
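The cap itself is mechanical: a fired pattern puts a ceiling on the unit's stage, whatever the claims say. A sketch, with assumed names and shapes:

```typescript
// Sketch of a theater check. Names and cap values are assumptions,
// not Dacard's published thresholds.
interface TheaterPattern {
  name: string;                           // e.g. "eval-theater"
  fires: (evidence: string[]) => boolean; // claim exceeds what the evidence shows
  cap: number;                            // max stage while the pattern fires
}

// A fired pattern caps the unit's stage; it never raises it.
function applyTheaterCaps(
  stage: number,
  evidence: string[],
  patterns: TheaterPattern[],
): number {
  return patterns.reduce(
    (s, p) => (p.fires(evidence) ? Math.min(s, p.cap) : s),
    stage,
  );
}
```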
An initial benchmark cohort of hand-scored companies per framework anchors the distribution. Known AI-native companies, traditional SaaS adopting AI, and design-partner cohort teams. Your score is positioned against the distribution, not against an arbitrary 100-point scale.
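Positioning against a distribution is simple to picture: a percentile over the benchmark cohort. A sketch, assuming per-framework stage averages as the input:

```typescript
// Position a score against the benchmark distribution rather than
// a 100-point scale. `cohort` is the hand-scored benchmark set;
// its shape here is an assumption.
function percentileOf(score: number, cohort: number[]): number {
  const below = cohort.filter((c) => c < score).length;
  return Math.round((100 * below) / cohort.length);
}

// percentileOf(3.4, [2.1, 2.8, 3.0, 3.9, 4.2]) === 60
```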
Framework v1.0 (April 2026) is locked through Q2 2027. Rubrics and stage definitions don't change during v1.0. Signals underneath evolve continuously. Your Stage 2 in April and your Stage 3 in October are comparable because the rubric didn't change underneath them.
Why three frameworks.
Most measurement tools cover one framework. Jellyfish measures engineering (one of six Team Operations functions). LinearB measures dev workflow. Swarmia measures engineering effectiveness. Amplitude measures product usage. Each answers one question.
Three frameworks answer a different question: where's the divergence?
Team at Stage 4 and product at Stage 2 isn't the same diagnosis as team at Stage 2 and product at Stage 2. Same product score, different prescription. That's what a single-framework tool can't tell you.
Development Lifecycle is the newest framework. It's what turns two-framework tension analysis (team vs product) into three-framework diagnosis (team vs build vs product). A gap in what you decide to build and a gap in how you build are different problems with different fixes.
Three frameworks is the minimum you need to name the pattern correctly.
Two tell you there's a problem.
Three tell you which problem.
See your pattern.
Score your product in 2 minutes. Free. No sign-up. Just your product's website.