SHUR IQ / Lyapunov Stability Layer / Research Brief / May 2026
Research Brief — Lyapunov Stability Layer v0.2 — May 1, 2026

What if a brand's score isn’t a measurement — but a position?

Every week the ShurIQ panel scores 21 micro-drama brands across five dimensions and publishes a composite. Some scores climb. Some collapse. Some bounce back; others don’t. The composite reads where each brand sits this week — it doesn’t say whether that position is one the brand’s dynamics will hold, or one it’s about to slide off. So we asked: what would it look like to treat the score as a state in a dynamical system, and ask whether the system has a stable equilibrium underneath each brand? This brief reports an overnight run of that experiment on eight weeks of real data.

0.714 — Real-data validation, within 0.003 of the synthetic baseline
2 — Basins recovered by the per-cluster fit (16 + 5 brands)
3 — Brands flagged structurally unstable inside the active basin
7× — Capital-efficiency spread on a single intervention across the V gradient
The frame shift

The composite reads where a brand is. The Lyapunov layer reads whether the dynamics hold it there. A high composite on a steep slope is structurally fragile — the score is not the system. Three of the top ten W17 brands carry composites above 60 with V values that put them outside their basin radius. Both readings live alongside each other; neither replaces the other.

1 — The question that started this

A prompt sent to a Claude Code session, late on April 30

The original ask was direct: take this Lyapunov stability scaffold (synthetic-data tested) and run it on real ShurIQ panel data tonight. Don’t simulate — use the W10–W17 micro-drama state archive that the weekly publish already produces. Update the dimension columns to match the real five (content strength, narrative ownership, distribution power, community strength, monetization infrastructure). Run the experiment, write the report, surface anything interesting that wasn’t in the task list.

The motivating intuition was a long-running unease with the way composite scores are read. Composite scores are observations — they describe altitude. But a brand’s altitude doesn’t tell you whether the ground underneath is firm. A brand can score 78 today because last week was good, while sitting in a part of the state space the dynamics are pulling it away from. A different brand can score 60 in a place the dynamics will hold it for a year. The composite cannot tell those two stories apart.

In one sentence
We wanted a second reading on each brand — not where it is, but whether the dynamics will keep it there.

2 — What got built overnight

A pipeline, two augmented sites, five motion graphics, one fresh stack ranking

The overnight run produced a real-data Lyapunov fit on the W10–W17 micro-drama panel and a per-cluster fit that recovered the vertical’s two basins. It produced a per-brand stability classification, a disturbance-recovery probe (ISS), and a capital-efficiency calculator that ranks ten archetypal interventions per brand. It rebuilt the W17 stack ranking with a stability tab, deployed two new sites, and generated five motion graphics illustrating the concept. Every output is reproducible from the artifacts in out/.

What you’re reading right now

This brief is one of the two sites that came out of the overnight run. The other is the W17 micro-drama republish with a Stability tab added (microco-w17-stability.pages.dev) — same journalism, augmented with the new stability reading on every brand. Both sites are linked from the header.

3 — What this layer is measuring

Altitude is the composite. Slope is the new reading.

Picture the five-dimensional ShurIQ state space as a landscape. Each brand is a point on it. The composite is the brand’s altitude — a single number summarizing where it sits across the five dimensions. The Lyapunov function V(x) is a different scalar field on the same landscape: it’s the height of the slope the brand is standing on, measured from the nearest stable equilibrium. V = 0 means the brand is at the bottom of a valley. V growing means the brand is climbing away from the valley.

The composite says 78 / 100, ranked third. V says this position is 6.49 units up a slope and outside the basin radius of 4.41 — the dynamics aren’t pulling this brand back if it falls. A brand can score high on the composite and high on V at the same time: that’s a brand at altitude on a cliff. A brand can score modestly on the composite and low on V: that’s a brand in the valley, doing fine, robust to shocks. The whole point is that these are independent readings — both produced from the same panel, neither replacing the other.

Plain reading
Composite reads where the brand is right now. V reads whether the dynamics will hold the brand there. The Lyapunov layer is a fragility reading, not a score replacement.
“Your score dropped because you’re outside the stable basin and need structural intervention” is a different finding from “your score dropped due to a transient disturbance and will revert.” The composite cannot distinguish them. V can.

4 — How the layer works, in one screen

The mechanism, before the findings

Six pieces, each with one paragraph. The Concept tab and Methods tab go deeper; this is the orientation.

State vector x. Each brand at each week is a five-dimensional vector: [content_strength, narrative_ownership, distribution_power, community_strength, monetization_infrastructure], scaled 0–100. A brand’s trajectory across W10–W17 is eight points moving through this 5-D space.

Equilibrium x*. A point in the same 5-D space — the centroid of the cohort of brands whose final-quarter scores are high-mean and low-variance. It’s the point the panel’s steady-state brands have settled around. The basin of attraction is the surrounding region from which other brands are pulled toward it.

The Lyapunov function V. A learned scalar function V(x) = (x − x*)ᵀ P (x − x*), where P is fit by a semidefinite program. V(x*) = 0, V is positive everywhere else, and V decreases along observed trajectories. If a valid V exists, the equilibrium is structurally stable.
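Evaluating the fitted function is a one-liner. A minimal sketch, using the active-basin equilibrium reported later in this brief but an identity P for illustration (the real P comes out of the SDP fit):

```python
import numpy as np

# Illustrative stand-ins: x* is the active-basin equilibrium from this brief;
# the identity P is an assumption, not the fitted matrix.
x_star = np.array([60.8, 59.1, 73.1, 57.0, 66.3])
P = np.eye(5)

def lyapunov_V(x, x_star, P):
    """V(x) = (x - x*)^T P (x - x*): zero at x*, positive elsewhere for PSD P."""
    d = np.asarray(x, dtype=float) - x_star
    return float(d @ P @ d)

print(lyapunov_V(x_star, x_star, P))        # 0.0 at the equilibrium
print(lyapunov_V(x_star + 2.0, x_star, P))  # 20.0: five dims, each off by 2
```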

Validation. The fit is held out: 20% of brands are reserved, V is fit on the rest, then we score the fit by (a) the share of held-out transitions where V actually decreases and (b) how well low V predicts brand recovery. A combined score ≥ 0.65 means the fit is accepted. Real micro-drama hits 0.714.

Per-cluster extension. When a vertical hosts more than one regime, a single V is the wrong tool. K-means clustering partitions brands by their last-quarter position; a separate V is fit per cluster. On real micro-drama this lifts coverage from 0.714 to 0.923 — the discovery is that the vertical has two basins, not one.

The two new readings the pipeline outputs. First, stability class per brand — stable / marginal / unstable, a discrete label derived from V vs. basin radius. Second, capital efficiency per intervention — −ΔV / cost, ranking which interventions move a given brand toward equilibrium most cheaply. Both feed the augmented stack ranking on the W17 republish.

The smoke test, then the real run

Pipeline behavior preserved across synthetic and real panels

The pipeline was first run on three synthetic panels designed as positive and negative controls. A stable-equilibrium industry produced a quadratic-SDP fit with validation 0.717. A Lorenz-driven chaotic industry was correctly rejected at 0.512. A bistable two-attractor industry was rejected at 0.626 — the expected result for a single V trying to model two basins. Discrimination was confirmed before the real run.

On real W10–W17 panel data covering 21 micro-drama brands, the same pipeline produced validation 0.714 — within 0.003 of the synthetic baseline. Chaotic and bistable comparators on adjacent verticals (viral short-form, subscription streaming) re-rejected at 0.516 and 0.622 respectively. The pipeline transfers; behavior is preserved.

How to read 0.714
The validation score is on a 0–1 scale, so 0.714 reads as 71.4%. Two checks must clear simultaneously: V must decrease along held-out trajectories (internal consistency) and low V must predict which brands are recovering (external predictivity). 0.65 is the acceptance threshold; 0.714 sits comfortably above. Adjacent comparators rejected at 0.516 / 0.622 — the test discriminates.
Why this matters operationally

A null result — the layer reporting no stable function found for an industry — is itself a saleable insight. It tells the client that steady-state thinking will mislead in this regime, and that score-based targets should not be used as planning anchors. The chaotic-rejection signal is a deliverable, not a failure mode.

The structural finding: micro-drama is bimodal

Single-V fit at 0.714 hides two basins; clustered fit pushes coverage to 0.923

A single Lyapunov function assumes one basin. When a vertical hosts two regimes — leaders converging to one steady state, laggards drifting toward another — a single-V fit reports a depressed validation score that under-states the actual structure. K-means on last-quarter mean position, then quadratic SDP per cluster, recovers the basins.

The 21-brand micro-drama panel splits cleanly into a 16-brand active-competitor basin centered on [60.8, 59.1, 73.1, 57.0, 66.3] with cluster validation 0.895, and a 5-brand laggard basin centered on [27.7, 25.6, 28.5, 24.6, 16.5] with cluster validation 1.000. The laggard cohort: verza-tv, rtp, klip, both-worlds-freeli, mansa. Best-cluster decay fraction rises from 0.714 to 0.923.

What clustering buys
A single Lyapunov function can describe one bowl. The W17 board has two — K-means clustering (a standard partitioning algorithm) splits the 21 brands by where they sit in 5-D, then a separate V is fit per group. Each group is internally well-described, so the combined score rises sharply.

This is not a fitting artifact. It is the pipeline detecting the polarization that micro-drama analysts describe qualitatively — a tight pack of pure-play competitors and a long tail of subscale or exiting platforms — and quantifying it as basin geometry.

↑1
ReelShort overtakes DramaBox on stability tiebreak
Both brands carry composite 83.5. ReelShort sits at V 1.97, inside basin radius 4.41. DramaBox at V 2.48 is marginal. When composites tie, the Lyapunov-adjusted ranking promotes the structurally tighter brand.

Where the new ranking departs from the composite

Three brands inside the active-competitor cluster sit outside the basin radius

Composite-only ranking puts DramaBox first and ReelShort second by 0.05 of a point. The Lyapunov-adjusted ranking flips them on stability, with ReelShort's lower V breaking the tie. The more consequential moves happen lower in the table.

Disney holds composite rank 3 at 78.0 with V 6.49 — outside basin radius 4.41 and classified unstable on the single-V reading. Netflix drops two ranks (7→9) with composite 62.9 and V 4.49. Google/100Zeros drops four ranks (9→13) with composite 61.4 and V 10.13 — the worst structural fragility on the board. All three carry above-mean composites, and all three sit outside the active-competitor basin. Their scores are observations of a position the dynamics do not currently support.

Composite vs. stability
Where composite and stability disagree: the composite reads where the brand is right now; V reads whether the dynamics will hold the brand there. A brand can score 78 today (Disney) but sit at V 6.49 — high altitude on a steep slope. Conversely, a brand can score 60 (GoodShort) but sit at V 0.40 — modest altitude on flat ground.

In the other direction, GoodShort climbs two ranks (10→8) with V 0.40 — the tightest grip on equilibrium in the active basin. CandyJar, Amazon, ShortMax, ViU, and Lifetime A&E each move up one rank on the same logic.

Capital efficiency: the V gradient as a budget multiplier

Same intervention, different brands, 7× spread in $ per unit of V reduction

Once V is fit, every candidate intervention has a Lyapunov-implied capital efficiency: −ΔV / cost-in-millions. Higher is better — a positive value means the intervention pulls the brand toward equilibrium per dollar spent. The interesting structure is what happens when the same intervention is applied across different starting positions.

shortmax sits near equilibrium at V 0.33. Its top-ranked intervention — a lean brand refresh at $0.35M — produces ΔV of −0.081, capital efficiency 0.233. lifetime-ae is marginal at V 2.44. A community / fandom program at $1.20M produces ΔV of −0.896, capital efficiency 0.747. google-100zeros is far from basin at V 10.13. The same community / fandom program at the same $1.20M produces ΔV of −2.033, capital efficiency 1.694 — roughly 7× the per-dollar V reduction of the near-basin brand.

Brands far from equilibrium see larger absolute ΔV from the same dollar. The V gradient formalizes the practitioner's intuition that struggling brands have more to gain from focused intervention — and gives a concrete ranking when several interventions are on the table.

This is the foundation for a control-Lyapunov-function (CLF) layer in the next sprint: turn discrete intervention scoring into an LP/QP optimization that picks the Δx maximizing −ΔV subject to a budget constraint. The current calculator scores ten archetypal interventions; the CLF layer would score the continuous space.

Why the gradient matters
V grows quadratically with structural distance from x* — double the distance, quadruple the V; quadruple the distance, sixteen times the V. The same intervention shaves more V off a brand sitting at the steeper part of the curve. This is the formal version of "struggling brands have more to gain."
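A toy calculation makes the gradient effect concrete. With an identity P, an equilibrium at the origin, and a one-dimensional offset (all pure illustration, not fitted values), the same Δx removes far more V from the brand that starts farther out:

```python
import numpy as np

x_star = np.zeros(5)                 # equilibrium at the origin, for illustration
P = np.eye(5)                        # identity P: V is plain squared distance

def V(x):
    d = x - x_star
    return float(d @ P @ d)

step = np.array([-1.0, 0, 0, 0, 0])  # the same intervention Delta-x for both brands
near = np.array([2.0, 0, 0, 0, 0])   # close to x*: V = 4
far  = np.array([10.0, 0, 0, 0, 0])  # far from x*: V = 100

print(V(near + step) - V(near))      # -3.0
print(V(far + step) - V(far))        # -19.0: same step, ~6x the V reduction
```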

What this is and is not

Honest scope notes

The fitted V is empirical, not formally proven. Real SOS verification requires polynomial dynamics; the candidate function here is a learned object that satisfies decay on observed trajectories. It is useful, not bulletproof. Industry parameters are treated as fixed during the fit, so the layer must be re-run periodically against the rolling panel.

A brand inside the basin is associated with future stability; the framework does not by itself prove that a given intervention will land the brand inside the basin. CLF-based recommendations are the next step in closing that gap. Causal validation against forward-looking outcomes belongs to a separate workstream.

The layer is not a replacement for the composite score. It is a complement — a structural reading that runs alongside the measurement reading and tells the client which composite values to trust as steady-state targets and which to treat as transient.

Concept — Lyapunov stability for brand intelligence

What V means, what a basin of attraction means, why it matters here.

A short reference for the regulatory-grade reader. The math is canonical control-theory; what's new is applying it to brand state vectors fit from panel data.

The state vector

Five SBPI dimensions treated as coordinates in R⁵

Each brand's position is a five-dimensional vector x ∈ R⁵:

x₁ = content strength
x₂ = narrative ownership
x₃ = distribution power
x₄ = community strength
x₅ = monetization infrastructure

The position evolves week over week as a function of the brand's process portfolio (content cadence, distribution mix, capital allocation), exogenous noise (competitor moves, platform changes, cultural events), and industry parameters (saturation, audience growth, capital intensity). Formally: xₜ₊₁ = f(xₜ, uₜ, wₜ; θ_I).

The Lyapunov function

A scalar field that decreases along stable trajectories

A Lyapunov function is a scalar map V: R⁵ → R≥0 with three properties:

1. V(x*) = 0 — zero at the equilibrium
2. V(x) > 0 elsewhere — positive everywhere else
3. V̇(x) ≤ 0 along observed trajectories — non-increasing in expectation

If such a V exists and is verified, you have provable convergence to x* from any state inside the sublevel set Ω = {x : V(x) ≤ c} for the largest c where decay holds. Ω is the basin of attraction: the set of starting positions from which the system reverts to x*.

V landscape with brand sliding to equilibrium x*
Animated: V landscape with a brand-state sliding down to the equilibrium x*. The basin is the region from which the trajectory descends.
Plain reading

Picture the state space as a landscape. V(x) is the altitude function. The equilibrium x* is the bottom of a valley. The basin Ω is the rim of that valley — everything inside rolls down to x*; everything outside doesn't necessarily.

A brand “inside the basin” is in a position where the dynamics pull it back toward equilibrium under disturbance. A brand “outside the basin” is in a position the dynamics do not pull it back from.
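Once V is fit, the membership test itself is simple. A minimal sketch using the learned-threshold rule described in the glossary (basin radius = 75th percentile of training V values), on toy numbers:

```python
import numpy as np

def basin_radius(train_V_values, q=75):
    """Learned threshold: the q-th percentile of training V values (q=75 per the glossary)."""
    return float(np.percentile(train_V_values, q))

def in_basin(v_now, radius):
    """A brand is 'in basin' when its current V sits at or below the threshold."""
    return v_now <= radius

train_V = [0.3, 0.4, 1.0, 2.0, 2.5, 3.0, 4.0, 6.5]  # toy training V readings
r = basin_radius(train_V)
print(r)                                      # 3.25 for this sample
print(in_basin(1.97, r), in_basin(6.49, r))   # True False
```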

Three function classes the pipeline tries

In order of restrictiveness; first one that validates wins
Method 1 / accepted on real data
Quadratic — V(x) = (x−x*)ᵀ P (x−x*)

Captures unimodal ellipsoidal basins. Fits via semidefinite program (convex, fast). The P matrix is interpretable: its eigenstructure tells you which dimensions are most stability-critical. This is the workhorse fit and the one that validated on the real micro-drama panel at 0.714.

Method 2 / fallback
SOS polynomial lift — V(x) = z(x)ᵀ Q z(x)

Captures non-quadratic but smooth basins by lifting x into a vector z of monomials. Still fits as a convex SDP on the lifted vector. Curse of dimensionality on the lifted size; harder to interpret. Tried second when quadratic decay holds but the basin geometry is non-elliptical.

Method 3 / heavy fallback
Neural Lyapunov — V(x) = ‖gθ(x−x*)‖² + ε‖x−x*‖²

Universal approximator. The squared-norm-of-MLP form guarantees V ≥ 0 and V(x*) = 0 by construction. Black box; needs more data; non-convex training. Reserved for highly non-convex basins where interpretability is secondary. Currently has a serialization bug (TorchScript jit-scripted modules don't pickle through joblib), logged as a deferred two-line fix.

Per-cluster extension
K-means cluster → quadratic SDP per cluster

For multi-modal industries, k-means on last-quarter mean position partitions the panel; one quadratic V is fit per cluster. Reports per-cluster validation and a best-cluster decay fraction (each holdout transition routed to its lowest-V cluster). Recovered the 16-brand and 5-brand basins on real micro-drama at 0.895 and 1.000.
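The routing step can be sketched dependency-free. The hand-rolled 2-means below on toy 5-D positions is an assumption about mechanics, not a transcription of the pipeline's clustering code; it shows the partition a separate V would then be fit on:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy last-quarter mean positions: a tight high cluster and a low cluster (5-D).
high = rng.normal(loc=60.0, scale=3.0, size=(16, 5))
low  = rng.normal(loc=26.0, scale=3.0, size=(5, 5))
X = np.vstack([high, low])

def kmeans2(X, iters=20):
    """Minimal 2-means: split on the overall mean, then Lloyd iterations."""
    labels = (X.mean(axis=1) > X.mean()).astype(int)
    for _ in range(iters):
        centroids = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new = dists.argmin(axis=1)
        if (new == labels).all():
            break
        labels = new
    return labels, centroids

labels, centroids = kmeans2(X)
# One equilibrium per cluster; a separate quadratic V would be fit on each group.
for k in (0, 1):
    print(k, int((labels == k).sum()), np.round(centroids[k], 1))
```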

Validation score

Combined metric: internal consistency + external predictivity

The acceptance metric is a 50/50 blend:

score = 0.5 × decay_fraction + 0.5 × recovery_AUC

where decay_fraction is the share of holdout transitions where V(xₜ₊₁) ≤ V(xₜ) (V actually decreases), and recovery_AUC is the AUC of using −V(xₜ) as a predictor of “brand recovers within 90 days.” A score ≥ 0.65 means V is both internally consistent (decreases along trajectories) and externally predictive (lower V correlates with recovery). Below 0.65 is a regime-mismatch signal — itself worth knowing.
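The blend can be computed directly. A self-contained sketch with a rank-based AUC and toy holdout data (the V readings and recovery labels here are invented for illustration):

```python
import numpy as np

def decay_fraction(V_t, V_t1):
    """Share of held-out transitions where V does not increase."""
    return float(np.mean(np.asarray(V_t1) <= np.asarray(V_t)))

def auc(scores, labels):
    """Rank-based AUC (Mann-Whitney): P(score_pos > score_neg), ties count half."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins / (len(pos) * len(neg)))

# Toy holdout: V over consecutive weeks, plus recovery labels per brand.
V_t  = [3.0, 2.0, 5.0, 1.0]
V_t1 = [2.5, 2.2, 4.0, 0.8]          # one transition increases
V_now   = [0.4, 6.5, 2.0, 0.3]       # lower V should predict recovery
recover = [True, False, False, True]

score = 0.5 * decay_fraction(V_t, V_t1) + 0.5 * auc([-v for v in V_now], recover)
print(round(score, 3), score >= 0.65)   # 0.875 True
```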

Stability classes

How a brand's V reading is rendered as a discrete label

For each brand, three readings combine into a stability class: V vs. basin radius, V̇ estimate, and distance to x*.

stable — V below basin threshold, V̇ non-positive
marginal — V near threshold or V̇ ambiguous
unstable — V above basin threshold; brand sits outside the sublevel set

Of the 21 W17 micro-drama brands: 10 stable, 6 marginal, 5 unstable. Note that the single-V "unstable" label is reread by the per-cluster fit — some laggard-cluster brands flagged unstable in the single-V reading are stable inside their own cluster's tighter basin. The W17 ranking surfaces both readings.

Glossary — Concepts in This Brief

One screen of definitions for the dynamics-first vocabulary.

Every term in the editorial that's borrowed from control theory or statistics is defined here. Cross-references at the bottom of each entry point back to where the term is used.

Term
State vector x

The five-dimensional vector of a brand's SBPI scores: content_strength, narrative_ownership, distribution_power, community_strength, monetization_infrastructure. Each component is scaled 0–100. The brand's position on the 5-D SBPI map.

Term
Equilibrium x*

A point in the same 5-D space that represents the steady state the panel's brands tend to drift toward. Computed as the centroid of brands whose final-quarter scores are high-mean and low-variance — the cohort that has settled. The low point of the SBPI map.

Term
Lyapunov function V(x)

A scalar function from the 5-D state space into the non-negative reals, with three properties: V(x*) = 0, V(x) > 0 elsewhere, and V decreases along observed trajectories. If such a V exists, it certifies that x* is a stable equilibrium. Think of V as the altitude function on the map — it's zero at the bottom, positive everywhere else, and you walk downhill along the trajectories.

Term
Control-theoretic

Borrowed from control theory — the engineering discipline of analyzing dynamical systems (how state evolves over time and how to steer it). Bridges to mechanical, electrical, and chemical engineering. We are borrowing the analytical framework; we are not designing controllers.

Term
Basin of attraction

The set of starting states from which the system reverts to the equilibrium x*. Formally Ω = {x : V(x) ≤ c} for the largest c where V's decay condition still holds — the largest "valley" inside which downhill motion is guaranteed. Inside the basin = the dynamics will pull the brand back from a shock; outside = the dynamics won't necessarily.

Term
Quadratic V — (x−x*)ᵀ P (x−x*)

V grows as the square of distance from x*. Slower than exponential growth (which doubles at a fixed rate); faster than linear. At distance 2 from x*, V is 4×; at distance 4, V is 16×. The matrix P is what the SDP fits — its eigenstructure tells you which dimensions cost more to stray in.

Term
Structural distance

Not raw Euclidean distance from x*, but distance weighted by the matrix P — i.e., V itself. Some directions in the 5-D space cost more (P weights them higher). A brand can be far from x* in raw terms but close in P-weighted terms (the "cheap" directions), or vice versa. Structural distance is what V actually measures.

Term
K-means clustering

Standard partitioning algorithm. Picks k centroids and assigns each point to the nearest centroid, iterating until centroids stabilize. We cluster brands by their last-quarter mean state to identify distinct attractors in the panel. k=2 separates the W17 board into 16 active competitors and 5 laggards.

Term
Validation score

A 50/50 blend of decay fraction (share of holdout transitions where V actually decreases) and recovery AUC (how well lower V predicts whether a brand ends up in the top tertile at end of window). On a 0–1 scale, so 0.714 reads as 71.4%. Two conjoint criteria must clear, so it isn't a regression-style "fit"; it's a structural test. Threshold for acceptance is 0.65.

Term
Decay fraction

Share of held-out transitions where V(xₜ₊₁) ≤ V(xₜ). The internal-consistency check on V — does the function actually decrease along the trajectories we did not train on?

Term
In basin / outside basin

A brand is in the basin if V(x_now) ≤ basin_radius (a learned threshold — the 75th percentile of training V values). Outside means the brand sits in a position the dynamics will not necessarily revert from under disturbance.

Term
Stability class

stable — V well below threshold, V̇ non-positive. marginal — V near threshold or V̇ ambiguous. unstable — V above threshold; brand sits outside the sublevel set. W17 distribution: 10 stable, 6 marginal, 5 unstable.

Term
Lyapunov-adjusted score

Adjusted = composite × (1 − 0.05 × V / basin_radius). Take the original composite and apply a small penalty proportional to structural fragility (V in units of basin_radius). The penalty is at most 5% per basin-radius unit, so it compresses brand pairs whose composites are close but whose stability differs — surfaces rank inversions only where they really matter.

Term
ISS — Input-to-State Stability

A robustness probe: inject a synthetic shock Δx of varying magnitude at a random period and forward-simulate to see how fast V returns to its pre-shock level. Stable basins give bounded recovery; chaotic regimes don't. The ISS chart reads relative shape across magnitudes, not absolute level across regimes.

Term
Capital efficiency

−ΔV / cost-in-millions. A discrete intervention (e.g., +6 community strength at $1.2M) shifts the brand's state by Δx. The Lyapunov-implied value of that intervention is the V-drop per dollar spent. Same intervention applied at higher V yields larger absolute ΔV — the V gradient is a budget multiplier.

Term
Composite score

The weighted SBPI composite — a 0–100 number per brand combining the five dimensions. Answers where is this brand right now? The composite and the Lyapunov layer answer different questions (altitude vs. slope); both readings are produced from the same panel.

Findings — W17 real-data run

Four headline results from the overnight run on real ShurIQ panel data.

Each finding is reproducible from the artifacts in out/. Validation scores, basin geometries, and per-brand readings are written to disk and stable across reruns.

01
Reproduction
Real-data validation 0.714 — within 0.003 of the synthetic baseline.
The single-V quadratic SDP fit reaches 0.714 on the real micro-drama panel. The synthetic smoke-test baseline was 0.717. The pipeline is not memorizing synthetic structure; behavior transfers across data sources. Adjacent verticals (viral short-form, subscription streaming) re-reject at 0.516 / 0.622 as expected, confirming the rejection signal is preserved as well.
02
Bimodality
Real micro-drama is bimodal — per-cluster fit recovers two basins.
K-means clustering reveals a 16-brand active-competitor basin (per-cluster validation 0.895) and a 5-brand laggard basin (per-cluster validation 1.000). Best-cluster decay fraction rises from 0.714 to 0.923. Single-V fits will under-report basin geometry on this vertical going forward; per-cluster Lyapunov is the recommended default for the micro-drama publish.
03
Regime discrimination
Chaotic and bistable comparators correctly rejected.
Synthetic chaotic (Lorenz-driven) and synthetic bistable industries score 0.512 / 0.516 and 0.626 / 0.622 respectively — below the 0.65 acceptance threshold. Reporting no stable function found is itself a saleable insight: it tells the client that steady-state thinking will mislead and that score-based targets should not be used as planning anchors in that regime.
04
Capital efficiency
Same intervention, 7× spread in capital efficiency across the V gradient.
A community / fandom program at $1.20M produces capital efficiency 0.747 on lifetime-ae (V 2.44, marginal) and 1.694 on google-100zeros (V 10.13, far from basin); against the near-equilibrium shortmax, whose top-ranked intervention scores 0.233, the far end of the gradient buys roughly 7× the V reduction per dollar. The ranking is sensitive to where each brand sits relative to x*, not just to absolute scores.
Real W10–W17 V trajectories animated
Animated: real W10–W17 micro-drama V trajectories. Green = inside basin, red = outside. Decay along observed transitions is the visual signature of the 0.714 validation score.

Reproducibility table

Synthetic-vs-real validation scores per industry
Industry | Designed regime | Synthetic baseline | Real-data run | Δ | Outcome
micro_drama_streaming | Stable equilibrium | 0.717 | 0.714 | −0.003 | Accepted
viral_short_form | Chaotic (Lorenz) | 0.512 | 0.516 | +0.004 | Rejected (correct)
subscription_streaming | Bistable | 0.626 | 0.622 | −0.004 | Rejected (correct)

Real-data scores cluster within ±0.005 of the synthetic baseline across all three industries. The pipeline is not regime-shifted by the source change.

21 brands separating into two basins of attraction
Animated: the 21-brand panel resolving into a 16-brand active-competitor basin and a 5-brand laggard basin. The bimodality recovered by per-cluster fitting.

Per-cluster basin geometry

Real micro-drama, k=2
Cluster | Label | Brands | Validation | Equilibrium x* | Basin radius
0 | Active competitor | 16 | 0.895 | [60.8, 59.1, 73.1, 57.0, 66.3] | 3.66
1 | Laggard | 5 | 1.000 | [27.7, 25.6, 28.5, 24.6, 16.5] | 1.00

The laggard cohort is verza-tv, rtp, klip, both-worlds-freeli, mansa. Cluster 1's perfect 1.000 validation reflects a small, tight cohort whose holdout transitions all decay; basin radius is correspondingly narrow. Cluster 0 is the operational basin for the rest of the panel.

Same intervention applied at three V positions
Animated: a single +6 community-strength intervention applied to brands at three V positions. Larger V gradient produces larger ΔV at fixed cost.

Capital-efficient interventions, three sample brands

Top-ranked recommendation per brand from specs/interventions.json
Brand | V | Position | Top intervention | ΔV | Cost ($M) | Capital efficiency
shortmax | 0.33 | Near equilibrium | Lean brand refresh | −0.081 | 0.35 | 0.233
lifetime-ae | 2.44 | Marginal | Community / fandom program | −0.896 | 1.20 | 0.747
google-100zeros | 10.13 | Far from basin | Community / fandom program | −2.033 | 1.20 | 1.694

Same intervention category at the same cost — community / fandom program at $1.20M — produces roughly 2.3× the per-dollar V reduction on the far-from-basin brand vs. the marginal one, and roughly 7× that of the near-equilibrium brand's top-ranked option (0.233). The V gradient is doing the work: brands far from x* have steeper local slopes, so a fixed Δx maps to a larger ΔV.

ISS / disturbance recovery

Random-direction shocks; mean-reversion forward simulation

For the accepted micro-drama V, random-direction shocks of magnitude ‖dx‖ ∈ {2, 5, 10} were injected at random periods and forward-simulated. Recovery is defined relative to pre-shock V, so transient noise is excluded. Median recovery for all three magnitudes was zero periods; max recovery was 1, 2, and 3 periods respectively. The basin is robust to small and medium shocks; even large shocks recover in ≤ 3 periods under modest mean-reversion stiffness. Comparator readings on chaotic / bistable industries use a fallback V whose absolute scale is not comparable, so they are presented as relative-shape signals only.

Stack Ranking — W17-2026, Lyapunov-adjusted

Composite vs. Lyapunov-adjusted ranking, with rank deltas and stability class.

Adjusted composite is the brand's composite minus a small penalty proportional to its V reading. Brands inside their basin are unaffected; brands outside their basin are demoted. Three rank flips are non-trivial: ReelShort ↑1, Netflix ↓2, Google/100Zeros ↓4.

Stack rank lines morphing from composite to Lyapunov-adjusted
Animated: 21 brand-rank lines morphing from composite ranking to Lyapunov-adjusted ranking. ReelShort, GoodShort, CandyJar climb; Netflix, Google/100Zeros, DramaBox descend on V.
[Interactive ranking table — columns: Rank, Brand, Cluster, Composite, Adjusted, V, Stability, In basin?, Δ rank]

Basin radius for the single-V fit is 4.41. In basin compares V to that radius. Stability class combines V vs. radius with V̇ estimate and distance to x*. Cluster 0 is the active-competitor basin (16 brands, equilibrium ~[61, 59, 73, 57, 66]); cluster 1 is the laggard basin (5 brands, equilibrium ~[28, 26, 28, 25, 16]).

Stability distribution

Single-V classification across the 21-brand panel
10
Stable — V below basin radius, V̇ non-positive
6
Marginal — V near threshold or V̇ ambiguous
5
Unstable — V above basin radius
Methods — technical summary

How V is fit, validated, and translated into a stack-ranking adjustment.

Compact reference for the regulatory-grade reader. All artifacts (joblib pickles, JSON outputs, parquet panels) are preserved in out/; the runner is scripts/run_experiments.py.

Pipeline at a glance

End-to-end overnight run, ~10 minutes total runtime
Step 1
Panel ingest

Real W10–W17 micro-drama panel: 21 brands × 8 weeks × 5 dimensions. Stored as parquet at out/panel.parquet. Synthetic comparator panels generated by synth_panel.py for chaotic and bistable regimes.

Step 2
V fit (3 methods, first valid wins)

Quadratic SDP → SOS polynomial lift → neural Lyapunov, in order. Each fit produces a candidate V, an estimated equilibrium x*, and a basin radius (largest c for which decay holds on the training set). Quadratic accepted on real micro-drama at 0.714.
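The SDP itself needs a convex solver. As a dependency-free stand-in, the sketch below builds a Mahalanobis-style P (inverse covariance of deviations from x*) to show the shape of the fitted object and how its eigenstructure is read. This is an illustrative assumption, not the pipeline's solver, which additionally imposes the decay constraint V(xₜ₊₁) ≤ V(xₜ):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy training states scattered around a known equilibrium, with different
# spreads per dimension (all values invented for illustration).
x_star = np.array([60.0, 59.0, 73.0, 57.0, 66.0])
states = x_star + rng.normal(scale=[4.0, 2.0, 6.0, 3.0, 5.0], size=(200, 5))

# Stand-in "fit": P as the inverse covariance of the deviations.
D = states - x_star
P = np.linalg.inv(np.cov(D, rowvar=False))

eigvals, eigvecs = np.linalg.eigh(P)     # ascending eigenvalues
# The largest eigenvalue marks the direction where straying is most expensive --
# here, the dimension with the smallest training spread (index 1).
print(np.round(eigvals, 3))
print(int(np.argmax(np.abs(eigvecs[:, -1]))))
```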

Step 3
Per-cluster fit (k=2 default)

K-means on last-quarter mean position partitions the panel, and one quadratic V is fit per cluster. The best-cluster decay fraction routes each holdout transition to its lowest-V cluster, serving as a proxy for combined-basin coverage.
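A minimal version of the clustering step, with a plain 2-means loop standing in for the pipeline's k-means and the two basin equilibria from the brief used to generate synthetic brand positions. The deterministic lowest/highest-sum init is an assumption for reproducibility.

```python
import numpy as np

def two_means(points, iters=10):
    """Plain 2-means (stand-in for the pipeline's k-means, k=2).
    Deterministic init: the lowest- and highest-scoring points."""
    sums = points.sum(axis=1)
    centers = points[[sums.argmin(), sums.argmax()]]   # copy via fancy indexing
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(2):
            if (labels == k).any():
                centers[k] = points[labels == k].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
# 16 brands scattered near the active-basin equilibrium, 5 near the laggard one
active = rng.normal([61, 59, 73, 57, 66], 3.0, size=(16, 5))
laggard = rng.normal([28, 26, 28, 25, 16], 3.0, size=(5, 5))
labels, centers = two_means(np.vstack([active, laggard]))
```

In the real pipeline each holdout transition is then routed to the cluster whose fitted V is lowest at that state; nearest-center distance plays that role in this sketch.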

Step 4
ISS probe

Random-direction shock injection (magnitudes 2, 5, and 10) at random periods, forward-simulated under mean-reversion stiffness 0.10. Recovery time = periods until V drops back below its pre-shock level. 105–400 probes per industry per magnitude.
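A single probe can be sketched as below. Squared distance to equilibrium stands in for the fitted V, and the pre-shock level is taken as the equilibrium value plus a small tolerance; both are simplifying assumptions.

```python
import numpy as np

def iss_probe(x_eq, magnitude, stiffness=0.10, max_periods=200, seed=0, tol=1e-6):
    """One ISS probe: shock the equilibrium state in a random direction,
    forward-simulate mean reversion, and count periods until V recovers.
    V here is squared distance to x_eq (illustrative, not the fitted V)."""
    rng = np.random.default_rng(seed)
    direction = rng.normal(size=len(x_eq))
    direction /= np.linalg.norm(direction)
    x = x_eq + magnitude * direction          # shocked state
    for t in range(1, max_periods + 1):
        x = x + stiffness * (x_eq - x)        # mean-reversion step
        if np.sum((x - x_eq) ** 2) <= tol:    # back near the pre-shock level
            return t
    return max_periods                        # did not recover in the window

x_eq = np.array([61., 59., 73., 57., 66.])
t_small, t_big = iss_probe(x_eq, 2), iss_probe(x_eq, 10)
```

Under pure mean reversion recovery time grows logarithmically with shock magnitude, so the magnitude-10 probe recovers later than the magnitude-2 probe but not five times later.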

Step 5
Capital efficiency

Ten archetypal interventions in specs/interventions.json, each with delta_x vector and dollar cost. For a given brand state x: capital_efficiency(u) = −[V(x+Δx) − V(x)] / cost_M. Ranked per brand.
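The scoring rule above translates directly into code. The intervention names, Δx vectors, and costs below are hypothetical stand-ins in the shape of specs/interventions.json, and the quadratic V is illustrative.

```python
import numpy as np

def capital_efficiency(V, x, delta_x, cost_m):
    """capital_efficiency(u) = -[V(x + Δx) - V(x)] / cost_M, per the brief.
    Positive values mean the intervention moves the brand downhill on V
    per million dollars spent."""
    return -(V(x + delta_x) - V(x)) / cost_m

# illustrative quadratic V around the active-basin equilibrium
x_star = np.array([61., 59., 73., 57., 66.])
V = lambda x: float(np.sum((x - x_star) ** 2))

x = np.array([50., 52., 60., 50., 55.])   # a brand below equilibrium
interventions = {
    # hypothetical entries: name -> (delta_x, cost in $M)
    "creator_fund": (np.array([4., 0., 0., 3., 0.]), 2.0),
    "ad_blitz":     (np.array([0., 0., 6., 0., 0.]), 5.0),
}
ranked = sorted(interventions.items(),
                key=lambda kv: capital_efficiency(V, x, *kv[1]),
                reverse=True)
```

Here the cheaper, two-dimension intervention wins on efficiency even though the pricier one buys a larger single-dimension move, which is exactly the spread the V gradient exposes.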

Step 6
Lyapunov-adjusted composite

Adjusted composite = composite − penalty proportional to V. Penalty is small inside the basin, larger outside. Drives the W17 rank deltas: ReelShort +1, Netflix −2, Google/100Zeros −4.
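One plausible reading of the penalty rule, with illustrative weights (the production constants are not stated in the brief): a small proportional penalty inside the basin, a larger one outside.

```python
def adjusted_composite(composite, v, radius=4.41, lam_in=0.05, lam_out=0.5):
    """Composite minus a V-proportional penalty; `lam_in` / `lam_out`
    are assumed weights, not the production constants."""
    lam = lam_in if v <= radius else lam_out
    return composite - lam * v
```

Two brands with the same composite of 62 separate cleanly: one inside the basin at V = 2 loses 0.1 points, while one outside at V = 8 loses 4.0 and drops in the stack rank.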

Acceptance threshold & max Lyapunov exponent

Two independent stability signals

A V fit is accepted when validation score ≥ 0.65. Below 0.65 is reported as “no stable V found” — itself a substantive finding, not a failure. The pipeline also estimates the max Lyapunov exponent (MLE) via Rosenstein nearest-neighbor divergence as a cross-check. MLE < 0 indicates trajectory convergence; MLE > 0 indicates chaos. On real micro-drama, the MLE estimate was 3.28 — positive, indicating local stretching, but the V fit still validates because the basin-recovery dynamic dominates the small-scale stretching at the panel timescale. Two independent readings of stability, neither dispositive on its own, both worth reporting.
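The MLE cross-check can be illustrated with a crude 1-D Rosenstein estimate: track log divergence from each point's nearest temporally separated neighbor and take the slope of the mean curve. This sketch omits embedding-dimension reconstruction and other refinements of the full method.

```python
import numpy as np

def rosenstein_mle(series, horizon=5, min_sep=2):
    """Crude 1-D Rosenstein MLE: slope of the mean log-divergence curve
    between nearest (temporally separated) neighbors. Positive slope
    suggests local stretching; negative suggests convergence."""
    n = len(series)
    logs = []
    for i in range(n - horizon):
        cands = [m for m in range(n - horizon) if abs(m - i) >= min_sep]
        j = min(cands, key=lambda m: abs(series[m] - series[i]))
        d = [abs(series[i + k] - series[j + k]) + 1e-12
             for k in range(1, horizon + 1)]
        logs.append(np.log(d))
    t = np.arange(1, horizon + 1)
    return float(np.polyfit(t, np.mean(logs, axis=0), 1)[0])

# chaotic logistic map (stretching) vs. a contracting series (convergence)
x, chaotic = 0.3, []
for _ in range(200):
    x = 4.0 * x * (1.0 - x)
    chaotic.append(x)
contracting = [0.9 ** t for t in range(60)]
```

The contracting series yields a negative slope (about ln 0.9) while the chaotic map yields a positive one, mirroring how a positive panel MLE can coexist with a validated V when basin-scale recovery dominates small-scale stretching.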

Honest limitations

Carried verbatim from the theory notes

• The fitted V is empirical, not formally proven. Real SOS verification requires polynomial dynamics, which the panel does not have. The candidate V is a learned object that satisfies decay on observed trajectories — useful, not bulletproof.
• Industry parameters θ_I are treated as fixed during the fit. Industries evolve. The pipeline must re-fit periodically (the operational target is a nightly run joined to the existing Optuna pipeline).
• Causality is implicit. A brand inside the basin is associated with future stability, but the V framework alone does not prove that interventions reliably move brands inside the basin. CLF-based recommendations need separate causal validation.
• Dimensionality is currently five. If the dimension count grows past fifteen, the SDP becomes expensive and the SOS lift explodes. Plan for PCA on the score space before scaling.
• Bistable / multi-modal industries need cluster-aware fitting. A single quadratic V cannot capture two basins. Per-cluster fit handles this on micro-drama; future verticals may need k > 2.

Recommended next steps

Five concrete moves from the overnight run

1. Promote per-cluster Lyapunov to default for the micro-drama vertical. Bimodality is structural, not noise. Single-V fits will under-report basin geometry going forward.

2. Wire V into the weekly publish. Append per-brand V, basin membership, and stability classification to each weekly snapshot. One-screen schema addition; gives clients a structural-fragility reading alongside the composite.

3. Build a control-Lyapunov-function (CLF) layer. Capital efficiency currently scores discrete interventions against a fixed Δx. A CLF layer would optimize Δx against a budget constraint (LP or QP, not combinatorial). Better recommendations.

4. Replace mean-reversion ISS forward model with a fitted dynamics model. The current probe uses generic 0.10 stiffness; fitting f(x) per industry from the panel would produce industry-specific recovery curves and let ISS distinguish soft vs. brittle stability.

5. Optuna joint objective. Stability-aware signal-weighting is the natural next move: maximize V-validation × composite predictive accuracy in a single Optuna search.
