What if a brand's score isn’t a measurement — but a position?
Every week the ShurIQ panel scores 21 micro-drama brands across five dimensions and publishes a composite. Some scores climb. Some collapse. Some bounce back; others don’t. The composite reads where each brand sits this week — it doesn’t say whether that position is one the brand’s dynamics will hold, or one it’s about to slide off. So we asked: what would it look like to treat the score as a state in a dynamical system, and ask whether the system has a stable equilibrium underneath each brand? This brief reports an overnight run of that experiment on eight weeks of real data.
The composite reads where a brand is. The Lyapunov layer reads whether the dynamics hold it there. A high composite on a steep slope is structurally fragile — the score is not the system. Three of the top ten W17 brands carry composites above 60 with V values that put them outside their basin radius. Both readings live alongside each other; neither replaces the other.
1 — The question that started this
The original ask was direct: take this Lyapunov stability scaffold (synthetic-data tested) and run it on real ShurIQ panel data tonight. Don’t simulate — use the W10–W17 micro-drama state archive that the weekly publish already produces. Update the dimension columns to match the real five (content strength, narrative ownership, distribution power, community strength, monetization infrastructure). Run the experiment, write the report, surface anything interesting that wasn’t in the task list.
The motivating intuition was a long-running unease with the way composite scores are read. Composite scores are observations — they describe altitude. But a brand’s altitude doesn’t tell you whether the ground underneath is firm. A brand can score 78 today because last week was good, while sitting in a part of the state space the dynamics are pulling it away from. A different brand can score 60 in a place the dynamics will hold it for a year. The composite cannot tell those two stories apart.
2 — What got built overnight
The overnight run produced a real-data Lyapunov fit on the W10–W17 micro-drama panel and a per-cluster fit that recovered the vertical’s two basins. It produced a per-brand stability classification, a disturbance-recovery probe (ISS), and a capital-efficiency calculator that ranks ten archetypal interventions per brand. It rebuilt the W17 stack ranking with a stability tab, deployed two new sites, and generated five motion graphics illustrating the concept. Every output is reproducible from the artifacts in out/.
This brief is one of the two sites that came out of the overnight run. The other is the W17 micro-drama republish with a Stability tab added (microco-w17-stability.pages.dev) — same journalism, augmented with the new stability reading on every brand. Both sites are linked from the header.
3 — What this layer is measuring
Picture the five-dimensional ShurIQ state space as a landscape. Each brand is a point on it. The composite is the brand's altitude — a single number summarizing where it sits across the five dimensions. The Lyapunov function V(x) is a different scalar field on the same landscape: it measures how far up the slope the brand is standing, relative to the nearest stable equilibrium. V = 0 means the brand sits at the bottom of a valley. Growing V means the brand is climbing away from the valley.
The composite says 78 / 100, ranked third. V says this position is 6.49 units up a slope and outside the basin radius of 4.41 — the dynamics aren’t pulling this brand back if it falls. A brand can score high on the composite and high on V at the same time: that’s a brand at altitude on a cliff. A brand can score modestly on the composite and low on V: that’s a brand in the valley, doing fine, robust to shocks. The whole point is that these are independent readings — both produced from the same panel, neither replacing the other.
4 — How the layer works, in one screen
Six pieces, each with one paragraph. The Concept tab and Methods tab go deeper; this is the orientation.
State vector x. Each brand at each week is a five-dimensional vector: [content_strength, narrative_ownership, distribution_power, community_strength, monetization_infrastructure], scaled 0–100. A brand’s trajectory across W10–W17 is eight points moving through this 5-D space.
Equilibrium x*. A point in the same 5-D space — the centroid of the cohort of brands whose final-quarter scores are high-mean and low-variance. It’s the point the panel’s steady-state brands have settled around. The basin of attraction is the surrounding region from which other brands are pulled toward it.
The Lyapunov function V. A learned scalar function V(x) = (x − x*)ᵀ P (x − x*), where P is fit by a semidefinite program. V(x*) = 0, V is positive everywhere else, and V decreases along observed trajectories. If a valid V exists, the equilibrium is structurally stable.
Validation. The fit is validated out-of-sample: 20% of brands are reserved, V is fit on the rest, then the fit is scored by (a) the share of held-out transitions where V actually decreases and (b) how well low V predicts brand recovery. A combined score ≥ 0.65 means the fit is accepted. Real micro-drama hits 0.714.
Per-cluster extension. When a vertical hosts more than one regime, a single V is the wrong tool. K-means clustering partitions brands by their last-quarter position; a separate V is fit per cluster. On real micro-drama this lifts coverage from 0.714 to 0.923 — the discovery is that the vertical has two basins, not one.
The two new readings the pipeline outputs. First, stability class per brand — stable / marginal / unstable, a discrete label derived from V vs. basin radius. Second, capital efficiency per intervention — −ΔV / cost, ranking which interventions move a given brand toward equilibrium most cheaply. Both feed the augmented stack ranking on the W17 republish.
The smoke test, then the real run
The pipeline was first run on three synthetic panels designed as positive and negative controls. A stable-equilibrium industry produced a quadratic-SDP fit with validation 0.717. A Lorenz-driven chaotic industry was correctly rejected at 0.512. A bistable two-attractor industry was rejected at 0.626 — the expected result for a single V trying to model two basins. Discrimination was confirmed before the real run.
On real W10–W17 panel data covering 21 micro-drama brands, the same pipeline produced validation 0.714 — within 0.003 of the synthetic baseline. Chaotic and bistable comparators on adjacent verticals (viral short-form, subscription streaming) re-rejected at 0.516 and 0.622 respectively. The pipeline transfers; behavior is preserved.
A null result — the layer reporting no stable function found for an industry — is itself a saleable insight. It tells the client that steady-state thinking will mislead in this regime, and that score-based targets should not be used as planning anchors. The chaotic-rejection signal is a deliverable, not a failure mode.
The structural finding: micro-drama is bimodal
A single Lyapunov function assumes one basin. When a vertical hosts two regimes — leaders converging to one steady state, laggards drifting toward another — a single-V fit reports a depressed validation score that under-states the actual structure. K-means on last-quarter mean position, then quadratic SDP per cluster, recovers the basins.
The 21-brand micro-drama panel splits cleanly into a 16-brand active-competitor basin centered on [60.8, 59.1, 73.1, 57.0, 66.3] with cluster validation 0.895, and a 5-brand laggard basin centered on [27.7, 25.6, 28.5, 24.6, 16.5] with cluster validation 1.000. The laggard cohort: verza-tv, rtp, klip, both-worlds-freeli, mansa. Best-cluster decay fraction rises from 0.714 to 0.923.
This is not a fitting artifact. It is the pipeline detecting the polarization that micro-drama analysts describe qualitatively — a tight pack of pure-play competitors and a long tail of subscale or exiting platforms — and quantifying it as basin geometry.
Where the new ranking departs from the composite
Composite-only ranking puts DramaBox first and ReelShort second by 0.05 of a point. The Lyapunov-adjusted ranking flips them on stability, with ReelShort's lower V breaking the tie. The more consequential moves happen lower in the table.
Disney holds composite rank 3 at 78.0 with V 6.49 — outside basin radius 4.41 and classified unstable on the single-V reading. Netflix drops two ranks (7→9) with composite 62.9 and V 4.49. Google/100Zeros drops four ranks (9→13) with composite 61.4 and V 10.13 — the worst structural fragility on the board. All three carry above-mean composites, and all three sit outside the active-competitor basin. Their scores are observations of a position the dynamics do not currently support.
In the other direction, GoodShort climbs two ranks (10→8) with V 0.40 — the tightest grip on equilibrium in the active basin. CandyJar, Amazon, ShortMax, ViU, and Lifetime A&E each move up one rank on the same logic.
Capital efficiency: the V gradient as a budget multiplier
Once V is fit, every candidate intervention has a Lyapunov-implied capital efficiency: −ΔV / cost-in-millions. Higher is better — a positive value means the intervention pulls the brand toward equilibrium per dollar spent. The interesting structure is what happens when the same intervention is applied across different starting positions.
shortmax sits near equilibrium at V 0.33. Its top-ranked intervention — a lean brand refresh at $0.35M — produces ΔV of −0.081, capital efficiency 0.233. lifetime-ae is marginal at V 2.44. A community / fandom program at $1.20M produces ΔV of −0.896, capital efficiency 0.747. google-100zeros is far from basin at V 10.13. The same community / fandom program at the same $1.20M produces ΔV of −2.033, capital efficiency 1.694 — roughly 7× the per-dollar V reduction of the near-basin brand.
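A minimal sketch of the calculator, assuming specs/interventions.json is a list of objects with delta_x and cost_M fields (the field names are assumptions; the spec file isn't reproduced in this brief):

```python
import json
import numpy as np

def capital_efficiency(V, x, intervention):
    """-(V drop) per $M spent. V is the fitted Lyapunov callable,
    x the brand's current 5-D state vector."""
    dv = V(x + np.asarray(intervention["delta_x"])) - V(x)
    return -dv / intervention["cost_M"]  # higher is better

def rank_interventions(V, x, path="specs/interventions.json"):
    """Score the ten archetypal interventions for one brand."""
    with open(path) as f:
        menu = json.load(f)
    return sorted(menu, key=lambda u: capital_efficiency(V, x, u),
                  reverse=True)
```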
This is the foundation for a control-Lyapunov-function (CLF) layer in the next sprint: turn discrete intervention scoring into an LP/QP optimization that picks the Δx maximizing −ΔV subject to a budget constraint. The current calculator scores ten archetypal interventions; the CLF layer would score the continuous space.
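For a sense of shape, a hedged cvxpy sketch of that CLF step, under assumptions not in this brief: a constant fitted quadratic P, a hypothetical linear cost vector (dollars per point moved on each dimension), and box constraints keeping scores in 0–100.

```python
import cvxpy as cp
import numpy as np

def clf_step(P, x, x_star, cost_per_unit, budget_M):
    """Pick a continuous state shift dx minimizing V(x + dx) under a
    budget cap. P must be the fitted PSD matrix; cost_per_unit is a
    hypothetical $M-per-point vector over the five dimensions."""
    dx = cp.Variable(len(x))
    d = (x - x_star) + dx                           # displacement after the move
    cons = [cost_per_unit @ cp.abs(dx) <= budget_M, # budget constraint
            x + dx >= 0, x + dx <= 100]             # scores stay in range
    cp.Problem(cp.Minimize(cp.quad_form(d, P)), cons).solve()
    return dx.value
```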
What this is and is not
The fitted V is empirical, not formally proven. Real SOS verification requires polynomial dynamics; the candidate function here is a learned object that satisfies decay on observed trajectories. It is useful, not bulletproof. Industry parameters are treated as fixed during the fit, so the layer must be re-run periodically against the rolling panel.
A brand inside the basin is associated with future stability; the framework does not by itself prove that a given intervention will land the brand inside the basin. CLF-based recommendations are the next step in closing that gap. Causal validation against forward-looking outcomes belongs to a separate workstream.
The layer is not a replacement for the composite score. It is a complement — a structural reading that runs alongside the measurement reading and tells the client which composite values to trust as steady-state targets and which to treat as transient.
What V means, what a basin of attraction means, why it matters here.
A short reference for the regulatory-grade reader. The math is canonical control theory; what's new is applying it to brand state vectors fit from panel data.
The state vector
Each brand's position is a five-dimensional vector x ∈ R⁵:
• x₁ = content strength
• x₂ = narrative ownership
• x₃ = distribution power
• x₄ = community strength
• x₅ = monetization infrastructure
The position evolves week over week as a function of the brand's process portfolio (content cadence, distribution mix, capital allocation), exogenous noise (competitor moves, platform changes, cultural events), and industry parameters (saturation, audience growth, capital intensity). Formally: xₜ₊₁ = f(xₜ, uₜ, wₜ; θ_I).
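A minimal sketch of this layer's data structures, assuming out/panel.parquet is long-format with brand, week, and the five dimension columns (the layout and the median cuts are assumptions; the dimension names come from this brief):

```python
import numpy as np
import pandas as pd

DIMS = ["content_strength", "narrative_ownership", "distribution_power",
        "community_strength", "monetization_infrastructure"]

# One (8, 5) trajectory per brand: eight weekly state vectors x_t.
panel = pd.read_parquet("out/panel.parquet")
trajectories = {brand: g.sort_values("week")[DIMS].to_numpy()
                for brand, g in panel.groupby("brand")}

def estimate_equilibrium(trajectories, tail=2):
    """x* = centroid of the settled cohort: final-window scores that are
    high-mean and low-variance (median cutoffs here are an assumption)."""
    tails = {b: t[-tail:] for b, t in trajectories.items()}
    means = {b: t.mean() for b, t in tails.items()}      # overall level
    variances = {b: t.var() for b, t in tails.items()}   # settledness
    m_cut = np.median(list(means.values()))
    v_cut = np.median(list(variances.values()))
    settled = [b for b in tails
               if means[b] >= m_cut and variances[b] <= v_cut]
    return np.mean([trajectories[b][-1] for b in settled], axis=0)

x_star = estimate_equilibrium(trajectories)
```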
The Lyapunov function
A Lyapunov function is a scalar map V: R⁵ → R≥0 with three properties:
1. V(x*) = 0 — zero at the equilibrium
2. V(x) > 0 elsewhere — positive everywhere else
3. V̇(x) ≤ 0 along observed trajectories — non-increasing in expectation
If such a V exists and is verified, you have provable convergence to x* from any state inside the sublevel set Ω = {x : V(x) ≤ c} for the largest c where decay holds. Ω is the basin of attraction: the set of starting positions from which the system reverts to x*.
Picture the state space as a landscape. V(x) is the altitude function. The equilibrium x* is the bottom of a valley. The basin Ω is everything inside the rim of that valley — everything inside rolls down to x*; everything outside doesn't necessarily.
A brand “inside the basin” is in a position where the dynamics pull it back toward equilibrium under disturbance. A brand “outside the basin” is in a position the dynamics do not pull it back from.
Three function classes the pipeline tries
Quadratic SDP: V(x) = (x−x*)ᵀ P (x−x*). Captures unimodal ellipsoidal basins; fits via semidefinite program (convex, fast). The P matrix is interpretable: its eigenstructure tells you which dimensions are most stability-critical. This is the workhorse fit and the one that was accepted on the real micro-drama panel at validation 0.714 (a sketch of the fit follows this list).
SOS polynomial lift: V(x) = z(x)ᵀ Q z(x). Captures non-quadratic but smooth basins by lifting x into a vector z of monomials; still fits as a convex SDP on the lifted vector. Suffers the curse of dimensionality on the lifted size and is harder to interpret. Tried second, when quadratic decay holds but the basin geometry is non-elliptical.
Neural Lyapunov: V(x) = ‖gθ(x−x*)‖² + ε‖x−x*‖². Universal approximator; the squared-norm-of-MLP form guarantees V ≥ 0 and V(x*) = 0 by construction. Black box, needs more data, non-convex training. Reserved for highly non-convex basins where interpretability is secondary. Currently has a serialization bug (TorchScript-scripted modules don't pickle through joblib) — logged as a deferred two-line fix.
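As a concrete anchor for the quadratic class, a minimal cvxpy sketch of the fit. This is one reasonable formulation (soft decay constraints with slack, plus an εI floor on P to rule out the trivial V ≡ 0), not necessarily the exact program behind the 0.714 run.

```python
import cvxpy as cp
import numpy as np

def fit_quadratic_V(transitions, x_star, eps=1e-3):
    """Fit P in V(x) = (x - x*)^T P (x - x*) so that V decays along
    observed transitions. `transitions` is a list of (x_t, x_t1) pairs."""
    n = x_star.shape[0]
    P = cp.Variable((n, n), PSD=True)
    slack = cp.Variable(len(transitions), nonneg=True)
    cons = [P >> eps * np.eye(n)]  # keep V strictly positive away from x*
    for i, (x_t, x_t1) in enumerate(transitions):
        d0, d1 = x_t - x_star, x_t1 - x_star
        # decay condition V(x_t1) <= V(x_t), softened by a slack variable
        cons.append(cp.quad_form(d1, P) - cp.quad_form(d0, P) <= slack[i])
    cp.Problem(cp.Minimize(cp.sum(slack)), cons).solve()
    return P.value
```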
For multi-modal industries, k-means on last-quarter mean position partitions the panel, and one quadratic V is fit per cluster. The pipeline reports per-cluster validation and a best-cluster decay fraction (each holdout transition routed to its lowest-V cluster). This recovered the 16-brand and 5-brand basins on real micro-drama at 0.895 and 1.000.
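A sketch of the per-cluster extension, reusing fit_quadratic_V from the sketch above; the tail length and the centroid-as-x* choice follow the description here, but the exact settings are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_per_cluster(trajectories, transitions_by_brand, k=2, tail=2):
    """k-means on last-quarter mean position, then one quadratic V per
    cluster. Returns {cluster: (x_star, P)} and the brand -> cluster map."""
    brands = sorted(trajectories)
    feats = np.array([trajectories[b][-tail:].mean(axis=0) for b in brands])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    fits = {}
    for c in range(k):
        members = [b for b, l in zip(brands, labels) if l == c]
        x_star = feats[labels == c].mean(axis=0)  # cluster centroid as x*
        trans = [t for b in members for t in transitions_by_brand[b]]
        fits[c] = (x_star, fit_quadratic_V(trans, x_star))
    return fits, dict(zip(brands, labels))
```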
Validation score
The acceptance metric is a 50/50 blend:
score = 0.5 × decay_fraction + 0.5 × recovery_AUC
where decay_fraction is the share of holdout transitions where V(xₜ₊₁) ≤ V(xₜ) (V actually decreases), and recovery_AUC is the AUC of using −V(xₜ) as a predictor of “brand recovers within 90 days.” A score ≥ 0.65 means V is both internally consistent (decreases along trajectories) and externally predictive (lower V correlates with recovery). Below 0.65 is a regime-mismatch signal — itself worth knowing.
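A sketch of the blend, assuming V is the fitted callable, `holdout_transitions` holds the reserved brands' (xₜ, xₜ₊₁) pairs, and `recovered` maps each held-out brand to its 0/1 recovery outcome (whose construction this brief does not spell out):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def validation_score(V, holdout_transitions, current_state, recovered):
    """0.5 * decay fraction + 0.5 * recovery AUC, on held-out brands."""
    decay_fraction = np.mean([V(x1) <= V(x0)
                              for x0, x1 in holdout_transitions])
    brands = sorted(current_state)
    # lower V should predict recovery, so score with -V
    auc = roc_auc_score([recovered[b] for b in brands],
                        [-V(current_state[b]) for b in brands])
    return 0.5 * decay_fraction + 0.5 * auc
```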
Stability classes
For each brand, three readings combine into a stability class: V vs. basin radius, V̇ estimate, and distance to x*.
• stable — V below basin threshold, V̇ non-positive
• marginal — V near threshold or V̇ ambiguous
• unstable — V above basin threshold; brand sits outside the sublevel set
Of the 21 W17 micro-drama brands: 10 stable, 6 marginal, 5 unstable. Note that “unstable” under the single-V fit (whose equilibrium sits in the cluster-0 basin) is reread by the per-cluster fit: some laggard-cluster brands flagged unstable in the single-V reading are stable inside their own cluster's tighter basin. The W17 ranking surfaces both readings.
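A minimal sketch of the labeling rule. It uses a crude one-step V̇ estimate and an illustrative 0.8 margin; the production rule also weighs distance to x*, which is omitted here.

```python
def stability_class(V_now, V_prev, basin_radius, margin=0.8):
    """Discrete stability label from V vs. basin radius plus a
    one-step V-dot estimate. The margin value is illustrative."""
    v_dot = V_now - V_prev
    if V_now > basin_radius:
        return "unstable"   # outside the sublevel set
    if V_now <= margin * basin_radius and v_dot <= 0:
        return "stable"     # well inside the basin and descending
    return "marginal"       # near the rim, or V-dot ambiguous
```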
One screen of definitions for the dynamics-first vocabulary.
Every term in the editorial that's borrowed from control theory or statistics is defined here. Cross-references at the bottom of each entry point back to where the term is used.
x: The five-dimensional vector of a brand's SBPI scores: content_strength, narrative_ownership, distribution_power, community_strength, monetization_infrastructure. Each component is scaled 0–100. The brand's position on the 5-D SBPI map.
x*: A point in the same 5-D space that represents the steady state the panel's brands tend to drift toward. Computed as the centroid of brands whose final-quarter scores are high-mean and low-variance — the cohort that has settled. The low point of the SBPI map.
V(x): A scalar function from the 5-D state space into the non-negative reals, with three properties: V(x*) = 0, V(x) > 0 elsewhere, and V decreases along observed trajectories. If such a V exists, it certifies that x* is a stable equilibrium. Think of V as the altitude function on the map — it's zero at the bottom, positive everywhere else, and you walk downhill along the trajectories.
Control theory: The engineering discipline of analyzing dynamical systems (how state evolves over time and how to steer it), from which the Lyapunov machinery is borrowed. Bridges to mechanical, electrical, and chemical engineering. We are borrowing the analytical framework; we are not designing controllers.
Basin of attraction: The set of starting states from which the system reverts to the equilibrium x*. Formally Ω = {x : V(x) ≤ c} for the largest c where V's decay condition still holds — the largest "valley" inside which downhill motion is guaranteed. Inside the basin = the dynamics will pull the brand back from a shock; outside = the dynamics won't necessarily.
(x−x*)ᵀ P (x−x*): V grows as the square of distance from x*. Slower than exponential growth (which doubles at a fixed rate); faster than linear. At distance 2 from x*, V is 4×; at distance 4, V is 16×. The matrix P is what the SDP fits — its eigenstructure tells you which dimensions cost more to stray in.
Structural distance: Not raw Euclidean distance from x*, but distance weighted by the matrix P — i.e., V itself. Some directions in the 5-D space cost more (P weights them higher). A brand can be far from x* in raw terms but close in P-weighted terms (the "cheap" directions), or vice versa. Structural distance is what V actually measures.
k-means: Standard partitioning algorithm. Picks k centroids and assigns each point to the nearest centroid, iterating until centroids stabilize. We cluster brands by their last-quarter mean state to identify distinct attractors in the panel. k=2 separates the W17 board into 16 active competitors and 5 laggards.
Validation score: A 50/50 blend of decay fraction (share of holdout transitions where V actually decreases) and recovery AUC (how well lower V predicts whether a brand ends up in the top tertile at end of window). On a 0–1 scale, so 0.714 reads as 71.4%. Two conjoint criteria must clear, so it isn't a regression-style "fit"; it's a structural test. Threshold for acceptance is 0.65.
Decay fraction: Share of held-out transitions where V(xₜ₊₁) ≤ V(xₜ). The internal-consistency check on V — does the function actually decrease along the trajectories we did not train on?
In basin: A brand is in the basin if V(x_now) ≤ basin_radius (a learned threshold — the 75th percentile of training V values). Outside means the brand sits in a position the dynamics will not necessarily revert from under disturbance.
Stability class: stable (V well below threshold, V̇ non-positive); marginal (V near threshold or V̇ ambiguous); unstable (V above threshold; brand sits outside the sublevel set). W17 distribution: 10 stable, 6 marginal, 5 unstable.
Adjusted composite: Adjusted = composite × (1 − 0.05 × V / basin_radius). Take the original composite and apply a small penalty proportional to structural fragility (V in units of basin_radius). The penalty is at most 5% per basin-radius unit, so it compresses brand pairs whose composites are close but whose stability differs — surfaces rank inversions only where they really matter.
ISS probe: A robustness probe: inject a synthetic shock Δx of varying magnitude at a random period and forward-simulate to see how fast V returns to its pre-shock level. Stable basins give bounded recovery; chaotic regimes don't. The ISS chart reads relative shape across magnitudes, not absolute level across regimes.
Capital efficiency: −ΔV / cost-in-millions. A discrete intervention (e.g., +6 community strength at $1.2M) shifts the brand's state by Δx. The Lyapunov-implied value of that intervention is the V-drop per dollar spent. Same intervention applied at higher V yields larger absolute ΔV — the V gradient is a budget multiplier.
Composite: The weighted SBPI composite — a 0–100 number per brand combining the five dimensions. Answers where is this brand right now? The composite and the Lyapunov layer answer different questions (altitude vs. slope); both readings are produced from the same panel.
Four headline results from the overnight run on real ShurIQ panel data.
Each finding is reproducible from the artifacts in out/. Validation scores, basin geometries, and per-brand readings are written to disk and stable across reruns.
Per-brand readings are measured relative to the fitted equilibrium x*, not just to absolute scores.
Reproducibility table
| Industry | Designed regime | Synthetic baseline | Real-data run | Δ | Outcome |
|---|---|---|---|---|---|
| micro_drama_streaming | Stable equilibrium | 0.717 | 0.714 | −0.003 | Accepted |
| viral_short_form | Chaotic (Lorenz) | 0.512 | 0.516 | +0.004 | Rejected (correct) |
| subscription_streaming | Bistable | 0.626 | 0.622 | −0.004 | Rejected (correct) |
Real-data scores cluster within ±0.005 of the synthetic baseline across all three industries. The pipeline is not regime-shifted by the source change.
Per-cluster basin geometry
| Cluster | Label | Brands | Validation | Equilibrium x* | Basin radius |
|---|---|---|---|---|---|
| 0 | Active competitor | 16 | 0.895 | [60.8, 59.1, 73.1, 57.0, 66.3] | 3.66 |
| 1 | Laggard | 5 | 1.000 | [27.7, 25.6, 28.5, 24.6, 16.5] | 1.00 |
The laggard cohort is verza-tv, rtp, klip, both-worlds-freeli, mansa. Cluster 1's perfect 1.000 validation reflects a small, tight cohort whose holdout transitions all decay; basin radius is correspondingly narrow. Cluster 0 is the operational basin for the rest of the panel.
Capital-efficient interventions, three sample brands
Interventions are defined in specs/interventions.json.

| Brand | V | Position | Top intervention | ΔV | Cost ($M) | Capital efficiency |
|---|---|---|---|---|---|---|
| shortmax | 0.33 | Near equilibrium | Lean brand refresh | −0.081 | 0.35 | 0.233 |
| lifetime-ae | 2.44 | Marginal | Community / fandom program | −0.896 | 1.20 | 0.747 |
| google-100zeros | 10.13 | Far from basin | Community / fandom program | −2.033 | 1.20 | 1.694 |
Same intervention category at the same cost — community / fandom program at $1.20M — produces roughly 2.3× the per-dollar V reduction on the far-from-basin brand vs. the marginal one, and roughly 7× vs. the near-equilibrium brand's best option. The V gradient is doing the work: brands far from x* have steeper local slopes, so a fixed Δx maps to a larger ΔV.
ISS / disturbance recovery
For the accepted micro-drama V, random-direction shocks of magnitude ‖dx‖ ∈ {2, 5, 10} were injected at random periods and forward-simulated. Recovery is defined relative to pre-shock V, so transient noise is excluded. Median recovery for all three magnitudes was zero periods; max recovery was 1, 2, and 3 periods respectively. The basin is robust to small and medium shocks; even large shocks recover in ≤ 3 periods under modest mean-reversion stiffness. Comparator readings on chaotic / bistable industries use a fallback V whose absolute scale is not comparable, so they are presented as relative-shape signals only.
Composite vs. Lyapunov-adjusted ranking, with rank deltas and stability class.
Adjusted composite is the brand's composite minus a small penalty proportional to its V reading. Brands inside their basin are unaffected; brands outside their basin are demoted. Three rank flips are non-trivial: ReelShort ↑1, Netflix ↓2, Google/100Zeros ↓4.
| Rank | Brand | Cluster | Composite | Adjusted | V | Stability | In basin? | Δ rank |
|---|---|---|---|---|---|---|---|---|

(Per-brand rows are rendered in the Stability tab of the W17 republish; the headline moves are summarized above.)
Basin radius for the single-V fit is 4.41. In basin compares V to that radius. Stability class combines V vs. radius with V̇ estimate and distance to x*. Cluster 0 is the active-competitor basin (16 brands, equilibrium ~[61, 59, 73, 57, 66]); cluster 1 is the laggard basin (5 brands, equilibrium ~[28, 26, 28, 25, 16]).
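A worked example with the numbers above: Disney's composite 78.0 with V 6.49 against radius 4.41 gives adjusted = 78.0 × (1 − 0.05 × 6.49/4.41) ≈ 78.0 × 0.926 ≈ 72.3, a 7.4% penalty, because Disney sits about 1.47 basin radii from equilibrium.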
Stability distribution
Across the 21 W17 brands: 10 stable, 6 marginal, 5 unstable.
How V is fit, validated, and translated into a stack-ranking adjustment.
Compact reference for the regulatory-grade reader. All artifacts (joblib pickles, JSON outputs, parquet panels) are preserved in out/; the runner is scripts/run_experiments.py.
Pipeline at a glance
Real W10–W17 micro-drama panel: 21 brands × 8 weeks × 5 dimensions. Stored as parquet at out/panel.parquet. Synthetic comparator panels generated by synth_panel.py for chaotic and bistable regimes.
Quadratic SDP → SOS polynomial lift → neural Lyapunov, in order. Each fit produces a candidate V, an estimated equilibrium x*, and a basin radius (largest c for which decay holds on the training set). Quadratic accepted on real micro-drama at 0.714.
K-means on last-quarter mean position partitions the panel; one quadratic V is fit per cluster. Best-cluster decay fraction routes each holdout transition to its lowest-V cluster — proxy for combined-basin coverage.
Random-direction shock injection (magnitude 2, 5, 10) at random periods, forward-simulated under mean-reversion stiffness 0.10. Recovery time = periods until V drops back below pre-shock level. 105–400 probes per industry per magnitude.
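A single probe, sketched under the stated mean-reversion forward model (stiffness 0.10); the horizon and RNG handling here are illustrative:

```python
import numpy as np

def iss_probe(V, x0, x_star, magnitude, stiffness=0.10, horizon=12, rng=None):
    """One disturbance-recovery probe: random-direction shock of the given
    magnitude, forward-simulated under mean reversion toward x*.
    Returns periods until V falls back to its pre-shock level."""
    rng = rng or np.random.default_rng()
    u = rng.normal(size=x0.shape)
    x = x0 + magnitude * u / np.linalg.norm(u)  # inject the shock
    v_pre = V(x0)
    for t in range(horizon):
        if V(x) <= v_pre:
            return t                             # recovered
        x = x + stiffness * (x_star - x)         # mean-reversion step
    return None                                  # no recovery within horizon
```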
Ten archetypal interventions in specs/interventions.json, each with delta_x vector and dollar cost. For a given brand state x: capital_efficiency(u) = −[V(x+Δx) − V(x)] / cost_M. Ranked per brand.
Adjusted composite = composite − penalty proportional to V. Penalty is small inside the basin, larger outside. Drives the W17 rank deltas: ReelShort +1, Netflix −2, Google/100Zeros −4.
Acceptance threshold & max Lyapunov exponent
A V fit is accepted when validation score ≥ 0.65. Below 0.65 is reported as “no stable V found” — itself a substantive finding, not a failure. The pipeline also estimates the max Lyapunov exponent (MLE) via Rosenstein nearest-neighbor divergence as a cross-check. MLE < 0 indicates trajectory convergence; MLE > 0 indicates chaos. On real micro-drama, the MLE estimate was 3.28 — positive, indicating local stretching, but the V fit still validates because the basin-recovery dynamic dominates the small-scale stretching at the panel timescale. Two independent readings of stability, neither dispositive on its own, both worth reporting.
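For readers who want the cross-check's shape, a coarse Rosenstein-style sketch on pooled brand trajectories. Using cross-brand nearest neighbors in place of a Theiler window is a simplification, and eight-week trajectories make this a rough reading at best.

```python
import numpy as np

def rosenstein_mle(trajectories, k_max=3):
    """Slope of mean log-divergence between nearest-neighbor states,
    tracked k steps forward; the slope estimates the max Lyapunov exponent."""
    pts, owner, t0 = [], [], []
    for b, traj in trajectories.items():
        for t, x in enumerate(traj):
            pts.append(x); owner.append(b); t0.append(t)
    pts = np.asarray(pts)
    logs = [[] for _ in range(k_max + 1)]
    for i in range(len(pts)):
        dists = np.linalg.norm(pts - pts[i], axis=1)
        cand = [j for j in range(len(pts)) if owner[j] != owner[i]]
        if not cand:
            continue
        nn = min(cand, key=lambda j: dists[j])  # nearest cross-brand neighbor
        for k in range(k_max + 1):
            a, b2 = trajectories[owner[i]], trajectories[owner[nn]]
            if t0[i] + k < len(a) and t0[nn] + k < len(b2):
                d = np.linalg.norm(a[t0[i] + k] - b2[t0[nn] + k])
                if d > 0:
                    logs[k].append(np.log(d))
    mean_log = np.array([np.mean(l) for l in logs])
    return np.polyfit(np.arange(k_max + 1), mean_log, 1)[0]  # slope = MLE
```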
Honest limitations
• The fitted V is empirical, not formally proven. Real SOS verification requires polynomial dynamics, which the panel does not have. The candidate V is a learned object that satisfies decay on observed trajectories — useful, not bulletproof.
• Industry parameters θ_I are treated as fixed during the fit. Industries evolve. The pipeline must re-fit periodically (the operational target is a nightly run joined to the existing Optuna pipeline).
• Causality is implicit. A brand inside the basin is associated with future stability, but the V framework alone does not prove that interventions reliably move brands inside the basin. CLF-based recommendations need separate causal validation.
• Dimensionality is currently five. If the dimension count grows past fifteen, the SDP becomes expensive and the SOS lift explodes. Plan for PCA on the score space before scaling.
• Bistable / multi-modal industries need cluster-aware fitting. A single quadratic V cannot capture two basins. Per-cluster fit handles this on micro-drama; future verticals may need k > 2.
Recommended next steps
1. Promote per-cluster Lyapunov to default for the micro-drama vertical. Bimodality is structural, not noise. Single-V fits will under-report basin geometry going forward.
2. Wire V into the weekly publish. Append per-brand V, basin membership, and stability classification to each weekly snapshot. One-screen schema addition; gives clients a structural-fragility reading alongside the composite.
3. Build a control-Lyapunov-function (CLF) layer. Capital efficiency currently scores discrete interventions against a fixed Δx. A CLF layer would optimize Δx against a budget constraint (LP or QP, not combinatorial). Better recommendations.
4. Replace mean-reversion ISS forward model with a fitted dynamics model. The current probe uses generic 0.10 stiffness; fitting f(x) per industry from the panel would produce industry-specific recovery curves and let ISS distinguish soft vs. brittle stability.
5. Optuna joint objective. Stability-aware signal-weighting is the natural next move: maximize V-validation × composite predictive accuracy in a single Optuna search.