
Core Concepts

How Specter turns raw user behavior into verified product improvements.

The Specter Pipeline

1. Signals: user actions flow in
2. Contexts: routed by product area
3. Analysis: AI diagnoses patterns
4. Proposals: actionable improvements
5. Deployments: track implementation
6. Verification: before/after proof

Signals

A signal is any user action or event you send to Specter. Signals are the raw data that the analysis engine uses to diagnose problems and propose improvements.

Signal structure

Every signal has a type and an optional payload.

Required

type: What happened (e.g. swap_completed, error)

Optional

payload: Extra data about the event (any JSON object)
context_id: Route the signal to a specific context
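The signal structure above can be sketched as a small helper. This is an illustrative shape only, assuming a JSON record with the fields listed; `make_signal` and the exact wire format are not part of Specter's documented API.

```python
def make_signal(signal_type, payload=None, context_id=None):
    """Build a signal dict: a required type plus optional fields."""
    if not signal_type:
        raise ValueError("every signal needs a type")
    signal = {"type": signal_type}
    if payload is not None:
        signal["payload"] = payload        # any JSON-serializable object
    if context_id is not None:
        signal["context_id"] = context_id  # route to a specific context
    return signal

# A minimal signal carries only a type; richer signals attach a payload.
make_signal("swap_completed", payload={"pair": "ETH/USDC"})
```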

Signal polarity

Signals are categorized as positive, negative, or neutral based on your context configuration.

Positive signals: things going well (swap_completed, thumbs_up, task_done)
Negative signals: friction or failure (swap_failed, thumbs_down, error)
Neutral signals: neither positive nor negative (page_view, feature_used)

Minimum threshold: Specter requires at least 5 signals in the last 24 hours before running analysis. This ensures proposals are based on enough data to be meaningful.
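Polarity lookup and the 24-hour minimum can be sketched as follows. The signal-type sets mirror the examples above; the `timestamp` field and function names are assumptions for illustration, not Specter's actual implementation.

```python
from datetime import datetime, timedelta, timezone

# Example polarity sets, taken from the signal types listed above.
POSITIVE = {"swap_completed", "thumbs_up", "task_done"}
NEGATIVE = {"swap_failed", "thumbs_down", "error"}

def polarity(signal_type):
    """Classify a signal type; anything unlisted is neutral."""
    if signal_type in POSITIVE:
        return "positive"
    if signal_type in NEGATIVE:
        return "negative"
    return "neutral"

def ready_for_analysis(signals, now=None, minimum=5):
    """True once at least `minimum` signals arrived in the last 24 hours."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=24)
    recent = [s for s in signals if s["timestamp"] >= cutoff]
    return len(recent) >= minimum
```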

Contexts

A context describes a specific area of your product. It tells Specter what your product does and which signal types indicate success or failure.

Context configuration

Each context has a name, description, and polarity definitions.

Example: checkout-flow

“Multi-step checkout process. Users add items to cart, enter shipping info, and complete payment. Success = completed order. Failure = abandoned cart or payment error.”

Positive: order_completed, payment_success
Negative: cart_abandoned, payment_failed
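The checkout-flow example above can be expressed as a plain config object. The dict shape here is illustrative, not Specter's actual context schema.

```python
# The checkout-flow context from the example, as a config dict.
checkout_flow = {
    "name": "checkout-flow",
    "description": (
        "Multi-step checkout process. Users add items to cart, enter "
        "shipping info, and complete payment. Success = completed order. "
        "Failure = abandoned cart or payment error."
    ),
    "positive": ["order_completed", "payment_success"],
    "negative": ["cart_abandoned", "payment_failed"],
}
```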

Why contexts matter

Focused analysis: Specter analyzes signals per-context, so proposals are specific to one area of your product.

Polarity scoring: By defining which signal types are positive vs. negative, Specter can measure your positive/negative ratio and detect when things are getting worse.

Auto-routing: Products with a single context automatically route all signals to it. With multiple contexts, include a context_id in your signals.
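The auto-routing rule can be sketched as a small function: with one context everything routes there, with several a signal must carry a context_id. The function and error handling are illustrative assumptions.

```python
def route(signal, context_names):
    """Return the context a signal belongs to, or raise if ambiguous."""
    if len(context_names) == 1:
        # Single-context products: everything auto-routes.
        return context_names[0]
    context_id = signal.get("context_id")
    if context_id is None:
        raise ValueError("multiple contexts: signal must include context_id")
    if context_id not in context_names:
        raise KeyError(f"unknown context: {context_id}")
    return context_id
```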

Manage contexts in Settings → Contexts.


Proposals

A proposal is an AI-generated improvement recommendation based on your signal data. Each proposal includes a diagnosis, suggested change, evidence, and predicted impact.

Proposal anatomy

Every proposal contains five key fields:

Diagnosis: What's going wrong and why, based on signal patterns and polarity distribution.
Current State: A snapshot of your current signal metrics — positive rate, negative rate, volume.
Proposed Change: A specific, actionable improvement you can implement.
Evidence Summary: The signal patterns that support this diagnosis.
Predicted Impact: Expected improvement and how to measure it.

Proposal lifecycle

Pending: awaiting review
Approved: creates a deployment
Rejected: not applicable
Deferred: save for later

Priority score: Each proposal has a score from 1–100. Higher scores indicate more urgent issues based on signal volume and negative ratio.

Deployments

When you approve a proposal, Specter creates a deployment record. This captures the “before” state of your signal metrics so Specter can later compare them against the “after” state.

Before metrics

Captured at the moment you approve a proposal, scoped to the last 7 days of signals.

Positive Rate: 42.5% (% of signals that are positive)
Negative Rate: 28.1% (% of signals that are negative)
Total Signals: 156 (signal volume in the 7-day window)
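Computing a metrics snapshot like the one above is straightforward. The signal shape (a `polarity` field per record) and function name are assumptions for illustration.

```python
def before_metrics(signals):
    """Positive/negative rates and volume over a window of signals."""
    total = len(signals)
    positive = sum(1 for s in signals if s["polarity"] == "positive")
    negative = sum(1 for s in signals if s["polarity"] == "negative")
    return {
        "positive_rate": round(100 * positive / total, 1) if total else 0.0,
        "negative_rate": round(100 * negative / total, 1) if total else 0.0,
        "total_signals": total,
    }
```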

Verification

The verification loop is what makes Specter unique. After you implement a change, Specter automatically compares before/after signal metrics to determine if the change actually worked.

How verification works

1. You approve a proposal and implement the change in your product.

2. Specter's verification cron runs 4 times daily and counts new signals since deployment.

3. Once enough post-deployment signals accumulate (dynamic threshold based on traffic volume), Specter calculates the after metrics and compares sentiment scores.

4. The deployment is marked as Verified (improved), Degraded (got worse), or Unverified (no significant change).
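The final classification step can be sketched as a comparison of before/after sentiment scores. The >10% degraded threshold comes from the status definitions in this doc; applying the same 10% threshold symmetrically to improvement is an assumption, as is the function itself.

```python
def verification_status(before_score, after_score, threshold=0.10):
    """Classify a deployment from its before/after sentiment scores."""
    if before_score == 0:
        return "unverified"  # no baseline to compare against
    change = (after_score - before_score) / before_score
    if change > threshold:
        return "verified"    # sentiment improved
    if change < -threshold:
        return "degraded"    # sentiment dropped by >10%
    return "unverified"      # no significant change
```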

Verification statuses

Deployed: waiting for post-deployment signals to accumulate.
Verified: sentiment improved after the change. The proposal worked.
Degraded: sentiment dropped by >10%. The change may have had negative effects.
Unverified: no significant change in sentiment. Results are inconclusive.