Apache-2.0 · adapters for LangGraph, CrewAI, AutoGen

Stop guessing
max_iterations.

Every agent in production ships with a number somebody made up. LoopGain replaces it with the loop gain Aβ — a real-time Barkhausen-stability measurement — and tells you the iteration to stop on. It's the math from 1921. We're applying it to the loop you wrote last week.

$ pip install loopgain
v 0.1.7 · py ≥ 3.10 · deps 0 · tests 118 passing

[live chart: checkout-rewriter loop, iter 1–20, FAST_CONVERGE — per-iteration Aβ_smoothed, ε, and ETA]
§01 · the problem

Every agent in production is running on a guess.

Search any agent codebase for max_iterations. You'll find 5. Sometimes 10. Nobody can defend the number, because no one has a principled way to pick it.

It's the universal pre-crash hack. It fires too early on a loop that was three iterations from converging, or too late on one that started diverging in iteration two — the model is now five rewrites deep into a hallucination and the bill is real.

we just set it to five and hope.
— literally every team that ships an agent
01
The loop hits the ceiling without converging. Two iterations short of target. You ship iteration 5 and call it a feature.
02
The loop is oscillating by iteration 3. The reviser is undoing what it just wrote. Every later token is waste, and a fixed cap can't see it.
03
The loop was about to converge — and got cut off. One more iteration would have landed at target. The cap fired anyway.

three failure modes a static cap can't tell apart · what Aβ tells you in one number

§02 · the fix

Five named bands. One number per iteration. A decision either way.

Each iteration produces an error signal. The ratio of consecutive errors is the loop gain Aβ. Smooth it over a small window and you get a stability reading. Five bands. Two of them say "stop." Three say "keep going." No more guessing.

FAST_CONVERGE
Aβ < 0.30

Error is shrinking by more than 3× per iteration. The loop is smashing it; the only correct move is to stay out of the way.

KEEP GOING
CONVERGING
0.30 ≤ Aβ < 0.85

Healthy progress. Error contracts by a meaningful fraction every step. Most well-tuned agent loops live here.

KEEP GOING
STALLING
0.85 ≤ Aβ < 0.95

The loop is moving, barely. Each iteration is shaving fractions off the error. Almost always a sign you're at a plateau that won't break on its own.

WATCH
OSCILLATING
0.95 ≤ Aβ ≤ 1.05

The model is undoing its own last revision. On average, the next iteration will land no better than the one two steps back. Rollback. Stop.

ROLLBACK · STOP
DIVERGING
Aβ > 1.05

Each iteration is making the output strictly worse. The longer it runs, the more money you light on fire. Rollback. Stop.

ROLLBACK · STOP
Aβ_smooth = EMA(eₙ / eₙ₋₁, w = 3)

The whole library is a few hundred lines around this ratio. The result is a stop decision your model never had to make.
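The ratio-plus-bands idea fits in a few lines. The sketch below is illustrative, not LoopGain's actual source: a plain EMA over consecutive-error ratios, then a lookup against the five thresholds above.

```python
# Illustrative sketch of the band logic — not LoopGain's actual source.
def classify(ab: float) -> str:
    if ab < 0.30:
        return "FAST_CONVERGE"
    if ab < 0.85:
        return "CONVERGING"
    if ab < 0.95:
        return "STALLING"
    if ab <= 1.05:
        return "OSCILLATING"
    return "DIVERGING"

def smooth_gain(errors, window=3):
    # EMA over consecutive-error ratios e_n / e_{n-1}
    alpha = 2 / (window + 1)
    ema = None
    for prev, cur in zip(errors, errors[1:]):
        ratio = cur / prev
        ema = ratio if ema is None else alpha * ratio + (1 - alpha) * ema
    return ema
```

An error trace of [8, 4, 2, 1] gives a smoothed gain of 0.5 — squarely CONVERGING; [1, 2, 4] gives 2.0 — DIVERGING, rollback.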

§03 · integration

Three lines, then your loop knows when to stop.

The raw API is two methods: observe() and should_continue(). Drop them around the loop you already have. Framework adapters wrap the same primitive — pick the one that matches your stack, or stay raw.

# amber stripe = the lines you add. everything else is your existing loop.
from loopgain import LoopGain

lg = LoopGain(target_error=0.1, max_iterations=20)
while lg.should_continue():
    errors = verifier.verify(output)
    lg.observe(errors, output=output)
    output = reviser.revise(output, errors)

result = lg.result
# result.outcome      → "converged" · "oscillating" · "diverged" · "max_iterations"
# result.best_output  → argmin(E(n)) — the actual best draft, not the last
# lg.eta              → log(ε_target / ε) / log(Aβ_smooth) — iterations to ε (live)
# result.gain_margin  → 1 / max(Aβ_smooth) — > 1 means stable headroom
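The `eta` and `gain_margin` formulas in the comments above are plain arithmetic. A sketch of that math (not the library's internals), assuming geometric error decay:

```python
import math

# The arithmetic behind lg.eta and result.gain_margin — a sketch,
# not the library's internals.
def eta(eps: float, eps_target: float, ab_smooth: float) -> float:
    # Geometric decay e_{n+k} ~ e_n * Ab^k  =>  k = log(eps_target/eps) / log(Ab)
    return math.log(eps_target / eps) / math.log(ab_smooth)

def gain_margin(ab_history) -> float:
    # > 1 means every smoothed reading stayed inside the stable region
    return 1.0 / max(ab_history)
```

At ε = 0.8, target 0.1, and a smoothed gain of 0.5, the closed form gives exactly 3 iterations to target.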
# amber stripe = the lines you add. pip install 'loopgain[langgraph]'
from loopgain import LoopGain
from loopgain.integrations import LangGraphAdapter
graph = build_verify_revise_graph().compile()
lg = LoopGain(target_error=0.1, max_iterations=20)
adapter = LangGraphAdapter(
    lg=lg,
    error_fn=lambda update: len(update.get("verifier", {}).get("errors", [])),
)
final_state = adapter.run(graph, {"draft": initial})
# adapter.stream() yields each step if you want the full trace.
# adapter.arun() / adapter.astream() are the async counterparts.
# amber stripe = the lines you add. pip install 'loopgain[crewai]'
from loopgain import LoopGain
from loopgain.integrations import CrewAIAdapter
crew = Crew(agents=[writer_agent, verifier_agent], tasks=[task])
lg = LoopGain(target_error=0.1, max_iterations=20)
adapter = CrewAIAdapter(
    lg=lg,
    task_error_fn=lambda task_output: count_failed_checks(task_output.raw),
)
with adapter:               # installs callbacks; uninstalls on exit
    adapter.install(crew)
    result = crew.kickoff()

# Observations land on `lg.result` — same shape as the raw API.
# Existing callbacks you had installed are chained, not clobbered.
# amber stripe = the lines you add. pip install 'loopgain[autogen]'
from autogen_agentchat.teams import RoundRobinGroupChat
from loopgain import LoopGain
from loopgain.integrations import AutoGenAdapter
team = RoundRobinGroupChat(participants=[generator, verifier])
lg = LoopGain(target_error=0.1, max_iterations=20)
adapter = AutoGenAdapter(
    lg=lg,
    error_fn=lambda msg: parse_verifier_score(msg.content),
    observe_sources={"verifier"},      # only verifier drives observe()
)
result = await adapter.run(team, task="draft, verify, revise")
# Legacy v0.2 ConversableAgent.initiate_chat is not supported.
§04 · what you get

A small library that does one thing precisely.

01

Five-band decision engine

One smoothed reading per iteration. Three bands say go, two bands say stop. The library never asks you to interpret a number — the band is the decision.

02

Best-so-far buffer

Every iteration's output is held in a ring buffer keyed by error. On rollback you don't get the latest draft — you get argmin(E(n)), the actually-best one the loop produced.

03

Closed-form ETA

If you're CONVERGING and the gain is stable, the number of iterations to your target ε is a logarithm, not a guess. log(ε_t / ε) / log(Aβ). Displayed live.

04

Framework adapters

LangGraph conditional edge. CrewAI callback. AutoGen termination check. Raw LoopGain class if you have your own runner. All four wrap the same core.

05

Opt-in telemetry

Off by default. If you turn it on, we receive band transitions and gain readings — never your prompts or outputs. The contract is in the README and the receiver is open-source.

06

Optional adapter installs

The core wheel has zero runtime deps. Framework adapters are pip extras — pip install 'loopgain[langgraph]', [crewai], [autogen], or [all]. Your service tree stays clean.
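The best-so-far buffer from item 02 is a small data structure. A minimal sketch of the idea (assumed shape — the library's internals may differ): keep recent `(error, output)` pairs in a ring buffer and return the argmin on rollback.

```python
from collections import deque

# Minimal sketch of a best-so-far buffer; the library's internals may differ.
class BestSoFar:
    def __init__(self, maxlen: int = 8):
        self.buf = deque(maxlen=maxlen)   # ring buffer of (error, output)

    def push(self, error: float, output) -> None:
        self.buf.append((error, output))

    def best(self):
        # argmin over E(n): the lowest-error draft, not the latest
        return min(self.buf, key=lambda pair: pair[0])[1]
```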

§05 · hosted dashboard

The same readings, on a screen built for an operations room.

Six panels over your real fleet. Loop Health Map, Convergence Profiles, Waste Report, Gain Margin Distribution, Rollback Log, ETA Accuracy. Alerts on band transitions. A live demo runs against synthetic traffic so you can poke at it without integrating first.

Loop Health Map 38 loops · demo
checkout-rewriter · 12.4k · 0.42
sql-resolver · 8.1k · 0.61
spec-extractor · 3.9k · 0.58
draft-emailer · 1.2k · 0.91
faq-router · 6.4k · 0.18
copy-revise · 2.7k · 0.74
incident-sum · 240 · 1.18
memo-loop · 410 · 0.98
policy-rewriter · 2.0k · 0.68
prompt-debugger · 3.0k · 0.22
tile size = run volume · tile color = Aβ band
Waste Report · 30d $3,847 saved · demo
caught DIVERGING
$2,981
caught OSCILLATING
$612
rescued from cutoff
$254
total saved
$3,847
vs. fixed max_iterations=5 baseline · gpt-4-class pricing
ETA Accuracy ±0.7 iter · demo
predicted iter · actual iter — closed-form log estimate
view live demo · runs against synthetic traffic — no auth, no signup.
§06 · pricing

Free if you self-host. Paid when you'd rather not.

Apache-2.0 across the stack — library, receiver, dashboard. Self-host the whole thing if you want; the code is there. The Team and Enterprise tiers are what you buy when you'd rather we run it: history, alerts, per-loop calibration, and the on-call pager.

tier · 01

Open Source

Free forever · Apache-2.0
  • Full LoopGain library — zero runtime deps
  • Framework adapters (LangGraph, CrewAI, AutoGen)
  • Best-so-far buffer + rollback
  • Closed-form ETA + first-prediction capture
  • Self-host telemetry receiver + dashboard
$ pip install loopgain
tier · 02

Team

$199 / month · per workspace
  • Everything in Open Source
  • Hosted dashboard — no infra, no auth, no patches
  • 30-day run history, retained for you
  • Alerts delivered to Slack, email, or webhooks
  • Waste Report — dollar-accurate ROI for stakeholders
  • Per-iteration scrubber + share links
try the demo →

paid plans launching soon — join the waitlist

tier · 03

Enterprise

$999 / month · starts here
  • Everything in Team
  • Unlimited history
  • Custom Aβ thresholds per loop type
  • Read & ingest API
  • SSO (SAML / OIDC)
  • Audit log + evidence pack for SOC 2 prep
  • Dedicated support channel
§07 · why it works
1921 Heinrich Barkhausen Technische Hochschule Dresden
2026 LoopGain v0.1.7 your agent loop

One hundred and five years of control theory, finally pointed at the right loop.

The Barkhausen criterion is the foundational stability result in feedback engineering. It says: if the loop gain is greater than one, your system oscillates or diverges. If it's less than one, it converges. Every amplifier, every PID controller, every closed-loop electronic circuit since 1921 has used this idea.

An LLM agent loop is a feedback loop. Output in, scored output out, fed back as the next input. The math doesn't care that the gain element is a transformer instead of a vacuum tube. Apply Barkhausen and you get the same answer you'd get in any other control system: this loop is stable, this one isn't, stop the unstable one before it costs you anything else.
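The claim is easy to see numerically. A toy demo with made-up gains (not real agent data): the error recurrence e_{n+1} = g · e_n converges if and only if |g| < 1.

```python
# Toy demonstration of the Barkhausen intuition with made-up gains:
# e_{n+1} = g * e_n converges iff |g| < 1.
def iterate(e0: float, gain: float, steps: int) -> float:
    e = e0
    for _ in range(steps):
        e *= gain
    return e

stable = iterate(1.0, 0.5, 10)    # gain < 1: error shrinks toward zero
unstable = iterate(1.0, 1.2, 10)  # gain > 1: error grows without bound
```

After ten steps, the stable loop's error has collapsed by three orders of magnitude while the unstable one's has grown sixfold — which is exactly the distinction the band thresholds encode.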