We’re excited to announce that Anchor has been officially benchmarked as the most reliable browser agent platform - with the lowest failure rate from CAPTCHAs and bot detection across the board.

This milestone isn’t an accident. It’s the result of foundational engineering guided by a simple belief:
For agents to replace humans in real work, they have to behave like humans.
In practice, that meant making bold, technical choices early:
- We use fast machines, not throttled cloud VMs
- We run real, headful browsers, not headless or emulated versions
- We give agents fine-grained, human-like control over the browser - from scroll speed to mouse trajectory
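To make "human-like control" concrete: instead of teleporting the cursor to a target, a mouse movement can be replayed as many small waypoints along a curved path with eased timing. The sketch below is purely illustrative - it is not Anchor's implementation - and simply samples a quadratic Bezier curve with a randomized control point:

```python
import math
import random

def human_mouse_path(start, end, steps=40):
    """Generate cursor waypoints along a curved path with eased timing.

    Illustrative only: a quadratic Bezier curve with a randomized control
    point, sampled with ease-in/ease-out spacing so the cursor accelerates
    and decelerates the way a human hand does.
    """
    (x0, y0), (x1, y1) = start, end
    # Randomized control point bends the path off the straight line
    cx = (x0 + x1) / 2 + random.uniform(-80, 80)
    cy = (y0 + y1) / 2 + random.uniform(-80, 80)
    path = []
    for i in range(steps + 1):
        t = i / steps
        t = (1 - math.cos(t * math.pi)) / 2  # ease-in / ease-out
        x = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * cx + t ** 2 * x1
        y = (1 - t) ** 2 * y0 + 2 * (1 - t) * t * cy + t ** 2 * y1
        path.append((round(x, 1), round(y, 1)))
    return path

# A real driver would replay these points with small, jittered delays.
points = human_mouse_path((100, 100), (640, 360))
```

The same easing idea applies to scrolling: many short scroll deltas with variable pacing, rather than one instantaneous jump.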
But we didn’t stop at infrastructure. We built our own browser engine: Anchor Chromium - a purpose-built fork of Chromium optimized for agents, not humans.
This engine gives Anchor agents a native, low-level interface to the browser - letting them blend in with real user behavior and avoid common automation fingerprints.
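One well-known example of an automation fingerprint: the WebDriver specification requires automated browsers to expose `navigator.webdriver` as `true`, and headless builds leak further signals such as a "HeadlessChrome" user-agent string. A hypothetical site-side detector might score a session against a handful of such signals - the weights and threshold below are made up for illustration, not a real detection library:

```python
# Hypothetical bot-detection scorer: weighs well-known automation signals.
# Signal names mirror real browser properties; weights and threshold are
# invented for this sketch.
AUTOMATION_SIGNALS = {
    "navigator_webdriver": 0.6,  # true in WebDriver-controlled browsers
    "headless_user_agent": 0.3,  # "HeadlessChrome" in the UA string
    "zero_plugins": 0.1,         # headless builds often report no plugins
}

def bot_score(session: dict) -> float:
    """Sum the weights of every automation signal the session exhibits."""
    return sum(w for name, w in AUTOMATION_SIGNALS.items() if session.get(name))

def looks_automated(session: dict, threshold: float = 0.5) -> bool:
    return bot_score(session) >= threshold

# A headful browser with no WebDriver flag trips none of these checks.
human_like = {"navigator_webdriver": False, "headless_user_agent": False}
headless_bot = {"navigator_webdriver": True, "headless_user_agent": True,
                "zero_plugins": True}
```

A headful, natively-controlled browser avoids this entire class of checks because the signals simply never appear.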
That’s how we’ve made progress on what we believe is the core challenge in this space: reliability.
Why reliability is the inflection point
Most AI agents today still live in the world of demos and pilots. They work sometimes - until they don’t.
But for AI agents to run in production - inside real businesses, across real workflows - they need infrastructure that can guarantee success every time. Not "just enough to impress in a POC."
That’s what Anchor unlocks. And we’re seeing the results:
- Fewer failures due to bot protection
- Higher success rates across authentication gates
- Agents that complete workflows predictably, day after day
Reliability isn’t a layer we added later - it’s baked into every level of the system, from the CPU type to the browser fingerprint.
What this means for teams shipping AI automation
If you’re building agents that interact with third-party web apps, reliability is likely your biggest blocker to scale.
With Anchor, you get:
- Deterministic execution at scale
- Enterprise-ready security and observability
- A browser engine that adapts to how real users behave, not how scripts operate
Browser agents don’t have to be fragile anymore. We’ve proven it. And we’re just getting started.
This benchmark was made possible through a collaboration with Halluminate. Together we launched browserbench.ai, a new public tool that measures browser agent reliability against real-world obstacles like bot detection and CAPTCHAs.