August 19, 2025
How We Solved High-Speed Browser Agents (Anchor × Groq)
At Anchor, we obsess over speed. Not just raw execution speed, but the full stack of what makes browser agents fast, reliable, and production-ready. Browser agents aren’t toy demos—they’re infrastructure. And infrastructure speed comes from meticulous design at every layer.
Working with Groq, we’ve pushed our browser agents to 28 actions per minute—faster than anything else on the market today. Here’s how we got there.
Why Browser Agent Speed Matters
In real-world automation, milliseconds compound. A small delay in initialization, network routing, or LLM response time translates into sluggish experiences and reduced throughput at scale. For agents meant to power mission-critical workflows, speed isn’t nice to have—it’s the baseline.
The Anatomy of Speed
We broke down the problem into six components, asking at each layer: where do we lose time, and how do we get it back?
Each stage became a point of optimization.
1. Browser Initialization
Most browser frameworks launch fresh sessions for every task. That’s slow.
We built a pre-warmed browser pool, keeping Chrome sessions ready to go. Cold starts became warm starts.
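A minimal sketch of the idea, not Anchor's actual implementation: pre-launch a fixed number of sessions behind a thread-safe queue so tasks draw warm sessions instead of cold-starting Chrome. The `launch` factory is a hypothetical stand-in for whatever starts a real browser session.

```python
import queue
import threading

class WarmPool:
    """Keeps N pre-launched browser sessions ready so tasks skip cold starts."""

    def __init__(self, launch, size=4):
        self._launch = launch          # factory that starts one browser session
        self._pool = queue.Queue()
        # Pre-warm: launch all sessions up front, in parallel.
        threads = [threading.Thread(target=lambda: self._pool.put(launch()))
                   for _ in range(size)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    def acquire(self, timeout=5):
        """Hand out a warm session; callers block briefly if the pool is drained."""
        return self._pool.get(timeout=timeout)

    def release(self, session):
        """Return a session so the pool stays warm for the next task."""
        self._pool.put(session)
```

The key property: `acquire` is a queue pop, not a process launch, so per-task startup cost drops to near zero.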
2. Network Speed
Agents often bottleneck on network requests. We solved this with dedicated ISP proxy IPs and intelligent caching.
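The proxy routing lives in infrastructure config, but the caching half can be sketched in a few lines: a TTL cache keyed by URL, so idempotent fetches (static assets, repeated page loads) skip the round-trip entirely. The injectable `clock` is there purely to make the sketch testable.

```python
import time

class TTLCache:
    """Cache idempotent responses to avoid repeated network round-trips."""

    def __init__(self, ttl_seconds=60.0, clock=time.monotonic):
        self._ttl = ttl_seconds
        self._clock = clock            # injectable for testing
        self._entries = {}             # url -> (expires_at, body)

    def get(self, url, fetch):
        """Return a cached body if still fresh; otherwise fetch and store it."""
        entry = self._entries.get(url)
        now = self._clock()
        if entry and entry[0] > now:
            return entry[1]            # cache hit: zero network cost
        body = fetch(url)
        self._entries[url] = (now + self._ttl, body)
        return body
```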
3. Agent ↔ Browser Latency
In most setups, the agent talks to the browser over a remote channel. We collapsed that distance by running the agent and browser in the same container, creating an ultra-low-latency path.
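One way to picture the wiring (hypothetical names throughout; `browser.internal`, port 9222, and the endpoint path are illustrative, not Anchor's real topology): when agent and browser share a container, the agent reaches Chrome's DevTools endpoint over loopback instead of crossing the network.

```python
def devtools_url(co_located=True, remote_host="browser.internal", port=9222):
    """Pick the DevTools endpoint: loopback when co-located, remote otherwise."""
    host = "127.0.0.1" if co_located else remote_host
    return f"ws://{host}:{port}/devtools/browser"
```

Loopback round-trips are microseconds; cross-host round-trips are milliseconds. Multiplied across every DOM read and action, that gap is a large share of per-action latency.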
4. LLM Latency
Large language models drive agent reasoning. Anchor supports all major providers, but by default we integrate with Groq—the fastest inference engine we’ve seen in production.
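A rough sketch of what "supports all providers, defaults to Groq" can look like in code, under the assumption of a simple registry pattern (the `Provider` shape and names here are illustrative, not Anchor's API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]   # prompt -> model output

class ProviderRegistry:
    """Pluggable inference backends; the default name is assumed to be 'groq'."""

    def __init__(self, default="groq"):
        self._providers = {}
        self._default = default

    def register(self, provider):
        self._providers[provider.name] = provider

    def complete(self, prompt, provider=None):
        # Route to the requested provider, or the fast default.
        return self._providers[provider or self._default].complete(prompt)
```

Swapping providers becomes a one-argument change, so the fast path stays the default without locking anyone in.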
5. Context Size
Vision input is powerful, but expensive. Full DOM dumps are heavy. Instead, we default to parsed DOM representations, which balance speed and accuracy.
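To make "parsed DOM representation" concrete, here is a toy version using only the standard library: keep just the interactive elements, each with a short index the model can reference in its actions. A production parser would handle far more attributes and visibility; this is the shape of the idea.

```python
from html.parser import HTMLParser

INTERACTIVE = {"a", "button", "input", "select", "textarea"}

class CompactDOM(HTMLParser):
    """Reduce a full page to a short list of elements the model can act on."""

    def __init__(self):
        super().__init__()
        self.elements = []

    def handle_starttag(self, tag, attrs):
        if tag in INTERACTIVE:
            attrs = dict(attrs)
            # Prefer whatever human-readable label is available.
            label = (attrs.get("aria-label") or attrs.get("placeholder")
                     or attrs.get("value") or "")
            self.elements.append(f"[{len(self.elements)}] <{tag}> {label}".rstrip())

def parse_dom(html):
    parser = CompactDOM()
    parser.feed(html)
    return "\n".join(parser.elements)
```

A page that would be tens of kilobytes as raw HTML collapses to a handful of indexed lines, so the model reads less and answers faster.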
6. Batching Actions
Every request round-trip costs time. Through prompt engineering, we encourage the model to batch actions whenever possible.
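The batching side can be sketched as two pieces (both illustrative, not Anchor's actual prompt or schema): a prompt hint that asks for every safe action as a JSON array, and a tolerant parser that turns the model's reply into an executable list.

```python
import json

# Hypothetical prompt fragment asking the model to batch.
BATCH_HINT = (
    "Return ALL actions you can safely take before the page changes, "
    'as a JSON array, e.g. [{"action": "type", "target": 3, "text": "hi"}, '
    '{"action": "click", "target": 5}].'
)

def parse_batch(model_output):
    """Parse a batched action list from the model's reply.

    Malformed JSON yields an empty batch; a single object is wrapped in a list.
    """
    try:
        actions = json.loads(model_output)
    except json.JSONDecodeError:
        return []
    return actions if isinstance(actions, list) else [actions]
```

If the model returns five actions per reply instead of one, five round-trips collapse into one, which is where much of the actions-per-minute gain comes from.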
The Result
Put together, these optimizations raise the ceiling to 28 actions per minute. That's faster than any other browser agent solution available today.
It’s not just a benchmark. It’s the difference between an agent that feels sluggish and one that feels instant. Between a prototype and production-ready infrastructure.
Looking Ahead
Speed was step one. The next challenge is scaling this across enterprise use cases—handling authentication, compliance, and the messy realities of the web. But the foundation is here: browser agents that run at the speed of thought.