Technical Dive


August 28, 2025

Page Load Reliability on Top Websites in the US


The AI megatrend has led many tools we take for granted to be reinvented so that they work seamlessly with AI models. Despite the inspiring promise of new AI-powered tools, reliability has been a consistent struggle for many new AI agents shipped to production. Complex operating environments like web browsers make this problem even more visible. New agentic browser infrastructure tools aim to solve this problem, but how exactly are they performing?

In the following test, we compare the reliability of Anchor Browser against Browserbase by measuring how many of the top 100 websites in the US each service can fetch successfully, without running into obstructions or bot challenges. We believe this demonstrates basic page load reliability across a range of websites, and how consistently each system can access a website that's relevant to you.

Why use a remote browser?

For years, developers have toiled with finicky browser automation tools such as Selenium, Playwright, and Puppeteer. An ever-changing set of obstacles (anti-bot measures, unpredictable page load speeds, and inconsistent environments, to name a few) has made even basic automation workflows challenging. That's created an opportunity for new agentic browser tools to provide remotely hosted, controllable browser sessions that can programmatically interact with websites just as a human user would.

The reliability problem: not all agentic browsers are created equal

There are dozens of potential pitfalls that could leave a remote browser not up to the task. Examples include improper session isolation, networking complexities, and fingerprints that show all the markings of an unwanted bot. When any one of these issues flares up, it can cause the agentic browser service to fail to load a web page. By testing various remote browser tools against a standard performance benchmark, we should be able to get a sense of each tool's reliability.
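To make the fingerprinting point concrete, here is a purely illustrative toy heuristic over signals that anti-bot systems are commonly known to inspect. This is not any vendor's actual detection logic, and the field names are hypothetical:

```python
def looks_automated(fingerprint: dict) -> bool:
    """Toy heuristic: flag a browser fingerprint as likely automated.

    Illustrative only; field names are hypothetical stand-ins for
    signals real anti-bot systems inspect.
    """
    suspicious = [
        # Automated Chromium sessions often expose navigator.webdriver = true
        fingerprint.get("navigator_webdriver") is True,
        # Default headless builds advertise themselves in the user agent
        "HeadlessChrome" in fingerprint.get("user_agent", ""),
        # A plugin-free profile is unusual for a real desktop browser
        fingerprint.get("plugins_count", 0) == 0,
    ]
    return any(suspicious)
```

Any one tripped signal is enough for a site to serve a challenge page instead of content, which a benchmark like the one below records as a failed load.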

Test methodology

We evaluate page load reliability by testing the homepage of 100 real-world websites – the top 100 websites by traffic in the US according to SimilarWeb – using a standard browser configuration from each of Anchor Browser and Browserbase, without proxies or stealth features. Tests were conducted sequentially on August 22, 2025, to control for website availability and network conditions. Both agentic browser services provide Playwright instances running Chromium. Each website is given a 30-second timeout to load, and success is determined by receiving a valid HTTP response and confirming a page title is present.
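The pass/fail criterion described above can be expressed as a small helper. This is our own restatement of the methodology, not code from either service:

```python
from typing import Optional

def load_succeeded(status: Optional[int], title: str) -> bool:
    """Mirror the benchmark's success rule: a valid HTTP response
    (status below 400) plus a non-empty page title."""
    if status is None or status >= 400:
        return False
    return bool(title.strip())
```

The title check matters because some challenge pages return a 200 status while serving an empty or interstitial document; requiring a title filters out a subset of those false positives.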

Below is an example code snippet using Anchor Browser's Browser Sessions API:

def test_website(self, url: str) -> bool:
        """Test if a website can be successfully fetched using Anchor Browser."""
        session_id = None
        browser = None
        
        try:
            logger.info(f"Testing website: {url}")
            
            # Create a new browser session
            session = self.anchor_client.sessions.create()
            session_id = session.data.id
            cdp_url = session.data.cdp_url
            
            # Connect to the browser using Playwright
            from playwright.sync_api import sync_playwright
            
            with sync_playwright() as p:
                # Connect to Anchor Browser session
                browser = p.chromium.connect_over_cdp(cdp_url)
                
                try:
                    # Get the first context and page
                    contexts = browser.contexts
                    if not contexts:
                        # Create a new context if none exists
                        context = browser.new_context()
                        page = context.new_page()
                    else:
                        context = contexts[0]
                        pages = context.pages
                        if not pages:
                            page = context.new_page()
                        else:
                            page = pages[0]
                    
                    # Navigate to the website
                    response = page.goto(url, timeout=30000)  # 30 second timeout
                    
                    # Check if the page loaded successfully
                    if response and response.status < 400:
                        # Additional check: make sure the page actually loaded content
                        title = page.title()
                        if title:  # Page has a title, likely loaded successfully
                            logger.info(f"Successfully tested {url} (Status: {response.status}, Title: {title[:50]}...)")
                            return True
                        else:
                            logger.warning(f"Page loaded but no title found for {url}")
                            return False
                    else:
                        status_code = response.status if response else "No response"
                        logger.warning(f"Failed to load {url} (Status: {status_code})")
                        return False
                        
                finally:
                    # Close the browser connection first
                    if browser:
                        try:
                            browser.close()
                        except Exception:
                            pass
                        
        except Exception as e:
            logger.warning(f"Failed to test {url}: {e}")
            return False
            
        finally:
            # Clean up the session after browser is closed
            if session_id:
                try:
                    self.anchor_client.sessions.terminate(session_id)
                    logger.debug(f"Session {session_id} terminated")
                except Exception as e:
                    # Session might have already expired
                    logger.debug(f"Failed to terminate session {session_id}: {e}")

View Anchor Browser Full Script

View Browserbase Full Script

Results are automatically recorded as “Yes/No” in a Google Sheet, providing a clear baseline success rate that shows how reliably the agentic browser service can access standard websites under normal conditions.

Test results

The results reveal a significant performance difference between the two agentic browser services. Anchor Browser achieved a 93% success rate, successfully loading 93 out of 100 websites, while Browserbase managed a 71% success rate with 71 successful loads. This 22 percentage point gap suggests that Anchor Browser's underlying infrastructure and browser management approach provides notably more reliable page loading for standard web automation tasks. Both services fell short of perfect reliability, highlighting inherent challenges in browser automation. However, Anchor Browser's relatively high success rate indicates better handling of the various issues that can cause page loads to fail in automated environments.

Raw data

Interestingly, some of the failed sites can actually be loaded successfully when accessed manually through Anchor Browser's and Browserbase's web playground interfaces, suggesting the failures may stem from programmatic initialization or API configuration rather than fundamental compatibility issues.

Conclusion

While browser automation remains inherently challenging, the choice of agentic browser service can significantly impact your success rate. Anchor Browser's 93% success rate demonstrates that reliable, production-ready browser automation is achievable when the underlying infrastructure is properly designed to handle the complexities of modern web environments.

For developers and teams building AI agents that need to interact with websites, these reliability differences translate directly into fewer failed workflows, reduced debugging time, and more predictable automation outcomes. Anchor Browser's strong performance demonstrates that infrastructure choices do contribute material value to production reliability. As with any web automation workflow, implementing proper error handling remains an important best practice.

As AI agents become more prevalent in business workflows, the reliability of the tools they depend on becomes increasingly critical. Choosing the best browser automation service isn't just about sophisticated features or pricing; it's about the foundational reliability that enables AI systems to work consistently across a range of production environments.

If you're evaluating agentic browser services for your own use case, we encourage you to run similar reliability tests on the specific websites and workflows that matter most to your application. You can get started with Anchor Browser's free tier to test your critical user journey, or explore the interactive playground to see how different configurations perform with your target sites.

Questions

Have questions about this testing methodology or want to discuss your specific browser automation challenges? We'd love to hear from you. Reach out to the Anchor Browser team via AnchorBrowser.io.

Get Notifications For Each Fresh Post
