Comparison chart between BrowserBase and WebRun highlighting real Chrome browsers and natural language task automation

If you've been using BrowserBase for headless browser automation, you're familiar with the challenges—bot detection, anti-fingerprinting, and the constant cat-and-mouse game of making headless browsers appear "real."

WebRun takes a different approach: real Chrome browsers running on real desktops. No headless emulation. No fingerprint spoofing. Just actual browsers that behave exactly like a human's would—because they are.

The best part? You can be up and running in under five minutes.

Why Consider the Switch

Before we dive into code, here's how the two platforms compare:

| Feature | BrowserBase | WebRun |
| --- | --- | --- |
| Browser Type | Headless browsers | Real Chrome on real desktops |
| Automation Model | Write Playwright/Puppeteer code | Natural language tasks |
| Response Time | Standard browser + LLM latency | Sub-100ms decisions (hybrid CNN-LLM) |
| Setup | Install SDK + framework | Zero installation (just HTTP) |
| AI Integration | Requires Stagehand + external LLM keys | Built-in, no additional keys needed |
| Human-in-the-loop | Manual implementation | Built-in guardrails + manual takeover |
| MCP Configuration | Multiple env vars + npx command | Single URL |
| Live View | Debugger URL (requires SDK call) | Native WebRTC (returned with session) |

The Key Difference

BrowserBase gives you managed headless browser infrastructure that you control with code (Playwright, Puppeteer, Selenium) or their Stagehand framework. You write scripts, manage browser contexts, and handle the automation logic yourself. And because they're headless, you'll often need stealth mode, proxy rotation, and fingerprint management to avoid detection.

WebRun runs real Chrome browsers on real desktop environments—not headless emulation. This means sites see an actual browser with genuine fingerprints, eliminating most bot detection issues entirely. You describe what you want in plain English, and the AI agent handles the clicking, typing, navigating, and extracting. No selectors to maintain. No scripts to debug when sites change. No stealth mode required.

Migration by Example

Simple Task Execution

BrowserBase (Stagehand):

import { Stagehand } from "@browserbasehq/stagehand";
import { z } from "zod";

// Stagehand creates and manages its own BrowserBase session
const stagehand = new Stagehand({
  env: "BROWSERBASE",
  apiKey: process.env.BROWSERBASE_API_KEY,
  projectId: process.env.BROWSERBASE_PROJECT_ID
});

await stagehand.init();
await stagehand.page.goto("https://news.ycombinator.com");

const { posts } = await stagehand.page.extract({
  instruction: "extract the top 10 post titles and URLs",
  schema: z.object({
    posts: z.array(z.object({
      title: z.string(),
      url: z.string()
    }))
  })
});

await stagehand.close();

WebRun (REST API):

const response = await fetch("https://connect.webrun.ai/start/run-task", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer enig_..."
  },
  body: JSON.stringify({
    taskDetails: "Go to Hacker News and extract the top 10 post titles and URLs"
  })
});

const result = await response.json();
console.log(result.result.data.message);

That's it. No SDK installation. No schema definitions. No session management. One HTTP request.
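If you call the run-task endpoint from several places, a tiny wrapper keeps the boilerplate in one spot. A minimal sketch; the runTask name and the error handling are ours, not part of any WebRun SDK:

```javascript
// Hypothetical helper around WebRun's run-task endpoint.
// Throws on non-2xx responses so callers don't parse error bodies as results.
async function runTask(apiKey, taskDetails) {
  const res = await fetch("https://connect.webrun.ai/start/run-task", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`
    },
    body: JSON.stringify({ taskDetails })
  });
  if (!res.ok) throw new Error(`run-task failed with status ${res.status}`);
  return res.json();
}
```

Usage is a single call: `await runTask("enig_...", "Extract the top 10 post titles from Hacker News")`.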

Multi-Step Workflows

BrowserBase requires you to manage browser contexts and chain Stagehand commands. WebRun lets you create a persistent session and send sequential tasks.

BrowserBase:

const stagehand = new Stagehand({ env: "BROWSERBASE", ... });
await stagehand.init();

await stagehand.page.goto("https://amazon.com");
await stagehand.page.act("search for wireless keyboards");
await stagehand.page.act("click on the first result");

const product = await stagehand.page.extract({ instruction: "extract product name and price", schema });
await stagehand.close();

WebRun:

const headers = {
  "Content-Type": "application/json",
  "Authorization": "Bearer enig_..."
};

// Create session
const session = await fetch("https://connect.webrun.ai/start/start-session", {
  method: "POST",
  headers,
  body: JSON.stringify({
    taskDetails: "Go to amazon.com",
    startingUrl: "https://amazon.com"
  })
}).then(r => r.json());

// Send follow-up tasks
await fetch("https://connect.webrun.ai/start/send-message", {
  method: "POST",
  headers,
  body: JSON.stringify({
    sessionId: session.sessionId,
    message: {
      actionType: "newTask",
      newState: "start",
      taskDetails: "Search for wireless keyboards and click the first result"
    }
  })
});

// Extract data
const result = await fetch("https://connect.webrun.ai/start/send-message", {
  method: "POST",
  headers,
  body: JSON.stringify({
    sessionId: session.sessionId,
    message: {
      actionType: "newTask",
      newState: "start",
      taskDetails: "Extract the product name and price"
    }
  })
}).then(r => r.json());

// Clean up
await fetch("https://connect.webrun.ai/start/send-message", {
  method: "POST",
  headers,
  body: JSON.stringify({
    sessionId: session.sessionId,
    message: { actionType: "state", newState: "terminate" }
  })
});
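The three send-message calls above differ only in their payloads, so in practice you might factor them into a helper. A sketch under our own naming (buildMessage and sendTask are illustrative, not WebRun APIs); the endpoint and message shapes are taken from the examples above:

```javascript
const WEBRUN_BASE = "https://connect.webrun.ai/start";

// Build the payload for a follow-up task in an existing session.
function buildMessage(sessionId, taskDetails) {
  return {
    sessionId,
    message: { actionType: "newTask", newState: "start", taskDetails }
  };
}

// POST a follow-up task and return the parsed response.
async function sendTask(apiKey, sessionId, taskDetails) {
  const res = await fetch(`${WEBRUN_BASE}/send-message`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`
    },
    body: JSON.stringify(buildMessage(sessionId, taskDetails))
  });
  if (!res.ok) throw new Error(`send-message failed with status ${res.status}`);
  return res.json();
}
```

With this in place, the search and extract steps collapse to two sendTask calls against the same sessionId.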

MCP Integration

If you're using MCP with Claude Desktop or Cline, the difference in configuration is dramatic.

BrowserBase:

{
  "mcpServers": {
    "browserbase": {
      "command": "npx",
      "args": ["@browserbasehq/mcp-server-browserbase"],
      "env": {
        "BROWSERBASE_API_KEY": "your_api_key",
        "BROWSERBASE_PROJECT_ID": "your_project_id",
        "GEMINI_API_KEY": "your_gemini_api_key"
      }
    }
  }
}

BrowserBase needs three environment variables, a local npx command, and your own LLM API key (Gemini in this example).

WebRun:

{
  "mcpServers": {
    "WebRun": {
      "url": "https://connect.webrun.ai/mcp/sse?apiKey=YOUR_API_KEY"
    }
  }
}

One URL. No local commands. No additional LLM keys—WebRun's AI is built-in.

Real-time Streaming

Both platforms offer real-time session viewing. The difference is in how you access it.

BrowserBase:

// Requires additional SDK call after session creation
const liveViewLinks = await bb.sessions.debug(session.id);
const liveViewLink = liveViewLinks.debuggerFullscreenUrl;

// Embed in iframe
<iframe src={liveViewLink} />

WebRun:

// Streaming URLs returned automatically with session creation
const session = await fetch("https://connect.webrun.ai/start/start-session", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer enig_..."
  },
  body: JSON.stringify({
    taskDetails: "Navigate to example.com"
  })
}).then(r => r.json());

// URLs ready immediately—no extra API call
const iframe = document.createElement("iframe");
iframe.src = session.streaming.webViewURL;
document.body.appendChild(iframe);

// Or use WebRTC for lower latency
// session.streaming.webRTCURL

WebRun returns streaming URLs directly in the session response—no additional SDK calls needed.

OpenAI-Compatible Endpoint

Already using LangChain, LlamaIndex, or the OpenAI SDK? WebRun provides a drop-in compatible endpoint:

from openai import OpenAI

client = OpenAI(
    api_key="enig_...",
    base_url="https://connect.webrun.ai/v1"
)

response = client.chat.completions.create(
    model="webrun-agent",
    messages=[
        {"role": "user", "content": "Go to google.com and search for Anthropic"}
    ]
)

print(response.choices[0].message.content)

BrowserBase doesn't offer an OpenAI-compatible endpoint—you'd need to build this integration yourself.
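Because the endpoint speaks the OpenAI wire format, you can also call it from JavaScript with plain fetch, no SDK at all. A sketch: the /v1/chat/completions path is our assumption, inferred from the base_url plus the OpenAI convention (the official SDKs append it automatically):

```javascript
// Call WebRun's OpenAI-compatible endpoint directly over HTTP.
// Assumes the standard OpenAI chat-completions request/response shape.
async function chatWithAgent(apiKey, content) {
  const res = await fetch("https://connect.webrun.ai/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`
    },
    body: JSON.stringify({
      model: "webrun-agent",
      messages: [{ role: "user", content }]
    })
  });
  if (!res.ok) throw new Error(`chat completion failed with status ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```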

New Capabilities You Get for Free

Built-in Human-in-the-Loop

WebRun provides two ways to intervene when the AI agent needs help:

Guardrails (Programmatic Responses)

When the agent encounters login forms, purchase confirmations, CAPTCHAs, or ambiguous instructions, it pauses and triggers a guardrail. Your application can respond automatically:

import { io } from "socket.io-client";

const socket = io("https://connect.webrun.ai", {
  auth: { sessionId: session.sessionId },
  transports: ["websocket"]
});

socket.on("message", (data) => {
  if (data.type === "guardrail_trigger") {
    console.log("Agent needs input:", data.data.value);
    
    // Respond programmatically
    socket.emit("message", {
      actionType: "guardrail",
      taskDetails: "The password is: hunter2",
      newState: "resume"
    });
  }
});

Manual Takeover (Direct Browser Control)

For situations requiring human judgment—complex CAPTCHAs, nuanced form inputs, or debugging—you can take direct control of the browser:

// Pause the agent and enable manual control
await fetch("https://connect.webrun.ai/start/send-message", {
  method: "POST",
  headers,
  body: JSON.stringify({
    sessionId: session.sessionId,
    message: { actionType: "takeover" }
  })
});

// Use the live stream URL to view and interact with the browser
// session.streaming.webViewURL

// When done, release control back to the agent
await fetch("https://connect.webrun.ai/start/send-message", {
  method: "POST",
  headers,
  body: JSON.stringify({
    sessionId: session.sessionId,
    message: { actionType: "release" }
  })
});

With BrowserBase, you'd need to implement this pause-and-resume logic yourself using their Live View feature.

Sub-100ms Decision Making

WebRun's hybrid CNN-LLM architecture means the agent makes decisions in under 100 milliseconds. Instead of waiting for a full LLM round-trip on every action, visual element detection happens locally, and the LLM is only consulted for complex decisions.

Real Browsers, Zero Detection

Because WebRun runs real Chrome on real desktops—not headless browsers—you don't need stealth mode, proxy rotation, or fingerprint spoofing. Sites see a genuine browser environment with authentic characteristics. No more cat-and-mouse games with bot detection.

Zero Setup, Real Browsers

No SDKs to install. No browser binaries to manage. No Playwright versions to track. WebRun is pure HTTP—works from any language, any environment, any CI/CD pipeline. And because it's real Chrome on real desktops, you get authentic browser behavior without any configuration.

Quick Reference

| BrowserBase | WebRun |
| --- | --- |
| Headless browsers | Real Chrome on real desktops |
| bb.sessions.create() | POST /start/start-session |
| stagehand.act() | Natural language in taskDetails |
| stagehand.extract() | Natural language in taskDetails |
| stagehand.close() | {"actionType": "state", "newState": "terminate"} |
| Stagehand MCP (3 env vars) | WebRun MCP (1 URL) |
| Live view via sessions.debug() | Streaming URLs in session response |
| Bring your own LLM | AI built-in |
| DIY human intervention | Guardrails + manual takeover |
| Stealth mode for detection | Real browsers, no stealth needed |

Getting Started

  • Get your API key at app.webrun.ai
  • Make your first API call—no installation needed
  • Try the MCP integration with Claude Desktop
  • Enable video streaming for debugging

The migration is straightforward because WebRun removes complexity rather than adding it. You stop writing automation scripts and start describing what you want. The AI handles the rest.

Check out our full documentation at docs.webrun.ai for detailed guides and examples.