## Documentation Index

Fetch the complete documentation index at https://gologin.com/docs/llms.txt and use it to discover all available pages before exploring further.
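An agent can pull the index with a single GET request, for example:

```python
import requests

# The index is a plain-text list of documentation pages an agent can explore.
index = requests.get("https://gologin.com/docs/llms.txt").text
print(index)
```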
GoLogin Web Unlocker is a scraping API that returns fully rendered HTML for any URL. For AI agents, this means reliable web access without browser orchestration: one HTTP call per page.
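That single call looks like this (a minimal sketch; `YOUR_API_KEY` and the target URL are placeholders):

```python
import requests

# One request in, fully rendered HTML out: no browser to launch or manage.
html = requests.get(
    "https://parsing.webunlocker.gologin.com/v1/scrape",
    params={"url": "https://example.com"},
    headers={"apikey": "YOUR_API_KEY"},
).text
```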
## Feed page content into an LLM
Scrape a page and pass the HTML to a model for extraction, summarization, or analysis — no parsing logic needed.
### Python + Claude
```python
import requests
import anthropic

# One call to Web Unlocker returns the fully rendered HTML.
html = requests.get(
    "https://parsing.webunlocker.gologin.com/v1/scrape",
    params={"url": "https://news.ycombinator.com"},
    headers={"apikey": "YOUR_API_KEY"}
).text

# Hand the HTML to Claude for extraction.
client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": f"Extract the top 5 story titles and URLs from this HTML:\n\n{html[:8000]}"
        }
    ]
)
print(message.content[0].text)
```
### JavaScript + Claude
```javascript
import Anthropic from "@anthropic-ai/sdk";

// One call to Web Unlocker returns the fully rendered HTML.
const html = await fetch(
  "https://parsing.webunlocker.gologin.com/v1/scrape?url=" +
    encodeURIComponent("https://news.ycombinator.com"),
  { headers: { apikey: "YOUR_API_KEY" } }
).then((r) => r.text());

// Hand the HTML to Claude for extraction.
const client = new Anthropic();
const message = await client.messages.create({
  model: "claude-sonnet-4-5",
  max_tokens: 1024,
  messages: [
    {
      role: "user",
      content: `Extract the top 5 story titles and URLs from this HTML:\n\n${html.slice(0, 8000)}`,
    },
  ],
});
console.log(message.content[0].text);
```
## Give the agent a scrape tool

Define a `scrape` tool backed by Web Unlocker. The agent decides when to fetch a page and which URL to scrape.
```python
import requests
import anthropic

def scrape(url: str) -> str:
    """Fetch a URL and return its rendered HTML."""
    return requests.get(
        "https://parsing.webunlocker.gologin.com/v1/scrape",
        params={"url": url},
        headers={"apikey": "YOUR_API_KEY"}
    ).text

tools = [
    {
        "name": "scrape",
        "description": "Fetch the rendered HTML of any URL, including JavaScript-heavy and protected sites.",
        "input_schema": {
            "type": "object",
            "properties": {
                "url": {"type": "string", "description": "The URL to scrape"}
            },
            "required": ["url"]
        }
    }
]

client = anthropic.Anthropic()
messages = [
    {"role": "user", "content": "What is the current price of the iPhone 16 Pro on apple.com?"}
]

while True:
    response = client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=1024,
        tools=tools,
        messages=messages
    )
    # The agent is done when it answers instead of requesting a tool.
    if response.stop_reason == "end_turn":
        print(response.content[0].text)
        break
    # Run each requested scrape and return the (truncated) HTML as a tool result.
    tool_results = []
    for block in response.content:
        if block.type == "tool_use" and block.name == "scrape":
            html = scrape(block.input["url"])
            tool_results.append({
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": html[:10000]
            })
    messages.append({"role": "assistant", "content": response.content})
    messages.append({"role": "user", "content": tool_results})
```
On each tool call, Web Unlocker handles the retrieval itself: JavaScript rendering, anti-bot bypass, proxy rotation, and CAPTCHA solving.
## Use the Web Unlocker SDK
For TypeScript/Node.js agents, the `gologin-webunlocker` SDK provides typed methods:
```typescript
import { WebUnlocker } from "gologin-webunlocker";

const client = new WebUnlocker({
  apiKey: process.env.GOLOGIN_WEBUNLOCKER_API_KEY!,
});

// Get structured metadata instead of raw HTML.
const { data } = await client.scrapeJSON("https://example.com");
console.log(data.title, data.description, data.headings);

// Get clean text for LLM context.
const { text } = await client.scrapeText("https://example.com");

// Batch scrape with concurrency control.
const results = await client.batchScrape(
  ["https://example.com/1", "https://example.com/2", "https://example.com/3"],
  { concurrency: 3 }
);
```
See the Web Unlocker SDK reference for all methods.
## Use the CLI
For agents that call shell commands (like Claude Code), the CLI is the simplest path:
```bash
# plain text — good for LLM context
gologin-web-access scrape-text https://example.com

# structured JSON — title, description, headings, links
gologin-web-access scrape-json https://example.com

# batch scrape — many pages at once
gologin-web-access batch-scrape url1 url2 url3 --format text --summary

# search the web
gologin-web-access search "best antidetect browser 2026" --limit 10

# extract structured data with a schema
gologin-web-access extract https://example.com --schema ./schema.json
```
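The schema file format for `extract` isn't shown here. Assuming the command accepts a standard JSON Schema describing the fields to pull, a `./schema.json` might look like this (a sketch, not the documented format):

```json
{
  "type": "object",
  "properties": {
    "title": { "type": "string" },
    "price": { "type": "string" },
    "in_stock": { "type": "boolean" }
  },
  "required": ["title"]
}
```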
See the Web Access CLI reference for all commands.
## Common agent patterns
| Pattern | How |
| --- | --- |
| Research agent | Agent calls the `scrape` tool → reads pages → synthesizes findings |
| Price monitoring | `batch-scrape` product pages on a schedule → compare with the previous run (see the sketch below) |
| Competitive intelligence | Crawl competitor sites → feed pages to an LLM for analysis |
| Lead enrichment | `scrape-json` company pages → extract structured data |
| Content aggregation | `search` + `scrape-text` → build curated feeds |
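As one concrete instance, below is a minimal sketch of the price-monitoring row, written against the raw scrape endpoint. The product URLs, the `prices.json` state file, and the naive `extract_price` regex are placeholder assumptions; a real pipeline would use `batch-scrape` or an LLM extraction step instead.

```python
import json
import re
import requests

API = "https://parsing.webunlocker.gologin.com/v1/scrape"
PRODUCT_URLS = [  # placeholder URLs
    "https://example.com/product/1",
    "https://example.com/product/2",
]

def fetch(url: str) -> str:
    """One Web Unlocker call per page; rendered HTML comes back."""
    resp = requests.get(API, params={"url": url}, headers={"apikey": "YOUR_API_KEY"})
    return resp.text

def extract_price(html: str) -> str | None:
    # Naive placeholder: first $-prefixed number in the page.
    # Real pages need real parsing, or an LLM extraction step.
    match = re.search(r"\$\d[\d,.]*", html)
    return match.group(0) if match else None

# Load the previous run's prices (empty on the first run).
try:
    with open("prices.json") as f:
        previous = json.load(f)
except FileNotFoundError:
    previous = {}

# Scrape each page and flag any price that changed since the last run.
current = {}
for url in PRODUCT_URLS:
    current[url] = extract_price(fetch(url))
    if url in previous and previous[url] != current[url]:
        print(f"{url}: {previous[url]} -> {current[url]}")

with open("prices.json", "w") as f:
    json.dump(current, f)
```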