Web Scraping using Spider API

This notebook shows how to use the open-source Spider web crawler together with AutoGen agents.

First, we need to install the Spider SDK and the AutoGen library.

! pip install -qqq pyautogen spider-client

Setting up the LLM configuration and the Spider API key is also required.
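If the keys are not already exported in your shell, you can set them inline first. The snippet below is a quick sketch with placeholder values, not real keys; replace them with your own:

import os

os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"  # placeholder, not a real key
os.environ["SPIDER_API_KEY"] = "<your-spider-api-key>"  # placeholder, not a real key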

import os

config_list = [
    {"model": "gpt-4o", "api_key": os.getenv("OPENAI_API_KEY")},
]

spider_api_key = os.getenv("SPIDER_API_KEY")

Let’s define the tools for scraping and crawling data from any website with Spider. Read more about tool use in this tutorial chapter.

from typing import Any, Dict, List

from spider import Spider
from typing_extensions import Annotated


def scrape_page(
    url: Annotated[str, "The URL of the web page to scrape"],
    params: Annotated[dict, "Dictionary of additional params."] = None,
) -> Annotated[Dict[str, Any], "Scraped content"]:
    # Initialize the Spider client with your API key. If no API key is given,
    # the client looks for SPIDER_API_KEY in your environment variables.
    client = Spider(spider_api_key)

    if params is None:
        params = {"return_format": "markdown"}

    scraped_data = client.scrape_url(url, params)
    return scraped_data[0]


def crawl_page(
    url: Annotated[str, "The URL of the domain to be crawled"],
    params: Annotated[dict, "Dictionary of additional params."] = None,
) -> Annotated[List[Dict[str, Any]], "Scraped content"]:
    # Initialize the Spider client with your API key. If no API key is given,
    # the client looks for SPIDER_API_KEY in your environment variables.
    client = Spider(spider_api_key)

    if params is None:
        params = {"return_format": "markdown"}

    crawled_data = client.crawl_url(url, params)
    return crawled_data
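
Before wiring the tools into agents, it helps to sanity-check them with a direct call. The line below is a minimal sketch (it assumes SPIDER_API_KEY is set in your environment); the output that follows shows the kind of payload Spider returns for https://spider.cloud.

# Sanity check (sketch): crawl the Spider homepage directly and inspect the payload.
crawl_page("https://spider.cloud")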
[{'content': 'Spider - The Fastest Web Crawling Service[Spider v1 Logo Spider ](/)[Pricing](/credits/new)[GitHub](https://github.com/spider-rs/spider) [Twitter](https://twitter.com/spider_rust) Toggle ThemeSign InRegisterTo help you get started with Spider, we’ll give you $200 in credits when you spend $100. [Get Credits](/credits/new)LangChain integration [now available](https://python.langchain.com/docs/integrations/document_loaders/spider)The World\'s Fastest and Cheapest Crawler API==========View Demo* Basic* StreamingExample requestPythonCopy```import requests, osheaders = {    \'Authorization\': os.environ["SPIDER_API_KEY"],    \'Content-Type\': \'application/json\',}json_data = {"limit":50,"url":"http://www.example.com"}response = requests.post(\'https://api.spider.cloud/crawl\',  headers=headers,  json=json_data)print(response.json())```Example ResponseUnmatched Speed----------### 2.5secs  ###To crawl 200 pages### 100-500x  ###Faster than alternatives### 500x  ###Cheaper than traditional scraping services Benchmarks displaying performance between Spider Cloud, Firecrawl, and Apify.Example used tailwindcss.com - 04/16/2024[See framework benchmarks ](https://github.com/spider-rs/spider/blob/main/benches/BENCHMARKS.md)Foundations for Crawling Effectively----------### Leading in performance ###Spider is written in Rust and runs in full concurrency to achieve crawling dozens of pages in secs.### Optimal response format ###Get clean and formatted markdown, HTML, or text content for fine-tuning or training AI models.### Caching ###Further boost speed by caching repeated web page crawls.### Smart Mode ###Spider dynamically switches to Headless Chrome when it needs to.Beta### Scrape with AI ###Do custom browser scripting and data extraction using the latest AI models.### Best crawler for LLMs ###Don\'t let crawling and scraping be the highest latency in your LLM & AI agent stack.### Scrape with no headaches ###* Proxy rotations* Agent headers* Avoid anti-bot detections* Headless chrome* Markdown LLM Responses### The Fastest Web Crawler ###* Powered by [spider-rs](https://github.com/spider-rs/spider)* Do 20,000 pages in seconds* Full concurrency* Powerful and simple API* 5,000 requests per minute### Do more with AI ###* Custom browser scripting* Advanced data extraction* Data pipelines* Perfect for LLM and AI Agents* Accurate website labelingSee what\'s being said----------[<img alt="Merrick Christensen" src="https://pbs.twimg.com/profile_images/1715432638574678016/EpXREbKV_normal.jpg" height="48" width="48" />](https://twitter.com/iammerrick/status/1787873425446572462)[Merrick Christensen](https://twitter.com/iammerrick/status/1787873425446572462)[@iammerrick ](https://twitter.com/iammerrick/status/1787873425446572462)· [Follow](https://twitter.com/intent/follow?screen_name=iammerrick)[](https://twitter.com/iammerrick/status/1787873425446572462)Rust based crawler Spider is next level for crawling & scraping sites. So fast. Their cloud offering is also so easy to use. Good stuff. 
[ github.com/spider-rs/spid… ](https://github.com/spider-rs/spider)[ 3:53 PM · May 7, 2024 ](https://twitter.com/iammerrick/status/1787873425446572462) [](https://help.twitter.com/en/twitter-for-websites-ads-info-and-privacy)[12 ](https://twitter.com/intent/like?tweet_id=1787873425446572462) [Reply ](https://twitter.com/intent/tweet?in_reply_to=1787873425446572462)[ Read more on Twitter ](https://twitter.com/iammerrick/status/1787873425446572462)[<img alt="William Espegren" src="https://pbs.twimg.com/profile_images/1725251248796864513/GjyNyMl8_normal.jpg" height="48" width="48" />](https://twitter.com/WilliamEspegren/status/1789419820821184764)[William Espegren](https://twitter.com/WilliamEspegren/status/1789419820821184764)[@WilliamEspegren ](https://twitter.com/WilliamEspegren/status/1789419820821184764)· [Follow](https://twitter.com/intent/follow?screen_name=WilliamEspegren)[](https://twitter.com/WilliamEspegren/status/1789419820821184764)Web crawler built in rust, currently the nr1 performance in the world with crazy resource management Aaaaaaand they have a cloud offer, that’s wayyyy cheaper than any competitor Name a reason for me to use anything else? [ github.com/spider-rs/spid… ](https://github.com/spider-rs/spider)[ 10:18 PM · May 11, 2024 ](https://twitter.com/WilliamEspegren/status/1789419820821184764) [](https://help.twitter.com/en/twitter-for-websites-ads-info-and-privacy)[2 ](https://twitter.com/intent/like?tweet_id=1789419820821184764) [Reply ](https://twitter.com/intent/tweet?in_reply_to=1789419820821184764)[ Read 1 reply ](https://twitter.com/WilliamEspegren/status/1789419820821184764)[<img alt="Troy Lowry" src="https://abs.twimg.com/sticky/default_profile_images/default_profile_normal.png" height="48" width="48" />](https://twitter.com/Troyusrex/status/1791497607925088307)[Troy Lowry](https://twitter.com/Troyusrex/status/1791497607925088307)[@Troyusrex ](https://twitter.com/Troyusrex/status/1791497607925088307)· [Follow](https://twitter.com/intent/follow?screen_name=Troyusrex)[](https://twitter.com/Troyusrex/status/1791497607925088307)[ @spider\\_rust ](https://twitter.com/spider_rust) First, the good: Spider has enabled me to speed up my scraping 20X and with a bit higher quality than I was getting before. I am having a few issues however. First, the documentation link doesn\'t work ([ spider.cloud/guides/(/docs/… ](https://spider.cloud/guides/(/docs/api)))I\'ve figured out how to get it to work…[ 3:54 PM · May 17, 2024 ](https://twitter.com/Troyusrex/status/1791497607925088307) [](https://help.twitter.com/en/twitter-for-websites-ads-info-and-privacy)[1 ](https://twitter.com/intent/like?tweet_id=1791497607925088307) [Reply ](https://twitter.com/intent/tweet?in_reply_to=1791497607925088307)[ Read 2 replies ](https://twitter.com/Troyusrex/status/1791497607925088307)FAQ----------Frequently asked questions about Spider<details itemscope="" itemprop="mainEntity" itemtype="https://schema.org/Question" open=""><summary class="cursor-pointer">What is Spider?----------</summary>Spider is a leading web crawling tool designed for speed and cost-effectiveness, supporting various data formats including LLM-ready markdown.</details><details itemscope="" itemprop="mainEntity" itemtype="https://schema.org/Question"><summary class="cursor-pointer">Why is my website not crawling?----------</summary>Your crawl may fail if it requires JavaScript rendering. 
Try setting your request to \'chrome\' to solve this issue.</details><details itemscope="" itemprop="mainEntity" itemtype="https://schema.org/Question"><summary class="cursor-pointer">Can you crawl all pages?----------</summary>Yes, Spider accurately crawls all necessary content without needing a sitemap.</details><details itemscope="" itemprop="mainEntity" itemtype="https://schema.org/Question"><summary class="cursor-pointer">What formats can Spider convert web data into?----------</summary>Spider outputs HTML, raw, text, and various markdown formats. It supports JSON, JSONL, CSV, and XML for API responses.</details><details itemscope="" itemprop="mainEntity" itemtype="https://schema.org/Question"><summary class="cursor-pointer">Is Spider suitable for large scraping projects?----------</summary>Absolutely, Spider is ideal for large-scale data collection and offers a cost-effective dashboard for data management.</details><details itemscope="" itemprop="mainEntity" itemtype="https://schema.org/Question"><summary class="cursor-pointer">How can I try Spider?----------</summary>Purchase credits for our cloud system or test the Open Source Spider engine to explore its capabilities.</details><details itemscope="" itemprop="mainEntity" itemtype="https://schema.org/Question"><summary class="cursor-pointer">Does it respect robots.txt?----------</summary>Yes, compliance with robots.txt is default, but you can disable this if necessary.</details>[API](/docs/api) [Pricing](/credits/new) [Guides](/guides) [About](/about) [Docs](https://docs.rs/spider/latest/spider/) [Privacy](/privacy) [Terms](/eula) [FAQ](/faq)© 2024 Spider from A11yWatch[GitHubGithub](https://github.com/spider-rs/spider) [X - Twitter ](https://twitter.com/spider_rust)', 'error': None, 'status': 200, 'url': 'https://spider.cloud'}]

Create the agents and register the tools.

from autogen import ConversableAgent, register_function

# Create the web scraper agent.
scraper_agent = ConversableAgent(
    "WebScraper",
    llm_config={"config_list": config_list},
    system_message="You are a web scraper and you can scrape any web page to retrieve its contents. "
    "Return 'TERMINATE' when the scraping is done.",
)

# Create the web crawler agent.
crawler_agent = ConversableAgent(
    "WebCrawler",
    llm_config={"config_list": config_list},
    system_message="You are a web crawler and you can crawl any domain, following its subpages. "
    "Return 'TERMINATE' when the crawling is done.",
)

# Create the user proxy agent.
user_proxy_agent = ConversableAgent(
    "UserProxy",
    llm_config=False,  # No LLM for this agent.
    human_input_mode="NEVER",
    code_execution_config=False,  # No code execution for this agent.
    is_termination_msg=lambda x: "terminate" in (x.get("content") or "").lower(),
    default_auto_reply="Please continue if not finished, otherwise return 'TERMINATE'.",
)

# Register the functions with the agents.
register_function(
    scrape_page,
    caller=scraper_agent,
    executor=user_proxy_agent,
    name="scrape_page",
    description="Scrape a web page and return the content.",
)

register_function(
    crawl_page,
    caller=crawler_agent,
    executor=user_proxy_agent,
    name="crawl_page",
    description="Crawl an entire domain, following subpages, and return the content.",
)

Start the conversation for scraping web data. We use the reflection_with_llm summary method to format the output into the desired form; the summary method is called after the conversation is completed and is given the complete history of the conversation.

# Scrape page
scraped_chat_result = user_proxy_agent.initiate_chat(
    scraper_agent,
    message="Can you scrape william-espegren.com for me?",
    summary_method="reflection_with_llm",
    summary_args={"summary_prompt": """Summarize the scraped content"""},
)
UserProxy (to WebScraper):

Can you scrape william-espegren.com for me?

--------------------------------------------------------------------------------

>>>>>>>> USING AUTO REPLY...
WebScraper (to UserProxy):

***** Suggested tool call (call_qCNYeQCfIPZkUCKejQmm5EhC): scrape_page *****
Arguments:
{"url":"https://www.william-espegren.com"}
****************************************************************************

--------------------------------------------------------------------------------

>>>>>>>> EXECUTING FUNCTION scrape_page...
UserProxy (to WebScraper):

UserProxy (to WebScraper):

***** Response from calling tool (call_qCNYeQCfIPZkUCKejQmm5EhC) *****
[{"content": "William Espegren - Portfoliokeep scrollingMADE WITHCSS, JSMADE BYUppsalaWilliam EspegrenWith \u00b7LoveOpen For Projects[CONTACT ME](https://www.linkedin.com/in/william-espegren/)[Instagram](https://www.instagram.com/williamespegren/)[LinkedIn](https://www.linkedin.com/in/william-espegren/)[Twitter](https://twitter.com/WilliamEspegren)[team-collaboration/version-control/github Created with Sketch.Github](https://github.com/WilliamEspegren)", "error": null, "status": 200, "url": "https://www.william-espegren.com"}]
**********************************************************************

--------------------------------------------------------------------------------

>>>>>>>> USING AUTO REPLY...
WebScraper (to UserProxy):

I successfully scraped the website "william-espegren.com". Here is the content retrieved:

```
William Espegren - Portfolio

keep scrolling

MADE WITH
CSS, JS

MADE BY
Uppsala

William Espegren
With Love

Open For Projects

[CONTACT ME](https://www.linkedin.com/in/william-espegren/)
[Instagram](https://www.instagram.com/williamespegren/)
[LinkedIn](https://www.linkedin.com/in/william-espegren/)
[Twitter](https://twitter.com/WilliamEspegren)
[Github](https://github.com/WilliamEspegren)
```

Is there anything specific you would like to do with this information?

--------------------------------------------------------------------------------
UserProxy (to WebScraper):

Please continue if not finished, otherwise return 'TERMINATE'.

--------------------------------------------------------------------------------

>>>>>>>> USING AUTO REPLY...
WebScraper (to UserProxy):

TERMINATE

--------------------------------------------------------------------------------
# Crawl page
crawled_chat_result = user_proxy_agent.initiate_chat(
    crawler_agent,
    message="Can you crawl william-espegren.com for me? I want the whole domain's information.",
    summary_method="reflection_with_llm",
    summary_args={"summary_prompt": """Summarize the crawled content"""},
)
UserProxy (to WebCrawler):

Can you crawl william-espegren.com for me? I want the whole domain's information.

--------------------------------------------------------------------------------

>>>>>>>> USING AUTO REPLY...
WebCrawler (to UserProxy):

***** Suggested tool call (call_0FkTtsxBtA0SbChm1PX085Vk): crawl_page *****
Arguments:
{"url":"http://www.william-espegren.com"}
***************************************************************************

--------------------------------------------------------------------------------

>>>>>>>> EXECUTING FUNCTION crawl_page...
UserProxy (to WebCrawler):

UserProxy (to WebCrawler):

***** Response from calling tool (call_0FkTtsxBtA0SbChm1PX085Vk) *****
[{"content": "William Espegren - Portfoliokeep scrollingMADE WITHCSS, JSMADE BYUppsalaWilliam EspegrenWith \u00b7LoveOpen For Projects[CONTACT ME](https://www.linkedin.com/in/william-espegren/)[Instagram](https://www.instagram.com/williamespegren/)[LinkedIn](https://www.linkedin.com/in/william-espegren/)[Twitter](https://twitter.com/WilliamEspegren)[team-collaboration/version-control/github Created with Sketch.Github](https://github.com/WilliamEspegren)", "error": null, "status": 200, "url": "http://www.william-espegren.com"}]
**********************************************************************

--------------------------------------------------------------------------------

>>>>>>>> USING AUTO REPLY...
WebCrawler (to UserProxy):

The crawl of [william-espegren.com](http://www.william-espegren.com) has been completed. Here is the gathered content:

---

**William Espegren - Portfolio**

Keep scrolling

**MADE WITH:** CSS, JS

**MADE BY:** Uppsala

**William Espegren**

**With Love**

**Open For Projects**

**[CONTACT ME](https://www.linkedin.com/in/william-espegren/)**

- [Instagram](https://www.instagram.com/williamespegren/)
- [LinkedIn](https://www.linkedin.com/in/william-espegren/)
- [Twitter](https://twitter.com/WilliamEspegren)
- [Github](https://github.com/WilliamEspegren)

---

If you need further information or details from any specific section, please let me know!

--------------------------------------------------------------------------------
UserProxy (to WebCrawler):

Please continue if not finished, otherwise return 'TERMINATE'.

--------------------------------------------------------------------------------

>>>>>>>> USING AUTO REPLY...
WebCrawler (to UserProxy):

TERMINATE

--------------------------------------------------------------------------------

The formatted output is stored in the summary of the chat result.

print(scraped_chat_result.summary)
# print(crawled_chat_result.summary)  # We show only one summary for cleaner output.
The website belongs to William Espegren, who is based in Uppsala and possesses skills in CSS and JavaScript. He is open to new projects. You can contact him through the following links:

- [LinkedIn](https://www.linkedin.com/in/william-espegren/)
- [Instagram](https://www.instagram.com/williamespegren/)
- [Twitter](https://twitter.com/WilliamEspegren)
- [GitHub](https://github.com/WilliamEspegren)

Feel free to reach out to him for project collaborations.
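
Besides summary, the ChatResult returned by initiate_chat also exposes the full message history and the token-usage cost. A brief sketch of those attributes:

# ChatResult also carries the raw conversation and token-usage statistics.
print(len(scraped_chat_result.chat_history))  # number of messages exchanged
print(scraped_chat_result.cost)  # token usage and cost per model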