How to Crawl a Website With EasySpider

Rubén del Campo
March 4, 2025 · 4 min read

EasySpider offers a no-code visual system that allows anyone, including non-technical users, to crawl websites in a few clicks.

In this tutorial, you'll learn how to set up and use EasySpider to extract data from websites at scale. Let's begin!

What Is EasySpider?

EasySpider is an open-source, no-code browser automation and data collection software. Its intuitive point-and-click interface allows you to automate browsers and execute web scraping tasks without writing any code.

You can visually create tasks by selecting the content of interest on a web page and following the on-screen prompts to complete your design.

For technical users, EasySpider is both flexible and highly customizable. It allows you to configure your crawler according to your project needs. For example, you can add extensions, execute JavaScript, and manipulate the browser using Selenium statements.
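
For a sense of what that flexibility looks like, here's a minimal Selenium sketch in Python. It only illustrates the kind of statement EasySpider can execute, not EasySpider's internals, and the URL and CSS selector are assumptions based on this tutorial's demo site:

example_selenium.py
# pip3 install selenium
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://www.scrapingcourse.com/ecommerce/')

# execute custom JavaScript, e.g., scroll to the bottom of the page
driver.execute_script('window.scrollTo(0, document.body.scrollHeight);')

# read an element the same way a point-and-click selection would
first_product = driver.find_element(By.CSS_SELECTOR, '.product')
print(first_product.text)

driver.quit()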

EasySpider is a robust solution with numerous features to simplify browser automation and web crawling tasks. For more details on these features, check out the EasySpider documentation.

Start Crawling With the EasySpider Web Crawler

We'll use the ScrapingCourse E-commerce test site as a target page to demonstrate how to crawl websites using EasySpider.

Screenshot: ScrapingCourse.com e-commerce homepage

By the end of this tutorial, you'll have a functional EasySpider task for extracting product information (product name, price, and image URL) from the target website.

Step 1: Set up EasySpider

To set up EasySpider, first navigate to the project's GitHub Releases page and download the latest version for your operating system.

Once your download is complete, unzip the archive into a directory where you'd like to store your project and launch the EasySpider executable (EasySpider.exe on Windows).

This displays the main user interface, as seen below.

Next, select the Design/Modify Task option and choose a design mode (Start Clean Mode).

This action automatically loads your default browser and displays a task list in a task-manager-style interface. The list includes placeholder tasks for popular websites to help you get started quickly.

That's it! You're all set up.

Step 2: Open the Target URL With EasySpider

Click the New Task button, enter your target URL, and click the Start Design button.

This opens the Workflow Manager window and navigates to your target URL using the initial browser instance.

The Workflow Manager is displayed above the browser, as in the image below, allowing you to easily select page elements and design tasks.

Step 3: Extract Data From the Website

EasySpider allows you to extract data from websites by clicking page elements and following the prompts in the Toolbox to design your task.

In this tutorial, we'll extract each product's name, price, and image URL.

Below is a step-by-step guide.

Right-click on the product name and choose the Select All option to select all products on the page.

Then, choose the Loop click this element prompt in the Toolbox to click every product card on the page.

This opens the selected product page. Right-click on the product name, price, and image to select each data point. Then click the Collect Data option.

This adds the Collect Data operation to your flowchart and opens its properties window on the right side of your Workflow Manager.

The operation properties window allows you to customize an operation according to your needs. In this case, click each field name and edit it to match its corresponding data point (Product Name, Product Price, and Product Image).

In your flowchart, you'll notice that the Collect Data and Click Element operations are nested in a Loop.

So, even though the Collect Data operation only shows the data points of one product, the flowchart instructs EasySpider to click through each product card and extract the name, price, and image URL.
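
If you're curious how this maps to code, below is a rough Python/Selenium equivalent of the same loop. The CSS selectors are assumptions based on the demo site's WooCommerce-style markup, so treat it as a sketch rather than a drop-in script:

loop_products.py
# pip3 install selenium
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://www.scrapingcourse.com/ecommerce/')

# collect the product page URLs first so navigation doesn't invalidate elements
links = [
    a.get_attribute('href')
    for a in driver.find_elements(
        By.CSS_SELECTOR, '.product a.woocommerce-LoopProduct-link'
    )
]

# visit each product page and collect the three data points
products = []
for link in links:
    driver.get(link)
    products.append({
        'Product Name': driver.find_element(By.CSS_SELECTOR, 'h1.product_title').text,
        'Product Price': driver.find_element(By.CSS_SELECTOR, 'p.price').text,
        'Product Image': driver.find_element(
            By.CSS_SELECTOR, '.woocommerce-product-gallery__image img'
        ).get_attribute('src'),
    })

driver.quit()
print(products)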

That's it! You've designed a task to extract product information.

Step 4: Export the Scraped Data

Exporting data to a usable format, such as CSV, is essential for quick analysis. EasySpider makes this simple with its built-in export options.

To access these options, click the Save Task button at the top left of your Workflow Manager window. This opens a dialog box containing various options for saving your task.

Locate the Export Data Format dropdown menu and select CSV. Other export options include XLSX, TXT, JSON, and MySQL.

After that, adjust any remaining options to fit your needs and click the Save button to save your task.

To view your results, navigate to your project directory and open the Data folder. The output file will be named Task_0.
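
If you want to sanity-check the export programmatically, a few lines of Python will do. This sketch assumes the task exported to CSV as Data/Task_0.csv with the field names from this tutorial; adjust the path to match your setup:

read_export.py
import csv

# the path and column names assume this tutorial's defaults
with open('Data/Task_0.csv', newline='', encoding='utf-8') as f:
    for row in csv.DictReader(f):
        print(row['Product Name'], row['Product Price'], row['Product Image'])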

Congratulations, you've successfully completed your first crawl using EasySpider.

Avoid Getting Blocked While Crawling With EasySpider

Getting blocked is a common challenge when web crawling. This is because web crawlers exhibit obvious bot-like patterns that make it easy for anti-bot solutions to distinguish them from human users.

Here's the result of trying to access the Antibot Challenge page, a protected website, using EasySpider.

Screenshot: ScrapingCourse Cloudflare block page

This happens because EasySpider is unable to pass the anti-bot challenge and ultimately gets blocked.

EasySpider allows some flexibility for handling website challenges. For example, you can rotate proxies and set custom user agents to disguise your web activity.

For more information on using proxies, check out our web scraping proxy guide.
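
Since EasySpider drives a Selenium-style browser under the hood, those two tweaks look roughly like this in plain Python. The proxy address and user-agent string below are placeholders, not working values:

proxy_user_agent.py
# pip3 install selenium
from selenium import webdriver

options = webdriver.ChromeOptions()
# placeholder user-agent string; rotate real ones in production
options.add_argument(
    '--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
    'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'
)
# placeholder proxy endpoint; replace with a real one
options.add_argument('--proxy-server=http://<PROXY_HOST>:<PROXY_PORT>')

driver = webdriver.Chrome(options=options)
driver.get('https://httpbin.io/user-agent')  # shows the user agent the server sees
print(driver.page_source)
driver.quit()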

However, it's important to note that these methods do not work against advanced anti-bot systems.

To guarantee you can crawl any website without getting blocked, consider ZenRows' Universal Scraper API, the most reliable solution for scalable web crawling.

ZenRows is an easy-to-use, scalable web scraping solution. You only need a single API call to bypass any anti-bot system and extract your desired data.

Some of its features include advanced anti-bot bypass out of the box, headless browser support, geo-located requests, real user-agent spoofing, request header management, and more.

Here's ZenRows in action against the same anti-bot challenge where EasySpider failed.

To follow along with this example, sign up for your free API key. This will direct you to the Request Builder page.

Screenshot: building a scraper with the ZenRows Request Builder

Input your target URL and activate Premium Proxies and JS Rendering boost mode.

Next, select your preferred language and choose the API option. ZenRows works with any language and provides ready-to-use snippets for the most popular ones.

We'll use Python for this example.

Copy the generated code on the right to your editor for testing.

Your code should look like this:

scraper.py
# pip3 install requests
import requests

url = 'https://www.scrapingcourse.com/antibot-challenge'
apikey = '<YOUR_ZENROWS_API_KEY>'
params = {
    'url': url,
    'apikey': apikey,
    'js_render': 'true',
    'premium_proxy': 'true',
}
response = requests.get('https://api.zenrows.com/v1/', params=params)
print(response.text)

This code bypasses the anti-bot challenge and retrieves the HTML.

Output
<html lang="en">
<head>
    <!-- ... -->
    <title>Antibot Challenge - ScrapingCourse.com</title>
    <!-- ... -->
</head>
<body>
    <!-- ... -->
    <h2>
        You bypassed the Antibot challenge! :D
    </h2>
    <!-- other content omitted for brevity -->
</body>
</html>
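
From here, you can feed the returned HTML into any parser to pull out the data you need. Here's a minimal sketch using BeautifulSoup; the parser choice is an assumption, and any HTML parser works:

parse_result.py
# pip3 install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

params = {
    'url': 'https://www.scrapingcourse.com/antibot-challenge',
    'apikey': '<YOUR_ZENROWS_API_KEY>',
    'js_render': 'true',
    'premium_proxy': 'true',
}
response = requests.get('https://api.zenrows.com/v1/', params=params)

# extract the confirmation heading from the challenge page
soup = BeautifulSoup(response.text, 'html.parser')
print(soup.find('h2').get_text(strip=True))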

Congratulations! You now know how to crawl any website without getting blocked.

Conclusion

You've now learned how to crawl a website using EasySpider, from setting up your project to exporting your data. Here's a quick recap of your progress.

You now know how to:

  • Access a target website using EasySpider.
  • Extract specific data from a target page.
  • Export scraped data to CSV.

Bear in mind that to use these skills, you must first overcome anti-bot challenges. While EasySpider's no-code architecture makes it beginner-friendly, advanced anti-bot solutions can still block your EasySpider requests.

To crawl any website without getting blocked, consider ZenRows, an easy-to-implement and scalable solution. Sign up now to try ZenRows for free!
