
7 Anti-Scraping Techniques You Need to Know

October 18, 2022 · 7 min read

Data is the world's most valuable asset. Companies know this very well, which is why they try to protect their data at all costs. Some of that data is publicly accessible on the web, but companies don't want competitors to extract it with web scraping. That's why more and more websites are adopting anti-scraping measures.

In this article, you'll learn everything you need to know about the most popular anti-scraping techniques. Of course, you'll see how to defeat them.

What Is Anti-Scraping?

Anti-scraping refers to all the techniques, tools, and approaches used to protect online data against scraping. In other words, anti-scraping makes it more difficult to automatically extract data from a web page, typically by identifying and blocking requests that come from bots or malicious users.

For this reason, anti-scraping also includes anti-bot protection and anything else you can do to block scrapers. If you aren't familiar with the term, anti-bot is a technology that aims to block unwanted bots. Not all bots are bad: for example, Googlebot crawls your website so that Google can index it.

Now, you might be asking the following question.

What is the Difference Between Anti-Scraping and Scraping?

Scraping and anti-scraping are two opposite concepts. Web scraping is about extracting data from web pages using scripts, while anti-scraping is about protecting the data contained in those pages.

The two concepts are inherently connected. Anti-scraping techniques evolve based on what methods scrapers use to retrieve data from a web page. At the same time, scraping technologies evolve to prevent a scraper from being recognized and blocked.

Now, the next question should arise.

How Do You Stop Scraping?

There are several techniques behind anti-scraping technology, and plenty of anti-scraping software and services are available. These technologies are becoming increasingly complex and effective against web scrapers.

At the same time, keep in mind that preventing web scraping isn't an easy task. As anti-scraping techniques evolve, so do the ways to bypass them. Still, it's fundamental to know what challenges await you.

How to Bypass Anti-Scraping?

Bypassing anti-scraping means finding a way to overcome the data protection systems a website implements. The best way to get past these systems is to know how they work and what to expect.

Only this way can you equip your web scraper with what it needs to bypass anti-scraping protections.

To understand how these techniques try to prevent web scraping, let's have a look at the most popular anti-scraping approaches.


Top 7 Anti-Scraping Techniques

If you want your web scraping process to be effective, you need to know all the obstacles your scraper may have to face. So, let's dig into the 7 most popular and widely adopted anti-scraping techniques and how to avoid them.

1. Auth Wall or Login Wall

Does the following image look familiar to you?

LinkedIn's auth wall

Many websites, such as LinkedIn, hide their data behind an auth wall, also called a login wall. That's especially true when it comes to social platforms like Twitter and Facebook. When a website implements a login wall, only authenticated users can access its data.

A server identifies a request as authenticated based on its HTTP headers. In general, cookies store the values to send as authentication headers. If you aren't familiar with the concept, an HTTP cookie is a small piece of data stored in the browser. The browser creates the login cookies based on the response it receives from the server after login.

So, to crawl sites that adopt a login wall, your crawler must first have access to the login cookies. Since the cookie values travel as HTTP headers, you can retrieve them by inspecting a request in the DevTools after logging in.
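
Here's a minimal sketch of this approach in Python with the requests library. The cookie names and values are placeholders: replace them with the ones you actually see in DevTools for your target site.

```python
# Reuse login cookies copied from the browser's DevTools.
# Cookie names and values below are placeholders, not real credentials.
import requests

session = requests.Session()

# Values taken from the Cookie request header after logging in manually
session.cookies.update({
    "session_id": "<value-copied-from-devtools>",
    "auth_token": "<value-copied-from-devtools>",
})

# The session now sends those cookies with every request
response = session.get("https://example.com/protected-page")
print(response.status_code)
```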

Similarly, your scraper can use a headless browser to simulate the login operation and then navigate the site. This could make the logic of your scraping process more complex. Luckily, ZenRows API handles headless browsers for you.
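
If you prefer to handle it yourself, here's a minimal sketch with Selenium and headless Chrome. The login URL and the form selectors (username, password, the submit button) are placeholders you'd need to adapt to your target site's login form.

```python
# Simulate a login with Selenium, then keep browsing with the same session.
# URL and element selectors are placeholders for your target site's login form.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")  # run Chrome without a visible window
driver = webdriver.Chrome(options=options)

driver.get("https://example.com/login")
driver.find_element(By.NAME, "username").send_keys("your-username")
driver.find_element(By.NAME, "password").send_keys("your-password")
driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()

# The login cookies now live in the driver, so protected pages are reachable
driver.get("https://example.com/protected-page")
print(driver.title)
driver.quit()
```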

Note that, in this case, you must have valid credentials for your target website if you want to scrape it.

2. IP Address Reputation

One of the simplest anti-scraping techniques involves blocking requests from a particular IP. In detail, the website tracks the requests it receives. Then, when too many requests come from the same IP, the website bans it.

At the same time, the site might block the IP because it makes requests at regular intervals. Again, the site could mark requests from that IP as generated by a bot. That's one of the most common anti-bot protection systems.

Also, these anti-scraping and anti-bot systems can undermine your IP address reputation for a long time. You can check here if an IP has been compromised. In any case, you should avoid using your own IP when performing web scraping.

To avoid being blocked because of your IP, introduce random delays between requests or use an IP rotation system based on premium proxy servers. Note that ZenRows offers an excellent premium proxy service.
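
Both ideas fit in a few lines of Python. In this sketch, the proxy URLs are placeholders for whatever premium proxies you use, and the delay range is arbitrary.

```python
# Random delays between requests plus a rotating pool of proxies.
# Proxy URLs and target pages are placeholders.
import random
import time

import requests

proxy_pool = [
    "http://user:pass@proxy1.example.com:8080",
    "http://user:pass@proxy2.example.com:8080",
]

urls = ["https://example.com/page/1", "https://example.com/page/2"]

for url in urls:
    proxy = random.choice(proxy_pool)  # pick a different exit IP each time
    response = requests.get(url, proxies={"http": proxy, "https": proxy})
    print(url, response.status_code)
    time.sleep(random.uniform(2, 6))  # irregular pause so requests aren't evenly spaced
```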

3. User Agent and/or Other HTTP Headers

Just like IP-based banning, anti-scraping technologies can use HTTP headers to identify malicious requests and block them. Again, the website keeps track of the last requests it received. If they don't contain an accepted set of values in certain HTTP headers, it blocks them.

The most relevant header you should take into account is the User-Agent header. This is a string that identifies the application, operating system, and vendor or version of the software the HTTP request comes from. So, your crawlers should always set a real User-Agent header.

Similarly, the anti-scraping system may block requests that don't have a Referer header. This HTTP header contains the absolute or partial address of the web page the request originates from.
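
For example, in Python you can set both headers on every request. The User-Agent string below is just one example of a real browser value, and the target URL is illustrative.

```python
# Send a realistic User-Agent and a Referer header with each request.
import requests

headers = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
    ),
    "Referer": "https://www.google.com/",  # the page the request appears to come from
}

response = requests.get("https://example.com/products", headers=headers)
print(response.status_code)
```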

4. Honeypots

A honeypot is a decoy system designed to look like a legitimate one. Such systems generally expose some deliberate security weaknesses. Their goal is to divert malicious users and bots from real targets. Also, through these honeypots, protection systems can study how attackers act.

When it comes to anti-scraping, a honeypot can be a fake website that doesn't implement any anti-scraping system. These honeypots generally provide false or wrong data. Also, they may collect data from the requests they receive to train the anti-scraping systems.

The only way to avoid a honeypot trap is to make sure that the data contained on the target website is real. Otherwise, you can reduce the threat by hiding your real IP behind a proxy server.

A web proxy acts as an intermediary between your computer and the rest of the machines on the internet. When you use a proxy to perform requests, the target website sees the IP address and headers of the proxy server instead of yours. This makes the honeypot trap far less effective.

Also, you should avoid following hidden links when crawling a website. A hidden link is a link marked with the display: none or visibility: hidden CSS rule. That's because honeypot pages typically sit behind links that are contained in the page but invisible to the user.
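
A minimal way to do this in Python is to skip anchors hidden via inline CSS before following them. Note that this sketch only covers inline style attributes: links hidden through external stylesheets would require checking computed styles, for example with a headless browser.

```python
# Skip links hidden with inline display:none or visibility:hidden rules.
# The HTML snippet is a stand-in for a real crawled page.
from bs4 import BeautifulSoup

html = """
<a href="/products">Products</a>
<a href="/honeypot" style="display: none">Hidden trap</a>
<a href="/trap" style="visibility: hidden">Another trap</a>
"""

soup = BeautifulSoup(html, "html.parser")
for link in soup.find_all("a", href=True):
    style = (link.get("style") or "").replace(" ", "").lower()
    if "display:none" in style or "visibility:hidden" in style:
        continue  # likely a honeypot link, don't crawl it
    print("Following:", link["href"])
```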

5. JavaScript Challenges

A JavaScript challenge is a mechanism that anti-scraping systems use to prevent bots from accessing a webpage.

Every user, even a legitimate one, can be confronted with hundreds of JS challenges for a single page.

A JavaScript challenge is a piece of JavaScript that the browser has to execute. Any browser with JavaScript enabled can understand and run it automatically.

The Cloudflare JavaScript challenge page

A JavaScript challenge adds a delay of a few seconds, which is the time the anti-bot system needs to perform the challenge. The challenge executes transparently, without the user even realizing it.

So, any automated system that isn't equipped with a JavaScript stack won't be able to pass the challenge. And considering that crawlers generally perform server-to-server requests without using a browser, they can't bypass the anti-scraping system.

So, if you want to overcome a JavaScript challenge, you need a browser. In detail, your scraper could drive a headless browser with a tool such as Selenium. These technologies run a real browser without a graphical interface behind the scenes.
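
Here's a minimal Selenium sketch: load the protected page in headless Chrome and give it a few seconds to run the challenge. Treat it as a starting point rather than a guaranteed bypass, since hardened anti-bot systems can still detect a plain headless browser.

```python
# Load a JS-protected page in headless Chrome and let the challenge run.
# The URL and the 5-second wait are arbitrary placeholders.
import time

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")
driver = webdriver.Chrome(options=options)

driver.get("https://example.com/js-protected-page")
time.sleep(5)  # give the browser time to execute the JavaScript challenge

print(driver.title)  # the real page title once the challenge has passed
driver.quit()
```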

Cloudflare and Akamai offer the most difficult JavaScript challenges on the market. Bypassing them isn't easy, but definitely possible. Learn how to bypass Cloudflare and how to bypass Akamai.

6. CAPTCHAs

A CAPTCHA is a type of challenge–response test used to determine whether a user is human. CAPTCHAs involve finding a solution to a problem that only a human can solve. For example, they may ask you to select images of a particular animal or object.

Example of an hCaptcha challenge

CAPTCHAs are one of the most popular anti-bot protection systems. That's especially true considering that many CDN (Content Delivery Network) services now offer them as built-in anti-bot solutions.

CAPTCHAs prevent automated systems from accessing and browsing a site. In other words, CAPTCHAs prevent scrapers from crawling a website. At the same time, there are ways to overcome them automatically.

Learn more on how you can automate CAPTCHA solving.

7. User Behavior Analysis

UBA (User Behavior Analytics) is about collecting, tracking, and processing user data through monitoring systems. A user behavior analysis process then determines whether the current user is a human or a bot.

During this process, the anti-scraping software uses UBA to look for patterns of human behavior. If it doesn't find them, the system labels the user as a bot and blocks it. This is because any anomaly represents a potential threat.

Bypassing these systems can be very challenging because they evolve based on the data they collect about users. Since they rely on artificial intelligence and machine learning, the solution you find today to bypass them may not work in the future. At the same time, only advanced scraping services, like ZenRows, can deal with this kind of protection.

Conclusion

You've now got an overview of everything you should know about anti-scraping techniques, from basic to advanced approaches. As shown above, there are several ways you might get blocked while scraping. However, there are also several methods and tools to bypass anti-scraping systems.

What matters is to know these anti-scraping technologies, so you know what to expect.

Specifically, in this article you've learned:
  • What anti-scraping is and how it differs from web scraping.
  • How anti-scraping systems work.
  • The most popular and widely adopted anti-scraping techniques, and how to avoid them.

If you liked this, take a look at our guide on web scraping without getting blocked.

Thanks for reading! We hope that you found this guide helpful. You can sign up for free, try ZenRows, and let us know any questions, comments, or suggestions.

