Cloudflare’s advanced bot protection makes web scraping a real challenge, especially when using tools like cURL, which are easy to detect and block. We'll show you how to work around Cloudflare with cURL, using the Cloudflare Challenge page as an example.
How Do I Bypass Cloudflare in cURL?
Bypassing Cloudflare with cURL is challenging due to the stark differences between cURL's connection properties and those of a real browser. Like other WAFs such as DataDome and PerimeterX, Cloudflare's advanced detection systems can easily identify and block cURL requests, as they lack the characteristics of legitimate browser traffic.
While adjusting HTTP headers to mimic a browser might seem simple, Cloudflare's multi-layered detection goes beyond basic header checks. Imitating real user behavior is tough with cURL, making it easy for Cloudflare to spot bots. Tools like cURL-impersonate help by mimicking browser-level details, but they often fall short against stronger defenses.
Successfully bypassing Cloudflare with cURL requires deep knowledge of browser behavior and Cloudflare's detection methods, as well as constant adaptation to evolving security measures.
Let's get down to code.
Sending a Request Using Base cURL
Let's run through a quick scraping example. We'll use cURL to access Cloudflare Challenge, a page under Cloudflare's anti-bot protection.
Start by sending a request to the target website.
curl https://www.scrapingcourse.com/cloudflare-challenge
You'll get the following output on running this code:
curl : The remote server returned an error: (403) Forbidden.
Expectedly, that didn't work.
Cloudflare detected our cURL-based scraper as a bot and denied access to the content. You can learn more about Error 403 in web scraping in our guide.
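The exact wording of the error depends on your shell, so a more portable check is to ask cURL to print the HTTP status code directly. Here's a quick sketch:
# Print only the status code; -s silences progress output and -o /dev/null discards the body
curl -s -o /dev/null -w "%{http_code}\n" https://www.scrapingcourse.com/cloudflare-challenge
When Cloudflare blocks the request, this prints 403.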
Let's try adding some evasion measures.
1. Set Real HTTP Request Headers
One of the primary ways Cloudflare identifies and blocks bots is by analyzing HTTP headers.
By default, cURL sends headers that are easily distinguishable from those of a real browser, which makes it easy for Cloudflare to detect it. However, we can use this knowledge to our advantage by setting more realistic headers in our cURL requests.
Let's start by examining the default headers sent by cURL. We'll use HTTPBin, a service that echoes back the headers it receives:
curl https://httpbin.io/headers
You'll get a similar output on running this code:
{
  "headers": {
    "Accept": [
      "*/*"
    ],
    "Host": [
      "httpbin.io"
    ],
    "User-Agent": [
      "curl/8.8.0"
    ]
  }
}
This is problematic for bypassing Cloudflare because the request clearly identifies itself as cURL through the User-Agent and lacks many headers typically sent by real browsers. The Accept header "*/*" is overly broad and unusual for browser requests. Additionally, it's missing crucial modern browser headers like Sec-Ch-Ua and Accept-Language. Combined, these factors make it easy for anti-bot systems like Cloudflare to detect and block the request as non-human traffic.
Now, look at what we get when we open the HTTPBin site in our browser:
{
  "headers": {
    "Accept": [
      "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8"
    ],
    "Accept-Encoding": [
      "gzip, deflate, br, zstd"
    ],
    "Accept-Language": [
      "en-US,en;q=0.5"
    ],
    "Connection": [
      "keep-alive"
    ],
    "Host": [
      "httpbin.io"
    ],
    "Sec-Ch-Ua": [
      "\"Chromium\";v=\"128\", \"Not;A=Brand\";v=\"24\", \"Brave\";v=\"128\""
    ],
    "Sec-Ch-Ua-Mobile": [
      "?0"
    ],
    "Sec-Ch-Ua-Platform": [
      "\"Windows\""
    ],
    "Sec-Fetch-Dest": [
      "document"
    ],
    "Sec-Fetch-Mode": [
      "navigate"
    ],
    "Sec-Fetch-Site": [
      "none"
    ],
    "Sec-Fetch-User": [
      "?1"
    ],
    "Sec-Gpc": [
      "1"
    ],
    "Upgrade-Insecure-Requests": [
      "1"
    ],
    "User-Agent": [
      "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36"
    ]
  }
}
As you can see, the cURL headers are entirely different from those of the browser. Thus, Cloudflare can easily identify and block us.
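Before pointing a full browser header set at Cloudflare, it's worth confirming against HTTPBin that cURL really sends whatever you override. As a minimal sketch, the -A flag alone replaces the default User-Agent:
curl https://httpbin.io/headers \
-A 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36'
The response should now echo the browser-like User-Agent instead of curl/8.8.0, and any header you pass with -H can be verified the same way.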
Let's try using our browser headers in our cURL request to access the Cloudflare challenge target page:
curl 'https://www.scrapingcourse.com/cloudflare-challenge' \
-H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8' \
-H 'Accept-Encoding: gzip, deflate' \
-H 'Accept-Language: en-US,en;q=0.5' \
-H 'Connection: keep-alive' \
-H 'Sec-Ch-Ua: "Chromium";v="128", "Not;A=Brand";v="24", "Brave";v="128"' \
-H 'Sec-Ch-Ua-Mobile: ?0' \
-H 'Sec-Ch-Ua-Platform: "Windows"' \
-H 'Sec-Fetch-Dest: document' \
-H 'Sec-Fetch-Mode: navigate' \
-H 'Sec-Fetch-Site: none' \
-H 'Sec-Fetch-User: ?1' \
-H 'Sec-Gpc: 1' \
-H 'Upgrade-Insecure-Requests: 1' \
-H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36' \
--compressed
Here's the output:
<!DOCTYPE html>
<html lang="en-US">
<head>
<title>Just a moment...</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<!-- ... other header content omitted for brevity -->
</head>
<body>
<!-- ... content omitted for brevity -->
</body>
</html>
The HTML output with the title "Just a moment..." indicates that Cloudflare is still detecting us. This Cloudflare "Just a Moment" page shows that setting legitimate HTTP headers to mimic a real browser isn't enough to bypass its protection.
While this method doesn't succeed, it's an important first step in understanding how Cloudflare detects bots. In the following sections, we'll explore more advanced techniques that build upon this foundation to improve our chances of successfully bypassing Cloudflare's protection.
2. Utilizing Cookies
When a browser interacts with a website protected by Cloudflare, a complex series of interactions occur, including the exchange of cookies and other session data. These elements play a crucial role in Cloudflare's bot detection system.
To attempt to bypass Cloudflare using this method, we'll need to capture and replicate the cookies data that a real browser receives. Here's how we can do this:
1. Visit the target website (in this case, the Cloudflare Challenge page) in a real browser.
2. Open the browser's Developer Tools (usually F12 or Ctrl+Shift+I).
3. Go to the Network tab and refresh the page.
4. Find the main page request in the network log.
5. Examine the request headers, focusing on cookies and any Cloudflare-specific headers.

Now, let's try to replicate this request using cURL:
curl 'https://www.scrapingcourse.com/cloudflare-challenge' \
-H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36' \
-H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8' \
-H 'Accept-Language: en-US,en;q=0.9' \
-H 'Accept-Encoding: gzip, deflate' \
-H 'Connection: keep-alive' \
-H 'Upgrade-Insecure-Requests: 1' \
-H 'Sec-Fetch-Dest: document' \
-H 'Sec-Fetch-Mode: navigate' \
-H 'Sec-Fetch-Site: none' \
-H 'Sec-Fetch-User: ?1' \
-H 'Cookie: cf_clearance=<YOUR_CF_CLEARANCE_COOKIE>' \
--compressed
Replace <YOUR_CF_CLEARANCE_COOKIE> with the actual value from your browser session. It's crucial that the User Agent in this cURL command exactly matches the one used to obtain the clearance cookie. Likewise, the IP address used to fetch the clearance cookie must be the same as the one used to make this request. Cloudflare's security measures are sensitive to changes in these parameters, and mismatches can get your scraper detected.
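One way to keep the cookie and User Agent in sync across requests is to define them once as shell variables and reuse them. Here's a minimal, trimmed-down sketch of the request above (the cookie value is a placeholder you must copy from your own browser session):
# Placeholder values: copy the real cf_clearance value and the exact User-Agent from your browser
CF_COOKIE='cf_clearance=<YOUR_CF_CLEARANCE_COOKIE>'
UA='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36'
curl 'https://www.scrapingcourse.com/cloudflare-challenge' \
-A "$UA" \
-H "Cookie: $CF_COOKIE" \
--compressed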
When you run this command with the correct cookie and matching User-Agent, you'll successfully bypass Cloudflare's protection (but there's a catch):
<html lang="en">
<head>
<!-- ... -->
<title>Cloudflare Challenge - ScrapingCourse.com</title>
<!-- ... -->
</head>
<body>
<!-- ... -->
<h2>
You bypassed the Cloudflare challenge! :D
</h2>
<!-- other content omitted for brevity -->
</body>
</html>
While this method can successfully bypass Cloudflare's initial challenge, it has significant limitations. The cf_clearance cookie typically has a short lifespan, often expiring within hours, and is tied to both the specific IP address and User Agent used to solve the challenge. This means you'll need to refresh the cookie frequently and keep your IP and User Agent consistent, which can be challenging for large-scale or long-running scraping tasks.
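Because the cookie expires silently, it helps to detect when you've been pushed back to the challenge page. Here's a hedged sketch that reuses the variables from the earlier snippet and assumes the interstitial still carries the "Just a moment..." title:
# Re-run the request and look for the challenge title to spot an expired cf_clearance cookie
if curl -s 'https://www.scrapingcourse.com/cloudflare-challenge' \
-A "$UA" -H "Cookie: $CF_COOKIE" --compressed | grep -q 'Just a moment'; then
echo 'cf_clearance appears to have expired; grab a fresh one from a real browser session'
fi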
Moreover, this approach is site-specific and doesn't work across different Cloudflare-protected websites. It's also vulnerable to changes in Cloudflare's challenge mechanisms and may not bypass additional protection measures implemented by the target website. Also, there's always a risk of your IP being blacklisted if your activities are detected as suspicious.
Lastly, advanced Cloudflare configurations employ browser fingerprinting techniques to detect discrepancies between your cURL request and a real browser, potentially making this method ineffective. Given these limitations, it's not a reliable solution.
3. cURL-impersonate to Simulate a Browser
cURL-impersonate is a modified version of the standard cURL library that aims to mimic the behavior of real browsers. It replicates the TLS and HTTP/2 handshake of popular browsers like Chrome and Firefox. This approach can be effective in bypassing some basic anti-bot measures.
We'll use cURL-impersonate to imitate a real browser this time. Start by downloading and installing it.
- The main cURL-impersonate project supports Linux and macOS.
- A patch, cURL-impersonate-win, can work on Windows as well.
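If you'd rather not build anything locally, the project also publishes prebuilt Docker images. The image name and tag below are assumptions, so check the cURL-impersonate repository for the current ones:
# Assumed image and tag; verify against the cURL-impersonate docs before relying on this
docker run --rm lwthiker/curl-impersonate:0.6-chrome \
curl_chrome104 --url https://www.scrapingcourse.com/cloudflare-challenge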
Now, let's try to access our target website while impersonating Chrome 104. In your terminal, navigate to the folder containing the package. Then, send the following request:
curl_chrome104 --url https://www.scrapingcourse.com/cloudflare-challenge
You'll get the following output on running this code:
<!DOCTYPE html>
<html lang="en-US">
<head>
<title>Just a moment...</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<!-- ... other header content omitted for brevity -->
</head>
<body>
<!-- ... content omitted for brevity -->
</body>
</html>
This result shows that even with cURL-impersonate, Cloudflare still isn't fooled. Seeing the "Just a moment..." page means our request is being flagged as automated.
cURL-impersonate works well against simple anti-bot checks, but it struggles with more advanced systems like Cloudflare. That’s because Cloudflare uses layered defenses, including JavaScript challenges and fingerprinting, which cURL-impersonate can't fully mimic.
Another drawback is that it doesn’t always keep pace with the latest browser versions. Some sites flag outdated versions as suspicious, making detection more likely. And since cURL-impersonate can’t handle JavaScript-rendered content, it falls short when scraping modern, dynamic websites.
These limitations highlight the challenges of bypassing Cloudflare and underscore the need for more sophisticated approaches when dealing with heavily protected websites.
Despite these limitations, there's still a reliable way to bypass Cloudflare. When built-in solutions fall short, it's often beneficial to turn to specialized tools designed for this purpose.
In the next section, we'll explore how to use ZenRows, a web scraping API that can effectively bypass Cloudflare protection.
4. Web Scraping API to Get The Job Done
Web scraping APIs offer a powerful solution for bypassing Cloudflare's sophisticated measures. These purpose-built tools provide a significantly higher success rate and a much simpler process than manual methods or modified cURL libraries.
ZenRows stands out as a leading web scraping API, specifically designed to overcome Cloudflare and other advanced anti-bot systems.
Key features of ZenRows include automatic header optimization, premium proxy auto-rotation, JavaScript rendering capabilities, CAPTCHA solving, browser fingerprint simulation, and more.
These features work together to create a robust solution that handles even the most challenging Cloudflare protections.
Bypass Cloudflare Using cURL With ZenRows
Let's see how we can use ZenRows to bypass Cloudflare using cURL. We'll target the same website that we couldn't access with cURL-impersonate.
1. Sign up for ZenRows, and you'll get redirected to the Request Builder page.
2. Paste your target URL.
3. Click on the Premium Proxies and JS Rendering checkboxes.
4. Finally, click on the cURL tab on the right.

Here's what your generated code will look like:
curl "https://api.zenrows.com/v1/?apikey=<YOUR_ZENROWS_API_KEY>&url=https%3A%2F%2Fwww.scrapingcourse.com%2Fcloudflare-challenge&js_render=true&premium_proxy=true"
When you run this command, you'll receive the full HTML content of the page, successfully bypassing Cloudflare:
<html lang="en">
<head>
<!-- ... -->
<title>Cloudflare Challenge - ScrapingCourse.com</title>
<!-- ... -->
</head>
<body>
<!-- ... -->
<h2>
You bypassed the Cloudflare challenge! :D
</h2>
<!-- other content omitted for brevity -->
</body>
</html>
Congratulations! You successfully bypassed Cloudflare using ZenRows.
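As a side note, if you'd rather not URL-encode the target URL by hand, cURL can build the query string for you. Here's an equivalent sketch using -G and --data-urlencode:
curl -G 'https://api.zenrows.com/v1/' \
--data-urlencode 'apikey=<YOUR_ZENROWS_API_KEY>' \
--data-urlencode 'url=https://www.scrapingcourse.com/cloudflare-challenge' \
--data-urlencode 'js_render=true' \
--data-urlencode 'premium_proxy=true'
With -G, cURL appends the encoded parameters to the URL as a query string instead of sending them in a request body.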
ZenRows offers the most reliable and straightforward way to bypass Cloudflare with cURL. It handles all the complexities of browser simulation, proxy management, anti-bot evasion, and more. Whether you're dealing with Cloudflare or other anti-bot systems, ZenRows provides a robust solution that can scale with your web scraping needs.
In conclusion, while methods like cookies or cURL-impersonate can sometimes work for basic anti-bot measures, a specialized web scraping API like ZenRows is the most effective tool for consistently bypassing Cloudflare and other advanced protection systems. By leveraging ZenRows, you can ensure reliable access to the data you need, saving time and resources in your web scraping projects.