Are you encountering the Error 1020 "Access Denied" response while using your scraper? This error is common in web scraping and comes from Cloudflare's anti-bot security measures.
Implementing Cloudflare bypass techniques during scraping can prevent the Error 1020 "Access Denied" message. In this article, we'll explore 4 quick fixes tailored to address this specific error:
- Use a rotating proxy to hide your IP.
- Customize and rotate User Agent headers.
- Mask your headless browser with Undetected ChromeDriver.
- Use a web scraping API.
What Is Error 1020 "Access Denied" Delivered By Cloudflare?
An Error 1020 "Access Denied" happens when Cloudflare's firewall detects suspicious activities from the client or browser accessing a Cloudflare-protected website. If you see an Error 1020 like the one below while scraping, it means the security service has blocked the traffic from your scraper's IP address.

Sometimes, the 1020 error may appear differently than shown, especially while using an HTTP client like Python's Requests library. In that case, you may get a generic forbidden response, such as the Cloudflare 403 error.
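If you're not sure whether Cloudflare is behind the block, inspecting the status code and response body usually settles it. Here's a minimal sketch with Python's Requests; the URL is a placeholder, and the exact contents of the block page can vary:
# pip3 install requests
import requests

# placeholder URL: replace with the Cloudflare-protected page you're scraping
url = "https://www.example.com/"

response = requests.get(url)

# Cloudflare blocks from an HTTP client usually surface as a 403 status code
print(response.status_code)

# the block page body often mentions the 1020 error code
if response.status_code == 403 and "1020" in response.text:
    print("Blocked by Cloudflare: Error 1020 'Access Denied'")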
You can use the following techniques to bypass Cloudflare's 1020 "Access Denied" error.
1. Use a Rotating Proxy to Hide Your IP
Cloudflare sometimes triggers the error 1020 if you break a website's rate-limiting rules or send multiple requests from a single IP within seconds. One way to mitigate this is to rotate proxies to mimic different users. This technique automatically switches your IP every few seconds or per request, making it difficult for the website to detect and block you.
Proxies can be free or premium, depending on whether they require a subscription. However, it's important to note that free proxies have a short lifespan and can be easily detected since they're shared publicly.
Premium proxies are the most reliable. They're more secure and dedicated to you, typically requiring authentication credentials such as a username and password. Most premium proxy providers also offer proxy rotation out of the box, eliminating the need to hardcode the rotation logic yourself.
Here's an example of how to use an authenticated premium proxy with Python's Requests library:
# pip3 install requests
import requests

# define your proxy credentials
proxies = {
    "http": "http://<PROXY_USERNAME>:<PROXY_PASSWORD>@<PROXY_DOMAIN>:<PROXY_PORT>",
    "https": "https://<PROXY_USERNAME>:<PROXY_PASSWORD>@<PROXY_DOMAIN>:<PROXY_PORT>",
}

url = "https://httpbin.io/ip"

# add the proxy credentials to your request
response = requests.get(url, proxies=proxies)
print(response.text)
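The snippet above sends every request through a single proxy. To rotate IPs per request yourself, you can cycle through a list of proxies, as in the minimal sketch below (the proxy URLs are placeholders; most premium providers handle this rotation for you):
# pip3 install requests
import requests
import itertools

# placeholder proxy URLs: replace with proxies from your provider
proxy_list = [
    "http://<PROXY_1_DOMAIN>:<PROXY_1_PORT>",
    "http://<PROXY_2_DOMAIN>:<PROXY_2_PORT>",
    "http://<PROXY_3_DOMAIN>:<PROXY_3_PORT>",
]

url = "https://httpbin.io/ip"

# cycle through the proxy list so each request uses the next IP
rotated_proxies = itertools.cycle(proxy_list)

for _ in range(3):
    proxy = next(rotated_proxies)
    # route both HTTP and HTTPS traffic through the current proxy
    proxies = {"http": proxy, "https": proxy}
    response = requests.get(url, proxies=proxies)
    print(response.text)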
Check our guide on using proxies with Python for a more detailed tutorial.
That said, when selecting a premium proxy service for web scraping, opt for residential proxies, which offer authentic IP addresses assigned by internet service providers to everyday users.
Reputable solutions, like ZenRows, provide a large pool of residential IPs with rotation and flexible geolocation features to efficiently distribute your traffic across several locations.
Check our guide on the best web scraping proxies to learn more.
2. Customize and Rotate User Agent Headers
The User Agent (UA) is the most critical HTTP header for scraping. It identifies the client sending a request to the server and provides details, such as the client's version, operating system, rendering engine, and more.
An Error 1020 usually occurs when Cloudflare flags your browser or HTTP client's signature as bot-like, and your User Agent is one of the main hints that can give you away as a bot.
For instance, HTTP clients such as Python's Requests have a bot-like User Agent like the following:
python-requests/2.31.0
Even browser automation tools, such as Selenium and Playwright, expose the HeadlessChrome token in the User Agent string when running in headless mode.
With these bot-like User Agent details, Cloudflare can tell that you're accessing the website with an automated service, prompting it to block you with the Error 1020 "Access Denied" response.
The first step in tackling this is to replace those bot-like User Agents with a custom one. A perfect way to do this is to use the User Agent sent by a real browser like Chrome. Ensure your chosen User Agent is up to date to reduce the chances of detection.
Below is an example showing how to customize the User Agent with Python's Requests. The code requests https://httpbin.io/user-agent, a test website that returns your User Agent:
# pip3 install requests
import requests

# specify the User Agent header
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36"
}

url = "https://httpbin.io/user-agent"

# add the header parameter to your request
response = requests.get(url, headers=headers)
print(response.text)
The code returns your custom User Agent, as shown:
{
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36"
}
However, using a single User Agent isn't sustainable for large-scale web scraping. A good practice is to rotate them from a list of web scraping User Agents. Consider creating a robust list containing User Agents from different platforms and browser versions.
For example, the following code rotates Chrome User Agents from various platforms for four consecutive requests using Python's built-in itertools:
# pip3 install requests
import requests
import itertools

# create a User Agent list
user_agents = [
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36",
    # ... add more User Agents
]

url = "https://httpbin.io/user-agent"

# create a generator to rotate the list
rotated = itertools.cycle(user_agents)

for _ in range(4):
    # add the rotated User Agents to the request headers
    headers = {"User-Agent": next(rotated)}
    # include the header parameter in your request
    response = requests.get(url, headers=headers)
    print(response.text)
The above code outputs the User Agents as shown:
# request 1
{
    "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36"
}
# request 2
{
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36"
}
# request 3
{
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36"
}
# request 4
{
    "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36"
}
If you prefer a ready-made tool to rotate User Agents, you can use a third-party library such as fake-useragent.
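Here's a quick sketch of how fake-useragent can supply a random User Agent per request; it assumes the package is installed and reuses the test endpoint from the earlier examples:
# pip3 install requests fake-useragent
import requests
from fake_useragent import UserAgent

# instantiate the User Agent generator
ua = UserAgent()

url = "https://httpbin.io/user-agent"

for _ in range(3):
    # grab a random User Agent string for each request
    headers = {"User-Agent": ua.random}
    response = requests.get(url, headers=headers)
    print(response.text)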
In advanced cases, it's essential to ensure your User Agent is consistent with other related headers, such as Sec-CH-UA and Sec-CH-UA-Platform. These headers provide detailed information about the browser's platform, version, and engine, so pairing them with the wrong User Agent string can raise suspicion.
For example, if the User Agent string indicates a Chrome browser on macOS, the Sec-CH-UA-Platform header should also be "macOS". If they don't match, anti-bot systems might detect this inconsistency and block your request.
To avoid a mismatch between the User Agent string and the client hints headers (Sec-CH-UA and Sec-CH-UA-Platform), maintain a corresponding list that matches each User Agent's platform and browser version and rotate them together.
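Here's a minimal sketch of that approach. The profiles below are illustrative pairings, not definitive values; verify the exact Sec-CH-UA strings against a real browser before relying on them:
# pip3 install requests
import requests
import itertools

# illustrative profiles: each pairs a User Agent with matching client hints headers
header_profiles = [
    {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36",
        "Sec-CH-UA": '"Chromium";v="128", "Google Chrome";v="128", "Not;A=Brand";v="24"',
        "Sec-CH-UA-Platform": '"Windows"',
    },
    {
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36",
        "Sec-CH-UA": '"Chromium";v="128", "Google Chrome";v="128", "Not;A=Brand";v="24"',
        "Sec-CH-UA-Platform": '"macOS"',
    },
]

url = "https://httpbin.io/headers"

# rotate entire profiles so the User Agent and client hints always stay consistent
rotated_profiles = itertools.cycle(header_profiles)

for _ in range(2):
    response = requests.get(url, headers=next(rotated_profiles))
    print(response.text)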
Rotating the User Agent alone isn't sufficient to bypass Cloudflare's anti-bot error. Combining this with the previous method (proxy rotation) gives a better result.
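For instance, a minimal sketch that pairs a rotated User Agent with a rotated proxy on the same request might look like this (the proxy URLs are placeholders for your provider's endpoints):
# pip3 install requests
import requests
import random

# placeholder proxies and example User Agents from the previous snippets
proxy_list = [
    "http://<PROXY_1_DOMAIN>:<PROXY_1_PORT>",
    "http://<PROXY_2_DOMAIN>:<PROXY_2_PORT>",
]
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36",
]

url = "https://httpbin.io/ip"

# pick a random proxy and User Agent for this request
proxy = random.choice(proxy_list)
proxies = {"http": proxy, "https": proxy}
headers = {"User-Agent": random.choice(user_agents)}

response = requests.get(url, headers=headers, proxies=proxies)
print(response.text)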
Read our article on customizing User Agents in Python for a detailed guide.
3. Mask Headless Browser With Undetected ChromeDriver
If you're accessing a protected website with browser automation tools like Selenium, Playwright, or Puppeteer, there's a high chance that you'll get the Cloudflare 1020 error. That's because they expose bot-like properties, such as the presence of the WebDriver flag.
Your scraper is even more vulnerable to anti-bot detection if you're running a headless browser, due to issues such as missing fingerprints and User Agent flags like HeadlessChrome.
You can use a plugin like Undetected ChromeDriver, a patched version of ChromeDriver, to mask some of these properties and stay under the anti-bot radar. Let's quickly see how to set it up in Python.
To start, install Undetected ChromeDriver:
pip3 install undetected-chromedriver
Here's sample code using Undetected ChromeDriver to visit a mildly protected website:
# pip3 install undetected-chromedriver
import undetected_chromedriver as uc

if __name__ == "__main__":
    # instantiate Chrome options
    options = uc.ChromeOptions()
    # add headless mode
    options.headless = True
    # instantiate a Chrome browser and add the options
    driver = uc.Chrome(
        use_subprocess=False,
        options=options,
    )
    # visit the target URL
    driver.get("https://www.hapag-lloyd.com/en/home.html")
    # print the URL
    print(driver.current_url)  # https://www.hapag-lloyd.com/en/home.html
    # get the website's title
    print(driver.title)  # Hapag-Lloyd - Global container liner shipping - Hapag-Lloyd
    # close the browser
    driver.quit()
While plugins such as Undetected ChromeDriver increase your chances of avoiding the Cloudflare 1020 error, they may still leave bot-like traces, such as missing plugins. As a result, they don't always succeed, particularly against advanced protections. Plus, headless browsers add memory overhead, making them unsuitable for large-scale web scraping.
Fortunately, there's a solution that works all the time. Keep reading to learn about it.
4. Use a Web Scraping API
Web scraping APIs are the most reliable way to avoid anti-bot measures such as Cloudflare's 1020 "Access Denied" error. One of the best is the ZenRows Scraper API, an all-in-one scraping solution that bypasses even the most advanced anti-bot protections.
In addition to anti-bot auto-bypass, the ZenRows Scraper API offers premium proxy rotation, JavaScript rendering, User Agent rotation, and more out of the box. You only need a single API call, and ZenRows handles all the technicalities under the hood.
Let's quickly see how the ZenRows scraper API works by extracting the full-page HTML of a well-protected page like the Cloudflare challenge page.
First, sign up to open the ZenRows Request Builder. Paste your target URL in the link box and activate Premium Proxies and JS Rendering. Select your programming language (Python, in this case) and choose the API connection mode.
Copy and paste the generated code into your scraper file.

The generated Python code looks like this:
# pip3 install requests
import requests

url = "https://www.scrapingcourse.com/cloudflare-challenge"
apikey = "<YOUR_ZENROWS_API_KEY>"
params = {
    "url": url,
    "apikey": apikey,
    "js_render": "true",
    "premium_proxy": "true",
}
response = requests.get("https://api.zenrows.com/v1/", params=params)
print(response.text)
The above code accesses the protected page and prints its HTML as shown:
<html lang="en">
<head>
    <!-- ... -->
    <title>Cloudflare Challenge - ScrapingCourse.com</title>
    <!-- ... -->
</head>
<body>
    <!-- ... -->
    <h2>
        You bypassed the Cloudflare challenge! :D
    </h2>
    <!-- other content omitted for brevity -->
</body>
</html>
Congratulations🎉! You bypassed Cloudflare using ZenRows. You can now scrape any protected website without the limitations of an Error 1020 "Access Denied".
Conclusion
You've learned four techniques to bypass Cloudflare's 1020 "Access Denied" error. Manual approaches like proxy and User Agent rotation work best when combined.
However, since these methods can still fail in edge cases, we recommend using a solution like the ZenRows Scraper API to streamline the process and reliably extract data from any website without getting blocked.
Try ZenRows for free now without a credit card!