Have you run into an error while scraping with the Undetected ChromeDriver? We've got you covered!
This article explains the common errors associated with the Undetected ChromeDriver and how to solve them.
Module Not Found Error
A "module not found" error means Python can't find the Undetected ChromeDriver package your scraping script needs. The error looks like this:
ModuleNotFoundError: No module named 'undetected_chromedriver'
This error may occur if:
- You're in the wrong virtual environment.
- You haven't installed the Undetected ChromeDriver package.
Fixing this error is simple. First, ensure that pip points to the correct virtual environment. To do that, open a terminal in your project's root folder, create and activate a virtual environment, and install the Undetected ChromeDriver. Let's go through the steps one by one.
Run the following command to create a virtual environment:
python -m venv <ENVIRONMENT_NAME>
Here's how to activate that environment on Windows:
.\<ENVIRONMENT_NAME>\Scripts\activate
And here's the version for Linux/Mac:
source <ENVIRONMENT_NAME>/bin/activate
Now, install the Undetected ChromeDriver package into the virtual environment using pip:
pip3 install undetected-chromedriver
Wait for the installation process to complete and rerun your Python script.
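If the error persists, you can check from inside Python which interpreter is active and whether the package is visible to it. Here's a minimal diagnostic sketch using only the standard library (undetected_chromedriver is the module name you'd import):

```python
import importlib.util
import sys

# print the interpreter path; it should point inside your virtual environment
print(sys.executable)

# look the package up without importing it
spec = importlib.util.find_spec("undetected_chromedriver")
if spec is None:
    print("undetected-chromedriver is NOT visible to this interpreter")
else:
    print("package found at:", spec.origin)
```

If the interpreter path points outside your virtual environment, re-activate the environment before reinstalling the package.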
Wrong Chrome Version
A mismatch between the WebDriver and the Chrome browser versions is a common error with headless browsers.
Here's what the error looks like:
Message: session not created: This version of ChromeDriver only supports Chrome version 127
Current browser version is 126 with binary path
The above error means Undetected ChromeDriver is trying to use ChromeDriver version 127, but the installed Chrome browser version is 126.
To fix that error, update your Chrome browser version to match the required ChromeDriver.
Alternatively, you can force Undetected ChromeDriver to use the current browser version by pointing to its executable path:
driver = uc.Chrome(
browser_executable_path="C:/Program Files/Google/Chrome/Application/chrome.exe",
)
Undetected ChromeDriver will now use the browser version in the specified path.
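If you want your script to react to this mismatch automatically (for example, to log both versions before retrying), the two numbers can be parsed out of the exception text. Here's a minimal sketch based on the error message shown above (the regex patterns are assumptions tied to that exact wording):

```python
import re

# sample text from a "session not created" exception
error_message = (
    "session not created: This version of ChromeDriver only supports "
    "Chrome version 127 Current browser version is 126 with binary path"
)

# pull out the version the driver expects and the version actually installed
driver_version = int(re.search(r"supports Chrome version (\d+)", error_message).group(1))
browser_version = int(re.search(r"browser version is (\d+)", error_message).group(1))

if driver_version != browser_version:
    print(f"Driver targets Chrome {driver_version}, but Chrome {browser_version} is installed")
```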
Runtime Error
The runtime error is related to Undetected ChromeDriver's subprocess option. The library's README recommends setting the use_subprocess option to False to improve performance and reduce the chances of anti-bot detection.
The runtime error occurs when you execute Undetected ChromeDriver outside a main process while the use_subprocess option is False. It looks like this:
raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
To fix this issue, refer to the "Safe importing of main module"
section in https://docs.python.org/3/library/multiprocessing.html
To prevent the runtime error in Undetected ChromeDriver, ensure you create the driver inside your script's main entry point, i.e., under an if __name__ == "__main__": guard:
if __name__ == "__main__":
# define Chrome options
options = uc.ChromeOptions()
# run Chrome in headless mode
options.headless = True
# set up the WebDriver for Chrome
driver = uc.Chrome(
use_subprocess=False,
options=options,
)
# ...your scraping logic
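The guard matters because the error comes from Python's multiprocessing machinery rather than from the driver itself: with the spawn start method (the Windows default), child processes re-import your script, and an unguarded top-level launch would recurse. Here's a standard-library sketch of the same pattern, with launch() standing in as a hypothetical placeholder for the uc.Chrome(...) call:

```python
import multiprocessing

def launch():
    # stand-in for the real uc.Chrome(...) call
    return "browser started"

def main():
    # run launch() in a child process
    with multiprocessing.Pool(1) as pool:
        return pool.apply(launch)

if __name__ == "__main__":
    # without this guard, spawn-based child processes would re-execute
    # the launch at import time and raise the RuntimeError shown above
    print(main())
```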
The above solutions will fix the most common Undetected ChromeDriver errors. However, there's an all-in-one way to avoid these issues altogether, which you'll see later in this article.
Access Denied: Undetected ChromeDriver 403
The Undetected ChromeDriver 403 error occurs when the library hits a roadblock, such as an anti-bot measure, while trying to access a protected website. A 403 error means the web server understands your request but refuses to fulfill it.
For example, scraping the full-page HTML of a protected website like OpenSea can result in an "Access denied" error. Try it out with the following code:
# pip3 install undetected-chromedriver
import undetected_chromedriver as uc
if __name__ == "__main__":
# define Chrome options
options = uc.ChromeOptions()
# run Chrome in headless mode
options.headless = True
# set up the WebDriver for Chrome
driver = uc.Chrome(
use_subprocess=False,
options=options,
)
# open the specified URL
driver.get("https://opensea.io/")
# print the full-page HTML
print(driver.page_source)
# close the browser
driver.close()
The returned HTML contains an "Access denied" message in the title tag, indicating that the website rejected your request:
<html class="no-js" lang="en-US">
<title>Access denied</title>
<!-- ... other content omitted for brevity -->
</html>
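When scraping at scale, it helps to detect a block page programmatically instead of eyeballing the HTML. The sketch below extracts the title with the standard library's HTMLParser and checks it against a couple of common block-page markers (the marker list is an assumption; adjust it to your targets):

```python
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Collects the text inside the <title> tag."""
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# page_source would come from driver.page_source in a real run
page_source = '<html class="no-js" lang="en-US"><title>Access denied</title></html>'

parser = TitleParser()
parser.feed(page_source)

block_markers = ("Access denied", "Attention Required")
is_blocked = any(marker in parser.title for marker in block_markers)
print(parser.title, "->", "blocked" if is_blocked else "ok")
```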
Removing the "Access denied" 403 error can be challenging, depending on a website's protection level. But it's possible!
Undetected ChromeDriver emits bot-like signals, such as the HeadlessChrome flag in its user agent when running in headless mode. You can reduce the chances of getting the Undetected ChromeDriver 403 error by running the library in GUI (non-headless) mode. This approach runs your scraper in a visible browser window, making it look more like a real user's browser.
To run Undetected ChromeDriver in non-headless mode, change the headless option to False:
if __name__ == "__main__":
# ...
# set headless to False to run in non-headless mode
options.headless = False
The Undetected ChromeDriver 403 error can also be due to a rate-limited IP ban, which occurs when you send multiple requests from the same IP address. Another way to avoid the 403 error is to set up a proxy with the Undetected ChromeDriver.
To add a proxy to Undetected ChromeDriver, extend ChromeOptions with the proxy address like so:
if __name__ == "__main__":
# ...
# specify a proxy address
proxy = "http://157.230.89.122:18085"
# add the proxy address to the ChromeOptions
options.add_argument(f"--proxy-server={proxy}")
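A single proxy can itself get rate-limited, so scrapers often rotate through a pool and pick a fresh address for each browser session. Here's a minimal sketch (the addresses are placeholders, not live proxies):

```python
import random

# placeholder proxy endpoints; swap in proxies you actually control
proxy_pool = [
    "http://203.0.113.10:8080",
    "http://198.51.100.7:3128",
    "http://192.0.2.25:18085",
]

# choose a different proxy for each new browser session
proxy = random.choice(proxy_pool)
proxy_arg = f"--proxy-server={proxy}"
print(proxy_arg)  # pass this string to options.add_argument(...)
```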
Check out our complete tutorial on setting up a proxy with the Undetected ChromeDriver to learn more.
Avoid Errors and Blocks With a Web Scraping API
Undetected ChromeDriver is prone to errors, especially when handling heavily protected websites. That's because the library can't keep up with the frequent security updates of these anti-bots.
Additionally, while the Undetected ChromeDriver is designed to bypass anti-bots, it still presents some bot-like signals that can result in getting blocked.
For instance, Undetected ChromeDriver can't bypass a heavily protected website like G2 Reviews:
# pip3 install undetected-chromedriver
import undetected_chromedriver as uc
if __name__ == "__main__":
# define Chrome options
options = uc.ChromeOptions()
# run the browser in headless mode
options.headless = True
# set up the WebDriver for Chrome
driver = uc.Chrome(
use_subprocess=False,
options=options,
)
# open the specified URL
driver.get("https://www.g2.com/products/asana/reviews")
print(driver.page_source)
# close the browser
driver.close()
The above scraper gets blocked by Cloudflare, as shown:
<!DOCTYPE html>
<html class="no-js" lang="en-US">
<head>
<title>Attention Required! | Cloudflare</title>
<!-- ... -->
</head>
<body>
<!-- ... -->
<div class="cf-wrapper cf-header cf-error-overview">
<h1 data-translate="block_headline">Sorry, you have been blocked</h1>
</div>
<!-- ... -->
</body>
</html>
The best way to prevent potential errors and avoid blocks is to use a web scraping API like ZenRows. This tool helps you auto-rotate premium proxies, fix your request headers, and auto-bypass CAPTCHAs and other anti-bot measures at scale.
ZenRows also acts as a headless browser, allowing you to completely replace the Undetected ChromeDriver. It's easier to set up and compatible with any programming language. All you need to do is make a single API call and watch ZenRows handle all anti-bot bypass technicalities under the hood while you focus on your scraping logic.
To see how ZenRows works, let's use it to scrape the protected website that blocked our scraper previously.
Sign up to open the ZenRows Request Builder. Paste the target URL into the link box, activate Premium Proxies and JS Rendering, choose Python as your programming language, and select the API connection mode. Then, copy the generated code into your Python file. It should look like the following:
# pip install requests
import requests
url = "https://www.g2.com/products/asana/reviews"
apikey = "<YOUR_ZENROWS_API_KEY>"
params = {
"url": url,
"apikey": apikey,
"js_render": "true",
"premium_proxy": "true",
}
response = requests.get("https://api.zenrows.com/v1/", params=params)
print(response.text)
The code outputs the protected website's full-page HTML, as shown:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<link href="https://www.g2.com/images/favicon.ico" rel="shortcut icon" type="image/x-icon" />
<title>Asana Reviews, Pros + Cons, and Top Rated Features</title>
<!-- ... -->
</head>
<body>
<!-- other content omitted for brevity -->
</body>
</html>
Your scraper now uses ZenRows to bypass anti-bot protection. Welcome to scraping without limitations!
Conclusion
We've explained the common errors you might encounter while using Undetected ChromeDriver, including their causes and solutions. The approach to solving each issue depends on the specific error message. As a recap, here's a summary of the solutions to each error:
- Module not found error: Ensure pip points to the correct virtual environment and install Undetected ChromeDriver.
- Wrong Chrome version: Update your Chrome browser or point Undetected ChromeDriver to your Chrome browser's executable path.
- Access denied: Undetected ChromeDriver 403: Run the driver in GUI mode and set up a proxy.
- Runtime error: Execute your scraping task inside a main process.
However, you can still get blocked despite applying these solutions. The only easy and sure way to bypass these potential errors is to use a web scraping API like ZenRows, an all-in-one solution that lets you scrape any website at scale without getting blocked.
Try ZenRows for free now without a credit card!