How to Modify Selenium navigator.webdriver to Avoid Anti-Bot Detection

Idowu Omisola
March 13, 2025 · 4 min read

Web scraping with Selenium has become increasingly challenging as websites deploy sophisticated anti-bot measures to detect scrapers. Masking your automation fingerprint is essential for reliable web scraping.

The navigator.webdriver flag is a primary identifier anti-bot systems check during browser fingerprinting. Selenium automatically sets this flag to true when launching a browser. This immediately reveals your automation to anti-bot systems.

In this guide, you'll learn how to modify this property in Selenium to help your scraper blend in with normal traffic and reduce your chances of detection.

What Is navigator.webdriver in Selenium?

The navigator.webdriver property is a JavaScript flag that exists in all modern browsers. When Selenium launches and controls a browser, this property is automatically set to true. It's part of the WebDriver specification that allows websites to detect if they're being accessed by automated software.

Anti-bot systems specifically target this property during browser fingerprinting because it's a definitive indicator of automation. Unlike other browser characteristics that might be ambiguous, a true value for navigator.webdriver provides clear confirmation of automated activity, which makes it a primary detection point for websites that want to block scrapers.
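
You can verify this directly by asking the browser for the flag's value. Below is a minimal sketch (using example.com as a stand-in page) that runs the same JavaScript check a detection script would:

Example
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# set up a stock headless Chrome instance
chrome_options = Options()
chrome_options.add_argument("--headless=new")
driver = webdriver.Chrome(options=chrome_options)

# any page works for reading the flag
driver.get("https://example.com/")

# run the same check a website's detection script would
print(driver.execute_script("return navigator.webdriver"))  # True

driver.quit()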

Let's see how this property affects your Selenium scraper using SannySoft as the test site. SannySoft tests your browser's fingerprint to reveal how websites can detect you.

scraper.py
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# set up Chrome options
chrome_options = Options()
chrome_options.add_argument("--headless=new")
driver = webdriver.Chrome(options=chrome_options)

# open the target page
url = "https://bot.sannysoft.com/"
driver.get(url)

# take a screenshot
driver.save_screenshot("failed_webdriver_test.png")

# close the browser
driver.quit()

When you run this code, the screenshot clearly shows the WebDriver test failing:

As you can see, the test site immediately identifies our browser as automated by detecting the navigator.webdriver property set to true.

In the next section, you'll learn how to modify this property to avoid detection and make your Selenium scraper appear more like a regular browser.

How to Modify navigator.webdriver in Selenium ChromeDriver

You can disable the navigator.webdriver automation property in Selenium by using the --disable-blink-features=AutomationControlled ChromeDriver option. This flag prevents Chromium's Blink engine from automatically setting the WebDriver property to true.

Let's implement this stealth enhancement in the code:

scraper.py
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Set up Chrome options with stealth mode
chrome_options = Options()
chrome_options.add_argument("--headless=new")
chrome_options.add_argument("--disable-blink-features=AutomationControlled")
driver = webdriver.Chrome(options=chrome_options)

# open the target page
url = "https://bot.sannysoft.com/"
driver.get(url)

# take a screenshot
driver.save_screenshot("passed_webdriver_test.png")

# close the browser
driver.quit()

This modified code now passes the WebDriver test:

While this improvement is significant, it's important to recognize the limitations of this approach. The screenshot reveals several remaining fingerprint leaks that can expose your automation. For example, the User-Agent string still contains the `HeadlessChrome` token, which fails the User-Agent test shown in the screenshot.
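
You can patch the most obvious of these leaks by hand. The sketch below overrides the headless User-Agent with a regular Chrome string (the version number is illustrative, so match it to your installed Chrome build) and, as an extra safeguard, redefines navigator.webdriver through the Chrome DevTools Protocol before any page script runs:

Example
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_argument("--headless=new")
chrome_options.add_argument("--disable-blink-features=AutomationControlled")

# replace the HeadlessChrome User-Agent with a regular Chrome one
# (illustrative version string; keep it in sync with your Chrome build)
chrome_options.add_argument(
    "--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36"
)

driver = webdriver.Chrome(options=chrome_options)

# redefine navigator.webdriver before any page script can read it
driver.execute_cdp_cmd(
    "Page.addScriptToEvaluateOnNewDocument",
    {"source": "Object.defineProperty(navigator, 'webdriver', {get: () => undefined})"},
)

driver.get("https://bot.sannysoft.com/")
driver.save_screenshot("patched_fingerprint_test.png")
driver.quit()

Hand-patching works, but each fix covers only a single signal, and new leaks appear as anti-bots evolve.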

In the next section, you'll learn how to use stealth tools like SeleniumBase and Undetected ChromeDriver to patch the User-Agent and other subtle fingerprint leaks that might still give away your automation. These tools can help you achieve more comprehensive protection against anti-bot detection.

Disable Selenium navigator.webdriver With SeleniumBase

SeleniumBase with Undetected ChromeDriver (UC) provides a more comprehensive solution for bypassing anti-bot detection. It addresses multiple fingerprinting signals simultaneously, including the navigator.webdriver flag.

To get started with SeleniumBase, install it using pip:

Terminal
pip3 install seleniumbase

After installation, you can use SeleniumBase's UC mode to create a stealthier browser instance. Here's an example using the same target site:

scraper.py
from seleniumbase import Driver

# create undetected browser instance
driver = Driver(uc=True)

# open the target page
url = "https://bot.sannysoft.com/"
driver.get(url)

# take a screenshot
driver.save_screenshot("seleniumbase_test.png")

# close the browser
driver.quit()

The screenshot from this approach shows significant improvements:

Notice that both the WebDriver and User-Agent tests now pass. SeleniumBase with UC mode successfully masks multiple automation fingerprints, making your scraper much harder for anti-bot systems to detect.
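
For tougher targets, SeleniumBase also provides the SB context manager with a uc_open_with_reconnect() method, which opens the page and then briefly disconnects the driver so that detection scripts running on page load can't see an active WebDriver connection. Here's a minimal sketch (the reconnect_time value is a starting point to tune per site):

Example
from seleniumbase import SB

# the SB context manager handles driver setup and teardown
with SB(uc=True) as sb:
    # open the page, then disconnect briefly so checks that run
    # on page load can't see an active WebDriver session
    sb.uc_open_with_reconnect("https://bot.sannysoft.com/", reconnect_time=4)
    sb.save_screenshot("seleniumbase_sb_test.png")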

Despite these improvements, this solution still has significant limitations. Being open-source, these tools often struggle to keep pace with the rapidly evolving anti-bot landscape. Anti-bots update their detection mechanisms frequently, often outpacing the community-driven updates to these tools.

Additionally, SeleniumBase with UC works most reliably in non-headless mode when dealing with sophisticated anti-bots. Running full GUI browser instances creates substantial memory overhead, making this approach unsuitable for large-scale scraping operations.

In the next section, we'll explore a more reliable and scalable solution that can completely bypass anti-bot detection without the limitations of these open-source tools. This approach will help you maintain consistent access to the web data you need, even when faced with the most advanced protection systems.

Use ZenRows to Scrape Without Getting Blocked

To reliably bypass anti-bot systems, you'd need to modify numerous aspects of your Selenium setup beyond just the navigator.webdriver flag.

The ZenRows Universal Scraper API provides the easiest and most reliable solution to avoid getting blocked without dealing with these technical complexities. Rather than patching each individual fingerprint yourself, ZenRows handles all the stealth optimizations through a simple API call.

ZenRows addresses all the key challenges, including automation flag removal, with additional powerful features like premium proxy rotation, JavaScript rendering, headless browser support, CAPTCHA and anti-bot auto-bypass, and everything else you need for reliable web scraping.

Let's see how easy it is to use ZenRows to scrape a heavily protected website like the Antibot Challenge page.

Sign up for a free account to open the Request Builder. Then, paste your target URL in the address box and activate Premium Proxies and JS Rendering.

Building a scraper with ZenRows

Select your preferred programming language (Python, in this case) and choose the API connection mode. Copy the generated code and paste it into your scraper.

Your code should look like this:

Example
# pip3 install requests
import requests

url = "https://www.scrapingcourse.com/antibot-challenge"
apikey = "<YOUR_ZENROWS_API_KEY>"
params = {
    "url": url,
    "apikey": apikey,
    "js_render": "true",
    "premium_proxy": "true",
}
response = requests.get("https://api.zenrows.com/v1/", params=params)
print(response.text)

Running this code successfully bypasses the anti-bot challenge and returns the full HTML of the protected page:

Output
<html lang="en">
<head>
    <!-- ... -->
    <title>Antibot Challenge - ScrapingCourse.com</title>
    <!-- ... -->
</head>
<body>
    <!-- ... -->
    <h2>
        You bypassed the Antibot challenge! :D
    </h2>
    <!-- other content omitted for brevity -->
</body>
</html>

Congratulations! 🎉 You successfully scraped a website protected by an advanced anti-bot system, without the complexity of manually patching Selenium.

With ZenRows, you can now reliably access web data at any scale without worrying about being blocked.

Conclusion

In this guide, you learned how the navigator.webdriver property exposes your Selenium automation to anti-bot systems. We covered multiple approaches to modify this flag, from basic Chrome options to more comprehensive solutions like SeleniumBase.

Solutions like ZenRows provide the most effective approach for reliable web scraping at scale. Instead of constantly battling evolving anti-bot mechanisms, ZenRows ensures consistent access to web data without technical complications. Try ZenRows today for hassle-free web scraping!
