Pyppeteer, on its own, is an easy target for anti-bots and will most likely get blocked every time if not adequately supported. Simply adding the pyppeteer_stealth plugin to your Pyppeteer scraper can increase your chances of bypassing anti-bot measures.
In this article, you'll learn how to patch Pyppeteer's bot-like signals with the pyppeteer_stealth plugin to limit the chances of anti-bot detection.
What Is pyppeteer_stealth?
`pyppeteer_stealth` is a stealth patch for Pyppeteer, Python's unofficial port of Puppeteer. In short, `pyppeteer_stealth` is the Python implementation of the Puppeteer Stealth plugin.
The main Pyppeteer library leaks bot-like parameters that make it susceptible to detection. pyppeteer-stealth removes those loopholes, increasing the chances of bypassing anti-bot measures while scraping with Pyppeteer.
`pyppeteer-stealth` works by patching suspicious browser fingerprints in Pyppeteer using specific evasion techniques. The commonly patched properties include the following (you can inspect a few of them yourself with the sketch after the list):
- User Agent: It changes the `HeadlessChrome` User Agent flag to an actual Chrome flag while in headless mode.
- WebDriver: `pyppeteer-stealth` updates the automated WebDriver navigator field from `true` to `false`, making it less obvious that you're using an automated browser.
- Chrome Runtime: `pyppeteer-stealth` modifies the Chrome runtime, making headless Chrome appear like it's in the regular GUI mode.
- Hardware Concurrency: The library overrides the hardware concurrency to simulate a real machine's CPU cores.
- Plugins: It patches `navigator.plugins` with real browser plugins.
- Vendor: It overrides the `navigator.vendor` field with a genuine software vendor's name instead of the empty string usually returned in headless mode.
- WebGL: `pyppeteer-stealth` patches the WebGL property by spoofing a real machine's GPU values.
- Media codecs: It replaces bot-like media codecs with realistic MIME types.
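To see some of these patches for yourself, you can read the relevant navigator fields straight from the browser context. The following is a minimal sketch, not part of the original tutorial, that prints a handful of the properties listed above with and without the stealth plugin (it assumes `pyppeteer` and `pyppeteer_stealth` are installed, which is covered in the next section):

```python
# a minimal sketch that prints a few fingerprint fields,
# with and without the pyppeteer_stealth evasions applied
import asyncio

from pyppeteer import launch
from pyppeteer_stealth import stealth

async def inspect_fingerprints(use_stealth: bool):
    browser = await launch(headless=True)
    page = await browser.newPage()
    if use_stealth:
        # apply the stealth evasions before doing anything else
        await stealth(page)

    # read navigator properties that anti-bots commonly check
    fields = await page.evaluate(
        """() => ({
            userAgent: navigator.userAgent,
            webdriver: navigator.webdriver,
            vendor: navigator.vendor,
            hardwareConcurrency: navigator.hardwareConcurrency,
            pluginCount: navigator.plugins.length,
        })"""
    )
    print("stealth" if use_stealth else "plain", fields)
    await browser.close()

asyncio.run(inspect_fingerprints(False))  # base Pyppeteer fingerprint
asyncio.run(inspect_fingerprints(True))   # patched fingerprint
```

On the plain run, you should see the `HeadlessChrome` User Agent and a truthy `webdriver` flag; on the stealth run, those values get patched.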
Now, let's see how to use the `pyppeteer-stealth` plugin.
How to Scrape With pyppeteer_stealth
In this section, you'll learn how to use `pyppeteer-stealth` while testing its evasion strength on Sannysoft, a fingerprinting test website. Let's start with the installation.
Install pyppeteer_stealth
To get started, install `pyppeteer-stealth` and `pyppeteer` with `pip`:
pip3 install pyppeteer_stealth pyppeteer
Now, let's begin!
Scrape Data Using pyppeteer_stealth
Before adding the stealth plugin, let's first see how the base Pyppeteer performs on Sannysoft. This serves as a baseline for comparison with the stealth plugin. Try it out with the following code that screenshots the test result page:
# pip3 install pyppeteer-stealth pyppeteer
import asyncio
from pyppeteer import launch
async def scraper():
    # launch the browser in headless mode
    browser = await launch(headless=True)
    page = await browser.newPage()

    # visit the target website
    await page.goto("https://bot.sannysoft.com/")

    # screenshot the result
    await page.screenshot({"path": "screenshot.png"})
    await browser.close()

asyncio.run(scraper())
The base Pyppeteer package fails a significant portion of the browser fingerprinting test, as shown by the red highlights in the screenshot below. This means it's susceptible to anti-bot detection:

Now, let's find out if `pyppeteer_stealth` patches these leaks.
Update the previous code by importing `pyppeteer_stealth` and adding the stealth plugin after launching the browser instance:
# pip3 install pyppeteer-stealth pyppeteer
import asyncio
from pyppeteer import launch
from pyppeteer_stealth import stealth
async def scraper():
    # launch the browser in headless mode
    browser = await launch(headless=True)
    page = await browser.newPage()

    # include the pyppeteer_stealth plugin
    await stealth(page)

    # visit the target website
    await page.goto("https://bot.sannysoft.com/")

    # screenshot the result
    await page.screenshot({"path": "screenshot-stealth.png"})
    await browser.close()

asyncio.run(scraper())
The `pyppeteer-stealth` plugin patches the leaked fingerprints, as shown:

Great! You've patched Pyppeteer's leaks with the `pyppeteer-stealth` evasions.
That said, the stealth plugin still presents some limitations that make it unsuitable for bypassing advanced anti-bot measures. We'll explain in a later section.
The Common Issue With Pyppeteer and pyppeteer_stealth
When you run the standard Pyppeteer package for the first time, it might raise an OS error, saying "Chromium downloadable not found." This is a common problem with the library targeting an outdated Chromium version. Here's what the error looks like:
# ...
raise OSError(f'Chromium downloadable not found at {url}: ' f'Received {r.data.decode()}.\n')
OSError: Chromium downloadable not found at https://storage.googleapis.com/chromium-browser-snapshots/Win_x64/1181205/chrome-win.zip: Received...
To solve this problem, go to the Pyppeteer library's installation location and open the Pyppeteer directory. Open the `chromium_downloader.py` file and update the revision value using the following code line. Place this code before the `REVISION` variable (after the `BASE_URL` declaration):
# ...
os.environ["PYPPETEER_CHROMIUM_REVISION"] = "1181217"
REVISION = os.environ.get("PYPPETEER_CHROMIUM_REVISION", __chromium_revision__)
If you've used a virtual environment, go to the `Lib` directory of your virtual environment folder. Open `site-packages` and locate the Pyppeteer package.
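Alternatively, since `REVISION` falls back to the `PYPPETEER_CHROMIUM_REVISION` environment variable (as the snippet above shows), you can avoid editing library files by setting that variable in your own script before importing Pyppeteer. Here's a minimal sketch using the same revision number as above:

```python
# set the revision before pyppeteer is imported,
# since chromium_downloader.py reads the variable at import time
import os

os.environ["PYPPETEER_CHROMIUM_REVISION"] = "1181217"

from pyppeteer import launch  # imported only after the variable is set
```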
pyppeteer_stealth's Limitations and Best Alternative
Although `pyppeteer_stealth` patches major browser fingerprints in Pyppeteer, it still leaks subtle bot-like behaviors, such as static, predictable navigation patterns.
What's more, the library hasn't been updated since 2021, which makes it unfit against modern, regularly updated anti-bot measures.
And since `pyppeteer_stealth` is open source, anti-bot developers can gain insights into its bypass mechanisms and block it.
For instance, `pyppeteer_stealth` fails to bypass the protection on this Antibot Challenge page. Try it out with the code below:
# pip3 install pyppeteer-stealth pyppeteer
import asyncio
from pyppeteer import launch
from pyppeteer_stealth import stealth
async def scraper():
    # launch the browser in headless mode
    browser = await launch(headless=True)
    page = await browser.newPage()

    # include the pyppeteer_stealth plugin
    await stealth(page)

    # visit the target website
    await page.goto("https://www.scrapingcourse.com/antibot-challenge")

    # screenshot the result
    await page.screenshot({"path": "screenshot.png"})
    await browser.close()

asyncio.run(scraper())
`pyppeteer_stealth` gets blocked, as shown:

`pyppeteer_stealth` proves unreliable despite its ability to patch Pyppeteer's browser fingerprints. This is because anti-bot measures are becoming smarter in their bot detection approach. Many anti-scraping technologies can even detect the patching attempts of `pyppeteer_stealth`.
The best way to reliably bypass these sophisticated, modern anti-bot measures is to use a scraping solution, such as the ZenRows Universal Scraper API. ZenRows applies the required evasion techniques to bypass any anti-bot measure at scale, with zero maintenance requirements.
ZenRows' auto-managed anti-bot evasion system continuously adapts to evolving anti-bot technologies, guaranteeing a 99.93% scraping success rate. This lets you focus your time and resources on business logic rather than battling abrupt scraping failures. You also get rotating premium proxies per request to avoid IP bans and rate limiting.
ZenRows' ability to integrate with third-party automation tools, including n8n, Make, Clay, Zapier, and more, makes it suitable for end-to-end scraping workflows. It also offers headless browser features for scraping dynamic websites, making it a suitable replacement for Pyppeteer and `pyppeteer_stealth`.
To see how ZenRows works, let's use it to scrape the Antibot Challenge page that blocked `pyppeteer_stealth` previously.
Sign up and go to the Request Builder. Paste the target URL in the link field and activate Premium Proxies and JS Rendering.

Select Python as your programming language and choose the API connection mode. Copy and paste the generated code into your scraper script.
The generated Python code looks like this:
# pip3 install requests
import requests
url = "https://www.scrapingcourse.com/antibot-challenge"
apikey = "<YOUR_ZENROWS_API_KEY>"
params = {
    "url": url,
    "apikey": apikey,
    "js_render": "true",
    "premium_proxy": "true",
}
response = requests.get("https://api.zenrows.com/v1/", params=params)
print(response.text)
The above code bypasses the anti-bot challenge and outputs the protected site's full-page HTML, as shown:
<html lang="en">
<head>
    <!-- ... -->
    <title>Antibot Challenge - ScrapingCourse.com</title>
    <!-- ... -->
</head>
<body>
    <!-- ... -->
    <h2>You bypassed the Antibot challenge! :D</h2>
    <!-- other content omitted for brevity -->
</body>
</html>
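If you'd rather confirm the result programmatically than scan the raw HTML, here's a minimal sketch that parses the response with BeautifulSoup and prints the success heading. The `beautifulsoup4` dependency isn't part of the original tutorial and is only used here for illustration:

```python
# pip3 install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

params = {
    "url": "https://www.scrapingcourse.com/antibot-challenge",
    "apikey": "<YOUR_ZENROWS_API_KEY>",
    "js_render": "true",
    "premium_proxy": "true",
}

response = requests.get("https://api.zenrows.com/v1/", params=params)

# parse the returned HTML and print the challenge heading
soup = BeautifulSoup(response.text, "html.parser")
heading = soup.find("h2")
print(heading.get_text(strip=True) if heading else "No <h2> found")
```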
Congratulations! 🎉 Your scraper now bypasses anti-bot measures reliably. No more scraping failures or missing data.
Conclusion
The pyppeteer_stealth plugin patches some bot-like signals in Pyppeteer. However, the plugin doesn't work against the advanced protection measures on modern websites.
ZenRows is the enterprise-grade solution to scrape any website at scale without getting blocked. It provides an auto-managed, proactive mechanism for bypassing anti-bot security measures as they get updated.