Using an incorrect user agent when scraping, or ignoring the related best practices, is a recipe for getting blocked. To help you avoid that, this article gives you the best user agents for scraping and tips on how to use them.
Ready? Let's go!
What Is a User Agent?
A User Agent (UA) is a string the web browser sends to a server with every HTTP request. Located in the request headers, it identifies the browser type and version, as well as the operating system. On the client side, you can access it with JavaScript via the navigator.userAgent property. The remote web server uses this information to serve content compatible with the user's environment.
While the exact structure and details vary, most web browsers follow the same format:
Mozilla/5.0 (<system-information>) <platform> (<platform-details>) <extensions>
For example, a user agent string for Chrome (Chromium) might be Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36. Let's break it down: it contains the browser name (Chrome), the version number (109.0.0.0), and the operating system the browser runs on (Windows NT 10.0 on a 64-bit processor).
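To make that structure concrete, here's a short Python sketch (for illustration only) that pulls those pieces out of the string with regular expressions:
import re

ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36"

# The first parenthesized group holds the system information
system_info = re.search(r"\((.*?)\)", ua).group(1)
# The Chrome/<version> token holds the browser version
chrome_version = re.search(r"Chrome/([\d.]+)", ua).group(1)

print(system_info)     # Windows NT 10.0; Win64; x64
print(chrome_version)  # 109.0.0.0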
Why Is a User Agent Important for Web Scraping?
Since UA strings help web servers identify the type of client making a request, browser or bot, sending a realistic one from your scraper helps mask your spider as a regular web browser. With the right User Agent strings, you can get past protection systems like Cloudflare.
But a malformed or outdated user agent will get your data extraction script blocked almost every time.
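You can see the problem for yourself: Python's Requests library, for example, announces itself openly unless you override the header, and that signature is trivial for servers to flag:
import requests

# The default User-Agent immediately reveals an automated client,
# e.g. "python-requests/2.31.0" (the version depends on your install)
print(requests.utils.default_headers()["User-Agent"])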
What Are the Best User Agents for Scraping?
We compiled a list of user agents that are effective for scraping. They can help you emulate a browser and avoid getting blocked:
- Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36
- Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:124.0) Gecko/20100101 Firefox/124.0
- Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36 Edg/123.0.2420.81
- Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36 OPR/109.0.0.0
- Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36
- Mozilla/5.0 (Macintosh; Intel Mac OS X 14.4; rv:124.0) Gecko/20100101 Firefox/124.0
- Mozilla/5.0 (Macintosh; Intel Mac OS X 14_4_1) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.4.1 Safari/605.1.15
- Mozilla/5.0 (Macintosh; Intel Mac OS X 14_4_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36 OPR/109.0.0.0
- Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36
- Mozilla/5.0 (X11; Linux i686; rv:124.0) Gecko/20100101 Firefox/124.0
How to Check User Agents and Understand Them
The easiest way is to visit UserAgentString.com. It automatically displays the user agent of your browsing environment. You can also get detailed information about any other user agent: just paste the string into the input field and click "Analyze."
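If you'd rather check from code, httpbin.org exposes an endpoint that echoes back the user agent it received, which is handy for confirming what your scraper actually sends:
import requests

# The response contains the User-Agent header exactly as the server saw it
r = requests.get("https://httpbin.org/user-agent")
print(r.json())  # {'user-agent': 'python-requests/2.31.0'} unless overridden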

How to Set a New User Agent Header in Python?
Let's run a quick example of changing a scraper user agent using Python requests. We'll use a string associated with Chrome:
Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36
Use the following code snippet to set the User-Agent header while sending the request:
import requests
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36"}
# You can test if your web scraper is sending the correct header by sending a request to HTTPBin
r = requests.get("https://httpbin.org/headers", headers=headers)
print(r.text)
Its output will look like this:
{
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Host": "httpbin.org",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36",
    "X-Amzn-Trace-Id": "Root=1-63c42540-1a63b1f8420b952f1f0219f1"
  }
}
And that's it! You now have a new user agent for scraping.
How To Rotate Infinite User Agents at Scale
Maintaining a reliable User Agent system is more complex than just having a list of strings. You need a continuous process to update browser versions, verify operating system compatibility, and remove problematic combinations.
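The rotation itself is the easy part. Here's a minimal sketch that picks a random string from a pool (seeded from the list above) for each request; keeping that pool current is where the real work lies:
import random
import requests

# A small pool drawn from the list above; in production, refresh it regularly
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:124.0) Gecko/20100101 Firefox/124.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36",
]

def fetch(url):
    # Pick a different User-Agent on every call
    headers = {"User-Agent": random.choice(user_agents)}
    return requests.get(url, headers=headers)

print(fetch("https://httpbin.org/headers").text)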
Beyond User Agents, modern websites employ sophisticated detection methods to spot automation. They analyze your network behavior, verify header consistency across requests, check IP reputation scores, examine browser fingerprints, and more. Even with perfect User Agent management, these other detection methods can still block your scraper.
The most effective solution is to use a web scraping API like ZenRows. It provides auto-rotating, up-to-date User Agents, premium proxies, JavaScript rendering, CAPTCHA auto-bypass, and everything else you need to avoid getting blocked.
Let's see how ZenRows performs against a protected page like the Antibot Challenge page.
Start by signing up for a new account, and you'll get to the Request Builder.

Paste the target URL, enable JS Rendering, and activate Premium Proxies.
Next, select Python and click on the API connection mode. Then, copy the generated code and paste it into your script.
# pip3 install requests
import requests
url = "https://www.scrapingcourse.com/antibot-challenge"
apikey = "<YOUR_ZENROWS_API_KEY>"
params = {
    "url": url,
    "apikey": apikey,
    "js_render": "true",
    "premium_proxy": "true",
}
response = requests.get("https://api.zenrows.com/v1/", params=params)
print(response.text)
The generated code uses Python's Requests library as the HTTP client. You can install this library using pip:
pip3 install requests
Run the code, and you'll successfully access the page:
<html lang="en">
<head>
    <!-- ... -->
    <title>Antibot Challenge - ScrapingCourse.com</title>
    <!-- ... -->
</head>
<body>
    <!-- ... -->
    <h2>
        You bypassed the Antibot challenge! :D
    </h2>
    <!-- other content omitted for brevity -->
</body>
</html>
Congratulations! 🎉 You’ve successfully bypassed the anti-bot challenge page using ZenRows. This works for any website.
Conclusion
In this guide, you've learned the essentials of managing User Agents with Python Requests:
- Understanding User Agent string structure.
- Selecting appropriate User Agents for different browsers.
- Best practices for User Agent implementation.
- Why User Agent management alone isn't enough for reliable web scraping.
Keep in mind that many websites use different anti-bot mechanisms to prevent web scraping. Integrate ZenRows to make sure you extract all the data you need without getting blocked. Try ZenRows for free!