cURL User Agent: How to Set or Change It in 2025

Yuvraj Chandra
Updated: January 26, 2025 · 4 min read

Websites employ multiple techniques to restrict non-human traffic. Analyzing the User Agent string provided in the HTTP request header is one of them.

In this guide, we'll learn how to set a custom cURL User Agent and randomize it to avoid getting blocked while web scraping.

What Is the cURL User-Agent

When you use cURL to send an HTTP request, it includes a User Agent string in the request headers that identifies the client to the website.

This string describes the software, device, or application making the request and typically includes details such as the application name, version, and operating system.

Here is what a typical UA looks like:

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36


From this example, we can tell that the request comes from Chrome version 94.0.4606.81 on a 64-bit Windows 10 machine, among other details.

Does cURL Add a User-Agent

Yes. When you do cURL web scraping without explicitly setting a User Agent, cURL sends a generic one by default. This default UA is typically something like User-Agent: curl/7.79.1; the exact value varies based on your cURL version and platform.
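You can check the default for yourself against an endpoint that echoes the header back, such as httpbin:

Terminal
curl https://httpbin.org/user-agent

The response is a small JSON object like {"user-agent": "curl/7.79.1"}, with the version matching your local install.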

However, using the default cURL User-Agent, or not specifying one at all, makes it easy for websites to detect that the request comes from an automated script. That can get your scraper flagged or blocked, so changing the default cURL User Agent is essential to reduce the risk.

How to Change the User Agent in cURL

This section will show how to change and set a cURL User Agent. You'll also learn how to randomize the User Agent to simulate different users.

How to Set a Custom User Agent in cURL

To change the User-Agent (UA) in cURL, you can use the -A or --user-agent option followed by the desired User-Agent string. Here's a step-by-step guide:

The first step is to get a User Agent string. You can grab some from our list of User Agents.

The next step is to set the new User Agent by including the -A or --user-agent option followed by the desired UA string in your cURL command. Here's the syntax:

Terminal
curl -A "<User-Agent>" <URL>
curl --user-agent "<User-Agent>" <URL>

Replace <User-Agent> with the User-Agent string you want to use and <URL> with the target URL.
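The -A flag is shorthand for setting the header directly; you can achieve the same thing with cURL's generic -H option:

Terminal
curl -H "User-Agent: <User-Agent>" <URL>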

For example, let's say you want to use Mozilla/5.0 (Linux; Android 10; SM-G996U Build/QP1A.190711.020; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Mobile Safari/537.36 and the website you want to scrape is https://example.com/. The cURL command would be:

Terminal
curl -A "Mozilla/5.0 (Linux; Android 10; SM-G996U Build/QP1A.190711.020; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Mobile Safari/537.36" https://example.com/

That's it! Your User-Agent will be modified once you run the command.
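If you want every cURL invocation to use a custom UA without repeating the flag, you can also set it in cURL's default config file, ~/.curlrc (the UA string below is just an example; use any from your list):

Example
# ~/.curlrc -- options here apply to every cURL run
user-agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36"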

You can verify that it worked by making a request to What Is My Browser, which displays the UA it receives. Here's what the result looks like:

User Agent Output
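If you'd rather verify from the terminal, httpbin echoes the User Agent back as JSON:

Terminal
curl -A "Mozilla/5.0 (Linux; Android 10; SM-G996U Build/QP1A.190711.020; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Mobile Safari/537.36" https://httpbin.org/user-agent

The response should contain exactly the string you passed with -A.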

Get a Random User-Agent in cURL

Changing your cURL User Agent is a good first step to avoid detection when making multiple requests to a website, but it's not enough on its own. You should also randomize your User Agents so that your requests look like they come from different users, which further reduces the chance of being blocked.

To rotate the cURL User-Agent, create a list first. We'll take examples from our list of User-Agents. Here are the first few lines of code:

Example
#!/bin/bash

user_agent_list=(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36"
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36"
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 13_1) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.1 Safari/605.1.15"
)

The next step is to specify the website we want to scrape. In this tutorial, we'll make a request to https://httpbin.org/headers. Then, we'll create a loop and use Bash's RANDOM variable to generate a random number, which selects a random User-Agent from our list. Finally, we'll use the -A option to set the User-Agent header to the randomly selected string.

Example
url="https://httpbin.org/headers"

# send three requests, each with a randomly chosen User Agent
for i in {1..3}
do
    # RANDOM is a Bash built-in; the modulo maps it to a valid array index
    random_index=$(( RANDOM % ${#user_agent_list[@]} ))
    user_agent="${user_agent_list[random_index]}"
    # -w "\n" prints a newline after each response body
    curl -A "$user_agent" "$url" -w "\n"
done

That's it! We have now successfully rotated the UAs in each round of our requests.

Here is what the full code looks like:

Example
#!/bin/bash

user_agent_list=(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36"
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36"
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 13_1) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.1 Safari/605.1.15"
)

url="https://httpbin.org/headers"

# send three requests, each with a randomly chosen User Agent
for i in {1..3}
do
    # RANDOM is a Bash built-in; the modulo maps it to a valid array index
    random_index=$(( RANDOM % ${#user_agent_list[@]} ))
    user_agent="${user_agent_list[random_index]}"
    # -w "\n" prints a newline after each response body
    curl -A "$user_agent" "$url" -w "\n"
done
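Save the script under any name you like (rotate_ua.sh is used here just as an example), then make it executable and run it:

Terminal
chmod +x rotate_ua.sh
./rotate_ua.sh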

And this is the result:

User Agent Output
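As a lighter-weight variation, if you keep your User Agents in a plain-text file (one per line), you can let shuf do the random pick. This is a minimal sketch that assumes a file named user_agents.txt and GNU coreutils' shuf (preinstalled on most Linux distros; available on macOS via the coreutils package):

Example
#!/bin/bash

# pick one random User Agent from user_agents.txt (one UA per line)
user_agent="$(shuf -n 1 user_agents.txt)"
curl -A "$user_agent" "https://httpbin.org/headers"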

How to Rotate User Agents at Scale

Building a good User Agent rotation system takes a lot of work. You need to keep track of many browser versions, make sure each one matches a plausible operating system, and retire outdated strings.

Also, changing User Agents alone isn't enough to stop websites from blocking you. Modern websites check many other signals, such as your IP address, request behavior, and other technical details about your connection.

A better way is to use ZenRows' Universal Scraper API, which automatically handles User Agents for you, rotates IP addresses, bypasses CAPTCHAs, and provides everything you need to avoid getting blocked.

Let's test how ZenRows performs against a protected page like the Antibot Challenge page.

First, sign up for a ZenRows account to get to the Request Builder.

Building a scraper with ZenRows

Paste the target URL into the link box, enable JS Rendering, and activate Premium Proxies.

Next, select cURL and click on the API connection mode. Then, copy the generated code and paste it into your script.

Terminal
curl "https://api.zenrows.com/v1/?apikey=<YOUR_ZENROWS_API_KEY>&url=https%3A%2F%2Fwww.scrapingcourse.com%2Fantibot-challenge&js_render=true&premium_proxy=true"

When you run this command, you'll successfully access the page:

Output
<html lang="en">
<head>
    <!-- ... -->
    <title>Antibot Challenge - ScrapingCourse.com</title>
    <!-- ... -->
</head>
<body>
    <!-- ... -->
    <h2>
        You bypassed the Antibot challenge! :D
    </h2>
    <!-- other content omitted for brevity -->
</body>
</html>

Congratulations! 🎉 You've just accessed a protected page without getting blocked. This same method works with any website, even ones with strong protection.

Conclusion

In this guide, you've covered the essentials of User Agents in cURL:

  • What User Agents are and why websites check them.
  • How to change your User Agent in cURL.
  • How to use different User Agents for each request.
  • Why changing User Agents alone won't stop blocks.

Remember that most websites have many ways to detect scrapers. Use ZenRows to scrape any page without getting blocked. Try ZenRows for free!
