
How to Rotate Proxies in Python: An In-Depth Guide

Yuvraj Chandra
January 2, 2025 · 5 min read

A proxy can hide your real IP address, but what happens if it gets banned? Proxy rotation ensures that your IP changes with each request, making it a crucial tool for web scraping to avoid detection and prevent IP bans by anti-bot systems.

In this tutorial, you'll learn step by step how to implement proxy rotation in Python, progressing from basic to advanced setups.

What Is Proxy Rotation?

Proxy rotation means switching the IP address you send requests through, either on every request or at a fixed or random interval. The goal is to make each request appear to come from a different machine or location, making it harder for anti-bot measures to detect and block your scraper.

Anti-bot systems often use techniques like rate limiting to cap the number of requests an IP can send within a given time window. As a result, sending many requests through a single IP address can lead to an IP ban, especially during large-scale web scraping or crawling.

You can also use static proxies for scraping, but a single one won't work at scale since it maintains a fixed IP address for every request. However, you can create a list of static proxies and rotate them manually.

A rotating proxy, on the other hand, automatically changes the IP address, reducing the need for manual interventions and giving you the desired IP rotation by default. 

We'll now learn how to rotate static proxies using Python. Let's go!

Frustrated that your web scrapers are blocked again and again?
ZenRows API handles rotating proxies and headless browsers for you.
Try for FREE

How to Rotate Proxies in Python

In this section, you'll learn the steps involved in rotating proxies in Python. To test the proxy connection, you'll use https://httpbin.io/ip, an endpoint that returns your IP address. But first, let's go through the initial requirements. 

Step 1: Prerequisites

To follow this tutorial, ensure the following are available on your machine:

  • Python 3: Some systems have it pre-installed; make sure you're running an up-to-date release.
  • An IDE: Although this tutorial uses VS Code, you can still follow with your preferred IDE.
  • Requests: You'll use Python's Requests as an HTTP client. Install it using pip:
Terminal
pip3 install requests

Ready? Let's grab some proxies!

Step 2: Getting a Proxy List

To rotate proxies, you first need a list of them. Grab a few from a free source, such as Free Proxy List, and save them in a text file called proxies_list.txt.

Step 3: Send a Request Without a Proxy

If you don't specify a proxy, a request goes out with your machine's own IP address. To confirm this, let's send an initial request to the target test site without a proxy.

Import the Requests library, visit the target site, and print its response:

Example
# pip3 install requests
import requests

# send a request to the test endpoint
response = requests.get("https://httpbin.io/ip")

# validate the response
if response.status_code != 200:
    print(f"The request failed with {response.status_code}")
else:
    print(response.text)

The above script will output your default local IP address. Now, let's build on it to set up a proxy in Python.

Step 4: Send a Request With a Proxy

Let's start with a single proxy before moving to the proxy rotation step. Grab a free proxy address from Free Proxy List and implement it in your script. 

Update your existing code with the proxy address by specifying HTTP and HTTPS protocols in a dictionary. Then, route your request through it:

Example
# pip3 install requests
import requests

# specify the proxy server address
proxies = {
    "http": "http://121.136.189.231:60001",
    "https": "http://121.136.189.231:60001",
}

# send a request to the test endpoint
response = requests.get(
    "https://httpbin.io/ip",
    proxies=proxies,
)

# validate the response
if response.status_code != 200:
    print(f"The request failed with {response.status_code}")
else:
    print(response.text)

The code outputs the proxy IP address, as shown:

Output
{
    "origin": "121.136.189.225:30223"
}

That works! You've laid the foundation for your proxy connection. Let's implement proxy rotation in the next section.

Step 5: Rotate Proxies Using a Proxy Pool

Recall that you saved a proxy list in the proxies_list.txt file earlier. You'll rotate those proxies in this section. The file looks something like this:

Output
121.136.189.231:60001
113.160.132.195:8080
122.10.225.55:8000
117.54.114.98:80
93.93.246.219:8080
#... omitted for brevity

You can rotate proxies in two ways:

  • Iterate through the proxy pool sequentially.
  • Iterate through the proxy pool randomly.

You'll learn how to implement both methods below.

Iterate through Proxy Pool Sequentially

Sequential proxy rotation is suitable for even traffic distribution between the proxies. It can be handy if you maintain a small proxy pool and want to avoid overusing some proxies more than others.

However, the limitation of this method is that the target server might detect a pattern and potentially ban the proxy pool.

To rotate the proxies sequentially, you'll cycle through the list in order. Let's implement this now!

First, read the proxies from the proxies_list.txt file into a list. Wrap that list with Python's built-in itertools.cycle, then grab the next proxy from the pool inside a for loop that sends 4 requests (or more, depending on your needs). Build a dictionary with the HTTP and HTTPS entries and pass it to your request:

Example
# pip3 install requests
import requests
from itertools import cycle

# read the proxies from the proxy list file
with open("proxies_list.txt") as f:
    proxies_list = f.read().strip().split("\n")

# create a proxy generator
proxy_pool = cycle(proxies_list)

# iterate through the proxy list
for _ in range(4):

    # get the next proxy from the generator
    proxy = next(proxy_pool)

    # prepare the proxy address
    proxies = {
        "http": f"http://{proxy}",
        "https": f"http://{proxy}",
    }

    # send a request to the target site with the proxy
    response = requests.get(
        "https://httpbin.io/ip",
        proxies=proxies,
    )

    if response.status_code != 200:
        print(f"The request failed with {response.status_code}")
    else:
        print(response.text)

The above code outputs the IP address of the first four proxy servers, as shown:

Output
# request 1:
{
    "origin": "121.136.189.225:22656"
}
# request 2:
{
    "origin": "113.160.132.195:77265"
}
# request 3:
{
    "origin": "122.10.225.55:86883"
}
# request 4:
{
    "origin": "117.54.114.98:36723"
}

You're now rotating proxies sequentially with Python. Next, let's explore how proxy randomization works.

Iterate through Proxy Pool Randomly

Proxy randomization can prevent the target server from detecting a pattern in your requests. This method picks proxies from the pool at random. One drawback, however, is that some proxies may end up being used more than others.

Let's modify the previous code to randomize the proxies. Use Python's built-in random.choice function to pick a random proxy from the list on each iteration of the for loop:

Example
# pip3 install requests
import requests
import random

# read the proxies from the proxy list file
with open("proxies_list.txt") as f:
    proxies_list = f.read().strip().split("\n")

# iterate through the proxy list
for _ in range(4):
    # choose a proxy at random from the list
    random_proxy = random.choice(proxies_list)

    # prepare the proxy address
    proxies = {
        "http": f"http://{random_proxy}",
        "https": f"http://{random_proxy}",
    }

    # send a request to the target site with the proxy
    response = requests.get(
        "https://httpbin.io/ip",
        proxies=proxies,
    )

    if response.status_code != 200:
        print(f"The request failed with {response.status_code}")
    else:
        print(response.text)

The above code prints four random proxy IPs:

Output
# request 1:
{
    "origin": "113.160.132.195:24867"
}
# request 2:
{
    "origin": "93.93.246.219:98683"
}
# request 3:
{
    "origin": "117.54.114.98:36723"
}
# request 4:
{
    "origin": "122.10.225.55:88261"
}

That's it! You now know the various methods of rotating proxies in Python.
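
In real-world runs, free proxies often die without warning, so a request may raise a connection error instead of returning a clean status code. Before moving on, here's a minimal sketch that extends the rotation loop with basic error handling: it catches the exception, skips the dead proxy, and retries with the next one. The fetch_with_rotation helper and max_attempts parameter are illustrative names, not part of any library:

Example
# pip3 install requests
import requests
from itertools import cycle

# read the proxies from the proxy list file
with open("proxies_list.txt") as f:
    proxies_list = f.read().strip().split("\n")

proxy_pool = cycle(proxies_list)


def fetch_with_rotation(url, max_attempts=5):
    # try up to max_attempts proxies before giving up
    for _ in range(max_attempts):
        proxy = next(proxy_pool)
        proxies = {
            "http": f"http://{proxy}",
            "https": f"http://{proxy}",
        }
        try:
            response = requests.get(url, proxies=proxies, timeout=10)
            if response.status_code == 200:
                return response
            print(f"{proxy} returned {response.status_code}, rotating...")
        except requests.RequestException as error:
            print(f"{proxy} failed ({error}), rotating...")
    # all attempts failed
    return None


result = fetch_with_rotation("https://httpbin.io/ip")
if result:
    print(result.text)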

Async Proxy Rotation in Python

Asynchronous proxy rotation allows you to validate and use multiple proxies simultaneously in a non-blocking way, reducing the time required to validate large proxy pools.

To rotate the proxies asynchronously, you'll need aiohttp, an HTTP client designed to run multiple requests concurrently in a single thread. You'll also need asyncio to schedule the concurrent tasks, but that's built into Python 3.

Install aiohttp using pip:

Terminal
pip3 install aiohttp

Import the libraries into your script, define a timeout, specify the proxy file, and set the target URL:

scraper.py
# pip3 install aiohttp
import aiohttp
import asyncio

# define a timeout in seconds
time_out = 10

# test URL
url = "https://httpbin.io/ip"

# specify the proxies file
proxy_file = "proxies_list.txt"

Define a check_proxy function that validates the proxies and visits the target website:

scraper.py
# ...

# validate a single proxy
async def check_proxy(url, proxy):
    try:
        # create an aiohttp session
        session_timeout = aiohttp.ClientTimeout(
            total=None, sock_connect=time_out, sock_read=time_out
        )

        # visit the target site asynchronously with proxy
        async with aiohttp.ClientSession(timeout=session_timeout) as session:
            async with session.get(
                url, proxy=f"http://{proxy}", timeout=time_out
            ) as response:
                print(await response.text())
    except Exception as error:
        print("Proxy responded with an error: ", error)
        return

Create a main function that reads the proxies from the proxy list file and schedules a check_proxy task for each one, so every proxy is validated against the target site concurrently:

scraper.py
# ...

# main function to read and validate proxies
async def main():
    tasks = []
    # read the proxies from the proxy list file
    with open(proxy_file) as f:
        proxies = f.read().strip().split("\n")

    # run the task concurrently
    for proxy in proxies:
        task = asyncio.create_task(check_proxy(url, proxy))
        tasks.append(task)

    await asyncio.gather(*tasks)

Finally, execute the main function with asyncio:

scraper.py
# ...

if __name__ == "__main__":
    # execute the main function
    asyncio.run(main())

Combine the snippets, and you'll get the following complete code:

scraper.py
# pip3 install aiohttp
import aiohttp
import asyncio

# define a timeout in seconds
time_out = 10

# test URL
url = "https://httpbin.io/ip"

# specify the proxies file
proxy_file = "proxies_list.txt"


# validate a single proxy
async def check_proxy(url, proxy):
    try:
        # create an aiohttp session
        session_timeout = aiohttp.ClientTimeout(
            total=None, sock_connect=time_out, sock_read=time_out
        )

        # visit the target site asynchronously with proxy
        async with aiohttp.ClientSession(timeout=session_timeout) as session:
            async with session.get(
                url, proxy=f"http://{proxy}", timeout=time_out
            ) as response:
                print(await response.text())
    except Exception as error:
        print("Proxy responded with an error: ", error)
        return


# main function to read and validate proxies
async def main():
    tasks = []
    # read the proxies from the proxy list file
    with open(proxy_file) as f:
        proxies = f.read().strip().split("\n")

    # run the task concurrently
    for proxy in proxies:
        task = asyncio.create_task(check_proxy(url, proxy))
        tasks.append(task)

    await asyncio.gather(*tasks)


if __name__ == "__main__":
    # execute the main function
    asyncio.run(main())

Awesome! You just supercharged your Python scraper proxy rotator with concurrency. 

If you're scraping dynamic content, you'll need to implement proxy rotation with a browser automation tool like Selenium. Refer to our detailed guide on rotating proxies with Selenium to learn more.
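
While that guide covers the full setup, here's a minimal sketch of how a proxy is typically passed to Selenium with Chrome. The proxy address is a placeholder; in practice, you'd pick it from your pool with the rotation logic above:

Example
# pip3 install selenium
from selenium import webdriver

# placeholder proxy address; pick one from your pool in practice
proxy = "121.136.189.231:60001"

options = webdriver.ChromeOptions()
# route all browser traffic through the proxy
options.add_argument(f"--proxy-server=http://{proxy}")

driver = webdriver.Chrome(options=options)
driver.get("https://httpbin.io/ip")
print(driver.page_source)
driver.quit()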

Best Practices for Efficient Proxy Rotation

Here are some best practices to help you get the most out of proxy rotation.

Skip Free Proxies for Reliable Performance

Free proxies will often lead to scraper failures as they're usually shared and have a short lifespan. While they may be useful for testing and prototyping, they are unsuitable for long-term, real-world projects. To avoid costly interruptions, you should remove all free proxies from your proxy pool.
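
If you must work with a mixed pool in the meantime, a simple safeguard is to health-check every proxy against the test endpoint and keep only the responsive ones. Here's a minimal sketch; the is_alive helper is an illustrative name:

Example
# pip3 install requests
import requests


def is_alive(proxy, timeout=5):
    # a proxy counts as alive if it returns 200 within the timeout
    proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
    try:
        response = requests.get(
            "https://httpbin.io/ip", proxies=proxies, timeout=timeout
        )
        return response.status_code == 200
    except requests.RequestException:
        return False


# read the proxies and keep only those that pass the health check
with open("proxies_list.txt") as f:
    proxies_list = f.read().strip().split("\n")

healthy = [p for p in proxies_list if is_alive(p)]
print(f"{len(healthy)} of {len(proxies_list)} proxies are usable")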

Combine IP Rotation with User-Agent Rotation

Rotating the IP isn't always enough, as anti-bots can still detect you through your User-Agent header. To enhance the reliability of your web scraper, implement User-Agent rotation alongside proxy rotation. Cycling through a list of User-Agent headers makes your scraper significantly more robust.
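
Here's a minimal sketch that pairs a random User-Agent with a random proxy on every request. The User-Agent strings below are sample values you'd replace with your own, up-to-date pool:

Example
# pip3 install requests
import requests
import random

# sample User-Agent strings; substitute your own pool
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.4 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64; rv:128.0) Gecko/20100101 Firefox/128.0",
]

# read the proxies from the proxy list file
with open("proxies_list.txt") as f:
    proxies_list = f.read().strip().split("\n")

for _ in range(4):
    # rotate the proxy and the User-Agent header together
    proxy = random.choice(proxies_list)
    proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
    headers = {"User-Agent": random.choice(user_agents)}

    response = requests.get("https://httpbin.io/ip", proxies=proxies, headers=headers)
    print(response.status_code, response.text)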

Refer to our article on setting the User Agent in Python to learn more.

Opt for Premium Proxy Services for Scalability

Premium web scraping proxies have a high uptime with optimized speed and are sourced ethically, making them less prone to IP bans. Incorporate these proxies into your pool for improved reliability and scalability. Additionally, many premium services provide built-in proxy rotation and geolocation features, further enhancing their effectiveness for large-scale web scraping projects.

Best Solution to Proxy Rotation: ZenRows Residential Proxies

One of the most reliable premium proxy services is ZenRows Residential Proxies. ZenRows enables you to scale with robust proxy rotation, distributing your traffic across 55+ million globally distributed residential IPs. You also get a geolocation feature to access geo-restricted content.

To use the ZenRows residential proxy service, sign up and access the Request Builder. Then, navigate to Residential Proxies and copy the proxy address and your credentials (username and password).

[Image: generate residential proxies with ZenRows]

Set up your proxy as shown:

Example
# pip3 install requests
import requests

proxy = 'http://<ZENROWS_PROXY_USERNAME>:<ZENROWS_PROXY_PASSWORD>@superproxy.zenrows.com:1337'
proxies = { 
    'http': proxy, 
    'https': proxy
}

url = 'https://httpbin.io/ip'
response = requests.get(url, proxies=proxies)
print(response.text)

Here's a sample output:

Output
{
  "origin": "93.93.246.219:3032"
}

Congrats! 🎉 Your scraper is now equipped with ZenRows' premium residential proxies.

Conclusion

You've learned how to rotate proxies in Python and how to speed up the process using asynchronous operations. While proxy rotation increases the chances of bypassing IP bans, the manual approach can be time-consuming and unmanageable.

To avoid the manual process and get higher reliability at scale, we recommend using ZenRows. In addition to premium residential proxies, ZenRows also gives you a web scraping API with advanced anti-bot auto-bypass.

Try ZenRows for free now without a credit card!
