
How to Set a Proxy In Python Requests (2025)

Idowu Omisola
Updated: December 26, 2024 · 4 min read

Is your IP getting banned while web scraping with Python? Routing your Python requests through a proxy server can increase your chances of avoiding IP bans.

In this tutorial, you'll learn how to do it step-by-step. Let's get started!

Quick Answer: Setting Up a Proxy in Python Requests

In this section, you'll learn how to perform basic Python requests with a proxy, where to get proxies from, how to authenticate them, and more. We'll use https://httpbin.io/ip, a web page that returns your IP address, as the target site to test the proxy connection.

Before we begin, ensure you have Python installed. Then, install the Requests library using pip:

Terminal
pip3 install requests

Following this tutorial will be easier if you know the fundamentals of web scraping with Python.

First, grab some free proxies from the Free Proxy List and define a dictionary with the proxy URLs associated with the HTTP and HTTPS protocols. Next, perform an HTTP request with Python Requests through the proxy server:

program.py
# pip3 install requests
import requests

# specify the proxy address
proxies = {
   'http': 'http://103.167.135.111:80',
   'https': 'http://116.98.229.237:10003',
}

url = 'https://httpbin.io/ip'
# send a request with the proxy server
response = requests.get(url, proxies=proxies)
print(response.text)

In the above code, the Requests library routes plain HTTP traffic through the proxy mapped to the http key and HTTPS traffic through the one mapped to the https key. Verify it works by running the code. You'll get a response similar to the following:

Output
{
  "origin": "103.167.135.111:22008"
}

The origin field contains the IP address of the proxy, confirming Requests made the HTTP request over the proxy server.
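
Free proxies go offline frequently, so a request through one may hang or raise an exception. Here's a minimal sketch (reusing the proxy addresses above, which you should replace with live ones) that adds a timeout and catches the most common failures:

program.py
# pip3 install requests
import requests

# the same free proxies as above (replace with live ones)
proxies = {
    'http': 'http://103.167.135.111:80',
    'https': 'http://116.98.229.237:10003',
}

url = 'https://httpbin.io/ip'

try:
    # fail fast if the proxy is dead or too slow
    response = requests.get(url, proxies=proxies, timeout=5)
    print(response.text)
except requests.exceptions.ProxyError:
    print('the proxy refused or dropped the connection')
except requests.exceptions.Timeout:
    print('the request through the proxy timed out')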

Keep in mind that Requests supports only HTTP and HTTPS proxies out of the box. If you need to route your traffic through a SOCKS proxy instead, you'll need to install the requests[socks] extra:

Terminal
pip3 install "requests[socks]"

You can then specify the SOCKS proxy like this:

program.py
# pip3 install requests[socks]
import requests

# specify the SOCKS proxy address
proxies = {
    'http': 'socks5://<PROXY_IP_ADDRESS>:<PROXY_PORT>',
    'https': 'socks5://<PROXY_IP_ADDRESS>:<PROXY_PORT>'
}

url = 'https://httpbin.io/ip'

# send a request with the proxy server
response = requests.get(url, proxies=proxies)
print(response.text)
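
Note that with the socks5:// scheme, hostnames are resolved on your machine before the connection goes through the proxy. If you want DNS resolution to happen on the proxy side as well, you can switch to the socks5h:// scheme, a small variation on the dictionary above:

program.py
# resolve hostnames through the SOCKS proxy as well
proxies = {
    'http': 'socks5h://<PROXY_IP_ADDRESS>:<PROXY_PORT>',
    'https': 'socks5h://<PROXY_IP_ADDRESS>:<PROXY_PORT>'
}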

Proxy Authentication in Python Requests

Some proxy servers are protected by authentication for security reasons, so only users with credentials can access them. That usually happens with premium proxies or commercial solutions.

Follow this syntax to specify a username and password in the URL of an authenticated proxy:

Example
<PROXY_PROTOCOL>://<YOUR_USERNAME>:<YOUR_PASSWORD>@<PROXY_IP_ADDRESS>:<PROXY_PORT>

See an example:

program.py
# ...

proxies = {
    'http': 'http://<YOUR_USERNAME>:<YOUR_PASSWORD>@<PROXY_IP_ADDRESS>:<PROXY_PORT>',
    'https': 'http://<YOUR_USERNAME>:<YOUR_PASSWORD>@<PROXY_IP_ADDRESS>:<PROXY_PORT>'
}

# ...
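
If your username or password contains special characters such as @ or :, embedding them directly breaks the proxy URL. Here's a small sketch, using hypothetical credentials, that URL-encodes them with urllib.parse.quote before building the proxy string:

program.py
from urllib.parse import quote

# hypothetical credentials containing special characters
username = quote('my_user', safe='')
password = quote('p@ss:word', safe='')

proxy = f'http://{username}:{password}@<PROXY_IP_ADDRESS>:<PROXY_PORT>'
proxies = {
    'http': proxy,
    'https': proxy
}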

Proxy Session Using Python Requests

You may need a session when making many requests through a proxy server. A Session object allows you to reuse the same TCP connection and essential information for several requests, including cookies, authentication, and connection pooling. This not only saves time but also simplifies handling complex authentication flows or optimizing performance for repeated requests to the same host. For instance, when scraping a website that requires login, a session can help you stay authenticated throughout your scraping process without repeatedly sending login credentials.

Here's how to set up and use a proxy session in Python Requests:

program.py
import requests

# initialize a session
session = requests.Session()

# set the proxies in the session object
session.proxies = {
   'http': 'http://103.167.135.111:80',
   'https': 'http://116.98.229.237:10003'
}

url = 'https://httpbin.io/ip'

# perform an HTTP GET request over the session
response = session.get(url)
print(response.text)
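
Since the session persists cookies, anything set by an earlier response is sent automatically on later requests, which is what keeps you logged in during a scraping run. Here's a quick sketch of that behavior, assuming httpbin.io exposes the same /cookies endpoints as httpbin.org:

program.py
# ... continuing with the session configured above

# this response sets a cookie, and the session stores it
session.get('https://httpbin.io/cookies/set?session_id=12345')

# the stored cookie is sent automatically with the next request
response = session.get('https://httpbin.io/cookies')
print(response.text)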

Environment Variable for a Python Requests Proxy

You can DRY up some code if your Python script uses the same proxies for each request. By default, Requests relies on the HTTP proxy configuration defined by these environment variables:

  • HTTP_PROXY: It corresponds to the http key of the proxies dictionary.
  • HTTPS_PROXY: It corresponds to the https key of the proxies dictionary.

Open the terminal and set the two environment variables this way:

Terminal
export HTTP_PROXY="http://103.167.135.111:80"
export HTTPS_PROXY="http://116.98.229.237:10003"

Then, remove the proxy logic from your script, and you'll get to this:

program.py
import requests

url = 'https://httpbin.io/ip'
response = requests.get(url)
print(response.text)
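
If you'd rather keep everything in Python, you can also set the same variables with os.environ before making the first request, since Requests reads them from the environment at request time:

program.py
import os
import requests

# equivalent to the export commands above
os.environ['HTTP_PROXY'] = 'http://103.167.135.111:80'
os.environ['HTTPS_PROXY'] = 'http://116.98.229.237:10003'

url = 'https://httpbin.io/ip'
response = requests.get(url)
print(response.text)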

Great! You now know the basics of proxies in Python with Requests! Let's see them in action in some more advanced scenarios.

Rotating Proxies in Python Requests

When your script sends numerous requests rapidly, websites may flag this behavior as suspicious and block your IP address. Implementing a rotating proxy strategy can effectively prevent this. The concept is simple: you switch to a new IP after a set time interval or a certain number of requests, making each request appear as if it's from a different user.

Let's implement a simple proxy rotator using free proxies. 

First, grab more proxies from the previous free proxy website and insert them into a list. Create a function that picks a random proxy from the list, then use the selected proxy to send the HTTP request:

program.py
# pip3 install requests
import requests
import random

# list of free proxies (replace with actual working proxies)
PROXY_LIST = [
    "http://203.24.108.161:80",
    "http://80.48.119.28:8080",
    "http://170.30.189.47:80",
]


# randomly select a proxy from the list
def get_random_proxy():
    return random.choice(PROXY_LIST)


# get a random proxy
proxy = get_random_proxy()

# target url to check our IP
url = "https://httpbin.io/ip"

# make a request using the selected proxy
response = requests.get(url, proxies={"http": proxy, "https": proxy})

# print the response, which should show the proxy's IP
print(response.text)

Here's the result of running the above code three times:

Output
# request 1
{
  "origin": "203.30.189.47"
}

# request 2
{
  "origin": "80.48.119.28"
}

# request 3
{
  "origin": "170.30.189.47"
}

If you run this script multiple times, you'll see that it uses a different IP address on each run, demonstrating IP rotation in action. This logic can increase your chances of bypassing rate limits on target websites.
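
In a real scraper, you'd rotate proxies across many requests in the same run rather than picking one per script execution. Here's a minimal sketch, reusing the same proxy list, that selects a new proxy on every iteration and skips the ones that fail:

program.py
# pip3 install requests
import random
import requests

# list of free proxies (replace with actual working proxies)
PROXY_LIST = [
    "http://203.24.108.161:80",
    "http://80.48.119.28:8080",
    "http://170.30.189.47:80",
]

url = "https://httpbin.io/ip"

for _ in range(5):
    # pick a different proxy for each request
    proxy = random.choice(PROXY_LIST)
    try:
        response = requests.get(
            url, proxies={"http": proxy, "https": proxy}, timeout=5
        )
        print(response.text)
    except requests.exceptions.RequestException:
        # free proxies fail often: report it and move on to the next request
        print(f"proxy {proxy} failed, skipping")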

By default, Requests verifies SSL certificates on HTTPS requests. Certificate verification can lead to SSLError exceptions when dealing with proxies.

To avoid those errors, deactivate SSL verification with verify=False:

program.py
# ...
response = requests.get(
    url,
    proxies={"http": proxy, "https": proxy},
    timeout=5,
    # deactivate SSL certificate verification
    verify=False
)
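
Keep in mind that disabling verification makes Requests emit an InsecureRequestWarning on every call. If you've accepted the risk, you can silence the warning like this:

program.py
import urllib3

# suppress the warning triggered by verify=False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)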

However, it's important to note that free proxies often have limitations. They can be unreliable, slow, or may not work with more sophisticated websites. 

For serious scraping tasks, consider using premium proxy services that offer better reliability, speed, and features like automatic IP rotation and geolocation targeting. That said, making the right premium proxy choice is essential to the success of your scraping task. We'll help you with that in the next section.

How to Choose the Best Proxies

As mentioned, while free proxies can be handy for basic tasks, they often fail with serious web scraping projects. Using premium proxies that offer auto-rotation, residential IPs, and geolocation features significantly increases your scraping success rate.

One example of a premium proxy service that offers these features is the ZenRows Residential Proxies service. ZenRows distributes your traffic across 55+ million globally distributed premium IPs.

To get started, sign up for ZenRows to open the Request Builder. Go to the Residential Proxies section. Then, copy the proxy address and your proxy credentials (username and password).

Generate residential proxies with ZenRows

Modify your code with the copied credentials as follows:

program.py
import requests

proxy = 'http://<ZENROWS_PROXY_USERNAME>:<ZENROWS_PROXY_PASSWORD>@superproxy.zenrows.com:1337'
proxies = { 
    'http': proxy, 
    'https': proxy
}

url = 'https://httpbin.io/ip'
response = requests.get(url, proxies=proxies)
print(response.text)

Run the code. Here's an example of what the output would look like:

Output
{
  "origin": "185.220.101.34:65273"
}

This output shows that your request now routes through ZenRows' premium proxies.
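
Since ZenRows residential proxies rotate IPs for you, sending a few requests in a row should return different origin addresses. A quick way to check, reusing the proxies dictionary from the snippet above:

program.py
# ... reusing the proxies dictionary defined above
for _ in range(3):
    response = requests.get('https://httpbin.io/ip', proxies=proxies)
    print(response.text)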

Congrats! Your premium proxy with Python Requests script is ready!

Conclusion

Proxies are essential for bypassing IP bans and increasing your scraper's success rate. This step-by-step tutorial covered the most important lessons about proxies with Requests in Python. You started from the basic setup and have now become a proxy master!

Remember that free proxies are unsuitable in real-life scenarios. We recommend using premium proxies from a provider like ZenRows to enjoy the best experience. With it, you'll get access to a reliable rotating proxy system. Try ZenRows for free now — no credit card required!

Frequent Questions

What Proxy Types Are There? Which Are the Best?

There are several types of proxies, each with different levels of effectiveness for web scraping. The major ones include residential, datacenter, mobile, public, and premium proxies. 

Residential proxies are the most reliable for web scraping since they use IPs from regular internet users. For the best results, it's crucial to choose from trusted providers, such as ZenRows. Learn more about selecting the right proxy for your needs in our guide on web scraping proxies.

What Are the Benefits of Using a Proxy for Web Scraping?

Using proxies for web scraping offers several significant advantages, including avoiding anti-bot systems, geolocation targeting to access geo-restricted content, anonymity, and improved performance, which can reduce the risk of rate limiting.
