Are you struggling with IP blocks while using SeleniumBase for web scraping? You are not alone.
Using proxies is one way to solve this problem. They help you avoid IP bans, bypass geolocation restrictions, and manage rate limits, ensuring smoother and more reliable scraping sessions.
In this article, you'll learn how to use proxies with SeleniumBase. We'll cover everything from basic setup to authentication, proxy rotation, and even the use of premium proxies.
Let's dive in!
Configure a Proxy With SeleniumBase
To configure a proxy in SeleniumBase, you'll need to use the --proxy=IP_ADDRESS:PORT argument to specify the proxy address on the command line:
pytest proxy_test.py --proxy=<PROXY_IP_ADDRESS>:<PROXY_PORT>
This command routes all traffic through the specified proxy. SeleniumBase uses pytest to manage, run, and report the test results.
Let's see how this works.
For demonstration purposes, we'll use proxies from the Free Proxy List. Then, set up a test that sends a request to https://httpbin.io/ip, a page that returns the IP address of the client making the request.
Here's what a standard proxy address looks like:
<PROXY_PROTOCOL>://<PROXY_IP_ADDRESS>:<PROXY_PORT>
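For instance, an HTTP proxy listening at a made-up address would look like this:

http://203.0.113.45:8080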
To get started, install SeleniumBase using the following command:
pip3 install seleniumbase
Next, create a file called proxy_test.py in your working directory. Then, define a method that opens https://httpbin.io/ip, retrieves the IP address, and prints it:
from seleniumbase import BaseCase
from selenium.webdriver.common.by import By
import time
class ProxyTest(BaseCase):
    def test_proxy(self):
        # open the test URL
        self.driver.get("https://httpbin.io/ip")
        # add a delay
        time.sleep(5)
        # print the body text containing the IP address
        ip_address = self.driver.find_element(By.TAG_NAME, "body").text
        print(ip_address)
Run the above code using the following command. The -s flag tells pytest to display print output in the console:
pytest proxy_test.py --proxy=3.82.105.53:8042 -s
SeleniumBase will route the request and print your proxy's IP address:
{
  "origin": "3.82.105.53:8042"
}
That's it! You've now set up a proxy with SeleniumBase.
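As a side note, BaseCase also ships with built-in helpers like open(), sleep(), and get_text(), so you can write the same test without touching the raw driver. Here's a roughly equivalent sketch using those helpers:

from seleniumbase import BaseCase


class ProxyTest(BaseCase):
    def test_proxy(self):
        # open the test URL with SeleniumBase's navigation helper
        self.open("https://httpbin.io/ip")
        # pause briefly so the page can finish loading
        self.sleep(5)
        # grab and print the body text containing the IP address
        print(self.get_text("body"))

Run it with the same pytest proxy_test.py --proxy=<PROXY_IP_ADDRESS>:<PROXY_PORT> -s command as before.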
Keep in mind that free proxies can be unreliable due to their short lifespan. For stability, it's better to use a premium proxy.
Proxy Authentication in SeleniumBase
If you want the best experience, consider investing in a premium proxy service. Paid proxies are faster, more reliable, and offer better anonymity. Most premium providers require authentication, ensuring only authorized users can access the service. Authentication normally involves a username and password combination.
SeleniumBase supports proxy authentication. You can pass the credentials directly through the command line:
pytest proxy_test.py --proxy=<YOUR_USERNAME>:<YOUR_PASSWORD>@<PROXY_IP_ADDRESS>:<PROXY_PORT>
All you have to do is replace the placeholders with your username, password, proxy IP address, and port.
When you run this command, SeleniumBase will route your browser traffic through the authenticated proxy. The IP address used by the proxy will be displayed in the console.
That's it! Your scraper is now using an authenticated proxy.
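If you'd rather keep the proxy settings inside the script instead of on the command line, recent SeleniumBase versions also accept a proxy argument on the SB() context manager, which takes the same USER:PASS@SERVER:PORT string. Here's a minimal sketch with placeholder credentials you'd swap for your own:

from seleniumbase import SB

# placeholder proxy string -- replace with your real credentials and address
proxy_string = "<YOUR_USERNAME>:<YOUR_PASSWORD>@<PROXY_IP_ADDRESS>:<PROXY_PORT>"

with SB(proxy=proxy_string) as sb:
    # open the test URL through the authenticated proxy
    sb.open("https://httpbin.io/ip")
    # print the body text containing the proxy's IP address
    print(sb.get_text("body"))

Unlike the pytest approach, you can run this file directly with python.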
Rotating Premium Proxies in SeleniumBase
Using a single proxy won't always ensure stable access, as many websites implement rate-limiting measures. They block IPs that exceed a specific traffic limit. On top of that, consistently making requests from the same proxy can degrade its quality, increasing the chances of an IP ban.
The solution is to rotate proxies.
Proxy rotation involves switching between different IP addresses from a pool and sending each request from a new IP. This method spreads traffic across multiple IPs, reducing the likelihood of getting blocked.
Rotating Free IPs with SeleniumBase
To rotate free proxies with SeleniumBase, start by collecting proxies from the Free Proxy List. We'll use them to create a custom list in our script. Remember that free proxies are unreliable. For real projects, it's better to use higher-quality IPs.
Now, create a rotate_proxies.py file and add the following code. This script will randomly select a proxy from a list of free proxies and run proxy_test.py using the selected proxy.
import random
import subprocess
# list of proxy servers (replace with your actual proxies)
proxies = [
    "85.210.203.188:8080",
    "171.244.60.55:8080",
    "51.222.161.115:80",
    # ...
]
# randomly select a proxy from the list
proxy = random.choice(proxies)
# construct the pytest command with the selected proxy
command = f"pytest proxy_test.py --proxy={proxy} -s"
# execute the pytest command in a shell
subprocess.run(command, shell=True)
As a reminder, here is our proxy_test.py script, which will print your proxy IP address:
from seleniumbase import BaseCase
from selenium.webdriver.common.by import By
import time
class ProxyTest(BaseCase):
    def test_proxy(self):
        # open the test URL
        self.driver.get("https://httpbin.io/ip")
        # add a delay
        time.sleep(5)
        # print the body text containing the IP address
        ip_address = self.driver.find_element(By.TAG_NAME, "body").text
        print(ip_address)
Run the rotate_proxies.py script to automatically select a proxy from the list and run the proxy_test.py script, which prints the IP address being used:
python rotate_proxies.py
Here is a sample output for three consecutive requests:
# request 1
{
  "origin": "85.210.203.188"
}
# request 2
{
  "origin": "171.244.60.55"
}
# request 3
{
  "origin": "51.222.161.115"
}
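Note that rotate_proxies.py picks a single random proxy per run, so the three outputs above come from three separate runs. If you'd rather cycle through the whole pool in one execution, a small variation on the script (same placeholder proxies assumed) could look like this:

import subprocess

# same placeholder proxy list as before -- replace with your own proxies
proxies = [
    "85.210.203.188:8080",
    "171.244.60.55:8080",
    "51.222.161.115:80",
]

# run the test once per proxy, switching IPs on every iteration
for proxy in proxies:
    subprocess.run(f"pytest proxy_test.py --proxy={proxy} -s", shell=True)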
Great job! You now know how to rotate IP addresses using a custom proxy list in SeleniumBase.
Premium Residential Proxies to Get Unblocked
Free proxies tend to get blocked easily due to their low quality and overuse, which makes them unreliable for long-term or high-volume scraping tasks. A better option is residential proxies. These proxies use IP addresses assigned by ISPs to real users, making them far more dependable and less likely to get blocked.
ZenRows offers one of the largest pools of premium residential proxies on the market. The proxies combine IP auto-rotation and flexible geo-targeting to bypass rate limits and geographic restrictions at scale.
In addition to residential proxies, you can access extra scraping toolkits to bypass anti-bot security measures, all under a unified price cap.
Now, let's see how you can set up ZenRows' residential proxies.
Sign up to open the Request Builder. Go to the Proxy Generator by clicking the Residential Proxies tab. Copy your proxy credentials (username and password), the proxy domain, and the proxy port.

To run the test using authenticated proxies, use the following format:
pytest proxy_test.py --proxy=<YOUR_USERNAME>:<YOUR_PASSWORD>@<PROXY_DOMAIN>:<PROXY_PORT>
Here are the outputs for three separate requests:
# request 1
{
  "origin": "194.230.160.129:23859"
}
# request 2
{
  "origin": "77.89.83.231:61500"
}
# request 3
{
  "origin": "83.28.15.130:48140"
}
Congratulations 🎉! Your scraper now uses ZenRows residential proxies, allowing you to bypass potential IP bans and geo-restrictions.
If you're still encountering issues with advanced anti-bot protections like CAPTCHAs or web application firewalls (WAFs) like Cloudflare or DataDome, you can bypass them using the ZenRows Scraper API.
Conclusion
You've explored various ways to set up proxies with SeleniumBase, including basic proxy configuration, authenticated proxies, and IP rotation. These techniques help ensure your scraping or testing processes are efficient and less likely to encounter IP blocks.
Most importantly, rotating high-quality residential IPs is crucial for long-term success. ZenRows offers a top-tier residential proxy solution with advanced features to bypass even the most sophisticated anti-bot protections.
Try ZenRows' residential proxies today and enhance your SeleniumBase scraping setup, avoiding potential bans and restrictions.