If you keep hammering a website with requests from the same IP address, they're going to notice and probably block you.
That's where proxies come in handy. They act as middlemen between your scraper and the websites you're targeting.
By routing your requests through different IPs, you can avoid blocks, access content that's locked to certain regions, and make your scraper look like it's just normal traffic from different users.
So, let's see how to set up proxies with Selenium in Java.
Quick Answer: Setting Up a Proxy in Selenium Java
Let's get straight to the point and set up Selenium WebDriver with a proxy in Java.
First, you'll need a proxy. Grab a free one from Free Proxy List if you're just testing things out. We'll use HTTPBin as our test site because it shows you the IP address making the request - perfect for checking if our proxy is working.
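If your project doesn't already include Selenium and WebDriverManager, add them to your build first. With Maven, the dependencies look roughly like this (the version numbers below are only examples; check for the latest releases):
<dependencies>
    <dependency>
        <groupId>org.seleniumhq.selenium</groupId>
        <artifactId>selenium-java</artifactId>
        <!-- example version; use the latest available -->
        <version>4.27.0</version>
    </dependency>
    <dependency>
        <groupId>io.github.bonigarcia</groupId>
        <artifactId>webdrivermanager</artifactId>
        <!-- example version; use the latest available -->
        <version>5.9.2</version>
    </dependency>
</dependencies>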
Create a Java file and set up WebDriverManager to download and manage the driver binaries for you. Then configure your proxy settings by creating a Proxy object that handles both HTTP and HTTPS traffic.
package com.example;

import org.openqa.selenium.By;
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import io.github.bonigarcia.wdm.WebDriverManager;

public class Scraper {
    public static void main(String[] args) throws InterruptedException {
        // download the required driver binaries
        WebDriverManager.chromedriver().setup();

        // set the proxy details
        String proxyAddress = "184.169.154.119";
        int proxyPort = 80;

        // create a Proxy object and set the HTTP and SSL proxies
        Proxy proxy = new Proxy();
        proxy.setHttpProxy(proxyAddress + ":" + proxyPort);
        proxy.setSslProxy(proxyAddress + ":" + proxyPort);

        // create a ChromeOptions instance and set the proxy options
        ChromeOptions options = new ChromeOptions();
        options.setProxy(proxy);

        // run in headless mode
        options.addArguments("--headless=new");

        // create the driver instance with the ChromeOptions
        WebDriver driver = new ChromeDriver(options);

        // navigate to the target URL
        driver.get("https://httpbin.io/ip");

        // wait for 5 seconds using Thread.sleep
        Thread.sleep(5000);

        // retrieve the text content and print it
        String ipAddress = driver.findElement(By.tagName("body")).getText();
        System.out.println(ipAddress);

        // close the browser
        driver.quit();
    }
}
You'll get the following output on running this code:
{
  "origin": "184.169.154.119:34369"
}
Congrats! You've successfully configured Selenium in your Java web scraper to route requests through a proxy server.
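One quick refinement: the fixed Thread.sleep(5000) is just a crude pause to let the page load. An explicit wait is more reliable; here's a minimal sketch using Selenium's WebDriverWait that you could drop in instead of the sleep and findElement lines (the 10-second timeout is an arbitrary choice):
import java.time.Duration;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// ...

// wait up to 10 seconds for the <body> element to appear, then read its text
WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
WebElement body = wait.until(ExpectedConditions.presenceOfElementLocated(By.tagName("body")));
System.out.println(body.getText());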
While this single-proxy setup works well for basic scenarios, let's explore a more advanced technique: rotating between multiple proxies to make your scraper even more resilient to blocking.
Implement a Rotating Proxy in Selenium Java
While using a proxy can help you avoid getting blocked, measures like rate limiting and IP banning can still stop your bot. To make it harder for websites to detect your scraping activity, you need to change your IP address dynamically between requests.
Building upon the previous implementation, introduce proxy rotation by creating a pool of proxies and randomly selecting one for each session.
The key change is the use of Java's Random class to pick a proxy from the ArrayList. This approach is particularly effective when you need to distribute requests across multiple IPs while maintaining a clean, single-session implementation.
package com.example;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.Proxy;
import org.openqa.selenium.chrome.ChromeOptions;
import io.github.bonigarcia.wdm.WebDriverManager;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class Scraper {
    public static void main(String[] args) throws InterruptedException {
        // download the required driver binaries
        WebDriverManager.chromedriver().setup();

        // define a list of proxy addresses and ports
        List<String> proxyList = new ArrayList<>();
        proxyList.add("190.61.88.147:8080");
        proxyList.add("13.231.166.96:80");
        proxyList.add("35.213.91.45:80");

        // randomly select a proxy
        Random rand = new Random();
        String proxyAddress = proxyList.get(rand.nextInt(proxyList.size()));

        // create a ChromeOptions instance and run in headless mode
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless=new");

        // create a Proxy object and set the HTTP and SSL proxies
        Proxy proxy = new Proxy();
        proxy.setHttpProxy(proxyAddress);
        proxy.setSslProxy(proxyAddress);

        // set the proxy options in ChromeOptions
        options.setProxy(proxy);

        // create the driver instance with the ChromeOptions
        WebDriver driver = new ChromeDriver(options);

        // navigate to the target URL
        driver.get("https://httpbin.io/ip");
        Thread.sleep(5000);

        // retrieve the text content and print it
        String ipAddress = driver.findElement(By.tagName("body")).getText();
        System.out.println(ipAddress);

        // close the browser
        driver.quit();
    }
}
Running this script produces outputs from different IPs. Here's the result from three runs:
// request 1
{
  "origin": "190.61.88.147"
}

// request 2
{
  "origin": "35.213.91.45"
}

// request 3
{
  "origin": "13.231.166.96"
}
Bingo! You've successfully increased your anonymity by routing each run through a different IP.
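If you want a fresh IP for every request rather than every run, one option is to start a new browser session, with a newly picked proxy, for each URL you visit. Here's a minimal sketch that slots into the same main method, replacing everything after the proxy list and Random setup:
// visit the endpoint three times, each time through a randomly picked proxy
for (int i = 0; i < 3; i++) {
    String proxyAddress = proxyList.get(rand.nextInt(proxyList.size()));

    // configure the proxy for this session
    Proxy proxy = new Proxy();
    proxy.setHttpProxy(proxyAddress);
    proxy.setSslProxy(proxyAddress);

    ChromeOptions options = new ChromeOptions();
    options.addArguments("--headless=new");
    options.setProxy(proxy);

    // a fresh driver instance means a fresh exit IP
    WebDriver driver = new ChromeDriver(options);
    driver.get("https://httpbin.io/ip");
    Thread.sleep(5000);
    System.out.println(driver.findElement(By.tagName("body")).getText());
    driver.quit();
}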
However, while free proxies are fine for learning, they're a bad choice in real-world scenarios: they're prone to failure, pose security risks, have a low IP reputation, and are easily spotted by anti-bot systems.
Let's explore an alternative in the next section to help you scrape without getting blocked.
Avoid Getting Blocked While Scraping With Java
The best proxies to avoid getting blocked with Java are residential proxies. Check out our in-depth guide on the best web scraping proxies that integrate seamlessly with your headless browser.
Beyond that, remember that even with high-level customizations and the best proxies, modern websites can still flag you easily. They implement detection measures like browser fingerprinting, request pattern monitoring, user behavior analysis, and TLS fingerprinting.
A web scraping API like ZenRows' Universal Scraper API is the only surefire way to scrape without getting blocked at any scale.
ZenRows' Universal Scraper API is a better alternative to Selenium: an all-in-one toolkit that handles anti-bot measures automatically with auto-rotating residential proxies, anti-CAPTCHA technology, optimized request headers, and everything else you need for reliable web scraping.
Like Selenium, it lets you mimic user behavior and render JavaScript, but without the extra infrastructure and maintenance, and without getting constantly blocked by systems that use machine learning to identify your scraper.
Let's see how ZenRows works by using it to scrape this Antibot Challenge page, a webpage highly protected by anti-bot measures.
Sign up on ZenRows to open the Request Builder. Paste your target URL in the link box and activate Premium Proxies and JS Rendering.

Select Java as your programming language and choose the API connection mode.
That'll generate your scraper's code. Copy it to your IDE:
import org.apache.hc.client5.http.fluent.Request;

public class APIRequest {
    public static void main(final String... args) throws Exception {
        String apiUrl = "https://api.zenrows.com/v1/?apikey=<YOUR_ZENROWS_API_KEY>&url=https%3A%2F%2Fwww.scrapingcourse.com%2Fantibot-challenge&js_render=true&premium_proxy=true";
        String response = Request.get(apiUrl)
                .execute().returnContent().asString();
        System.out.println(response);
    }
}
To send HTTP requests, you can use the Apache HttpClient library (or any other library). For that, add the dependency to your pom.xml file:
<dependency>
    <groupId>org.apache.httpcomponents.client5</groupId>
    <artifactId>httpclient5-fluent</artifactId>
    <version>5.4.1</version>
</dependency>
Run the code, and you'll get the following output:
<html lang="en">
<head>
    <!-- ... -->
    <title>Antibot Challenge - ScrapingCourse.com</title>
    <!-- ... -->
</head>
<body>
    <!-- ... -->
    <h2>
        You bypassed the Antibot challenge! :D
    </h2>
    <!-- other content omitted for brevity -->
</body>
</html>
Awesome! You successfully accessed a heavily protected web page using ZenRows, leveraging premium proxies and advanced anti-bot bypass mechanisms.
Conclusion
In this guide, you learned how to enhance your Selenium Java scraper with proxy capabilities. We covered setting up a basic proxy configuration and implementing proxy rotation with random selection.
However, while implementing a proxy is straightforward, maintaining reliable scraping operations at scale can become challenging. Anti-bot systems are constantly evolving, with new security measures that can quickly render basic proxy setups ineffective.
To handle web scraping challenges efficiently at any scale, consider using ZenRows, an all-in-one scraping toolkit that automatically handles proxy rotation, browser fingerprinting, and complete anti-bot bypass. Try ZenRows for free!