How to Use a Proxy With OkHttp [Tutorial 2024]

June 25, 2024 · 8 min read

OkHttp is a popular open-source HTTP client for Java that's widely used for web scraping. But since websites block IP addresses that send too many requests, you need to use a proxy with OkHttp to fly under the radar and access the data you want.

Proxies act as intermediaries between you and the target server, offering additional benefits such as access to geo-restricted content and the ability to distribute traffic across multiple servers.

In this article, you'll learn how to configure OkHttp to use a proxy in Java to scrape undetected. Let's go!

How to Set Your Proxy With OkHttp in Java

OkHttp uses Java's built-in Proxy class (java.net.Proxy) to represent a proxy server. Its constructor takes two arguments:

  • Proxy Type: It specifies the type of the proxy server. Proxy.Type.HTTP is used for HTTP and HTTPS proxies, and Proxy.Type.SOCKS for SOCKS proxies.
  • Socket Address: It represents the proxy server's network endpoint, typically specified by its host or IP address and port number.
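
For example, both proxy types are constructed the same way; only the type constant changes. The host and port below are placeholders:

scraper.java
// an HTTP(S) proxy
Proxy httpProxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("<PROXY_HOST>", 8080));
// a SOCKS proxy uses the same constructor with a different type
Proxy socksProxy = new Proxy(Proxy.Type.SOCKS, new InetSocketAddress("<PROXY_HOST>", 1080));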

To set your proxy with OkHttp, you need to create a proxy object and pass in the necessary details using the Proxy class. Then, use this proxy object to configure your OkHttpClient instance and route HTTP requests through the proxy server.

Follow the steps below to learn how to do it.

Step 1: Add a Proxy in OkHttp
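
Before you start, make sure OkHttp is on your classpath. For example, with Maven (okhttp 4.12.0 is a recent release at the time of writing; adjust the version to whatever your project needs):

pom.xml
<dependency>
    <groupId>com.squareup.okhttp3</groupId>
    <artifactId>okhttp</artifactId>
    <version>4.12.0</version>
</dependency>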

Let's set up a basic OkHttp script to which you'll add proxy configuration. This script sends a GET request to HttpBin, a web service that returns the requesting client's IP address.

scraper.java
package com.example;
 
// import the required classes
import java.io.IOException;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
 
public class Main {
    // create a new OkHttpClient instance
    final OkHttpClient client = new OkHttpClient();
 
    String run(String url) throws IOException {
        // create a request with the provided URL
        Request request = new Request.Builder()
            .url(url)
            .build();
        // execute the request and obtain the response
        try (Response response = client.newCall(request).execute()) {
            // return the response body as a string
            return response.body().string();
        }
    }
 
    public static void main(String[] args) throws IOException {
        // create an instance of the Main class
        Main example = new Main();
        // make a GET request to the specified URL and print the response
        String response = example.run("https://httpbin.io/ip");
        System.out.println(response);
    }
}

Now, let's add a proxy configuration.

Start by defining your proxy details. Then, create a proxy object and pass in the necessary details using the Proxy class.

You can grab a free proxy from Free Proxy List. Use HTTPS proxies since they work with both HTTP and HTTPS websites.

scraper.java
public class Main {

    String run(String url) throws IOException {
        // define your proxy details
        String proxyHost = "140.238.247.9";
        int proxyPort = 8100;

        // create a proxy object and pass in the necessary details
        Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyHost, proxyPort));

        // ... (the rest of the method comes in the next step)
    }
}

Then, create an OkHttpClient.Builder() instance and configure it to use the proxy. The OkHttpClient.Builder() instance initializes a builder object that lets you configure proxy options for the OkHttpClient instance.

scraper.java
public class Main {
    String run(String url) throws IOException {
        //...

        // create an OkHttpClient builder instance and configure it to use the proxy
        OkHttpClient client = new OkHttpClient.Builder()
            .proxy(proxy)
            .build();

        // ... (request logic comes next)
    }
}

That's it! You've configured an OkHttp proxy. Now, you can make your request just like in the basic script, and it'll be routed through the proxy server.

Combine everything, and your complete code should look like this:

scraper.java
package com.example;
 
// import the required classes
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
 
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Proxy;
 
public class Main {
    String run(String url) throws IOException {
        // define your proxy details
        String proxyHost = "140.238.247.9";
        int proxyPort = 8100;
 
        // create a proxy object and pass in the necessary details
        Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyHost, proxyPort));
 
        // create an OkHttpClient builder instance and configure it to use the proxy
        OkHttpClient client = new OkHttpClient.Builder()
            .proxy(proxy) 
            .build();
 
        // create a request with the provided URL
        Request request = new Request.Builder()
            .url(url)
            .build();
        // execute the request and obtain the response
        try (Response response = client.newCall(request).execute()) {
            // return the response body as a string
            return response.body().string();
        }
    }
 
    public static void main(String[] args) throws IOException {
        // create an instance of the Main class
        Main example = new Main();
        // make a GET request to the specified URL and print the response
        String response = example.run("https://httpbin.io/ip");
        System.out.println(response);
    }
}

Run it, and your result will be your proxy's IP address.

Output
{
  "origin": "140.238.247.9:63509"
}

Free proxies are viable only for testing purposes. They're unreliable and easy for websites to detect. In real-world use cases, you'll need premium web scraping proxies. These proxies often require additional configuration depending on the authentication type.
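
Free proxies also tend to be slow or dead. When testing, it helps to set connection and read timeouts on the client so a bad proxy fails fast instead of hanging. Here's a minimal, optional tweak (the timeout values are arbitrary):

scraper.java
import java.util.concurrent.TimeUnit;

// ...
OkHttpClient client = new OkHttpClient.Builder()
    .proxy(proxy)
    .connectTimeout(10, TimeUnit.SECONDS) // give up quickly on an unreachable proxy
    .readTimeout(15, TimeUnit.SECONDS) // don't wait forever for a slow response
    .build();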

Let's see how to implement basic authentication for an OkHttp proxy.

Step 2: Authenticate Your Proxy With Username and Password

Proxy authentication is necessary when the proxy server requires additional information, such as username and password, to grant you access. This is common with premium proxies.

To authenticate an OkHttp proxy, implement the Authenticator interface provided by OkHttp and register it with the builder's proxyAuthenticator() method, which responds to the proxy's HTTP 407 challenge. The interface defines an authenticate() method that takes a route and a response as parameters and returns a Request object with the appropriate authentication credentials.

Follow the steps below to configure the Authenticator interface.

In your OkHttpClient.Builder() chain, create a new instance of the Authenticator interface using an anonymous class (new Authenticator() { ... }) and pass it to proxyAuthenticator().

scraper.java
    String run(String url) throws IOException {
        //...

        OkHttpClient client = new OkHttpClient.Builder()
            .proxy(proxy)
            // create an instance of the Authenticator interface
            .proxyAuthenticator(new Authenticator() {

            })
            .build();
    }

Next, implement the authenticate() method. OkHttp passes the Route and Response objects to this method as parameters:

scraper.java
@Override
// OkHttp supplies the route and response objects to the authenticate method
public Request authenticate(Route route, Response response) throws IOException {

}

The Route typically includes information about the proxy server and the destination server the request is made to. The Response represents the proxy server's reply (an HTTP 407), indicating that authentication is required.

Lastly, provide the authentication credentials using Credentials.basic() and return a Request object with the updated headers containing the authentication details.

The Credentials.basic() method generates a basic authentication string and encodes it in the proper HTTP header format.

scraper.java
// provide the credentials
String credential = Credentials.basic("<YOUR_USERNAME>", "<YOUR_PASSWORD>");
return response.request().newBuilder()
        .header("Proxy-Authorization", credential)
        .build();

Combine everything, and you'll end up with the following Authenticator implementation in your OkHttpClient.Builder() chain.

scraper.java
// create an OkHttpClient builder instance and configure it to use the proxy
OkHttpClient client = new OkHttpClient.Builder()
    .proxy(proxy)
    .proxyAuthenticator(new Authenticator() {
        @Override
        public Request authenticate(Route route, Response response) throws IOException {
            // if the proxy requires authentication, provide the credentials
            String credential = Credentials.basic("<YOUR_USERNAME>", "<YOUR_PASSWORD>");
            return response.request().newBuilder()
                .header("Proxy-Authorization", credential)
                .build();
        }
    })
    .build();
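
If the credentials are rejected, OkHttp will keep invoking the authenticator on every failed attempt. A common guard, shown here as an optional sketch that's not included in the complete script below, is to give up once credentials have already been sent:

scraper.java
@Override
public Request authenticate(Route route, Response response) throws IOException {
    // stop retrying if we already sent credentials and they were rejected
    if (response.request().header("Proxy-Authorization") != null) {
        return null;
    }
    String credential = Credentials.basic("<YOUR_USERNAME>", "<YOUR_PASSWORD>");
    return response.request().newBuilder()
        .header("Proxy-Authorization", credential)
        .build();
}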

Your new complete code should look like this:

scraper.java
package com.example;
 
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Proxy;
 
import okhttp3.*;
 
public class Main {
 
    String run(String url) throws IOException {
        // define your proxy details
        String proxyHost = "140.238.247.9";
        int proxyPort = 8100;
 
        // create a proxy object and pass in the necessary details
        Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyHost, proxyPort));
 
        // create an OkHttpClient builder instance and configure it to use the proxy
        OkHttpClient client = new OkHttpClient.Builder()
            .proxy(proxy)
            .proxyAuthenticator(new Authenticator() {
                @Override
                public Request authenticate(Route route, Response response) throws IOException {
                    // provide the credentials
                    String credential = Credentials.basic("<YOUR_USERNAME>", "<YOUR_PASSWORD>");
                    return response.request().newBuilder()
                            .header("Proxy-Authorization", credential)
                            .build();
                }
            }) 
            .build();
 
        // create a request with the provided URL
        Request request = new Request.Builder()
            .url(url)
            .build();
        // execute the request and obtain the response
        try (Response response = client.newCall(request).execute()) {
            // return the response body as a string
            return response.body().string();
        }
    }
 
    public static void main(String[] args) throws IOException {
        // create an instance of the Main class
        Main example = new Main();
        // make a GET request to the specified URL and print the response
        String response = example.run("https://httpbin.io/ip");
        System.out.println(response);
    }
}

Step 3: Rotate Proxies With OkHttp

When scraping multiple pages, you should rotate through multiple proxies to avoid IP bans and rate limiting.

Most websites block IP addresses that make too many requests in short intervals. Rotating your proxies allows you to distribute traffic across multiple servers, making your requests seem to originate from different users.

To rotate OkHttp proxies, you need to maintain a proxy pool and select a different proxy for each request. Let's learn how to do it.

Import the necessary Java classes (List, ArrayList, and Random) and create your proxy pool. The pool references a ProxyInfo helper class that you'll define in the next step. Again, you can grab a few proxies from Free Proxy List.

scraper.java
package com.example;
 
// import the necessary classes
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
 
import okhttp3.*;
 
public class Main {
    // define a proxy pool
    private static final List<ProxyInfo> proxyList = new ArrayList<>();
 
    static {
        proxyList.add(new ProxyInfo("140.238.247.9", 8100));
        proxyList.add(new ProxyInfo("213.188.211.61", 3128));
        proxyList.add(new ProxyInfo("67.43.227.229", 20195));
    }
 
}

Then, create a static nested class, like in the code snippet below. This allows you to better organize and manage the proxy details.

scraper.java
static class ProxyInfo {
    String host;
    int port;

    ProxyInfo(String host, int port) {
        this.host = host;
        this.port = port;
    }
}

Next, randomly select a proxy from the list and use its details to create a proxy object.

scraper.java
//...
String run(String url) throws IOException {
    // randomly select a proxy from the list
    Random random = new Random();
    int index = random.nextInt(proxyList.size());
    ProxyInfo proxyInfo = proxyList.get(index);

    // create a proxy object with the selected proxy details
    Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyInfo.host, proxyInfo.port));

    // ... (client setup and request logic come next)
}

Lastly, build your OkHttpClient instance using the new proxy object and make the same request as in the previous code example.

You'll end up with the following complete code:

scraper.java
package com.example;
 
// import the necessary classes
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
 
import okhttp3.*;
 
public class Main {
    // define a proxy pool
    private static final List<ProxyInfo> proxyList = new ArrayList<>();
 
    static {
        proxyList.add(new ProxyInfo("140.238.247.9", 8100));
        proxyList.add(new ProxyInfo("213.188.211.61", 3128));
        proxyList.add(new ProxyInfo("67.43.227.229", 20195));
    }
 
    // create static proxyInfo class
    static class ProxyInfo {
        String host;
        int port;
 
        ProxyInfo(String host, int port) {
            this.host = host;
            this.port = port;
        }
    }
 
    String run(String url) throws IOException {
        // randomly select a proxy from the list
        Random random = new Random();
        int index = random.nextInt(proxyList.size());
        ProxyInfo proxyInfo = proxyList.get(index);
 
        // create a proxy object with the selected proxy details
        Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyInfo.host, proxyInfo.port));
        
        // create an OkHttpClient builder instance and configure it to use the proxy
        OkHttpClient client = new OkHttpClient.Builder()
            .proxy(proxy)
            .build();
 
        // create a request with the provided URL
        Request request = new Request.Builder()
            .url(url)
            .build();
        // execute the request and obtain the response
        try (Response response = client.newCall(request).execute()) {
            // return the response body as a string
            return response.body().string();
        }
    }
 
    public static void main(String[] args) throws IOException {
        // create an instance of the Main class
        Main example = new Main();
        // make a GET request to the specified URL and print the response
        String response = example.run("https://httpbin.io/ip");
        System.out.println(response);
    }
}
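
Note that random selection can pick the same proxy several times in a row. If you prefer a strictly even distribution, a round-robin counter is a simple alternative (an optional variant, not used in the script above):

scraper.java
import java.util.concurrent.atomic.AtomicInteger;

// ...
private static final AtomicInteger counter = new AtomicInteger(0);

// inside run(): pick proxies in strict rotation instead of at random
ProxyInfo proxyInfo = proxyList.get(Math.floorMod(counter.getAndIncrement(), proxyList.size()));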

To verify that it works, make multiple requests. You should get a different IP address each time.
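
For a quick check, you can temporarily loop in main() (a minimal sketch):

scraper.java
public static void main(String[] args) throws IOException {
    Main example = new Main();
    // each call to run() picks a new random proxy from the pool
    for (int i = 0; i < 3; i++) {
        System.out.println(example.run("https://httpbin.io/ip"));
    }
}

Here are the results for two requests: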

Output
{
  "origin": "140.238.247.9:60163"
}
 
{
  "origin": "213.188.211.61:51103"
}

Well done!

Get the Best Premium Proxies to Scrape

Rotating proxies can help mitigate IP-based blocking, but relying exclusively on them won’t guarantee ban-free scraping.

See for yourself. Try scraping the Amazon product page below using the previous OkHttp proxy script.

Amazon Product Page
scraper.java
package com.example;
 
// import the necessary classes
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
 
import okhttp3.*;
 
public class Main {
    // define a proxy pool
    private static final List<ProxyInfo> proxyList = new ArrayList<>();
 
    static {
        proxyList.add(new ProxyInfo("140.238.247.9", 8100));
        proxyList.add(new ProxyInfo("213.188.211.61", 3128));
        proxyList.add(new ProxyInfo("67.43.227.229", 20195));
    }
 
    // create static proxyInfo class
    static class ProxyInfo {
        String host;
        int port;
 
        ProxyInfo(String host, int port) {
            this.host = host;
            this.port = port;
        }
    }
 
    String run(String url) throws IOException {
        // randomly select a proxy from the list
        Random random = new Random();
        int index = random.nextInt(proxyList.size());
        ProxyInfo proxyInfo = proxyList.get(index);
 
        // create a proxy object with the selected proxy details
        Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyInfo.host, proxyInfo.port));
        
        // create an OkHttpClient builder instance and configure it to use the proxy
        OkHttpClient client = new OkHttpClient.Builder()
            .proxy(proxy)
            .build();
 
        // create a request with the provided URL
        Request request = new Request.Builder()
            .url(url)
            .build();
        // execute the request and obtain the response
        try (Response response = client.newCall(request).execute()) {
            // return the response body as a string
            return response.body().string();
        }
    }
 
    public static void main(String[] args) throws IOException {
        // create an instance of the Main class
        Main example = new Main();
        // make a GET request to the specified URL and print the response
        String response = example.run("https://www.amazon.com/Lumineux-Teeth-Whitening-Strips-Treatments-Enamel-Safe/dp/B082TPDTM2/?th=1");
        System.out.println(response);
    }
}

You'll get the following result:

Output
<!DOCTYPE html>
<body>
    <h4>Enter the characters you see below</h4>
    <p class="a-last">
        Sorry, we just need to make sure you're not a robot. For best results, please make sure your browser is accepting cookies.
    </p>
    <!--
    -->
</body>

The script above encountered an anti-bot challenge asking you to prove you're not a robot. This outcome confirms that no proxy is foolproof: even with premium proxies, you may still face blocks, especially from advanced anti-bot systems.

A web scraping API, such as ZenRows, can solve all these problems. ZenRows provides a full web-scraping toolkit, including auto-rotating premium proxies, optimized headers, CAPTCHA bypass, and more.

Moreover, ZenRows automatically bypasses anti-bot systems, minimizing the manual work you have to do. To access your desired data, you only need to make a single request to the ZenRows API.

Let's see how ZenRows deals with the same webpage we tried to scrape earlier.

To get started, sign up for free and go to the Request Builder page.

Paste your target URL, select the JavaScript Rendering mode, and check the box for Premium Proxies to rotate proxies automatically. Select Java as the language, and it'll generate your request code on the right.

Building a scraper with ZenRows

The generated code uses the Apache Fluent API, but you can achieve the same result with OkHttp. You only need to make a request to the ZenRows API URL.
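
Optionally, rather than hand-encoding the target URL into the query string, you can let OkHttp's HttpUrl builder handle the escaping. Here's a small sketch whose parameters mirror the generated request:

scraper.java
import okhttp3.HttpUrl;

// build the ZenRows API URL with properly encoded query parameters
HttpUrl apiUrl = HttpUrl.get("https://api.zenrows.com/v1/").newBuilder()
    .addQueryParameter("apikey", "<YOUR_ZENROWS_API_KEY>")
    .addQueryParameter("url", "https://www.amazon.com/Lumineux-Teeth-Whitening-Strips-Treatments-Enamel-Safe/dp/B082TPDTM2/?th=1")
    .addQueryParameter("js_render", "true")
    .addQueryParameter("premium_proxy", "true")
    .build();
// pass the built URL straight to the request builder: .url(apiUrl)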

Your new script should look like this:

scraper.java
package com.example;
 
// import the required classes
import java.io.IOException;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
 
public class Main {
    // create a new OkHttpClient instance
    final OkHttpClient client = new OkHttpClient();
 
    String run(String url) throws IOException {
        // create a request with the provided URL
        Request request = new Request.Builder()
            .url(url)
            .build();
        // execute the request and obtain the response
        try (Response response = client.newCall(request).execute()) {
            // return the response body as a string
            return response.body().string();
        }
    }
 
    public static void main(String[] args) throws IOException {
        // create an instance of the Main class
        Main example = new Main();
        // make a GET request to the specified URL and print the response
        String response = example.run("https://api.zenrows.com/v1/?apikey=<YOUR_ZENROWS_API_KEY>&url=https%3A%2F%2Fwww.amazon.com%2FLumineux-Teeth-Whitening-Strips-Treatments-Enamel-Safe%2Fdp%2FB082TPDTM2%2F%3Fth%3D1&js_render=true&premium_proxy=true");
        System.out.println(response);
    }
}

Run it, and you'll get the page's HTML content.

Output
<!DOCTYPE html>
<title>
    Amazon.com: Lumineux Teeth Whitening Strips 21 Treatments...
</title>
//...

Awesome, right? That's how easy it is to scrape with ZenRows.

Conclusion

Setting an OkHttp proxy in Java can increase your anonymity and help you avoid IP bans. However, it's important to remember that proxies don't work for every use case. Even premium proxies can be blocked by advanced anti-bot systems.

With a web scraping API like ZenRows, you can scrape any website without getting blocked. Apart from premium proxies that will save you the hassle of manually maintaining a proxy pool, ZenRows also offers a headless browser, User Agent rotator, CAPTCHA bypass, and other solutions needed to scrape at scale.

Ready to get started? Sign up for ZenRows and get up to 1,000 URLs for free.