How to Use a Proxy With Chromedp in 2024

May 24, 2024 · 9 min read

Do you want to add a proxy to your Chromedp scraper and avoid an IP ban while scraping with Golang? You're in the right place!

In this article, you'll learn how to set up proxies in Chromedp. You'll start with a single proxy and then rotate the proxies from a list.

How to Set Your Proxy With Chromedp

Chromedp allows you to make a request with a proxy server. The following sections explain how to do this.


Step 1: Add a Proxy in Chromedp

Adding a single proxy is the most basic way to configure proxies while scraping with Chromedp. To start, grab a free proxy from the Free Proxy List. 

Next, you'll route your Chromedp scraper through the proxy and send a request to https://httpbin.org/ip to view your current IP address.
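If you haven't used Chromedp before, initialize a Go module and install the package first (a quick setup, assuming a fresh project directory):

Terminal
go mod init scraper
go get -u github.com/chromedp/chromedp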

Let's see how to add the chosen proxy in your code. First, import the required packages and declare your proxy address:

scraper.go
package main

import (
	"context"
	"fmt"

	"github.com/chromedp/chromedp"
)

func main() {
	// define the proxy server address
	proxyAddress := "http://35.154.21.111:3128"
}

Next, build the allocator options, adding the proxy server and enabling headless mode on top of Chromedp's defaults:

scraper.go
func main() {

	// ...

	// set up options with the proxy server and headless mode enabled
	opts := append(chromedp.DefaultExecAllocatorOptions[:],
		chromedp.ProxyServer(proxyAddress),
		chromedp.Flag("headless", true),
	)
}

Next, create an exec allocator with those options and start a new browser context from it:

scraper.go
func main() {

	// ...

	// allocate the browser context with the allocation options
	allocCtx, cancel := chromedp.NewExecAllocator(context.Background(), opts...)
	defer cancel()

	// create a new browser context from the allocator
	ctx, cancel := chromedp.NewContext(allocCtx)
	defer cancel()
}

Finally, open the target web page and print its page source to view the current IP address:

scraper.go
func main() {
	
	// ...
	
	// navigate to a website and extract information
	var ip string
	err := chromedp.Run(ctx,
		chromedp.Navigate("https://httpbin.org/ip"),
		chromedp.WaitVisible("body"),
		chromedp.Text("body", &ip),
	)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}

	// print the extracted IP address
	fmt.Println("IP Address:", ip)
}

Put everything together, and your final code should look like this:

scraper.go
package main

import (
	"context"
	"fmt"

	"github.com/chromedp/chromedp"
)

func main() {
	// define the proxy server address
	proxyAddress := "http://35.154.21.111:3128"

	// set up options with the proxy server and headless mode enabled
	opts := append(chromedp.DefaultExecAllocatorOptions[:],
		chromedp.ProxyServer(proxyAddress),
		chromedp.Flag("headless", true),
	)

	// allocate the browser context with the allocation options
	allocCtx, cancel := chromedp.NewExecAllocator(context.Background(), opts...)
	defer cancel()

	// create a new browser context from the allocator
	ctx, cancel := chromedp.NewContext(allocCtx)
	defer cancel()

	// navigate to a website and extract information
	var ip string
	err := chromedp.Run(ctx,
		chromedp.Navigate("https://httpbin.org/ip"),
		chromedp.WaitVisible("body"),
		chromedp.Text("body", &ip),
	)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}

	// print the extracted IP address
	fmt.Println("IP Address:", ip)
}

The code adds the specified proxy and outputs its IP address, as shown:

Output
IP Address: {
  "origin": "35.154.21.111"
}

That works! You just configured your Chromedp scraper to use a specific proxy server. Let's scale this up by rotating the proxies from a list.

Step 2: Rotate Proxies with Chromedp

Sticking to a single proxy is usually unreliable and will eventually get you blocked. You'll need to use different IPs to reduce the chances of anti-bot detection. The best way to do this is to rotate your proxies.

You'll see how to achieve that by rotating three proxies. Feel free to grab yours from the Free Proxy List. Next, you'll request https://httpbin.org/ip to confirm the IP used for each request.

First, import the required libraries into your code and list the proxies, as shown:

scraper.go
package main

// import the required libraries
import (
	"context"
	"fmt"
	"time"

	"github.com/chromedp/chromedp"
)

func main() {
	// define a list of proxy server addresses to rotate
	proxyAddresses := []string{
		"http://103.160.150.251:8080",
		"http://38.65.174.129:80",
		"http://46.105.50.251:3128",

		// add more proxy server addresses as needed
	}
  
}

Next, create a timeout context and execute the scraping logic inside a for loop that iterates through the proxy list. Then, add each proxy to the allocator option slice:

scraper.go
func main() {
	
	//...

	// create a new context with a timeout
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// iterate over each proxy address and perform scraping
	for _, proxyAddress := range proxyAddresses {
		
		// set up options with the current proxy address
		opts := append(chromedp.DefaultExecAllocatorOptions[:],
			chromedp.Flag("headless", true),
			chromedp.ProxyServer(proxyAddress),
		)
	}
    
}

Inside the for loop, set up a new allocator with the proxy options and use it to create a new browser context:

scraper.go
func main() {
	
	// ...
	
	// iterate over each proxy address and perform scraping
	for _, proxyAddress := range proxyAddresses {
		
		// ...
		
		// create an allocator for the current proxy option
		allocCtx, cancel := chromedp.NewExecAllocator(ctx, opts...)
		defer cancel()

		// create a new context with the allocated browser context
		ctx, cancel := chromedp.NewContext(allocCtx)
		defer cancel()
	}
    
}

Open the target web page with the browser context and print the body text to view the IP used in each iteration. Finally, add an error check that skips to the next proxy if the current one fails:

scraper.go
func main() {
	
	// ...
  
	// iterate over each proxy address and perform scraping
	for _, proxyAddress := range proxyAddresses {
		
		// ...
    
		// navigate to a website and extract information
		var ip string
		err := chromedp.Run(ctx,
			chromedp.Navigate("https://httpbin.org/ip"),
			chromedp.WaitVisible("body"),
			chromedp.Text("body", &ip),
		)
		if err != nil {
			fmt.Println("Error:", err)
			
			// move to the next proxy in case of error
			continue
		}

		// print the extracted IP address
		fmt.Println("IP Address:", ip)
	}
}

Here's the final code:

scraper.go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/chromedp/chromedp"
)

func main() {
	// define a list of proxy server addresses to rotate
	proxyAddresses := []string{
		"http://103.160.150.251:8080",
		"http://38.65.174.129:80",
		"http://46.105.50.251:3128",

		// add more proxy server addresses as needed
	}

	// create a new context with a timeout
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// iterate over each proxy address and perform scraping
	for _, proxyAddress := range proxyAddresses {
		
		// set up options with the current proxy address
		opts := append(chromedp.DefaultExecAllocatorOptions[:],
			chromedp.Flag("headless", true),
			chromedp.ProxyServer(proxyAddress),
		)

		// create an allocator for the current proxy option
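		// (the deferred cancels in this loop only run when main returns,
		// which is acceptable for a short proxy list)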
		allocCtx, cancel := chromedp.NewExecAllocator(ctx, opts...)
		defer cancel()

		// create a new context with the allocated browser context
		ctx, cancel := chromedp.NewContext(allocCtx)
		defer cancel()

		// navigate to a website and extract information
		var ip string
		err := chromedp.Run(ctx,
			chromedp.Navigate("https://httpbin.org/ip"),
			chromedp.WaitVisible("body"),
			chromedp.Text("body", &ip),
		)
		if err != nil {
			fmt.Println("Error:", err)

			// move to the next proxy in case of error
			continue
		}

		// print the extracted IP address
		fmt.Println("IP Address:", ip)
	}
}

The code executes the scraping request inside a for loop and rotates the proxies, as expected:

Output
IP Address: {
  "origin": "103.160.150.251"
}

IP Address: {
  "origin": "38.65.174.129"
}

IP Address: {
  "origin": "46.105.50.251"
}

You just configured your Golang Chromedp web scraper to rotate proxies. Congratulations! 
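As a side note, once your proxy pool grows, you don't have to visit every proxy in order. Here's a minimal sketch of picking one at random per run, reusing the same proxy list (on Go 1.20+, math/rand needs no manual seeding):

scraper.go
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	// the same proxy list used in the rotation example
	proxyAddresses := []string{
		"http://103.160.150.251:8080",
		"http://38.65.174.129:80",
		"http://46.105.50.251:3128",
	}

	// pick one proxy at random per run instead of looping over all of them
	proxyAddress := proxyAddresses[rand.Intn(len(proxyAddresses))]
	fmt.Println("Using proxy:", proxyAddress)

	// pass proxyAddress to chromedp.ProxyServer() exactly as before
}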

However, free proxies are unreliable and only good for testing; they'll likely get blocked or stop working in actual projects. Real web scraping projects call for premium proxies, which usually require authentication credentials.
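One thing to note: Chrome's proxy flag doesn't accept credentials embedded in the URL, so authenticated proxies need extra handling. The sketch below adapts the pattern from the chromedp examples repository, answering the proxy's authentication challenge through the Fetch domain. The proxy address and credentials are placeholders:

scraper.go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/chromedp/cdproto/cdp"
	"github.com/chromedp/cdproto/fetch"
	"github.com/chromedp/chromedp"
)

func main() {
	// hypothetical authenticated proxy; replace with your provider's details
	opts := append(chromedp.DefaultExecAllocatorOptions[:],
		chromedp.ProxyServer("http://proxy.example.com:8080"),
		chromedp.Flag("headless", true),
	)
	allocCtx, cancel := chromedp.NewExecAllocator(context.Background(), opts...)
	defer cancel()
	ctx, cancel := chromedp.NewContext(allocCtx)
	defer cancel()

	// respond to the proxy's authentication challenge via the Fetch domain
	chromedp.ListenTarget(ctx, func(ev interface{}) {
		switch ev := ev.(type) {
		case *fetch.EventAuthRequired:
			// supply the proxy credentials when Chrome asks for them
			execCtx := cdp.WithExecutor(ctx, chromedp.FromContext(ctx).Target)
			go func() {
				_ = fetch.ContinueWithAuth(ev.RequestID, &fetch.AuthChallengeResponse{
					Response: fetch.AuthChallengeResponseResponseProvideCredentials,
					Username: "<YOUR_PROXY_USERNAME>",
					Password: "<YOUR_PROXY_PASSWORD>",
				}).Do(execCtx)
			}()
		case *fetch.EventRequestPaused:
			// enabling the Fetch domain pauses requests, so resume each one
			execCtx := cdp.WithExecutor(ctx, chromedp.FromContext(ctx).Target)
			go func() {
				_ = fetch.ContinueRequest(ev.RequestID).Do(execCtx)
			}()
		}
	})

	var ip string
	if err := chromedp.Run(ctx,
		fetch.Enable().WithHandleAuthRequests(true), // intercept auth challenges
		chromedp.Navigate("https://httpbin.org/ip"),
		chromedp.Text("body", &ip),
	); err != nil {
		log.Fatalln(err)
	}
	fmt.Println("IP Address:", ip)
}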

Regardless, proxies may not work against advanced anti-bot systems. How can you deal with that?

When Proxies Are Not Enough to Scrape

Adding proxies may not prevent your Chromedp scraper from being blocked by sophisticated anti-bot mechanisms, especially on secure pages like G2.

You can try it out with the following code: 

scraper.go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/chromedp/chromedp"
)

func main() {
	// set up options with the proxy server (headless mode disabled so you can watch the run)
	opts := append(chromedp.DefaultExecAllocatorOptions[:],
		chromedp.ProxyServer("http://35.154.21.111:3128"),
		chromedp.Flag("headless", false),
	)
	// allocate the browser context with the allocation options
	allocCtx, cancel := chromedp.NewExecAllocator(context.Background(), opts...)
	defer cancel()
	// create a new context with the allocated browser context
	ctx, cancel := chromedp.NewContext(allocCtx)
	defer cancel()

	// navigate to a website and extract information
	var htmlContent string
	if err := chromedp.Run(ctx,
		chromedp.Navigate("https://www.g2.com/products/asana/reviews"),
		chromedp.WaitVisible("html"),
		chromedp.InnerHTML("html", &htmlContent),
	); err != nil {
		log.Println("Error:", err)
	} else {
		// print the website's html content
		fmt.Println("HTML content:", htmlContent)
	}
}

The website blocks Chromedp with Cloudflare Turnstile, as shown:

Output
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
  
    <!--  ...    -->
  
    <title>Attention Required! | Cloudflare</title>
</head>
<body>
    <!-- ... -->
</body>
</html>

Here's a screenshot of the blocking CAPTCHA:

G2 Blocked Request

This means you need more than just proxies to beat advanced anti-bot mechanisms. That's where web scraping APIs like ZenRows come in. ZenRows fixes the request headers, rotates premium proxies, and bypasses CAPTCHAs and other anti-bot systems, allowing you to scrape without getting blocked.

Now, let's use ZenRows with Golang's HTTP client to access the protected website that blocked you earlier. Sign up to open the Request Builder and get your API key with free credits.

Once signed in, paste the target URL in the link box, set the Boost mode to JS Rendering, and click Premium Proxies. Select Go as your programming language.

Building a scraper with ZenRows

The generated code uses ZenRows and Golang's HTTP client. Next, copy the code and format it like this in your script:

scraper.go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
)

func main() {
	client := &http.Client{}
	originalURL := "https://www.g2.com/products/asana/reviews"
	encodedURL := url.QueryEscape(originalURL)

	// construct the request URL with encodedURL using string interpolation
	requestURL := fmt.Sprintf(
		"https://api.zenrows.com/v1/?apikey=<YOUR_ZENROWS_API_KEY>"+
			"&url=%s&js_render=true&premium_proxy=true", encodedURL)

	req, err := http.NewRequest("GET", requestURL, nil)
	if err != nil {
		log.Fatalln(err)
	}

	resp, err := client.Do(req)
	if err != nil {
		log.Fatalln(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalln(err)
	}

	log.Println(string(body))
}

The code accesses the website and scrapes its HTML content successfully, with its result now showing the correct title:

Output
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <link href="https://www.g2.com/images/favicon.ico" rel="shortcut icon" type="image/x-icon" />
    <title>Asana Reviews 2024</title>
</head>
<body>
    <!-- other content omitted for brevity -->
</body>
</html>

You've now scraped content from a protected website using ZenRows and Golang's HTTP client. Great job! To go further, you can use ZenRows' JavaScript instructions to interact with dynamic pages before extracting their content.
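As a rough sketch, JavaScript instructions are passed as a URL-encoded JSON array via the js_instructions query parameter, alongside js_render=true; check the ZenRows documentation for the exact instruction names before relying on the ones below. You'd swap this into the requestURL construction of the previous script:

scraper.go
	// ...

	// a sketch: add JavaScript instructions to the request
	// (instruction names follow the ZenRows docs; verify them there)
	instructions := url.QueryEscape(`[{"wait": 500}, {"click": ".selector"}]`)

	// replaces the earlier requestURL construction
	requestURL := fmt.Sprintf(
		"https://api.zenrows.com/v1/?apikey=<YOUR_ZENROWS_API_KEY>"+
			"&url=%s&js_render=true&premium_proxy=true&js_instructions=%s",
		encodedURL, instructions)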

Conclusion

In this article, you've learned how to set up a single proxy and rotate proxies from a list with the Chromedp headless browser in Golang.

However, you've seen that advanced anti-bots can still block your scraper despite implementing proxies in Chromedp. Let ZenRows handle all the proxy configurations and anti-bot bypass for you and scrape any website without limitation. Try ZenRows for free!
