How to Take Screenshots With Chromedp: Tutorial [2024]

June 6, 2024 · 8 min read

Are you looking for a simple way to capture screenshots while web scraping in Golang?

In this article, you'll learn three ways of taking screenshots with Chromedp and Golang:

Option 1: Generate a Screenshot for the Visible Part of the Screen

The part of the webpage you see in your browser window is called the viewport. You can easily capture it using Chromedp in Golang.

Here's the above-the-fold screenshot that you'll capture from the target website:

Visible Screen Screenshot

Let's grab this screenshot using the code!

In the main function, initialize a Chrome browser instance. This creates an environment where you can automate browser actions.

Call a function (you'll define this in the next step) to capture the visible portion of the webpage.

scraper.go
// import the required packages
package main

import (
	"context"
	"log"

	"github.com/chromedp/chromedp"
)

// function to capture screenshot of visible part of the screen
func captureVisibleScreenScreenshot(ctx context.Context) error {
	// ...
	return nil
}

func main() {
	// initialize a controllable Chrome instance
	ctx, cancel := chromedp.NewContext(
		context.Background(),
	)
	// release the browser resources when
	// it is no longer needed
	defer cancel()

	// capture screenshot of visible part of the screen
	if err := captureVisibleScreenScreenshot(ctx); err != nil {
		log.Fatal("Error capturing visible screen screenshot:", err)
	}
}

The captureVisibleScreenScreenshot() function navigates to the target webpage, captures the screenshot using the CaptureScreenshot() method, and saves the data in a byte slice.

In Go, a byte slice is a flexible and dynamic data structure for handling binary data. Here, it serves as temporary storage for the binary data representing the screenshot.

The image file is created from this byte slice. Back in main(), log any error the function returns and terminate the program; otherwise, print Success!.

scraper.go
// import the required packages
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/chromedp/chromedp"
)

// function to capture screenshot of visible part of the screen
func captureVisibleScreenScreenshot(ctx context.Context) error {
	var screenshotBuffer []byte
	err := chromedp.Run(ctx,
		chromedp.Navigate("https://www.scrapingcourse.com/ecommerce/product/adrienne-trek-jacket/"),
		chromedp.CaptureScreenshot(&screenshotBuffer),
	)
	if err != nil {
		return err
	}
	// file permissions: 0644 (Owner: read/write, Group: read, Others: read)
	// write the response body to an image file
	err = os.WriteFile("visible-screen-screenshot.png", screenshotBuffer, 0644)
	if err != nil {
		return err
	}

	return nil
}

func main() {
	// initialize a controllable Chrome instance
	ctx, cancel := chromedp.NewContext(
		context.Background(),
	)
	// release the browser resources when
	// it is no longer needed
	defer cancel()

	// capture screenshot of visible part of the screen
	if err := captureVisibleScreenScreenshot(ctx); err != nil {
		log.Fatal("Error capturing visible screen screenshot:", err)
	}

	fmt.Println("Success!")
}

That's it! You've captured a screenshot of the visible part of the webpage. Now, let's modify the code to take a full-page screenshot.


Option 2: Capture a Full-Page Screenshot

Chromedp allows you to capture the screenshot of the whole webpage, including the parts beyond the viewport.

You'll take the full-page screenshot of the following demo product page:

scraper.go
// import the required packages
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/chromedp/chromedp"
)

// function to capture screenshot of complete page
func captureFullPageScreenshot(ctx context.Context) error {
	var screenshotBuffer []byte
	err := chromedp.Run(ctx,
		chromedp.Navigate("https://www.scrapingcourse.com/ecommerce/product/adrienne-trek-jacket/"),
		chromedp.FullScreenshot(&screenshotBuffer, 100),
	)
	if err != nil {
		return err
	}

	// file permissions: 0644 (Owner: read/write, Group: read, Others: read)
	// write the response body to an image file
	err = os.WriteFile("full-page-screenshot.png", screenshotBuffer, 0644)
	if err != nil {
		return err
	}

	return nil
}

func main() {
	// initialize a controllable Chrome instance
	ctx, cancel := chromedp.NewContext(
		context.Background(),
	)
	// release the browser resources when
	// it is no longer needed
	defer cancel()

	// capture screenshot of complete page
	if err := captureFullPageScreenshot(ctx); err != nil {
		log.Fatal("Error capturing full page screenshot:", err)
	}

	fmt.Println("Success!")
}

All you need to do is replace the CaptureScreenshot() method with FullScreenshot() in the previous script and pass a quality parameter (100 in this case) to control the image's compression quality. To keep the code modular, the logic lives in a new function, captureFullPageScreenshot().

Good job! You now know how to take full-page screenshots using Chromedp.

Option 3: Create a Screenshot of a Specific Element

To capture a screenshot of a specific element, you first need to locate it on the webpage using CSS selectors.

Let's grab the following product summary section of the target webpage:

Specific Element Screenshot

Use the Screenshot() method to capture a specific webpage element. This method takes the CSS selector (.entry-summary), a pointer to the screenshot buffer, and chromedp.NodeVisible as parameters. The last parameter makes chromedp wait until the matched element is visible before capturing it.

Here's the complete code to take a specific element screenshot:

scraper.go
// import the required packages
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/chromedp/chromedp"
)

// function to capture screenshot of specific element
func captureSpecificElementScreenshot(ctx context.Context) error {
	var screenshotBuffer []byte
	err := chromedp.Run(ctx,
		chromedp.Navigate("https://www.scrapingcourse.com/ecommerce/product/adrienne-trek-jacket/"),
		chromedp.Screenshot(".entry-summary", &screenshotBuffer, chromedp.NodeVisible),
	)
	if err != nil {
		return err
	}

	// file permissions: 0644 (Owner: read/write, Group: read, Others: read)
	// write the response body to an image file
	err = os.WriteFile("specific-element-screenshot.png", screenshotBuffer, 0644)
	if err != nil {
		return err
	}

	return nil
}

func main() {
	// initialize a controllable Chrome instance
	ctx, cancel := chromedp.NewContext(
		context.Background(),
	)
	// release the browser resources when
	// it is no longer needed
	defer cancel()

	// capture screenshot of specific element
	if err := captureSpecificElementScreenshot(ctx); err != nil {
		log.Fatal("Error capturing specific element screenshot:", err)
	}

	fmt.Println("Success!")
}

Good job! Your code successfully captured the screenshot of the product summary section.

The above methods work fine for screenshots, but they won't help if the target sites implement anti-bot measures. The next section will show how you can tackle this problem.

Avoid Getting Blocked When Taking Screenshots in Chromedp

One of the biggest challenges of web scraping is avoiding getting blocked. Many websites implement measures to detect and block automated scraping activities.

Using one of the previous scripts, let's try to capture a full-page screenshot of a protected G2 Reviews webpage.

scraper.go
// import the required packages
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/chromedp/chromedp"
)

// function to capture screenshot of complete page
func captureFullPageScreenshot(ctx context.Context) error {
	var screenshotBuffer []byte
	err := chromedp.Run(ctx,
		chromedp.Navigate("https://www.g2.com/products/azure-sql-database/reviews"),
		chromedp.FullScreenshot(&screenshotBuffer, 100),
	)
	if err != nil {
		return err
	}


	// file permissions: 0644 (Owner: read/write, Group: read, Others: read)
	// write the response body to an image file
	err = os.WriteFile("full-page-screenshot.png", screenshotBuffer, 0644)
	if err != nil {
		return err
	}

	return nil
}

func main() {
	// initialize a controllable Chrome instance
	ctx, cancel := chromedp.NewContext(
		context.Background(),
	)
	// release the browser resources when
	// it is no longer needed
	defer cancel()

	// capture screenshot of complete page
	if err := captureFullPageScreenshot(ctx); err != nil {
		log.Fatal("Error capturing full page screenshot:", err)
	}

	fmt.Println("Success!")
}

Your request got blocked by Cloudflare:

Full Page Screenshot

The most effective way to take screenshots and scrape any page without getting blocked is using a web scraping API like ZenRows.

It auto-rotates premium proxies, optimizes request headers, bypasses anti-bot systems, and more.

ZenRows acts as a headless browser like Chromedp and provides features like JavaScript instructions for scraping dynamic content. By switching to ZenRows, you can avoid the technical setup and limitations of Chromedp.

To try it out, sign up for ZenRows. After signing in, you'll be redirected to the Request Builder page. Paste the target URL into the URL to Scrape box, toggle on JS Rendering, and activate Premium Proxies. On the right, select the Go tab to generate the required code.

building a scraper with zenrows

You need to make a few changes to the generated code. Write the returned response body (a byte slice) to an image file, and make sure the request URL includes the screenshot and screenshot_fullpage parameters. Here's what your final code should look like:

scraper.go
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	client := &http.Client{}
	req, err := http.NewRequest("GET", "https://api.zenrows.com/v1/?apikey=<YOUR_ZENROWS_API_KEY>&url=https%3A%2F%2Fwww.g2.com%2Fproducts%2Fazure-sql-database%2Freviews&js_render=true&premium_proxy=true&screenshot=true&screenshot_fullpage=true", nil)
	if err != nil {
		log.Fatalln(err)
	}
	resp, err := client.Do(req)
	if err != nil {
		log.Fatalln(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalln(err)
	}

	// file permissions: 0644 (Owner: read/write, Group: read, Others: read)
	// write the response body to an image file
	err = os.WriteFile("G2_full_page_screenshot.png", body, 0644)
	if err != nil {
		log.Fatalln(err)
	}
}

This code captures a full-page screenshot of the G2 Reviews page:

G2 Full Page Screenshot

Congrats! You just bypassed a Cloudflare-protected webpage and took its screenshot.

Conclusion

In this tutorial, you learned three methods of taking a screenshot with Chromedp in Golang:

  • Capturing the visible part of the webpage.
  • Capturing the full-page screenshot.
  • Capturing a screenshot of a specific page element.

No matter how advanced your scraping script is, anti-bot systems can still block it. We recommend using ZenRows, an all-in-one web scraping solution, to bypass any anti-bot system and scrape at scale without getting blocked. Try ZenRows for free!

Ready to get started?

Up to 1,000 URLs for free are waiting for you