
How to Take a Screenshot With Puppeteer: Tutorial 2025

Updated: May 31, 2024 · 8 min read

Are you using Puppeteer and want to take a screenshot for testing or web scraping purposes? You're in the right place! 

This article will walk you through the three main methods of taking Puppeteer screenshots.

How to Take a Puppeteer Screenshot

There are three methods of taking a screenshot while web scraping with Puppeteer. You can screenshot the visible parts of a web page, capture the entire page, or target specific elements. 

You'll learn how to achieve each in this section by taking a screenshot of a product page on ScrapingCourse.com, a demo website with e-commerce features.
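Before diving in, make sure Node.js is installed and add Puppeteer to your project (the command below assumes you're working inside an existing Node.js project directory). Installing Puppeteer also downloads a compatible browser build:

Terminal
npm install puppeteer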

Option 1: Generate a Screenshot of the Visible Part of the Screen

Capturing the visible part of a web page is the simplest way to take screenshots in Puppeteer. By default, the library captures only the current viewport, which is the top of the page when it first loads.

Here's the expected result of the visible part of the target page:

ScrapingCourse product page visible part screenshot

Now, let's see the code to generate this screenshot. First, start the Puppeteer instance in headless mode and visit the target product page:

scraper.js
// import the required library
const puppeteer = require('puppeteer');

(async () =>{

    // start browser instance in headless mode and launch a new page
    const browser = await puppeteer.launch({ headless: 'new' });
    const page = await browser.newPage();

    // visit the product page
    await page.goto('https://www.scrapingcourse.com/ecommerce/product/abominable-hoodie/');

    await browser.close();
})();

Expand the script with the following code. It pauses to give the page time to load and then screenshots the visible part of the page:

scraper.js
(async () =>{

    // ...
    
    // use setTimeout to wait for elements to load
    await new Promise(resolve => setTimeout(resolve, 5000));

    // take a screenshot of the visible part of the page
    await page.screenshot({ path: 'visible-part-screenshot.jpg' })

    await browser.close();
})();

Here's the complete code:

scraper.js
// import the required library
const puppeteer = require('puppeteer');

(async () =>{

    // start browser instance in headless mode and launch a new page
    const browser = await puppeteer.launch({ headless: 'new' });
    const page = await browser.newPage();

    // visit the product page
    await page.goto('https://www.scrapingcourse.com/ecommerce/product/abominable-hoodie/');

    // use setTimeout to wait for elements to load
    await new Promise(resolve => setTimeout(resolve, 5000));

    // take a screenshot of the visible part of the page
    await page.screenshot({ path: 'visible-part-screenshot.jpg' })

    await browser.close();
})();
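Note that the fixed setTimeout() pause works, but it wastes time on fast pages and can fall short on slow ones. As an alternative sketch, you can pass the waitUntil: 'networkidle0' option to page.goto() so Puppeteer waits until network activity settles before the screenshot is taken:

scraper.js
// import the required library
const puppeteer = require('puppeteer');

(async () => {

    // start browser instance in headless mode and launch a new page
    const browser = await puppeteer.launch({ headless: 'new' });
    const page = await browser.newPage();

    // wait until there are no open network connections for at least 500 ms
    await page.goto('https://www.scrapingcourse.com/ecommerce/product/abominable-hoodie/', {
        waitUntil: 'networkidle0',
    });

    // take a screenshot of the visible part of the page
    await page.screenshot({ path: 'visible-part-screenshot.jpg' });

    await browser.close();
})();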

Bravo! You just took a screenshot of the visible parts of a web page with Puppeteer. Now, check the screenshot in your project directory. What if you want a full page?


Option 2: Capture a Full-Page Screenshot

A full-page Puppeteer screenshot captures the entire page, including the content you'd otherwise have to scroll to see. It only requires a small addition to the previous code.

To take a full-page screenshot, add the fullPage option to the screenshot() call and set it to true:

scraper.js
// import the required library
const puppeteer = require('puppeteer');

(async () =>{

    // start browser instance in headless mode and launch a new page
    const browser = await puppeteer.launch({ headless: 'new' });
    const page = await browser.newPage();

    // visit the product page
    await page.goto('https://www.scrapingcourse.com/ecommerce/product/abominable-hoodie/');

    // use setTimeout to wait for elements to load
    await new Promise(resolve => setTimeout(resolve, 10000));

    // take a screenshot of the full page
    await page.screenshot({ path: 'full-page-screenshot.jpg', fullPage: true})

    await browser.close();
})();

The code takes a full-page screenshot of the target page. The output includes the entire scrollable area, as shown:

ScrapingCourse product page full-page screenshot
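Keep in mind that Puppeteer uses an 800x600 viewport by default, so the full-page capture will only be 800 pixels wide. If you want a wider, desktop-sized image or a smaller file, here's a sketch that sets the viewport with page.setViewport() and saves a compressed JPEG (the 1366x768 size and the quality value are arbitrary choices):

scraper.js
// import the required library
const puppeteer = require('puppeteer');

(async () => {

    // start browser instance in headless mode and launch a new page
    const browser = await puppeteer.launch({ headless: 'new' });
    const page = await browser.newPage();

    // widen the viewport so the capture matches a desktop-sized window
    await page.setViewport({ width: 1366, height: 768 });

    // visit the product page
    await page.goto('https://www.scrapingcourse.com/ecommerce/product/abominable-hoodie/');

    // use setTimeout to wait for elements to load
    await new Promise(resolve => setTimeout(resolve, 10000));

    // take a full-page screenshot as a compressed JPEG
    await page.screenshot({
        path: 'full-page-screenshot.jpg',
        fullPage: true,
        type: 'jpeg',
        quality: 80, // quality only applies to jpeg/webp output
    });

    await browser.close();
})();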

You now know how to take a full-page screenshot with Puppeteer. Nice job! Next, you'll see how to capture a specific element.

Option 3: Create a Screenshot of a Specific Element

Screenshotting a specific element captures a selected part of the web page. For instance, a screenshot of the product summary container looks like this:

ScrapingCourse specific element screenshot.

You'll have to point Puppeteer to the target element to capture it. Now, let's write the code to get the above screenshot.

To begin, launch a browser instance in headless mode and visit the target product page:

scraper.js
// import the required library
const puppeteer = require('puppeteer');

(async () =>{

    // start browser instance in headless mode and launch a new page
    const browser = await puppeteer.launch({ headless: 'new' });
    const page = await browser.newPage();

    // visit the target product page
    await page.goto('https://www.scrapingcourse.com/ecommerce/product/abominable-hoodie/');

    await browser.close();
})();

Next, wait for the page to load, grab the target element (.summary.entry-summary) using its CSS selector, and take its screenshot:

scraper.js
(async () =>{

    // ...
  
    // use setTimeout to wait for the page to load
    await new Promise(resolve => setTimeout(resolve, 5000));

    // obtain the specific element
    const element = await page.$('.summary.entry-summary');

    // capture a screenshot of the specific element
    await element.screenshot({ path: 'specific-element-screenshot.jpg' });

    await browser.close();
})();
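As a side note, page.waitForSelector() waits for the element to appear and returns its handle, so you can merge the fixed pause and the separate page.$() call into a single step. Here's a compact variant using the same selector (the explicit-pause version is kept as the final code below):

scraper.js
// import the required library
const puppeteer = require('puppeteer');

(async () => {

    // start browser instance in headless mode and launch a new page
    const browser = await puppeteer.launch({ headless: 'new' });
    const page = await browser.newPage();

    // visit the target product page
    await page.goto('https://www.scrapingcourse.com/ecommerce/product/abominable-hoodie/');

    // wait for the product summary to appear and grab its handle in one step
    const element = await page.waitForSelector('.summary.entry-summary');

    // capture a screenshot of the specific element
    await element.screenshot({ path: 'specific-element-screenshot.jpg' });

    await browser.close();
})();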

The final code looks like this:

scraper.js
// import the required library
const puppeteer = require('puppeteer');

(async () =>{

    // start browser instance in headless mode and launch a new page
    const browser = await puppeteer.launch({ headless: 'new' });
    const page = await browser.newPage();

    // visit the target product page
    await page.goto('https://www.scrapingcourse.com/ecommerce/product/abominable-hoodie/');

    // use setTimeout to wait for the page to load
    await new Promise(resolve => setTimeout(resolve, 5000));

    // obtain the specific element
    const element = await page.$('.summary.entry-summary');

    // capture a screenshot of the specific element
    await element.screenshot({ path: 'specific-element-screenshot.jpg' });

    await browser.close();
})();

This code outputs the expected screenshot. That's great! However, you'll likely get blocked by anti-bots when taking screenshots. How can you avoid that while scraping with Puppeteer?

Avoid Getting Blocked While Taking Screenshots

Modern websites employ anti-bot systems that will prevent you from scraping content. You need to bypass them to scrape without getting blocked.

For example, the previous code won't output the desired content when used to screenshot a protected web page like the Antibot Challenge. Try it out with the following code, which points the scraper at the Antibot Challenge page instead:

scraper.js
// import the required library
const puppeteer = require('puppeteer');

(async () =>{

    // start browser instance in headless mode and launch a new page
    const browser = await puppeteer.launch({ headless: 'new' });
    const page = await browser.newPage();

    // visit the product page
    await page.goto('https://www.scrapingcourse.com/antibot-challenge');

    // use setTimeout to wait for elements to load
    await new Promise(resolve => setTimeout(resolve, 10000));

    // take a screenshot of the page
    await page.screenshot({ path: 'full-page-screenshot-g2.jpg'})

    await browser.close();
})();

The code returns a screenshot showing that an anti-bot has blocked Puppeteer:

scrapingcourse cloudflare blocked screenshot

That's not the screenshot you want!

The most effective way to bypass anti-bots and capture screenshots reliably is by using a Screenshot API like ZenRows. It provides premium proxies, automatically fixes your request headers, rotates user agents, and helps you bypass any anti-bot system.

It supports multiple capture options, from the viewport and the full page to specific elements, along with built-in handling of dynamic JavaScript content and multiple export formats, including PNG, JPG, and JSON.

Let's try capturing the previous screenshot using ZenRows!

Sign up, and you'll get to the Request Builder.

building a scraper with zenrows

Paste the target URL in the link box, and activate Premium Proxies and JS Rendering Boost mode. Scroll down and select the Screenshot output option.

Then, choose NodeJS as your language and select the API connection mode. Copy and paste the generated code into your script.

You'll use Axios as the HTTP client, so make sure to install it with npm:

Terminal
npm install axios

Modify the generated code by adding the responseType: "stream" option so Axios streams the binary image data instead of buffering it as text. Finally, write the response to a file in your project directory.

Here's the final modified code:

scraper.js
// npm install axios
const axios = require('axios');
const fs = require('fs');

const url = "https://www.scrapingcourse.com/antibot-challenge";
const apikey = "<YOUR_ZENROWS_API_KEY>";

// make the request
axios({
  url: "https://api.zenrows.com/v1/",
  method: "GET",
  params: {
    url: url,
    apikey: apikey,
    js_render: "true",
    premium_proxy: "true",
    screenshot: "true",
    screenshot_fullpage: "true",
  },
  // set response type to stream
  responseType: "stream",
})
  .then(function (response) {
    // write the image file to your project directory
    const writer = fs.createWriteStream("screenshot.png");

    response.data.pipe(writer);
    writer.on("finish", () => {
      console.log("Image saved successfully!");
    });
  })
  .catch(function (error) {
    console.error("Error:", error);
  });
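Run the script with Node (this assumes you saved the snippet as scraper.js); on success, it logs the message from the finish handler:

Terminal
node scraper.js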

The code captures a full-page screenshot of the protected page, as expected:

scrapingcourse-antibot-challenge-passed

Congratulations! You've bypassed anti-bot protection and grabbed a screenshot of the protected web page with ZenRows.

Check out our official documentation to learn more.

Conclusion

In this article, you've learned the three methods of capturing Puppeteer screenshots. Here's a recap of what you now know:

  • Taking a screenshot of the visible part of a web page.
  • Capturing a full web page, including its scrolling effects.
  • Getting a screenshot of a specific target element.
  • Accessing a protected website and grabbing a screenshot of its full page.

Don't forget that anti-bot mechanisms are out there to prevent you from taking screenshots while web scraping. Bypass all blocks with ZenRows and scrape any website at scale without getting blocked. Try ZenRows for free!
