An IP ban is one of the most frustrating outcomes of web scraping. You just want to get the data and move on, but without the right measures in place, a ban is hard to avoid.
Is your IP address getting banned? Node Unblocker can help. In this article, we'll explain what Node Unblocker is and how to use it for web scraping. We'll cover:
- What Is Node Unblocker?
- Setting Up Node Unblocker.
- Advanced Configuration.
- Understanding Limitations.
- Premium Proxy Solution.
What Is Node Unblocker?
Node Unblocker is a free proxy tool for Node.js that acts as an intermediary between your application and target websites. It has a developer-friendly API that integrates easily with Express.js applications.
When you make a request through Node Unblocker, it gets the content on your behalf, processes it on the fly, and streams it back to your application. This approach helps hide your scraper's real IP address while maintaining fast response times due to its streaming architecture.
Key Features of Node Unblocker:
- Compatible with Express.js for easy integration.
- Stream-based processing for optimal performance.
- Automatic handling of redirects and protocol switches.
- Smart cookie management across domains and protocols.
- Built-in support for WebSocket proxying.
How to Set Up Node Unblocker
Before diving into the implementation, make sure you have Node.js and npm installed on your development machine. We'll create a basic proxy server and then deploy it to handle real scraping tasks.
Step 1: Create and Test the Server with Express
Let's start by creating a new Node.js project with ES module support using the esnext initializer:
npm init esnext
Then, proceed to install the required dependencies: express and unblocker.
npm install express unblocker
Now, create an index.js file in your project root and import the required dependencies:
import express from "express";
import Unblocker from "unblocker";
The unblocker package exports a default Unblocker class that takes a configuration options object upon instantiation. One of its options is prefix, which specifies the path that proxied URLs begin with.
Next, create a new express app and an instance of unblocker.
Because of Node Unblocker's express-compatible API, we only need to register its instance as middleware to integrate it.
import express from "express";
import Unblocker from "unblocker";
// initialize express
const app = express();
// create unblocker instance
const unblocker = new Unblocker({});
// register unblocker middleware
app.use(unblocker);
We've passed in an empty object here, meaning that unblocker will use the default configuration: the prefix is /proxy/.
The last step is calling the listen method on the Express app to start the server on port 5005 and the on method to allow Unblocker to proxy WebSockets when an upgrade event is triggered:
// ...
const PORT = process.env.PORT || 5005;
// start server and enable websocket proxying
app
  .listen(PORT, () => console.log(`Listening on port ${PORT}`))
  .on("upgrade", unblocker.onUpgrade);
Here's the full code:
import express from "express";
import Unblocker from "unblocker";
// initialize express
const app = express();
// create unblocker instance
const unblocker = new Unblocker({});
// register unblocker middleware
app.use(unblocker);
const PORT = process.env.PORT || 5005;
// start server and enable websocket proxying
app
  .listen(PORT, () => console.log(`Listening on port ${PORT}`))
  .on("upgrade", unblocker.onUpgrade);
Now, run node index.js to spin up your server on port 5005, then append a target URL to the proxy prefix to test it. We'll use https://httpbin.io/ip as the target page. Open the following link in your browser:
http://localhost:5005/proxy/https://httpbin.io/ip
You'll have an output like this, displaying your server's IP:

Note that the IP address won't change in this test since you're running the Node Unblocker server locally. To mask your IP address effectively, you'll need to deploy your unblocker server to a remote host, which we'll cover in the next step.
Step 2: Deploy Node Unblocker Proxy Server
When it comes to deploying your Node Unblocker proxy server, you have several cloud platforms at your disposal.
Popular choices include Render with its generous free tier, DigitalOcean's affordable droplets starting at $4/month, Railway's developer-friendly platform starting at $5/month, and Heroku's reliable infrastructure.
For this tutorial, we'll focus on deploying to Heroku since it offers a good balance of simplicity, reliability, and straightforward scaling options at just $5/month for the basic plan.
Before hosting any scraping or proxy software, review the platform's Acceptable Use Policy (AUP). Some providers don't permit such software at all, while others, like Heroku, allow it only under strict requirements and at their discretion.
First, head to Heroku and create an account. The platform is no longer free, but the basic plan costs just $5/month; you can subscribe to it in the billing section of your account.
Then, install the Heroku CLI for your operating system; you'll need it to deploy. Once it's installed, log in to your account using the heroku login command.
The next step is creating a new app on your dashboard or alternatively via the CLI:
heroku apps:create <app-name>
Replace <app-name> with a suitable name for your app.
Now, specify the start script in your package.json file:
{
  "name": "express-unblocker",
  "version": "1.0.0",
  "type": "module",
  "module": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "description": "",
  "dependencies": {
    "express": "^4.21.2",
    "unblocker": "^2.3.1"
  }
}
Run the following commands to deploy:
git init
git add .
git commit -m "initial commit"
heroku git:remote -a <app-name>
git push heroku main
Heroku will detect that your app is a Node.js application and build it accordingly. After deployment, your proxy service will be assigned a domain based on the app name you specified.

Go ahead and test it with https://httpbin.io/ip in your browser to confirm that the deployed service works, as shown in the image below.

Fantastic! You've successfully deployed a working Node Unblocker proxy service to Heroku.
Step 3: Use the Proxy Server for Scraping Requests
Now that we have our proxy server deployed, we can create a powerful proxy network by deploying multiple instances across different hosting providers. This distributed approach helps prevent IP-based blocking and improves scraping reliability. Let's see how to use it with popular scraping tools.
Here's how to use Puppeteer to route requests through your proxy server:
import puppeteer from "puppeteer";

async function scrapeWithProxy() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // route the request through your proxy server
  await page.goto("<YOUR_DEPLOYED_APP_URL>/proxy/https://httpbin.io/ip");

  // get the response content
  const content = await page.content();
  console.log("Response:", content);

  await browser.close();
}

scrapeWithProxy().catch(console.error);
You can achieve similar results using Playwright or any other web scraping library. The key is to prefix your target URLs with your proxy server's address.
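Because the proxy is just a URL prefix, even a plain HTTP client works. Here's a minimal sketch using Node's built-in fetch (Node 18+), where <YOUR_DEPLOYED_APP_URL> remains a placeholder for your deployed proxy's domain:

```javascript
// Node 18+ ships a global fetch, so no extra dependencies are needed.
// <YOUR_DEPLOYED_APP_URL> is a placeholder for your deployed proxy's domain.
const PROXY_BASE = "<YOUR_DEPLOYED_APP_URL>/proxy/";

// prefix the target URL with the proxy endpoint
function proxiedUrl(targetUrl) {
  return `${PROXY_BASE}${targetUrl}`;
}

async function scrape(targetUrl) {
  // the proxy fetches the page and streams the response back
  const response = await fetch(proxiedUrl(targetUrl));
  return response.text();
}

scrape("https://httpbin.io/ip")
  .then((body) => console.log(body))
  .catch(console.error);
```

The same proxiedUrl helper can feed any library that accepts a URL, from axios to Playwright's page.goto.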
To scale this setup, deploy several Node Unblocker servers on Heroku or other cloud providers or VPS services like Vultr. Then, modify your scraping script to randomly select a proxy server for each request, effectively creating your own proxy rotation system.
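A rotation system like that can be sketched in a few lines; the server URLs below are hypothetical placeholders for your own deployed instances:

```javascript
// hypothetical list of deployed Node Unblocker instances
const PROXY_SERVERS = [
  "https://unblocker-one.example.com",
  "https://unblocker-two.example.com",
  "https://unblocker-three.example.com",
];

// pick one server at random for each request
function pickProxy(servers) {
  return servers[Math.floor(Math.random() * servers.length)];
}

// build the final URL: random proxy base + /proxy/ prefix + target
function buildRequestUrl(targetUrl, servers = PROXY_SERVERS) {
  return `${pickProxy(servers)}/proxy/${targetUrl}`;
}

console.log(buildRequestUrl("https://httpbin.io/ip"));
```

Calling buildRequestUrl before each request spreads your traffic across the pool, so no single instance's IP accumulates all the requests.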
Advanced Node Unblocker Server Configuration
Node Unblocker comes with a robust set of configuration options that can enhance your proxy server's capabilities for web scraping tasks. By customizing these settings, you can better handle anti-bot measures, manage cookies, and control how your proxy processes different types of content.
The library supports two main approaches to configuration. You can pass options directly when creating the Unblocker instance, or you can utilize middleware functions to modify requests and responses on the fly. For more complex scenarios, you can even combine both approaches to create a highly customized proxy setup that meets your specific scraping needs.
Let's look at a practical example of using request middleware to modify the User-Agent header, a common requirement when dealing with anti-bot systems:
import express from "express";
import Unblocker from "unblocker";

// create express server
const app = express();

// configure request headers
function setBrowserHeaders(data) {
  // simulate chrome browser environment
  data.headers["user-agent"] =
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36";
}

// initialize unblocker with configuration
const unblocker = new Unblocker({
  prefix: "/proxy/",
  requestMiddleware: [setBrowserHeaders],
});

// register middleware
app.use(unblocker);

// start server
const PORT = process.env.PORT || 8080;
app
  .listen(PORT, () => console.log(`Proxy server running on port ${PORT}`))
  .on("upgrade", unblocker.onUpgrade);
This configuration helps your scraper appear more like a legitimate browser, potentially reducing the likelihood of being blocked. You can extend this pattern to implement other customizations like cookie handling, request headers, or response processing based on your specific scraping requirements.
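Response middleware follows the same shape as request middleware: each function receives a data object whose headers property it can modify. The sketch below is a hypothetical customization (the stripped header names are illustrative) that removes response headers which often interfere with re-serving proxied pages:

```javascript
// Hypothetical response middleware: unblocker passes a data object whose
// headers property holds the proxied response's headers (lowercased keys).
function stripBlockingHeaders(data) {
  delete data.headers["x-frame-options"];
  delete data.headers["content-security-policy"];
}

// Register it when creating the instance, alongside request middleware:
// const unblocker = new Unblocker({
//   prefix: "/proxy/",
//   requestMiddleware: [setBrowserHeaders],
//   responseMiddleware: [stripBlockingHeaders],
// });

// quick demonstration with a mock data object
const mock = {
  headers: { "x-frame-options": "DENY", "content-type": "text/html" },
};
stripBlockingHeaders(mock);
console.log(mock.headers); // { 'content-type': 'text/html' }
```

Only the targeted headers are removed; everything else passes through to the client untouched.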
Limitations of Node Unblocker and Alternatives
Node Unblocker is easy to set up and deploy, but it has serious limitations as a web scraping proxy: you can still get blocked, and the cost of maintenance is high.
Let's explore them and find solutions.
Getting Blocked While Scraping
When using Node Unblocker proxies for web scraping, you'll often encounter sophisticated anti-bot systems that can detect and block automated access.
Modern websites employ multiple layers of protection, including fingerprinting to detect inconsistencies in your requests, IP-based rate limiting that can identify distributed scraping patterns, strict cookie validation, and more.
These security measures can significantly impact your scraping success rate, even when routing requests through Node Unblocker proxies. To mitigate these challenges, you'll need to implement additional strategies alongside your Node Unblocker setup.
Maintenance Work
From time to time, you'll need to perform maintenance tasks, such as installing new patches and fixing software bugs that interfere with your ability to access the content you want.
Also, you may need to add more Node Unblocker proxies to reduce the frequency at which the existing ones are used to access target websites. Generally, the work involved in maintaining Node Unblocker is substantial and requires careful consideration of your infrastructure needs and scaling strategy.
Other limitations of Node Unblocker include:
- Limited compatibility with modern single-page applications.
- Issues handling complex JavaScript and AJAX requests.
- Challenges with cross-domain cookie management.
- Resource overhead when scaling proxy servers.
Consider these limitations when deciding if Node Unblocker fits your specific use case and infrastructure requirements. For high-volume scraping, you might need to explore alternative proxy solutions.
How to Choose the Best Proxies
For reliable web scraping at scale, basic proxy solutions like Node Unblocker often fall short due to their limited features and maintenance overhead. Premium proxy services offer crucial advantages for production-grade scraping, including high-reputation residential IPs, automatic rotation, and robust session management.
For a comprehensive overview of proxy solutions for web scraping, check out this web scraping proxy guide.
ZenRows' Residential Proxies, the best premium proxy service, provides a robust infrastructure for high-performance data extraction. It maintains a network of over 55 million residential IPs spread across 185+ countries, ensuring reliable access to geo-restricted content.
Each request automatically rotates through this IP pool, while the intelligent proxy selection system chooses the most suitable IPs based on the target website and historical success rates.
Let's see ZenRows' Residential Proxies in action.
Sign up and go to the Proxy Generator dashboard. Your premium residential proxy will be generated automatically.

Customize the settings according to your requirements and replace the placeholders in the following code with the generated proxy credentials:
// npm install axios
const axios = require('axios');

axios
  .get('https://httpbin.org/ip', {
    proxy: {
      protocol: 'http',
      host: 'superproxy.zenrows.com',
      port: '1337',
      auth: {
        username: '<ZENROWS_PROXY_USERNAME>',
        password: '<ZENROWS_PROXY_PASSWORD>',
      },
    },
  })
  .then((res) => {
    console.log(res.data);
  })
  .catch((err) => console.error(err));
Running the above code multiple times yields output similar to this:
// request 1
{
  "origin": "174.49.98.49:58352"
}

// request 2
{
  "origin": "38.178.89.195:36856"
}
Congratulations! The output confirms that your request was successfully routed through ZenRows' premium residential proxies.
Conclusion
Web scraping projects often hit a common roadblock: IP blocks and rate limiting. Node Unblocker offers a solution for bypassing these restrictions but has significant limitations and maintenance overhead that can impact your scraping success.
For reliable data extraction at any scale, ZenRows' Residential Proxies provide a robust alternative that eliminates common scraping headaches. Ready to upgrade your scraping infrastructure? Try ZenRows for free!