AIOHTTP is an asynchronous HTTP client/server framework built on top of Python's asyncio library. While efficient for asynchronous web scraping in Python, it can still get blocked by websites with anti-bot measures.
In this tutorial, you'll learn how to set up proxies to avoid that. We’ll go through a simple proxy setup and then learn how to build a proxy rotator and use premium proxies for maximum protection against blocks and bans. Let's go!
Set up a Single Proxy in AIOHTTP
To get started, install the AIOHTTP Python library:
pip install aiohttp
Next, import the aiohttp and asyncio modules into your script.
import aiohttp
import asyncio
Now, define an asynchronous function enclosing the main logic to perform HTTP GET requests. Create a ClientSession object to connect with the web server. Then, make a GET request to HTTPBin, a website that returns the client's IP address. Finally, print the response to the console.
# ...
# async function to perform HTTP GET request
async def main():
    async with aiohttp.ClientSession() as session:
        # perform an HTTP GET request
        async with session.get("http://httpbin.org/ip") as resp:
            print(await resp.text())
Execute your asynchronous function using asyncio.run():
# ...
# run the main async function
asyncio.run(main())
Your complete script should look like this:
import aiohttp
import asyncio

# async function to perform HTTP GET request
async def main():
    async with aiohttp.ClientSession() as session:
        # perform an HTTP GET request
        async with session.get("http://httpbin.org/ip") as resp:
            print(await resp.text())

# run the main async function
asyncio.run(main())
Running the above code will print your machine's IP address:
{
  "origin": "210.212.39.138"
}
However, if you make too many requests from the same IP to one website's server, your activity may be flagged as suspicious, resulting in blocks or even permanent bans.
Let's learn how to integrate proxies into the code to avoid that.
First, grab a free proxy from the Free Proxy List website. Next, define a proxy variable that stores the proxy server address. Finally, pass this variable to the session.get() method.
Here's the updated code implementing a single proxy in AIOHTTP:
import aiohttp
import asyncio

# async function to perform HTTP GET request
async def main():
    async with aiohttp.ClientSession() as session:
        # define a proxy server address
        proxy = "http://8.219.97.248:80"
        # perform an HTTP GET request
        async with session.get("http://httpbin.org/ip", proxy=proxy) as resp:
            print(await resp.text())

# run the main async function
asyncio.run(main())
You'll get the following response:
{
  "origin": "8.219.64.236"
}
Congratulations! You successfully masked your real IP address using a proxy.
The proxies used in the code above most likely won't work by the time you read this article. Free proxies have a short lifespan, so they're only suitable for educational purposes. Feel free to grab fresh ones from the Free Proxy List.
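Because of that short lifespan, a dead proxy can also leave your request hanging indefinitely. To fail fast, you can set a request timeout with aiohttp's ClientTimeout. Here's a minimal sketch reusing the same placeholder proxy address as above; the 10-second limit is an arbitrary choice:
import aiohttp
import asyncio

# async function to perform HTTP GET request with a timeout
async def main():
    # fail fast if the proxy is dead (10 seconds is an arbitrary limit)
    timeout = aiohttp.ClientTimeout(total=10)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        # placeholder proxy address; replace it with a fresh one
        proxy = "http://8.219.97.248:80"
        try:
            # perform an HTTP GET request
            async with session.get("http://httpbin.org/ip", proxy=proxy) as resp:
                print(await resp.text())
        except asyncio.TimeoutError:
            # the proxy didn't respond within the time limit
            print("The proxy didn't respond in time")

# run the main async function
asyncio.run(main())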
Proxy Authentication
Some proxy servers require authentication to ensure only users with valid credentials can access them. It's typically the case with commercial solutions or premium proxies.
Define the proxy authentication credentials using aiohttp.BasicAuth(). Then, pass them to the session.get() method via the proxy_auth parameter. Here's the updated code:
import aiohttp
import asyncio

# async function to perform HTTP GET request
async def main():
    async with aiohttp.ClientSession() as session:
        # define a proxy server address
        proxy = "http://8.219.97.248:80"
        # define proxy authentication credentials
        proxy_auth = aiohttp.BasicAuth("<YOUR_USERNAME>", "<YOUR_PASSWORD>")
        # perform an HTTP GET request
        async with session.get("http://httpbin.org/ip", proxy=proxy, proxy_auth=proxy_auth) as resp:
            print(await resp.text())

# run the main async function
asyncio.run(main())
You'll get the following IP address as output:
{
  "origin": "8.219.64.236"
}
AIOHTTP also allows you to specify the authentication credentials (username and password) in the proxy URL:
import aiohttp
import asyncio

# async function to perform HTTP GET request
async def main():
    async with aiohttp.ClientSession() as session:
        # define authentication credentials in proxy URL
        proxy = "http://<YOUR_USERNAME>:<YOUR_PASSWORD>@8.219.97.248:80"
        # perform an HTTP GET request
        async with session.get("http://httpbin.org/ip", proxy=proxy) as resp:
            print(await resp.text())

# run the main async function
asyncio.run(main())
You'll get the same output as before:
{
  "origin": "8.219.64.236"
}
Best Proxy Protocol: HTTP, HTTPS, SOCKS
HTTP, HTTPS, and SOCKS are the most common proxy protocols. Each has its own strengths and use cases.
Both HTTP and HTTPS proxy protocols are useful for web scraping. HTTP proxies are suitable for accessing plain HTTP websites, while HTTPS proxies provide encryption, which ensures secure communication.
Since HTTPS proxies can handle both HTTP and HTTPS requests, they're generally the better choice for web scraping.
SOCKS is a versatile, lower-level proxy protocol that can relay arbitrary TCP traffic (and, with SOCKS5, UDP), which makes it suitable for non-HTTP use cases as well.
If you want to use SOCKS or SOCKS5 proxies with AIOHTTP, you first need to install the aiohttp-socks package, since plain AIOHTTP doesn't have native support for SOCKS proxies.
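Install it first:
pip install aiohttp-socks
Here's a minimal sketch of what the setup could look like, assuming a placeholder SOCKS5 proxy address. Note that with aiohttp-socks you pass a ProxyConnector to the session instead of a proxy argument to session.get():
import aiohttp
import asyncio
from aiohttp_socks import ProxyConnector

# async function to perform HTTP GET request through a SOCKS5 proxy
async def main():
    # placeholder SOCKS5 proxy address; replace it with a working one
    connector = ProxyConnector.from_url("socks5://<PROXY_HOST>:<PROXY_PORT>")
    async with aiohttp.ClientSession(connector=connector) as session:
        # perform an HTTP GET request
        async with session.get("http://httpbin.org/ip") as resp:
            print(await resp.text())

# run the main async function
asyncio.run(main())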
Use Rotating Proxies With AIOHTTP
As mentioned above, when your script sends multiple requests from the same IP in a short time, websites may flag this as suspicious activity and block your access.
If you rotate IPs, your scraper will be much more effective and harder to detect.
Let's implement this functionality!
First, get a few free proxies from the Free Proxy List website.
proxy_list = [
    "http://8.219.97.248:80",
    "http://3.77.153.38:80",
    "http://3.70.179.165:80",
    "http://20.235.159.154:80"
]
Next, define a function that randomly selects a proxy from proxy_list and returns it. You can use Python's random.choice() method for this.
Here's the modified code with rotating proxies functionality:
import aiohttp
import asyncio
import random

# function to randomly select and return a proxy
def rotate_proxy():
    proxy_list = [
        "http://8.219.97.248:80",
        "http://3.77.153.38:80",
        "http://3.70.179.165:80",
        "http://20.235.159.154:80"
    ]
    return random.choice(proxy_list)

# async function to perform HTTP GET request
async def main():
    async with aiohttp.ClientSession() as session:
        # choose a random proxy
        proxy = rotate_proxy()
        # perform an HTTP GET request
        async with session.get("http://httpbin.org/ip", proxy=proxy) as resp:
            print(await resp.text())

# run the main async function
asyncio.run(main())
Each time you run this code, the request goes through a randomly selected proxy, so the IP address in the output changes between runs.
Here's the result for three runs:
# request 1
{
  "origin": "3.70.179.165"
}
# request 2
{
  "origin": "8.219.64.236"
}
# request 3
{
  "origin": "20.235.159.154"
}
The above output confirms the code is successfully rotating the proxies. Good job!
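Since aiohttp shines when you fire many requests concurrently, you'll typically want a fresh proxy per request rather than per run. Here's a minimal sketch of that idea, reusing the rotate_proxy() function defined above and launching three concurrent requests with asyncio.gather(); error handling is omitted for brevity.
# ...
# async function to fetch a URL through a randomly selected proxy
async def fetch(session, url):
    proxy = rotate_proxy()
    async with session.get(url, proxy=proxy) as resp:
        return await resp.text()

# async function to run several requests concurrently
async def main():
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, "http://httpbin.org/ip") for _ in range(3)]
        results = await asyncio.gather(*tasks)
        for result in results:
            print(result)

# run the main async function
asyncio.run(main())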
However, even rotating proxies may not be enough against strong anti-bot systems. Let's see what happens if we try to access a protected website like G2 Reviews.
Replace the old HTTPBin URL with G2 Reviews. You'll likely get errors, so handle them using try...except blocks.
import aiohttp
import asyncio
import random

# function to randomly select and return a proxy
def rotate_proxy():
    proxy_list = [
        "http://8.219.97.248:80",
        "http://3.77.153.38:80",
        "http://3.70.179.165:80",
        "http://20.235.159.154:80",
        "http://35.185.196.38:3128",
    ]
    return random.choice(proxy_list)

# async function to perform HTTP GET request
async def main():
    async with aiohttp.ClientSession() as session:
        # choose a random proxy
        proxy = rotate_proxy()
        try:
            # perform an HTTP GET request
            async with session.get("https://www.g2.com/products/asana/reviews", proxy=proxy) as resp:
                print(resp.status)
        except aiohttp.ClientError as e:
            # print error
            print(f"An error occurred: {e}")

# If encountering the "Event loop is closed" RuntimeError on Windows machines,
# uncomment the following line of code to resolve the issue:
# asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

# run the main async function
asyncio.run(main())
You'll get the 403 status code as output:
403
Error 403 means the access was denied by the target server. This is common with sites protected by WAFs (Web Application Firewalls) like Cloudflare. Check out our guides on bypassing Cloudflare 403 errors and WAF bypass techniques to learn more.
Use Premium Proxies
Free proxies present major challenges for web scraping operations. Their unreliable performance, security vulnerabilities, and frequent blocking make them impractical for production environments. Websites quickly detect and block these free proxies, disrupting your data collection workflows.
Premium proxies offer a superior solution for avoiding detection. With high-quality IPs and advanced rotation capabilities, premium proxies can effectively mask your automated requests. Features like intelligent routing and geolocation targeting dramatically improve your scraping success rate.
ZenRows' Residential Proxies emerge as a powerful solution, providing access to 55M+ residential IPs across 185+ countries. With features like dynamic IP rotation, smart proxy selection, and customizable geo-targeting, all backed by 99.9% uptime, it's perfect for high-performance async scraping with aiohttp.
Let's implement ZenRows' Residential Proxies with aiohttp.
First, sign up and visit the Proxy Generator dashboard. Your proxy credentials will be automatically generated.

Take your proxy credentials and update the placeholders in the following code:
import aiohttp
import asyncio

# async function to perform HTTP GET request
async def main():
    async with aiohttp.ClientSession() as session:
        # define authentication credentials in proxy URL
        proxy = "http://<ZENROWS_PROXY_USERNAME>:<ZENROWS_PROXY_PASSWORD>@superproxy.zenrows.com:1337"
        # perform an HTTP GET request
        async with session.get("http://httpbin.io/ip", proxy=proxy) as resp:
            print(await resp.text())

# run the main async function
asyncio.run(main())
Running this code multiple times will show output like this:
# request 1
{
  "origin": "194.156.95.178:31337"
}
# request 2
{
  "origin": "45.155.68.129:8118"
}
Perfect! The different IP addresses confirm that your requests are successfully routed through ZenRows' residential proxy network. Your aiohttp client is now using premium proxies that significantly reduce blocking risks during high-performance async scraping operations.
Conclusion
This tutorial showed how to set up a proxy in AIOHTTP with Python. You started with a single proxy configuration and then moved on to more robust methods, including rotating and premium proxies.
Avoid the hassle of finding and configuring proxies. Use ZenRows, a reliable solution that bypasses any anti-bot protection. Try ZenRows for free!