Scrapy Middleware: Setting the User-Agent and a Proxy
In Scrapy, setting the User-Agent and routing requests through a proxy can both be done with downloader middleware. Here is example code:
- User-Agent middleware (user_agent_middleware.py); a variant that reads the list from settings is sketched after the block:
```python
import random

from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware


class RandomUserAgentMiddleware(UserAgentMiddleware):
    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        user_agent_list = [
            'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36',
            'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2225.0 Safari/537.36',
            # more User-Agent strings
        ]
        # Pick a random User-Agent for each outgoing request
        user_agent = random.choice(user_agent_list)
        request.headers.setdefault('User-Agent', user_agent)
```
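If you would rather not hard-code the list, here is a minimal variant sketch that pulls it from settings.py. The USER_AGENT_LIST setting name is an assumption for illustration, not a built-in Scrapy setting:

```python
import random


class SettingsUserAgentMiddleware:
    """Sketch: pull the User-Agent pool from settings instead of hard-coding it."""

    def __init__(self, user_agent_list):
        self.user_agent_list = user_agent_list

    @classmethod
    def from_crawler(cls, crawler):
        # USER_AGENT_LIST is an assumed custom setting: a list of UA strings
        return cls(crawler.settings.getlist('USER_AGENT_LIST'))

    def process_request(self, request, spider):
        if self.user_agent_list:
            request.headers['User-Agent'] = random.choice(self.user_agent_list)
```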
- Proxy middleware (proxy_middleware.py); a rotating variant is sketched after the block:
```python
class ProxyMiddleware:
    def __init__(self, proxy):
        self.proxy = proxy

    @classmethod
    def from_crawler(cls, crawler):
        # Read the proxy address from the HTTP_PROXY setting
        return cls(proxy=crawler.settings.get('HTTP_PROXY'))

    def process_request(self, request, spider):
        # Route every request through the configured proxy
        request.meta['proxy'] = self.proxy
```
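A single fixed proxy is often not enough. The sketch below rotates over several proxies instead; PROXY_LIST is an invented custom setting name (a list of 'http://ip:port' strings), not a Scrapy built-in:

```python
import random


class RandomProxyMiddleware:
    """Sketch: pick a random proxy for each request from a custom PROXY_LIST setting."""

    def __init__(self, proxy_list):
        self.proxy_list = proxy_list

    @classmethod
    def from_crawler(cls, crawler):
        # PROXY_LIST is an assumed custom setting holding several proxy URLs
        return cls(crawler.settings.getlist('PROXY_LIST'))

    def process_request(self, request, spider):
        if self.proxy_list:
            request.meta['proxy'] = random.choice(self.proxy_list)
```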
- Enable the middleware in settings.py:
```python
DOWNLOADER_MIDDLEWARES = {
    # Disable the built-in middleware so it cannot set the User-Agent header first
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'your_project_name.middlewares.RandomUserAgentMiddleware': 400,
    'your_project_name.middlewares.ProxyMiddleware': 500,
}

# Proxy setting
HTTP_PROXY = 'http://proxy_ip:port'
```
Replace your_project_name with your project name and proxy_ip:port with your proxy server's address and port. If you keep the classes in separate files such as user_agent_middleware.py and proxy_middleware.py, adjust the module paths in DOWNLOADER_MIDDLEWARES accordingly.
With this configuration, the Scrapy spider will pick a random User-Agent from the list and send every request through the proxy server.
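To confirm the setup actually works, a throwaway spider like the sketch below can help: httpbin.org/get echoes the request headers and the caller's IP back as JSON, so the randomized User-Agent and the proxy's address should both show up in the log.

```python
import json

import scrapy


class CheckSpider(scrapy.Spider):
    """Sketch: fetch httpbin.org/get and log what the server saw."""

    name = 'check'
    start_urls = ['https://httpbin.org/get']

    def parse(self, response):
        data = json.loads(response.text)
        self.logger.info('User-Agent seen by server: %s',
                         data['headers'].get('User-Agent'))
        self.logger.info('Origin IP seen by server: %s', data.get('origin'))
```

Run it with `scrapy crawl check` and compare the logged origin IP with your proxy's address.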