《Python 网络爬虫简易速速上手小册》Chapter 6: Optimization Strategies for Python Crawlers (2024 Edition)
Since the code in the original book is already quite concise and well optimized, what follows is an example of the core functions of an optimized crawler. It shows how to fetch and parse pages with the requests library and BeautifulSoup, and adds exception handling plus a thread-pool/process-pool optimization:
import requests
from bs4 import BeautifulSoup
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def get_url_content(url):
    """Fetch a page and return its HTML text, or None on failure."""
    try:
        # Always set a timeout so a stalled server cannot hang a worker.
        response = requests.get(url, timeout=10)
        if response.status_code == 200:
            return response.text
        return None  # treat non-200 responses as failures
    except requests.RequestException:
        return None  # network problem (DNS, connection, timeout, ...)

def parse_html(html_content):
    """Return the page title, or None if the page could not be parsed."""
    if html_content is None:
        return None
    soup = BeautifulSoup(html_content, 'html.parser')
    return soup.title.string if soup.title else None

def crawl_single_page(url):
    html_content = get_url_content(url)
    page_title = parse_html(html_content)
    print(f"Page title of {url}: {page_title}")

def crawl_pages(urls, use_threads=False, use_processes=False):
    # Crawling is I/O-bound, so a thread pool is the default;
    # a process pool only pays off when parsing is CPU-heavy.
    if use_processes and not use_threads:
        executor_cls = ProcessPoolExecutor
    else:
        executor_cls = ThreadPoolExecutor
    with executor_cls(max_workers=5) as executor:
        # map() schedules one task per URL; leaving the `with` block
        # waits for all of them to finish.
        executor.map(crawl_single_page, urls)

# Example usage. The __main__ guard is required on platforms that spawn
# worker processes (e.g. Windows) when use_processes=True.
if __name__ == '__main__':
    urls = ['http://example.com/', 'http://example.org/', 'http://example.net/']
    crawl_pages(urls, use_threads=True)  # use the thread pool
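One further optimization at the request level, in keeping with this chapter's theme, is connection reuse: a requests.Session keeps HTTP keep-alive connections open, so repeated requests to the same host skip the TCP handshake. A Session should not be shared across threads, so a common pattern is one Session per worker thread via threading.local. Below is a minimal sketch under that assumption; get_session and get_url_content_pooled are illustrative names, not from the book:

import threading
import requests

_thread_local = threading.local()

def get_session():
    # Lazily create one Session per thread; each Session maintains its
    # own keep-alive connection pool.
    if not hasattr(_thread_local, 'session'):
        _thread_local.session = requests.Session()
    return _thread_local.session

def get_url_content_pooled(url):
    """Drop-in replacement for get_url_content that reuses connections."""
    try:
        response = get_session().get(url, timeout=10)
        response.raise_for_status()  # raises on 4xx/5xx status codes
        return response.text
    except requests.RequestException:
        return None

With the thread pool above, swapping get_url_content for get_url_content_pooled inside crawl_single_page avoids re-opening a connection for every page fetched from the same host.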
The main example above shows how to fetch page content with the requests library, parse HTML with BeautifulSoup, and crawl multiple pages concurrently with ThreadPoolExecutor. The use_threads and use_processes parameters select between a thread pool and a process pool. Concise as it is, the example covers error handling and concurrent task handling, making it a reasonable starting point for learning to write crawlers.
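One caveat with executor.map: exceptions raised inside a worker are only re-raised when the result iterator is consumed, so a fire-and-forget call like the one above never sees them. If per-URL success/failure reporting matters, the same concurrent.futures module offers submit() and as_completed(). The sketch below is illustrative; crawl_pages_collect and its 'ok'/'failed' bookkeeping are not from the book:

from concurrent.futures import ThreadPoolExecutor, as_completed

def crawl_pages_collect(urls, max_workers=5):
    """Crawl URLs concurrently and record which ones failed."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        # submit() returns a Future per URL; as_completed() yields each
        # Future as soon as it finishes, in completion order.
        future_to_url = {executor.submit(crawl_single_page, url): url
                         for url in urls}
        for future in as_completed(future_to_url):
            url = future_to_url[future]
            try:
                future.result()  # re-raises any exception from the worker
                results[url] = 'ok'
            except Exception as exc:
                results[url] = f'failed: {exc}'
    return results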