Web crawler for Sogou search, based on the requests library
import requests
from bs4 import BeautifulSoup


def crawl_sogou(keyword, num_pages):
    for i in range(num_pages):
        # Sogou result pages are offset in steps of 10 results.
        page_number = i * 10
        url = f'https://www.sogou.com/web?query={keyword}&ie=utf8&start={page_number}'
        # A browser User-Agent makes the request look like an ordinary visit.
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                          '(KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'}
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            soup = BeautifulSoup(response.text, 'lxml')
            # Result links carry the class 'txt-link'; pull each href and its text.
            for result in soup.find_all('a', {'href': True, 'class': 'txt-link'}):
                link = result['href']
                title = result.text.strip()
                print(f'Title: {title}, Link: {link}')


if __name__ == '__main__':
    keyword = 'Python'
    num_pages = 3
    crawl_sogou(keyword, num_pages)
This code uses the requests library to send HTTP requests and the BeautifulSoup library to parse the returned HTML. It defines a crawl_sogou function that takes a search keyword and the number of pages to crawl, visits each page in a loop, extracts the links and titles found on the page, and prints them. The example shows how to write a basic web crawler in Python.
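To make the parsing step easier to follow on its own, here is a minimal sketch that applies the same find_all attribute filter to a small, hard-coded HTML snippet. The sample markup and the helper name parse_results are made up for illustration and are not taken from Sogou's real pages.

from bs4 import BeautifulSoup

# Hypothetical sample markup, standing in for a downloaded result page.
SAMPLE_HTML = '''
<div class="results">
  <a class="txt-link" href="https://example.com/a">First result</a>
  <a class="txt-link" href="https://example.com/b">Second result</a>
  <a class="other" href="https://example.com/c">Ignored: different class</a>
</div>
'''


def parse_results(html):
    # Same filter as in crawl_sogou: anchors with an href and the class 'txt-link'.
    soup = BeautifulSoup(html, 'lxml')
    for a in soup.find_all('a', {'href': True, 'class': 'txt-link'}):
        yield a.text.strip(), a['href']


if __name__ == '__main__':
    for title, link in parse_results(SAMPLE_HTML):
        print(f'Title: {title}, Link: {link}')

Separating the extraction logic from the network request like this also makes it easy to test the parsing against saved HTML without hitting the live site.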