[Python] Web scraping: crawling the Douban Top 250 movie list
Below is an example that uses Python's requests and BeautifulSoup libraries to scrape the Douban Top 250 movies:
import requests
from bs4 import BeautifulSoup
import csv

# Build the URLs of the 10 pages that make up Douban's Top 250
def get_pages(url):
    pages = []
    for i in range(10):
        page_url = f'{url}?start={i*25}'
        pages.append(page_url)
    return pages

# Fetch a page and return its HTML, or None on failure
def parse_page(url):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'}
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        return response.text
    return None

# Save the collected rows to a CSV file, with a header row
def save_data(data, filename):
    with open(filename, 'w', newline='', encoding='utf-8') as file:
        writer = csv.writer(file)
        writer.writerow(['title', 'rating', 'quote'])
        for item in data:
            writer.writerow(item)

# Extract the movie title, rating, and tagline from a parsed page
def extract_data(soup):
    data = []
    for item in soup.find_all('div', class_='info'):
        # First <span> inside the title link holds the Chinese title
        movie_name = item.find('div', class_='hd').a.span.text
        # The numeric score lives in <span class="rating_num">
        rating_score = item.find('span', class_='rating_num').text
        # The tagline is a <span class="inq">; some entries have none
        inq = item.find('span', class_='inq')
        quote = inq.text if inq else ''
        data.append([movie_name, rating_score, quote])
    return data

# Main entry point: crawl all pages and write the results to CSV
def main():
    base_url = 'https://movie.douban.com/top250'
    pages = get_pages(base_url)
    movie_data = []
    for page in pages:
        html = parse_page(page)
        if html:  # skip pages that failed to download
            soup = BeautifulSoup(html, 'html.parser')
            movie_data.extend(extract_data(soup))
    save_data(movie_data, 'douban_top250.csv')

if __name__ == '__main__':
    main()
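The pagination logic can be checked without touching the network: Douban's Top 250 spans 10 pages of 25 entries each, addressed by a `start` query parameter. A minimal standalone check of the same scheme:

```python
def get_pages(url):
    # Same pagination scheme as above: 10 pages, offsets 0, 25, ..., 225
    return [f'{url}?start={i * 25}' for i in range(10)]

pages = get_pages('https://movie.douban.com/top250')
print(len(pages))    # 10
print(pages[0])      # https://movie.douban.com/top250?start=0
print(pages[-1])     # https://movie.douban.com/top250?start=225
```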
This code first defines functions for building the page URLs, fetching a page, saving data, and extracting the movie information. The main() function combines them into a complete pipeline that scrapes the Douban Top 250 listings and writes them to a CSV file. Before running the code, make sure the requests and beautifulsoup4 libraries are installed (pip install requests beautifulsoup4).
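To see what extract_data keys on, here is a hand-written HTML fragment, a simplified stand-in for Douban's real markup (assuming the div.info / div.hd / span.rating_num / span.inq structure used above), parsed the same way:

```python
from bs4 import BeautifulSoup

# Simplified stand-in for one entry of the list markup (assumed structure)
html = '''
<div class="info">
  <div class="hd"><a href="#"><span class="title">示例电影</span></a></div>
  <div class="star"><span class="rating_num">9.1</span></div>
  <p class="quote"><span class="inq">示例引语</span></p>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')
item = soup.find('div', class_='info')
movie_name = item.find('div', class_='hd').a.span.text
rating = item.find('span', class_='rating_num').text
quote = item.find('span', class_='inq').text
print([movie_name, rating, quote])  # ['示例电影', '9.1', '示例引语']
```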
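The CSV side can likewise be exercised offline. The sketch below uses made-up placeholder rows shaped like extract_data's output, writes them to an in-memory buffer with the same header row as save_data, and reads them back with the standard-library csv module:

```python
import csv
import io

# Hypothetical sample rows in the [title, rating, quote] shape
rows = [
    ['电影甲', '9.0', '示例短评一'],
    ['电影乙', '8.5', '示例短评二'],
]

# Write to an in-memory buffer instead of a file on disk
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(['title', 'rating', 'quote'])  # header row
writer.writerows(rows)

# Read the CSV back and confirm the round trip
buf.seek(0)
records = list(csv.reader(buf))
print(records[0])        # ['title', 'rating', 'quote']
print(len(records) - 1)  # 2 data rows
```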