Graduation Project: Implementing a JD.com Product Scraper in Python
Below is a simplified example of a scraper for JD.com (京东商城) product listings, written with Python's requests and BeautifulSoup libraries.
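The example depends on three third-party packages: requests, beautifulsoup4, and lxml (the HTML parser passed to BeautifulSoup below). They can be installed with pip:

    pip install requests beautifulsoup4 lxml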
import requests
from bs4 import BeautifulSoup
import csv
import time

def crawl_jd(keyword, page_num):
    # Initialize the product list
    products = []
    for i in range(1, page_num + 1):
        print(f"Crawling page {i}...")
        # Note: JD's search results have historically used odd page numbers
        # (page=1, 3, 5, ...); adjust the parameter if pages appear to repeat.
        url = f"https://search.jd.com/Search?keyword={keyword}&page={i}"
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                          "AppleWebKit/537.36 (KHTML, like Gecko) "
                          "Chrome/58.0.3029.110 Safari/537.3"
        }
        response = requests.get(url, headers=headers, timeout=10)
        if response.status_code == 200:
            soup = BeautifulSoup(response.text, 'lxml')
            product_list = soup.find_all('div', class_='gl-item')
            for product in product_list:
                # Extract the product name and price, skipping items whose
                # markup does not match the expected structure
                name_div = product.find('div', class_='p-name')
                price_div = product.find('div', class_='p-price')
                if name_div is None or price_div is None:
                    continue
                name = name_div.a.text.strip()
                price = price_div.strong.text.strip()
                products.append({'name': name, 'price': price})
        else:
            print(f"Request for page {i} failed with status {response.status_code}")
            break
        time.sleep(1)  # Pause between pages to avoid hammering the server
    return products

def save_to_csv(products, filename):
    with open(filename, 'w', newline='', encoding='utf-8') as f:
        writer = csv.DictWriter(f, fieldnames=['name', 'price'])
        writer.writeheader()
        writer.writerows(products)

if __name__ == '__main__':
    keyword = '手机'  # Replace with the product keyword you want to search for
    page_num = 2      # Number of result pages to crawl
    products = crawl_jd(keyword, page_num)
    save_to_csv(products, 'jd_products.csv')
This code implements the basic scraping workflow: it fetches product information for a given search keyword and writes the results to a CSV file. Be aware that JD.com's page structure and anti-bot measures change over time, so the CSS classes used above (gl-item, p-name, p-price) may stop matching, and some fields (notably prices) may be loaded dynamically rather than served in the static HTML. The code is intended for learning and testing only; in real use, comply with applicable laws and regulations and with JD.com's crawling policy.
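One concrete way to honor a site's crawling policy is to consult its robots.txt before fetching. The sketch below is a minimal illustration using Python's standard urllib.robotparser; note that robots.txt is served per host, and the rules published there may change at any time.

from urllib.robotparser import RobotFileParser

# Fetch and parse the robots.txt for the host we intend to crawl
rp = RobotFileParser()
rp.set_url("https://search.jd.com/robots.txt")
rp.read()

# Ask whether a generic crawler ("*") may fetch the target URL
url = "https://search.jd.com/Search?keyword=手机&page=1"
if rp.can_fetch("*", url):
    print("Allowed by robots.txt")
else:
    print("Disallowed by robots.txt; do not crawl this URL")

A check like this could be run once at the start of crawl_jd, skipping or aborting the run when can_fetch returns False.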