【Web Crawling with the Scrapy Framework -- 尚硅谷 (Study Notes One) -- Basic Steps and Principles + Crawling Dangdang (Basic Steps)】
After learning the basic steps and principles of the Scrapy crawler framework, we can try crawling data from Dangdang (dangdang.com). Below are the basic steps for creating a simple Scrapy project and scraping book information from Dangdang:
- Create a new Scrapy project:

```bash
scrapy startproject dangdang_crawler
```
- Define the crawler Item (a short usage sketch follows the code below):

```python
# items.py
import scrapy


class DangdangItem(scrapy.Item):
    # Define the fields for your item here:
    name = scrapy.Field()
    author = scrapy.Field()
    price = scrapy.Field()
    publisher = scrapy.Field()
```
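For readers new to Scrapy, an Item behaves much like a dictionary with a fixed set of allowed keys. A minimal sketch of how `DangdangItem` can be used (the field values here are made-up examples, not real data):

```python
# Items accept keyword arguments for declared fields.
item = DangdangItem(name='Example Book', price='29.80')
item['author'] = 'Example Author'            # assign another declared field
print(item['name'], item.get('publisher'))   # unset fields return None via get()
# item['isbn'] = '...'  would raise KeyError, because 'isbn' is not declared
data = dict(item)                            # convert to a plain dict if needed
```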
- Write the spider (an offline test sketch follows the code below):

```python
# spiders/dangdang_spider.py
import scrapy

from dangdang_crawler.items import DangdangItem


class DangdangSpider(scrapy.Spider):
    name = 'dangdang'
    allowed_domains = ['dangdang.com']
    start_urls = ['http://category.dangdang.com/pg1-cid20000.html']

    def parse(self, response):
        # 'ul.bigimg li' is assumed to be the per-book container on the category
        # page; adjust it and the field selectors if the live markup differs.
        for book in response.css('ul.bigimg li'):
            item = DangdangItem()
            # extract_first() may return None, so guard before calling strip().
            name = book.css('.name a::text').extract_first()
            item['name'] = name.strip() if name else ''
            item['author'] = book.css('.author::text').extract_first()
            # Scope price and publisher to the current book rather than the whole
            # page, otherwise every item would get the first book's values.
            item['price'] = book.css('.price .sys-price::text').extract_first()
            item['publisher'] = book.css('.publisher::text').extract_first()
            yield item

        # Follow the "next page" link until the last page is reached.
        next_page = response.css('.paging a.next::attr(href)').extract_first()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```
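Before launching a full crawl, it can help to sanity-check the parsing logic offline. The sketch below is not part of the original notes: it feeds a tiny hand-written HTML fragment (a made-up stand-in for the real Dangdang page) into `parse()` through `scrapy.http.HtmlResponse`. Run it from the project root so the package imports resolve:

```python
# test_parse.py -- quick offline check of the parse() logic
from scrapy.http import HtmlResponse

from dangdang_crawler.spiders.dangdang_spider import DangdangSpider

SAMPLE_HTML = b"""
<ul class="bigimg">
  <li>
    <p class="name"><a> Example Book </a></p>
    <p class="author">Example Author</p>
    <p class="price"><span class="sys-price">29.80</span></p>
    <p class="publisher">Example Press</p>
  </li>
</ul>
"""

response = HtmlResponse(url='http://category.dangdang.com/pg1-cid20000.html',
                        body=SAMPLE_HTML, encoding='utf-8')
for item in DangdangSpider().parse(response):
    print(dict(item))  # expect name/author/price/publisher filled from the sample
```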
- Set up a pipeline to save the data:

```python
# pipelines.py
class DangdangCrawlerPipeline:

    def __init__(self):
        # Open the output file once and write the CSV header.
        # Note: this naive string join does not escape commas inside field
        # values; the csv module (see the sketch below) handles that properly.
        self.file = open('items.csv', 'w', encoding='utf-8')
        self.file.write('name,author,price,publisher\n')

    def process_item(self, item, spider):
        line = f"{item['name']},{item['author']},{item['price']},{item['publisher']}\n"
        self.file.write(line)
        return item

    def close_spider(self, spider):
        self.file.close()
```
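Book titles and publisher names often contain commas, which would break the hand-rolled CSV line above. A more robust variant (my own sketch, not from the original notes; the class name is only illustrative) uses Python's csv module instead:

```python
# pipelines.py -- alternative pipeline using the csv module
import csv


class DangdangCsvPipeline:

    def open_spider(self, spider):
        # newline='' lets the csv module manage line endings consistently.
        self.file = open('items.csv', 'w', newline='', encoding='utf-8')
        self.writer = csv.writer(self.file)
        self.writer.writerow(['name', 'author', 'price', 'publisher'])

    def process_item(self, item, spider):
        # csv.writer automatically quotes fields containing commas or quotes.
        self.writer.writerow([item.get('name'), item.get('author'),
                              item.get('price'), item.get('publisher')])
        return item

    def close_spider(self, spider):
        self.file.close()
```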
- Enable the pipeline in settings.py (a few other commonly tuned settings follow below):

```python
# settings.py
ITEM_PIPELINES = {
    'dangdang_crawler.pipelines.DangdangCrawlerPipeline': 300,
}
```
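Besides ITEM_PIPELINES, a few other settings are often worth adjusting for a crawl like this. The values below are suggestions of my own, not from the original notes; tune them to your situation and make sure the crawl stays polite and within the site's terms:

```python
# settings.py (additional options; the values are only examples)
ROBOTSTXT_OBEY = True                  # respect robots.txt
DOWNLOAD_DELAY = 1                     # wait 1 second between requests
CONCURRENT_REQUESTS_PER_DOMAIN = 4     # limit parallel requests per domain
DEFAULT_REQUEST_HEADERS = {
    'Accept-Language': 'zh-CN,zh;q=0.9',
}
FEED_EXPORT_ENCODING = 'utf-8'         # used when exporting with `scrapy crawl -o`
```

With everything in place, the crawl is started from the project root with `scrapy crawl dangdang`.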
The code above implements a simple Scrapy crawler that scrapes book information from Dangdang and saves the data to a CSV file. The example shows how to define an Item, write a spider, and use a pipeline, giving learners a practical starting point.