Python Lab Project 9: Web Scraping and Automation
import requests
from bs4 import BeautifulSoup
import re
import os


def download_image(image_url, directory):
    """Download an image into the given directory."""
    response = requests.get(image_url)
    image_name = image_url.split('/')[-1]
    with open(os.path.join(directory, image_name), 'wb') as file:
        file.write(response.content)


def crawl_images(url, directory):
    """Crawl the image links on a page and download them to a local directory."""
    if not os.path.exists(directory):
        os.makedirs(directory)
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')
    # Use .get() so <img> tags without a src attribute don't raise a KeyError.
    image_urls = [image['src'] for image in soup.find_all('img')
                  if image.get('src') and re.match(r'https?://', image['src'])]
    for image_url in image_urls:
        print(f"Downloading image: {image_url}")
        download_image(image_url, directory)


if __name__ == '__main__':
    base_url = 'http://example.webscraping.com/places/default/view/Afghanistan-1'
    directory = 'images'
    crawl_images(base_url, directory)
This code implements a simple web crawler that downloads all the images on a given page. First, the download_image function takes an image URL and a target directory, fetches the image content with the requests library, and writes it to a file in that directory.
Next, the crawl_images function takes a page URL and a target directory. It fetches the page with requests, parses it with BeautifulSoup, and uses a regular expression to keep only complete (absolute) image URLs. It then iterates over those links and calls download_image on each one.
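The filtering step can be pulled out as a pure function, which makes it easy to exercise without any network access. A minimal sketch mirroring the regex filter in crawl_images (the function name is hypothetical):

```python
import re


def absolute_image_urls(srcs):
    # Keep only src values that are already absolute http(s) URLs,
    # matching the re.match(r'http[s]?://', ...) check in crawl_images.
    return [s for s in srcs if s and re.match(r'https?://', s)]


srcs = ["https://example.com/a.png", "/static/b.png", None, "http://example.com/c.gif"]
print(absolute_image_urls(srcs))
# ['https://example.com/a.png', 'http://example.com/c.gif']
```

Note that re.match anchors at the start of the string, so scheme-relative values like "//cdn.example.com/a.png" and site-relative paths are both excluded.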
Finally, the if __name__ == '__main__': block sets the base page URL and the image directory, then calls crawl_images to start the crawl.
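Because the regex filter only accepts absolute URLs, images referenced with relative src values (common on real pages) are silently skipped. The standard library's urllib.parse.urljoin can resolve them against the page URL before filtering; a sketch (the example image paths are assumptions for illustration):

```python
from urllib.parse import urljoin

base = "http://example.webscraping.com/places/default/view/Afghanistan-1"

# A root-relative src replaces the whole path of the base URL.
print(urljoin(base, "/places/static/images/flags/af.png"))
# http://example.webscraping.com/places/static/images/flags/af.png

# A plain relative src is resolved against the base URL's directory.
print(urljoin(base, "flags/af.png"))
# http://example.webscraping.com/places/default/view/flags/af.png
```

In crawl_images, applying urljoin(url, image['src']) to every src before the regex check would make all page-local images downloadable as well.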