Python Web Scraping in Practice: Scraping a Web Novel
import requests
from bs4 import BeautifulSoup

# Proxy settings (replace user, password, host, and port with real values)
proxies = {
    'http': 'http://user:password@proxy.server.com:port',
    'https': 'https://user:password@proxy.server.com:port',
}

def get_novel_content(url, proxies):
    """Fetch the page through the proxy, sending a browser-like User-Agent."""
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                      'AppleWebKit/537.36 (KHTML, like Gecko) '
                      'Chrome/58.0.3029.110 Safari/537.3'
    }
    response = requests.get(url, headers=headers, proxies=proxies, timeout=10)
    response.raise_for_status()  # fail fast on HTTP errors
    return response.text

def parse_content(html):
    """Extract the chapter text from the <div id="content"> element."""
    soup = BeautifulSoup(html, 'lxml')
    content_div = soup.find('div', id='content')
    if content_div is None:
        raise ValueError('No <div id="content"> found in the page')
    return content_div.text

def save_novel_content(content, file_path):
    """Write the extracted text to a local file."""
    with open(file_path, 'w', encoding='utf-8') as file:
        file.write(content)

def main():
    url = 'http://www.example.com/novel.html'
    file_path = 'novel.txt'
    html = get_novel_content(url, proxies)
    content = parse_content(html)
    save_novel_content(content, file_path)

if __name__ == '__main__':
    main()
This script shows how to scrape a web novel with Python and save it locally. It first configures a proxy server, then defines functions for fetching the page, parsing the HTML, and saving the extracted text to disk. Finally, main() wires these functions together into an automated fetch-and-save pipeline.
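The pipeline above handles a single page. To scrape a whole novel, the same fetch/parse/save steps can be looped over a list of chapter URLs (which would typically be gathered from the book's table-of-contents page, not shown here). Below is a minimal sketch, with fetch and parse passed in as callables so the loop can reuse get_novel_content and parse_content; the stub functions and `crawl_chapters` name in the usage example are illustrative, not part of the original script:

```python
def crawl_chapters(chapter_urls, fetch, parse, file_path):
    """Fetch each chapter, parse it, and append its text to one output file.

    fetch(url) -> html string, parse(html) -> chapter text; in the script
    above these roles would be filled by get_novel_content and parse_content.
    """
    with open(file_path, 'w', encoding='utf-8') as f:
        for url in chapter_urls:
            text = parse(fetch(url))
            f.write(text + '\n\n')  # blank line between chapters

# Offline usage example with stub fetch/parse functions (no network needed):
pages = {'c1': '<div id="content">Chapter one.</div>',
         'c2': '<div id="content">Chapter two.</div>'}
crawl_chapters(['c1', 'c2'],
               fetch=lambda url: pages[url],
               parse=lambda html: html[18:-6],  # crude stand-in for BeautifulSoup
               file_path='novel.txt')
```

Passing the fetch and parse steps as parameters also makes the loop easy to test without hitting the network, as the stubs above show.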