Python Web Scraping: Efficient Data Extraction Techniques (Scraping Basics)
import requests
from bs4 import BeautifulSoup

# Request headers that make the client look like a regular browser
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'}

def get_soup(url):
    """Fetch a page and parse it into a BeautifulSoup tree."""
    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()  # fail fast on HTTP errors
    return BeautifulSoup(response.text, 'html.parser')

def get_players_data(soup):
    """Extract player data from the parsed page."""
    players_data = []
    # Assume each player's info lives in a <table> inside <div class="player">
    for block in soup.find_all('div', class_='player'):
        table = block.find('table')
        if table is None:
            continue  # skip player blocks without a data table
        tbody = table.find('tbody') or table
        player_data = {}
        for row in tbody.find_all('tr'):
            # Assume each attribute is a <td class="label"> / <td class="info"> pair
            label = row.find('td', class_='label')
            info = row.find('td', class_='info')
            if label and info:
                player_data[label.get_text(strip=True)] = info.get_text(strip=True)
        players_data.append(player_data)
    return players_data

# Example URL
url = 'http://example.com/players'
soup = get_soup(url)
players_data = get_players_data(soup)

# Print the scraped data
for player in players_data:
    print(player)
This example shows how to use Python's requests and BeautifulSoup libraries to scrape web data. The get_soup function sends the request and parses the page, while get_players_data extracts the (assumed) player information. It teaches developers how to scrape tabular data from a page in a structured way, a common task in many scraping projects.
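Before pointing the scraper at a live site, the parsing logic can be checked offline against an inline HTML snippet. This is a minimal sketch: the div/label/info layout below is the article's assumed structure, not any real site's markup, and the player names are made up for illustration.

```python
from bs4 import BeautifulSoup

# Inline HTML mirroring the assumed page structure:
# <div class="player"> wrapping a table of label/info cell pairs.
sample_html = """
<div class="player">
  <table>
    <tbody>
      <tr><td class="label">Name</td><td class="info">Alice</td></tr>
      <tr><td class="label">Team</td><td class="info">Reds</td></tr>
    </tbody>
  </table>
</div>
"""

soup = BeautifulSoup(sample_html, 'html.parser')
players = []
for block in soup.find_all('div', class_='player'):
    player = {}
    # CSS selectors are an equivalent alternative to chained find() calls
    for row in block.select('tbody tr'):
        label = row.find('td', class_='label')
        info = row.find('td', class_='info')
        if label and info:  # skip rows missing either cell
            player[label.get_text(strip=True)] = info.get_text(strip=True)
    players.append(player)

print(players)  # -> [{'Name': 'Alice', 'Team': 'Reds'}]
```

Testing against a fixed snippet like this makes it easy to catch selector mistakes without sending any network requests.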