Sharing 7 Small Python Web Scraping Examples
Due to space constraints, what follows is the core code for each of the seven Python scraping examples.
- Scraping table data from a web page:
import requests
from bs4 import BeautifulSoup
url = 'http://example.com/table'
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')
table = soup.find('table')  # adjust the selector to the actual page structure
rows = table.find_all('tr')
for tr in rows:
    cols = tr.find_all('td')
    for td in cols:
        print(td.text.strip(), end=' ')
    print()  # newline after each row
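If the table is well-formed, pandas can often extract it in a single call. A minimal alternative sketch, assuming pandas and lxml are installed and the page really does contain a <table>:

import pandas as pd
# read_html returns a list of DataFrames, one per <table> found on the page
tables = pd.read_html('http://example.com/table')
print(tables[0])  # the first table as a DataFrame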
- Scraping an image from a web page:
import requests
url = 'http://example.com/image'
r = requests.get(url)
with open('image.jpg', 'wb') as f:
    f.write(r.content)  # r.content is the raw binary body of the response
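For large files it is safer not to hold the whole body in memory; requests' streaming mode downloads in chunks. A sketch using the same placeholder URL:

import requests
url = 'http://example.com/image'
r = requests.get(url, stream=True)  # defer downloading the body
with open('image.jpg', 'wb') as f:
    for chunk in r.iter_content(chunk_size=8192):  # write 8 KB at a time
        f.write(chunk)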
- Scraping links from a web page:
import requests
from bs4 import BeautifulSoup
url = 'http://example.com'
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')
for link in soup.find_all('a'):
    print(link.get('href'))
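Note that href values are frequently relative paths (and some anchors have no href at all); urllib.parse.urljoin resolves them against the page URL. A small sketch extending the loop above:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
url = 'http://example.com'
soup = BeautifulSoup(requests.get(url).text, 'html.parser')
for link in soup.find_all('a'):
    href = link.get('href')
    if href:  # skip anchors without an href attribute
        print(urljoin(url, href))  # convert relative paths to absolute URLs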
- Scraping page data with multiple threads or processes:
import requests
from multiprocessing.pool import ThreadPool
urls = ['http://example.com/page1', 'http://example.com/page2', ...]
def get_content(url):
    return requests.get(url).text
pool = ThreadPool(processes=4)  # adjust the thread count to your workload
results = pool.map(get_content, urls)
pool.close()
pool.join()
for result in results:
    print(result)
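The standard library's concurrent.futures offers a more modern interface for the same pattern; as a sketch, the ThreadPool version above could be rewritten like this:

import requests
from concurrent.futures import ThreadPoolExecutor
urls = ['http://example.com/page1', 'http://example.com/page2']
def get_content(url):
    return requests.get(url).text
# the with-block joins the workers on exit; map() preserves input order
with ThreadPoolExecutor(max_workers=4) as pool:
    for result in pool.map(get_content, urls):
        print(result)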
- Scraping data through a proxy server:
import requests
url = 'http://example.com'
proxy = {'http': 'http://proxy.example.com:8080', 'https': 'https://proxy.example.com:8080'}
r = requests.get(url, proxies=proxy)
print(r.text)
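A single proxy can fail or get blocked, so in practice you often rotate over a pool and set a timeout. A sketch, where the proxy addresses are placeholders:

import random
import requests
proxy_pool = [  # hypothetical proxy addresses
    {'http': 'http://proxy1.example.com:8080', 'https': 'http://proxy1.example.com:8080'},
    {'http': 'http://proxy2.example.com:8080', 'https': 'http://proxy2.example.com:8080'},
]
url = 'http://example.com'
try:
    r = requests.get(url, proxies=random.choice(proxy_pool), timeout=10)
    print(r.status_code)
except requests.RequestException as e:  # covers timeouts and connection errors
    print('request failed:', e)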
- Scraping data after logging in:
import requests
login_url = 'http://example.com/login'        # the site's login endpoint
protected_url = 'http://example.com/protected'
payload = {'username': 'user', 'password': 'pass'}
session = requests.Session()                  # a Session keeps the login cookies
session.post(login_url, data=payload)         # log in first
r = session.get(protected_url)                # then fetch the protected page
print(r.text)
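Many login forms also embed a hidden CSRF token that must be submitted along with the credentials. A sketch of scraping it first, assuming the hidden field is named 'csrf_token' (inspect the real form to find the actual name):

import requests
from bs4 import BeautifulSoup
login_url = 'http://example.com/login'  # hypothetical login form URL
session = requests.Session()
form_page = session.get(login_url)
soup = BeautifulSoup(form_page.text, 'html.parser')
# the field name 'csrf_token' is an assumption; adjust to the real form
token = soup.find('input', {'name': 'csrf_token'})['value']
payload = {'username': 'user', 'password': 'pass', 'csrf_token': token}
session.post(login_url, data=payload)
r = session.get('http://example.com/protected')
print(r.text)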
- Using Selenium to scrape JavaScript-rendered pages:
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('http://example.com')
print(driver.page_source)  # the HTML after JavaScript has executed
driver.quit()  # quit() ends the browser session and shuts down the driver process
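Because JavaScript content loads asynchronously, reading page_source right after get() may return an incomplete page. Selenium's explicit waits block until a target element appears; a sketch, where the selector '#content' is a placeholder:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
driver.get('http://example.com')
# wait up to 10 seconds for the JavaScript-rendered element to appear
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, '#content'))
)
print(driver.page_source)
driver.quit()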
These code examples demonstrate different approaches to scraping tasks, including HTML parsing, multi-threaded/multi-process fetching, proxy usage, login authentication, and browser automation. In practice, you will need to adjust and optimize them for the specific target site.