Parsing HTML Pages in Python: These 4 Methods Are All You Need
Commonly used libraries for parsing HTML pages in Python include BeautifulSoup, lxml, html.parser, and pyquery. Below is a brief introduction to each, with example code.
- BeautifulSoup:
from bs4 import BeautifulSoup
import requests
url = 'http://example.com'
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')
# Example: get the page title
print(soup.title.text)
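Beyond reading the title, BeautifulSoup can search the tree with `find_all` or CSS selectors. A minimal sketch using an inline HTML string (so it runs without a network request; the markup here is made up for illustration):

```python
from bs4 import BeautifulSoup

html_doc = """
<html><body>
<ul>
  <li class="item">Apple</li>
  <li class="item">Banana</li>
</ul>
<a href="/about">About</a>
</body></html>
"""

soup = BeautifulSoup(html_doc, 'html.parser')

# find_all collects every tag matching the name and attributes
items = [li.text for li in soup.find_all('li', class_='item')]
print(items)  # ['Apple', 'Banana']

# select() accepts CSS selectors
links = [a['href'] for a in soup.select('a')]
print(links)  # ['/about']
```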
- lxml:
from lxml import html
import requests
url = 'http://example.com'
r = requests.get(url)
tree = html.fromstring(r.text)
# Example: find elements with an XPath selector
elements = tree.xpath('//div[@class="example"]')
for element in elements:
    print(element.text)
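XPath queries also work on an inline document, and they can return text nodes and attribute values directly rather than element objects. A self-contained sketch (the markup is made up for illustration):

```python
from lxml import html

html_doc = """
<html><body>
<div class="example">first</div>
<div class="example">second</div>
<a href="/docs">Docs</a>
</body></html>
"""

tree = html.fromstring(html_doc)

# text() in the XPath expression returns the text nodes themselves
texts = tree.xpath('//div[@class="example"]/text()')
print(texts)  # ['first', 'second']

# @href returns the attribute values as strings
hrefs = tree.xpath('//a/@href')
print(hrefs)  # ['/docs']
```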
- html.parser:
import html.parser
import urllib.request
class MyHTMLParser(html.parser.HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for attr in attrs:
                if attr[0] == 'href':
                    print('Link:', attr[1])

url = 'http://example.com'
response = urllib.request.urlopen(url)
parser = MyHTMLParser()
parser.feed(response.read().decode())
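Because `feed()` accepts any string, the same parser pattern is easy to test without a network request. A small variant that collects links into a list instead of printing them (the class name is my own):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

parser = LinkCollector()
parser.feed('<a href="/a">A</a><p>text</p><a href="/b">B</a>')
print(parser.links)  # ['/a', '/b']
```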
- pyquery:
from pyquery import PyQuery as pq
import requests
url = 'http://example.com'
r = requests.get(url)
doc = pq(r.text)
# Example: find all links
for link in doc('a'):
    print(link.attrib['href'])
The code above covers the basic operations: getting a page title, finding specific elements, extracting links, and reading text content. Which library to choose depends on your specific needs and your project's constraints.