【Python】Scrape a web article with a crawler and draw a word cloud of its high-frequency words
import jieba
import jieba.analyse  # extract_tags lives in the analyse submodule, which must be imported explicitly
import requests
from wordcloud import WordCloud
from matplotlib import pyplot as plt
# Fetch the page content
def get_html(url):
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = r.apparent_encoding  # use the encoding guessed from the response body
        return r.text
    except requests.RequestException:
        return ""
# Extract keywords and return a list of (word, weight) pairs
def get_word_freq(text):
    # extract_tags segments the text internally, so a separate jieba.cut pass is redundant;
    # it returns the topK keywords ranked by TF-IDF weight
    word_freq = jieba.analyse.extract_tags(text, topK=100, withWeight=True)
    return word_freq
# Draw the word cloud
def draw_wordcloud(word_freq):
    word_dict = dict(word_freq)
    wordcloud = WordCloud(font_path='simhei.ttf',  # a CJK-capable font is required to render Chinese words
                          background_color='white',
                          max_words=100,
                          max_font_size=40,
                          random_state=42)
    wordcloud.fit_words(word_dict)  # build the cloud from the word -> weight mapping
    plt.imshow(wordcloud, interpolation='bilinear')
    plt.axis('off')
    plt.show()
# Main entry point
def main():
    url = "http://example.com"  # replace with the URL of the target page
    html = get_html(url)
    word_freq = get_word_freq(html)
    draw_wordcloud(word_freq)

if __name__ == '__main__':
    main()
In this example, we first import the required modules, then define functions that fetch the page content, extract keywords with their weights, and draw the word cloud; main() chains these steps into the full pipeline. Note that you need to replace "http://example.com" with the URL of the page you want to scrape, and make sure the font path (here simhei.ttf) points to a font that can actually render Chinese. Also be aware that get_html() returns raw HTML, so tag names and attributes will leak into the keyword list; stripping the markup first, as sketched below, produces a much cleaner cloud.
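A minimal sketch of that cleanup step, assuming the beautifulsoup4 package is installed; the extract_text helper below is our own addition rather than part of the original code, and main() would call it between get_html() and get_word_freq():

from bs4 import BeautifulSoup  # assumed dependency: pip install beautifulsoup4

def extract_text(html):
    # Parse the page and drop non-content elements before collecting text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()  # remove scripts and stylesheets entirely
    # get_text() concatenates the remaining visible text nodes
    return soup.get_text(separator="\n", strip=True)

As a further refinement, jieba.analyse.set_stop_words() can point the extractor at a stop-word file so that common function words are excluded from the top-100 list.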