How can I extract only the paragraph text from a link, excluding the other links on the web page?

Time: 2019-01-26 20:12:52

Tags: python python-3.x jupyter-notebook

I am trying to extract sentences from a web page, but I cannot exclude the other links and sidebar icons that appear on that page.

I am trying to find all occurrences of `p` (meaning paragraph tags) on the web page, but I am also getting other, unwanted results.

My code:

  import re
  from nltk import word_tokenize, sent_tokenize, ngrams
  from collections import Counter
  from urllib import request
  from bs4 import BeautifulSoup

  url = "https://www.usatoday.com/story/sports/nba/rockets/2019/01/25/james-harden-30-points-22-consecutive-games-rockets-edge-raptors/2684160002/"
  html = request.urlopen(url).read().decode('utf8')
  raw = BeautifulSoup(html,"lxml") 


  partags = raw.find_all('p')  # to extract only paragraphs
  print(partags)

I get the following output (posted as an image because copy-pasting does not look as tidy):

![output screenshot](https://i.stack.imgur.com/rGC1P.png)

But I only want to extract sentences like these from the link. Is there another filter I can apply?

![desired output](https://i.stack.imgur.com/MlPUV.png)
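One way to keep only article-style paragraphs is to drop `<p>` tags that are empty or that consist solely of a link, since navigation items and icon captions are usually rendered that way. A minimal sketch — the inline HTML below is an illustrative stand-in for the real page, not its actual markup:

```python
from bs4 import BeautifulSoup

# Illustrative HTML standing in for the USA Today page (assumption)
html = """
<html><body>
<p><a href="/more">More stories</a></p>
<p>James Harden scored 35 points for his 22nd consecutive
game with at least 30 points.</p>
<p></p>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

paragraphs = []
for p in soup.find_all("p"):
    text = p.get_text(strip=True)
    if not text:
        continue  # skip empty paragraphs
    link = p.find("a")
    if link and link.get_text(strip=True) == text:
        continue  # skip paragraphs that are nothing but a link
    paragraphs.append(text)

print(paragraphs)
```

The heuristics (empty check, link-only check) are a starting point; real pages may need site-specific selectors such as the article body's container class.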

Code after Valery's feedback.  

partags = raw.get_text()
print(partags)

The output I get (it also contains links and other content in JSON format):

This is just a sample from the full output:

James Harden extends 30-point streak, makes key defensive stop
{
    "@context": "http://schema.org",
    "@type": "NewsArticle",
    "headline": "James Harden extends 30-point streak, makes key defensive stop to help Rockets edge Raptors",
    "description": "James Harden scored 35 points for his 22nd consecutive game with at least 30, and forced Kawhi Leonard into a missed 3 at buzzer for 121-119 win.",
    "url": "https://www.usatoday.com/story/sports/nba/rockets/2019/01/25/james-harden-30-points-22-consecutive-games-rockets-edge-raptors/2684160002/?utm_source=google&utm_medium=amp&utm_campaign=speakable",
    "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://www.usatoday.com/story/sports/nba/rockets/2019/01/25/james-harden-30-points-22-consecutive-games-rockets-edge-raptors/2684160002/"
    },
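The JSON shown above is structured metadata that pages embed in a `<script type="application/ld+json">` tag, and `get_text()` on the whole document includes the contents of script and style tags. A sketch of removing them before extracting text — the inline HTML is illustrative, not the real page:

```python
from bs4 import BeautifulSoup

# Illustrative HTML with the kind of JSON-LD block seen in the output (assumption)
html = """
<html><head>
<script type="application/ld+json">{"@type": "NewsArticle"}</script>
<style>p { color: black; }</style>
</head><body>
<p>Harden forced Leonard into a missed 3 at the buzzer.</p>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# Remove script/style tags and their contents from the parse tree
for tag in soup(["script", "style"]):
    tag.decompose()

clean = soup.get_text(separator=" ", strip=True)
print(clean)
```

After `decompose()`, `get_text()` no longer picks up the JSON-LD or CSS.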

1 answer:

Answer 0: (score: 2)

See the bs4 documentation on this: BeautifulSoup/bs4/doc/#get-text

import requests
from bs4 import BeautifulSoup as bs

response = requests.get("https://www.usatoday.com/story/sports/nba/rockets/2019/01/25/james-harden-30-points-22-consecutive-games-rockets-edge-raptors/2684160002/")
html = response.text
raw = bs(html, "html.parser")  # "html" is not a valid parser name; use "html.parser" (or "lxml")

for partag in raw.find_all('p'):
    print(partag.get_text())

Here is the Link to results.

So calling get_text() on each of the partags (paragraph tags) yields just the readable text, without the noise.
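Since the question imports `sent_tokenize` and `word_tokenize` from nltk, the paragraph texts can be joined into a single article string for tokenizing. A minimal sketch of the joining step (the inline HTML is an illustrative stand-in for the fetched page):

```python
from bs4 import BeautifulSoup

# Illustrative HTML standing in for the fetched article (assumption)
html = """
<html><body>
<p>James Harden scored 35 points.</p>
<p>He has 22 straight games with at least 30.</p>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# Collect only the paragraph text, skipping everything else on the page
article = " ".join(p.get_text(strip=True) for p in soup.find_all("p"))
print(article)
```

The resulting `article` string can then be passed to `sent_tokenize(article)` or `word_tokenize(article)` as in the question's imports.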