I have a function that extracts the text of an article from a URL:

    def text(link):
        article = Article(link)
        article.download()
        article = article.parse()
        return article

I plan to apply this function to a pandas column:

    df['text'] = df['links'].apply(text)
However, some of the links in the links column are broken (i.e. they raise HTTPError: HTTP Error 404: Not Found). So my question is: how can I assign NaN for the broken URLs and skip over them? I tried this:
    from newspaper import Article
    import numpy as np
    import requests

    def text(link):
        article = Article(link)
        try:
            article.download()
            article = article.parse()
        except requests.exceptions.HTTPError:
            return np.nan
        return article

    df['text'] = df['links'].apply(text)
However, I don't know whether the apply() function needs any special handling so that NaN values end up in the cells whose links are broken.
Update

I tried to handle it with ArticleException, as follows.
DF:

    title                                                           Link
    Inside tiny tubes, water turns solid when it should be boiling  http://news.mit.edu/2016/carbon-nanotubes-water-solid-boiling-1128
    Four MIT students named 2017 Marshall Scholars                  http://news.mit.edu/2016/four-mit-students-marshall-scholars-11282
    Saharan dust in the wind                                        http://news.mit.edu/2016/saharan-dust-monsoons-11231
    The science of friction on graphene                             http://news.mit.edu/2016/sliding-flexible-graphene-surfaces-1123
In:
    import numpy as np
    from newspaper import Article, ArticleException
    import requests

    def text_extractor2(link):
        article = Article(link)
        try:
            article.download()
        except ArticleException:
            article = article.parse()
            return np.nan
        return article

    df['text'] = df['Link'].apply(text_extractor2)
    df
Out:

       title                                               Link                                               text
    0  Inside tiny tubes, water turns solid when it s...  http://news.mit.edu/2016/carbon-nanotubes-wate...  <newspaper.article.Article object at 0x10c8a0320>
    1  Four MIT students named 2017 Marshall Scholars     http://news.mit.edu/2016/four-mit-students-mar...  <newspaper.article.Article object at 0x1070df0f0>
    2  Saharan dust in the wind                           http://news.mit.edu/2016/saharan-dust-monsoons...  <newspaper.article.Article object at 0x107b035c0>
    3  The science of friction on graphene                http://news.mit.edu/2016/sliding-flexible-grap...  <newspaper.article.Article object at 0x10c8bf8d0>
Answer 0 (score: 1)
From my understanding, you would like the rows corresponding to broken links to have NaN values in the text column. If you haven't already added the numpy import, we can start with that:

    import numpy as np

I'll assume the exception being thrown is HTTPError, and use NumPy's NaN as the missing value:
    from requests.exceptions import HTTPError  # or urllib's HTTPError, depending on what is raised

    def text(link):
        article = Article(link)
        try:
            article.download()
        except HTTPError:
            return np.nan
        article = article.parse()
        # Note: parse() returns the Article object itself; use article.text
        # if you want the extracted string rather than the object.
        return article
Then, using pandas apply:

    df['text'] = df['links'].apply(text)

the text column should contain missing values for the broken links and the article text for the valid links.
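The apply/NaN pattern itself can be verified without any network access by substituting a stub for the download step; everything below (the `fake_download` helper, the example links) is purely illustrative and not part of the newspaper API:

```python
import numpy as np
import pandas as pd

def fake_download(link):
    # Stand-in for Article.download()/parse(): raises for a "broken" link.
    if "broken" in link:
        raise ValueError("HTTP Error 404: Not Found")
    return "article text for " + link

def text(link):
    try:
        return fake_download(link)
    except ValueError:
        return np.nan  # broken link -> missing value in the column

df = pd.DataFrame({"links": ["http://ok.example/a",
                             "http://broken.example/b",
                             "http://ok.example/c"]})
df["text"] = df["links"].apply(text)

print(df["text"].isna().tolist())  # [False, True, False]
```

apply simply stores whatever the function returns, so no special NaN handling is needed on the pandas side.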
Without newspaper, you could change the function to catch the exception on ur.urlopen(url).read(), e.g.
    # Assumed imports: urllib2 as ur (Python 2) or the urllib.request/urllib.error
    # equivalents on Python 3, BeautifulSoup from bs4, and sent_tokenize from nltk.
    def text_extractor(url):
        try:
            html = ur.urlopen(url).read()
        except ur.HTTPError:
            return np.nan
        soup = BeautifulSoup(html, 'lxml')
        # Drop <script> and <style> contents before extracting visible text.
        for script in soup(["script", "style"]):
            script.extract()
        text = soup.get_text()
        # Collapse whitespace: strip lines, split them up, re-join non-empty chunks.
        lines = (line.strip() for line in text.splitlines())
        chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
        text = ' '.join(chunk for chunk in chunks if chunk)
        sentences = ', '.join(sent_tokenize(str(text.strip('\'"'))))
        return sentences
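The whitespace-cleanup steps in the middle of text_extractor (strip each line, split it on spaces, re-join the non-empty chunks) don't depend on the network or on BeautifulSoup, so they can be tried on a plain string; the sample text below is made up:

```python
raw = "  First line  \n\n   Second   part  with   gaps \n\t\n End.  "

# Mirror the generator pipeline from text_extractor above.
lines = (line.strip() for line in raw.splitlines())
chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
cleaned = ' '.join(chunk for chunk in chunks if chunk)

print(cleaned)  # First line Second part with gaps End.
```

Every run of spaces, tabs, and blank lines collapses to a single space, which is why the extractor produces one compact line of text for sentence tokenization.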