BeautifulSoup can't extract the HTML from a page when the links are opened from a file

Date: 2018-05-24 02:51:19

Tags: python html web-scraping beautifulsoup web-crawler

I have some web links in a file, article_links.txt, which I want to open one by one, extract the text from, and print. My code for this is:

import requests
from inscriptis import get_text
from bs4 import BeautifulSoup

links = open(r'C:\Users\h473\Documents\Crawling\article_links.txt', "r")

for a in links:
    print(a)
    page = requests.get(a)
    soup = BeautifulSoup(page.text, 'lxml')
    html = soup.find(class_='article-wrap')
    if html==None:
        html = soup.find(class_='mag-article-wrap')

    text = get_text(html.text)

    print(text)

But I get an error pointing at ---> text = get_text(html.text)

AttributeError: 'NoneType' object has no attribute 'text'

So I printed out the soup variable to see what its contents were. This is what I found for each link:

http://www3.asiainsurancereview.com//Mock-News-Article/id/42945/Type/eDaily/New-Zealand-Govt-starts-public-consultation-phase-of-review-of-insurance-law

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html><head><title>Bad Request</title>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type"/></head>
<body><h2>Bad Request - Invalid URL</h2>
<hr/><p>HTTP Error 400. The request URL is invalid.</p>
</body></html>

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html><head><title>Bad Request</title>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type"/></head>
<body><h2>Bad Request - Invalid URL</h2>
<hr/><p>HTTP Error 400. The request URL is invalid.</p>
</body></html>

So I tried extracting the text from just one link on its own, like this:

import requests
from inscriptis import get_text
from bs4 import BeautifulSoup

page = requests.get('http://www3.asiainsurancereview.com//Mock-News-Article/id/42945/Type/eDaily/New-Zealand-Govt-starts-public-consultation-phase-of-review-of-insurance-law')
soup = BeautifulSoup(page.text, 'lxml')
html = soup.find(class_='article-wrap')
if html==None:
    html = soup.find(class_='mag-article-wrap')
text = get_text(html.text)
print(text)

And it works perfectly! So I then tried supplying the links as a list/array and extracting the text from each one:

import requests
from inscriptis import get_text
from bs4 import BeautifulSoup

links = ['http://www3.asiainsurancereview.com//Mock-News-Article/id/42945/Type/eDaily/New-Zealand-Govt-starts-public-consultation-phase-of-review-of-insurance-law',
'http://www3.asiainsurancereview.com//Mock-News-Article/id/42946/Type/eDaily/India-M-A-deals-brewing-in-insurance-sector',
'http://www3.asiainsurancereview.com//Mock-News-Article/id/42947/Type/eDaily/China-Online-insurance-premiums-soar-31-in-1Q2018',
'http://www3.asiainsurancereview.com//Mock-News-Article/id/42948/Type/eDaily/South-Korea-Courts-increasingly-see-65-as-retirement-age',
'http://www3.asiainsurancereview.com//Magazine/ReadMagazineArticle/aid/40847/Creating-a-growth-environment-for-health-insurance-in-Asia']

#open(r'C:\Users\h473\Documents\Crawling\article_links.txt', "r")

for a in links:
    print(a)
    page = requests.get(a)
    soup = BeautifulSoup(page.text, 'lxml')
    html = soup.find(class_='article-wrap')
    if html==None:
        html = soup.find(class_='mag-article-wrap')

    text = get_text(html.text)

    print(text)

This also works perfectly! So what goes wrong when the links are read from the text file, and how do I fix it?

3 Answers:

Answer 0 (score: 5):

The problem is that your URLs are invalid, because each of them ends with a newline character. You can see the same thing yourself:

>>> page = requests.get('http://www3.asiainsurancereview.com//Mock-News-Article/id/42945/Type/eDaily/New-Zealand-Govt-starts-public-consultation-phase-of-review-of-insurance-law\n')
>>> page
<Response [400]>
>>> page.text
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid URL</h2>
<hr><p>HTTP Error 400. The request URL is invalid.</p>
</BODY></HTML>

BeautifulSoup is parsing that HTML just fine. It just isn't very useful HTML. In particular, it has nothing with the class article-wrap or the class mag-article-wrap, so both of your find calls return None. And you don't do any error handling for that case; you just try to use the None value as if it were an HTML element, hence the exception.
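For illustration, here is a minimal sketch (reusing the bad URL from the session above, trailing newline deliberately included) showing that both find calls come back empty on that error page:

import requests
from bs4 import BeautifulSoup

# Same bad URL as above, with the trailing newline left in on purpose.
url = 'http://www3.asiainsurancereview.com//Mock-News-Article/id/42945/Type/eDaily/New-Zealand-Govt-starts-public-consultation-phase-of-review-of-insurance-law\n'
page = requests.get(url)
soup = BeautifulSoup(page.text, 'lxml')

print(soup.find(class_='article-wrap'))      # None -- the error page has no such class
print(soup.find(class_='mag-article-wrap'))  # None as well, so html.text raises AttributeError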

You should have noticed this when you printed out each a: there is an extra blank line after every one. That means either there is a newline character inside the string (which is what is actually happening), or there are blank lines between the actual lines (which would be an even more invalid URL, and you would get a ConnectionError or some subclass of it).
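A quick way to confirm this (a sketch, assuming the same file as in the question) is to print the repr of each line, which makes the trailing newline visible:

with open(r'C:\Users\h473\Documents\Crawling\article_links.txt') as links:
    for a in links:
        print(repr(a))  # e.g. 'http://www3.asiainsurancereview.com/...\n' -- note the trailing \n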

What you want to do is simple: just strip the newline off each line:

for a in links:
    a = a.rstrip()
    # rest of your code
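Put together, the corrected loop might look like this (a sketch, reusing the file path and class names from the question and adding a None check so error pages are skipped instead of crashing):

import requests
from inscriptis import get_text
from bs4 import BeautifulSoup

with open(r'C:\Users\h473\Documents\Crawling\article_links.txt') as links:
    for a in links:
        a = a.rstrip()       # drop the trailing newline
        if not a:
            continue         # ignore blank lines, just in case
        page = requests.get(a)
        soup = BeautifulSoup(page.text, 'lxml')
        html = soup.find(class_='article-wrap') or soup.find(class_='mag-article-wrap')
        if html is None:
            print('No article wrapper found at', a)
            continue
        print(get_text(html.text))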

Answer 1 (score: -2):

I don't know what is in your file, but it looks to me like it may contain an extra blank line, and that is what produces the NoneType object.
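If that is the case, filtering out the empty lines when reading the file (a small sketch, assuming the file path from the question) avoids the problem:

with open(r'C:\Users\h473\Documents\Crawling\article_links.txt') as f:
    links = [line.strip() for line in f if line.strip()]  # drops newlines and blank lines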

Answer 2 (score: -4):

Try:

with open("sample.txt") as f:
    for line in f:
        print(line)