I am trying to build a very simple scraper to collect links as part of a crawler project. I have set up the following function to do the scraping:
import requests as rq
from bs4 import BeautifulSoup

def getHomepageLinks(page):
    homepageLinks = []
    response = rq.get(page)
    text = response.text
    soup = BeautifulSoup(text)
    for a in soup.findAll('a'):
        homepageLinks.append(a['href'])
    return homepageLinks
I saved this file as "scraper2.py". When I try to run the code, I get the following error:
>>> import scraper2 as sc
>>> sc.getHomepageLinks('http://washingtonpost.com')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "scraper2.py", line 9, in getHomepageLinks
    for a in soup.findAll('a'):
TypeError: 'NoneType' object is not callable
Now for the strange part: if I try to debug the code and print the response, it works fine:
>>> response = rq.get('http://washingtonpost.com')
>>> text = response.text
>>> soup = BeautifulSoup(text)
>>> for a in soup.findAll('a'):
...     print(a['href'])
...
https://www.washingtonpost.com
#
#
http://www.washingtonpost.com/politics/
https://www.washingtonpost.com/opinions/
http://www.washingtonpost.com/sports/
http://www.washingtonpost.com/local/
http://www.washingtonpost.com/national/
http://www.washingtonpost.com/world/
...
If I am reading the error message correctly, the problem is in soup.findAll, but it only happens when findAll is part of a function. I am sure I spelled it correctly (not findall or Findall, since plenty of those mistakes are out there), and I have tried the lxml fix mentioned in a previous post, but it didn't solve it. Does anyone have any ideas?
Answer 0 (score: 0)
Try replacing the for loop with the following:
for a in soup.findAll('a'):
    url = a.get("href")
    if url is not None:
        homepageLinks.append(url)
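To see why this helps, here is a minimal, self-contained sketch (assuming BeautifulSoup 4 is installed, and using a made-up HTML snippet rather than a live page): `a['href']` raises a KeyError on anchors that have no href attribute, while `a.get("href")` returns None for them, so the None check lets you skip those tags safely.

```python
from bs4 import BeautifulSoup

# Hypothetical HTML: one anchor with an href, one without.
html = '<a href="https://example.com">link</a><a name="top">no href</a>'
soup = BeautifulSoup(html, "html.parser")

links = []
for a in soup.findAll('a'):
    url = a.get("href")   # returns None instead of raising KeyError
    if url is not None:
        links.append(url)

print(links)  # only the anchor that actually has an href survives
```

Running this prints `['https://example.com']`; the href-less anchor is skipped instead of crashing the loop.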