href has no attribute 'get' when retrieving the first anchor tag from Wikipedia

Asked: 2018-08-03 01:16:48

Tags: python, web-scraping, beautifulsoup

I'm getting "href has no attribute 'get'". I'm trying to retrieve the first anchor tag in this web scraper. I used to extract the href directly with p.a['href'] and print it with p.a.get('href'). However, when I assign it to href1, it raises an error.

Traceback (most recent call last):
  File "/Users/asagarwala/IdeaProjects/Py1/new1.py", line 11, in <module>
    print(soup.find(id="mw-content-text").find(class_='mw-parser-output').p.a.get('href'))
AttributeError: 'NoneType' object has no attribute 'get'

Process finished with exit code 1

Here is my code:

import requests
from bs4 import BeautifulSoup

url1 = "https://en.wikipedia.org/wiki/Anger"

my_list = []
i = 1

while i < 26:
    html = requests.get(url1)
    soup = BeautifulSoup(html.text, 'html.parser')

    print(soup.find(id="mw-content-text").find(class_='mw-parser-output').p.a.get('href'))

    href1 = soup.find(id="mw-content-text").find(class_='mw-parser-output').p.a.get('href')
    url1 = "https://en.wikipedia.org" + href1
    i += 1

    if href1 == 'wiki/Philosophy':
        print("philosophy reached. Bye")
        break

    my_list.append(url1)

print(my_list)

1 Answer:

Answer 0 (score: 0)

Your problem is that you are searching for the first p tag within the class. On your second iteration (starting from https://en.wikipedia.org/Anger), that first p tag is empty, so p.a is None and calling .get('href') on None raises the AttributeError.
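Here is a minimal sketch of that failure mode, using a small hand-written HTML snippet rather than the live Wikipedia page: when the first p contains no link, .a returns None and .get('href') blows up.

from bs4 import BeautifulSoup

# Hypothetical markup: the first <p> in the container is empty,
# mirroring the empty lead paragraph some Wikipedia pages render.
html = '<div class="mw-parser-output"><p></p><p><a href="/wiki/Emotion">Emotion</a></p></div>'
soup = BeautifulSoup(html, 'html.parser')

p = soup.find(class_='mw-parser-output').p  # grabs the first, empty <p>
print(p.a)                                  # None -- there is no <a> inside it
# p.a.get('href')                           # would raise AttributeError: 'NoneType' object has no attribute 'get'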

Try the following (an IPython session; requests and bs4 are assumed to be imported already):

In [176]: def wiki_travel(url):
     ...:     visited = []
     ...:     for i in range(26):
     ...:         html = requests.get(url)
     ...:         if not html.ok:
     ...:             print("'{0}' got response code {1}".format(url, html.status_code))
     ...:             break
     ...:
     ...:         soup = bs4.BeautifulSoup(html.text, 'html.parser')
     ...:
     ...:         target = next((c.get('href') for p in soup.find(class_='mw-parser-output').findAll('p') for c in p.findAll('a') if c.get('href', '').startswith('/')), None)
     ...:         if not target:
     ...:             print('Target not found')
     ...:             break
     ...:
     ...:         print(target)
     ...:         url = 'https://en.wikipedia.org' + target
     ...:         if target == '/wiki/Philosophy':
     ...:             print('Philosophy reached. Bye')
     ...:             break
     ...:
     ...:         visited.append(url)
     ...:
     ...:     return visited
     ...:

Testing it:

In [177]: wiki_travel('https://en.wikipedia.org/wiki/Anger')
/wiki/Emotion
/wiki/Consciousness
/wiki/Quality_(philosophy)
/wiki/Philosophy
Philosophy reached. Bye
Out[177]:
['https://en.wikipedia.org/wiki/Emotion',
 'https://en.wikipedia.org/wiki/Consciousness',
 'https://en.wikipedia.org/wiki/Quality_(philosophy)']

The key is the following line:

target = next((c.get('href') for p in soup.find(class_='mw-parser-output').findAll('p') for c in p.findAll('a') if c.get('href', '').startswith('/')), None)

What's going on here? It's a generator expression that behaves roughly like this loop:

target = []
# Search for all p tags within this class
for p in soup.find(class_='mw-parser-output').findAll('p'):
    # Find all a tags
    for c in p.findAll('a'):
        # Only add to target list iff the link starts with a '/'
        # I.e. no anchors ('#') which won't get us to a new page
        if c.get('href', '').startswith('/'):
            target.append(c.get('href'))

It then takes target[0], or None if no result was found.
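To see next() with a default in isolation, here is a tiny standalone example with toy data (not tied to the scraper):

# next() returns the first item the generator yields,
# or the supplied default (None) if the generator is exhausted.
links = ['#cite_note-1', '/wiki/Emotion', '/wiki/Feeling']
print(next((h for h in links if h.startswith('/')), None))         # '/wiki/Emotion'
print(next((h for h in ['#a', '#b'] if h.startswith('/')), None))  # None, nothing matched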