Unable to identify the link class

Date: 2015-04-28 10:02:56

Tags: python-2.7 web-scraping beautifulsoup

I am new to programming and Python, and I am trying to write a simple scraper to extract all of the therapist profile URLs from this page:

http://www.therapy-directory.org.uk/search.php?search=Sheffield&services[23]=1&business_type[individual]=1&distance=40&uqs=626693

import requests
from bs4 import BeautifulSoup

def tru_crawler(max_pages):
    p = '&page='
    page = 1
    while page <= max_pages:
        url = 'http://www.therapy-directory.org.uk/search.php?search=Sheffield&distance=40&services[23]=on&services=23&business_type[individual]=on&uqs=626693' + p + str(page)
        code = requests.get(url)
        text = code.text
        soup = BeautifulSoup(text)
        for link in soup.findAll('a', {'member-summary': 'h2'}):
            href = 'http://www.therapy-directory.org.uk' + link.get('href')
            yield href + '\n'
            print(href)
        page += 1

Now, when I run this code I get nothing back, mainly because soup.findAll() returns an empty result.

The HTML for a profile link looks like this:

<div class="member-summary">
<h2 class="">
 <a href="/therapists/julia-church?uqs=626693">Julia Church</a>
</h2>

So I am not sure what other arguments to pass in soup.findAll('a') to get the profile URLs.

Please help.

Thanks

Update -

I ran the modified code; this time, after scraping page 1, it returned a bunch of errors:

Traceback (most recent call last):
File "C:/Users/PB/PycharmProjects/crawler/crawler-revised.py", line    19,      enter code here`in <module>
tru_crawler(3)
File "C:/Users/PB/PycharmProjects/crawler/crawler-revised.py", line 9, in tru_crawler
code = requests.get(url)
File "C:\Python27\lib\requests\api.py", line 68, in get
return request('get', url, **kwargs)
File "C:\Python27\lib\requests\api.py", line 50, in request
response = session.request(method=method, url=url, **kwargs)
File "C:\Python27\lib\requests\sessions.py", line 464, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python27\lib\requests\sessions.py", line 602, in send
history = [resp for resp in gen] if allow_redirects else []
File "C:\Python27\lib\requests\sessions.py", line 195, in resolve_redirects
allow_redirects=False,
File "C:\Python27\lib\requests\sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "C:\Python27\lib\requests\adapters.py", line 415, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.',  BadStatusLine("''",))

What is going wrong here?
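For what it is worth, BadStatusLine usually means the server closed the connection or sent back an empty reply, which scripted clients often hit when a site rejects requests that lack browser-like headers. A minimal mitigation sketch, where the User-Agent value and the retry count are assumptions rather than anything confirmed in this thread:

import requests

def get_with_retries(url, attempts=3):
    # Assumed browser-like header; many sites drop bare scripted requests.
    headers = {'User-Agent': 'Mozilla/5.0'}
    for attempt in range(attempts):
        try:
            return requests.get(url, headers=headers)
        except requests.exceptions.ConnectionError:
            # Retry on dropped connections; re-raise on the final attempt.
            if attempt == attempts - 1:
                raise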

1 Answer:

Answer 0 (score: 1)

At the moment, the arguments you are passing to findAll() do not make sense. They read as: find all <a> tags whose member-summary attribute equals "h2".
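For comparison, a corrected findAll()-based version (a sketch, not part of the original answer) would match the container <div> by its class first and then drill down to the link:

for div in soup.findAll('div', {'class': 'member-summary'}):
    # Each summary block holds the profile link inside its <h2>.
    link = div.find('h2').find('a')
    href = 'http://www.therapy-directory.org.uk' + link.get('href')
    print(href)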

One possible approach is to use the select() method instead, passing it a CSS selector as the argument:

for link in soup.select('div.member-summary h2 a'):
    href = 'http://www.therapy-directory.org.uk' + link.get('href')
    yield href + '\n'
    print(href)

The CSS selector above reads: find a <div> tag whose class equals "member-summary"; then, inside that <div>, find an <h2> tag; then, inside the <h2>, find an <a> tag.

Working example:

import requests
from bs4 import BeautifulSoup

p = '&page='
page = 1
url = 'http://www.therapy-directory.org.uk/search.php?search=Sheffield&distance=40&services[23]=on&services=23&business_type[individual]=on&uqs=626693' + p + str(page)
code = requests.get(url)
text = code.text
soup = BeautifulSoup(text)
for link in soup.select('div.member-summary h2 a'):
    href = 'http://www.therapy-directory.org.uk' + link.get('href')
    print(href)

Output (trimmed, 26 links in total):

http://www.therapy-directory.org.uk/therapists/julia-church?uqs=626693
...
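Putting it together: a sketch of the asker's tru_crawler() generator with the select() fix folded in, using the same search URL and CSS selector as above:

import requests
from bs4 import BeautifulSoup

BASE = 'http://www.therapy-directory.org.uk'
SEARCH = (BASE + '/search.php?search=Sheffield&distance=40'
          '&services[23]=on&services=23'
          '&business_type[individual]=on&uqs=626693&page=')

def tru_crawler(max_pages):
    page = 1
    while page <= max_pages:
        soup = BeautifulSoup(requests.get(SEARCH + str(page)).text)
        # Same selector as the answer: the <a> inside the <h2> of each
        # div.member-summary block.
        for link in soup.select('div.member-summary h2 a'):
            yield BASE + link.get('href')
        page += 1

for href in tru_crawler(3):
    print(href)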