Extracting data from detail pages with Scrapy

Date: 2013-04-24 14:13:12

Tags: python screen-scraping scrapy web-crawler

I am trying to scrape agencies' phone numbers from this site:

List view: http://www.authoradvance.com/agencies/

Detail view: http://www.authoradvance.com/agencies/b-personal-management/

The phone numbers are only shown on the detail pages.

So is it possible to crawl the site by following URLs like the detail-view URL above and scrape the phone numbers?

My attempt at the code:

from scrapy.item import Item, Field

class AgencyItem(Item):
    Phone = Field()

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from agentquery.items import AgencyItem


class AgencySpider(CrawlSpider):
    name = "agency"
    allowed_domains = ["authoradvance.com"]
    start_urls = ["http://www.authoradvance.com/agencies/"]
    rules = (Rule(SgmlLinkExtractor(allow=[r'agencies/*$']), callback='parse_item'),)

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select("//div[@class='section-content']")
        items = []
        for site in sites:
            item = AgencyItem()
            item['Phone'] = site.select('div[@class="phone"]/text()').extract()
            items.append(item)
        return items

Then I ran "scrapy crawl agency -o items.csv -t csv" and the result was 0 pages crawled.

What is wrong? Thanks for your help!

1 Answer:

Answer 0 (score: 3):

Only one link on the page matches your regular expression (agencies/*$):

stav@maia:~$ scrapy shell http://www.authoradvance.com/agencies/
2013-04-24 13:14:13-0500 [scrapy] INFO: Scrapy 0.17.0 started (bot: scrapybot)

>>> SgmlLinkExtractor(allow=[r'agencies/*$']).extract_links(response)
[Link(url='http://www.authoradvance.com/agencies', text=u'Agencies', fragment='', nofollow=False)]

That is just a link to the page itself, and it does not contain a div with the section-content class:

>>> fetch('http://www.authoradvance.com/agencies')
2013-04-24 13:15:22-0500 [default] DEBUG: Crawled (200) <GET http://www.authoradvance.com/agencies> (referer: None)

>>> hxs.select("//div[@class='section-content']")
[]

So your loop never iterates, and nothing is ever appended to items.

So change your regular expression to /agencies/.+:
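In the spider from the question, that means updating the rules line, roughly like this (a sketch using the same Scrapy 0.17 SgmlLinkExtractor API as the question's code):

rules = (Rule(SgmlLinkExtractor(allow=[r'/agencies/.+']), callback='parse_item'),)

In the shell, the new pattern now picks up the agency detail links: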

>>> len(SgmlLinkExtractor(allow=[r'/agencies/.+']).extract_links(response))
20

>>> fetch('http://www.authoradvance.com/agencies/agency-group')
2013-04-24 13:25:02-0500 [default] DEBUG: Crawled (200) <GET http://www.authoradvance.com/agencies/agency-group> (referer: None)

>>> hxs.select("//div[@class='section-content']")
[<HtmlXPathSelector xpath="//div[@class='section-content']" data=u'<div class="section-content">\n\t      <di'>,
 <HtmlXPathSelector xpath="//div[@class='section-content']" data=u'<div class="section-content"><div class='>]
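
The remaining piece is the phone number itself. The div[@class="phone"] selector comes from the question and is not verified in the session above, so treat it as an assumption and test it in the same shell session before running the full crawl:

>>> hxs.select("//div[@class='section-content']/div[@class='phone']/text()").extract()

If that returns an empty list, inspect the detail page's markup and adjust the XPath accordingly.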