Hi, I'm building a project with Scrapy and need to scrape business details from the business directory at http://directory.thesun.co.uk/find/uk/computer-repair
The problem I'm facing is that when I crawl the site, my spider only fetches the details from the first page, while I also need the details from the remaining 9 pages, that is, all 10 pages.
I've posted my spider code, items.py and settings.py below.
Please look at my code and help me fix it.
Spider code:
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from project2.items import Project2Item

class ProjectSpider(BaseSpider):
    name = "project2spider"
    allowed_domains = ["http://directory.thesun.co.uk/"]
    start_urls = [
        "http://directory.thesun.co.uk/find/uk/computer-repair"
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//div[@class="abTbl "]')
        items = []
        for site in sites:
            item = Project2Item()
            item['Catogory'] = site.select('span[@class="icListBusType"]/text()').extract()
            item['Bussiness_name'] = site.select('a/@title').extract()
            item['Description'] = site.select('span[last()]/text()').extract()
            item['Number'] = site.select('span[@class="searchInfoLabel"]/span/@id').extract()
            item['Web_url'] = site.select('span[@class="searchInfoLabel"]/a/@href').extract()
            item['adress_name'] = site.select('span[@class="searchInfoLabel"]/span/text()').extract()
            item['Photo_name'] = site.select('img/@alt').extract()
            item['Photo_path'] = site.select('img/@src').extract()
            items.append(item)
        return items
My items.py code is as follows:
from scrapy.item import Item, Field

class Project2Item(Item):
    Catogory = Field()
    Bussiness_name = Field()
    Description = Field()
    Number = Field()
    Web_url = Field()
    adress_name = Field()
    Photo_name = Field()
    Photo_path = Field()
My settings.py is:
BOT_NAME = 'project2'
SPIDER_MODULES = ['project2.spiders']
NEWSPIDER_MODULE = 'project2.spiders'
Please help me extract the details from the other pages as well.
Answer 0 (score: 0)
If you inspect the pagination links, they look like this:
http://directory.thesun.co.uk/find/uk/computer-repair/page/3
http://directory.thesun.co.uk/find/uk/computer-repair/page/2
You can loop over the pages with urllib2 and a page variable:

import urllib2

# pages 2 through 10 follow the /page/N pattern; page 1 is the base URL
for page in range(2, 11):
    response = urllib2.urlopen('http://directory.thesun.co.uk/find/uk/computer-repair/page/' + str(page))
    html = response.read()

and then scrape the HTML.
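Since the question already uses Scrapy, an alternative to calling urllib2 separately is to feed all the page URLs to the spider up front. This is only a minimal sketch, assuming the 10-page count from the question and the /page/N pattern shown above hold; the spider name here is hypothetical and parse() would contain the same extraction the question already has:

from scrapy.spider import BaseSpider

class ProjectPagesSpider(BaseSpider):
    # hypothetical spider name for this sketch
    name = "project2spider_pages"
    allowed_domains = ["directory.thesun.co.uk"]
    # page 1 has no /page/ suffix; pages 2-10 follow the /page/N pattern
    start_urls = ["http://directory.thesun.co.uk/find/uk/computer-repair"] + [
        "http://directory.thesun.co.uk/find/uk/computer-repair/page/%d" % n
        for n in range(2, 11)
    ]

    def parse(self, response):
        # same item extraction as in the question's parse() method
        pass

With this approach Scrapy schedules all ten requests itself, so no separate urllib2 loop is needed.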
Answer 1 (score: 0)
Here is working code. Paging through the results should be handled by studying the site and its pagination structure and applying that accordingly. In this case, the site uses "/page/x", where x is the page number.
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from project2spider.items import Project2Item
from scrapy.http import Request

class ProjectSpider(BaseSpider):
    name = "project2spider"
    allowed_domains = ["http://directory.thesun.co.uk"]
    current_page_no = 1
    start_urls = [
        "http://directory.thesun.co.uk/find/uk/computer-repair"
    ]

    def get_next_url(self, fired_url):
        if '/page/' in fired_url:
            url, page_no = fired_url.rsplit('/page/', 1)
        else:
            if self.current_page_no != 1:
                # end of scroll
                return
        self.current_page_no += 1
        return "http://directory.thesun.co.uk/find/uk/computer-repair/page/%s" % self.current_page_no

    def parse(self, response):
        fired_url = response.url
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//div[@class="abTbl "]')
        for site in sites:
            item = Project2Item()
            item['Catogory'] = site.select('span[@class="icListBusType"]/text()').extract()
            item['Bussiness_name'] = site.select('a/@title').extract()
            item['Description'] = site.select('span[last()]/text()').extract()
            item['Number'] = site.select('span[@class="searchInfoLabel"]/span/@id').extract()
            item['Web_url'] = site.select('span[@class="searchInfoLabel"]/a/@href').extract()
            item['adress_name'] = site.select('span[@class="searchInfoLabel"]/span/text()').extract()
            item['Photo_name'] = site.select('img/@alt').extract()
            item['Photo_path'] = site.select('img/@src').extract()
            yield item
        next_url = self.get_next_url(fired_url)
        if next_url:
            yield Request(next_url, self.parse, dont_filter=True)
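As a variation on the same idea, and only as a sketch that assumes the /page/N URL pattern and the //div[@class="abTbl "] selector used above are correct, the spider can derive the next page number from the current URL and stop as soon as a page returns no listings, instead of keeping a page counter on the spider:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from project2spider.items import Project2Item

BASE_URL = "http://directory.thesun.co.uk/find/uk/computer-repair"

class ProjectPagedSpider(BaseSpider):
    # hypothetical spider name for this sketch
    name = "project2spider_paged"
    allowed_domains = ["directory.thesun.co.uk"]
    start_urls = [BASE_URL]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//div[@class="abTbl "]')
        if not sites:
            # an empty page means we have run out of results, so stop paginating
            return
        for site in sites:
            item = Project2Item()
            item['Bussiness_name'] = site.select('a/@title').extract()
            # ... remaining fields as in the answer above ...
            yield item
        # work out the current page number from the URL and request the next page
        page_no = int(response.url.rsplit('/page/', 1)[1]) if '/page/' in response.url else 1
        yield Request("%s/page/%d" % (BASE_URL, page_no + 1), callback=self.parse, dont_filter=True)

Stopping on an empty result page removes the need for the current_page_no attribute and works even if the number of pages changes.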
Answer 2 (score: 0)
I tried the code posted by @nizam.sp. It only returned 2 records: 1 record from the main page (the last one) and 1 record from the second page (a random one), and then it stopped.