I'm new to scraping web pages with Scrapy, and unfortunately I picked a dynamic one...
I've managed to scrape some of the content (120 links), thanks to someone who helped me here, but none of the links from the target website.
After doing some research, I understand that scraping an AJAX page is, in principle, not much different from scraping a simple one:
• Open the browser developer tools, Network tab
• Go to the target site
• Click the Submit button and see what XHR request is sent to the server
• Simulate this XHR request in your spider
The last step sounds vague to me, though: how exactly do I simulate an XHR request?
I've seen people simulate it with 'headers' or 'formdata' and other parameters, but I can't figure out what that actually means.
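For context, the examples I've seen look roughly like the sketch below (the URL, form fields, and header values are placeholders I made up, not my actual target):

import scrapy
from scrapy import FormRequest

class XhrSketchSpider(scrapy.Spider):
    # Hypothetical spider, only meant to show the shape of an XHR-style request.
    name = "xhr_sketch"

    def start_requests(self):
        # formdata reproduces the POST body seen in the browser's Network tab;
        # headers can copy what the browser sent along with the request.
        yield FormRequest(
            url="https://example.com/some/xhr/endpoint",    # placeholder URL
            formdata={"start": "0", "num": "60"},            # placeholder form fields
            headers={"X-Requested-With": "XMLHttpRequest"},  # typical XHR header
            callback=self.parse,
        )

    def parse(self, response):
        # The XHR response is often an HTML fragment or JSON; inspect it here.
        self.logger.info("Got %d bytes", len(response.body))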
Here is part of my code:
class googleAppSpider(scrapy.Spider):
    name = "googleApp"
    allowed_domains = ['play.google.com']
    start_urls = ['https://play.google.com/store/apps/category/GAME/collection/topselling_new_free?authuser=0']

    def start_request(self, response):
        for i in range(0, 10):
            yield FormRequest(
                url="https://play.google.com/store/apps/category/GAME/collection/topselling_new_free?authuser=0",
                method="POST",
                formdata={'start': str(i+60), 'num': '60', 'numChildren': '0',
                          'ipf': '1', 'xhr': '1',
                          'token': 'm1VdlomIcpZYfkJT5dktVuqLw2k:1455483261011'},
                callback=self.parse)

    def parse(self, response):
        links = response.xpath("//a/@href").extract()
        crawledLinks = []
        LinkPattern = re.compile(r"^/store/apps/details\?id=.")
        for link in links:
            if LinkPattern.match(link) and link not in crawledLinks:
                crawledLinks.append("http://play.google.com" + link + "#release")
        for link in crawledLinks:
            yield scrapy.Request(link, callback=self.parse_every_app)

    def parse_every_app(self, response):
start_request doesn't seem to do anything here. If I remove it, the spider still crawls the same number of links.
I've been working on this problem for a week... I'd really appreciate any help...
Answer 0 (score: 0)
Try this:
class googleAppSpider(Spider):
    name = "googleApp"
    allowed_domains = ['play.google.com']
    start_urls = ['https://play.google.com/store/apps/category/GAME/collection/topselling_new_free?authuser=0']

    def parse(self, response):
        for i in range(0, 10):
            yield FormRequest(
                url="https://play.google.com/store/apps/category/GAME/collection/topselling_new_free?authuser=0",
                method="POST",
                formdata={'start': str(i*60), 'num': '60', 'numChildren': '0',
                          'ipf': '1', 'xhr': '1',
                          'token': 'm1VdlomIcpZYfkJT5dktVuqLw2k:1455483261011'},
                callback=self.data_parse)

    def data_parse(self, response):
        item = googleAppItem()
        map = {}
        links = response.xpath("//a/@href").re(r'/store/apps/details.*')
        for l in links:
            if l not in map:
                map[l] = True
                item['url'] = l
                yield item
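Here googleAppItem is an ordinary Scrapy item imported from the project's items.py; since the spider only fills a url field, a minimal definition along these lines should be enough (the single-field layout is an assumption based on how the spider uses it):

# items.py -- minimal item definition assumed by the spider above
import scrapy

class googleAppItem(scrapy.Item):
    # the spider only sets item['url'], so one field suffices
    url = scrapy.Field()

The spider then needs a matching import at the top, e.g. from yourproject.items import googleAppItem (the module path depends on your project name).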
Crawl the spider with scrapy crawl googleApp -o links.csv
or scrapy crawl googleApp -o links.json
and you'll get all the links in a csv or json file. To increase the number of pages crawled, change the range of the for loop.