I'm trying to scrape the latest reviews from the Google Play store, and for that I need to make a POST request.
With Postman it works and I get the desired response.
But the same POST request from the terminal gives me a server error.
For instance, for this page: https://play.google.com/store/apps/details?id=com.supercell.boombeach
curl -H "Content-Type: application/json" -X POST -d '{"id": "com.supercell.boombeach", "reviewType": '0', "reviewSortOrder": '0', "pageNum":'0'}' https://play.google.com/store/getreviews
returns a server error, and Scrapy simply ignores these lines:
frmdata = {"id": "com.supercell.boombeach", "reviewType": 0, "reviewSortOrder": 0, "pageNum":0}
url = "https://play.google.com/store/getreviews"
yield Request(url, callback=self.parse, method="POST", body=urllib.urlencode(frmdata))
Answer 0 (score: 24)
Make sure every value in formdata is a string/unicode (FormRequest url-encodes the form fields, so pass the numbers as strings):
frmdata = {"id": "com.supercell.boombeach", "reviewType": '0', "reviewSortOrder": '0', "pageNum":'0'}
url = "https://play.google.com/store/getreviews"
yield FormRequest(url, callback=self.parse, formdata=frmdata)
I think this will do the trick:
In [1]: from scrapy.http import FormRequest
In [2]: frmdata = {"id": "com.supercell.boombeach", "reviewType": '0', "reviewSortOrder": '0', "pageNum":'0'}
In [3]: url = "https://play.google.com/store/getreviews"
In [4]: r = FormRequest(url, formdata=frmdata)
In [5]: fetch(r)
2015-05-20 14:40:09+0530 [default] DEBUG: Crawled (200) <POST https://play.google.com/store/getreviews> (referer: None)
[s] Available Scrapy objects:
[s] crawler <scrapy.crawler.Crawler object at 0x7f3ea4258890>
[s] item {}
[s] r <POST https://play.google.com/store/getreviews>
[s] request <POST https://play.google.com/store/getreviews>
[s] response <200 https://play.google.com/store/getreviews>
[s] settings <scrapy.settings.Settings object at 0x7f3eaa205450>
[s] spider <Spider 'default' at 0x7f3ea3449cd0>
[s] Useful shortcuts:
[s] shelp() Shell help (print this help)
[s] fetch(req_or_url) Fetch request (or URL) and update local objects
[s] view(response) View response in a browser
Answer 1 (score: 20)
The answers above don't really solve the problem. They send the data as form parameters instead of as JSON data in the request body.
From http://bajiecc.cc/questions/1135255/scrapy-formrequest-sending-json:
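The code from that link did not survive the copy, but the idea is to serialize the payload yourself and send it as the request body with a JSON Content-Type. A minimal sketch, assuming the same endpoint and fields as in the question (to be yielded from inside a spider callback):

import json
from scrapy import Request

payload = {"id": "com.supercell.boombeach", "reviewType": 0, "reviewSortOrder": 0, "pageNum": 0}
yield Request(
    "https://play.google.com/store/getreviews",
    method="POST",
    body=json.dumps(payload),                      # JSON string goes in the request body
    headers={"Content-Type": "application/json"},  # tell the server it is JSON, not form data
    callback=self.parse,
)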
Answer 2 (score: 0)
An example of walking through pages with POST in Scrapy:
# assumes: from scrapy import Request, FormRequest
#          from urlparse import urljoin  (Python 2; urllib.parse on Python 3)
def directory_page(self, response):
    if response:
        # collect the profile links on the current directory page
        profiles = response.xpath("//div[@class='heading-h']/h3/a/@href").extract()
        for profile in profiles:
            yield Request(urljoin(response.url, profile), callback=self.profile_collector)
        # request the next page via POST, passing the page number as form data
        page = response.meta['page'] + 1
        if page:
            yield FormRequest('https://rotmanconnect.com/AlumniDirectory/getmorerecentjoineduser',
                              formdata={'isSortByName': 'false', 'pageNumber': str(page)},
                              callback=self.directory_page,
                              meta={'page': page})
        else:
            print "No more pages available"