Recently I have been using Scrapy to scrape ZoomInfo, and I tested the following URL:
http://subscriber.zoominfo.com/zoominfo/#!search/profile/person?personId=521850874&targetid=profile
but somehow, in the terminal, it gets changed to this:
[scrapy] DEBUG: Crawled (200) <GET http://subscriber.zoominfo.com/zoominfo/?_escaped_fragment_=search%2Fprofile%2Fperson%3FpersonId%3D521850874%26targetid%3Dprofile>
I added AJAXCRAWL_ENABLED = True to settings.py, but the URL still comes out in the _escaped_fragment_ form. I suspect I am not reaching the page I actually want.
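For reference, the relevant excerpt of my settings.py is just this one line:

# settings.py (excerpt) -- AJAXCRAWL_ENABLED turns on Scrapy's AjaxCrawlMiddleware
AJAXCRAWL_ENABLED = True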
My spider.py code is as follows:
#!/usr/bin/env python
# -*- coding:utf-8 -*-
import scrapy
from scrapy.selector import Selector
from scrapy.http import Request, FormRequest
from tutorial.items import TutorialItem
from scrapy.spiders.init import InitSpider
class LoginSpider(InitSpider):
    name = 'zoominfo'
    login_page = 'https://www.zoominfo.com/login'
    start_urls = [
        'http://subscriber.zoominfo.com/zoominfo/#!search/profile/person?personId=521850874&targetid=profile',
    ]
    headers = {
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Encoding": "gzip, deflate, br",
        "Accept-Language": "en-US,en;q=0.5",
        "Connection": "keep-alive",
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:50.0) Gecko/20100101 Firefox/50.0",
    }

    def init_request(self):
        # Log in before crawling start_urls.
        return Request(url=self.login_page, callback=self.login)

    def login(self, response):
        print "Preparing Login"
        return FormRequest.from_response(
            response,
            headers=self.headers,
            formdata={
                'task': 'save',
                'redirect': 'http://subscriber.zoominfo.com/zoominfo/#!search/profile/person?personId=521850874&targetid=profile',
                'username': username,  # credentials omitted from this snippet
                'password': password,
            },
            callback=self.after_login,
            dont_filter=True,
        )

    def after_login(self, response):
        if "authentication failed" in response.body:
            self.log("Login unsuccessful")
        else:
            self.log("Login successful")
            self.initialized()
            return Request(url='http://subscriber.zoominfo.com/zoominfo/', callback=self.parse)

    def parse(self, response):
        base_url = 'http://subscriber.zoominfo.com/zoominfo/#!search/profile/person?personId=521850874&targetid=profile'
        sel = Selector(response)
        item = TutorialItem()
        divs = sel.xpath("//div[3]//p").extract()
        item['title'] = sel.xpath("//div[3]")
        print divs
        request = Request(base_url, callback=self.parse)
        yield request
Thanks, any hint would be appreciated.
Answer (score: 1)
#! == _escaped_fragment_

The _escaped_fragment_ form is known as the "ugly" URL and is mostly what gets served to web crawlers, while real users get the pretty #! version. Either way they mean the same thing, and functionally there should be no difference between them.
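To make the mapping concrete, here is a minimal sketch (assuming Python 3 and only the standard library; to_escaped_fragment is an illustrative helper, not a Scrapy API) that converts the pretty #! URL into its ugly equivalent. The result is exactly the URL that showed up in your Scrapy log:

# Illustrative only: shows how a "pretty" #! URL maps to its "ugly"
# _escaped_fragment_ form under Google's (now deprecated) AJAX crawling scheme.
from urllib.parse import quote

def to_escaped_fragment(url):
    # Split on the hashbang and percent-encode everything after it.
    # Assumes the base URL has no existing query string.
    base, _, fragment = url.partition('#!')
    return base + '?_escaped_fragment_=' + quote(fragment, safe='')

pretty = ('http://subscriber.zoominfo.com/zoominfo/'
          '#!search/profile/person?personId=521850874&targetid=profile')
print(to_escaped_fragment(pretty))
# -> http://subscriber.zoominfo.com/zoominfo/?_escaped_fragment_=search%2Fprofile%2Fperson%3FpersonId%3D521850874%26targetid%3Dprofile

Under that scheme, the server is expected to return an HTML snapshot of the page for the ugly URL, so the request in your log points at the same resource you asked for; whether the site actually serves useful content for it is up to the site, not to Scrapy.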