Scraping a website that contains a __doPostBack method written with hidden URLs

Date: 2018-07-21 07:45:09

Tags: asp.net scrapy dopostback

I am new to Scrapy. I am trying to scrape this website, built in ASP, which contains various profiles spread over 259 pages. To move between pages there are several links at the bottom, such as 1, 2, 3, ... and so on. These links use __doPostBack:

  

href="javascript:__doPostBack('ctl00$ContentPlaceHolder1$RepeaterPaging$ctl00$Pagingbtn','')"

For each page, only the bold text (the second ctl00 index in the href) changes. How can I use Scrapy to iterate over multiple pages and extract the information? The form data looks like this:

__EVENTTARGET: ctl00%24ContentPlaceHolder1%24RepeaterPaging%24ctl01%24Pagingbtn
__EVENTARGUMENT: 
__VIEWSTATE: %2FwEPDwUKMTk1MjIxNTU1Mw8WAh4HdG90cGFnZQKDAhYCZg9kFgICAw9kFgICAQ9kFgoCAQ8WAh4LXyFJdGVtQ291bnQCFBYoZg9kFgJmDxUFCDY0MzMuanBnCzggR2VtcyBMdGQuCzggR2VtcyBMdGQuBDY0MzMKOTgyOTEwODA3MGQCAQ9kFgJmDxUFCDMzNTkuanBnCDkgSmV3ZWxzCDkgSmV3ZWxzBDMzNTkKOTg4NzAwNzg4OGQCAg9kFgJmDxUFCDc4NTEuanBnD0EgLSBTcXVhcmUgR2Vtcw9BIC0gU3F1YXJlIEdlbXMENzg1MQo5OTI5NjA3ODY4ZAIDD2QWAmYPFQUIMTg3My5qcGcLQSAmIEEgSW1wZXgLQSAmIEEgSW1wZXgEMTg3Mwo5MzE0Njk1ODc0ZAIED2QWAmYPFQUINzc5Ni5qcGcTQSAmIE0gR2VtcyAmIEpld2VscxNBICYgTSBHZW1zICYgSmV3ZWxzBDc3OTYKOTkyOTk0MjE4NWQCBQ9kFgJmDxUFCDc2NjYuanBnDEEgQSBBICBJbXBleAxBIEEgQSAgSW1wZXgENzY2Ngo4MjkwNzkwNzU3ZAIGD2QWAmYPFQUINjM2OC5qcGcaQSBBIEEgJ3MgIEdlbXMgQ29ycG9yYXRpb24aQSBBIEEgJ3MgIEdlbXMgQ29ycG9yYXRpb24ENjM2OAo5ODI5MDU2MzM0ZAIHD2QWAmYPFQUINjM2OS5qcGcPQSBBIEEgJ3MgSmV3ZWxzD0EgQSBBICdzIEpld2VscwQ2MzY5Cjk4MjkwNTYzMzRkAggPZBYCZg8VBQg3OTQ3LmpwZwxBIEcgIFMgSW1wZXgMQSBHICBTIEltcGV4BDc5NDcKODk0Nzg2MzExNGQCCQ9kFgJmDxUFCDc4ODkuanBnCkEgTSBCIEdlbXMKQSBNIEIgR2VtcwQ3ODg5Cjk4MjkwMTMyODJkAgoPZBYCZg8VBQgzNDI2LmpwZxBBIE0gRyAgSmV3ZWxsZXJ5EEEgTSBHICBKZXdlbGxlcnkEMzQyNgo5MzE0NTExNDQ0ZAILD2QWAmYPFQUIMTgyNS5qcGcWQSBOYXR1cmFsIEdlbXMgTi4gQXJ0cxZBIE5hdHVyYWwgR2VtcyBOLiBBcnRzBDE4MjUKOTgyODAxMTU4NWQCDA9kFgJmDxUFCDU3MjYuanBnC0EgUiBEZXNpZ25zC0EgUiBEZXNpZ25zBDU3MjYAZAIND2QWAmYPFQUINzM4OS5qcGcOQSBSYXdhdCBFeHBvcnQOQSBSYXdhdCBFeHBvcnQENzM4OQBkAg4PZBYCZg8VBQg1NDcwLmpwZxBBLiBBLiAgSmV3ZWxsZXJzEEEuIEEuICBKZXdlbGxlcnMENTQ3MAo5OTI4MTA5NDUxZAIPD2QWAmYPFQUIMTg5OS5qcGcSQS4gQS4gQS4ncyBFeHBvcnRzEkEuIEEuIEEuJ3MgRXhwb3J0cwQxODk5Cjk4MjkwNTYzMzRkAhAPZBYCZg8VBQg0MDE5LmpwZwpBLiBCLiBHZW1zCkEuIEIuIEdlbXMENDAxOQo5ODI5MDE2Njg4ZAIRD2QWAmYPFQUIMzM3OS5qcGcPQS4gQi4gSmV3ZWxsZXJzD0EuIEIuIEpld2VsbGVycwQzMzc5Cjk4MjkwMzA1MzZkAhIPZBYCZg8VBQgzMTc5LmpwZwxBLiBDLiBSYXRhbnMMQS4gQy4gUmF0YW5zBDMxNzkKOTgyOTY2NjYyNWQCEw9kFgJmDxUFCDc3NTEuanBnD0EuIEcuICYgQ29tcGFueQ9BLiBHLiAmIENvbXBhbnkENzc1MQo5ODI5MTUzMzUzZAIDDw8WAh4HRW5hYmxlZGhkZAIFDw8WAh8CaGRkAgcPPCsACQIADxYEHghEYXRhS2V5cxYAHwECCmQBFgQeD0hvcml6b250YWxBbGlnbgsqKVN5c3RlbS5XZWIuVUkuV2ViQ29udHJvbHMuSG9yaXpvbnRhbEFsaWduAh4EXyFTQgKAgAQWFGYPZBYCAgEPDxYKHg9Db21tYW5kQXJndW1lbnQFATAeBFRleHQFATEeCUJhY2tDb2xvcgoAHwJoHwUCCGRkAgEPZBYCAgEPDxYEHwYFATEfBwUBMmRkAgIPZBYCAgEPDxYEHwYFATIfBwUBM2RkAgMPZBYCAgEPDxYEHwYFATMfBwUBNGRkAgQPZBYCAgEPDxYEHwYFATQfBwUBNWRkAgUPZBYCAgEPDxYEHwYFATUfBwUBNmRkAgYPZBYCAgEPDxYEHwYFATYfBwUBN2RkAgcPZBYCAgEPDxYEHwYFATcfBwUBOGRkAggPZBYCAgEPDxYEHwYFATgfBwUBOWRkAgkPZBYCAgEPDxYEHwYFATkfBwUCMTBkZAINDw8WAh8HBQ1QYWdlIDEgb2YgMjU5ZGRkfEDzDJt%2FoSnSGPBGHlKDPRi%2Fbk0%3D
__EVENTVALIDATION: %2FwEWDALTg7oVAsGH9qQBAsGHisMBAsGHjuEPAsGHotEBAsGHpu8BAsGHupUCAsGH%2FmACwYeS0QICwYeW7wIC%2FLHNngECkI3CyQtVVahoNpNIXsQI6oDrxjKGcAokIA%3D%3D

I have looked at several solutions and posts which suggest inspecting the parameters of the POST call and reusing them, but I cannot make sense of the parameters provided in the POST.

1 Answer:

Answer 0 (score: 1)

In short, you only need to send __EVENTTARGET, __EVENTARGUMENT, __VIEWSTATE and __EVENTVALIDATION; see the sketch after the list below.

  • __EVENTTARGET: ctl00$ContentPlaceHolder1$RepeaterPaging$ctl00$Pagingbtn; changing the bold index switches to another page.
  • __EVENTARGUMENT: always empty
  • __VIEWSTATE: found in the input tag whose id is __VIEWSTATE
  • __EVENTVALIDATION: found in the input tag whose id is __EVENTVALIDATION
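
A minimal sketch of posting those fields with Scrapy. It assumes FormRequest.from_response, which copies the hidden inputs (including __VIEWSTATE and __EVENTVALIDATION) from the page's form automatically; the spider name is made up, only the member-list URL from the question is reused:

import scrapy

class PagingSketchSpider(scrapy.Spider):
  # hypothetical minimal spider, not the demo at the end of this answer
  name = "paging_sketch"
  start_urls = ["http://www.jajaipur.com/Member_List.aspx"]

  def parse(self, response):
    # from_response pre-fills every hidden <input> of the ASP.NET form,
    # so only the event fields need to be overridden
    yield scrapy.FormRequest.from_response(
      response,
      formdata={
        "__EVENTTARGET": "ctl00$ContentPlaceHolder1$RepeaterPaging$ctl01$Pagingbtn",
        "__EVENTARGUMENT": "",
      },
      dont_click=True,  # post the form as-is instead of simulating a button click
      callback=self.parse_page,
    )

  def parse_page(self, response):
    # placeholder: page 2 of the member list arrives here
    self.logger.info(response.url)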

It is worth mentioning that when extracting the names, the actual XPath may differ from what you copy out of Chrome:

Actual xpath: //*[@id="aspnetForm"]/div/section/div/div/div[1]/div/h3/text()
Chrome version: //*[@id="aspnetForm"]/div[3]/section/div/div/div[1]/div/h3/text()

Update: for pages after page 5, you should refresh __VIEWSTATE and __EVENTVALIDATION each time, and use "ctl00$ContentPlaceHolder1$RepeaterPaging$ctl06$Pagingbtn" as __EVENTTARGET to move to the next page.

The 00 index inside __EVENTTARGET is relative to the current page, for example:

 1  2  3  4  5  6  7  8  9 10
00 01 02 03 04 05 06 07 08 09
               ^^
To get page 7: use index 06
------------------------------
 2  3  4  5  6  7  8  9 10 11
00 01 02 03 04 05 06 07 08 09
               ^^
To get page 8: use index 06
------------------------------
12 13 14 15 16 17 18 19 20 21
00 01 02 03 04 05 06 07 08 09
               ^^
To get page 18: use index 06
------------------------------
current page: ^^

The other part of __EVENTTARGET stays the same, which means the current page must be encoded in __VIEWSTATE (and perhaps __EVENTVALIDATION; I'm not sure, but it doesn't matter). We can extract them and send them back, so the server knows that we are currently on page 10, 100, ...

To get the next page, we can use the fixed __EVENTTARGET: ctl00$ContentPlaceHolder1$RepeaterPaging$ctl06$Pagingbtn.

Of course, you can use ctl00$ContentPlaceHolder1$RepeaterPaging$ctl07$Pagingbtn to jump two pages ahead.
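
For illustration only, here is a small hypothetical helper (the function name is mine, it is not part of Scrapy or of the site) that builds the __EVENTTARGET value for a given repeater index:

def paging_event_target(index):
  # indices 00-09 address the ten visible paging buttons;
  # once past page 5, index 06 always means "current page + 1"
  return "ctl00$ContentPlaceHolder1$RepeaterPaging$ctl{:02d}$Pagingbtn".format(index)

# paging_event_target(6) -> "ctl00$ContentPlaceHolder1$RepeaterPaging$ctl06$Pagingbtn"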


Here is a demo (updated):

# SO Debug Spider
# OUTPUT: 2018-07-22 10:54:31 [SOSpider] INFO: ['Aadinath Gems & Jewels']
# The first person of page 4 is Aadinath Gems & Jewels
#
# OUTPUT: 2018-07-23 10:52:07 [SOSpider] ERROR: ['Ajay Purohit']
# The first person of page 12 is Ajay Purohit

import scrapy

class SOSpider(scrapy.Spider):
  name = "SOSpider"
  url = "http://www.jajaipur.com/Member_List.aspx"

  def start_requests(self):
    yield scrapy.Request(url=self.url, callback=self.parse_form_0_5)

  def parse_form_0_5(self, response):
    selector = scrapy.Selector(response=response)
    VIEWSTATE = selector.xpath('//*[@id="__VIEWSTATE"]/@value').extract_first()
    EVENTVALIDATION = selector.xpath('//*[@id="__EVENTVALIDATION"]/@value').extract_first()

    # It's fine to use this method from page 1 to page 5
    formdata = {
      # change pages here
      "__EVENTTARGET": "ctl00$ContentPlaceHolder1$RepeaterPaging$ctl03$Pagingbtn",
      "__EVENTARGUMENT": "",
      "__VIEWSTATE": VIEWSTATE,
      "__EVENTVALIDATION": EVENTVALIDATION,
    }
    yield scrapy.FormRequest(url=self.url, formdata=formdata, callback=self.parse_0_5)

    # After page 5, you should try this
    # get page 6
    formdata["__EVENTTARGET"] = "ctl00$ContentPlaceHolder1$RepeaterPaging$ctl05$Pagingbtn"
    yield scrapy.FormRequest(url=self.url, formdata=formdata, callback=self.parse, meta={"PAGE": 6})

  def parse(self, response):
    # use the request meta to decide when to stop (15 pages in this demo)
    currPage = response.meta["PAGE"]
    if currPage == 15:
      return

    # extract names here
    selector = scrapy.Selector(response=response)
    names = selector.xpath('//*[@id="aspnetForm"]/div/section/div/div/div[1]/div/h3/text()').extract()
    self.logger.error(names)

    # parse VIEWSTATE and EVENTVALIDATION again,
    # since they encode the current page
    VIEWSTATE = selector.xpath('//*[@id="__VIEWSTATE"]/@value').extract_first()
    EVENTVALIDATION = selector.xpath('//*[@id="__EVENTVALIDATION"]/@value').extract_first()

    # get next page
    formdata = {
      # 06 means "next page", 07 means "two pages ahead", ...
      "__EVENTTARGET": "ctl00$ContentPlaceHolder1$RepeaterPaging$ctl06$Pagingbtn",
      "__EVENTARGUMENT": "",
      "__VIEWSTATE": VIEWSTATE,
      "__EVENTVALIDATION": EVENTVALIDATION,
    }
    yield scrapy.FormRequest(url=self.url, formdata=formdata, callback=self.parse, meta={"PAGE": currPage+1})

  def parse_0_5(self, response):
    selector = scrapy.Selector(response=response)
    # only extract name
    names = selector.xpath('//*[@id="aspnetForm"]/div/section/div/div/div[1]/div/h3/text()').extract()
    self.logger.error(names)
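
To try the spider outside of a Scrapy project, one option (a sketch, assuming the SOSpider class above is defined in the same script) is Scrapy's CrawlerProcess:

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess()
process.crawl(SOSpider)  # the spider class defined above
process.start()          # blocks until the crawl is finished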