How to execute a POST request in Scrapy

Date: 2018-10-11 09:59:26

Tags: python post scrapy scrapy-spider

I want to start my Scrapy spider with a POST request:

import requests

data = {
    'lang': 'en',
    'limit': '10',
    'offset': '0',
    'path': '/content/swisscom/en/about/news',
    'query': ''
}
s_url = 'https://www.swisscom.ch/etc/swisscom/servlets/public/gcr/news/search'
r = requests.post(url=s_url, data=data)

As long as I execute the request directly from Python with the requests package, as above, everything works fine. However, as soon as I build it into a spider using Scrapy's FormRequest:

import json
import scrapy
from scrapy.http import FormRequest

class example(scrapy.Spider):
    name = "example"

    def start_requests(self):
        data = {
            'lang': 'en',
            'limit': '10',
            'offset': '0',
            'path': '/content/swisscom/en/about/news',
            'query': ''
        }
        s_url = 'https://www.swisscom.ch/etc/swisscom/servlets/public/gcr/news/search'
        return [FormRequest(url=s_url, formdata=data, callback=self.parse)]

    def parse(self, response):
        test = json.loads(response.text)
        for quote in test['results']:
            yield {
                'url': quote['url']
            }

I get the following error:

2018-10-11 11:45:35 [scrapy.middleware] INFO: Enabled item pipelines:[]
2018-10-11 11:45:35 [scrapy.core.engine] INFO: Spider opened
2018-10-11 11:45:35 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-10-11 11:45:35 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-10-11 11:45:36 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <POST https://www.swisscom.ch/etc/swisscom/servlets/public/gcr/news/search> (failed 1 times): [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
2018-10-11 11:45:36 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <POST https://www.swisscom.ch/etc/swisscom/servlets/public/gcr/news/search> (failed 2 times): [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
2018-10-11 11:45:36 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <POST https://www.swisscom.ch/etc/swisscom/servlets/public/gcr/news/search> (failed 3 times): [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
2018-10-11 11:45:36 [scrapy.core.scraper] ERROR: Error downloading <POST https://www.swisscom.ch/etc/swisscom/servlets/public/gcr/news/search>
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\site-packages\scrapy\core\downloader\middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request, spider=spider)))
twisted.web._newclient.ResponseNeverReceived: [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
2018-10-11 11:45:36 [scrapy.core.engine] INFO: Closing spider (finished)

Can someone tell me what this error message means, and why my request in Scrapy fails while the same request works fine with requests.post?

Thanks a lot.

1 answer:

Answer 0 (score: 0)

Change your start_requests method to this:

def start_requests(self):
    data = {
        'lang': 'en',
        'limit': '10',
        'offset': '0',
        'path': '/content/swisscom/en/about/news',
        'query': ''
    }
    s_url = 'https://www.swisscom.ch/etc/swisscom/servlets/public/gcr/news/search'
    yield FormRequest(url=s_url, formdata=data, callback=self.parse)

yield hands a Request object to Scrapy; Scrapy performs the request and passes the response to the callback method defined on that request.
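To illustrate the flow (a toy stand-in for Scrapy's engine, not Scrapy itself — FakeFormRequest and the loop below are hypothetical names for illustration only): the engine iterates whatever start_requests yields and feeds each response to that request's callback, which in turn yields items.

```python
import json

# Hypothetical stand-in for scrapy.FormRequest: records url, form data, callback.
class FakeFormRequest:
    def __init__(self, url, formdata, callback):
        self.url = url
        self.formdata = formdata
        self.callback = callback

def parse(response_text):
    # Mirrors the spider's parse: decode JSON, yield one item per result.
    for quote in json.loads(response_text)['results']:
        yield {'url': quote['url']}

def start_requests():
    data = {'lang': 'en', 'limit': '10'}
    yield FakeFormRequest(url='https://example.com/search',
                          formdata=data, callback=parse)

# Toy "engine" loop: pretend we fetched a response, then call the callback.
fake_response = json.dumps({'results': [{'url': '/a'}, {'url': '/b'}]})
items = []
for request in start_requests():
    items.extend(request.callback(fake_response))
print(items)  # [{'url': '/a'}, {'url': '/b'}]
```

In real Scrapy the download happens between the two steps, but the generator-plus-callback shape is the same.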