Scrapy is not making any requests

Time: 2018-01-06 12:14:43

Tags: python web-scraping scrapy scrapy-spider

I may be doing something wrong. I'm trying to learn, so I wrote some code to scrape something very simple, 'Youtube', just to see if it works.


This is my spider:
import scrapy

class TesteSpider(scrapy.Spider):

    name= "teste"

    def start_request(self):
        url='http://www.youtube.com'
        yield scrapy.Request(url, self.parse)

    def parse(self, response):
        title = response.css('title::text').extract()
        with open('informacao', 'w') as f:
            f.write(title)
        self.log('saved file successfully')
Then, when I run my spider with

scrapy crawl teste

it runs but seems to just open and then close right away. Reading the output below, there isn't a single GET request:

2018-01-06 10:06:38 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: tutorial)
2018-01-06 10:06:38 [scrapy.utils.log] INFO: Versions: lxml 4.1.1.0, libxml2 2.9.7, cssselect 1.0.3, parsel 1.3.1, w3lib 1.18.0, Twisted 17.9.0, Python 3.5.2 (default, Nov 23 2017, 16:37:01) - [GCC 5.4.0 20160609], pyOpenSSL 17.5.0 (OpenSSL 1.1.0g  2 Nov 2017), cryptography 2.1.4, Platform Linux-4.10.0-28-generic-x86_64-with-Ubuntu-16.04-xenial
2018-01-06 10:06:38 [scrapy.crawler] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'BOT_NAME': 'tutorial', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['tutorial.spiders']}
2018-01-06 10:06:38 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.corestats.CoreStats']
2018-01-06 10:06:38 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-01-06 10:06:38 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-01-06 10:06:38 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-01-06 10:06:38 [scrapy.core.engine] INFO: Spider opened
2018-01-06 10:06:38 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-01-06 10:06:38 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-01-06 10:06:38 [scrapy.core.engine] INFO: Closing spider (finished)
2018-01-06 10:06:38 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 1, 6, 12, 6, 38, 572007),
 'log_count/DEBUG': 1,
 'log_count/INFO': 7,
 'memusage/max': 52027392,
 'memusage/startup': 52027392,
 'start_time': datetime.datetime(2018, 1, 6, 12, 6, 38, 565527)}
2018-01-06 10:06:38 [scrapy.core.engine] INFO: Spider closed (finished)

I'm not getting any errors.

1 Answer:

Answer 0 (score: 0):

As @stranac mentioned in the comments:

The name of the method is start_requests, not start_request.

Modify your code accordingly and try again.
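For reference, here is a minimal sketch of what the corrected spider could look like. It keeps the names from the question (teste, the informacao output file); besides renaming the method, extract_first() is used so that a single string, rather than a list, is written to the file:

import scrapy


class TesteSpider(scrapy.Spider):

    name = "teste"

    # Scrapy looks for start_requests (plural); a method called
    # start_request is never invoked, which is why the spider opened
    # and closed without crawling anything.
    def start_requests(self):
        url = 'http://www.youtube.com'
        yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        # extract_first() returns a single string (or None) instead of a list,
        # which is what f.write() expects.
        title = response.css('title::text').extract_first() or ''
        with open('informacao', 'w') as f:
            f.write(title)
        self.log('saved file successfully')

Alternatively, the start_requests override can be dropped entirely and a start_urls = ['http://www.youtube.com'] class attribute used instead, since Scrapy's default start_requests implementation builds the initial requests from that list.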