I have a spider that takes a list of addresses and requests information from a single page by POSTing each address, along with some session info, as a scrapy.FormRequest. However, the scraper seems to run the requests sequentially. I can tell because searching 10 addresses takes about 17 seconds, while searching 20 addresses takes roughly twice as long (about 32 seconds). Since the biggest delay is simply waiting for the responses and no complex parsing is going on, why would it take twice as long if the requests were actually running concurrently?
I have tried to make sure the concurrency settings in settings.py are set high enough, and I have added dont_filter=True to the form requests, but I still can't get them to run in parallel. It's also worth noting that the scraper does work in the sense that it pulls the information I need.
settings.py
CONCURRENT_REQUESTS_PER_DOMAIN = 20
CONCURRENT_REQUESTS_PER_IP = 20
CONCURRENT_REQUESTS = 20
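For reference, here is a sketch of the throttling-related part of settings.py as I understand it. DOWNLOAD_DELAY and AUTOTHROTTLE_ENABLED are shown at their documented Scrapy defaults; the "Overridden settings" line in the log below suggests they were not changed:

# Concurrency settings I have already raised:
CONCURRENT_REQUESTS = 20
CONCURRENT_REQUESTS_PER_DOMAIN = 20
CONCURRENT_REQUESTS_PER_IP = 20

# Other settings that can throttle requests, at their Scrapy defaults
# (not listed under "Overridden settings" in the log, so presumably unchanged):
DOWNLOAD_DELAY = 0            # a non-zero value delays consecutive requests per slot
AUTOTHROTTLE_ENABLED = False  # when enabled, Scrapy adjusts delays dynamically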
main.py
import csv
import json
from datetime import datetime

import scrapy
from scrapy import signals
from scrapy.xlib.pydispatch import dispatcher


class WaterSpider(scrapy.Spider):
    name = "water"

    def __init__(self, stats):
        dispatcher.connect(self.spider_closed, signals.spider_closed)
        self.stats = stats

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.stats)

    def start_requests(self):
        # Pull saved session ID info.
        with open('items.json') as f:
            sessioninfo = json.load(f)
        sessioninfo = sessioninfo[0]
        url = 'http://cityservices.baltimorecity.gov/water/'
        # Take the cookies from what was scraped above and add them to the new cookies to be passed.
        cookies = {
            'popup': 'seen',
            'ASP.NET_SessionId': sessioninfo['sessioncookie'],
        }
        # Same with the different view states.
        post_params = {
            '__VIEWSTATE': sessioninfo['VIEWSTATE'],
            '__VIEWSTATEGENERATOR': sessioninfo['VIEWSTATEGENERATOR'],
            '__EVENTVALIDATION': sessioninfo['EVENTVALIDATION'],
            'ctl00$ctl00$rootMasterContent$LocalContentPlaceHolder$btnGetInfoServiceAddress': 'Get Info',
        }
        # Run through all the addresses in our csv to start scraping.
        with open('Addresses.csv', 'r') as csvfile:
            addresses = csv.reader(csvfile)
            for x, row in enumerate(addresses):
                self.log("Row " + str(x))
                address = row[0]
                post_params['ctl00$ctl00$rootMasterContent$LocalContentPlaceHolder$ucServiceAddress$txtServiceAddress'] = address
                yield scrapy.FormRequest(
                    url=url,
                    callback=self.parseWaterBill,
                    cookies=cookies,
                    method='POST',
                    formdata=post_params,
                    meta={'address': address, 'timestamp': datetime.today(), 'row_num': str(x)},
                    errback=self.errback_httpbin,
                    dont_filter=True,
                )
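The parseWaterBill and errback_httpbin callbacks referenced above aren't shown. As a sanity check for whether the downloads actually overlap, here is a minimal sketch of placeholder versions (illustrative only, not my real callbacks) that go inside the class and simply log when each response arrives, so interleaved versus strictly sequential arrival is easy to spot in the log:

    def parseWaterBill(self, response):
        # Placeholder for illustration only -- the real callback parses the bill page.
        # Logging the arrival time per address shows whether responses come back
        # interleaved (concurrent) or strictly one after another (sequential).
        self.log("Got response for %s at %s"
                 % (response.meta['address'], datetime.today().isoformat()))

    def errback_httpbin(self, failure):
        # Placeholder error handler for illustration only.
        self.log("Request failed: %r" % failure)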
Output
/app/WaterBill/spiders/main.py:16: ScrapyDeprecationWarning: Importing from
scrapy.xlib.pydispatch is deprecated and will no longer be supported in
future Scrapy versions. If you just want to connect signals use the
from_crawler class method, otherwise import pydispatch directly if needed.
See: https://github.com/scrapy/scrapy/issues/1762
from scrapy.xlib.pydispatch import dispatcher
2018-09-09 15:02:28 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot:
WaterBill)
2018-09-09 15:02:28 [scrapy.utils.log] INFO: Versions: lxml 4.2.4.0, libxml2
2.9.8, cssselect 1.0.3, parsel 1.5.0, w3lib 1.19.0, Twisted 18.7.0, Python
3.6.6 (default, Jul 17 2018, 11:12:33) - [GCC 6.3.0 20170516], pyOpenSSL
18.0.0 (OpenSSL 1.1.0i 14 Aug 2018), cryptography 2.3.1, Platform Linux-
4.9.93-linuxkit-aufs-x86_64-with-debian-9.5
2018-09-09 15:02:28 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME':
'WaterBill', 'CONCURRENT_REQUESTS': 20, 'CONCURRENT_REQUESTS_PER_DOMAIN':
20, 'CONCURRENT_REQUESTS_PER_IP': 20, 'NEWSPIDER_MODULE':
'WaterBill.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES':
['WaterBill.spiders']}
2018-09-09 15:02:28 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats']
2018-09-09 15:02:28 [scrapy.middleware] INFO: Enabled downloader
middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-09-09 15:02:28 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-09-09 15:02:28 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-09-09 15:02:28 [scrapy.core.engine] INFO: Spider opened
2018-09-09 15:02:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0
pages/min), scraped 0 items (at 0 items/min)
2018-09-09 15:02:28 [scrapy.extensions.telnet] DEBUG: Telnet console
listening on 127.0.0.1:6023
(... intermediate output omitted ...)
2018-09-09 15:03:01 [scrapy.core.engine] INFO: Closing spider (finished)
2018-09-09 15:03:01 [water] DEBUG: Scraped 19 in 0:00:32.526211 seconds
2018-09-09 15:03:01 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 198397,
'downloader/request_count': 21,
'downloader/request_method_count/GET': 1,
'downloader/request_method_count/POST': 20,
'downloader/response_bytes': 2057507,
'downloader/response_count': 21,
'downloader/response_status_count/200': 21,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 9, 9, 15, 3, 1, 223408),
'item_scraped_count': 19,
'log_count/DEBUG': 62,
'log_count/INFO': 7,
'memusage/max': 51474432,
'memusage/startup': 51474432,
'response_received_count': 21,
'scheduler/dequeued': 20,
'scheduler/dequeued/memory': 20,
'scheduler/enqueued': 20,
'scheduler/enqueued/memory': 20,
'start_time': datetime.datetime(2018, 9, 9, 15, 2, 28, 697197)}
2018-09-09 15:03:01 [scrapy.core.engine] INFO: Spider closed (finished)