Scrapy doesn't follow links (go to the next page) when the URLs are built from a CSV file

Date: 2016-10-01 11:40:11

Tags: python scrapy

Hi, I have this simple Scrapy spider. As you can see, I append the ship's IMO number to response.url. But when I run the program I get no results (and no error messages). I have set USER_AGENT in settings.py.

import re
import csv

import scrapy
from mcdetails.items import McdetailsItem

class GetVesselDetails(scrapy.Spider):
    name = "mconnector"
    allowed_domains = ["http://maritime-connector.com/"]
    start_urls = [
        'http://maritime-connector.com/ship/',
    ]

    def parse(self, response):
        with open('output.csv', "r") as f:
            mclist = csv.reader(f, delimiter=',')
            next(f)
            for l in mclist:
                if not l[1] in (None, ""):
                    # eg. 'http://maritime-connector.com/ship' + '849949'
                    mcurl = response.urljoin(l[1])
                    yield scrapy.Request(mcurl, callback=self.parse_ships_details)

    def parse_ships_details(self,response):
        item = McdetailsItem()
        # item = response.meta['item']
        item['v_name'] = response.xpath('//title/text()').extract_first()
        yield {'vessell name': item['v_name']}

This is the output:

2016-10-01 15:37:52 [scrapy] INFO: Enabled item pipelines:
[]
2016-10-01 15:37:52 [scrapy] INFO: Spider opened
2016-10-01 15:37:52 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-10-01 15:37:52 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6024
2016-10-01 15:37:53 [scrapy] DEBUG: Redirecting (301) to <GET http://maritime-connector.com/robots.txt/> from <GET http://maritime-connector.com/robots.txt>
2016-10-01 15:37:53 [scrapy] DEBUG: Crawled (404) <GET http://maritime-connector.com/robots.txt/> (referer: None)
2016-10-01 15:37:53 [scrapy] DEBUG: Crawled (200) <GET http://maritime-connector.com/ship/> (referer: None)
2016-10-01 15:37:53 [scrapy] DEBUG: Filtered offsite request to 'maritime-connector.com': <GET http://maritime-connector.com/ship/8986080>
2016-10-01 15:37:54 [scrapy] INFO: Closing spider (finished)
2016-10-01 15:37:54 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1109,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 3,
 'downloader/response_bytes': 25884,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/301': 1,
 'downloader/response_status_count/404': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 10, 1, 11, 37, 54, 36472),
 'log_count/DEBUG': 5,
 'log_count/INFO': 7,
 'offsite/domains': 1,
 'offsite/filtered': 913,
 'request_depth_max': 1,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2016, 10, 1, 11, 37, 52, 641119)}

1 answer:

Answer 0 (score: 1)

The log output is self-explanatory:

2016-10-01 15:37:53 [scrapy] DEBUG: Filtered offsite request to 'maritime-connector.com': <GET http://maritime-connector.com/ship/8986080>

Your allowed_domains is wrong. You should use:

allowed_domains = ["maritime-connector.com"]

The http:// scheme is not part of a domain name.
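To see why the scheme breaks the check, here is a rough, simplified sketch of the kind of host comparison Scrapy's OffsiteMiddleware performs (the real middleware compiles a regex from allowed_domains, but the idea is the same): the request's hostname must equal an allowed domain or be a subdomain of it, so an entry that still contains "http://" can never match any hostname.

```python
from urllib.parse import urlparse

def is_offsite(url, allowed_domains):
    """Simplified stand-in for Scrapy's offsite check: a URL is on-site
    if its host equals an allowed domain or is a subdomain of one."""
    host = urlparse(url).netloc
    return not any(host == d or host.endswith("." + d) for d in allowed_domains)

url = "http://maritime-connector.com/ship/8986080"

# With the scheme left in, the host never matches -> request gets filtered:
print(is_offsite(url, ["http://maritime-connector.com/"]))  # True (filtered)

# With a bare domain, the request passes:
print(is_offsite(url, ["maritime-connector.com"]))  # False (allowed)
```

With the corrected list, the 913 requests that showed up under `offsite/filtered` in the stats dump are scheduled and crawled instead of being dropped.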