I know this question has been asked many times, but it never seems to get resolved anywhere. I have looked through several threads and tried every suggestion without success.
The question is: why is nothing being stored in my local database?
I am scraping a website and following two links from each page to collect more data. Here is what I have already checked successfully:
mySpider.py
import scrapy
from mySpider.items import myItems

class bandSpider(scrapy.Spider):
    name = "info"

    def start_requests(self):
        urls = ['http://example.com']
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        item = myItems()
        item['id'] = response.xpath('//h1/a/@href').re_first(r'\d+')
        item['name'] = response.xpath('//h1/a/text()').extract_first()
        item['logo'] = response.xpath('//a[@id="logo"]/@href').extract()
        item['img'] = response.xpath('//a[@id="photo"]/@href').extract()
        yield item
        # follow links
        yield scrapy.Request('https://example.com/page1' + response.xpath('//h1/a/@href').re_first(r'\d+'), callback=self.parse_page1)
        yield scrapy.Request('https://example.com/page2', callback=self.parse_page2)

    def parse_page1(self, response):
        item = myItems()
        item['comment'] = response.xpath('//body//text()').extract()
        yield item

    def parse_page2(self, response):
        item = myItems()
        item['another'] = response.css('a.link ::text').extract()
        yield item
pipelines.py
import scrapy
import pymysql

class MyspiderPipeline(object):
    def __init__(self):
        self.conn = pymysql.connect(host='localhost', user='root', password='', database='mydb', charset='utf8')
        self.cursor = self.conn.cursor()
        self.conn.autocommit(True)

    def process_item(self, item, spider):
        for i in range(5):
            try:
                self.cursor.execute("""INSERT INTO `tablename` ( `id`, `name` , `logo` , `img` , `comment` )
                    VALUES ( %s , %s , %s , %s , %s ) ON DUPLICATE KEY UPDATE name = name, logo = logo, img = img, comment = comment""",
                    (item['id'].encode('utf-8'), item["name"].encode('utf-8'), item["logo".encode('utf-8')], item["img"].encode('utf-8'), item["comment"].encode('utf-8')))
            except:
                continue
items.py
import scrapy

class bandinfo(scrapy.Item):
    # define the fields for your item here like:
    id = scrapy.Field()
    name = scrapy.Field()
    logo = scrapy.Field()
    img = scrapy.Field()
    comment = scrapy.Field()
    another = scrapy.Field()

    return item

    def close_spider(self, spider):
        self.cursor.close()
        self.conn.close()
settings.py
BOT_NAME = 'mySpider'
SPIDER_MODULES = ['mySpider.spiders']
NEWSPIDER_MODULE = 'mySpider.spiders'
ROBOTSTXT_OBEY = True
COOKIES_ENABLED = False
ITEM_PIPELINES = {
'mySpider.pipelines.MyspiderPipeline': 300,
}
Console output
PS \scrapy\mySpider> scrapy crawl info
2017-09-14 11:29:55 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: mySpider)
2017-09-14 11:29:55 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'mySpider', 'COOKIES_ENABLED': False, 'NEWSPIDER_MODULE': 'mySpider.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['mySpider.spiders']}
2017-09-14 11:29:55 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2017-09-14 11:29:55 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-09-14 11:29:55 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-09-14 11:29:55 [scrapy.middleware] INFO: Enabled item pipelines:
['mySpider.pipelines.MyspiderPipeline']
2017-09-14 11:29:55 [scrapy.core.engine] INFO: Spider opened
2017-09-14 11:29:55 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-09-14 11:29:55 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-09-14 11:29:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://example.com/robots.txt> (referer: None)
2017-09-14 11:29:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://example.com> (referer: None)
2017-09-14 11:29:57 [scrapy.core.scraper] DEBUG: Scraped from <200 https://example.com/img.jpeg>
{'img': ['https://example.com/img.jpeg'],
'logo': ['https://example.com/logo.jpeg'],
'name': 'any name',
'id': '546'}
2017-09-14 11:29:57 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://example.com/page1> (referer: https://example.com )
2017-09-14 11:29:57 [scrapy.core.scraper] DEBUG: Scraped from <200 https://example.com/page1>
{'comment': ['tons of text']}
2017-09-14 11:29:57 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://example.com/page2> (referer: https://example.com )
2017-09-14 11:29:57 [scrapy.core.scraper] DEBUG: Scraped from <200 https://example.com/page2>
{'another': ['tons of text']}
2017-09-14 11:29:57 [scrapy.core.engine] INFO: Closing spider (finished)
2017-09-14 11:29:57 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1101,
'downloader/request_count': 4,
'downloader/request_method_count/GET': 4,
'downloader/response_bytes': 10979,
'downloader/response_count': 4,
'downloader/response_status_count/200': 4,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 9, 14, 9, 29, 57, 981309),
'item_scraped_count': 3,
'log_count/DEBUG': 8,
'log_count/INFO': 7,
'request_depth_max': 1,
'response_received_count': 4,
'scheduler/dequeued': 3,
'scheduler/dequeued/memory': 3,
'scheduler/enqueued': 3,
'scheduler/enqueued/memory': 3,
'start_time': datetime.datetime(2017, 9, 14, 9, 29, 55, 590397)}
2017-09-14 11:29:57 [scrapy.core.engine] INFO: Spider closed (finished)
What I notice in the output is:
Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
I have also tried searching for this problem, but I cannot find any solution. What am I missing? It seems many people have the same problem and it never gets solved.
Answer (score: 1)
Upgrade to the latest Scrapy, just to make sure this is not an issue that has already been fixed:
pip install --upgrade --force-reinstall scrapy
Also, why do you have close_spider in items.py?
And why are you masking problems with a bare except: and continue?
You should catch the exception and print it; the exception will tell you whether, and why, the query failed.
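As a rough illustration of that advice, here is a minimal sketch of the pipeline with the exception logged instead of swallowed and with close_spider moved back into the pipeline class. The table, column, and field names are taken from the question; joining the multi-valued fields into plain strings is an assumption about the intended schema.

import pymysql


def _single(value):
    # extract() returns a list of strings; join it so MySQL receives a plain string.
    if isinstance(value, (list, tuple)):
        return ' '.join(value)
    return value


class MyspiderPipeline(object):
    def __init__(self):
        self.conn = pymysql.connect(host='localhost', user='root', password='',
                                    database='mydb', charset='utf8')
        self.cursor = self.conn.cursor()
        self.conn.autocommit(True)

    def process_item(self, item, spider):
        try:
            # item.get() avoids a KeyError for items that only carry some fields
            # (e.g. the ones yielded from parse_page1 / parse_page2).
            self.cursor.execute(
                """INSERT INTO `tablename` (`id`, `name`, `logo`, `img`, `comment`)
                   VALUES (%s, %s, %s, %s, %s)
                   ON DUPLICATE KEY UPDATE name = VALUES(name), logo = VALUES(logo),
                                           img = VALUES(img), comment = VALUES(comment)""",
                (item.get('id'), item.get('name'), _single(item.get('logo')),
                 _single(item.get('img')), _single(item.get('comment'))))
        except Exception:
            # Log the full traceback instead of silently continuing,
            # so the crawl log shows exactly why the INSERT failed.
            spider.logger.exception('Failed to insert item %r', item)
        return item

    def close_spider(self, spider):
        # Clean-up belongs here in the pipeline, not in items.py.
        self.cursor.close()
        self.conn.close()

With the bare except gone, a failed query shows up in the crawl log instead of disappearing silently.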