I'm trying to write a web scraper with scrapy. But when I try to use its interactive shell to test one of the pages, I get the error message below.
Error message:
2016-03-01 22:15:08 [scrapy] INFO: Scrapy 1.0.5 started (bot: momo)
2016-03-01 22:15:08 [scrapy] INFO: Optional features available: ssl, http11
2016-03-01 22:15:08 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'momo.spiders', 'FEED_FORMAT': 'json', 'SPIDER_MODULES': ['momo.spiders'], 'FEED_URI': 'j.json', 'BOT_NAME': 'momo'}
2016-03-01 22:15:08 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState
2016-03-01 22:15:08 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-03-01 22:15:08 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-03-01 22:15:08 [scrapy] INFO: Enabled item pipelines:
2016-03-01 22:15:08 [scrapy] INFO: Spider opened
2016-03-01 22:15:08 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-03-01 22:15:08 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-03-01 22:15:09 [scrapy] DEBUG: Crawled (200) <GET http://www.momoshop.com.tw/main/Main.jsp> (referer: None)
2016-03-01 22:15:11 [scrapy] DEBUG: Crawled (200) <GET http://www.momoshop.com.tw/goods/GoodsDetail.jsp?i_code=1697199&str_category_code=2200700058&cid=ec&oid=1c&mdiv=1000000000-bt_0_209_01-bt_0_209_01_e11&ctype=B> (referer: http://www.momoshop.com.tw/main/Main.jsp)
2016-03-01 22:15:11 [scrapy] DEBUG: Crawled (200) <GET http://www.momoshop.com.tw/goods/GoodsDetail.jsp?i_code=3753480&str_category_code=1514200303&cid=ec&oid=2a&mdiv=1000000000-bt_0_209_01-bt_0_209_01_e25&ctype=B> (referer: http://www.momoshop.com.tw/main/Main.jsp)
2016-03-01 22:15:11 [scrapy] DEBUG: Crawled (200) <GET http://www.momoshop.com.tw/goods/GoodsDetail.jsp?i_code=3754704&str_category_code=1417802005&cid=ec&oid=1f&mdiv=1000000000-bt_0_209_01-bt_0_209_01_e20&ctype=B> (referer: http://www.momoshop.com.tw/main/Main.jsp)
2016-03-01 22:15:11 [scrapy] DEBUG: Crawled (200) <GET http://www.momoshop.com.tw/goods/GoodsDetail.jsp?i_code=3811447&str_category_code=1318900078&cid=ec&oid=1d&mdiv=1000000000-bt_0_209_01-bt_0_209_01_e14&ctype=B> (referer: http://www.momoshop.com.tw/main/Main.jsp)
{'Date': ['Tue, 01 Mar 2016 14:15:10 GMT'], 'Set-Cookie': ['loginRsult=null;Expires=Thu, 01-Jan-01970 00:00:10 GMT;Path=/', 'loginUser=null;Expires=Thu, 01-Jan-01970 00:00:10 GMT;Path=/', 'cardUser=null;Expires=Thu, 01-Jan-01970 00:00:10 GMT;Path=/', '18YEARAGREE=null;Expires=Thu, 01-Jan-01970 00:00:10 GMT;Path=/', 'Browsehist=1697199,3753480,3754704,2189725;Path=/', 'FTOOTH=22;Path=/', 'DCODE=2200700058;Path=/'], 'Content-Type': ['']}
2016-03-01 22:15:11 [scrapy] ERROR: Spider error processing <GET http://www.momoshop.com.tw/goods/GoodsDetail.jsp?i_code=1697199&str_category_code=2200700058&cid=ec&oid=1c&mdiv=1000000000-bt_0_209_01-bt_0_209_01_e11&ctype=B> (referer: http://www.momoshop.com.tw/main/Main.jsp)
Traceback (most recent call last):
File "/Users/Shane/Desktop/scrapy/lib/python2.7/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
yield next(it)
File "/Users/Shane/Desktop/scrapy/lib/python2.7/site-packages/scrapy/spidermiddlewares/offsite.py", line 28, in process_spider_output
for x in result:
File "/Users/Shane/Desktop/scrapy/lib/python2.7/site-packages/scrapy/spidermiddlewares/referer.py", line 22, in <genexpr>
return (_set_referer(r) for r in result or ())
File "/Users/Shane/Desktop/scrapy/lib/python2.7/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
return (r for r in result or () if _filter(r))
File "/Users/Shane/Desktop/scrapy/lib/python2.7/site-packages/scrapy/spidermiddlewares/depth.py", line 54, in <genexpr>
return (r for r in result or () if _filter(r))
File "/Users/Shane/Desktop/scrapy/momo/momo/spiders/default_spider.py", line 35, in parseGoods
item.item = response.css('h1').extract()
AttributeError: 'Response' object has no attribute 'css'
I found that the response for this particular page has no Content-Type header; the content type is only declared in a head/meta tag instead.
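Scrapy chooses which Response subclass to build from the Content-Type header; when the header is blank it can fall back to a plain Response, which has no .css()/.xpath(). To confirm what the page itself declares, the meta tag can be read with the standard library alone. This is a sketch: the page snippet and its big5 charset are made-up placeholders, not the real momoshop markup.

```python
from html.parser import HTMLParser

class MetaContentType(HTMLParser):
    """Collects the content type declared in a <meta http-equiv> tag,
    for pages that omit the Content-Type HTTP header."""
    def __init__(self):
        super().__init__()
        self.content_type = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)  # HTMLParser lowercases tag and attribute names
        if tag == "meta" and attrs.get("http-equiv", "").lower() == "content-type":
            self.content_type = attrs.get("content")

# Illustrative snippet standing in for the real page body
page = ('<html><head>'
        '<meta http-equiv="Content-Type" content="text/html; charset=big5">'
        '</head><body><h1>item</h1></body></html>')

parser = MetaContentType()
parser.feed(page)
print(parser.content_type)  # text/html; charset=big5
```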
import scrapy
from scrapy.http import Request

class MomoItem(scrapy.Item):
    item = scrapy.Field()
    price = scrapy.Field()
    specification = scrapy.Field()

class MomoSpider(scrapy.Spider):
    name = "momo"
    allowed_domains = ["www.momoshop.com.tw"]
    start_urls = [
        "http://www.momoshop.com.tw/main/Main.jsp"
    ]

    def parse(self, response):
        for href in response.xpath('//a[contains(@href, "/goods")]/@href'):
            url = response.urljoin(href.extract())
            yield Request(url, callback=self.parseGoods)

        # for href in response.xpath('//a[contains(@href, "/category")]'):
        #     url = response.urljoin(href.extract())
        #     yield scrapy.Request(url, callback=self.parse)
        #
        # for href in response.xpath('//a[contains(@href, "/brand")]'):
        #     url = response.urljoin(href.extract())
        #     yield scrapy.Request(url, callback=self.parse)

    def parseGoods(self, response):
        item = MomoItem()
        print(response.headers)
        item.item = response.css('h1').extract()
        item.price = response.xpath('//ul[@class="prdPrice"]/li/span/text()').extract()
        print(item)
        yield item
Traceback (most recent call last):
File "/Users/Shane/Desktop/scrapy/lib/python2.7/site-packages/scrapy/utils/defer.py", line 45, in mustbe_deferred
result = f(*args, **kw)
File "/Users/Shane/Desktop/scrapy/lib/python2.7/site-packages/scrapy/core/spidermw.py", line 48, in process_spider_input
return scrape_func(response, request, spider)
File "/Users/Shane/Desktop/scrapy/lib/python2.7/site-packages/scrapy/core/scraper.py", line 145, in call_spider
dfd.addCallbacks(request.callback or spider.parse, request.errback)
File "/Users/Shane/Desktop/scrapy/lib/python2.7/site-packages/twisted/internet/defer.py", line 299, in addCallbacks
assert callable(callback)
AssertionError
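The AssertionError above comes from Twisted's addCallbacks checking that the callback is callable. Passing the *result of calling* a generator function (a generator object) instead of the function itself trips that check. A minimal pure-Python sketch of the distinction (parse_goods here is a stand-in, not the actual spider code):

```python
def parse_goods(response):
    # Generator function standing in for a spider callback
    yield {"title": response}

good_cb = parse_goods          # the function object itself: callable
bad_cb = parse_goods("page")   # calling it returns a generator object

print(callable(good_cb))   # True
print(callable(bad_cb))    # False -> this is what trips `assert callable(callback)`
```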
Answer 0 (score: 2)
The problem was with the default scrapy HTML parser. Once I switched to another parser it worked like a charm: lxml does not seem to handle this broken HTML as gracefully as Beautiful Soup 4 does.
from bs4 import BeautifulSoup
import scrapy

class MomoItem(scrapy.Item):
    item = scrapy.Field()
    price = scrapy.Field()
    # specification = scrapy.Field()

class MomoSpider(scrapy.Spider):
    name = "momo"
    allowed_domains = ["www.momoshop.com.tw"]
    start_urls = ["http://www.momoshop.com.tw/main/Main.jsp"]

    def parse(self, response):
        for href in response.xpath('//a[contains(@href, "/goods")]/@href'):
            url = response.urljoin(href.extract())
            yield scrapy.Request(url, callback=self.parseGoods)

        # for href in response.xpath('//a[contains(@href, "/category")]'):
        #     url = response.urljoin(href.extract())
        #     yield scrapy.Request(url, callback=self.parse)
        #
        # for href in response.xpath('//a[contains(@href, "/brand")]'):
        #     url = response.urljoin(href.extract())
        #     yield scrapy.Request(url, callback=self.parse)

    def parseGoods(self, response):
        soup = BeautifulSoup(response._body, 'html.parser')
        item = MomoItem()
        item['item'] = soup.find_all('h1')[0].get_text()
        item['price'] = (soup.find_all('ul', class_='prdPrice')[0]
                         .find_all('li', class_='special')[0].span.get_text())
        yield item
Answer 1 (score: 1)
Are you using from scrapy.selector import Selector?
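A Selector can typically be built directly from raw text (e.g. Selector(text=response.body)), which avoids depending on which Response subclass Scrapy chose. As a dependency-free illustration of the same idea — running an XPath-style query over a raw markup string — stdlib ElementTree works on well-formed snippets (the snippet below is hypothetical; real pages need a lenient parser):

```python
import xml.etree.ElementTree as ET

# Hypothetical, well-formed snippet; broken real-world HTML would need
# a forgiving parser such as html.parser or Beautiful Soup
body = "<html><body><h1>product title</h1></body></html>"

root = ET.fromstring(body)
print(root.find(".//h1").text)  # product title
```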
Edit:
The problem is the callback. The callback function needs its argument, self.parseGoods(response); otherwise .css() is applied to the function parseGoods. Works on my laptop.
EDITEDIT:
def parse(self, response):
    for href in response.xpath('//a[contains(@href, "/goods")]/@href'):
        url = response.urljoin(href.extract())
        self.parseGoods(response)
        yield Request(url, callback=self.parseGoods(response))

    # for href in response.xpath('//a[contains(@href, "/category")]'):
    #     url = response.urljoin(href.extract())
    #     yield scrapy.Request(url, callback=self.parse)
    #
    # for href in response.xpath('//a[contains(@href, "/brand")]'):
    #     url = response.urljoin(href.extract())
    #     yield scrapy.Request(url, callback=self.parse)

def parseGoods(self, response):
    item = MomoItem()
    print(response.headers)
    item['item'] = response.css('h1').extract()
    item['price'] = response.xpath('//ul[@class="prdPrice"]/li/span/text()').extract()
    print(item)
    return item
OK, this should work. I changed yield item to return item and changed the item access to dict style, e.g. item['item'] instead of item.item. Try it and tell me if something is wrong.
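The item['item'] change matters because scrapy.Item stores its fields dict-style and rejects attribute assignment. A toy stand-in (not Scrapy's actual implementation) showing that behaviour:

```python
class StrictItem(dict):
    """Minimal stand-in for scrapy.Item's behaviour: fields are dict
    entries, and attribute-style assignment is rejected."""
    fields = {"item", "price"}

    def __setitem__(self, key, value):
        if key not in self.fields:
            raise KeyError(key)
        dict.__setitem__(self, key, value)

    def __setattr__(self, name, value):
        raise AttributeError("Use item[%r] = value to set a field" % name)

it = StrictItem()
it["item"] = "product title"   # dict-style access works
try:
    it.price = 100             # attribute-style raises, as scrapy.Item does
except AttributeError as e:
    print(e)
```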