My problem is that the project doesn't work. I did my best to research this, but I still have to ask. I have simplified my code as shown below.
In short, I want to fetch details of some products from a website, and I have to use Splash so I can read some CSS-rendered content. I have registered one item and two DB classes; my plan is to store the products in a products table and their image paths in another.
However, in the end the images do get downloaded, but the item pipeline is never triggered.
In my pipeline I only ever get the two photos ->
Although I can get the pictures, I never see my message:
print("pipeline" + image_url)
First, pipeline.py:
=============
from sqlalchemy.orm import sessionmaker
from scrapy.exceptions import DropItem
from itembot.database.models import Products, db_connect, create_products_table
from scrapy.pipelines.images import ImagesPipeline

class ImagesPipeline(ImagesPipeline):

    def get_media_requests(self, item, info):
        for image_url in item["image_urls"]:
            print("pipeline" + image_url)
            yield scrapy.Request(image_url)

    def item_completed(self, results, item, info):
        image_paths = [x["path"] for ok, x in results if ok]
        print("imagepath" + image_paths)
        if not image_paths:
            raise DropItem("Item contains no images")
        item["image_paths"] = image_paths
        return item

class ItembotPipeline(object):

    def __init__(self):
        print("pipeline inited: ")
        engine = db_connect()
        create_products_table(engine)
        self.Session = sessionmaker(bind=engine)
        print("end init")

    def process_item(self, item, spider):
        print("pipeline Entered : ", item)
        print("pipeline Entered : item is products ", item)
        products = Products(**item)
        try:
            session = self.Session()
            print("pipeline adding : ", item)
            session.add(products)
            session.commit()
            print("pipeline commited : ", item)
            session.refresh(products)
            item[id] = products[id]
            yield item[id]
        except:
            session.rollback()
            raise
        finally:
            session.close()

        if products[id] is not None:
            print("pipeline 2if: ", item)
            productsphotos = ProductsPhotos(**item)
            try:
                session = self.Session()
                session.add(productsphotos)
                session.commit()
                session.refresh(productsphotos)
            except:
                session.rollback()
                raise
            finally:
                session.close()
        return item
And here is the spider:
import scrapy
from scrapy.loader import ItemLoader
from scrapy import Request
from w3lib.html import remove_tags
import re
from ..database.models import Products
from itembot.items import ItembotItem
from scrapy_splash import SplashRequest

class FreeitemSpider(scrapy.Spider):
    name = "freeitem"
    start_urls = [
        "https://google.com.hk",
    ]

    def parse(self, response):
        yield SplashRequest(url=response.url, callback=self.parse_product, args={"wait": 0.5})

    def parse_product(self, response):
        products = response.css("div.classified-body.listitem.classified-summary")
        c = 0
        item = []
        for product in products:
            item = ItembotItem()
            imageurl = {}
            fullurls = []
            item["title"] = product.css("h4.R a::text").extract_first()
            pc = product.css("div#gallery" + str(c) + " ul a::attr(href)").extract()
            for link in pc:
                fullurls.append(response.urljoin(link))
            item["image_urls"] = fullurls
            url = product.css("a.button-tiny-short.R::attr(href)").extract_first()
            item["webURL"] = response.urljoin(url)
            c = c + 1
            yield [item]
This is my item.py:
import scrapy

class ItembotItem(scrapy.Item):
    id = scrapy.Field(default="null")
    title = scrapy.Field(default="null")
    details = scrapy.Field(default="null")
    webURL = scrapy.Field(default="null")
    images = scrapy.Field(default="null")
    image_urls = scrapy.Field(default="null")
settings.py
ITEM_PIPELINES = {
    "itembot.pipelines.ItembotPipeline": 300,
    "scrapy.pipelines.images.ImagesPipeline": 1,
}
IMAGES_STORE = "./photo"
model.py
class Products(DeclarativeBase):
    __tablename__ = "products"
    id = Column(Integer, primary_key=True)
    title = Column(String(300))
    webURL = Column(String(200))

    def __str__(self):
        return self.title

class ProductsPhotos(DeclarativeBase):
    __tablename__ = "products_photos"
    id = Column(Integer, primary_key=True)
    product_ID = Column(ForeignKey(Products.id), nullable=False)
    photo_path = Column(String(200))
    parent = relationship(Products, load_on_pending=True)
Answer (score: 1):
I found a major mistake that can explain your problem.

First:

class ImagesPipeline(ImagesPipeline)

Don't use the same name for your own class as for its parent class. Better:

class MyImagesPipeline(ImagesPipeline)
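The shadowing problem can be reproduced in plain Python (the `Pipeline` class below is purely illustrative, not Scrapy's): once the subclass reuses the parent's name, that name is rebound, and any later lookup by name can only ever find the subclass.

```python
class Pipeline:  # stands in for scrapy's ImagesPipeline (illustration only)
    def run(self):
        return "parent"

# Reusing the parent's name rebinds it: from here on, "Pipeline"
# refers to the subclass, and the original class is only reachable
# through __bases__ or its defining module.
class Pipeline(Pipeline):
    def run(self):
        return "child of " + super().run()

print(Pipeline().run())  # child of parent
```

This still works because `super()` resolves through the class object itself rather than the (now rebound) name, but it makes the code needlessly confusing.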
Now your main mistake:

ITEM_PIPELINES = {
    ...
    "scrapy.pipelines.images.ImagesPipeline": 1,
}

You use the standard ImagesPipeline from scrapy.pipelines.images, not your ImagesPipeline (or MyImagesPipeline) from itembot.pipelines.

That is why it downloads the images but never runs print("pipeline" + image_url).
It should be:

ITEM_PIPELINES = {
    ...
    "itembot.pipelines.ImagesPipeline": 1,
}
Or, if you use the name MyImagesPipeline:

ITEM_PIPELINES = {
    ...
    "itembot.pipelines.MyImagesPipeline": 1,
}
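To see why the dotted path in settings decides which class runs: Scrapy resolves each ITEM_PIPELINES key by importing the module part of the string and taking the attribute part. A simplified sketch of that resolution (not Scrapy's actual loader, and using a stand-in stdlib path instead of a pipeline class):

```python
import importlib

def load_object(path):
    # Split "package.module.ClassName" into module path and attribute,
    # import the module, and return the attribute -- roughly what
    # Scrapy does with each ITEM_PIPELINES key.
    module_path, _, name = path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, name)

# Stand-in path for illustration. With "scrapy.pipelines.images.ImagesPipeline"
# this resolution returns the stock class and your subclass in
# itembot.pipelines is never touched.
cls = load_object("collections.OrderedDict")
print(cls.__module__, cls.__name__)  # collections OrderedDict
```

So the string in settings.py must point at the module that defines your subclass, or your overridden get_media_requests is never called.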