I've just started using Scrapy to crawl websites, and I have more than 9,000 URLs to scrape.
I've tried it and it works, except that I want the JSON output grouped by URL: if I scrape ten items from url1, I want those items inside a JSON object keyed to url1, the same for url2, and so on. Something like this:
{"url1": "www.reddit.com/page1",
"results1: {
["name": "blabla",
"link": "blabla",
],
["name": "blabla",
"link": "blabla",
],
["name": "blabla",
"link": "blabla",
]
},
{"url2": "www.reddit.com/page2",
"results2: {
["name": "blabla",
"link": "blabla",
],
["name": "blabla",
"link": "blabla",
],
["name": "blabla",
"link": "blabla",
]
}
Is it possible to do this directly? Or is it better to scrape the whole site into a flat list and group the results afterwards, once the crawl is done?
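If post-processing turns out to be the easier route, I imagine it would look something like the rough sketch below. This assumes I add a "url" field to every scraped item and export the flat feed as items.json (both the field name and the filename are just placeholders):

import json
from collections import defaultdict

# Load the flat feed export produced by the spider
# (assumes every item carries a "url" field).
with open("items.json") as f:
    items = json.load(f)

# Group items by the page they were scraped from.
grouped = defaultdict(list)
for item in items:
    url = item.pop("url")
    grouped[url].append(item)

# Write one object per URL, each with its own list of results.
output = [{"url": url, "results": results} for url, results in grouped.items()]
with open("grouped.json", "w") as f:
    json.dump(output, f, indent=2)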
My current code:
import scrapy


class glenmarchSpider(scrapy.Spider):
    name = "glenmarch"

    def start_requests(self):
        start_urls = reversed([
            'https://www.glenmarch.com/cars/results?make=&model=&auction_house_id=&auction_location=&year_start=1913&year_end=1916&low_price=&high_price=&auction_id=&fromDate=&toDate=&keywords=AC+10+HP&show_unsold_cars=0&show_unsold_cars=1?limit=9999',
            'https://www.glenmarch.com/cars/results?make=&model=&auction_house_id=&auction_location=&year_start=1918&year_end=1928&low_price=&high_price=&auction_id=&fromDate=&toDate=&keywords=AC+12+HP&show_unsold_cars=0&show_unsold_cars=1?limit=9999'
        ])
        for url in start_urls:
            yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        for caritem in response.css("div.car-item-border"):
            yield {
                "model": caritem.css("div.make::text").get(),
                "price": caritem.css("div.price::text").get(),
                "auction": caritem.css("div.auctionHouse::text").get(),
                "date": caritem.css("div.date::text").get(),
                "auction_url": caritem.css("div.view-auction a::attr(href)").get(),
                "img": caritem.css("img.img-responsive::attr(src)").get()
            }
Answer 0 (score: 0)
Wouldn't simply adding response.url to each item do the job?
yield {
    "url": response.url,
    # ...
}
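Building on that, you could also yield a single item per page so the feed export is already grouped the way the question describes. Here is a minimal sketch of such a parse method, reusing the selectors from the question's spider; it uses fixed "url" and "results" keys rather than the numbered url1/results1 keys shown in the question, which is easier to consume downstream:

def parse(self, response):
    # One item per page: the page URL plus the list of results found on it.
    yield {
        "url": response.url,
        "results": [
            {
                "model": caritem.css("div.make::text").get(),
                "price": caritem.css("div.price::text").get(),
                "auction": caritem.css("div.auctionHouse::text").get(),
                "date": caritem.css("div.date::text").get(),
                "auction_url": caritem.css("div.view-auction a::attr(href)").get(),
                "img": caritem.css("img.img-responsive::attr(src)").get()
            }
            for caritem in response.css("div.car-item-border")
        ],
    }

Running scrapy crawl glenmarch -o items.json would then produce one JSON object per scraped page, each containing its own list of results.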