I'm scraping the following page, http://www.starcitygames.com/catalog/category/Duel%20Decks%20Venser%20vs%20Koth, and I need to get each card's name, price, stock, and condition. I've managed to get three of the four working, but I'm stuck on the condition: no matter what I try, it either gives me NULL or something else that's incorrect.
Here is part of the HTML:
<td class="deckdbbody search_results_7">
<a href="http://www.starcitygames.com/content/cardconditions">NM/M</a>
</td>
SplashSpider.py
import csv
from scrapy.spiders import Spider
from scrapy_splash import SplashRequest
from ..items import GameItem
# process the csv file so the url + ip address + useragent pairs are the same as defined in the file
# returns a list of dictionaries, example:
# [ {'url': 'http://www.starcitygames.com/catalog/category/Rivals%20of%20Ixalan',
#    'ip': 'http://204.152.114.244:8050',
#    'ua': "Mozilla/5.0 (BlackBerry; U; BlackBerry 9320; en-GB) AppleWebKit/534.11"},
#   ...
# ]
def process_csv(csv_file):
    data = []
    reader = csv.reader(csv_file)
    next(reader)
    for fields in reader:
        if fields[0] != "":
            url = fields[0]
        else:
            continue  # skip the whole row if the url column is empty
        if fields[1] != "":
            ip = "http://" + fields[1] + ":8050"  # adding http and port because this is the needed scheme
        if fields[2] != "":
            useragent = fields[2]
        data.append({"url": url, "ip": ip, "ua": useragent})
    return data
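
# Expected CSV layout, for illustration (hypothetical values taken from the example
# above; the first row is a header, consumed by next(reader)):
#   url,ip,user_agent
#   http://www.starcitygames.com/catalog/category/Rivals%20of%20Ixalan,204.152.114.244,Mozilla/5.0 (BlackBerry; U; BlackBerry 9320; en-GB) AppleWebKit/534.11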
class MySpider(Spider):
    name = 'splash_spider'  # Name of Spider

    # notice that we don't need to define start_urls
    # just make sure to get all the urls you want to scrape inside start_requests function
    # getting all the url + ip address + useragent pairs then request them
    def start_requests(self):
        # get the file path of the csv file that contains the pairs from the settings.py
        with open(self.settings["PROXY_CSV_FILE"], mode="r") as csv_file:
            # requests is a list of dictionaries like this -> {url: str, ua: str, ip: str}
            requests = process_csv(csv_file)
            for req in requests:
                # no need to create custom middlewares
                # just pass useragent using the headers param, and pass proxy using the meta param
                yield SplashRequest(url=req["url"], callback=self.parse, args={"wait": 3},
                                    headers={"User-Agent": req["ua"]},
                                    splash_url=req["ip"],
                                    )
    # Scraping
    def parse(self, response):
        item = GameItem()
        for game in response.css("tr[class^=deckdbbody]"):
            # Card Name
            item["card_name"] = game.css("a.card_popup::text").extract_first()
            item["condition"] = game.css("a::text").extract_first()  # Problem is here
            item["stock"] = game.css("td[class^=deckdbbody].search_results_8::text").extract_first()
            item["price"] = game.css("td[class^=deckdbbody].search_results_9::text").extract_first()
            yield item
Answer 0 (score: 1)
I don't think you can get the correct <a> element with that selector. The CSS for the condition says to take the first <a> inside tr[class^=deckdbbody], but the condition link is not the first <a> element within tr[class^=deckdbbody].
To select the correct element, you can use XPath's contains() to test whether it is the link you want:
>>> response.css("tr[class^=deckdbbody]").xpath(".//a[contains(@href, 'cardconditions')]/text()").extract()
['NM/M', 'PL', 'NM/M', 'NM/M', 'PL', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'PL', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'PL', 'NM/M', 'NM/M', 'PL', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'PL', 'NM/M', 'NM/M', 'NM/M']
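Dropped into your spider, the parse callback would then look something like this (a sketch; only the condition line differs from your version):

    # Scraping
    def parse(self, response):
        item = GameItem()
        for game in response.css("tr[class^=deckdbbody]"):
            # Card Name
            item["card_name"] = game.css("a.card_popup::text").extract_first()
            # take the <a> whose href points at the card-conditions page
            item["condition"] = game.xpath(".//a[contains(@href, 'cardconditions')]/text()").extract_first()
            item["stock"] = game.css("td[class^=deckdbbody].search_results_8::text").extract_first()
            item["price"] = game.css("td[class^=deckdbbody].search_results_9::text").extract_first()
            yield item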
Also, I don't think you need Scrapy Splash to scrape this site; the data seems to be available from the plain scrapy shell command.
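For instance, a minimal Splash-free spider might look like this (a sketch, assuming the page really does render without JavaScript, which is worth confirming in scrapy shell first; the spider name is made up):

import scrapy

class NoSplashSpider(scrapy.Spider):
    name = "starcity_plain"  # hypothetical name
    start_urls = [
        "http://www.starcitygames.com/catalog/category/Duel%20Decks%20Venser%20vs%20Koth",
    ]

    def parse(self, response):
        # same row/column selectors as in the question, minus the Splash plumbing
        for game in response.css("tr[class^=deckdbbody]"):
            yield {
                "card_name": game.css("a.card_popup::text").get(),
                "condition": game.xpath(".//a[contains(@href, 'cardconditions')]/text()").get(),
                "stock": game.css("td[class^=deckdbbody].search_results_8::text").get(),
                "price": game.css("td[class^=deckdbbody].search_results_9::text").get(),
            }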
It's also worth taking a look at https://stackoverflow.com/help/minimal-reproducible-example
Answer 1 (score: 1)
You need to specify the target cell in your CSS expression:
item["condition"] = game.css("td[class^=deckdbbody].search_results_7 a::text").get()