I have a scraping program that needs to click a "next" button while scraping. I actually asked a question here about a week ago on how to do this and got some good responses, but the code from the answer only partially works. It scrapes page 1 and page 2, but then instead of going on to page 3 it skips to the last page, page 10, and I can't figure out why.
import csv

from scrapy.spiders import Spider
from scrapy_splash import SplashRequest

from ..items import GameItem


def process_csv(csv_file):
    data = []
    reader = csv.reader(csv_file)
    next(reader)
    for fields in reader:
        if fields[0] != "":
            url = fields[0]
        else:
            continue  # skip the whole row if the url column is empty
        if fields[1] != "":
            ip = "http://" + fields[1] + ":8050"  # adding http and port because this is the needed scheme
        if fields[2] != "":
            useragent = fields[2]
        data.append({"url": url, "ip": ip, "ua": useragent})
    return data


class MySpider(Spider):
    name = 'splash_spider'  # Name of Spider

    # notice that we don't need to define start_urls
    # just make sure to get all the urls you want to scrape inside start_requests function
    # getting all the url + ip address + useragent pairs then request them
    def start_requests(self):
        # get the file path of the csv file that contains the pairs from the settings.py
        with open(self.settings["PROXY_CSV_FILE"], mode="r") as csv_file:
            # requests is a list of dictionaries like this -> {url: str, ua: str, ip: str}
            requests = process_csv(csv_file)
            for req in requests:
                # no need to create custom middlewares
                # just pass useragent using the headers param, and pass proxy using the meta param
                yield SplashRequest(url=req["url"], callback=self.parse, args={"wait": 3},
                                    headers={"User-Agent": req["ua"]},
                                    splash_url=req["ip"],
                                    )

    # Scraping
    def parse(self, response):
        item = GameItem()
        for game in response.css("tr"):
            # Card Name
            yield {
                'card_name': game.css("a.card_popup::text").get(),
            }
        next_page = response.css('table+ div a:nth-child(8)::attr("href")').get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)
Answer 0 (score: 1)
next_page = response.css('table+ div a:nth-child(8)::attr("href")').get()

You definitely don't want nth-child(8) here; you want the last div and, within it, the last a that carries an href attribute, i.e.:

response.css('#content > div:last-of-type > a[href]:last-of-type')

If you want to be extra diligent, also check the text of the matched <a> to make sure it contains the phrase Next.
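A minimal sketch of how that suggestion could be wired into parse, assuming the pagination links sit in the last div under #content and that the forward link's text contains Next (both taken from this answer, not verified against the live page):

def parse(self, response):
    for game in response.css("tr"):
        yield {"card_name": game.css("a.card_popup::text").get()}

    # select the last <a href> inside the last div under #content
    next_link = response.css("#content > div:last-of-type > a[href]:last-of-type")
    # only follow it if its text really is a "Next" link, so the spider
    # doesn't loop back around once it reaches the last page
    if next_link and "Next" in (next_link.css("::text").get() or ""):
        yield response.follow(next_link.attrib["href"], self.parse)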
Answer 1 (score: 0)
Here is the correct code; it needed to use XPath instead of CSS. It's working fine now.
next_page = response.xpath('//a[contains(., "- Next>>")]/@href').get()
if next_page is not None:
    yield response.follow(next_page, self.parse)
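As a side note on why the switch to XPath helps: contains(., "- Next>>") tests the string value of the whole <a> element, including text inside child tags, which a standard CSS selector cannot do. A tiny standalone check with parsel (the selector library Scrapy uses), on made-up HTML, illustrates the behavior:

from parsel import Selector

# hypothetical markup: the link text sits inside a child <b>, so a plain
# text-node match would miss it, while XPath's string value still sees it
html = '<div><a href="/page/3"><b>- Next&gt;&gt;</b></a></div>'
sel = Selector(text=html)

print(sel.xpath('//a[contains(., "- Next>>")]/@href').get())  # -> /page/3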