I have written code to scrape an entire page with Scrapy in Python. Below I have pasted the main.py code. However, whenever I run the spider, it only scrapes the first page (DEBUG: Scraped from <200 https://www.tuscc.si/produkti/instant-juhe>), which is also what the request headers show when I inspect them.
I tried adding the "Request Payload" field data, pasted here: {"action":"loadList","skip":64,"filter":{"1005":[],"1006":[],"1007":[],"1009":[],"1013":[]}}, and when I try to open the page with it (modified like this:
https://www.tuscc.si/produkti/instant-juhe#32;'action':'loadList';'skip':'32';'sort':'none'
), the browser opens it. But the scrapy shell does not. I also tried adding the number from the request URL https://www.tuscc.si/cache/script/tuscc.js?1563872492384, where the query string parameter is 1563872492384; but it still will not scrape from the requested page.
I have also tried many variations and added many things I read about online, just to see if there would be any progress, but nothing worked.
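(A likely explanation, added for reference: everything after `#` in a URL is a fragment, which browsers keep client-side and never send to the server, so every request to these URLs hits the same first page. A quick standalone check:)

```python
from urllib.parse import urlsplit

# The part after '#' is a URL fragment; it is never transmitted to the
# server, so the server sees identical requests for every "page".
url = "https://www.tuscc.si/produkti/instant-juhe#64;1563872492384;"
parts = urlsplit(url)
print(parts.fragment)  # the offset stays client-side
print(parts.path)      # the only path the server actually sees
```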
The code is:
from scrapy.spiders import CrawlSpider
from tus_pomos.items import TusPomosItem
from tus_pomos.scrapy_splash import SplashRequest


class TusPomosSpider(CrawlSpider):
    name = 'TUSP'
    allowed_domains = ['www.tuscc.si']
    start_urls = [
        "https://www.tuscc.si/produkti/instant-juhe#0;1563872492384;",
        "https://www.tuscc.si/produkti/instant-juhe#64;1563872492384;",
    ]
    download_delay = 5.0

    def start_requests(self):
        # payload = [
        #     {"action": "loadList",
        #      "skip": 0,
        #      "filter": {
        #          "1005": [],
        #          "1006": [],
        #          "1007": [],
        #          "1009": [],
        #          "1013": []}
        #      }]
        for url in self.start_urls:
            r = SplashRequest(
                url, self.parse,
                magic_response=False, dont_filter=True,
                endpoint='render.json',
                meta={'original_url': url,
                      'dont_redirect': True},
                args={'wait': 2,
                      'html': 1})
            r.meta['dont_redirect'] = True
            yield r

    def parse(self, response):
        items = TusPomosItem()
        pro = response.css(".thumb-box")
        for p in pro:
            pro_link = p.css("a::attr(href)").extract_first()
            pro_name = p.css(".description::text").extract_first()
            items['pro_link'] = pro_link
            items['pro_name'] = pro_name
            yield items
Finally, what I am asking is how to scrape all the pages from the pagination, for example this page (I also tried it with the scrapy shell url command):
https://www.tuscc.si/produkti/instant-juhe#64;1563872492384;
But the response is always the first page, which gets scraped over and over again.
I would be grateful for any help. Thank you.
The parse_detail generator function:
def parse_detail(self, response):
    items = TusPomosItem()
    pro = response.css(".thumb-box")
    for p in pro:
        pro_link = p.css("a::attr(href)").extract_first()
        pro_name = p.css(".description::text").extract_first()
        items['pro_link'] = pro_link
        items['pro_name'] = pro_name
        my_details = {
            'pro_link': pro_link,
            'pro_name': pro_name
        }
        # Note: 'w' mode reopens and overwrites the file on every item,
        # so only the last product survives; also requires `import json`
        # at module level.
        with open('pro_file.json', 'w') as json_file:
            json.dump(my_details, json_file)
        yield items
    # yield scrapy.FormRequest(
    #     url='https://www.tuscc.si/produkti/instant-juhe',
    #     callback=self.parse_detail,
    #     method='POST',
    #     headers=self.headers
    # )
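(A minimal sketch, added for illustration, of writing one JSON record per item without overwriting the file each time; the field names match the question's items, and the sample record is hypothetical:)

```python
import json

def dump_item(record, path="pro_file.jsonl"):
    # Append one JSON object per line (JSON Lines); 'a' mode avoids
    # clobbering the file on every item, unlike open(..., 'w') in a loop.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

dump_item({"pro_link": "/produkti/instant-juhe/1", "pro_name": "Example soup"})
```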
Here I am not sure whether I should assign the "items" variable as it is, or whether I should get it from response.body instead. Also, should the yield stay as it is, or should I change it to a Request (as in the code from the given ANSWER)?
I am new to this, so thank you for your understanding!
Answer (score: 1)
Rather than rendering the pages with Splash, it is probably more efficient to get the data from the underlying requests the site makes. The code below iterates over all pages with articles. In parse_detail you can write the logic that loads the data from the JSON response, where you will find the products' 'pro_link' and 'pro_name'.
import scrapy
import json
from scrapy.spiders import Spider
from ..items import TusPomosItem


class TusPomosSpider(Spider):
    name = 'TUSP'
    allowed_domains = ['tuscc.si']
    start_urls = ["https://www.tuscc.si/produkti/instant-juhe"]
    download_delay = 5.0

    headers = {
        'Origin': 'https://www.tuscc.si',
        'Accept-Encoding': 'gzip, deflate, br',
        'Accept-Language': 'en-GB,en;q=0.9,nl-BE;q=0.8,nl;q=0.7,ro-RO;q=0.6,ro;q=0.5,en-US;q=0.4',
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36',
        'Content-Type': 'application/json; charset=UTF-8',
        'Accept': 'application/json, text/javascript, */*; q=0.01',
        'X-Requested-With': 'XMLHttpRequest',
        'Connection': 'keep-alive',
        'Referer': 'https://www.tuscc.si/produkti/instant-juhe',
    }

    def parse(self, response):
        number_of_pages = int(response.xpath(
            '//*[@class="paginationHolder"]//@data-size').extract_first())
        number_per_page = int(response.xpath(
            '//*[@name="pageSize"]/*[@selected="selected"]/text()').extract_first())
        for page_number in range(0, number_of_pages):
            skip = number_per_page * page_number
            data = {"action": "loadList",
                    "filter": {"1005": [], "1006": [], "1007": [], "1009": [],
                               "1013": []},
                    "skip": str(skip),
                    "sort": "none"}
            yield scrapy.Request(
                url='https://www.tuscc.si/produkti/instant-juhe',
                callback=self.parse_detail,
                method='POST',
                body=json.dumps(data),
                headers=self.headers)

    def parse_detail(self, response):
        detail_page = json.loads(response.text)
        for product in detail_page['docs']:
            item = TusPomosItem()
            item['pro_link'] = product['url']
            item['pro_name'] = product['title']
            yield item
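To make the skip arithmetic in parse concrete, here is a small standalone sketch (a page size of 32 matches the #32/#64 offsets seen in the question's URLs; the page count is just an example):

```python
# One POST request is sent per page; "skip" tells the loadList endpoint
# how many products to skip, i.e. page_number * page_size.
def skip_values(number_of_pages, number_per_page):
    return [number_per_page * page for page in range(number_of_pages)]

print(skip_values(4, 32))  # offsets for four pages of 32 products
```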