I'm new to web scraping, and I'm trying to extract some news from https://www.lacuarta.com/, but only the stories matching the tag san-valentin.
The page is just headlines with a picture for each story; if you want to read one you have to click it, which takes you to that story's page (https://www.lacuarta.com/etiqueta/san-valentin/)
So, I think the steps I want to take are:

1. Crawl the tag page
2. Extract the links to each news story
3. Follow each link and extract the data I want
4. Go back to the listing, move on to page 2, and repeat

I already have points 1 and 2:

import scrapy


class SpiderTags(scrapy.Spider):
    name = "SpiderTags"

    def start_requests(self):
        url = 'https://www.lacuarta.com/'
        tag = getattr(self, 'tag', None)
        if tag is not None:
            url = url + 'etiqueta/' + tag
        yield scrapy.Request(url, self.parse)

    def parse(self, response):
        for url in response.css("h4.normal a::attr(href)"):
            yield {
                "link": url.get()
            }
So far I have the links to the news stories; now I don't know how to enter each story to extract the data I want, then go back to the listing page, move on to page 2, and repeat everything.
PS: I already know how to get the information I want:
response.css("title::text").get()
response.css("div.col-md-11 p::text").getall()
response.css("div.col-sm-6 h4 a::text").getall()
response.css("div.col-sm-6 h4 small span::text").getall()
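
These selectors can be verified offline with Scrapy's `Selector` class, without running a full crawl. The HTML snippet below is a hypothetical mock of the page structure the selectors target, not the site's real markup:

    from scrapy.selector import Selector

    # Hypothetical HTML mimicking the structure the selectors above target.
    html = """
    <html><head><title>Example story</title></head>
    <body>
      <div class="col-md-11"><p>First paragraph.</p><p>Second paragraph.</p></div>
      <div class="col-sm-6"><h4><a>Jane Doe</a> <small><span>2019-02-14</span></small></h4></div>
    </body></html>
    """
    sel = Selector(text=html)
    print(sel.css("title::text").get())                          # Example story
    print(sel.css("div.col-md-11 p::text").getall())             # ['First paragraph.', 'Second paragraph.']
    print(sel.css("div.col-sm-6 h4 a::text").getall())           # ['Jane Doe']
    print(sel.css("div.col-sm-6 h4 small span::text").getall())  # ['2019-02-14']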
Answer 0 (score: 1)
You need to yield a new Request in order to follow the link. For example:
def parse(self, response):
    for url in response.css("h4.normal a::attr(href)"):
        # This will get the URL value, not follow it:
        # yield {
        #     "link": url.get()
        # }
        # This will follow the URL:
        yield scrapy.Request(url.get(), self.parse_news_item)

def parse_news_item(self, response):
    # Extract things from the news item page.
    yield {
        'Title': response.css("title::text").get(),
        'Story': response.css("div.col-md-11 p::text").getall(),
        'Author': response.css("div.col-sm-6 h4 a::text").getall(),
        'Date': response.css("div.col-sm-6 h4 small span::text").getall(),
    }
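
This answer covers following the story links but not the asker's last step, moving on to page 2. A minimal sketch of both pieces together is shown below; the `a.next::attr(href)` pagination selector is an assumption and must be checked against the site's actual markup:

    import scrapy


    class NewsSpider(scrapy.Spider):
        """Sketch: follow story links, then follow the next listing page."""
        name = "news"
        start_urls = ["https://www.lacuarta.com/etiqueta/san-valentin/"]

        def parse(self, response):
            # Follow each story link (response.follow accepts relative URLs).
            for href in response.css("h4.normal a::attr(href)").getall():
                yield response.follow(href, callback=self.parse_news_item)
            # Hypothetical selector for the next-page link; adjust to the real markup.
            next_page = response.css("a.next::attr(href)").get()
            if next_page:
                yield response.follow(next_page, callback=self.parse)

        def parse_news_item(self, response):
            yield {
                "title": response.css("title::text").get(),
                "story": response.css("div.col-md-11 p::text").getall(),
            }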
Answer 1 (score: 1)
import scrapy


class SpiderName(scrapy.Spider):
    # Note: plain scrapy.Spider is the right base class here; subclassing
    # CrawlSpider while overriding parse() breaks its rule mechanism.
    name = 'spidername'
    allowed_domains = ['lacuarta.com']
    start_urls = ['https://www.lacuarta.com/etiqueta/san-valentin/']

    def parse(self, response):
        for item in response.xpath('//article[@class="archive-article modulo-fila"]'):
            # maybe you need more data within `item`
            post_url = item.xpath('.//h4/a/@href').extract_first()
            yield response.follow(post_url, callback=self.post_parse)

        next_page = response.xpath('//li[@class="active"]/following-sibling::li/a/@href').extract_first()
        if next_page:
            yield response.follow(next_page, callback=self.parse)

    def post_parse(self, response):
        title = response.xpath('//h1/text()').extract_first()
        story = response.xpath('//div[@id="ambideXtro"]/child::*').extract()
        author = response.xpath('//div[@class="col-sm-6 m-top-10"]/h4/a/text()').extract_first()
        date = response.xpath('//span[@class="ltpicto-calendar"]').extract_first()
        yield {'title': title, 'story': story, 'author': author, 'date': date}
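
The next-page XPath in this answer selects the link of the `<li>` immediately following the one marked active. Its behavior can be checked offline against a mock pagination snippet (the markup below is illustrative, not the site's real HTML):

    from scrapy.selector import Selector

    # Mock pagination list: the user is on page 2 (the "active" item),
    # so following-sibling::li picks the page-3 entry.
    html = """
    <ul class="pagination">
      <li><a href="/etiqueta/san-valentin/">1</a></li>
      <li class="active"><a href="/etiqueta/san-valentin/page/2/">2</a></li>
      <li><a href="/etiqueta/san-valentin/page/3/">3</a></li>
    </ul>
    """
    sel = Selector(text=html)
    next_page = sel.xpath('//li[@class="active"]/following-sibling::li/a/@href').extract_first()
    print(next_page)  # /etiqueta/san-valentin/page/3/

On the last page there is no following sibling, so `extract_first()` returns `None` and the `if next_page:` guard stops the crawl.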