This is my attempt to crawl the list of URLs from the first page of the AWS blog site, but it returns nothing. I think there may be a problem with the XPath, but I'm not sure how to fix it.
import scrapy


class AwsblogSpider(scrapy.Spider):
    name = 'awsblog'
    allowed_domains = ['aws.amazon.com/blogs']
    start_urls = ['http://aws.amazon.com/blogs/']

    def parse(self, response):
        blogs = response.xpath('//li[@class="m-card"]')
        for blog in blogs:
            url = blog.xpath('.//div[@class="m-card-title"]/a/@href').extract()
            print(url)
Attempt 2:
import scrapy


class AwsblogSpider(scrapy.Spider):
    name = 'awsblog'
    allowed_domains = ['aws.amazon.com/blogs']
    start_urls = ['http://aws.amazon.com/blogs/']

    def parse(self, response):
        blogs = response.xpath('//div[@class="aws-directories-container"]')
        for blog in blogs:
            url = blog.xpath('//li[@class="m-card"]/div[@class="m-card-title"]/a/@href').extract_first()
            print(url)
Log output:
2019-11-06 10:38:30 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-11-06 10:38:30 [scrapy.core.engine] INFO: Spider opened
2019-11-06 10:38:30 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-11-06 10:38:30 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-11-06 10:38:31 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://aws.amazon.com/robots.txt> from <GET http://aws.amazon.com/robots.txt>
2019-11-06 10:38:31 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://aws.amazon.com/robots.txt> (referer: None)
2019-11-06 10:38:31 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://aws.amazon.com/blogs/> from <GET http://aws.amazon.com/blogs/>
2019-11-06 10:38:32 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://aws.amazon.com/blogs/> (referer: None)
2019-11-06 10:38:32 [scrapy.core.engine] INFO: Closing spider (finished)
Any help would be greatly appreciated!
Answer 0 (score: 1)
The problem isn't your XPath: the site loads the blog details through dynamic scripting after the initial page load, so the cards never appear in the raw HTML that Scrapy downloads. View the page source (not the rendered DOM in DevTools) to confirm that the blog content isn't there.
To get the data, you should use a technique that renders the dynamic content, such as:
1. Scrapy Splash
2. Selenium
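The asker's selector logic is actually reasonable for the *rendered* DOM; it fails only because the cards are injected by JavaScript and never exist in the raw response. A minimal sketch below demonstrates this, using only the standard library's ElementTree and a hypothetical mock of what the rendered markup might look like (not the actual AWS page source):

```python
import xml.etree.ElementTree as ET

# Hypothetical mock of the *rendered* DOM; the real AWS blog page only
# contains markup like this after its JavaScript has run.
rendered_html = """
<ul>
  <li class="m-card">
    <div class="m-card-title"><a href="/blogs/aws/example-post/">Example post</a></div>
  </li>
</ul>
"""

root = ET.fromstring(rendered_html)

# Same structure as the spider's XPath: find each card, then its title link.
urls = [a.get("href")
        for card in root.findall(".//li[@class='m-card']")
        for a in card.findall(".//div[@class='m-card-title']/a")]

print(urls)  # ['/blogs/aws/example-post/']
```

Once a rendering step (Splash or Selenium) hands Scrapy the fully rendered HTML, the original `//li[@class="m-card"]` query from Attempt 1 should start matching.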