I want to use Scrapy to crawl a website, following only internal links, not external ones.
This is what I tried:
import scrapy
import json
import uuid
import os
from scrapy.linkextractors import LinkExtractor

class ItemSpider(scrapy.Spider):
    name = "items"
    allowed_domains = ['https://www.website.com']
    start_urls = ['https://www.website.com/post']
    rules = (Rule(LxmlLinkExtractor(allow=()), callback='parse_obj', follow=True),)

    def parse_obj(self, response):
        for link in LxmlLinkExtractor(allow=self.allowed_domains).extract_links(response):
        response_obj = {}
        counter = 1
        for item in response.css(".category-lcd"):
            title = item.css("div.td-post-header > header > h1::text").extract()
            title_name = title[0]
            response_obj[counter] = {
                'demo': item.css("div.td-post-content > blockquote:nth-child(10) > p::text").extract(),
                'title_name': title_name,
                'download_link': item.css("div.td-post-content > blockquote:nth-child(12) > p::text").extract()
            }
            counter += 1
        filename = str(uuid.uuid4()) + ".json"
        with open(os.path.join('C:/scrapy/tutorial/results/', filename), 'w') as fp:
            json.dump(response_obj, fp)
But the scraper doesn't work. What's wrong?! It says:

Scrapy TabError: inconsistent use of tabs and spaces in indentation
Answer 0 (score: 2)
You need to add indentation in this section, and make sure the whole file uses either tabs or spaces consistently, not a mix of both:

for link in LxmlLinkExtractor(allow=self.allowed_domains).extract_links(response):
    response_obj = {}
    counter = 1