I have two tasks: the first is to crawl the sitemap, extract the URLs and write them to a txt file; the second is to read that file and scrape each URL line by line.
My code looks like this:
from scrapy.spiders import SitemapSpider


class sitemapSpider(SitemapSpider):
    name = "filmnetmapSpider"
    sitemap_urls = ['http://filmnet.ir/sitemap.xml']
    sitemap_rules = [
        ('/series/', 'parse_item'),
    ]
    storage_file = 'urls.txt'

    def parse_item(self, response):
        # append each matched URL to the text file
        videoid = response.url
        with open(self.storage_file, 'a') as handle:
            handle.write(videoid + '\n')
The second spider:
import ast
import json

import scrapy
from scrapy import Request
from scrapy.selector import HtmlXPathSelector


class filmnetSpider(scrapy.Spider):
    name = 'filmnetSpider'

    def start_requests(self):
        # read the URLs written by the sitemap spider, one per line
        with open('urls.txt') as fp:
            for line in fp:
                yield Request(line.strip(), callback=self.parse_website)

    def parse_website(self, response):
        hxs = HtmlXPathSelector(response)
        url = hxs.xpath('//script[@type="application/ld+json"]/text()').extract()
        url = ast.literal_eval(json.dumps(url))
        url = url[1]
        obj = json.loads(url)
        poster = obj['image']
        name = obj['name']
        description = obj['description']
How can I change the code so it no longer reads from and writes to the file?
How can I use a callback instead?
Note: this code cannot live in a single Scrapy spider; the code is the two spiders given above plus the code below, as described in the docs:

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess()
process.crawl(filmnetSpider)
process.crawl(sitemapSpider)
process.start()
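Keep in mind that CrawlerProcess schedules both crawls to run in the same reactor at the same time, so filmnetSpider can start reading urls.txt before sitemapSpider has finished writing it. If you want to keep two separate spiders, here is a sketch of running them strictly one after the other, following the sequential-crawling pattern from the Scrapy docs (spider names taken from the snippets above):

from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging

configure_logging()
runner = CrawlerRunner()

@defer.inlineCallbacks
def crawl_sequentially():
    # finish the sitemap spider first so urls.txt is complete
    yield runner.crawl(sitemapSpider)
    # only then start the spider that reads urls.txt
    yield runner.crawl(filmnetSpider)
    reactor.stop()

crawl_sequentially()
reactor.run()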
Answer 0 (score: 1)
This should work:
import json

from scrapy import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spiders import SitemapSpider


class sitemapSpider(SitemapSpider):
    name = "filmnetmapSpider"
    sitemap_urls = ['http://filmnet.ir/sitemap.xml']
    sitemap_rules = [
        ('/series/', 'parse_item'),
    ]

    def parse_item(self, response):
        videoid = response.url
        # re-request the same URL and hand it to the detail callback;
        # dont_filter is needed because the dupefilter would otherwise
        # drop a second request to an already-crawled URL
        yield Request(videoid, callback=self.parse_website, dont_filter=True)

    def parse_website(self, response):
        hxs = HtmlXPathSelector(response)
        data = hxs.xpath('//script[@type="application/ld+json"]/text()').extract()
        obj = json.loads(data[1])
        poster = obj['image']
        name = obj['name']
        description = obj['description']
        # yield the extracted fields as an item
        yield {'image': poster, 'name': name, 'description': description}
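With both steps merged into a single spider, the intermediate urls.txt is no longer needed. A minimal way to run it, assuming the class above is defined in the same script:

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess()
process.crawl(sitemapSpider)
process.start()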