What is the Spring Boot dependency for JMS?

Asked: 2017-12-09 01:08:58

Tags: spring-boot spring-integration

The Spring Boot guide https://spring.io/guides/gs/messaging-jms/ uses spring-boot-starter-activemq, which in turn pulls in spring-jms. If I want to use a different message broker and Spring does not provide a starter jar for that particular broker, how do I include spring-jms in my dependencies? Can I simply include spring-jms directly, or is there a better way to do this?

1 Answer:

Answer 0 (score: 0)

That is correct: you need to include spring-jms together with the vendor-specific dependencies for your broker's client library.
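A minimal sketch of what the build file might look like, assuming a Gradle Kotlin DSL build with the Spring Boot plugin's dependency management applied (so spring-jms needs no explicit version); the vendor artifact coordinates are placeholders, not a real library:

```kotlin
// build.gradle.kts (sketch)
dependencies {
    // Core Spring JMS support, version managed by Spring Boot's BOM.
    implementation("org.springframework:spring-jms")

    // Hypothetical vendor-specific JMS client; substitute the real
    // artifact published by your broker's vendor.
    implementation("com.example:vendor-jms-client:1.0.0")
}
```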

More information can be found in the Reference Manual.
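Since there is no starter to auto-configure the broker's ConnectionFactory for you, you also need to define that bean yourself. A rough sketch in Kotlin, assuming a hypothetical vendor class com.example.VendorConnectionFactory (not a real API); once a ConnectionFactory bean exists and spring-jms is on the classpath, Spring Boot's JMS auto-configuration should be able to build a JmsTemplate on top of it:

```kotlin
import javax.jms.ConnectionFactory
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

@Configuration
class VendorJmsConfig {

    @Bean
    fun connectionFactory(): ConnectionFactory =
        // Hypothetical vendor-specific factory; replace with the class
        // your broker's client library actually provides.
        com.example.VendorConnectionFactory("tcp://broker.example.com:61616")
}
```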