I added a child component for adding links. Once it finishes, I call the parent component to fetch the new links.
The links in my v-for are not updating. They only update when I reload the page with the new entries. After submitting, I want my child component to notify/call the parent's fetchLinks function so that all of the links on the screen update (they do not update unless I refresh the page).
Here is the relevant part of the form success function from the child component, AddLinksComponent, where it reaches into the parent component directly:
// Get the new updated subscriptions
this.$parent.fetchLinks()
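The question does not include the full components, so as a rough sketch only (the submitLink method, the axios call, and the /api/links endpoint are assumptions, not the author's code), the child side of this pattern looks something like:

import axios from 'axios';

export default {
  name: 'AddLinksComponent',
  data() {
    return { url: '' };
  },
  methods: {
    // Hypothetical form-success handler: save the link, then reach
    // directly into the parent to refresh its list -- the pattern
    // the question describes.
    submitLink() {
      axios.post('/api/links', { url: this.url }).then(() => {
        // Get the new updated subscriptions
        this.$parent.fetchLinks();
      });
    }
  }
};

Calling this.$parent couples the child to one specific parent, which is part of why the answer below suggests an event instead.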
Answer 0 (score: 3)
In the child component, try emitting an event to the parent when it has completed its task:
this.$emit('fetch');
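Applied to the hypothetical child sketch above, the success handler would emit the event instead of calling the parent directly:

methods: {
  submitLink() {
    axios.post('/api/links', { url: this.url }).then(() => {
      // Notify the parent that new data exists; the parent decides
      // how to refresh its own list.
      this.$emit('fetch');
    });
  }
}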
And in the parent component:
<add-links @fetch="fetchLinks"></add-links>
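The parent listens for that event on the child's tag and re-runs fetchLinks. The parent component is not shown in the question, so the following is only a minimal sketch under assumed names (the links array and the /api/links endpoint are hypothetical):

<template>
  <div>
    <add-links @fetch="fetchLinks"></add-links>
    <ul>
      <!-- Because `links` is reactive, the v-for re-renders as soon as
           fetchLinks replaces it; no page reload is needed. -->
      <li v-for="link in links" :key="link.id">{{ link.url }}</li>
    </ul>
  </div>
</template>

<script>
import axios from 'axios';
import AddLinks from './AddLinksComponent.vue';

export default {
  components: { 'add-links': AddLinks },
  data() {
    return { links: [] };
  },
  created() {
    this.fetchLinks();
  },
  methods: {
    fetchLinks() {
      axios.get('/api/links').then(response => {
        this.links = response.data;
      });
    }
  }
};
</script>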