I have a spider that scrapes some data and PDF files. Everything is done except the PDFs. The PDFs cannot be downloaded directly into the file_urls field. The HTML looks like this:
<a onclick="document.forms[0].target ='_blank';" id="main_0_body_0_lnkDownloadBio" href="javascript:__doPostBack('main_0$body_0$lnkDownloadBio','')">Download full <span class="no-wrap">bio <i class="fa fa-angle-right" data-nowrap-cta=""></i></span></a>
It seems some JavaScript click handler runs instead of a plain href/src download. When you click the link, it opens a new window with a download option. My plan now is to use a Splash request with a Lua script. Here is the code:
import json

import scrapy
from scrapy_splash import SplashRequest
# config and ProtoscraperItem come from the project's own modules.


class DataSpider(scrapy.Spider):
    name = config.NAME
    allowed_domains = [config.DOMAIN]

    def start_requests(self):
        for url in config.START_URLS:
            yield scrapy.Request(url, self.parse_data)

    def parse_data(self, response):
        script = """
        function main(splash)
            local url = splash.args.url
            assert(splash:go(url))
            assert(splash:wait(1))
            -- click the download link, then wait a little (1 second)
            assert(splash:runjs("document.getElementById('main_0_body_0_lnkDownloadBio').click()"))
            assert(splash:wait(1))
            -- return the result as a JSON object
            return {
                html = splash:html(),
            }
        end
        """
        people = json.loads(response.text)['people']
        for person in people:
            first_name = person['name']
            last_name = person['lastname']
            title = person.get('title', '')  # assumed field; not shown in the question's JSON
            location = person['location']
            link = config.HOST + person['pageurl']
            item = ProtoscraperItem(first_name=first_name, last_name=last_name,
                                    title=title, location=location, link=link)
            # This request is for the detail page, which has more info and the PDF.
            request = SplashRequest(link, self.parse_details, meta={
                'splash': {
                    'args': {'lua_source': script, 'wait': 30, 'timeout': 40},
                    'endpoint': 'execute',
                },
            })
            request.meta['item'] = item
            request.meta['link'] = link
            yield request

    def parse_details(self, response):
        # what to do here?
So here I click the anchor tag to execute the JavaScript, and I think it works, but nothing gets downloaded. What am I missing here? Is it possible to specify a download path? I think this is possible with Selenium, but how can I do it with Splash and Lua?
Answer (score: 1):
Looking at the clicked link, I believe it calls ASP.NET's "__doPostBack" function. When that link is clicked, forms[0] is submitted with certain values. You need to reproduce that form submission, including all the fields the page would submit.
The parameters needed to do this are:
__EVENTTARGET
__EVENTARGUMENT
__VIEWSTATE
__VIEWSTATEGENERATOR
__EVENTVALIDATION
and perhaps more. Usually these parameters are set as hidden values in the form. (Please verify this on your page.)
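For example, those hidden values can be read straight off the detail page with Scrapy's selectors. A minimal sketch, assuming the standard ASP.NET hidden inputs are present (extract_hidden is a hypothetical helper; adjust the XPath if your page differs):

    # Sketch: read the hidden ASP.NET inputs from the detail-page response.
    def extract_hidden(response, name):
        # Hidden inputs look like <input type="hidden" name="__VIEWSTATE" value="...">
        return response.xpath('//input[@name=$name]/@value', name=name).get(default='')

    viewstate = extract_hidden(response, '__VIEWSTATE')
    viewstategen = extract_hidden(response, '__VIEWSTATEGENERATOR')
    eventvalid = extract_hidden(response, '__EVENTVALIDATION')

These variables then feed directly into the form arguments below.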
    arguments = {'__EVENTTARGET': 'main_0$body_0$lnkDownloadBio',
                 '__EVENTARGUMENT': '',
                 '__VIEWSTATE': viewstate,
                 '__VIEWSTATEGENERATOR': viewstategen,
                 '__EVENTVALIDATION': eventvalid,
                 'search': '',
                 'filters': '',
                 'score': ''}
    HEADERS = {
        'Content-Type': 'application/x-www-form-urlencoded',
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) '
                      'AppleWebKit/537.36 (KHTML, like Gecko) '
                      'Chrome/60.0.3112.101 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,'
                  'image/webp,image/apng,*/*;q=0.8',
    }
    import requests

    # requests url-encodes the dict itself, so urllib.urlencode is not needed.
    # submit_url is the page whose form is being posted back.
    r = requests.post(submit_url, data=arguments, allow_redirects=False, headers=HEADERS)
    with open(some_filename, 'wb') as f:
        f.write(r.content)
I did something similar in my own project, and this is how I did it: send the form values and parameters with Python Requests. The response will be the file you are trying to download. Write it to disk and make sure the extension is correct. I hope it helps you.
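If you would rather stay inside Scrapy instead of mixing in Requests, the same postback can be issued with scrapy.FormRequest from the spider's parse_details. A minimal sketch under that assumption (it drops into the DataSpider class shown in the question; save_pdf is a hypothetical callback, and the item fields follow the question's spider):

    # Sketch: inside DataSpider, reproduce the __doPostBack submission and
    # save the PDF the server returns. Assumes the hidden fields are present.
    def parse_details(self, response):
        formdata = {
            '__EVENTTARGET': 'main_0$body_0$lnkDownloadBio',
            '__EVENTARGUMENT': '',
            '__VIEWSTATE': response.xpath('//input[@name="__VIEWSTATE"]/@value').get(default=''),
            '__VIEWSTATEGENERATOR': response.xpath('//input[@name="__VIEWSTATEGENERATOR"]/@value').get(default=''),
            '__EVENTVALIDATION': response.xpath('//input[@name="__EVENTVALIDATION"]/@value').get(default=''),
        }
        # FormRequest url-encodes the dict and POSTs it back to the same page.
        yield scrapy.FormRequest(response.url, formdata=formdata,
                                 callback=self.save_pdf,
                                 meta={'item': response.meta['item']})

    def save_pdf(self, response):
        item = response.meta['item']
        # The response body is the PDF itself; name the file however you like.
        filename = '{}_{}.pdf'.format(item['first_name'], item['last_name'])
        with open(filename, 'wb') as f:
            f.write(response.body)

Note that Splash is not needed for this step at all: once the hidden fields are known, the plain POST alone returns the file.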