Scraping a link generated by JavaScript

Date: 2014-05-21 00:16:30

Tags: python-2.7 web-scraping scrapy

I'm scraping a site with Scrapy, and one of the links I need to scrape appears to be generated by a small piece of JavaScript embedded in the page, like this:

 <!--
 var prefix = 'm&#97;&#105;lt&#111;:';
 var suffix = '';
 var attribs = '';
 var path = 'hr' + 'ef' + '=';
 var addy59933 = 'HR-C&#111;l&#111;gn&#101;' + '&#64;';
 addy59933 = addy59933 + 'sc&#111;r' + '&#46;' + 'c&#111;m';
 var addy_text59933 = 'Submit your application';
 document.write( '<a ' + path + '\'' + prefix + addy59933 + suffix + '\'' + attribs + '>' );
 document.write( addy_text59933 );
 document.write( '<\/a>' );
 //-->

The link only shows up when you view the page in a browser, but I still need my spider to be able to scrape it. Since the code is embedded in the page, my idea was to scrape it from there and reassemble the link URL, but the text is in a format I'm not familiar with.

Is there a better way to do this?

EDIT: I just realized those are HTML character entities. I'd still like to know if there's a better way to defeat this kind of obfuscation.
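For example, decoding the references by hand with Python 2.7's HTMLParser does work (a rough sketch, with the prefix, addy59933 and domain string literals glued together manually), but it feels brittle:

>>> from HTMLParser import HTMLParser
>>> # unescape() decodes the &#NN; numeric character references
>>> HTMLParser().unescape(u'm&#97;&#105;lt&#111;:'
...                       u'HR-C&#111;l&#111;gn&#101;&#64;'
...                       u'sc&#111;r&#46;c&#111;m')
u'mailto:HR-Cologne@scor.com'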

1 Answer:

Answer 0: (score: 3)

Here's a solution using js2xml:

>>> import js2xml
>>> import pprint
>>> jscode = r"""
... var prefix = 'm&#97;&#105;lt&#111;:';
... var suffix = '';
... var attribs = '';
... var path = 'hr' + 'ef' + '=';
... var addy59933 = 'HR-C&#111;l&#111;gn&#101;' + '&#64;';
... addy59933 = addy59933 + 'sc&#111;r' + '&#46;' + 'c&#111;m';
... var addy_text59933 = 'Submit your application';
... document.write( '<a ' + path + '\'' + prefix + addy59933 + suffix + '\'' + attribs + '>' );
... document.write( addy_text59933 );
... document.write( '<\/a>' );
... """
>>> js = js2xml.parse(jscode)

Variable declarations are represented by var_decl elements, their names live in identifier nodes, and their values here are string literals combined with the + operator, so let's build a dict of the variables using "".join() on the string/text() nodes:

>>> # variables
... variables = dict([(var.xpath('string(./identifier)'), u"".join(var.xpath('.//string/text()')))
...                   for var in js.xpath('.//var_decl')])
>>> pprint.pprint(variables)
{'addy59933': u'HR-C&#111;l&#111;gn&#101;&#64;',
 'addy_text59933': u'Submit your application',
 'attribs': u'',
 'path': u'href=',
 'prefix': u'm&#97;&#105;lt&#111;:',
 'suffix': u''}

Then, assignments change the values of some variables, mixing strings and identifiers. Build each right-hand side as a format string, keeping string literals as-is and representing variable identifiers as %(identifiername)s placeholders:
>>> # identifiers are assigned other string values
... assigns = {}
>>> for assign in js.xpath('.//assign'):
...     value = u"".join(['%%(%s)s' % el.text if el.tag=='identifier' else el.text
...                       for el in assign.xpath('./right//*[self::string or self::identifier]')])
...     key = assign.xpath('string(left/identifier)')
...     assigns[key] = value
... 
>>> pprint.pprint(assigns)
{'addy59933': u'%(addy59933)ssc&#111;r&#46;c&#111;m'}

Update the variables dict by "applying" the assignments:

>>> # update variables dict with new values
... for key, val in assigns.items():
...    variables[key] = val % variables
... 
>>> pprint.pprint(variables)
{'addy59933': u'HR-C&#111;l&#111;gn&#101;&#64;sc&#111;r&#46;c&#111;m',
 'addy_text59933': u'Submit your application',
 'attribs': u'',
 'path': u'href=',
 'prefix': u'm&#97;&#105;lt&#111;:',
 'suffix': u''}
>>> 

Function arguments live under arguments nodes (XPath .//arguments/*):

>>> # interpret arguments of document.write()
... arguments = [u"".join(['%%(%s)s' % el.text if el.tag=='identifier' else el.text
...                        for el in arg.xpath('./descendant-or-self::*[self::string or self::identifier]')])
...              for arg in js.xpath('.//arguments/*')]
>>> 
>>> pprint.pprint(arguments)
[u"<a %(path)s'%(prefix)s%(addy59933)s%(suffix)s'%(attribs)s>",
 u'%(addy_text59933)s',
 u'</a>']
>>> 

If you substitute the identifiers in there, you get:

>>> # apply string formatting replacing identifiers
... arguments = [arg % variables for arg in arguments]
>>> 
>>> pprint.pprint(arguments)
[u"<a href='m&#97;&#105;lt&#111;:HR-C&#111;l&#111;gn&#101;&#64;sc&#111;r&#46;c&#111;m'>",
 u'Submit your application',
 u'</a>']
>>> 

Now this is starting to look interesting; let's run it through lxml.html to get rid of the numeric character references:

>>> import lxml.html
>>> import lxml.etree
>>> 
>>> doc = lxml.html.fromstring("".join(arguments))
>>> print lxml.etree.tostring(doc)
<a href="mailto:HR-Cologne@scor.com">Submit your application</a>
>>> 

Or with a Scrapy Selector:

>>> from scrapy.selector import Selector
>>> selector = Selector(text="".join(arguments), type="html")
>>> selector.xpath('.//a/@href').extract()
[u'mailto:HR-Cologne@scor.com']
>>>
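To use this from a spider, you could wrap the steps above into a small helper that takes the raw script text and returns the reconstructed hrefs. Here's a sketch reusing the exact XPath expressions from above; the helper and callback names are made up for illustration, you'd probably want to guard the js2xml.parse() call since not every script on a page parses cleanly, and depending on the page you may first need to strip the surrounding <!-- ... //--> comment markers:

import js2xml
from scrapy.selector import Selector

def links_from_jscode(jscode):
    # parse the JavaScript into an XML tree
    js = js2xml.parse(jscode)
    # variable declarations -> {name: concatenation of its string literals}
    variables = dict((var.xpath('string(./identifier)'),
                      u"".join(var.xpath('.//string/text()')))
                     for var in js.xpath('.//var_decl'))
    # later assignments: keep identifiers as %(name)s placeholders,
    # then substitute the values collected so far
    for assign in js.xpath('.//assign'):
        value = u"".join(['%%(%s)s' % el.text if el.tag == 'identifier' else el.text
                          for el in assign.xpath('./right//*[self::string or self::identifier]')])
        variables[assign.xpath('string(left/identifier)')] = value % variables
    # arguments of document.write(), identifiers substituted the same way
    arguments = [u"".join(['%%(%s)s' % el.text if el.tag == 'identifier' else el.text
                           for el in arg.xpath('./descendant-or-self::*[self::string or self::identifier]')]) % variables
                 for arg in js.xpath('.//arguments/*')]
    # parse the reconstructed HTML fragment and pull out the hrefs
    return Selector(text=u"".join(arguments), type="html").xpath('//a/@href').extract()

Inside a spider callback that might look something like:

def parse(self, response):
    for jscode in response.xpath('//script/text()').extract():
        for href in links_from_jscode(jscode):
            self.log(href)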