How do I wait for a page to finish loading?

Asked: 2016-05-10 12:07:32

Tags: python web-crawler

I am trying to get the available boot sizes (under `$('option.addedOption')`) from http://www.neimanmarcus.com/Stuart-Weitzman-Reserve-Suede-Over-the-Knee-Boot-Black/prod179890262/p.prod

I tried the following code, but it always returns before the sizes are loaded.

# config.url = 'http://www.neimanmarcus.com/Stuart-Weitzman-Reserve-Suede-Over-the-Knee-Boot-Black/prod179890262/p.prod'
import urllib2
import config
import time
from lxml.cssselect import CSSSelector
from lxml.html import fromstring

print config.url
headers = {
    "Host": "www.neimanmarcus.com",
    "Connection": "keep-alive",
    "Content-Length": "106",  # header values must be strings, not integers
    "Pragma": "no-cache",
    "Cache-Control": "no-cache",
    "Accept": "*/*",
    "Origin": "http://www.neimanmarcus.com",
    "X-Requested-With": "XMLHttpRequest",
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36",
    "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
    "Referer": "http://www.neimanmarcus.com/Stuart-Weitzman-Reserve-Suede-Over-the-Knee-Boot-Black/prod179890262/p.prod",
    "Accept-Language": "en-US,en;q=0.8,zh-CN;q=0.6,zh;q=0.4,fr;q=0.2,cs;q=0.2,zh-TW;q=0.2"
}
request = urllib2.Request(config.url, headers=headers)
html = urllib2.urlopen(request)
time.sleep(10)  # no effect: the response is already complete, and no JavaScript ever runs
html = html.read()
print html
html = fromstring(html)
sel = CSSSelector('option.addedOption')
try:
    options = sel(html)
    print options
except Exception as e:
    print e

I found that the sizes come from a request to http://www.neimanmarcus.com/product.service (in fact, the headers above were copied from that request's request headers).

How can I get the fully rendered page (in particular, the boot sizes)?

I also tried requesting http://www.neimanmarcus.com/product.service directly, but that failed as well.
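The product.service call is an XHR that the page's JavaScript issues, so replaying it by hand means copying its exact form body from the browser's network tab. Below is a minimal sketch of constructing (not sending) such a request. It uses Python 3, where urllib2's functionality lives in `urllib.request`; the URL and headers come from the question, but the form body is a hypothetical placeholder since the real fields are unknown:

```python
# Sketch: building the product.service XHR request without sending it.
# Python 3 (urllib2 became urllib.request). The URL and headers are
# taken from the question; the POST body is a hypothetical placeholder.
from urllib import request

url = "http://www.neimanmarcus.com/product.service"
headers = {
    "X-Requested-With": "XMLHttpRequest",
    "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
    "Referer": "http://www.neimanmarcus.com/Stuart-Weitzman-Reserve-Suede-Over-the-Knee-Boot-Black/prod179890262/p.prod",
}
# Hypothetical body: the real form fields must be copied from the
# browser's developer tools, network tab.
data = b"placeholder=1"

req = request.Request(url, data=data, headers=headers)

# A Request with a body defaults to POST.
print(req.get_method())                     # → POST
print(req.get_header("X-requested-with"))   # → XMLHttpRequest
```

Whether the endpoint accepts such a request still depends on sending the exact form fields and cookies the real page sends.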

2 Answers:

Answer 0 (score: 3)

If I understand you correctly: no matter how long the code sleeps, the shoe sizes are still never loaded?

Since you are not using a headless browser, the JavaScript on the requested page is never executed. Try a headless browser such as PhantomJS; more are listed here: headless browsers.

Here is one way of using PhantomJS in Python.
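Once a headless browser has executed the page's JavaScript, extracting the sizes from the rendered HTML is the easy part. A stdlib-only sketch (Python 3; the HTML snippet is a made-up stand-in for the rendered DOM, since the site's real markup is unknown):

```python
# Sketch: pulling option.addedOption values out of rendered HTML.
# Python 3 stdlib only; RENDERED_HTML is a hypothetical stand-in for
# what a headless browser would return after the JavaScript has run.
from html.parser import HTMLParser

RENDERED_HTML = """
<select id="size">
  <option class="addedOption" value="36">36</option>
  <option class="addedOption" value="37.5">37.5</option>
  <option value="-1">Select size</option>
</select>
"""

class OptionCollector(HTMLParser):
    """Collects the value of every <option class="addedOption">."""
    def __init__(self):
        super().__init__()
        self.sizes = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "option" and "addedOption" in attrs.get("class", "").split():
            self.sizes.append(attrs["value"])

parser = OptionCollector()
parser.feed(RENDERED_HTML)
print(parser.sizes)  # → ['36', '37.5']
```

The same selector logic as the question's `CSSSelector('option.addedOption')` applies: the parsing is fine; the problem is that the static response simply does not contain those elements.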

Answer 1 (score: 0)

Use it like this:

from contextlib import closing  # urllib2 responses are not context managers in Python 2

with closing(urllib2.urlopen(request)) as response:
    html = response.read()
    print html
    html = fromstring(html)
    sel = CSSSelector('option.addedOption')
    try:
        options = sel(html)
        print options
    except Exception as e:
        print e

instead of:

html = urllib2.urlopen(request)
time.sleep(10)
html = html.read()
...