Tried Python BeautifulSoup and PhantomJS: STILL unable to scrape the site

Date: 2014-02-25 23:49:00

Tags: javascript python web-scraping beautifulsoup phantomjs

You may have seen my desperate frustration over the past few weeks. I have been scraping some wait-time data, but I still cannot get any data from these two sites:

http://www.centura.org/erwait

http://hcavirginia.com/home/

At first I tried using BS4 for Python. Here is the sample code for HCA Virginia:

from bs4 import BeautifulSoup
import requests

url = 'http://hcavirginia.com/home/'
r = requests.get(url)

soup = BeautifulSoup(r.text)
wait_times = [span.text for span in soup.findAll('span', attrs={'class': 'ehc-er-digits'})]

fd = open('HCA_Virginia.csv', 'a')

for w in wait_times:
    fd.write(w + '\n')

fd.close()

All of this prints blanks to the console or to the CSV. So I tried PhantomJS, since someone told me the content might be loaded by JS. Same result, though! Blanks printed to the console or to the CSV. Sample code below.

var page = require('webpage').create(),
    url = 'http://hcavirginia.com/home/';

page.open(url, function(status) {
    if (status !== "success") {
        console.log("Can't access network");
    } else {
        var result = page.evaluate(function() {
            var list = document.querySelectorAll('span.ehc-er-digits'),
                time = [],
                i;
            for (i = 0; i < list.length; i++) {
                time.push(list[i].innerText);
            }
            return time;
        });
        console.log(result.join('\n'));
        var fs = require('fs');
        try {
            fs.write("HCA_Virginia.csv", '\n' + result.join('\n'), 'a');
        } catch (e) {
            console.log(e);
        }
    }

    phantom.exit();
});

Same problem with Centura Health :(

What am I doing wrong?

1 Answer:

Answer 0 (score: 12)

The problem you are running into is that the elements are created by JS, and it can take some time for them to load. You need a scraper that handles JS and can wait until the required elements have been created.
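For illustration, a minimal sketch of that kind of waiting using Selenium with an explicit wait; the choice of driver and the 30-second timeout are assumptions, while the span.ehc-er-digits selector is the one from the question:

# Sketch only: load the page in a real browser so the JS runs, then wait
# explicitly until the spans exist before reading them.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()  # assumed driver; any JS-capable driver will do
try:
    driver.get('http://hcavirginia.com/home/')

    # Block for up to 30 seconds until at least one of the JS-created spans appears.
    WebDriverWait(driver, 30).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, 'span.ehc-er-digits')))

    wait_times = [el.text for el in
                  driver.find_elements(By.CSS_SELECTOR, 'span.ehc-er-digits')]
    print(wait_times)
finally:
    driver.quit()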

You can use PyQt4 for this. Adapting this recipe from webscraping.com and combining it with an HTML parser like BeautifulSoup makes it quite simple:

(After writing this, I found webscraping's own Python library. It may be worth a look.)

import sys
from bs4 import BeautifulSoup
from PyQt4.QtGui import *
from PyQt4.QtCore import *
from PyQt4.QtWebKit import * 

class Render(QWebPage):
    def __init__(self, url):
        self.app = QApplication(sys.argv)
        QWebPage.__init__(self)
        self.loadFinished.connect(self._loadFinished)
        self.mainFrame().load(QUrl(url))
        self.app.exec_()

    def _loadFinished(self, result):
        self.frame = self.mainFrame()
        self.app.quit()   

url = 'http://hcavirginia.com/home/'
r = Render(url)
soup = BeautifulSoup(unicode(r.frame.toHtml()))
# In Python 3.x, don't unicode the output from .toHtml(): 
#soup = BeautifulSoup(r.frame.toHtml()) 
nums = [int(span.text) for span in soup.find_all('span', class_='ehc-er-digits')]
print nums

Output:

[21, 23, 47, 11, 10, 8, 68, 56, 19, 15, 7]
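If you want to end up with the same CSV file as in the question, a small follow-up sketch that appends the extracted values, one per line:

# Append the extracted wait times to the CSV file from the question, one per line.
with open('HCA_Virginia.csv', 'a') as fd:
    for n in nums:
        fd.write('{0}\n'.format(n))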

Here is my original answer, using ghost.py:

I managed to hack something together for you using ghost.py. (Tested on Python 2.7 with ghost.py 0.1b3 and 32-bit PyQt4.) I would not recommend using it in production code!

from ghost import Ghost
from time import sleep

ghost = Ghost(wait_timeout=50, download_images=False)
page, extra_resources = ghost.open('http://hcavirginia.com/home/',
                                   headers={'User-Agent': 'Mozilla/4.0'})

# Halt execution of the script until a span.ehc-er-digits is found in 
# the document
page, resources = ghost.wait_for_selector("span.ehc-er-digits")

# It should be possible to simply evaluate
# "document.getElementsByClassName('ehc-er-digits');" and extract the data from
# the returned dictionary, but I didn't quite understand the
# data structure - hence this inline javascript.
nums, resources = ghost.evaluate(
    """
    elems = document.getElementsByClassName('ehc-er-digits');
    nums = []
    for (i = 0; i < elems.length; ++i) {
        nums[i] = elems[i].innerHTML;
    }
    nums;
    """)

wt_data = [int(x) for x in nums]
print wt_data
sleep(30) # Sleep a while to avoid the crashing of the script. Weird issue!

A few observations:

  • As you can see from my comments, I did not fully work out the structure of the dict returned by Ghost.evaluate("document.getElementsByClassName('ehc-er-digits');"), although it may well be possible to get the required information from a query like that.

  • I also had some problems with the script crashing. Sleeping for 30 seconds worked around that issue.