How to load multiple pages one after another in QWebPage

Date: 2014-06-19 11:29:47

Tags: qt pyside web-crawler qwebkit qwebpage

I am trying to crawl news article pages for their comments. After some research I found that most of these sites serve the comments in an iframe, so I want to get the iframe's "src". I am using QtWebKit in Python through PySide. It actually works only once; it does not load the remaining web pages. I am using the following code:

import sys  
import pymysql
from PySide.QtGui import *  
from PySide.QtCore import *  
from PySide.QtWebKit import *
from pprint import pprint
from bs4 import BeautifulSoup

class Render(QWebPage):
  def __init__(self, url):
    try:
      self.app = QApplication(sys.argv)
    except RuntimeError:
      self.app = QCoreApplication.instance()
    QWebPage.__init__(self)
    self.loadFinished.connect(self._loadFinished)
    self.mainFrame().load(QUrl(url))
    self.app.exec_()

  def _loadFinished(self, result):
    self.frame = self.mainFrame()
    self.app.quit()


def visit(url):
  r = Render(url)
  p = r.frame.toHtml()
  f_url = str(r.frame.url().toString())
  return p

def is_comment_url(url):
  lower_url = url.lower()
  n = lower_url.find("comment")
  if n>0:
    return True
  else:
    return False

with open("urls.txt") as f:
  content = f.read().splitlines()
list_of_urls = []

for url in content:
  page = visit(url)
  soup = BeautifulSoup(page)
  for tag in soup.findAll('iframe', src=True):
    link = tag['src']
    if is_comment_url(link):
      print(link)
      list_of_urls.append(link)
pprint(list_of_urls)

But the problem is that it only works for the first iteration and then gets stuck.
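For context, a minimal sketch of one common workaround: create the QApplication exactly once and drive each page load with its own QEventLoop, instead of constructing a new Render (and with it a new QApplication) per URL. This is only a sketch under that assumption; the URLs below are placeholders, not from the original question.

import sys
from PySide.QtGui import QApplication
from PySide.QtCore import QUrl, QEventLoop
from PySide.QtWebKit import QWebPage

class SequentialRender(QWebPage):
  # One QApplication, one QWebPage, many sequential loads.
  def render(self, url):
    loop = QEventLoop()
    # quit the local event loop as soon as this page reports loadFinished
    self.loadFinished.connect(loop.quit)
    self.mainFrame().load(QUrl(url))
    loop.exec_()
    self.loadFinished.disconnect(loop.quit)
    return self.mainFrame().toHtml()

app = QApplication(sys.argv)   # created exactly once for the whole run
renderer = SequentialRender()
for url in ["http://example.com/a", "http://example.com/b"]:  # placeholder URLs
  html = renderer.render(url)
  print(len(html))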

Also, is there any way to save the web page as the browser displays it (after all the JavaScript etc. has executed)?
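On that second point: mainFrame().toHtml() serializes the current DOM, so any JavaScript that has already run by the time loadFinished fires is reflected in the returned HTML. A minimal sketch of writing that to disk, assuming the SequentialRender class sketched above and a placeholder URL:

import io

html = renderer.render("http://example.com/article")  # placeholder URL
with io.open("rendered_page.html", "w", encoding="utf-8") as out:
  out.write(html)  # toHtml() returns unicode; this is the DOM after scripts have run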

0 Answers:

No answers yet.