Getting live data from a website by re-requesting it continuously

Time: 2016-08-31 16:23:04

Tags: python web-scraping beautifulsoup

When I put html = urllib.request.urlopen(req) inside the while loop, I can get the data without trouble, but each fetch takes about 3 seconds. So I thought that if I moved it outside the loop I could get the data faster, since the URL would not have to be opened every time, but that raises AttributeError: 'str' object has no attribute 'read'. Maybe it no longer recognizes the html variable. How can I speed this up?

import os
import datetime
import urllib.request
from bs4 import BeautifulSoup

def soup():
    url = "http://www.investing.com/indices/major-indices"
    req = urllib.request.Request(
        url,
        data=None,
        headers={
            'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36',
            'Connection': 'keep-alive'
        }
    )
    global Ltp
    global html
    html = urllib.request.urlopen(req)
    while True:
        # Fails on the second pass: html is now a str, not a response object
        html = html.read().decode('utf-8')
        bsobj = BeautifulSoup(html, "lxml")

        Ltp = bsobj.find("td", {"class": "pid-169-last"})
        Ltp = Ltp.text
        Ltp = Ltp.replace(',', '')
        os.system('cls')
        Ltp = float(Ltp)
        print(Ltp, datetime.datetime.now())

soup()
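The AttributeError can be reproduced without any network access. In this minimal sketch, io.BytesIO stands in for the response object that urlopen returns:

```python
import io

# Stand-in for the response object returned by urllib.request.urlopen
html = io.BytesIO(b"<html>first fetch</html>")

# First loop pass: the name html is rebound to a str
html = html.read().decode('utf-8')
print(type(html).__name__)  # str

# Second loop pass fails: a str has no read() method
try:
    html = html.read().decode('utf-8')
except AttributeError as e:
    print(e)  # the AttributeError from the question
```

Moving urlopen() outside the loop does not avoid re-opening the URL; it just leaves the loop re-reading (and then rebinding) a name that no longer refers to a response object.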

2 Answers:

Answer 0 (score: 0)

If you want live data, you need to re-call the URL regularly:

html = urllib.request.urlopen(req)

That call should run inside the loop.

import os
import urllib.request
import datetime
import time
from bs4 import BeautifulSoup


def soup():
    url = "http://www.investing.com/indices/major-indices"
    req = urllib.request.Request(
        url,
        data=None,
        headers={
            'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36',
            'Connection': 'keep-alive'
        }
    )
    global Ltp
    global html
    while True:
        html = urllib.request.urlopen(req)  # re-open the URL on every pass
        ok = html.read().decode('utf-8')
        bsobj = BeautifulSoup(ok, "lxml")

        Ltp = bsobj.find("td", {"class": "pid-169-last"})
        Ltp = Ltp.text
        Ltp = Ltp.replace(',', '')
        os.system('cls')
        Ltp = float(Ltp)
        print(Ltp, datetime.datetime.now())
        time.sleep(3)

soup()

Result:

sh: cls: command not found
18351.61 2016-08-31 23:44:28.103531
sh: cls: command not found
18351.54 2016-08-31 23:44:36.257327
sh: cls: command not found
18351.61 2016-08-31 23:44:47.645328
sh: cls: command not found
18351.91 2016-08-31 23:44:55.618970
sh: cls: command not found
18352.67 2016-08-31 23:45:03.842745
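The sh: cls: command not found lines appear because cls is a Windows-only command; the run above was on macOS/Linux, where the equivalent is clear. A small cross-platform variant (not part of the original answer, just a sketch):

```python
import os

# os.name is 'nt' on Windows and 'posix' on macOS/Linux;
# pick the matching screen-clearing command.
clear_command = 'cls' if os.name == 'nt' else 'clear'

def clear_screen():
    os.system(clear_command)
```

Calling clear_screen() in place of os.system('cls') silences the error on POSIX systems without changing the behavior on Windows.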

Answer 1 (score: 0)

You reassign html to the UTF-8 string of the response, then keep calling it as if it were still an I/O object. This code does not fetch new data from the server on every loop: read() simply reads the bytes already buffered in the I/O object; it does not make a new request.
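Even without the reassignment bug, re-reading the same response would not help; the stream is drained after the first read(). A sketch with io.BytesIO standing in for the response object:

```python
import io

# File-like stream over bytes that were already received once
resp = io.BytesIO(b"<html>payload</html>")

first = resp.read()   # drains the buffered bytes
second = resp.read()  # stream is exhausted; nothing new is fetched

print(first)   # b'<html>payload</html>'
print(second)  # b''
```

To see fresh data, a new request has to be issued each time; read() alone never goes back to the server.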

You can speed up the processing with the Requests library and take advantage of its persistent connections (or use urllib3 directly).

Try this (you will need to pip install requests)

import os
import datetime

from requests import Session
from bs4 import BeautifulSoup

s = Session()  # reuses the underlying TCP connection between requests

while True:
    resp = s.get("http://www.investing.com/indices/major-indices")
    bsobj = BeautifulSoup(resp.text, "html.parser")
    Ltp = bsobj.find("td", {"class": "pid-169-last"})
    Ltp = Ltp.text
    Ltp = Ltp.replace(',', '')
    os.system('cls')
    Ltp = float(Ltp)
    print(Ltp, datetime.datetime.now())
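One caveat with both answers: find() returns None when the page structure changes or the request is blocked, and float() raises on unexpected text; either crashes the polling loop. A hypothetical helper (parse_price is not part of either answer) that tolerates this:

```python
def parse_price(cell_text):
    """Turn a scraped price such as '18,351.61' into a float.

    Returns None when the text is missing or malformed instead of
    letting the polling loop crash.
    """
    if cell_text is None:
        return None
    try:
        return float(cell_text.replace(',', ''))
    except ValueError:
        return None

print(parse_price('18,351.61'))  # 18351.61
print(parse_price('n/a'))        # None
```

The loop can then skip an iteration when parse_price returns None rather than dying on a transient page change.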