Extracting from a web page with bs4 and Python

Date: 2017-10-10 09:15:56

Tags: python web-scraping beautifulsoup

How can I extract the number 1 from "Current stream number: 1" on the following site? My attempt so far with Python and bs4 is shown below, but it has not been successful.

Source of the web page I am trying to scrape:

<head><link href="basic.css" rel="stylesheet" type="text/css"></head>
<body>
<p><b>STATUS</b><br>
<p><b>Device information:</b><br>
Hardware type:  
Exstreamer 110
 (ID 20)<br>
<br>
Firmware: Streaming Client<br>
FW version: B2.17&nbsp;-&nbsp;31/05/2010 (dd/mm/yyyy)<br>
WEB version: 04.00<br>
Bootloader version: 99.19<br>
Setup version: 01.02<br>
Sg version: A8.05&nbsp;-&nbsp;May 31 2010<br>
Fs version: A2.05&nbsp;-&nbsp;31/05/2010 (dd/mm/yyyy)<br>
<p><b>System status:</b><br>
Ticks: 1588923494 ms<br>
Uptime: 10178858 s<br>
<p><b>Streaming status:</b><br>
Volume: 90%<br>
Shuffle:   Off<br>
Repeat:   Off<br>
Output peak level L: -63dBFS<br>
Output peak level R: -57dBFS<br>
Buffer level: 65532 bytes<br>
RTP decoder latency: 0 ms; average 0 ms<br>
Current stream number:   1   <br>
Current URL: http://listen.qkradio.com.au:8382/listen.mp3<br>
Current channel: 0<br>
Stream bitrate: 32 kbps<br>

Code:

from bs4 import BeautifulSoup
import urllib2
import lxml

SERVER = 'http://xx.xx.xx.xx:8080/ixstatus.html'
authinfo = urllib2.HTTPPasswordMgrWithDefaultRealm()
authinfo.add_password(None, SERVER, 'user', 'password')
page = 'http://xxx.xxx.xxx.xxx:8080/ixstatus.html'
handler = urllib2.HTTPBasicAuthHandler(authinfo)
myopener = urllib2.build_opener(handler)
opened = urllib2.install_opener(myopener)
output = urllib2.urlopen(page)
#print output.read()
soup = BeautifulSoup(output.read(), "lxml")
#print(soup)

print "stream number:", soup.select('Current stream number')[0].text

1 Answer:

Answer 0 (score: 0):

Your call to select makes BS4 interpret the argument as a CSS selector, looking for something that does not exist: a <number> tag inside a <stream> tag inside a <Current> tag.
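To see what that selector means, here is a minimal sketch using contrived markup (tag names are matched case-insensitively in HTML, so lowercase is used for clarity; nothing like this exists in your page):

from bs4 import BeautifulSoup

# The selector string is split on whitespace into a descendant chain,
# just like the familiar CSS selector "div p a".
html = "<current><stream><number>1</number></stream></current>"
soup = BeautifulSoup(html, "lxml")
print(soup.select("current stream number"))  # [<number>1</number>]

Against your real page no such tags exist, so select returns an empty list and soup.select(...)[0] raises an IndexError.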

Since the HTML has no class or id attributes that you could use to target the data you want, your best bet is probably to walk the paragraphs and use a regular expression to find a substring like: Current stream number: some_number

Here's how I would do it:

import re
import bs4

page = "html code to scrape"

# this pattern will be used to find data we want
pattern = r'\s*Current\s+stream\s+number:\s*(\d+)'

soup = bs4.BeautifulSoup(page, 'lxml')

paragraphs = soup.find_all('p')
data = []
for para in paragraphs:
    found = re.finditer(pattern, para.text, re.IGNORECASE)
    data.extend([x.group(1) for x in found])


print(data)
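Running this against the page source in the question prints ['1']. If you expect the label to appear only once, a simpler variant (a sketch under that assumption) is to search the whole document text instead of iterating over paragraphs:

import re
import bs4

page = "html code to scrape"  # placeholder, as above

pattern = r'Current\s+stream\s+number:\s*(\d+)'

soup = bs4.BeautifulSoup(page, 'lxml')
# get_text() flattens the whole document to plain text, so one search suffices
match = re.search(pattern, soup.get_text(), re.IGNORECASE)
if match:
    print("stream number:", match.group(1))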