Parsing a table from remote HTML with BS4

Time: 2014-01-29 23:11:16

Tags: python web-scraping beautifulsoup

I'm new to Python (3) and I want to parse an HTML page. I'm using BS4, and the page I want to parse is: http://www.myfxbook.com/members/fxgrowthbot/forex-growth-bot/71611

I'm only interested in the

<div id="history"  style="display:none" >

table and its associated <td> tags.

This is what I have so far. I don't know how to iterate over all the <td> elements in the table.

import urllib.request
from html.parser import HTMLParser

url_to_parse = 'http://www.myfxbook.com/members/fxgrowthbot/forex-growth-bot/71611'

from bs4 import BeautifulSoup
print( 'Requesting URL ' + url_to_parse + '...')
response = urllib.request.urlopen( url_to_parse )
print('Done')

print( 'Reading URL ' + url_to_parse + '...')
html = response.read()
print('Done')

soup = BeautifulSoup( str(html) )

print( '*** History ***')
for h in soup.find_all("div", attrs={"id" : "history"}):
    print( 'Found History <div>!')

history = soup.select("#history")
# How to iterate over history table's td?

Any help would be greatly appreciated.

Regards

2 Answers:

Answer 0 (score: 0)

Here's how you can do it:

import urllib.request
from bs4 import BeautifulSoup

url_to_parse = 'http://www.myfxbook.com/members/fxgrowthbot/forex-growth-bot/71611'
response = urllib.request.urlopen(url_to_parse)
html = response.read()
soup = BeautifulSoup(html)
a = soup.find(id='history').find_all('td')

print(len(a))  # 300
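If you want the cell contents rather than just the count, you can loop over that same result. A minimal sketch (assuming the cells hold plain text; the exact layout of the page may differ):

for td in a:
    text = td.get_text(strip=True)  # collapse surrounding whitespace in each cell
    if text:                        # skip empty cells
        print(text)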

Answer 1 (score: 0)

Here is the complete code I used, which only returns the first row of <td>:

import urllib.request
from bs4 import BeautifulSoup

url_to_parse = 'http://www.myfxbook.com/members/autotrade/wallstreet-forex-robot-real/95290'

print( 'Requesting URL ' + url_to_parse + '...')
response = urllib.request.urlopen( url_to_parse )
print('Done')

print( 'Reading URL ' + url_to_parse + '...')
html = response.read()
print('Done')

soup = BeautifulSoup( str(html) )

history_td = soup.find(id='history').find_all('td')
for td in history_td:
    print(td)
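One thing that may be worth ruling out is the BeautifulSoup( str(html) ) call: response.read() returns bytes, and calling str() on bytes produces the b'...' repr (with escaped newlines) rather than decoded HTML, which can trip up parsing. Whether that is what limits the output to the first row here is hard to say without inspecting the page (the rest of the table may also be filled in by JavaScript, which urllib won't see), but a sketch that avoids the issue by handing the bytes straight to BeautifulSoup would look like this:

import urllib.request
from bs4 import BeautifulSoup

url_to_parse = 'http://www.myfxbook.com/members/autotrade/wallstreet-forex-robot-real/95290'
html = urllib.request.urlopen(url_to_parse).read()

# Pass the raw bytes and let BeautifulSoup detect the encoding itself
# (or decode explicitly, e.g. html.decode('utf-8'), if the charset is known).
soup = BeautifulSoup(html, 'html.parser')

history = soup.find(id='history')
if history is not None:
    for td in history.find_all('td'):
        print(td.get_text(strip=True))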