I'm trying to use Python to extract specific information/links from a large HTML page. For example, from the IMDb page HTML output given below, I'm trying to extract movie links such as:
HREF =" /标题/ tt2388771 / ref_ = nm_flmg_act_1&#34?;丛林书:起源
Using the following Python code doesn't seem to work:
from urllib2 import urlopen
import re
source = urlopen("http://www.imdb.com/name/nm0000288/").read()
print re.findall('href="/title/', source)
print source
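With that pattern, re.findall only returns the literal string 'href="/title/' once per match, since nothing is captured. A capturing variant, sketched below under the assumption that the title text sits on the line after the href in the markup, would at least return path/title pairs:

# Capture the /title/... path and the anchor text that follows on the next line
links = re.findall(r'href="(/title/[^"]+)"\s*>([^<]+)</a>', source)
print links  # e.g. [('/title/tt2388771/?ref_=nm_flmg_act_1', 'Jungle Book: Origins'), ...]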
Any help/suggestions? The relevant part of the printed page source looks like this:
<span class="ghost">|</span> <a href="#self"
onclick="handleFilmoJumpto(this);" data-category="self">Self</a></a>
<span class="ghost">|</span> <a href="#archive_footage"
onclick="handleFilmoJumpto(this);" data-category="archive_footage">Archive footage</a></a>
</div>
<div id="filmography">
<div id="filmo-head-actor" class="head" data-category="actor" onclick="toggleFilmoCategory(this);">
<span id="hide-actor" class="hide-link"
>Hide <img src="http://ia.media-imdb.com/images/G/01/imdb/images/icons/hide-1061525577._CB358668250_.png" class="absmiddle" alt="Hide" width="18" height="16"></span>
<span id="show-actor" class="show-link"
>Show <img src="http://ia.media-imdb.com/images/G/01/imdb/images/icons/show-582987296._CB358668248_.png" class="absmiddle" alt="Show" width="18" height="16"></span>
<a name="actor">Actor</a> (49 credits)
</div>
<div class="filmo-category-section"
>
<div class="filmo-row odd" id="actor-tt2388771">
<span class="year_column">
2017
</span>
<b><a href="/title/tt2388771/?ref_=nm_flmg_act_1"
>Jungle Book: Origins</a></b>
(<a href="/r/legacy-inprod-name/title/tt2388771" class="in_production">filming</a>)
<br/>
<a href="/character/ch0011743/?ref_=nm_flmg_act_1"
>Bagheera</a>
</div>
<div class="filmo-row even" id="actor-tt1596363">
<span class="year_column">
2016
</span>
<b><a href="/title/tt1596363/?ref_=nm_flmg_act_2"
>The Big Short</a></b>
(<a href="/r/legacy-inprod-name/title/tt1596363" class="in_production">filming</a>)
<br/>
Michael Burry
</div>
Answer 0 (score: 1)
There's no need to use regular expressions to search for information in an HTML file. Use the world-famous Beautiful Soup instead.
An example for your use case:
from urllib2 import urlopen
from bs4 import BeautifulSoup
import re

source = urlopen("http://www.imdb.com/name/nm0000288/").read()
# Passing an explicit parser avoids the "no parser was explicitly specified" warning
soup = BeautifulSoup(source, "html.parser")
# Find every <a> whose href starts with /title/
print soup.findAll('a', href=re.compile('^/title/'))
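From there, a minimal sketch of how you could pull out the path and the visible title of each matched link (kept in the same Python 2 style as the rest of the post):

for link in soup.findAll('a', href=re.compile('^/title/')):
    # link['href'] is the /title/... path; get_text() returns the anchor's text
    print link['href'], link.get_text(strip=True)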