Searching for data across <div>s

Asked: 2018-01-09 23:43:01

Tags: python html web-scraping beautifulsoup

I'm trying to extract information from a set of repeated rows containing many embedded <div>s. I'm trying to write a scraper to grab various elements from this page. For some reason, I can't find a way to get the tag using the class that holds each row's information, and I can't isolate the pieces I need to extract. For reference, here is a sample of one row:

<div id="dTeamEventResults" class="col-md-12 team-event-results"><div>
    <div class="row team-event-result team-result">
        <div class="col-md-12 main-info">
            <div class="row">
                <div class="col-md-7 event-name">
                    <dl>
                        <dt>Team Number:</dt> 
                        <dd><a href="/team-event-search/team?program=JFLL&amp;year=2017&amp;number=11733" class="result-name">11733</a></dd>
                        <dt>Team:</dt> 
                        <dd> Aqua Duckies</dd>
                        <dt>Program:</dt> 
                        <dd>FIRST LEGO League Jr.</dd>
                    </dl>
                </div>

The script I've started building looks like this:

from urllib2 import urlopen as uReq
from bs4 import BeautifulSoup as soup

my_url = 'https://www.firstinspires.org/team-event-search#type=teams&sort=name&keyword=NJ&programs=FLLJR,FLL,FTC,FRC&year=2017'

uClient = uReq(my_url)
page_html = uClient.read()
uClient.close()

page_soup = soup(page_html, "html.parser")

rows = page_soup.findAll("div", {"class":"row team-event-result team-result"})

Whenever I run len(rows), it always comes back as 0. I seem to have hit a wall and could use some help. Thanks!

3 Answers:

Answer 0 (score: 1)

This looks like an issue with multi-class tags. I believe this question may help you find a solution.
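To illustrate the multi-class point: BeautifulSoup treats class as a multi-valued attribute, and passing the full space-separated string to find_all matches only that exact attribute string, which breaks if the classes appear in a different order or with extra whitespace. A minimal sketch against a snippet shaped like the question's markup (the snippet itself is a made-up sample):

```python
from bs4 import BeautifulSoup

# A minimal snippet shaped like the question's markup.
html = '''
<div class="row team-event-result team-result">
  <div class="col-md-12 main-info">row 1</div>
</div>
'''

soup = BeautifulSoup(html, "html.parser")

# Matching on a single class is robust to class order and extra classes:
rows = soup.find_all("div", class_="team-result")

# A CSS selector requires all three classes but in any order:
rows_css = soup.select("div.row.team-event-result.team-result")

print(len(rows), len(rows_css))  # 1 1
```

Note that neither form will help if the rows are not in the fetched HTML at all, which turns out to be the case here (see Answer 2).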

Answer 1 (score: 1)

You can search specifically for the dt and dd tags that contain your target data:

from bs4 import BeautifulSoup as soup
from urllib2 import urlopen as uReq
import re
data = str(uReq('https://www.firstinspires.org/team-event-search#type=teams&sort=name&keyword=NJ&programs=FLLJR,FLL,FTC,FRC&year=2017').read())
s = soup(data, 'lxml')
headers = map(lambda x:x[:-1], [[b.text for b in i.find_all('dt')] for i in s.find_all('dl')][0])
data = [[re.sub(r'\s{2,}', '', b.text) for b in i.find_all('dd')] for i in s.find_all('dl')]
print(data)
final_data = [dict(zip(headers, i)) for i in data]
print(final_data)

When this code is run against the example above, the output is:

[[u'11733', u' Aqua Duckies', u'FIRST LEGO League Jr.']]
[{u'Program': u'FIRST LEGO League Jr.', u'Team Number': u'11733', u'Team': u' Aqua Duckies'}]
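Note that this answer is Python 2 code: urllib2 was renamed urllib.request in Python 3, and map() now returns an iterator rather than a list. A Python 3 sketch of the same dt/dd pipeline, run here on the sample row from the question instead of a live request:

```python
import re
from bs4 import BeautifulSoup

# The sample row from the question stands in for the fetched page.
html = '''<dl>
<dt>Team Number:</dt> <dd>11733</dd>
<dt>Team:</dt> <dd> Aqua Duckies</dd>
<dt>Program:</dt> <dd>FIRST LEGO League Jr.</dd>
</dl>'''

s = BeautifulSoup(html, "html.parser")

# List comprehensions replace map(); [:-1] drops the trailing colon
# from each <dt> label so it can serve as a dict key.
headers = [dt.text[:-1] for dt in s.find_all('dl')[0].find_all('dt')]
data = [[re.sub(r'\s{2,}', '', dd.text) for dd in dl.find_all('dd')]
        for dl in s.find_all('dl')]
final_data = [dict(zip(headers, row)) for row in data]
print(final_data)
# [{'Team Number': '11733', 'Team': ' Aqua Duckies', 'Program': 'FIRST LEGO League Jr.'}]
```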

Answer 2 (score: 1)

The content of this page is generated dynamically, so you'll need a browser automation tool such as selenium. The script below will fetch the content you're after. Give it a try:

from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://www.firstinspires.org/team-event-search#type=teams&sort=name&keyword=NJ&programs=FLLJR,FLL,FTC,FRC&year=2017')
soup = BeautifulSoup(driver.page_source,"lxml")
for items in soup.select('.main-info'):
    docs = ' '.join([' '.join([item.text,' '.join(val.text.split())]) for item,val in zip(items.select(".event-name dt"),items.select(".event-name dd"))])
    location = ' '.join([' '.join(item.text.split()) for item in items.select(".event-location-type address")])
    print("Event_Info: {}\nEvent_Location: {}\n".format(docs,location))
driver.quit()

The results look like this:

Event_Info: Team Number: 11733 Team: Aqua Duckies Program: FIRST LEGO League Jr.
Event_Location: Sparta, NJ 07871 USA

Event_Info: Team Number: 4281 Team: Bulldogs Program: FIRST Robotics Competition
Event_Location: Somerset, NJ 08873 USA
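One possible refinement of this approach (my suggestion, not part of the answer above): wait explicitly for the rows to render before reading page_source, and keep the parsing in a plain function so it can be tested without a browser. The selenium calls are sketched in comments; the parser is demonstrated on the sample row from the question:

```python
from bs4 import BeautifulSoup

def parse_results(page_source):
    """Extract the dt/dd label-value pairs from each rendered result row."""
    soup = BeautifulSoup(page_source, "html.parser")
    results = []
    for row in soup.select(".main-info"):
        pairs = zip(row.select(".event-name dt"), row.select(".event-name dd"))
        # Strip the trailing colon from each label; collapse whitespace in values.
        results.append({dt.text.rstrip(":"): " ".join(dd.text.split())
                        for dt, dd in pairs})
    return results

# With selenium, the HTML would come from driver.page_source after an
# explicit wait, rather than hoping the page has finished loading:
#   from selenium.webdriver.support.ui import WebDriverWait
#   from selenium.webdriver.support import expected_conditions as EC
#   from selenium.webdriver.common.by import By
#   WebDriverWait(driver, 10).until(
#       EC.presence_of_element_located((By.CSS_SELECTOR, ".team-result")))
#   rows = parse_results(driver.page_source)

# Demonstrated on the sample row from the question:
sample = '''<div class="main-info"><div class="col-md-7 event-name"><dl>
<dt>Team Number:</dt> <dd>11733</dd>
<dt>Team:</dt> <dd> Aqua Duckies</dd>
</dl></div></div>'''
print(parse_results(sample))
# [{'Team Number': '11733', 'Team': 'Aqua Duckies'}]
```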