Webcrawler BeautifulSoup - How to get titles from links without a class tag

Date: 2015-06-24 02:52:27

Tags: python beautifulsoup web-crawler

The site I am trying to collect data from is http://www.boxofficemojo.com/yearly/chart/?yr=2015&p=.htm. I want to get the titles of all the movies on this page, then move on to the rest of the data (studio, etc.) and the additional data inside each movie's link. This is what I have so far:

import requests
from bs4 import BeautifulSoup

def trade_spider(max_pages):
    page = 0
    while page <= max_pages:
        url = 'http://www.boxofficemojo.com/yearly/chart/?page=' + str(page) + '&view=releasedate&view2=domestic&yr=2015&p=.htm'
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text)
        # this is the loop that fails -- see below
        for link in soup.findAll('a', {'div':'body'}):
            href = 'http://www.boxofficemojo.com' + link.get('href')
            title = link.string
            print title
            get_single_item_data(href)
        page += 1

def get_single_item_data(item_url):
    source_code = requests.get(item_url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text)
    for item_name in soup.findAll('section', {'id':'postingbody'}):
        print item_name.text

trade_spider(1)

The part I am having trouble with is:

for link in soup.findAll('a', {'div':'body'}):
    href = 'http://www.boxofficemojo.com' + link.get('href')

The problem is that on the site there is no identifying class that all the links are part of. The links just have a plain "<a href>" tag.

How can I get the titles of all the links on this page?

2 Answers:

Answer 0 (score: 1):

Sorry for not giving a complete answer, but here is a clue.

I have a name for these kinds of problems when scraping. When you use the find() or find_all() methods, I call it Abstract Identification, since you can get random data when the tag class/id names are not data-oriented.

Then there is Nested Identification. That is when you have to find data without using the find() or find_all() methods, and instead crawl directly through the tag nesting. This requires more proficiency with BeautifulSoup.

Nested Identification is a longer process that is generally messy, but it is sometimes the only solution.

So how is it done? When you have a <class 'bs4.element.Tag'> object, you can find tags that are stored as attributes of that Tag object.

from bs4 import element, BeautifulSoup as BS

html = '' +\
'<body>' +\
    '<h3>' +\
        '<p>Some text to scrape</p>' +\
        '<p>Some text NOT to scrape</p>' +\
    '</h3>' +\
    '\n\n' +\
    '<strong>' +\
        '<p>Some more text to scrape</p>' +\
        '\n\n' +\
        '<a href="www.example.com/some-url/you/find/important/">Some Important Link</a>' +\
    '</strong>' +\
'</body>'



soup = BS(html)

# Starting point to extract a link
h3_tag = soup.find('h3') # finds the first h3 tag in the soup object

child_of_h3__p = h3_tag.p # locates the first p tag in the h3 tag

# climbing in the nest
child_of_h3__forbidden_p = h3_tag.p.next_sibling 
# or
#child_of_h3__forbidden_p = child_of_h3__p.next_sibling


# sometimes `.next_sibling` will yield '' or '\n', think of this element as a 
# tag separator in which case you need to continue using `.next_sibling`
# to get past the separator and onto the tag.

# Grab the tag below the h3 tag, which is the strong tag
# we need to go up 1 tag, and down 2 from our current object.
# (down 2 so we skip the tag_separator)
tag_below_h3 = child_of_h3__p.parent.next_sibling.next_sibling


# Here are 3 different ways to get to the link tag using Nested Identification

# 1.) getting a list of children from our object
children_tags = tag_below_h3.contents

p_tag = children_tags[0]
tag_separator = children_tags[1]
a_tag = children_tags[2] # or children_tags[-1] to get the last tag

print a_tag
print '1.) We Found the link: %s' % a_tag['href']


# 2.) There's only 1 <a> tag, so we can just grab it directly
a_href = tag_below_h3.a['href']

print '\n2.) We Found the link: %s' % a_href


# 3.) using next_sibling to crawl
tag_separator = tag_below_h3.p.next_sibling
a_tag = tag_below_h3.p.next_sibling.next_sibling # or tag_separator.next_sibling

print '\n3.) We Found the link: %s' % a_tag['href']
print '\nWe also found a tag separator: %s' % repr(tag_separator)

# our tag separator is a NavigableString.
if type(tag_separator) == element.NavigableString:
    print '\nNavigableStrings are usually plain text that resides inside a tag.'
    print 'In this case, however, it is a tag separator.\n'

Now, if I remember correctly, accessing a certain tag or a tag separator changes the object from a Tag to a NavigableString, in which case you need to pass it to BeautifulSoup to be able to use methods like find(). To check for this, you can do something like:

from bs4 import element, BeautifulSoup
# ... Do some beautiful soup data mining
# reach a NavigableString object
if type(formerly_a_tag_obj) == element.NavigableString:
    formerly_a_tag_obj = BeautifulSoup(formerly_a_tag_obj) # is now a soup
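
For instance, here is a minimal, self-contained sketch of that check (the markup and the variable name are made up purely for illustration):

from bs4 import element, BeautifulSoup

soup = BeautifulSoup('<p>first</p>\n<p>second</p>')
node = soup.p.next_sibling  # the '\n' between the tags: a NavigableString

if type(node) == element.NavigableString:
    node = BeautifulSoup(node)  # re-souped, so find()/find_all() work again
    print node.find('p')        # None: the separator held no tags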

Answer 1 (score: 1):

One possible approach is to use the .select() method, which accepts a CSS selector argument:

for link in soup.select('td > b > font > a[href^="/movies/?"]'):
    ......
    ......

A brief explanation of the CSS selector being used:

  • td > b: find all td elements, then from each td find the direct child b elements
  • > font: from the filtered b elements, find the direct child font elements
  • > a[href^="/movies/?"]: from the filtered font elements, return the direct child a elements whose href attribute value starts with "/movies/?"
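
Putting it all together, here is a minimal sketch of how this selector could slot into the original trade_spider (assuming the 2015 page layout still matches; the selector and URL come from above, the loop body is illustrative):

import requests
from bs4 import BeautifulSoup

def trade_spider(max_pages):
    page = 0
    while page <= max_pages:
        url = 'http://www.boxofficemojo.com/yearly/chart/?page=' + str(page) + '&view=releasedate&view2=domestic&yr=2015&p=.htm'
        soup = BeautifulSoup(requests.get(url).text)
        for link in soup.select('td > b > font > a[href^="/movies/?"]'):
            title = link.string  # the movie title is the link text
            href = 'http://www.boxofficemojo.com' + link['href']
            print title, href
        page += 1

trade_spider(1)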