Scraping data from a website

Date: 2014-03-11 02:52:21

Tags: python python-2.7 web-scraping html-parsing beautifulsoup

I'm having trouble chaining the links together. I need spider code that follows the links on a page and grabs the details I need. So far my code can fetch the required information from one page, but there are more pages, so I need to follow those links too. The base_url contains the application listing; I want to collect all the links from that page, then switch to the next page and repeat, and finally visit each collected link to gather the details of every application, such as its name, version number, and so on. Right now I can collect all the information, but the pages are not chained together. How can I do this? Here is my code:

#extracting links
def linkextract(soup):
    print "\n extracting links of next pages"
    print "\n\n page 2 \n"
    # collect the first <a> inside every <div class="">
    sAll = [div.find('a') for div in soup.findAll('div', attrs={'class': ''})]
    for i in sAll:
        suburl = "" + i['href']  # checking pages (base URL omitted here)
        print suburl
        pages = mech.open(suburl)
        content = pages.read()
        anosoup = BeautifulSoup(content)
        extract(anosoup)
    app_url = ""  # application-page URL omitted here
    print app_url
    #print soup.prettify()
    page1 = mech.open(app_url)
    html1 = page1.read()
    soup1 = BeautifulSoup(html1)
    print "\n\n application page details \n"
    extractinside(soup1)

I need help with this, thanks.
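The two pieces the question needs, collecting per-application links and finding the "next page" link, can be sketched offline against a small stand-in page. The markup below (the `app` class and the `next` anchor id) is an assumption for illustration, not the real site's structure:

```python
from bs4 import BeautifulSoup

# A minimal, hypothetical listing page: each application link sits in a
# <div class="app">, and one anchor leads to the following listing page.
listing_html = """
<html><body>
  <div class="app"><a href="/app/alpha.html">Alpha</a></div>
  <div class="app"><a href="/app/beta.html">Beta</a></div>
  <a id="next" href="/list-2.html">next page</a>
</body></html>
"""

soup = BeautifulSoup(listing_html, "html.parser")

# Collect the per-application detail links on this page.
app_links = [div.a["href"] for div in soup.find_all("div", class_="app") if div.a]

# Find the link that leads to the next listing page (None on the last page).
next_link = soup.find("a", id="next")

print(app_links)          # ['/app/alpha.html', '/app/beta.html']
print(next_link["href"])  # /list-2.html
```

In a real spider you would loop: fetch a listing page, collect `app_links`, follow each for details, then repeat with `next_link["href"]` until no next link is found.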

1 answer:

Answer 0 (score: 2):

Here is what you should start with:

import urllib2
from bs4 import BeautifulSoup

URL = 'http://www.pcwelt.de/download-neuzugaenge.html'

soup = BeautifulSoup(urllib2.urlopen(URL))
box = soup.find('div', {'class': 'boxed'})
links = [tr.td.a['href'] for tr in box.table.find_all('tr') if tr.td]

for link in links:
    url = "http://www.pcwelt.de{0}".format(link)
    soup = BeautifulSoup(urllib2.urlopen(url))

    name = soup.find('span', {'itemprop': 'name'}).text
    version = soup.find('td', {'itemprop': 'softwareVersion'}).text
    print "Name: %s; Version: %s" % (name, version)

This prints:

Name: Ashampoo Clip Finder HD Free; Version: 2.3.6
Name: Many Cam; Version: 4.0.63
Name: Roboform; Version: 7.9.5.7
...
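The detail-page lookups in the answer rely on `itemprop` microdata attributes. They can be exercised against a tiny stand-in snippet (the markup here is a simplified assumption, not the actual PC-Welt page):

```python
from bs4 import BeautifulSoup

# Simplified stand-in for an application detail page.
detail_html = """
<div>
  <span itemprop="name">Example App</span>
  <table><tr><td itemprop="softwareVersion">1.2.3</td></tr></table>
</div>
"""

soup = BeautifulSoup(detail_html, "html.parser")

# Same lookups as the answer: match elements by their itemprop attribute.
name = soup.find("span", {"itemprop": "name"}).text
version = soup.find("td", {"itemprop": "softwareVersion"}).text

print("Name: %s; Version: %s" % (name, version))  # Name: Example App; Version: 1.2.3
```

If the site drops or renames those attributes, both `find` calls return `None`, so production code should check the result before reading `.text`.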

Hope that helps.