Python urllib does not give me the HTML code I see with Inspect Element

Date: 2014-09-19 01:18:42

Tags: python html web-scraping urllib

I am trying to scrape the results at this link:

url = "http://topsy.com/trackback?url=http%3A%2F%2Fmashable.com%2F2014%2F08%2F27%2Faustralia-retail-evolution-lab-aopen-shopping%2F"

When I inspect it with Firebug I can see the HTML code, and I know what I need to do to extract the tweets. The problem is that when I get the response with urlopen, I don't get the same HTML code. I only get the tags. What am I missing?

Sample code below:

from urllib2 import urlopen
from bs4 import BeautifulSoup

def get_tweets(section_url):
    html = urlopen(section_url).read()
    soup = BeautifulSoup(html, "lxml")
    tweets = soup.find("div", "results")
    category_links = [tweet.a["href"] for tweet in tweets.findAll("div", "result-tweet")]
    return category_links

url = "http://topsy.com/trackback?url=http%3A%2F%2Fmashable.com%2F2014%2F08%2F27%2Faustralia-retail-evolution-lab-aopen-shopping%2F"
cat_links = get_tweets(url)

Thanks, YB

1 Answer:

Answer 0 (score: 2):

The problem is that the contents of the results div are filled in by additional HTTP calls and JavaScript code executed on the browser side. urllib only "sees" the initial HTML page, which does not contain the data you need.
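This is easy to check offline. The snippet below is a hypothetical, minimal stand-in for the kind of markup urlopen actually returns: the container div exists, but it is empty until JavaScript runs.

```python
from bs4 import BeautifulSoup

# Hypothetical stand-in for the raw HTML urlopen returns: the
# "results" container is present but empty, because JavaScript
# populates it after the page loads in a real browser.
raw_html = '<html><body><div class="results"></div></body></html>'

soup = BeautifulSoup(raw_html, "html.parser")
results = soup.find("div", "results")

# The div is found, but it contains no tweets to extract.
print(results.find_all("div", "result-tweet"))  # []
```

This is why the question's BeautifulSoup code finds the tags but no tweet data inside them.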

One option is to follow @Himal's advice and simulate the underlying request to trackbacks.js that the tweet data is fetched from. The result is in JSON format, which you can load() with the json module that ships with the standard library:

import json
import urllib2

url = 'http://otter.topsy.com/trackbacks.js?url=http%3A%2F%2Fmashable.com%2F2014%2F08%2F27%2Faustralia-retail-evolution-lab-aopen-shopping%2F&infonly=0&call_timestamp=1411090809443&apikey=09C43A9B270A470B8EB8F2946A9369F3'
data = json.load(urllib2.urlopen(url))
for tweet in data['response']['list']:
    print tweet['permalink_url']

Prints:

http://twitter.com/Evonomie/status/512179917610835968
http://twitter.com/abs_office/status/512054653723619329
http://twitter.com/TKE_Global/status/511523709677756416
http://twitter.com/trevinocreativo/status/510216232122200064
http://twitter.com/TomCrouser/status/509730668814028800
http://twitter.com/Evonomie/status/509703168062922753
http://twitter.com/peterchaly/status/509592878491136000
http://twitter.com/chandagarwala/status/509540405411840000
http://twitter.com/Ayjay4650/status/509517948747526144
http://twitter.com/Marketingccc/status/509131671900536832

This is the "down to the metal" option.
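Topsy's API has since shut down, so the request above can no longer be replayed, but the JSON-handling step can be sketched offline. The payload below is an assumption modeled only on the two fields the answer uses (`response.list[].permalink_url`):

```python
import json

# Hypothetical payload mimicking the structure the answer relies on:
# data['response']['list'][i]['permalink_url']
payload = '''{"response": {"list": [
    {"permalink_url": "http://twitter.com/Evonomie/status/512179917610835968"},
    {"permalink_url": "http://twitter.com/abs_office/status/512054653723619329"}
]}}'''

data = json.loads(payload)
links = [tweet['permalink_url'] for tweet in data['response']['list']]
print(links)
```

The only difference from the answer's code is json.loads() on a string instead of json.load() on the urlopen file-like object; the traversal of the decoded dict is identical.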


Otherwise, you can take the "high-level" approach and not worry about what is going on under the hood: let a real browser load the page, and interact with it through selenium WebDriver:

from selenium import webdriver

driver = webdriver.Chrome()  # can be Firefox(), PhantomJS() and more
driver.get("http://topsy.com/trackback?url=http%3A%2F%2Fmashable.com%2F2014%2F08%2F27%2Faustralia-retail-evolution-lab-aopen-shopping%2F")

for tweet in driver.find_elements_by_class_name('result-tweet'):
    print tweet.find_element_by_xpath('.//div[@class="media-body"]//ul[@class="inline"]/li//a').get_attribute('href')

driver.close()

Prints:

http://twitter.com/Evonomie/status/512179917610835968
http://twitter.com/abs_office/status/512054653723619329
http://twitter.com/TKE_Global/status/511523709677756416
http://twitter.com/trevinocreativo/status/510216232122200064
http://twitter.com/TomCrouser/status/509730668814028800
http://twitter.com/Evonomie/status/509703168062922753
http://twitter.com/peterchaly/status/509592878491136000
http://twitter.com/chandagarwala/status/509540405411840000
http://twitter.com/Ayjay4650/status/509517948747526144
http://twitter.com/Marketingccc/status/509131671900536832

Here is how you can scale the second option to get all of the tweets by following the pagination:

from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

BASE_URL = 'http://topsy.com/trackback?url=http%3A%2F%2Fmashable.com%2F2014%2F08%2F27%2Faustralia-retail-evolution-lab-aopen-shopping%2F&offset={offset}'

driver = webdriver.Chrome()

# get tweets count
driver.get('http://topsy.com/trackback?url=http%3A%2F%2Fmashable.com%2F2014%2F08%2F27%2Faustralia-retail-evolution-lab-aopen-shopping%2F')
tweets_count = int(driver.find_element_by_xpath('//li[@data-name="all"]/a/span').text)

for x in xrange(0, tweets_count, 10):
    driver.get(BASE_URL.format(offset=x))

    # page header appears in case no more tweets found
    try:
        driver.find_element_by_xpath('//div[@class="page-header"]/h3')
    except NoSuchElementException:
        pass
    else:
        break

    # wait for results
    WebDriverWait(driver, 5).until(
        EC.presence_of_element_located((By.ID, "results"))
    )

    # get tweets
    for tweet in driver.find_elements_by_class_name('result-tweet'):
        print tweet.find_element_by_xpath('.//div[@class="media-body"]//ul[@class="inline"]/li//a').get_attribute('href')

driver.close()
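The pagination itself is plain offset arithmetic: the offset query parameter is stepped in increments of 10 (the page size used in the loop above) until the tweet count is exhausted. A minimal sketch of that part, with a hypothetical tweet count in place of the value scraped from the page:

```python
BASE_URL = ('http://topsy.com/trackback?url=http%3A%2F%2Fmashable.com'
            '%2F2014%2F08%2F27%2Faustralia-retail-evolution-lab-aopen-shopping%2F'
            '&offset={offset}')

tweets_count = 25  # hypothetical; the answer reads this from the page header

# One URL per page of 10 results: offsets 0, 10, 20.
page_urls = [BASE_URL.format(offset=x) for x in range(0, tweets_count, 10)]
print(len(page_urls))  # 3
```

Each of these URLs is what driver.get() receives on one iteration of the loop; the try/except around page-header detection is a second stopping condition for when the count on the page is stale.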