BeautifulSoup parse error in Python - junk characters

Date: 2014-05-18 22:02:33

Tags: python beautifulsoup mechanize

Code - not sure what I did that broke BeautifulSoup (BS)

import mechanize
import urllib2
from bs4 import BeautifulSoup

#create a browser object to login
browser = mechanize.Browser()

#tell the browser we are human, and not a robot, so the mechanize library doesn't block us
browser.set_handle_robots(False)

browser.addheaders = [('User-Agent','Mozilla/5.0 (Windows U; Windows NT 6.0; en-US; rv:9.0.6)')]
#url
url = 'https://www.google.com.au/search?q=python'
#open the url in our virtual browser
browser.open(url)
html = browser.response().read()
print html
soup = BeautifulSoup(html)
print(soup.prettify())

Error

HTMLParseError: junk characters in start tag: u'{t:1}); class="gbzt ', at line 1, column 42892

<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en-AU"><head><meta content="text/html; charset=UTF-8" http-equiv="Content-Type"><meta content="/images/google_favicon_128.png" itemprop="image"><title>python - Google Search</title><style>#gb{font:13px/27px Arial,sans-serif;height:30px}#gbz,#gbg{position:absolute;white-space:nowrap;top:0;height:30px;z-index:1000}#gbz{left:0;padding-left:4px}#gbg{right:0;padding-right:5px}#gbs{background:transparent;position:absolute;top:-999px;v
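
For context, the traceback points at Python 2's built-in HTMLParser, which BeautifulSoup falls back to when no parser is named; it is strict and trips over Google's minified, not-quite-valid markup. A minimal sketch of one common workaround, assuming lxml (or html5lib) is installed, reusing the html string already read from mechanize above:

# Minimal sketch (assumes `pip install lxml` or html5lib has been run):
# name an explicit, more forgiving parser instead of the strict default.
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')    # or BeautifulSoup(html, 'html5lib')
print(soup.prettify())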

1 Answer:

Answer 0 (score: 0)

Try using requests:

import requests
from bs4 import BeautifulSoup

# fetch the search results page with requests instead of mechanize
url = 'https://www.google.com.au/search?q=python'
r = requests.get(url)
html = r.text
print html
soup = BeautifulSoup(html)
print(soup.prettify())
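
A sketch building on this answer: Google may serve different or blocked markup to clients without a browser-like User-Agent (the header value below is an assumption, not from the original post), and passing an explicit lenient parser avoids the HTMLParseError seen above.

import requests
from bs4 import BeautifulSoup

url = 'https://www.google.com.au/search?q=python'
headers = {'User-Agent': 'Mozilla/5.0'}   # assumed browser-like header
r = requests.get(url, headers=headers)
soup = BeautifulSoup(r.text, 'lxml')      # lenient parser instead of the strict default
print(soup.prettify())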