Python urllib.urlopen IOError

Date: 2010-04-20 03:19:21

Tags: python urllib

So I have the following lines of code in a function:

sock = urllib.urlopen(url)
html = sock.read()
sock.close()

They work fine when I call the function by hand. However, when I call the function inside a loop (using the same URL as before), I get the following error:

> Traceback (most recent call last):
  File "./headlines.py", line 256, in <module>
    main(argv[1:])
  File "./headlines.py", line 37, in main
    write_articles(headline, output_folder + "articles_" + term +"/")
  File "./headlines.py", line 232, in write_articles
    print get_blogs(headline, 5)
  File "/Users/michaelnussbaum08/Documents/College/Sophmore_Year/Quarter_2/Innovation/Headlines/_code/get_content.py", line 41, in get_blogs
    sock = urllib.urlopen(url)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 87, in urlopen
    return opener.open(url)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 203, in open
    return getattr(self, name)(url)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 314, in open_http
    if not host: raise IOError, ('http error', 'no host given')
IOError: [Errno http error] no host given

Any ideas?

Edit: more code:

def get_blogs(term, num_results):
    search_term = term.replace(" ", "+")
    print "search_term: " + search_term
    url = 'http://blogsearch.google.com/blogsearch_feeds?hl=en&q='+search_term+'&ie=utf-8&num=10&output=rss'
    print "url: " +url  

    #error occurs on line below

    sock = urllib.urlopen(url)
    html = sock.read()
    sock.close()

def write_articles(headline, output_folder, num_articles=5):

    #calls get_blogs

    if not os.path.exists(output_folder):
        os.makedirs(output_folder)

    output_file = output_folder+headline.strip("\n")+".txt"
    f = open(output_file, 'a')
    articles = get_articles(headline, num_articles)
    blogs = get_blogs(headline, num_articles)


    #NEW FUNCTION
    #the loop that calls write_articles
    for term in trend_list:
        if do_find_max == True:
            fill_search_term(term, output_folder)
        headlines = headline_process(term, output_folder, max_headlines, do_find_max)
        for headline in headlines:
            try:
                write_articles(headline, output_folder + "articles_" + term + "/")
            except UnicodeEncodeError:
                pass

3 Answers:

Answer 0 (score: 6):

I've run into this problem when a variable concatenated into the URL (search_term in your case)

url = 'http://blogsearch.google.com/blogsearch_feeds?hl=en&q='+search_term+'&ie=utf-8&num=10&output=rss'

has a newline character at the end. So make sure you do

search_term = search_term.strip()

You probably also want to do

search_term = urllib2.quote(search_term)

to make sure your string is safe to use in a URL.
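
Applied to the question's get_blogs, the fix might look like this (a minimal sketch for Python 2; urllib.quote_plus is used here instead of urllib2.quote because it both URL-encodes the term and turns spaces into '+'):

import urllib

def get_blogs(term, num_results):
    # strip() drops a trailing newline (e.g. from a headline read out of a file),
    # quote_plus() URL-encodes the term and converts spaces to '+'
    search_term = urllib.quote_plus(term.strip())
    url = ('http://blogsearch.google.com/blogsearch_feeds?hl=en&q='
           + search_term + '&ie=utf-8&num=10&output=rss')

    sock = urllib.urlopen(url)
    html = sock.read()
    sock.close()
    return html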

Answer 1 (score: 1):

In the loop in your function, right before the call to urlopen, maybe put a print statement:

print(url)
sock = urllib.urlopen(url)

That way, when you run the script and get the IOError, you will see the url that is causing the problem. The "no host given" error can be reproduced if the url equals 'http://' ...
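
If stray whitespace is the culprit, printing the plain string can look perfectly normal; printing its repr makes a trailing newline visible (a small debugging sketch, not part of the original answer):

print repr(url)   # a bad value shows up as '...&output=rss\n'
sock = urllib.urlopen(url)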

Answer 2 (score: 1):

If you don't want to handle reading each chunk yourself, use urllib2. It probably behaves the way you expect.

import urllib2
req = urllib2.Request(url='http://stackoverflow.com/')
f = urllib2.urlopen(req)
print f.read()
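
If the surrounding loop should keep running when a single request fails, the call can be wrapped in error handling; a sketch using the URLError and HTTPError exceptions that urllib2 raises in Python 2:

import urllib2

url = 'http://blogsearch.google.com/blogsearch_feeds?hl=en&q=python&ie=utf-8&num=10&output=rss'
try:
    f = urllib2.urlopen(url)
    print f.read()
except urllib2.HTTPError as e:
    # the server answered, but with an error status
    print "HTTP error:", e.code
except urllib2.URLError as e:
    # the server could not be reached (bad host, no network, ...)
    print "failed to reach server:", e.reason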