Mechanize and BeautifulSoup error httplib.InvalidURL: nonnumeric port: '' (Python)

Asked: 2013-01-01 16:05:39

Tags: python beautifulsoup mechanize

I'm looping through a list of URLs and opening each one with my script, using Mechanize/BeautifulSoup.

But I'm getting this error:

File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 718, in _set_hostport
    raise InvalidURL("nonnumeric port: '%s'" % host[i+1:])
httplib.InvalidURL: nonnumeric port: ''

It happens on this line of code:

page = mechanize.urlopen(req)

My code is below. Any insight into what I'm doing wrong? Many of the URLs work fine, and then I get this error when the script hits certain URLs, so I'm not sure of the cause.

from mechanize import Browser
from BeautifulSoup import BeautifulSoup
import re, os
import shutil
import mechanize
import urllib2
import sys
reload(sys)
sys.setdefaultencoding("utf-8")

mech = Browser()
linkfile = open ("links.txt")
urls = []
while 1:
    url = linkfile.readline()
    urls.append("%s" % linkfile.readline())
    if not url:
        break

for url in urls:
    if "http://" or "https://" not in url: 
        url = "http://" + url
    elif "..." in url:
    elif ".pdf" in url:
        #print "this is a pdf -- at some point we should save/log these"
        continue
    elif len (url) < 8:
        continue
    req = mechanize.Request(url)
    req.add_header('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8')
    req.add_header('User-Agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:17.0) Gecko/20100101 Firefox/17.0')
    req.add_header('Accept-Language', 'en-US,en;q=0.5')
    try:
        page = mechanize.urlopen(req)
    except urllib2.HTTPError, e:
        print "there was an error opening the URL, logging it"
        print e.code
        logfile = open ("log/urlopenlog.txt", "a")
        logfile.write(url + "," + "couldn't open this page" + "\n")
        pass

1 Answer:

Answer 0 (score: 1)

I think this code

if "http://" or "https://" not in url: 

isn't doing what you want it to do (or what you think it does).

if "http://"

will always evaluate as true, so in fact every URL gets the prefix added, including the ones that already start with http://. The resulting doubled scheme (http://http://...) is what makes httplib choke on a nonnumeric port. You need to rewrite it as (for example):

if "https://" not in url and "http://" not in url:

Also, now that I've started testing your code:

urls = []
while 1:
    url = linkfile.readline()
    urls.append("%s" % linkfile.readline())
    if not url:
        break

This actually reads your URL file incorrectly, capturing only every second line. You probably want:

urls = []
while 1:
    url = linkfile.readline()
    if not url:
        break
    urls.append("%s" % url)

The reason is that you call linkfile.readline() twice, forcing it to read two lines per iteration while saving only every second one.

Also, you'll want to add an if clause before the append, to keep an empty entry from showing up at the end of the list; see the sketch below.
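Putting both points together, here is a minimal sketch of the reading loop (assuming links.txt holds one URL per line; the variable names are the ones from your script). The strip() call also removes the trailing newline that readline() keeps on every line, which would otherwise be sent along with the request:

urls = []
with open("links.txt") as linkfile:
    # Iterating over the file directly avoids the manual readline() bookkeeping.
    for line in linkfile:
        url = line.strip()  # drop the trailing newline and stray whitespace
        if url:             # skip blank lines instead of appending empty entries
            urls.append(url)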

Your particular URL example works for me, though. To tell you more, I would probably need your links file.