OSError: [Errno 22] Invalid argument: 'downloaded/misc/jquery.js?v=1.4.4'

Date: 2017-02-19 17:24:38

Tags: python web-scraping

  

tfp = open(filename, 'wb')

     

OSError: [Errno 22] Invalid argument: 'downloaded/misc/jquery.js?v=1.4.4'

Can anyone help me with this error? I think it has something to do with jquery.js?v=1.4.4 not being valid. I am new to Python; apologies if I am missing something obvious.

Here is the code:

import os
from urllib.request import urlretrieve
from urllib.request import urlopen
from bs4 import BeautifulSoup

downloadDirectory = "downloaded"
baseUrl = "http://pythonscraping.com"

def getAbsoluteURL(baseUrl, source):
    if source.startswith("http://www."):
        url = "http://"+source[11:]
    elif source.startswith("http://"):
        url = source
    elif source.startswith("www."):
        url = "http://"+source[4:]
    else:
        url = baseUrl+"/"+source
    if baseUrl not in url:
        return None
    return url

def getDownloadPath(baseUrl, absoluteUrl, downloadDirectory):
    path = absoluteUrl.replace("www.", "")
    path = path.replace(baseUrl, "")
    path = downloadDirectory+path
    directory = os.path.dirname(path)

    if not os.path.exists(directory):
        os.makedirs(directory)

    return path

html = urlopen("http://www.pythonscraping.com")
bsObj = BeautifulSoup(html, "html.parser")
downloadList = bsObj.findAll(src=True)

for download in downloadList:
    fileUrl = getAbsoluteURL(baseUrl, download["src"])
    if fileUrl is not None:
        print(fileUrl)
        urlretrieve(fileUrl, getDownloadPath(baseUrl, fileUrl, downloadDirectory))

2 Answers

Answer 0 (score: 1)

For the function urlretrieve(url, filename, reporthook, data), the argument you pass as filename must be a valid filename on your operating system.

In this case, when you run

urlretrieve(fileUrl, getDownloadPath(baseUrl, fileUrl, downloadDirectory))

the argument you pass as url is "http://pythonscraping.com/misc/jquery.js?v=1.4.4", and the argument you pass as filename is "downloaded/misc/jquery.js?v=1.4.4".

I don't think "jquery.js?v=1.4.4" is a valid filename.

Solution: in the getDownloadPath function, change return path to

return path.partition('?')[0]
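As a quick sanity check, `str.partition` splits the string at the first `'?'` and index `[0]` keeps everything before it, so the query string never reaches the filesystem:

```python
# str.partition splits at the first "?"; index [0] keeps only
# the part before it, dropping the query string entirely.
path = "downloaded/misc/jquery.js?v=1.4.4"
print(path.partition('?')[0])  # downloaded/misc/jquery.js
```

If the path contains no "?", partition leaves it unchanged, so the fix is safe for ordinary filenames as well.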

Answer 1 (score: 0)

downloaded/misc/jquery.js?v=1.4.4 is not a valid filename. I think this is a better solution:

import requests
from bs4 import BeautifulSoup

download_directory = "downloaded"
base_url = "http://www.pythonscraping.com/"
# Use Requests instead of urllib
def get_files_url(base_url):
    # Return a list of tag elements that contain src attrs
    html = requests.get(base_url)
    soup = BeautifulSoup(html.text, "lxml")
    return soup.find_all(src=True)

def get_file_name(url):
    # Return the last part after the last "/" as file name
    # Eg: return a.png as file name if url=http://pythonscraping.com/a.png
    # Remove characters not valid in file name
    file_name = url.split("/")[-1]
    remove_list = '?><\\/:"*|'  # characters not allowed in Windows filenames
    for ch in remove_list:
        if ch in file_name:
            file_name = file_name.replace(ch, "")
    return download_directory + "/" + file_name

def get_formatted_url(url):
    if not url.startswith("http://"):
        return base_url + url
    elif base_url not in url:
        return None
    else:
        return url

links = get_files_url(base_url)

for link in links:
    url = link["src"]
    url = get_formatted_url(url)
    if url is None:
        continue
    print(url)
    result = requests.get(url, stream=True)
    file_name = get_file_name(url)
    print(file_name)
    with open(file_name, 'wb') as f:
        for chunk in result.iter_content(10):
            f.write(chunk)
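As a side note (not part of the answer above, and `safe_file_name` is a hypothetical helper), the standard library's `urllib.parse` can drop the query string for you instead of stripping characters by hand; a minimal sketch:

```python
import os
from urllib.parse import urlparse

def safe_file_name(url, download_directory="downloaded"):
    # urlparse separates the path from the query string, so
    # "?v=1.4.4" never reaches the filename in the first place.
    name = os.path.basename(urlparse(url).path) or "index.html"
    return os.path.join(download_directory, name)

print(safe_file_name("http://pythonscraping.com/misc/jquery.js?v=1.4.4"))
```

This still leaves other invalid characters untouched, so on Windows you may want to combine it with a character filter like the one above.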