How do I use datasets.fetch_mldata() in sklearn?

Asked: 2013-10-22 23:56:46

Tags: python numpy machine-learning

I am trying to run the following code for a short machine learning algorithm:

import re
import argparse
import csv
from collections import Counter
from sklearn import datasets
import sklearn
from sklearn.datasets import fetch_mldata

dataDict = datasets.fetch_mldata('MNIST Original')

In this code I am trying to read the dataset "MNIST Original" from mldata.org via sklearn. This results in the following error (there are more lines of code, but I get the error at this particular line):

Traceback (most recent call last):
  File "C:\Program Files (x86)\JetBrains\PyCharm 2.7.3\helpers\pydev\pydevd.py", line 1481, in <module>
    debugger.run(setup['file'], None, None)
  File "C:\Program Files (x86)\JetBrains\PyCharm 2.7.3\helpers\pydev\pydevd.py", line 1124, in run
    pydev_imports.execfile(file, globals, locals) #execute the script
  File "C:/Users/sony/PycharmProjects/Machine_Learning_Homework1/zeroR.py", line 131, in <module>
    dataDict = datasets.fetch_mldata('MNIST Original')
  File "C:\Anaconda\lib\site-packages\sklearn\datasets\mldata.py", line 157, in fetch_mldata
    matlab_dict = io.loadmat(matlab_file, struct_as_record=True)
  File "C:\Anaconda\lib\site-packages\scipy\io\matlab\mio.py", line 176, in loadmat
    matfile_dict = MR.get_variables(variable_names)
  File "C:\Anaconda\lib\site-packages\scipy\io\matlab\mio5.py", line 294, in get_variables
    res = self.read_var_array(hdr, process)
  File "C:\Anaconda\lib\site-packages\scipy\io\matlab\mio5.py", line 257, in read_var_array
    return self._matrix_reader.array_from_header(header, process)
  File "mio5_utils.pyx", line 624, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header (scipy\io\matlab\mio5_utils.c:5717)
  File "mio5_utils.pyx", line 653, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header (scipy\io\matlab\mio5_utils.c:5147)
  File "mio5_utils.pyx", line 721, in scipy.io.matlab.mio5_utils.VarReader5.read_real_complex (scipy\io\matlab\mio5_utils.c:6134)
  File "mio5_utils.pyx", line 424, in scipy.io.matlab.mio5_utils.VarReader5.read_numeric (scipy\io\matlab\mio5_utils.c:3704)
  File "mio5_utils.pyx", line 360, in scipy.io.matlab.mio5_utils.VarReader5.read_element (scipy\io\matlab\mio5_utils.c:3429)
  File "streams.pyx", line 181, in scipy.io.matlab.streams.FileStream.read_string (scipy\io\matlab\streams.c:2711)
IOError: could not read bytes

I have tried researching this on the internet, with little help. Any expert help in resolving this error would be much appreciated.

TIA.

11 Answers:

Answer 0 (score: 10)

It looks like the cached data is corrupted. Try removing it and downloading it again (this takes a moment). If not specified otherwise, the data for 'MNIST original' should be in

~/scikit_learn_data/mldata/mnist-original.mat
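
A minimal sketch of clearing the cache and re-downloading, assuming the default data_home (adjust the path if you set your own):

import os
from sklearn.datasets import fetch_mldata

# Assumed default cache location; change it if you passed a custom data_home.
cached = os.path.expanduser('~/scikit_learn_data/mldata/mnist-original.mat')
if os.path.exists(cached):
    os.remove(cached)  # drop the possibly corrupted cached copy

# Re-download from mldata.org; this can take a while on a slow connection.
mnist = fetch_mldata('MNIST original')
print(mnist.data.shape)  # expected: (70000, 784)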

Answer 1 (score: 7)

Starting with version 0.20, sklearn deprecates fetch_mldata and adds fetch_openml instead.

Use the following code to download the MNIST dataset:

from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784')

Note that the format has changed a bit, though. For example, mnist['target'] is an array of string category labels (not floats as before).
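
For example, a small sketch of casting the string labels back to integers (assuming the 'mnist_784' dataset name shown above):

import numpy as np
from sklearn.datasets import fetch_openml

mnist = fetch_openml('mnist_784')
X, y = mnist['data'], mnist['target']

# fetch_openml returns the labels as strings ('0' .. '9');
# cast them to integers if numeric targets are needed.
y = y.astype(np.uint8)
print(X.shape, y.dtype)  # (70000, 784) uint8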

Answer 2 (score: 4)

I downloaded the dataset from this link:

https://github.com/amplab/datascience-sp14/blob/master/lab7/mldata/mnist-original.mat

Then I typed these lines:

from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original', transpose_data=True, data_home='files')

*** The path will be (your working directory)/files/mldata/mnist-original.mat

I hope you get it working; it worked fine for me.
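
For reference, a minimal sketch of the whole sequence, assuming the GitHub mirror above is still reachable (fetch_mldata looks for <data_home>/mldata/mnist-original.mat):

import os
from six.moves import urllib
from sklearn.datasets import fetch_mldata

# Raw-download URL of the GitHub mirror linked above (assumption: still available).
url = ("https://github.com/amplab/datascience-sp14/raw/master/"
       "lab7/mldata/mnist-original.mat")

# fetch_mldata(..., data_home='files') expects files/mldata/mnist-original.mat
target_dir = os.path.join('files', 'mldata')
if not os.path.isdir(target_dir):
    os.makedirs(target_dir)
urllib.request.urlretrieve(url, os.path.join(target_dir, 'mnist-original.mat'))

mnist = fetch_mldata('MNIST original', transpose_data=True, data_home='files')
print(mnist.data.shape)  # expected: (70000, 784)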

Answer 3 (score: 1)

Here is some sample code for getting the MNIST data ready to learn with in sklearn:

def get_data():
    """
    Get MNIST data ready to learn with.

    Returns
    -------
    dict
        With keys 'train' and 'test'. Both have the keys 'X' (features)
        and 'y' (labels).
    """
    from sklearn.datasets import fetch_mldata
    mnist = fetch_mldata('MNIST original')

    x = mnist.data
    y = mnist.target

    # Scale data to [-1, 1] - This is of major importance!!!
    x = x/255.0*2 - 1

    from sklearn.cross_validation import train_test_split
    x_train, x_test, y_train, y_test = train_test_split(x, y,
                                                        test_size=0.33,
                                                        random_state=42)
    data = {'train': {'X': x_train,
                      'y': y_train},
            'test': {'X': x_test,
                     'y': y_test}}
    return data
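
As a usage sketch (illustrative only), the returned dict can be passed straight to any sklearn classifier:

from sklearn.linear_model import LogisticRegression

data = get_data()
clf = LogisticRegression()
clf.fit(data['train']['X'], data['train']['y'])
print(clf.score(data['test']['X'], data['test']['y']))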

Answer 4 (score: 1)

I faced the same issue, and while on poor WiFi I saw different file sizes for mnist-original.mat at different times. I switched to LAN and it worked fine. It is probably a network issue.
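
A quick, hypothetical sanity check for a truncated download, assuming the default cache location and the roughly 55 MB file size mentioned elsewhere in this thread:

import os

path = os.path.expanduser('~/scikit_learn_data/mldata/mnist-original.mat')
if os.path.exists(path):
    size_mb = os.path.getsize(path) / 1e6
    print('cached file: %.1f MB' % size_mb)
    if size_mb < 50:
        print('file looks truncated; delete it and download it again')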

Answer 5 (score: 0)

Try it like this:

dataDict = fetch_mldata('MNIST original')

This works for me. Since you used the from ... import ... syntax, you should not prefix the call with datasets.
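
In other words, the two equivalent styles are:

# Style 1: import the module and qualify the call
from sklearn import datasets
dataDict = datasets.fetch_mldata('MNIST original')

# Style 2: import the function directly and call it unqualified
from sklearn.datasets import fetch_mldata
dataDict = fetch_mldata('MNIST original')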

Answer 6 (score: 0)

I also got a fetch_mldata() "IOError: could not read bytes" error. Here is the solution; the relevant lines of code are:

from sklearn.datasets.mldata import fetch_mldata
mnist = fetch_mldata('mnist-original', data_home='/media/Vancouver/apps/mnist_dataset/')

... be sure to change 'data_home' to your preferred location (directory).

Here is a complete script:

#!/usr/bin/python
# coding: utf-8

# Source:
# https://stackoverflow.com/questions/19530383/how-to-use-datasets-fetch-mldata-in-sklearn
# ... modified, below, by Victoria

"""
pers. comm. (Jan 27, 2016) from MLdata.org MNIST dataset contact "Cheng Ong":

    The MNIST data is called 'mnist-original'. The string you pass to sklearn
    has to match the name of the URL:

    from sklearn.datasets.mldata import fetch_mldata
    data = fetch_mldata('mnist-original')
"""

def get_data():

    """
    Get MNIST data; returns a dict with keys 'train' and 'test'.
    Both have the keys 'X' (features) and 'y' (labels)
    """

    from sklearn.datasets.mldata import fetch_mldata

    mnist = fetch_mldata('mnist-original', data_home='/media/Vancouver/apps/mnist_dataset/')

    x = mnist.data
    y = mnist.target

    # Scale data to [-1, 1]
    x = x/255.0*2 - 1

    from sklearn.cross_validation import train_test_split

    x_train, x_test, y_train, y_test = train_test_split(x, y,
        test_size=0.33, random_state=42)

    data = {'train': {'X': x_train, 'y': y_train},
            'test': {'X': x_test, 'y': y_test}}

    return data

data = get_data()
print '\n', data, '\n'

Answer 7 (score: 0)

If you do not provide data_home, the program looks for ${your project path}/mldata/mnist-original.mat; you can download the file yourself and put it in that path.
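
Following this answer's suggestion, a sketch of downloading the file into an mldata/ folder under the project directory (the GitHub mirror URL is borrowed from the answers above and is an assumption):

import os
from six.moves import urllib

url = ("https://github.com/amplab/datascience-sp14/raw/master/"
       "lab7/mldata/mnist-original.mat")
mldata_dir = os.path.join(os.getcwd(), 'mldata')
if not os.path.isdir(mldata_dir):
    os.makedirs(mldata_dir)
urllib.request.urlretrieve(url, os.path.join(mldata_dir, 'mnist-original.mat'))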

Answer 8 (score: 0)

I have faced this issue in the past as well. It is caused by the dataset being quite large (about 55.4 MB): I ran fetch_mldata, and because of my internet connection it took a while to download everything. Not realizing this, I interrupted the process.

The dataset ended up corrupted, which explains why the error occurs.

Answer 9 (score: 0)

In addition to what @szymon mentioned, you can also load the dataset this way:

from six.moves import urllib
from sklearn.datasets import fetch_mldata

from scipy.io import loadmat
mnist_alternative_url = "https://github.com/amplab/datascience-sp14/raw/master/lab7/mldata/mnist-original.mat"
mnist_path = "./mnist-original.mat"
response = urllib.request.urlopen(mnist_alternative_url)
with open(mnist_path, "wb") as f:
    content = response.read()
    f.write(content)
mnist_raw = loadmat(mnist_path)
mnist = {
    "data": mnist_raw["data"].T,
    "target": mnist_raw["label"][0],
    "COL_NAMES": ["label", "data"],
    "DESCR": "mldata.org dataset: mnist-original",
}
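
The resulting dict mimics the fetch_mldata layout, so as a quick check:

X, y = mnist["data"], mnist["target"]
print(X.shape, y.shape)  # expected: (70000, 784) and (70000,)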

Answer 10 (score: -1)

It's 'MNIST original'. Lowercase 'o'.
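
That is:

from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')  # lowercase 'o' in 'original'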