Something is going wrong in the data normalization step (np.max, np.mean, etc.)

Asked: 2019-11-06 21:24:16

Tags: python numpy tensorflow pycharm conv-neural-network

I am new to Python. When I was using TensorFlow for classification, I got an error that seems to be related to NumPy normalization:

C:\Users...\lib\site-packages\numpy\core\fromnumeric.py:3257: RuntimeWarning: Mean of empty slice.
  out=out, **kwargs)

C:\Users...\lib\site-packages\numpy\core\_methods.py:161: RuntimeWarning: invalid value encountered in true_divide
  ret = ret.dtype.type(ret / rcount)

Traceback (most recent call last):
  File "C:/Users/fy055/.../ccnn_class_CONVtrainFULLtrain_hcp.py", line 205, in <module>
    train_data, train_labels, test_data, test_labels = create_train_and_test_data(i, IDs, subjectIDs, labels, data_tensor)
  File "C:/Users/fy055/.../ccnn_class_CONVtrainFULLtrain_hcp.py", line 142, in create_train_and_test_data
    test_data = normalize_tensor(data_tensor[testIDs,:,:,:]).astype(np.float32)
  File "C:/Users/fy055/.../ccnn_class_CONVtrainFULLtrain_hcp.py", line 92, in normalize_tensor
    data_tensor /= np.max(np.abs(data_tensor))
  File "<__array_function__ internals>", line 6, in amax
  File "C:\Users\fy055...\lib\site-packages\numpy\core\fromnumeric.py", line 2621, in amax
    keepdims=keepdims, initial=initial, where=where)
  File "C:\Users\fy055...\lib\site-packages\numpy\core\fromnumeric.py", line 90, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation maximum which has no identity

I tried googling it but could not find a way out. I am not quite sure whether something is wrong in the data normalization step. Does anyone know what the problem is and how to fix it? Many thanks.
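For comparison, here is a minimal sketch (not from the original script) showing that both the RuntimeWarnings and the ValueError appear whenever these reductions receive a zero-size array, e.g. when a boolean index like `testIDs` selects no rows:

```python
import numpy as np

# Both symptoms come from reducing a zero-size array: np.mean on an empty
# array emits "Mean of empty slice" and returns nan, while np.max raises
# a ValueError, because the maximum reduction has no identity element.
empty = np.empty((0, 4, 4, 4))

mean = np.mean(empty)      # RuntimeWarning: Mean of empty slice
print(np.isnan(mean))      # True

try:
    np.max(np.abs(empty))
except ValueError as err:
    print(err)             # e.g. "zero-size array to reduction operation maximum ..."
```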

Code:

with open(pickle_file, 'rb') as f:
    data_tensor = pickle.load(f)
...
# normalize_tensor shifts an n-dimensional np.array to zero mean and scales it by its maximum absolute value
def normalize_tensor(data_tensor):
    data_tensor -= np.mean(data_tensor)
    data_tensor /= np.max(np.abs(data_tensor))             # line 92
    return data_tensor
...
def create_train_and_test_data(fold, IDs, subjects, labels, data_tensor):
    #create one-hot encoding of labels
    num_labels = len(np.unique(labels))
    labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)

    #identify the IDs of test subjects
    testIDs = np.in1d(subjects, IDs[:,fold])

    test_data = normalize_tensor(data_tensor[testIDs,:,:,:]).astype(np.float32)   # line 142
    test_labels = labels[testIDs]

    train_data = normalize_tensor(data_tensor[~testIDs,:,:,:]).astype(np.float32)
    train_labels = labels[~testIDs]
    train_data, train_labels = randomize_tensor(train_data, train_labels)

    return train_data, train_labels, test_data, test_labels
...
# Creating train and test data for the given fold
    train_data, train_labels, test_data, test_labels = create_train_and_test_data(i, IDs, subjectIDs, labels, data_tensor)            # line 205
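If an empty slice turns out to be the cause, one possible guard (a sketch only; `normalize_tensor_safe` is a hypothetical name, not part of the original script) would make the failure explicit instead of surfacing as opaque NumPy warnings:

```python
import numpy as np

def normalize_tensor_safe(data_tensor):
    # Hypothetical guard: fail fast with a clear message when the boolean
    # mask selected no subjects, instead of letting np.mean/np.max warn and raise.
    if data_tensor.size == 0:
        raise ValueError("normalize_tensor received an empty array - "
                         "check that the fold's subject IDs occur in `subjects`")
    data_tensor = data_tensor - np.mean(data_tensor)
    data_tensor /= np.max(np.abs(data_tensor))
    return data_tensor
```

Printing `testIDs.sum()` before the call would likewise show whether the current fold selected any subjects at all.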

0 Answers:

There are no answers