Is array[:, 0:-1] the same as array[:][0:-1]?

Date: 2019-07-25 09:11:18

Tags: python arrays dataframe

I am following an LSTM tutorial.

To split the training data into inputs (X) and outputs (y), I have to run the following line:

X, y = train[:, 0: -1], train[:, -1]

Unfortunately, it does not work, and printing train[:, 0: -1] produces the following error:

> TypeError: '(slice(None, None, None), slice(0, -1, None))' is an invalid key

I tried replacing that line with this one:

X, y = train[:][0: -1], train[:][-1]

However, I am fairly sure it does not produce the same output, since (in my case) that would give an illogical set of inputs and a single output.
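For reference, here is a minimal sketch (my own illustration, on a small made-up array) of why the two expressions differ, and why the 2-D slice only works on a NumPy array, not on a DataFrame:

```python
import numpy as np
import pandas as pd

arr = np.arange(12).reshape(4, 3)

# 2-D slicing: every row, all columns except the last
print(arr[:, 0:-1].shape)    # (4, 2)

# chained indexing: arr[:] is the whole array, then [0:-1] drops the last ROW
print(arr[:][0:-1].shape)    # (3, 3)

# a DataFrame rejects the 2-D key (hence the error in the question);
# the positional equivalent is .iloc
df = pd.DataFrame(arr, columns=['a', 'b', 'c'])
print(df.iloc[:, 0:-1].shape)    # (4, 2)
```

So the two expressions select entirely different things: one drops the last column, the other drops the last row.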

Here is a minimal reproducible example:

from datetime import datetime
from pandas import DataFrame
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
import numpy

O = [0.701733664614, 0.699495411782, 0.572129320819, 0.613315597684, 0.58079660603, 0.596638918579, 0.48453382119]
Ab = [datetime(2018, 12, 11, 14, 0), datetime(2018, 12, 21, 10, 0), datetime(2018, 12, 21, 14, 0), datetime(2019, 1, 1, 10, 0), datetime(2019, 1, 1, 14, 0), datetime(2019, 1, 11, 10, 0), datetime(2019, 1, 11, 14, 0)]

data = DataFrame(numpy.column_stack([O, Ab]),
                 columns=['ndvi', 'datetime'])

def fit_lstm(train, batch_size, nb_epoch, neurons):
    X, y = train[:, 0: -1], train[:, -1]
    X = X.values.reshape(X.shape[0], 1, X.shape[1])
    model = Sequential()
    model.add(LSTM(neurons, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    for i in range(nb_epoch):
        model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
        model.reset_states()
    return model

train, test = data.values[0:-2], data.values[-2:]

print (train[:, 0:-1])

What I ultimately want to do is fit the LSTM model:

lstm_model = fit_lstm(train, 1, 3000, 4)

Maybe, in this case, I have to use shift() so that the previous time step is the input and the current time step is the output? Like this:

shift_steps = 1
train_targets = train.shift(-shift_steps)
X, y = train, train_targets
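As a sketch of what shift() does here (my own illustration, with made-up values), the target at row i becomes the observation at row i+1, and the last row must be dropped because it no longer has a target:

```python
import pandas as pd

series = pd.DataFrame({'ndvi': [0.70, 0.69, 0.57, 0.61, 0.58]})

shift_steps = 1
targets = series.shift(-shift_steps)   # next-step value becomes the target

# the last row has no target (NaN after the shift), so drop it from both
X = series.values[:-shift_steps]
y = targets.values[:-shift_steps]

# each y[i] is the observation one step after X[i]
print(X.ravel())
print(y.ravel())
```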

2 answers:

Answer 0 (score: 1)

You are working with a DataFrame, and slicing works differently there.

Assuming the 'ndvi' column holds the features and the datetime column is the expected result for each training sample, you need to refer to them like this:

X = train['ndvi']
y = train['datetime']

I only see two columns in your data.
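A minimal sketch of that column-based selection (my own illustration; note that in the question train was built from data.values, i.e. a NumPy array, so the selection by name has to happen on the DataFrame itself):

```python
from datetime import datetime
import pandas as pd

data = pd.DataFrame({
    'ndvi': [0.70, 0.69],
    'datetime': [datetime(2018, 12, 11, 14, 0), datetime(2018, 12, 21, 10, 0)],
})

X = data['ndvi']       # feature column, selected by name
y = data['datetime']   # target column, as this answer assumes
```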

Answer 1 (score: 0)

I think I found my mistake: the syntax was not compatible with my dataset.

As I mentioned in the last edit of the question, I used shift. It works like this:


def fit_lstm(train, batch_size, nb_epoch, neurons):
    X, y = train, train_targets
    # train is already a NumPy array (it comes from data.values), so reshape it directly
    X = X.reshape(X.shape[0], 1, X.shape[1])
    model = Sequential()
    model.add(LSTM(neurons, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    for i in range(nb_epoch):
        model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
        model.reset_states()
    return model

train, test = data.values[0:-2], data.values[-2:]
shift_steps = 1
train_targets = data.shift(-shift_steps).values[0:-2]


print (train)
[0.70173366 0.69949541 0.57212932 0.6133156  0.58079661]
print (train_targets)
[0.69949541 0.57212932 0.6133156  0.58079661 0.59663892]

Many thanks to @00 and @Sebas for the help :)