Actually, I am building a Keras model, and I have a dataset in msg format with more than 10 million instances and 40 features, all of which are categorical. At the moment I am only using a sample of it, because reading the whole dataset and encoding it cannot fit in memory. Here is part of the code I am using:
import pandas as pd
from category_encoders import BinaryEncoder as be
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

def model():
    model = Sequential()
    model.add(Dense(120, input_dim=233, kernel_initializer='uniform', activation='selu'))
    model.add(Dense(12, kernel_initializer='uniform', activation='sigmoid'))
    model.compile(SGD(lr=0.008), loss='mean_squared_error', metrics=['accuracy'])
    return model

def addrDataLoading():
    data = pd.read_msgpack('datum.msg')
    data = data.dropna(subset=['s_address', 'd_address'])
    data = data.sample(300000)  # taking a sample of the whole dataset to make the encoding fit in memory
    y = data[['s_address', 'd_address']]
    x = data.drop(['s_address', 'd_address'], axis=1)
    encX = be().fit(x, y)
    numeric_X = encX.transform(x)
    encY = be().fit(y, y)
    numeric_Y = encY.transform(y)
    scaler = StandardScaler()
    X_all = scaler.fit_transform(numeric_X)
    x_train = X_all[0:250000, :]
    y_train = numeric_Y.iloc[0:250000, :]
    x_val = X_all[250000:, :]
    y_val = numeric_Y.iloc[250000:, :]
    return x_train, y_train, x_val, y_val

x_train, y_train, x_val, y_val = addrDataLoading()

model = model()  # instantiate the model before fitting
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=20, batch_size=200)
So my question is: how can I use a custom data generator function to read and process all the data I have, not just a sample, and then train the model with the fit_generator() function?
EDIT
Here is a sample of the data: netData
I think that taking different samples from the data leads to different encoding dimensions.
For this sample there are 16 different categories: 4 addresses (3 bits), 4 hostnames (3 bits), 1 subnetmask (1 bit), 5 infrastructures (3 bits), 1 accesszone (1 bit), so binary encoding gives us 11 bits and the new dimension of the data is 11 instead of the previous 5. Now suppose that in another sample the address column has 8 different categories; that gives 4 bits in binary, and assuming the same number of categories in the other columns, the overall encoding produces 12 dimensions. I believe this is what causes the problem.
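To illustrate the point, here is a small hedged sketch (the toy column name cat and its category values are made up) showing that the number of columns BinaryEncoder produces depends on how many distinct categories it saw during fit; the exact counts may differ slightly between category_encoders versions:

import pandas as pd
from category_encoders import BinaryEncoder

# hypothetical toy data: one categorical column, sampled twice with
# a different number of distinct categories
small_sample = pd.DataFrame({'cat': list('abcd') * 10})      # 4 categories
large_sample = pd.DataFrame({'cat': list('abcdefgh') * 10})  # 8 categories

enc_small = BinaryEncoder().fit(small_sample)
enc_large = BinaryEncoder().fit(large_sample)

# the encoded width differs, so a model built for one sample's
# dimensionality breaks on another sample
print(enc_small.transform(small_sample).shape[1])  # typically 3 bits for 4 categories
print(enc_large.transform(large_sample).shape[1])  # typically 4 bits for 8 categories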
Answer 0 (score: 0)
Drop the NAs first and work with the clean data from then on, to avoid re-filtering the dataframe every time.
data = pd.read_msgpack('datum.msg')
data.dropna(subset=['s_address','d_address']).to_msgpack('datum_clean.msg')
In this solution, data_generator may process the same data several times. If that is not critical, you can use this solution.
Define a function that reads the data and splits the indices into train and test sets. It won't consume much memory.
import pandas as pd
from category_encoders import BinaryEncoder as be
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import numpy as np

def model():
    ...  # some code defining the model

def train_test_index_split():
    # if there's enough memory to add one more column
    data = pd.read_msgpack('datum_clean.msg')
    train_idx, test_idx = train_test_split(data.index)
    return data, train_idx, test_idx

data, train_idx, test_idx = train_test_index_split()
Define and initialise the data generator, for both training and validation:
def data_generator(data, encX, encY, batch_size, n_steps, index):
    # EDIT: as the data was already cleaned, dropna is no longer needed here
    # data = data.dropna(subset=['s_address', 'd_address'])
    for i in range(n_steps):
        batch_idx = np.random.choice(index, batch_size)  # sample a random batch of row indices
        sample = data.loc[batch_idx]
        y = sample[['s_address', 'd_address']]
        x = sample.drop(['s_address', 'd_address'], axis=1)
        numeric_X = encX.transform(x)  # encode with the pre-fitted encoders
        numeric_Y = encY.transform(y)
        scaler = StandardScaler()
        X_all = scaler.fit_transform(numeric_X)  # note: the scaler is re-fitted on every batch
        yield X_all, numeric_Y
EDITED part: now the binary encoders are trained up front. You should subsample the data to create a representative training set for the encoders. I suppose the error about the data shape (Error when checking input: expected dense_9_input to have shape (233,) but got array with shape (234,)) was caused by an untrained BinaryEncoder:
def get_minimal_unique_frame(df):
    return (pd.Series([df[column].unique() for column in df], index=df.columns)
            .apply(pd.Series)        # transform each list of unique values into a pd.Series
            .T                       # transpose the frame so columns are columns again
            .fillna(method='ffill')) # fill NaNs with the last value

x = get_minimal_unique_frame(data.drop(['s_address', 'd_address'], axis=1))
y = get_minimal_unique_frame(data[['s_address', 'd_address']])
NB: I have never used category_encoders, and my system configuration is incompatible, so I can't install and check it. So the code above may cause problems. In that case, I guess you should compare the lengths of the x and y dataframes, make them equal, and possibly change the dataframes' indices.
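One hedged way to do that (the reindex/forward-fill padding below is my assumption, not something the original answer specifies) could look like this:

# pad the shorter frame so x and y have the same number of rows
n_rows = max(len(x), len(y))
x = x.reindex(range(n_rows)).fillna(method='ffill')
y = y.reindex(range(n_rows)).fillna(method='ffill')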
encX = be().fit(x, y)
encY = be().fit(y, y)
batch_size = 200
train_steps = 100000
val_steps = 5000
train_gen = data_generator(data, encX, encY, batch_size, train_steps, train_idx)
test_gen = data_generator(data, encX, encY, batch_size, val_steps, test_idx)
EDIT: to provide an example of x_sample, run train_gen and save the output, then post the x_samples and y_samples:
x_samples = []
y_samples = []
for i in range(10):
    x_sample, y_sample = next(train_gen)
    x_samples.append(x_sample)
    y_samples.append(y_sample)
Note: the data generator does not stop by itself, but it will be stopped after train_steps steps by the fit_generator method.
Fit the model with the generators:
model.fit_generator(generator=train_gen, steps_per_epoch=train_steps, epochs=1,
                    validation_data=test_gen, validation_steps=val_steps)
As far as I know, python does not copy pandas dataframes unless you do it explicitly with copy(). So both generators use the same object. But if you use a Jupyter Notebook, data leaks / uncollected garbage may occur, and memory problems come with that.
Clean your data:
data = pd.read_msgpack('datum.msg')
data.dropna(subset=['s_address','d_address']).to_msgpack('datum_clean.msg')
If there is enough disk space, create the train/test split, preprocess it, and store it as numpy arrays:
data, train_idx, test_idx = train_test_index_split()
def data_preprocessor(data, path, index):
    # data = data.dropna(subset=['s_address', 'd_address'])  # already cleaned
    sample = data.loc[index]
    y = sample[['s_address', 'd_address']]
    x = sample.drop(['s_address', 'd_address'], axis=1)
    encX = be().fit(x, y)
    numeric_X = encX.transform(x)
    encY = be().fit(y, y)
    numeric_Y = encY.transform(y)
    scaler = StandardScaler()
    X_all = scaler.fit_transform(numeric_X)
    np.save(path + '_X', X_all)
    np.save(path + '_y', numeric_Y)

data_preprocessor(data, 'train', train_idx)
data_preprocessor(data, 'test', test_idx)
Delete the data you no longer need:
del data
Load the files and use the following generator:
train_X = np.load('train_X.npy')
train_y = np.load('train_y.npy')
test_X = np.load('test_X.npy')
test_y = np.load('test_y.npy')
def data_generator(X, y, batch_size, n_steps):
    idxs = np.arange(len(X))
    np.random.shuffle(idxs)  # shuffle once, then walk through the array in batches
    ptr = 0
    for _ in range(n_steps):
        batch_idx = idxs[ptr:ptr + batch_size]
        x_sample = X[batch_idx]
        y_sample = y[batch_idx]
        ptr += batch_size
        if ptr >= len(X):    # wrap around when the end of the data is reached
            ptr = 0
        yield x_sample, y_sample
Prepare the generators:
train_gen = data_generator(train_X, train_y, batch_size, train_steps)
test_gen = data_generator(test_X, test_y, batch_size, val_steps)
And finally fit the model (see the sketch below). Hope one of these solutions helps. At least, python does pass arrays and dataframes by reference, not by value. Stackoverflow answer about it.
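For completeness, a minimal sketch of that final fit for the second solution, assuming batch_size, train_steps and val_steps keep the values defined in the first solution:

# instantiate the Keras model and train it from the precomputed-array generators
model = model()
model.fit_generator(generator=train_gen, steps_per_epoch=train_steps, epochs=1,
                    validation_data=test_gen, validation_steps=val_steps)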