Keras accuracy won't improve beyond 59%

Time: 2020-08-24 15:26:18

Tags: python tensorflow keras

Here is the code I tried:

# assumed imports (not shown in the original snippet)
import pandas as pd
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras import layers

# normalizing the train data (per-column min-max scaling)
cols_to_norm = ["WORK_EDUCATION", "SHOP", "OTHER",'AM','PM','MIDDAY','NIGHT', 'AVG_VEH_CNT', 'work_traveltime', 'shop_traveltime','work_tripmile','shop_tripmile', 'TRPMILES_sum',
                'TRVL_MIN_sum', 'TRPMILES_mean', 'HBO', 'HBSHOP', 'HBW', 'NHB', 'DWELTIME_mean','TRVL_MIN_mean', 'work_dweltime', 'shop_dweltime', 'firsttrip_time', 'lasttrip_time']
dataframe[cols_to_norm] = dataframe[cols_to_norm].apply(lambda x: (x - x.min()) / (x.max() - x.min()))
# labels
y = dataframe.R_SEX.values
# X is assumed to be the feature DataFrame (its definition is not shown in the question)

# splitting train and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

model = Sequential()
model.add(Dense(256, input_shape=(X_train.shape[1],), activation='relu'))
model.add(Dense(256, activation='relu'))
model.add(layers.Dropout(0.3))
model.add(Dense(256, activation='relu'))
model.add(layers.Dropout(0.3))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam' , metrics=['acc'])
print(model.summary())

model.fit(X_train, y_train , batch_size=128, epochs=30, validation_split=0.2)

Epoch 23/30
1014/1014 [==============================] - 4s 4ms/step - loss: 0.6623 - acc: 0.5985 - val_loss: 0.6677 - val_acc: 0.5918
Epoch 24/30
1014/1014 [==============================] - 4s 4ms/step - loss: 0.6618 - acc: 0.5993 - val_loss: 0.6671 - val_acc: 0.5925
Epoch 25/30
1014/1014 [==============================] - 4s 4ms/step - loss: 0.6618 - acc: 0.5997 - val_loss: 0.6674 - val_acc: 0.5904
Epoch 26/30
1014/1014 [==============================] - 4s 4ms/step - loss: 0.6614 - acc: 0.6001 - val_loss: 0.6669 - val_acc: 0.5911
Epoch 27/30
1014/1014 [==============================] - 4s 4ms/step - loss: 0.6608 - acc: 0.6004 - val_loss: 0.6668 - val_acc: 0.5920
Epoch 28/30
1014/1014 [==============================] - 4s 4ms/step - loss: 0.6605 - acc: 0.6002 - val_loss: 0.6679 - val_acc: 0.5895
Epoch 29/30
1014/1014 [==============================] - 4s 4ms/step - loss: 0.6602 - acc: 0.6009 - val_loss: 0.6663 - val_acc: 0.5932
Epoch 30/30
1014/1014 [==============================] - 4s 4ms/step - loss: 0.6597 - acc: 0.6027 - val_loss: 0.6674 - val_acc: 0.5910
<tensorflow.python.keras.callbacks.History at 0x7fdd8143a278>

I have tried modifying the neural network and double-checking the data.

Is there anything I can do to improve the results? Is the model not deep enough? Is there a different model better suited to my data? Does this mean the features have no predictive value? I'm confused about what to do next.

Thanks

Update

I tried adding a new column to the dataframe holding the output of a KNN model for sex classification. Here is what I did:

#Import knearest neighbors Classifier model
from sklearn.neighbors import KNeighborsClassifier

#Create KNN Classifier
knn = KNeighborsClassifier(n_neighbors=41)

#Train the model using the training sets
knn.fit(X, y)

#predict sex for the train set so that it can be fed to the neural net
y_pred = knn.predict(X)

#add the outcome of knn to the train set
X = X.assign(KNN_result=y_pred)

It raised the training and validation accuracy to about 61%.

Epoch 26/30
1294/1294 [==============================] - 8s 6ms/step - loss: 0.6525 - acc: 0.6166 - val_loss: 0.6604 - val_acc: 0.6095
Epoch 27/30
1294/1294 [==============================] - 8s 6ms/step - loss: 0.6523 - acc: 0.6173 - val_loss: 0.6596 - val_acc: 0.6111
Epoch 28/30
1294/1294 [==============================] - 8s 6ms/step - loss: 0.6519 - acc: 0.6177 - val_loss: 0.6614 - val_acc: 0.6101
Epoch 29/30
1294/1294 [==============================] - 8s 6ms/step - loss: 0.6512 - acc: 0.6178 - val_loss: 0.6594 - val_acc: 0.6131
Epoch 30/30
1294/1294 [==============================] - 8s 6ms/step - loss: 0.6510 - acc: 0.6183 - val_loss: 0.6603 - val_acc: 0.6103
<tensorflow.python.keras.callbacks.History at 0x7fe981bbe438>
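Note that in the update above the KNN is fit and predicted on the full dataset, so the new column carries label information into the test rows. A leakage-free variant would fit the KNN on the training split only and then predict for both splits. A minimal sketch, using `make_classification` as a synthetic stand-in for the real dataframe:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# synthetic stand-in for the real feature matrix / labels
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

# fit KNN on the training split only, then predict for both splits
knn = KNeighborsClassifier(n_neighbors=41)
knn.fit(X_train, y_train)

# append the KNN prediction as an extra feature column
X_train_aug = np.column_stack([X_train, knn.predict(X_train)])
X_test_aug = np.column_stack([X_test, knn.predict(X_test)])

print(X_train_aug.shape[1])  # one extra column compared to X_train
```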

Thanks

4 answers:

Answer 0 (score: 2)

It seems to me that your data does not vary enough for a neural network. There are many similar values in your dataset, which may be the reason for the low accuracy. Try a simple regressor instead of a neural network.

If you want to use a neural network anyway, you should change the following:

Generally, for regression you should set the activation function of the last layer to 'relu' or 'linear'; sigmoid is usually used for the hidden layers.

Try changing these first. If that does not work, try these other strategies:

  1. Increase the batch size
  2. Increase the number of epochs
  3. Apply whitening to the dataset before training (in the preprocessing stage)
  4. Lower the learning rate; you should use a scheduler
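The scheduler in point 4 can be as simple as a step-decay function. A minimal sketch (the values here are illustrative; the Keras wiring, shown in comments, assumes tf.keras is available):

```python
def step_decay(epoch, initial_lr=1e-3, drop=0.5, epochs_per_drop=10):
    """Halve the learning rate every `epochs_per_drop` epochs."""
    return initial_lr * (drop ** (epoch // epochs_per_drop))

# With Keras this would be wired up roughly as:
#   from tensorflow.keras.callbacks import LearningRateScheduler
#   model.fit(X_train, y_train, epochs=30,
#             callbacks=[LearningRateScheduler(step_decay)])

print(step_decay(0), step_decay(10), step_decay(25))  # 0.001 0.0005 0.00025
```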

For whitening, you can do:

from sklearn.decomposition import PCA

pca = PCA(whiten=True)
pca.fit(X)
X = pca.transform(X)

# make here train test split ...

X_test = pca.transform(X_test) # use the same pca model for the test set.
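Note that the snippet above fits the PCA on the full X before splitting, which leaks test-set statistics into the transform. A leakage-free variant (sketch with random stand-in data) fits on the training split only:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # stand-in for the real feature matrix

X_train, X_test = train_test_split(X, test_size=0.33, random_state=42)

pca = PCA(whiten=True)
pca.fit(X_train)                   # fit on the training split only
X_train_w = pca.transform(X_train)
X_test_w = pca.transform(X_test)   # same fitted model for the test set

# whitened training components have unit (sample) variance
print(np.round(X_train_w.std(axis=0, ddof=1), 3))
```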

There are many zeros in your dataset. Here you can see the fraction of zero values per column (between 0 and 1):

0.6611697598907094 WORK_EDUCATION
0.5906196483663051 SHOP
0.15968546556987515 OTHER
0.4517919980835284 AM
0.3695455825652879 PM
0.449195697003247 MIDDAY
0.8160996565242585 NIGHT
0.03156998520561604 AVG_VEH_CNT
1.618641571247746e-05 work_traveltime
2.2660981997468445e-05 shop_traveltime
0.6930343378622924 work_tripmile
0.605410795044367 shop_tripmile
0.185622578107549 TRPMILES_sum
3.237283142495492e-06 TRVL_MIN_sum
0.185622578107549 TRPMILES_mean
0.469645614614391 HBO
0.5744850291841075 HBSHOP
0.8137429143965219 HBW
0.5307266729469959 NHB
0.2017960446874565 DWELTIME_mean
1.618641571247746e-05 TRVL_MIN_mean
0.6959996892208183 work_dweltime
0.6099365168775757 shop_dweltime
0.0009258629787537107 firsttrip_time
0.002949164942813393 lasttrip_time
0.7442934791405661 age_2.0
0.7541995655566023 age_3.0
0.7081200773063214 age_4.0
0.9401296855626884 age_5.0
0.3490503429901489 KNN_result
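The per-column fractions above can be computed in one line with pandas (sketch with a toy dataframe):

```python
import pandas as pd

df = pd.DataFrame({'a': [0, 0, 1, 2], 'b': [0, 5, 6, 7]})

# fraction of zero values in each column
zero_frac = (df == 0).mean()
print(zero_frac)  # a -> 0.5, b -> 0.25
```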

Answer 1 (score: 2)

In short: a NN is rarely the best model for classifying small amounts of data, or data that is already compactly represented by a few non-heterogeneous columns. Often, with similar effort, boosting methods or a GLM produce better results.

What can you do with your model? Counterintuitively, restricting network capacity can sometimes be beneficial, especially when the number of network parameters exceeds the number of training points. You can reduce the number of neurons, e.g. setting layer sizes to around 16 in your case, while also removing layers; introduce regularization (label smoothing, weight decay, etc.); or generate more data by adding more derived columns at different (log, binary) scales.
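The label smoothing mentioned above can be sketched for binary targets as follows (a numpy sketch; tf.keras also supports this directly via the `label_smoothing` argument of `BinaryCrossentropy`):

```python
import numpy as np

def smooth_labels(y, eps=0.1):
    """Map hard 0/1 labels to eps/2 and 1 - eps/2."""
    y = np.asarray(y, dtype=float)
    return y * (1.0 - eps) + eps / 2.0

y = np.array([0, 1, 1, 0])
print(smooth_labels(y))  # [0.05 0.95 0.95 0.05]
```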

Another approach is to search for NN models designed for your type of data, e.g. Self-Normalizing Neural Networks or Wide & Deep Learning for Recommender Systems.

If you try only one thing, I recommend a grid search over the learning rate, or trying a few different optimizers.

How can you make a better decision about which model to use? Look through finished kaggle.com competitions, find datasets similar to the one at hand, and check the techniques used by the top-ranked entries.

Answer 2 (score: 1)

Reduce the batch size and transform the dependent variable (Y) with pd.get_dummies.

Use this in the last layer:

model.add(Dense(2, activation='sigmoid'))

Also, depending on the size of your data, you can reduce the number of neurons from 256 to 128.

Reduce the dropout rate to 0.2:

model = Sequential()
model.add(Dense(128, input_shape=(X_train.shape[1],), activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(layers.Dropout(0.2))
model.add(Dense(128, activation='relu'))
model.add(layers.Dropout(0.2))
model.add(Dense(2, activation='sigmoid'))
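With a 2-unit output layer as above, the labels need one column per class, which is what the pd.get_dummies suggestion accomplishes. A sketch (the compile call in the comments is an assumption; with two output units the loss would typically be categorical_crossentropy):

```python
import pandas as pd

y = pd.Series([0, 1, 1, 0], name='R_SEX')

# one column per class, as expected by a Dense(2, ...) output layer
y_onehot = pd.get_dummies(y)
print(y_onehot.shape)  # (4, 2)

# corresponding compile/fit calls would look roughly like:
#   model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
#   model.fit(X_train, y_onehot_train, batch_size=64, epochs=30)
```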

Answer 3 (score: 1)

OK, I tested another algorithm, CatBoost, and got accuracy: 0.9295405324035856

Here it is:

Imports, constants, and functions:

from catboost import CatBoostRegressor, Pool
# assumed imports (used below but missing from the original snippet)
from sklearn.model_selection import KFold
import numpy as np
import pandas as pd

PARAMS_CATBOOST_REGRESSOR = dict()
PARAMS_CATBOOST_REGRESSOR['learning_rate'] = 0.1
PARAMS_CATBOOST_REGRESSOR['use_best_model'] = True
PARAMS_CATBOOST_REGRESSOR['logging_level'] = 'Silent'
PARAMS_CATBOOST_REGRESSOR['l2_leaf_reg'] = 1.0 # lambda, default 3, S: 300

SPLITS=5

def get_prediction(X, Y, X_predict):
    kf = KFold(n_splits=SPLITS, shuffle=True)

    count = 0
    cat_features = []
    y_test_predict = np.zeros((X_predict.shape[0]))
    oo = np.zeros((X.shape[0]))

    clf = CatBoostRegressor(**PARAMS_CATBOOST_REGRESSOR)

    for train_index, test_index in kf.split(X, Y):
        count = count + 1
        print("Split " + str(count) + " ... ")

        X_train, X_test = X.iloc[train_index], X.iloc[test_index]
        y_train, y_test = Y.iloc[train_index], Y.iloc[test_index]

        train_dataset = Pool(data=X_train,
                             label=y_train,
                             cat_features=cat_features)

        eval_dataset = Pool(data=X_test,
                            label=y_test,
                            cat_features=cat_features)

        clf.fit(train_dataset,
                use_best_model=True,
                eval_set=eval_dataset)

        print("Count of trees in model = {}".format(clf.tree_count_))

        # out-of-fold predictions for this split's held-out rows
        oo[test_index] = clf.predict(X_test)
        y_test_predict += clf.predict(X_predict)

    y_test_predict = y_test_predict / float(SPLITS)
    return (oo, y_test_predict)

Preprocessing:

df = pd.read_csv('df1.csv')
y = df['R_SEX'].values

# Add a column with number of zeros per row
df['c_zero'] = (df == 0).astype(int).sum(axis=1) 

# Delete columns with more than 70% zeros
l1 = ['R_SEX', 'NIGHT', 'age_2.0', 'age_3.0', 'age_4.0','age_5.0', 'HBW', 'work_tripmile',  'work_dweltime', 'WORK_EDUCATION' ]
x_cols = [c for c in df.columns if c not in l1]

X = df[x_cols].values
sh = df.shape[0]

# Print the fraction of zero values per column (a report, not a normalization step):
for c in df[x_cols]:
  dd = df.loc[df[c]==0]
  print(dd.shape[0]/df.shape[0], c)

Run:

df.reset_index(drop=True, inplace=True)
YY = df['R_SEX']
XX = df[x_cols]
(oo, ypred) = get_prediction(XX, YY, XX)

Computing the score:

df = pd.DataFrame(oo, columns=['pred'])
df['y'] = YY
df['v'] = df.apply(lambda row: 1 if row['pred'] >= 0.5 else 0, axis=1)
df['diff'] = df['v'] - df['y']
pp = df[df['diff'] == 0].shape[0]  # correctly classified rows
pdf = df.shape[0]                  # total rows

print(pp / pdf)
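The same score can be computed more directly with sklearn (sketch with made-up arrays standing in for oo and YY):

```python
import numpy as np
from sklearn.metrics import accuracy_score

oo = np.array([0.1, 0.8, 0.6, 0.3])  # continuous regressor outputs
YY = np.array([0, 1, 0, 0])          # true labels

# threshold at 0.5, then compare with the truth
score = accuracy_score(YY, (oo >= 0.5).astype(int))
print(score)  # 0.75
```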

Result:

Score = 0.9295405324035856