I'm new to machine learning, and I'm trying to train a model that can detect the city Prague in a sentence. It can appear in many word forms:
Prague, PRAHA, Z Prahy, etc...
So I have a training dataset consisting of title and result, where result is binary, 1 or 0 (about 5000 examples).
You can see examples of it in the code comments.
My code is shown below. Training prints the following:
Epoch 15/20
- 0s - loss: 0.0303 - acc: 0.9924
Epoch 16/20
- 0s - loss: 0.0304 - acc: 0.9922
Epoch 17/20
- 0s - loss: 0.0648 - acc: 0.9779
Epoch 18/20
- 0s - loss: 0.0589 - acc: 0.9816
Epoch 19/20
- 0s - loss: 0.0494 - acc: 0.9844
Epoch 20/20
But testing returns the following values:
[0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0]
This means it detected the word Prague in only two sentences from the test csv: the first sentence is a substring of a sentence in X_train, and the second is identical to a sentence in X_train.
I tried increasing the number of epochs and the batch_size, with no success...
The other test sentences were created randomly or by modifying the X_test sentences.
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.preprocessing.text import Tokenizer
from keras.preprocessing import sequence

def train():
    # load train dataset, e.g.:
    # "TIP! Ukraine Airlines - Thajsko - levné letenky Bangkok z Prahy (a zpět) 9.790,- kč",1
    # Predvianočná MALAGA s odletom z Viedne už za 18€,0
    # S 5* Singapore Airlines z Prahy do Singapuru a pak na Maledivy za 15.940 Kč,1
    # Athény z Katowic či Blavy,0
    # Z Prahy na kanárský ostrov Tenerife vč. zavazadla. Letenky od 1 990 Kč,1
    # Hotel v Praze i na víkend za 172Kč! (i jednolůžkové pokoje),1
    dataframe = pandas.read_csv("prague_train_set.csv")
    dataframe['title'] = dataframe['title'].str.lower()
    dataset = dataframe.values

    # load test dataset, e.g.:
    # v Praze je super # Should be 1, predicts 0
    # Silvestr v Dublinu z Prahy # Should be 1, predicts 1
    # do Prahy zavita peter # Should be 1, predicts 0
    # toto nie # Should be 0, predicts 0
    # xxx # Should be 0, predicts 0
    # Praha **** # Should be 1, predicts 0
    # z Prahy Přímo # Should be 1, predicts 0
    # Tip na dárek: Řím z Prahy za 778Kč (letfdenky tam i zpět) # Should be 1, predicts 0
    # lety do BRUSELU z PRAHY od 518 K # Should be 1, predicts 0
    # Přímé lety do BRUSELU z PRAHY od 518 Kč # Should be 1, predicts 1
    # Gelachovský stit # Should be 0, predicts 0
    tdataframe = pandas.read_csv("prague_test_set.csv")
    tdataframe['title'] = tdataframe['title'].str.lower()
    tdataset = tdataframe.values

    # Preprocess dataset
    X_train = dataset[:,0]
    X_test = tdataset[:,0]
    y_train = dataset[:,1]

    # Character-level tokenization: each character maps to an integer id,
    # then every title is zero-padded to a fixed length of 200
    tokenizer = Tokenizer(char_level=True)
    tokenizer.fit_on_texts(X_train)
    X_train = tokenizer.texts_to_sequences(X_train)
    SEQ_MAX_LEN = 200
    X_train = sequence.pad_sequences(X_train, maxlen=SEQ_MAX_LEN)
    X_test = tokenizer.texts_to_sequences(X_test)
    X_test = sequence.pad_sequences(X_test, maxlen=SEQ_MAX_LEN)

    # create model
    model = Sequential()
    # model.add(Embedding(len(tokenizer.word_index), 32, input_length=100))
    model.add(Dense(SEQ_MAX_LEN, input_dim=SEQ_MAX_LEN, kernel_initializer='uniform', activation='relu'))
    model.add(Dense(10, kernel_initializer='uniform', activation='relu'))
    model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))

    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

    # Fit the model
    model.fit(X_train, y_train, epochs=20, batch_size=32, verbose=2)
    # model.save("trainmodel.h5")
    # model = load_model("trainmodel.h5")

    # calculate predictions
    predictions = model.predict(X_test)

    # round predictions to 0/1 and print them
    rounded = [round(x[0]) for x in predictions]
    print(rounded)
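For reference, here is a minimal standalone sketch of what the char-level preprocessing above produces (the sample titles are made up):

from keras.preprocessing.text import Tokenizer
from keras.preprocessing import sequence

# Each character (including spaces) gets its own integer id;
# Tokenizer lowercases the text by default.
tokenizer = Tokenizer(char_level=True)
tokenizer.fit_on_texts(["z Prahy do Singapuru"])
seqs = tokenizer.texts_to_sequences(["z Prahy"])
print(seqs)    # one integer id per character
padded = sequence.pad_sequences(seqs, maxlen=20)
print(padded)  # the same ids, left-padded with zeros to length 20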
Do you know what I should do to make this work?
Answer 0 (score: 0)
There are two possible problems:
1. Data skew
2. Overfitting
Data skew: the data in your dataset may be skewed. For example, if only 1% of the examples are positive, then a trivial algorithm that always predicts 0 will reach 99% accuracy. In that case you need to quantify "goodness" with metrics other than accuracy, such as precision, recall, or the F1 score.
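A minimal sketch with scikit-learn (the library choice and the toy labels are my own illustration, not part of the question's code):

from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 0, 1, 1, 0, 0, 0, 0, 1]   # ground-truth labels (toy data)
y_pred = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # rounded model predictions

print("precision:", precision_score(y_true, y_pred))  # 2/2 = 1.0
print("recall:", recall_score(y_true, y_pred))        # 2/4 = 0.5
print("f1:", f1_score(y_true, y_pred))                # ~0.67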
Overfitting: also known as the generalization problem. In theory, the more trainable parameters a model has (the weights and biases of your neural network), the better it can fit the training set without necessarily generalizing to new data. The VC bound gives a theoretical limit on this, and it depends on the number of training examples (m), so you can try adding more training data or regularizing the model, as in the sketch below.
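One concrete way to watch for overfitting in Keras is to hold out a validation set and stop training when validation loss stops improving. A minimal sketch, assuming the model, X_train, and y_train from the question's code (the split ratio and patience values are illustrative choices):

from keras.callbacks import EarlyStopping

# Stop training once val_loss has not improved for 3 consecutive epochs
early_stop = EarlyStopping(monitor='val_loss', patience=3)
model.fit(X_train, y_train,
          epochs=50,
          batch_size=32,
          validation_split=0.2,   # hold out 20% of the training data
          callbacks=[early_stop],
          verbose=2)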