I'm currently working through this example from Machine Learning Mastery:
Here is my code:
import numpy as np
from keras.preprocessing.text import one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers.embeddings import Embedding
# define documents
docs = ['Well done!','Good work','Great effort','nice work','Excellent!','Weak','Poor effort!','not good','poor work', 'Could have done better.']
# define class labels
labels = np.array([1,1,1,1,1,0,0,0,0,0])  # Keras expects a NumPy array here, not a plain list
vocab_size = 50
encoded_docs = [one_hot(d, vocab_size) for d in docs]
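# Note: one_hot() hashes each word to an integer in [1, vocab_size); e.g. one_hot('nice work', 50)
# might return something like [6, 33] — the exact values depend on the hash and can collide.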
# The sequences have different lengths, and Keras prefers vectorized inputs that all share the same length,
# so we pad the documents to a max length of 4 words:
max_length = 4
padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
print(padded_docs)
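# Illustration with assumed integer codes (one_hot output is hash-based, so actual values vary):
# pad_sequences pads short sequences with zeros at the end (padding='post') and truncates longer
# ones from the front by default, e.g. [[5, 12], [7, 3, 9, 2, 8]] -> [[5, 12, 0, 0], [3, 9, 2, 8]].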
#Creating the Embedding Layer
# Define the model
model = Sequential()
model.add(Embedding(vocab_size, 8, input_length=max_length))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
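# Shape check (batch dimension omitted): Embedding maps each padded document of 4 integer indices
# to a 4 x 8 matrix of learned vectors, Flatten reshapes that to a 32-element vector, and the
# single sigmoid unit squashes it to one value between 0 and 1.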
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
# summarize the model
print(model.summary())
# fit the model
model.fit(padded_docs, labels, epochs=50, verbose=0)
# evaluate the model
loss, accuracy = model.evaluate(padded_docs, labels, verbose=0)
print('Accuracy: %f' % (accuracy*100))
Then I created a function to test a sample phrase, to see whether my model returns a 0 or a 1, but the result I get looks like this: [[ 0.55765963]]

Here is the function; I don't understand its output, since I was expecting a 0 or a 1:
sample_string = ['nice work']

def model_builder_predict(sample_string):
    vocab_size = 50
    max_length = 4
    encoded_docs = [one_hot(d, vocab_size) for d in sample_string]
    padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
    model_answer = model.predict(np.array(padded_docs))
    return model_answer

print(model_builder_predict(sample_string))
Any help would be great!