Is it possible to implement an LSTM on the HPO (Human Phenotype Ontology) dataset?

Time: 2019-06-05 11:21:58

Tags: python keras lstm recurrent-neural-network

I am building a project that uses an LSTM to detect phenotype text in input medical reports (an RNN would also work). I have usually worked with plain labeled text datasets, but I have no clear direction on how to use the HPO dataset to train my model. HPO dataset link: https://raw.githubusercontent.com/obophenotype/human-phenotype-ontology/master/hp.obo. Data sample:

HP:0000002  abnormality of body height
HP:0000003  multicystic renal dysplasia
HP:0000003  multicystic kidney dysplasia
HP:0000003  multicystic dysplastic kidney
HP:0000003  multicystic kidneys
HP:0000008  abnormality of female internal genitalia
HP:0000009  functional abnormality of the bladder
HP:0000009  poor bladder function
HP:0000010  urinary tract infections recurrent
HP:0000010  recurrent utis
HP:0000010  recurrent urinary tract infections
HP:0000010  frequent urinary tract infections
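For illustration, one way to turn (HP ID, synonym) pairs like the sample above into training examples is to treat each HP ID as a class, so synonyms of the same term share a label. A minimal sketch, using an inline copy of a few sample rows (in practice the pairs would be parsed out of hp.obo):

```python
# A few rows copied from the sample above (tab-separated: HP ID, synonym text)
rows = [
    "HP:0000002\tabnormality of body height",
    "HP:0000003\tmulticystic renal dysplasia",
    "HP:0000003\tmulticystic kidney dysplasia",
    "HP:0000010\trecurrent urinary tract infections",
]

texts, hp_ids = [], []
for row in rows:
    hp_id, label = row.split("\t", 1)
    texts.append(label)
    hp_ids.append(hp_id)

# Each distinct HP ID becomes one class index; synonyms share the same index
classes = sorted(set(hp_ids))
class_index = {c: i for i, c in enumerate(classes)}
y = [class_index[h] for h in hp_ids]

print(y)  # → [0, 1, 1, 2]
```

The `texts` list can then be fed to the tokenizer and `y` used as the label vector.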

I have done sentiment analysis on tweets before, where open-source labeled datasets were already available. The code below is my very rough attempt at an LSTM on the HPO data.

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense
from sklearn.model_selection import train_test_split

text_words = df1['Text']
max_words = 20000
tokenizer = Tokenizer(num_words=max_words, oov_token='UNK')
tokenizer.fit_on_texts(list(text_words))

# <= because the tokenizer is 1-indexed; index 0 is reserved for padding
tokenizer.word_index = {w: i for w, i in tokenizer.word_index.items() if i <= max_words}
tokenizer.word_index[tokenizer.oov_token] = max_words + 1

tokenized_train = tokenizer.texts_to_sequences(text_words)  # was undefined in the original
maxlen = 100
X = pad_sequences(tokenized_train, maxlen=maxlen)

# y (the label vector) is never defined in the post; it must be built from the HPO data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = Sequential()
# input_dim must cover indices 0..max_words+1 (padding + vocabulary + OOV token)
model.add(Embedding(max_words + 2, output_dim=128, mask_zero=True))
model.add(LSTM(60))
model.add(Dropout(0.1))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

output = model.fit(X_train, y_train, batch_size=128, epochs=5, validation_split=0.2)
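For reference, the preprocessing that `texts_to_sequences` and `pad_sequences` perform can be sketched in plain Python; this also shows why index 0 must stay reserved for padding when `mask_zero=True`. A toy vocabulary built from two of the sample synonyms, for illustration only:

```python
texts = ["recurrent urinary tract infections",
         "poor bladder function"]

# Build a 1-indexed vocabulary, as Keras' Tokenizer does (0 = padding)
vocab = {}
for t in texts:
    for w in t.split():
        vocab.setdefault(w, len(vocab) + 1)

def to_sequence(text):
    return [vocab[w] for w in text.split()]

def pad(seq, maxlen):
    # Left-pad with 0 and truncate from the front, matching
    # pad_sequences' defaults (padding='pre', truncating='pre')
    seq = seq[-maxlen:]
    return [0] * (maxlen - len(seq)) + seq

X = [pad(to_sequence(t), 6) for t in texts]
print(X)  # → [[0, 0, 1, 2, 3, 4], [0, 0, 0, 5, 6, 7]]
```

Because 0 never appears as a word index, the embedding layer's masking can safely treat it as "no input here".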

The accuracy I get is very low. The desired outcome is to understand how to handle this kind of dataset with an LSTM or RNN.
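One likely reason for the low accuracy is the output head: a single sigmoid unit predicts one binary label, while the HPO task looks multi-class (one class per HP ID), where a softmax output with sparse categorical cross-entropy would fit better. A minimal sketch, assuming a hypothetical `n_classes` equal to the number of distinct HP IDs in the training data:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense

max_words = 20000
n_classes = 500  # hypothetical: number of distinct HP IDs in the training set

model = Sequential()
model.add(Embedding(max_words + 2, output_dim=128, mask_zero=True))
model.add(LSTM(60))
model.add(Dropout(0.1))
# one probability per HP ID instead of a single sigmoid unit
model.add(Dense(n_classes, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='rmsprop', metrics=['accuracy'])
```

With this head, `y` would hold integer class indices (0 .. n_classes-1) rather than 0/1 labels.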

0 answers:

No answers