I have been working through the character-level text generation example: https://keras.rstudio.com/articles/examples/lstm_text_generation.html
I am unable to extend this example to a word-level model. See the reprex below:
library(keras)
library(readr)
library(stringr)
library(purrr)
library(tokenizers)
# Parameters
maxlen <- 40
# Data Preparation
# Retrieve text
path <- get_file(
  'nietzsche.txt',
  origin = 'https://s3.amazonaws.com/text-datasets/nietzsche.txt'
)
# Load, collapse, and tokenize text
text <- read_lines(path) %>%
  str_to_lower() %>%
  str_c(collapse = "\n") %>%
  tokenize_words(simplify = TRUE)
print(sprintf("corpus length: %d", length(text)))
words <- text %>%
  unique() %>%
  sort()
print(sprintf("total words: %d", length(words)))
which gives:
[1] "corpus length: 101345"
[1] "total words: 10283"
When I move on to the next step, I run into trouble:
# Cut the text into semi-redundant sequences of maxlen words
dataset <- map(
  seq(1, length(text) - maxlen - 1, by = 3),
  ~list(sentence = text[.x:(.x + maxlen - 1)], next_word = text[.x + maxlen])
)
dataset <- transpose(dataset)
# Vectorization (one-hot encode every word of every sequence)
X <- array(0, dim = c(length(dataset$sentence), maxlen, length(words)))
y <- array(0, dim = c(length(dataset$sentence), length(words)))
for (i in 1:length(dataset$sentence)) {
  X[i,,] <- sapply(words, function(x) {
    as.integer(x == dataset$sentence[[i]])
  })
  y[i,] <- as.integer(words == dataset$next_word[[i]])
}
Error: cannot allocate vector of size 103.5 Gb
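That allocation is exactly what the one-hot layout predicts from the numbers printed above: X alone needs one double per sequence × timestep × vocabulary entry, which a quick check confirms:

n_seq <- length(seq(1, 101345 - 40 - 1, by = 3))  # 33,768 sequences at step 3
n_seq * 40 * 10283 * 8 / 1024^3                   # ~103.5 GiB of doubles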
Now, compared with the character example, my vocabulary contains far more words than that example has characters, which is presumably why I am hitting the vector-size problem. But how should word-level text data be preprocessed to make it suitable for an RNN? Is this done somehow via an embedding layer? Do I need to remove stop words or otherwise prune the vocabulary to bring its size down?
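One approach that avoids the giant one-hot arrays is to keep the inputs as integer word ids and let an embedding layer learn dense word vectors, paired with sparse_categorical_crossentropy so the targets stay as integers too. Below is a minimal sketch of that idea built on the objects above; the embedding size, LSTM units, batch size, and epoch count are placeholder hyperparameters, not values from the original example:

# Map each word to an integer id (1-based; id 0 is left unused)
word_index <- setNames(seq_along(words), words)
text_ids   <- as.integer(word_index[text])

# Same semi-redundant windows, but stored as small integer matrices
starts <- seq(1, length(text_ids) - maxlen - 1, by = 3)
X <- t(sapply(starts, function(s) text_ids[s:(s + maxlen - 1)]))
y <- text_ids[starts + maxlen]
# X is now ~33,768 x 40 integers (a few MB) instead of 103.5 GB of one-hot doubles

model <- keras_model_sequential() %>%
  layer_embedding(input_dim = length(words) + 1,   # +1 because ids are 1-based
                  output_dim = 128,                # embedding size (placeholder)
                  input_length = maxlen) %>%
  layer_lstm(units = 128) %>%
  layer_dense(units = length(words) + 1, activation = "softmax")

model %>% compile(
  loss = "sparse_categorical_crossentropy",  # integer targets, no one-hot y needed
  optimizer = "adam"
)

model %>% fit(X, y, batch_size = 128, epochs = 1)

With integer inputs the memory cost scales with the number of sequences × maxlen rather than with the vocabulary size, so pruning stop words becomes an optional modelling choice rather than a requirement.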
Edit: I am still looking for a solution to this problem, but some additional background and ideas are provided here: https://github.com/rstudio/keras/issues/161