I am using LDA for topic modeling:
dtm <- DocumentTermMatrix(docs)
However, some rows of dtm have all elements equal to zero, so I removed them as suggested here:
ui = unique(dtm$i)
dtm.new = dtm[ui,]
After that, LDA works and I get topics and everything. My next attempt was to use LDAvis, as suggested here. Source code:
topicmodels_json_ldavis <- function(fitted, corpus, doc_term){
    # Required packages
    library(topicmodels)
    library(dplyr)
    library(stringi)
    library(tm)
    library(LDAvis)

    # Find required quantities
    phi <- posterior(fitted)$terms %>% as.matrix
    theta <- posterior(fitted)$topics %>% as.matrix
    vocab <- colnames(phi)
    doc_length <- vector()
    for (i in 1:length(corpus)) {
        temp <- paste(corpus[[i]]$content, collapse = ' ')
        doc_length <- c(doc_length, stri_count(temp, regex = '\\S+'))
    }
    temp_frequency <- inspect(doc_term)
    freq_matrix <- data.frame(ST = colnames(temp_frequency),
                              Freq = colSums(temp_frequency))
    rm(temp_frequency)

    # Convert to json
    json_lda <- LDAvis::createJSON(phi = phi, theta = theta,
                                   vocab = vocab,
                                   doc.length = doc_length,
                                   term.frequency = freq_matrix$Freq)
    return(json_lda)
}
When I call the topicmodels_json_ldavis function, I get this error:
Length of doc.length not equal to the number of rows in theta;
both should be equal to the number of documents in the data.
I checked the lengths of theta and doc.length; they are indeed different. I assume that is because I passed the corpus (docs), which produces a dtm with (at least one) all-zero row. To make the corpus match the document-term matrix, I decided to build a new corpus from dtm.new, as suggested here. Source code:
dtm2list <- apply(dtm, 1, function(x) {
paste(rep(names(x), x), collapse=" ")
})
myCorp <- VCorpus(VectorSource(dtm2list))
I even created a new ldaOut from dtm.new and passed the following arguments to topicmodels_json_ldavis: ldaOut22, myCorp, dtm.new. I still get the error that theta and doc.length must have the same length.
Answer (score: 1)
I had the exact same problem: I was able to remove the all-zero rows for the LDA analysis, but then got stuck because the number of rows in the sparse matrix no longer matched the number of documents expected by LDAvis. I solved it, unfortunately only for Python, but you can use the following approach as a starting point:
Let's look at what I start with:
print(f'The tf matrix:\n {cvz.toarray()[:100]}\n')
sparseCountMatrix = np.array(cvz.toarray())
print(f'Number of non-zero vectors: {len(sparseCountMatrix[sparseCountMatrix > 0])} Number of zero vectors: {len(sparseCountMatrix[sparseCountMatrix == 0])}\n')
print(f'Have a look at the non-zero vectors:\n{sparseCountMatrix[sparseCountMatrix > 0][:200]}\n')
print(f'This is our sparse matrix with {sparseCountMatrix.shape[0]} (# of documents) by {sparseCountMatrix.shape[1]} (# of terms in the corpus):\n{sparseCountMatrix.shape}')
Output:
The tf matrix:
[[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
...
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]]
Number of non-zero vectors: 4721 Number of zero vectors: 232354
Have a look at the non-zero vectors:
[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]
This is our sparse matrix with 545 (# of documents) by 435 (# of terms in the corpus):
(545, 435)
How many rows contain all zero vectors?
len(sparseCountMatrix[(sparseCountMatrix == 0).all(1)])
Output: 12
How many rows contain at least one non-zero element?
len(sparseCountMatrix[~(sparseCountMatrix == 0).all(1)])
Output: 533
Remove the 12 rows that contain only zeros before the LDA analysis:
cleanedSparseCountMatrix = np.array(sparseCountMatrix[~(sparseCountMatrix==0).all(1)])
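The boolean-mask trick used above can be illustrated on a tiny made-up count matrix (the `counts` array below is a hypothetical stand-in for `sparseCountMatrix`, chosen so rows 1 and 3 are empty documents):

```python
import numpy as np

# Hypothetical toy count matrix: rows 1 and 3 are all-zero "empty" documents
counts = np.array([
    [1, 0, 2],
    [0, 0, 0],
    [0, 3, 0],
    [0, 0, 0],
])

# Boolean mask: True for rows where every entry is zero
empty_rows = (counts == 0).all(axis=1)
print(empty_rows)        # [False  True False  True]

# Keep only the non-empty rows, as in the cleaning step above
cleaned = counts[~empty_rows]
print(cleaned.shape)     # (2, 3)
```

`~empty_rows` inverts the mask, so indexing with it selects exactly the rows that still carry at least one term count.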
Also remove these documents from the original Pandas Series (tokens), so that the document count matches the sparse matrix row count; this is important for visualizing the LDA results with pyLDAvis:
First, get the index positions of the all-zero rows with np.where:
indexesToDrop = np.where((sparseCountMatrix==0).all(1))
print(f"Indexes with all zero vectors: {indexesToDrop}\n")
Output:
Indexes with all zero vectors: (array([ 47, 77, 88, 95, 106, 109, 127, 244, 363, 364, 367, 369],
dtype=int64),)
Second, pass this list of indexes to series.drop to remove the corresponding rows from the Pandas Series:
data_tokens_cleaned = data['tokens'].drop(data['tokens'].index[indexesToDrop])
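A minimal sketch of the same drop-by-position pattern, using invented toy data (`tokens` and `to_drop` are hypothetical stand-ins for `data['tokens']` and `indexesToDrop`):

```python
import numpy as np
import pandas as pd

# Hypothetical token Series: entries 1 and 3 correspond to empty documents
tokens = pd.Series([["a", "b"], [], ["c"], []])

# Positions of the empty documents (mirroring what np.where returns above)
to_drop = np.array([1, 3])

# series.drop works on index labels, so translate positions to labels first
tokens_cleaned = tokens.drop(tokens.index[to_drop])
print(len(tokens_cleaned))   # 2
```

Translating positions into labels via `tokens.index[to_drop]` matters because `Series.drop` takes labels, not integer positions; if the Series has a non-default index the two are not the same thing.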
New length of the cleaned tokens (should match the sparse matrix row count!):
len(data_tokens_cleaned)
Output: 533
This is our cleaned sparse matrix, ready for LDA analysis:
print(cleanedSparseCountMatrix.shape)
Output: (533, 435)
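Putting the two steps together, a self-contained sketch (again with invented toy data) shows how a single mask keeps the count matrix and the token Series aligned, which is exactly the property pyLDAvis checks:

```python
import numpy as np
import pandas as pd

# Hypothetical inputs: a small count matrix and its parallel token Series
counts = np.array([[2, 0], [0, 0], [1, 1]])
tokens = pd.Series([["x", "x"], [], ["x", "y"]])

# One mask drives both cleanups, so the two structures stay aligned
empty = (counts == 0).all(axis=1)
counts_clean = counts[~empty]
tokens_clean = tokens.drop(tokens.index[np.where(empty)[0]])

# Row count of the matrix now matches the document count
assert counts_clean.shape[0] == len(tokens_clean)
print(counts_clean.shape[0], len(tokens_clean))  # 2 2
```

Deriving both cleanups from the same mask avoids the original mismatch: dropping rows from the matrix in one place and documents from the corpus in another, with nothing guaranteeing the two agree.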