How do I save tm_map() output to a csv file?

Date: 2016-08-02 01:03:26

Tags: r csv lda topic-modeling

I am working on analyzing news articles from mashable.com. The data I have created looks like this (14 articles for now, with a factor of popular or unpopular):

id | content        | factor
1  | some text data | popular

I want to do topic modeling on this data using Jonathan Chang's lda package. I tried to do some pre-processing of the data; here is the script for that:
require("ggplot2")
require("grid")
require("plyr")
library(reshape)
library(ScottKnott)
setwd("~/Desktop")
library(lda)
library(tm)
dataValues <- read.csv("Business.csv")

dim(dataValues)
## Text Pre-processing.
## Creating a Corpus from the Original Function
## interprets each element of the vector x as a document
CorpusObj <- VectorSource(dataValues$content)
CorpusObj <- Corpus(CorpusObj)
# remove \r and \n; custom transformations must be wrapped in
# content_transformer() so tm_map() keeps the document structure intact
remove.carriage <- function(x) gsub("[\r\n]", "", x)
CorpusObj <- tm_map(CorpusObj, content_transformer(remove.carriage))
# remove hyperlinks
removeURL <- function(x) gsub("http[[:alnum:]]*", "", x)
CorpusObj <- tm_map(CorpusObj, content_transformer(removeURL))
# replace special characters with spaces
removeSPE <- function(x) gsub("[^a-zA-Z0-9]", " ", x)
CorpusObj <- tm_map(CorpusObj, content_transformer(removeSPE))
CorpusObj <- tm_map(CorpusObj, removePunctuation)
CorpusObj <- tm_map(CorpusObj, removeNumbers)
#CorpusObj <- tm_map(CorpusObj, removeWords, stopwords("english"))
CorpusObj <- tm_map(CorpusObj, stemDocument, language = "english") # stem the words
CorpusObj <- tm_map(CorpusObj, stripWhitespace)
#CorpusObj <- tm_map(CorpusObj, tolower) # convert all text to lower case
inspect(CorpusObj[14])

CorpusObj <- tm_map(CorpusObj, PlainTextDocument)
#save in indiv text file
writeCorpus(CorpusObj, path = "~/Desktop/untitled_folder")
#write 1 file
writeLines(as.character(CorpusObj), con="mycorpus.txt")
inspect(CorpusObj[14])

I want to save the output of

CorpusObj <- tm_map(CorpusObj, PlainTextDocument)

to a .csv file, with each row (cell) holding one document. The function writeCorpus(CorpusObj, path = "~/Desktop/untitled_folder") only writes the last document to a text file.
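One likely cause of the "only the last document" behavior: writeCorpus() names each output file after the document ID, and after tm_map(CorpusObj, PlainTextDocument) the IDs can collide, so later documents overwrite earlier ones. A minimal sketch of the workaround, assuming only that the tm package is installed and using a small in-memory corpus (not Business.csv):

```r
library(tm)

# two tiny documents standing in for the real articles
corp <- Corpus(VectorSource(c("first document", "second document")))

# supply explicit, unique filenames so no document overwrites another
writeCorpus(corp,
            path = ".",
            filenames = paste0("doc_", seq_along(corp), ".txt"))
```

The filenames argument is part of writeCorpus(); the "doc_N.txt" naming scheme here is just an illustration.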

When I try corpusLDA <- lexicalize(CorpusObj) after the PlainTextDocument step, I get the following output: it has all the documents in [1:2, 1:6007] and the other two lists are empty.
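A sketch of what may be going wrong here, assuming the tm and lda packages: lexicalize() expects a character vector with one element per document, so the corpus object should be converted to plain text first rather than passed directly.

```r
library(tm)
library(lda)

# small in-memory corpus standing in for the preprocessed articles
corp <- Corpus(VectorSource(c("popular text data", "other text data")))

# extract one plain character string per document
texts <- sapply(corp, as.character)

# lexicalize() now sees one entry per document
lex <- lexicalize(texts)
```

After this, lex$documents has one entry per document and lex$vocab holds the word list, which is the shape the lda samplers expect.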

Please guide me as to where I am going wrong. Thanks.

1 answer:

Answer 0 (score: 2)

When I inspect the .txt files created by this script, I see all of the different documents; however, they are in a format that is not human-friendly.


Here is what I think you want:

pacman::p_load(ggplot2, grid, plyr, reshape, ScottKnott, lda, tm)

dataValues <- read.csv("Business.csv")
dim(dataValues)
## Text Pre-processing.
## Creating a Corpus from the Original Function
## interprets each element of the vector x as a document
CorpusObj <- VectorSource(dataValues$content)
CorpusObj <- Corpus(CorpusObj)
# remove \r and \n; custom transformations must be wrapped in
# content_transformer() so tm_map() keeps the document structure intact
remove.carriage <- function(x) gsub("[\r\n]", "", x)
CorpusObj <- tm_map(CorpusObj, content_transformer(remove.carriage))
# remove hyperlinks
removeURL <- function(x) gsub("http[[:alnum:]]*", "", x)
CorpusObj <- tm_map(CorpusObj, content_transformer(removeURL))
# replace special characters with spaces
removeSPE <- function(x) gsub("[^a-zA-Z0-9]", " ", x)
CorpusObj <- tm_map(CorpusObj, content_transformer(removeSPE))
CorpusObj <- tm_map(CorpusObj, removePunctuation)
CorpusObj <- tm_map(CorpusObj, removeNumbers)
#CorpusObj <- tm_map(CorpusObj, removeWords, stopwords("english"))
CorpusObj <- tm_map(CorpusObj, stemDocument, language = "english") # stem the words
CorpusObj <- tm_map(CorpusObj, stripWhitespace)
#CorpusObj <- tm_map(CorpusObj, tolower) # convert all text to lower case
inspect(CorpusObj[14])

CorpusObj <- tm_map(CorpusObj, PlainTextDocument)
#save in indiv text file
writeCorpus(CorpusObj)
#write 1 file
tmp <- CorpusObj[1]

dataframe <- data.frame(text = unlist(sapply(CorpusObj, `[`, "content")),
                        stringsAsFactors = FALSE)
write.csv(dataframe, "output.csv")
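The same one-row-per-document idea can be sketched self-contained, using a small in-memory corpus instead of Business.csv (the filename output_demo.csv is only illustrative):

```r
library(tm)

# three tiny documents standing in for the preprocessed articles
corp <- Corpus(VectorSource(c("first article", "second article", "third article")))

# one character string per document becomes one row of the data frame
df <- data.frame(text = sapply(corp, as.character), stringsAsFactors = FALSE)
write.csv(df, "output_demo.csv", row.names = FALSE)
```

Reading output_demo.csv back with read.csv() returns a data frame with three rows, one per document, which is the cell-per-document layout asked for in the question.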