I am working with a tab-delimited file that looks like this:
0 abch7619 Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. 42Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat…..........
1 uewl0928 Duis aute irure d21olor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excep3teur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
0 ahwb3612 Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur
1 llll2019 adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur???? Quis autem vel eum iure reprehenderit qui in ea voluptate velit esse quam nihil molestiae consequatur, vel illum qui dolorem eum fugiat quo voluptas nulla pariatur?
0 jdne2319 At vero eos et accusamus et iusto odio dignissimos ducimus qui blanditiis praesentium voluptatum deleniti atque corrupti quos dolores et quas molestias excepturi sint occaecati cupiditate non provident, similique sunt in culpa qui officia deserunt mollitia animi, id est laborum et dolorum fuga.
1 asbq0918 Et harum quidem rerum facilis est et expedita distinctio................................ Nam libero tempore, cum soluta nobis est eligendi optio cumque nihil impedit quo minus id quod maxime placeat facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut
My goal is to produce a data frame that looks like this:

classification ID word1 word2 word3 word4
foo foo foo foo foo foo

where each word in the long text field of the TSV shows up as a feature (column) whose value is that word's TF-IDF. I could attempt to do this manually, but I am hoping to use sklearn's TfidfVectorizer to do it. However, I need to preprocess the text in that field to follow certain guidelines first.

So far, I can read in the .tsv file, create the data frame, and preprocess the text. Where I am having trouble is putting my text-formatting function together and then passing its output to TfidfVectorizer.

Here is what I have:
import nltk, string, csv, operator, re, collections, sys, struct, zlib, ast, io, math, time
from nltk.tokenize import word_tokenize, RegexpTokenizer
from nltk.corpus import stopwords
from collections import defaultdict, Counter
from bs4 import BeautifulSoup as soup
from math import sqrt
from itertools import islice
import pandas as pd

# This function removes numbers from an array of strings
def remove_nums(arr):
    # Declare a regular expression that matches a digit
    pattern = '[0-9]'
    # Strip every digit from every string in the array
    arr = [re.sub(pattern, '', i) for i in arr]
    # Return the array with numbers removed
    return arr

# This function cleans the passed-in paragraph and parses it into tokens
def get_words(para):
    # Create a set of stop words
    stop_words = set(stopwords.words('english'))
    # Lower-case the paragraph and split it on whitespace
    lower = para.lower().split()
    # Remove punctuation
    no_punctuation = (nopunc.translate(str.maketrans('', '', string.punctuation)) for nopunc in lower)
    # Remove integers
    no_integers = remove_nums(no_punctuation)
    # Remove stop words
    dirty_tokens = (data for data in no_integers if data not in stop_words)
    # Ensure the token is not empty
    tokens = [data for data in dirty_tokens if data.strip()]
    # Ensure there is more than 1 character making up the word
    tokens = [data for data in tokens if len(data) > 1]
    # Return the tokens
    return tokens

def main():
    tsv_file = "filepath"
    print(tsv_file)
    csv_table = pd.read_csv(tsv_file, sep='\t')
    csv_table.columns = ['rating', 'ID', 'text']
    s = pd.Series(csv_table['text'])
    new = s.str.cat(sep=' ')
    vocab = get_words(new)
    print(vocab)

main()
Which produces:

['decent', 'terribly', 'inconsistent', 'food', 'ive', 'great', 'dishes', 'terrible', 'ones', 'love', 'chaat', 'times', 'great', 'fried', 'greasy', 'mess', 'bad', 'way', 'good', 'way', 'usually', 'matar', 'paneer', 'great', 'oversalted', 'peas', 'plain', 'bad', 'dont', 'know', 'coinflip', 'good', 'food', 'oversalted', 'overcooked', 'bowl', 'either', 'way', 'portions', 'generous', 'looks', 'arent', 'everything', 'little', 'divito', 'looks', 'little', 'scary', 'looking', 'like', 'ive', 'said', 'cant', 'judge', 'book', 'cover', 'necessarily', 'kind', 'place', 'take', 'date', 'unless', 'shes', 'blind', 'hungry', 'man', 'oh', 'man', 'food', 'ever', 'good', 'ordered', 'breakfast', 'lunch', 'dinner', 'fantastico', 'make', 'homemade', 'corn', 'tortillas', 'several', 'salsas', 'breakfast', 'burritos', 'world', 'cost', 'mcdonalds', 'meal', 'family', 'eats', 'frequently', 'frankly', 'tired',

However, I am not sure whether this is the proper format for TfidfVectorizer to work with. When I tried to use it, the following code ran correctly:
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer()
feature_matrix = tfidf.fit_transform(csv_table['text'])
df = pd.DataFrame(data=feature_matrix.todense(), columns=tfidf.get_feature_names())
print(df)
But it just gives me a result that I do not know how to interpret. How can I use TfidfVectorizer to achieve my goal of creating a feature matrix of every word with its TF-IDF value, after my cleaning logic has been applied?
Answer 0 (score: 1)
The output of fit_transform is a sparse matrix, so you need to convert it to dense form. To include your cleaning step as well, you could try:
s = pd.Series(csv_table['text'])
corpus = s.apply(lambda s: ' '.join(get_words(s)))
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)
df = pd.DataFrame(data=X.todense(), columns=vectorizer.get_feature_names())
print(df)
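A side note on API drift (an assumption about the reader's scikit-learn version, not part of the original answer): releases from 1.2 onward removed get_feature_names() in favor of get_feature_names_out(), and toarray() sidesteps the legacy np.matrix type that todense() returns, so on a current install the last two lines might instead read:

# Same result on scikit-learn >= 1.2, which dropped get_feature_names()
df = pd.DataFrame(data=X.toarray(), columns=vectorizer.get_feature_names_out())
print(df)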
Basically, what you need to do is apply the cleaning procedure get_words to each document s in csv_table['text'] before passing the result to fit_transform.
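An alternative worth knowing about (a minimal sketch, reusing the get_words function and csv_table data frame from the question): TfidfVectorizer accepts a tokenizer callable, so the cleaning can be pushed into the vectorizer itself instead of joining the cleaned tokens back into strings first:

from sklearn.feature_extraction.text import TfidfVectorizer

# get_words already lower-cases and strips punctuation, digits, and stop
# words, so lowercase is turned off here; sklearn may warn that
# token_pattern is ignored when a custom tokenizer is supplied, which is
# expected.
vectorizer = TfidfVectorizer(tokenizer=get_words, lowercase=False)
X = vectorizer.fit_transform(csv_table['text'])
df = pd.DataFrame(data=X.toarray(), columns=vectorizer.get_feature_names_out())

Either way, each row of df corresponds to one row of the TSV and each column to one surviving word, which is the feature matrix the question asks for.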