Cleaning Twitter data with pandas in Python

Time: 2020-11-06 18:16:03

Tags: python python-3.x pandas

I'm trying to clean Twitter data held in a pandas DataFrame, but I seem to be missing a step. After processing all the tweets, am I losing the results because the cleaned tweets never overwrite the old ones? When I save the file, I don't see any change in the tweets. What am I missing?

import pandas as pd
import re
import emoji
import nltk
nltk.download('words')
words = set(nltk.corpus.words.words())

trump_df = pd.read_csv('new_Trump.csv')
for tweet in trump_df['tweet']:
    tweet = re.sub("@[A-Za-z0-9]+","",tweet) #Remove @ sign
    tweet = re.sub(r"(?:\@|http?\://|https?\://|www)\S+", "", tweet) #Remove http links
    tweet = " ".join(tweet.split())
    tweet = ''.join(c for c in tweet if c not in emoji.UNICODE_EMOJI) #Remove Emojis
    tweet = tweet.replace("#", "").replace("_", " ") #Remove hashtag sign but keep the text
    tweet = " ".join(w for w in nltk.wordpunct_tokenize(tweet) \
         if w.lower() in words or not w.isalpha()) #Remove non-english tweets (not 100% success)
    print(tweet)
trump_df.to_csv('new_Trump.csv')

1 Answer:

Answer 0 (score: 2):

As you suspected, you never store the data back. Let's create a function that does all the cleaning, then pass it over the DataFrame column with map. This is more efficient than looping over every value in the DataFrame and collecting the results in a list (Option B below).

def cleaner(tweet):
    tweet = re.sub("@[A-Za-z0-9]+", "", tweet)  # Remove @mentions
    tweet = re.sub(r"(?:@|https?://|www\.)\S+", "", tweet)  # Remove links (the original "http?\://" was a typo; "https?://" covers both http and https)
    tweet = " ".join(tweet.split())  # Collapse whitespace
    tweet = ''.join(c for c in tweet if c not in emoji.UNICODE_EMOJI)  # Remove emojis (relies on an older emoji package; see the note below)
    tweet = tweet.replace("#", "").replace("_", " ")  # Remove hashtag sign but keep the text
    tweet = " ".join(w for w in nltk.wordpunct_tokenize(tweet)
                     if w.lower() in words or not w.isalpha())  # Drop non-English words (not 100% reliable)
    return tweet

trump_df['tweet'] = trump_df['tweet'].map(cleaner)  # map applies cleaner to every value in the column
trump_df.to_csv('')  # specify an output path; pass index=False to skip writing the row index
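
To see what cleaner does to a single string before trusting it on the whole column, a quick check (the sample tweet below is made up):

sample = "Big news from @SomeUser! https://example.com #Election_Day"
print(cleaner(sample))  # exact output depends on the NLTK word list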

The map call overwrites the tweet column with the cleaned text.
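
One caveat on the emoji step: newer releases of the emoji package reorganized UNICODE_EMOJI (1.x) and removed it entirely (2.x). If that attribute lookup fails for you, a drop-in replacement for the emoji-stripping line, assuming emoji 2.x, is:

tweet = emoji.replace_emoji(tweet, replace='')  # emoji >= 2.0: strips all emoji characters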

Option B:

As mentioned above, this strikes me as slightly less efficient, but it is as simple as creating a list before the for loop and filling it with each cleaned tweet.

clean_tweets = []
for tweet in trump_df['tweet']:
    tweet = re.sub("@[A-Za-z0-9]+", "", tweet)  # Remove @mentions
    ## ...the rest of the cleaning steps from cleaner() go here...
    clean_tweets.append(tweet)  # collect each cleaned tweet
trump_df['tweet'] = clean_tweets  # replace the column with the cleaned list
trump_df.to_csv('')  # specify an output path
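
For what it's worth, the purely regex-based steps can also run without an explicit Python loop via pandas' vectorized string methods. A minimal sketch of that idea (covering only the mention, link, and hashtag steps; the emoji and NLTK filtering would still need map):

trump_df['tweet'] = (trump_df['tweet']
    .str.replace(r"@[A-Za-z0-9]+", "", regex=True)           # mentions
    .str.replace(r"(?:https?://|www\.)\S+", "", regex=True)  # links
    .str.replace("#", "", regex=False)                       # hashtag signs
    .str.replace("_", " ", regex=False))                     # underscores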