Python Pandas handling of special characters in strings

Date: 2020-05-15 17:28:27

Tags: python pandas

I wrote a function that I later want to apply to a dataframe.


def get_word_count(text,df):
    #text is a lowercase list of words
    #df is a dataframe with 2 columns: word and count
    #this function updates the word  counts


    #f=open('stopwords.txt','r')
    #stopwords=f.read()
    stopwords='in the and an - '

    for word in text:
        if word not in stopwords:

            if df['word'].str.contains(word).any():
                df.loc[df['word']==word, 'count']=df['count']+1
            else:
                df.loc[0]=[word,1]
                df.index=df.index+1

    return df

Then I test it:


word_df=pd.DataFrame(columns=['word','count'])
sentence1='[first] - missing "" in the text [first] word'.split()
y=get_word_count(sentence1, word_df)
sentence2="error: wrong word in the [second]  text".split()
y=get_word_count(sentence2, word_df)
y

I get the following result:

 
Word       Count
[first]    2
missing    1
""         1
text       2
word       2
error:     1
wrong      1

So where is the [second] from sentence2?
If I leave out one of the square brackets, I get an error message. How do I handle words that contain special characters? Note that I don't want to strip the special characters out, because then I would miss the "" in sentence1.

3 Answers:

Answer 0: (score: 0)

The problem is with this line:

if df['word'].str.contains(word).any():

This reports True if any entry in the df['word'] column contains the given word, and str.contains treats the word as a regular expression by default. When [second] is compared against the existing words, the pattern is read as the character class [second], so df['word'].str.contains(word) reports True even though no entry equals the concrete word; the exact match df['word']==word then updates nothing, and [second] is never added to the DataFrame.

For a quick fix, I changed the line to:

if (df['word'] == word).any():
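
For context, here is a minimal sketch (with a made-up word/count table, not from the original answer) showing why the default regex interpretation trips over bracketed words, and how a non-regex or exact comparison behaves:

import pandas as pd

# hypothetical table similar to the one built in the question
df = pd.DataFrame({'word': ['[first]', 'missing', 'text'], 'count': [2, 1, 2]})

# By default str.contains treats '[second]' as the regex character class [second],
# so it matches 'missing' (which contains 's') and wrongly reports True.
print(df['word'].str.contains('[second]').any())                # True

# Turning regex matching off, or comparing for equality, gives the expected result.
print(df['word'].str.contains('[second]', regex=False).any())   # False
print((df['word'] == '[second]').any())                          # False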

Answer 1: (score: 0)

Building up a DataFrame row by row in a loop like that is not recommended; you should do something like this instead:

stopwords = 'in the and an - '
# sentence1 and sentence2 here are the raw sentence strings, not the .split() lists from the question
sentence = sentence1 + ' ' + sentence2
df = pd.DataFrame([sentence.split()]).T
df.rename(columns={0: 'Words'}, inplace=True)
df = df.groupby(by=['Words'])['Words'].size().reset_index(name='counts')
df = df[~df['Words'].isin(stopwords.split())]
print(df)

       Words  counts
0         ""       1
2    [first]       2
3   [second]       1
4     error:       1
6    missing       1
7       text       2
9       word       2
10     wrong       1
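
A related sketch (not part of the original answer): if the counts need to refresh as new sentences arrive, the raw sentences can be kept in a list and the table rebuilt on demand with value_counts, which involves no regex matching at all:

import pandas as pd

STOPWORDS = 'in the and an - '.split()
sentences = []  # raw sentence strings collected so far

def word_counts(new_sentence):
    # store the new sentence, then recount the whole corpus
    sentences.append(new_sentence)
    words = [w for w in ' '.join(sentences).split() if w not in STOPWORDS]
    # value_counts groups and counts literal tokens, so '[second]' is just another word
    return pd.Series(words).value_counts().rename_axis('Words').reset_index(name='counts')

print(word_counts('[first] - missing "" in the text [first] word'))
print(word_counts('error: wrong word in the [second]  text'))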

Answer 2: (score: 0)

I have restructured it so that you can keep adding sentences and watch the frequencies grow:

from collections import Counter
from collections import defaultdict

import pandas as pd

def terms_frequency(corpus, stop_words=None):

    '''
    Takes in text and returns a pandas DataFrame of word frequencies.
    '''
    corpus_ = corpus.split()

    # remove stop words
    terms = [word for word in corpus_ if word not in stop_words]
    terms_freq = pd.DataFrame.from_dict(Counter(terms), orient='index').reset_index()

    terms_freq = terms_freq.rename(columns={'index':'word', 0:'count'}).sort_values('count',ascending=False)

    terms_freq.reset_index(inplace=True)
    terms_freq.drop('index',axis=1,inplace=True)

    return terms_freq


def get_sentence(sentence, storage, stop_words=None):
    storage['sentences'].append(sentence)
    corpus = ' '.join(s for s in storage['sentences'])
    return terms_frequency(corpus,stop_words)



# tests
STOP_WORDS = 'in the and an - '
storage = defaultdict(list)

S1 = '[first] - missing "" in the text [first] word'
print(get_sentence(S1,storage,STOP_WORDS))

print('\nNext S2')
S2 = 'error: wrong word in the [second]  text'

print(get_sentence(S2,storage,STOP_WORDS))
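
As a usage note (the third sentence below is just a hypothetical example, not from the original answer), more sentences can keep being fed into the same storage and the frequencies keep accumulating:

print('\nNext S3')
S3 = 'another "" text with the [third] word'
print(get_sentence(S3, storage, STOP_WORDS))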