Optimize a function computation in a pandas column?

Date: 2016-05-01 21:22:45

Tags: python python-2.7 pandas nlp treetagger

Suppose I have the following pandas DataFrame:

id |opinion
1  |Hi how are you?
...
n-1|Hello!

I would like to create a new POS-tagged pandas column like this:

id|     opinion   |POS-tagged_opinions
1 |Hi how are you?|hi\tUH\thi
                  how\tWRB\thow
                  are\tVBP\tbe
                  you\tPP\tyou
                  ?\tSENT\t?

.....

n-1|     Hello    |Hello\tUH\tHello
                   !\tSENT\t!

Following the documentation and tutorials, I have tried several approaches, in particular:

df.apply(postag_cell, axis=1)

df['content'].map(postag_cell)

So I created this POS-tagging cell function:

import pandas as pd

df = pd.read_csv('/Users/user/Desktop/data2.csv', sep='|')
print df.head()


def postag_cell(pandas_cell):
    import pprint   # For proper print of sequences.
    import treetaggerwrapper
    tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')
    #2) tag your text.
    y = [i.decode('UTF-8') if isinstance(i, basestring) else i for i in [pandas_cell]]
    tags = tagger.tag_text(y)
    #3) use the tags list... (list of string output from TreeTagger).
    return tags



#df.apply(postag_cell(), axis=1)

#df['content'].map(postag_cell())




df['POS-tagged_opinions'] = (df['content'].apply(postag_cell))

print df.head()

The function above returns the following:

user:~/PycharmProjects/misc_tests$ time python tagging\ with\ pandas.py



id|     opinion   |POS-tagged_opinions
1 |Hi how are you?|[hi\tUH\thi
                  how\tWRB\thow
                  are\tVBP\tbe
                  you\tPP\tyou
                  ?\tSENT\t?]

.....

n-1|     Hello    |Hello\tUH\tHello
                   !\tSENT\t!

--- 9.53674316406e-07 seconds ---

real    18m22.038s
user    16m33.236s
sys 1m39.066s

The problem is that tagging a large number of opinions takes a very long time.

How can I POS-tag the opinions more efficiently and in a more Pythonic way using pandas and treetagger? I believe the problem comes from my limited knowledge of pandas, because tagging these opinions with treetagger itself is quite fast outside of a pandas DataFrame.

1 Answer:

Answer 0 (score: 1)

There are a few obvious modifications that bring the runtime down to something reasonable (moving the import statements and the instantiation of the TreeTagger class out of the postag_cell function). The code can then be parallelized. Most of the work, however, is done by treetagger itself, and since I know nothing about that software, I cannot tell whether it can be sped up any further.

Minimal working code:

import pandas as pd
import treetaggerwrapper

input_file = 'new_corpus.csv'
output_file = 'output.csv'

def postag_string(s):
    '''Returns tagged text from string s'''
    if isinstance(s, basestring):
        s = s.decode('UTF-8')
    return tagger.tag_text(s)

# Reading in the file
all_lines = []
with open(input_file) as f:
    for line in f:
        all_lines.append(line.strip().split('|', 1))

df = pd.DataFrame(all_lines[1:], columns = all_lines[0])

tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')

df['POS-tagged_content'] = df['content'].apply(postag_string)

# Format fix:
def fix_format(x):
    '''x - a list or an array'''
    # With encoding:
    out = list(tuple(i.encode().split('\t')) for i in x)
    # or without:
    # out = list(tuple(i.split('\t')) for i in x)
    return out
df['POS-tagged_content'] = df['POS-tagged_content'].apply(fix_format)

df.to_csv(output_file, sep = '|')

I did not use pd.read_csv(filename, sep = '|') because your input file is "malformed": it contains unescaped | characters inside some of the text opinions.
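
To illustrate (with a made-up example line, not one from your actual file), splitting only on the first | keeps any later | characters inside the opinion text:

# Hypothetical malformed row: the opinion text itself contains a '|'.
line = 'cv99.txt|Great plot | terrible acting!'
print(line.strip().split('|', 1))
# ['cv99.txt', 'Great plot | terrible acting!']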

UPDATE: After the format fix, the output file looks like this:

$ cat output_example.csv 
|id|content|POS-tagged_content
0|cv01.txt|How are you?|[('How', 'WRB', 'How'), ('are', 'VBP', 'be'), ('you', 'PP', 'you'), ('?', 'SENT', '?')]
1|cv02.txt|Hello!|[('Hello', 'UH', 'Hello'), ('!', 'SENT', '!')]
2|cv03.txt|"She said ""OK""."|"[('She', 'PP', 'she'), ('said', 'VVD', 'say'), ('""', '``', '""'), ('OK', 'UH', 'OK'), ('""', ""''"", '""'), ('.', 'SENT', '.')]"

If that is not exactly the format you want, it can be adjusted.
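
For instance, if you would rather have the multi-line word\tTAG\tlemma layout from your question than a list of tuples, a variant of fix_format could simply join the raw tagger output (a sketch with a hypothetical name, used in place of fix_format above):

def fix_format_as_text(x):
    '''x - the list of "word\tTAG\tlemma" strings returned by TreeTagger'''
    # One tagged token per line, as in the example output in the question.
    return '\n'.join(x)

# df['POS-tagged_content'] = df['POS-tagged_content'].apply(fix_format_as_text)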

Parallelizing the code

It may give some speedup, but don't expect miracles. The overhead of the multiprocessing setup may even outweigh the gains. You can experiment with the number of processes nproc (here set, by default, to the number of CPUs; setting it higher than that is inefficient).

treetaggerwrapper has its own multiprocessing class. I doubt it does anything substantially different from the code below, so I did not try it.

import pandas as pd
import numpy as np
import treetaggerwrapper
import multiprocessing as mp

input_file = 'new_corpus.csv'
output_file = 'output2.csv'

def postag_string_mp(s):
    '''
    Returns tagged text for string s.
    "pool_tagger" is a global name, defined in each subprocess.
    '''
    if isinstance(s, basestring):
        s = s.decode('UTF-8')
    return pool_tagger.tag_text(s)

''' Reading in the file '''
all_lines = []
with open(input_file) as f:
    for line in f:
        all_lines.append(line.strip().split('|', 1))

df = pd.DataFrame(all_lines[1:], columns = all_lines[0])

''' Multiprocessing '''

# Number of processes can be adjusted for better performance:
nproc = mp.cpu_count()

# Function to be run at the start of every subprocess.
# Each subprocess will have its own TreeTagger called pool_tagger.
def init():
    global pool_tagger
    pool_tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')

# The actual job done in subprocesses:
def run(df):
    return df.apply(postag_string_mp)

# Splitting the input
lst_split = np.array_split(df['content'], nproc)

pool = mp.Pool(processes = nproc, initializer = init)
lst_out = pool.map(run, lst_split)
pool.close()
pool.join()

# Concatenating the output from subprocesses 
df['POS-tagged_content'] =  pd.concat(lst_out) 

# Format fix:
def fix_format(x):
    '''x - a list or an array'''
    # With encoding:
    out = list(tuple(i.encode().split('\t')) for i in x)
    # and without:
    # out = list(tuple(i.split('\t')) for i in x)
    return out
df['POS-tagged_content'] = df['POS-tagged_content'].apply(fix_format)

df.to_csv(output_file, sep = '|')

UPDATE

In Python 3, all strings are unicode by default, so you save some hassle and time on decoding/encoding. (In the code below I also use plain numpy arrays instead of a DataFrame in the subprocesses, but the impact of that change is negligible.)

# Python3 code:
import pandas as pd
import numpy as np
import treetaggerwrapper
import multiprocessing as mp

input_file = 'new_corpus.csv'
output_file = 'output3.csv'

''' Reading in the file '''
all_lines = []
with open(input_file) as f:
    for line in f:
        all_lines.append(line.strip().split('|', 1))

df = pd.DataFrame(all_lines[1:], columns = all_lines[0])

''' Multiprocessing '''

# Number of processes can be adjusted for better performance:
nproc = mp.cpu_count()

# Function to be run at the start of every subprocess.
# Each subprocess will have its own TreeTagger called pool_tagger.
def init():
    global pool_tagger
    pool_tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')

# The actual job done in subprocesses:
def run(arr):
    out = np.empty_like(arr)
    for i in range(len(arr)):
        out[i] = pool_tagger.tag_text(arr[i])
    return out

# Splitting the input
lst_split = np.array_split(df.values[:,1], nproc)

with mp.Pool(processes = nproc, initializer = init) as p:
    lst_out = p.map(run, lst_split)

# Concatenating the output from subprocesses 
df['POS-tagged_content'] =  np.concatenate(lst_out) 

# Format fix:
def fix_format(x):
    '''x - a list or an array'''
    out = list(tuple(i.split('\t')) for i in x)
    return out
df['POS-tagged_content'] = df['POS-tagged_content'].apply(fix_format)

df.to_csv(output_file, sep = '|')

After a single run (so nothing statistically significant), I get the following timings on your file:

$ time python2.7 treetagger_minimal.py 
real    0m59.783s
user    0m50.697s
sys     0m16.657s

$ time python2.7 treetagger_mp.py   
real    0m48.798s
user    1m15.503s
sys     0m22.300s

$ time python3 treetagger_mp3.py 
real    0m39.746s
user    1m25.340s
sys     0m21.157s

If the pandas DataFrame df is used only to save everything back to a file, the next step would be to remove pandas from the code entirely. But again, the gain is negligible compared to the time treetagger itself needs.
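
For completeness, a rough pandas-free sketch (Python 3, untested, assuming the same id|content layout as new_corpus.csv; the output file name is made up):

# Python 3 sketch: tag line by line and write the result back without pandas.
import treetaggerwrapper

tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')

with open('new_corpus.csv') as fin, open('output_nopandas.csv', 'w') as fout:
    header = next(fin).strip()
    fout.write(header + '|POS-tagged_content\n')
    for line in fin:
        doc_id, content = line.strip().split('|', 1)
        tags = [tuple(t.split('\t')) for t in tagger.tag_text(content)]
        fout.write('{}|{}|{}\n'.format(doc_id, content, tags))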