Reorganizing output data into a python dictionary

Date: 2013-11-18 11:14:20

Tags: python python-2.7 vector transpose sparse-matrix

I need to create sparse vectors and I would like to try doing it in python. I already have all the data needed to build the vectors, so my task is basically to reformat / rearrange the information I have.

My input is a 5 GB file with 3 tab-separated columns, e.g.:

abandonment-n   about+n-the+v-know-v    1
abandonment-n   above+ns-j+vn-pass-continue-v   1
abandonment-n   after+n-the+n-a-j-stop-n    1
abandonment-n   as+n-the+ns-j-aid-n 1
cake-n  against+n-the+vg-restv  1
cake-n  as+n-a+vd-require-v 1
cake-n  as+n-a-j+vg-up-use-v    1
cake-n  as+n-the+ns-j-aid-n 2
dog-n   as+n-a-j+vg-up-use-v    7
dog-n   as+n-the+ns-j-aid-n 5

The output I want is the following:

2   7
1   1   1   1
1   1   1   2
7   5

where the first row specifies the dimensions (essentially the unique rows // cols) and the second row starts the actual matrix in sparse format.

I believe the most efficient way to do this is in python. However, since I have already computed the corresponding weights for the data, I don't think the classes or vectors in numpy (such as the classes found here) are necessary in this case. So, does anyone have any insight into how I could begin to tackle this rearrangement problem in python?

The first thing I wanted to do was open the file and split the elements into a dictionary, like this:

mydict = {}
with open("sample_outputDM_ALL_COOC", 'r') as infile_A:
    for line in infile_A:
        lemma, feat, weight = line.split()
        # Accumulate every weight seen for a given lemma.
        mydict.setdefault(lemma, []).append(float(weight))

# One tab-separated line of weights per lemma.
for lemma in mydict:
    print("\t".join(str(w) for w in mydict[lemma]))

I have been struggling with this, and I still can't get it right. What I have managed so far is to read all the variables into a dictionary, and I am able to print every individual lemma and every individual weight for each line.

However, I need all the weights corresponding to a given lemma on the same line. I have tried groupby, but I'm not sure it is the best choice for this situation. I believe the solution involves an if/else statement, but I can't figure out how to tie the two together.

So the method should follow: for each target, for each unique freq, print the slotfiller weights on one line.
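Concretely, that grouping can be sketched with a plain dict of lists — no numpy required. This is an illustrative sketch over the sample rows from the question (the inline string stands in for reading the 5 GB file line by line):

```python
from collections import OrderedDict

# The sample rows from the question, tab-separated.
sample = """abandonment-n\tabout+n-the+v-know-v\t1
abandonment-n\tabove+ns-j+vn-pass-continue-v\t1
abandonment-n\tafter+n-the+n-a-j-stop-n\t1
abandonment-n\tas+n-the+ns-j-aid-n\t1
cake-n\tagainst+n-the+vg-restv\t1
cake-n\tas+n-a+vd-require-v\t1
cake-n\tas+n-a-j+vg-up-use-v\t1
cake-n\tas+n-the+ns-j-aid-n\t2
dog-n\tas+n-a-j+vg-up-use-v\t7
dog-n\tas+n-the+ns-j-aid-n\t5"""

# Group weights by lemma, preserving first-seen order.
weights = OrderedDict()
features = set()
for line in sample.splitlines():
    lemma, feat, weight = line.split("\t")
    weights.setdefault(lemma, []).append(weight)
    features.add(feat)

# Header line: counts of unique lemmas and unique features,
# then one tab-separated line of weights per lemma.
print("%d\t%d" % (len(weights), len(features)))
for lemma in weights:
    print("\t".join(weights[lemma]))
```

This prints one row of weights per lemma, in input order; whether the header should count unique lemmas or something else depends on the dimension convention you need.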

1 Answer:

Answer 0 (score: 1)

Is this homework? If not, take a look at the tools available in scipy.sparse, or the hybrid tools of scikits.learn and the Python NLTK (e.g. this example).

Added: Based on the comments and on re-reading the question, I can also imagine using a Pandas.DataFrame to achieve this, but I'm not sure it will be satisfactory given the size of the data. One option is to load the data in multiple chunks, since it appears to be parallelizable over the unique items of the first column. (See my comment below for details.)
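The chunked option can be sketched with pandas' `read_csv(..., chunksize=...)`. In this sketch an in-memory sample stands in for the 5 GB file (on the real data you would pass the filename), and the column names are assumptions matching the answer's `Col1`/`Col2`/`Col3` convention:

```python
import io
import pandas as pd

# In-memory stand-in for the 5 GB tab-separated file.
sample = io.StringIO(
    "abandonment-n\tabout+n-the+v-know-v\t1\n"
    "cake-n\tas+n-the+ns-j-aid-n\t2\n"
    "dog-n\tas+n-the+ns-j-aid-n\t5\n"
)

grouped = {}
# chunksize makes read_csv yield DataFrames of at most that many rows.
for chunk in pd.read_csv(sample, sep="\t", header=None,
                         names=["Col1", "Col2", "Col3"], chunksize=2):
    for lemma, weights in chunk.groupby("Col1")["Col3"]:
        # Extend rather than assign: one lemma's rows may span two chunks.
        grouped.setdefault(lemma, []).extend(weights.tolist())

print(grouped)
```

The tiny `chunksize=2` is only for illustration; on the real file you would use something like a million rows per chunk.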

def sparse_vec(df):
    # Pack the group's weight column into a 1 x N row vector.
    return (df['Col3'].values[None, :],)

# Obviously these would be chunk-specific, and you'd need to do
# another pass to get the global sum of unique ids from Col1 and the
# global max of the number of unique rows-per-id.
n_cols = len(df.Col2.unique())
n_rows = len(df.Col1.unique())

vecs = df.groupby("Col1").apply(sparse_vec)
print(vecs)

Using this on the sample data you gave, in IPython, I see:

In [17]: data = """
   ....: abandonment-n   about+n-the+v-know-v    1
   ....: abandonment-n   above+ns-j+vn-pass-continue-v   1
   ....: abandonment-n   after+n-the+n-a-j-stop-n    1
   ....: abandonment-n   as+n-the+ns-j-aid-n 1
   ....: cake-n  against+n-the+vg-restv  1
   ....: cake-n  as+n-a+vd-require-v 1
   ....: cake-n  as+n-a-j+vg-up-use-v    1
   ....: cake-n  as+n-the+ns-j-aid-n 2
   ....: dog-n   as+n-a-j+vg-up-use-v    7
   ....: dog-n   as+n-the+ns-j-aid-n 5"""

In [18]: data
Out[18]: '\nabandonment-n   about+n-the+v-know-v    1\nabandonment-n   above+ns-j+vn-pass-continue-v   1\nabandonment-n   after+n-the+n-a-j-stop-n    1\nabandonment-n   as+n-the+ns-j-aid-n 1\ncake-n  against+n-the+vg-restv  1\ncake-n  as+n-a+vd-require-v 1\ncake-n  as+n-a-j+vg-up-use-v    1\ncake-n  as+n-the+ns-j-aid-n 2\ndog-n   as+n-a-j+vg-up-use-v    7\ndog-n   as+n-the+ns-j-aid-n 5'

In [19]: data.split("\n")
Out[19]:
['',
 'abandonment-n   about+n-the+v-know-v    1',
 'abandonment-n   above+ns-j+vn-pass-continue-v   1',
 'abandonment-n   after+n-the+n-a-j-stop-n    1',
 'abandonment-n   as+n-the+ns-j-aid-n 1',
 'cake-n  against+n-the+vg-restv  1',
 'cake-n  as+n-a+vd-require-v 1',
 'cake-n  as+n-a-j+vg-up-use-v    1',
 'cake-n  as+n-the+ns-j-aid-n 2',
 'dog-n   as+n-a-j+vg-up-use-v    7',
 'dog-n   as+n-the+ns-j-aid-n 5']

In [20]: data_lines = [x for x in data.split("\n") if x]

In [21]: data_lines
Out[21]:
['abandonment-n   about+n-the+v-know-v    1',
 'abandonment-n   above+ns-j+vn-pass-continue-v   1',
 'abandonment-n   after+n-the+n-a-j-stop-n    1',
 'abandonment-n   as+n-the+ns-j-aid-n 1',
 'cake-n  against+n-the+vg-restv  1',
 'cake-n  as+n-a+vd-require-v 1',
 'cake-n  as+n-a-j+vg-up-use-v    1',
 'cake-n  as+n-the+ns-j-aid-n 2',
 'dog-n   as+n-a-j+vg-up-use-v    7',
 'dog-n   as+n-the+ns-j-aid-n 5']

In [22]: split_lines = [x.split() for x in data_lines]

In [23]: split_lines
Out[23]:
[['abandonment-n', 'about+n-the+v-know-v', '1'],
 ['abandonment-n', 'above+ns-j+vn-pass-continue-v', '1'],
 ['abandonment-n', 'after+n-the+n-a-j-stop-n', '1'],
 ['abandonment-n', 'as+n-the+ns-j-aid-n', '1'],
 ['cake-n', 'against+n-the+vg-restv', '1'],
 ['cake-n', 'as+n-a+vd-require-v', '1'],
 ['cake-n', 'as+n-a-j+vg-up-use-v', '1'],
 ['cake-n', 'as+n-the+ns-j-aid-n', '2'],
 ['dog-n', 'as+n-a-j+vg-up-use-v', '7'],
 ['dog-n', 'as+n-the+ns-j-aid-n', '5']]

In [24]: df = pandas.DataFrame(split_lines, columns=["Col1", "Col2", "Col3"])

In [25]: df
Out[25]:
            Col1                           Col2 Col3
0  abandonment-n           about+n-the+v-know-v    1
1  abandonment-n  above+ns-j+vn-pass-continue-v    1
2  abandonment-n       after+n-the+n-a-j-stop-n    1
3  abandonment-n            as+n-the+ns-j-aid-n    1
4         cake-n         against+n-the+vg-restv    1
5         cake-n            as+n-a+vd-require-v    1
6         cake-n           as+n-a-j+vg-up-use-v    1
7         cake-n            as+n-the+ns-j-aid-n    2
8          dog-n           as+n-a-j+vg-up-use-v    7
9          dog-n            as+n-the+ns-j-aid-n    5

In [26]: df.groupby("Col1").apply(lambda x: (x.Col3.values[None,:],))
Out[26]:
Col1
abandonment-n    (array([[1, 1, 1, 1]], dtype=object),)
cake-n           (array([[1, 1, 1, 2]], dtype=object),)
dog-n                  (array([[7, 5]], dtype=object),)

In [27]: n_rows = len(df.Col1.unique())

In [28]: n_cols = len(df.Col2.unique())

In [29]: n_rows, n_cols
Out[29]: (3, 7)
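If scipy.sparse is an option (as suggested at the top of the answer), the (lemma, feature, weight) triples map directly onto a COO matrix. A sketch over a hand-picked subset of the sample triples, with row/column indices assigned arbitrarily by sorted order:

```python
from scipy.sparse import coo_matrix

# A subset of the sample (lemma, feature, weight) triples.
triples = [
    ("abandonment-n", "about+n-the+v-know-v", 1),
    ("abandonment-n", "as+n-the+ns-j-aid-n", 1),
    ("cake-n", "as+n-the+ns-j-aid-n", 2),
    ("dog-n", "as+n-a-j+vg-up-use-v", 7),
    ("dog-n", "as+n-the+ns-j-aid-n", 5),
]

# Map each unique lemma/feature to an integer index.
row_ids = {name: i for i, name in enumerate(sorted({t[0] for t in triples}))}
col_ids = {name: i for i, name in enumerate(sorted({t[1] for t in triples}))}

rows = [row_ids[t[0]] for t in triples]
cols = [col_ids[t[1]] for t in triples]
vals = [t[2] for t in triples]

# COO format: parallel arrays of (value, (row, col)).
mat = coo_matrix((vals, (rows, cols)), shape=(len(row_ids), len(col_ids)))
print(mat.toarray())
```

From there `mat.tocsr()` gives efficient row slicing, which matches the "one row of weights per lemma" view of the data.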