I have 134 nodes. Fourteen different analyses (runs) have been performed, and for each run the values of all 134 nodes are saved as a dictionary. Each node has values for 150 time steps (150 values per node). For example, run 1 is saved as a dictionary (shown here with just 10 time steps): node A, (0, 1, 0, 5, 6, 7, 8, 1, 0, 6) and node B, (1, 2, 3, 4, 5, 7, 6, 8, 9, 1). Run 2 is saved as another dictionary in the same way. I can export these values to an Excel sheet, but then a node's values are all saved together as (0, 1, 0, 5, 6, 7, 8, 1, 0, 6). I only want the first three values of each node exported to the Excel sheet, in three separate columns (rather than all 10 values).
How do I export the individual values of run 1 and run 2 into their own columns and save them in an Excel sheet?
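For concreteness, here is a minimal sketch of this layout, inferred from the description and the code below (the exact container type of data is an assumption):

# One dictionary per run, each mapping a node name to its list of time-step values.
run1 = {'Node A': [0, 1, 0, 5, 6, 7, 8, 1, 0, 6],
        'Node B': [1, 2, 3, 4, 5, 7, 6, 8, 9, 1]}
# ... run2 through run14 are built the same way ...
data = [run1]  # in the real case this sequence holds all fourteen run dictionaries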
This is the code I have; it saves an Excel sheet with all the values listed in a single column:
run1, run2, run3, run4, run5, run6, run7, run8, run9, run10, run11, run12, run13, run14 = data # each run has 5 values for 2 variables
df = pd.DataFrame.from_dict(data)
df.to_excel("data.xlsx")
When I run this line:
df_1 = df.loc[:, pd.IndexSlice[:, ['Value 1', 'Value 3', 'Value 5']]]
I get the following error:
TypeError Traceback (most recent call last)
<ipython-input-84-8d2d90289161> in <module>()
----> 1 df_1= df.loc[:, pd.IndexSlice[:, ['Value 1', 'Value 3', 'Value 5']]]
/home/MBIAL/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/indexing.py in __getitem__(self, key)
1308
1309 if type(key) is tuple:
-> 1310 return self._getitem_tuple(key)
1311 else:
1312 return self._getitem_axis(key, axis=0)
/home/MBIAL/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/indexing.py in _getitem_tuple(self, tup)
794 def _getitem_tuple(self, tup):
795 try:
--> 796 return self._getitem_lowerdim(tup)
797 except IndexingError:
798 pass
/home/MBIAL/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/indexing.py in _getitem_lowerdim(self, tup)
920 for i, key in enumerate(tup):
921 if is_label_like(key) or isinstance(key, tuple):
--> 922 section = self._getitem_axis(key, axis=i)
923
924 # we have yielded a scalar ?
/home/MBIAL/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/indexing.py in _getitem_axis(self, key, axis)
1470 raise ValueError('Cannot index with multidimensional key')
1471
-> 1472 return self._getitem_iterable(key, axis=axis)
1473
1474 # nested tuple slicing
/home/MBIAL/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/indexing.py in _getitem_iterable(self, key, axis)
1034 def _getitem_iterable(self, key, axis=0):
1035 if self._should_validate_iterable(axis):
-> 1036 self._has_valid_type(key, axis)
1037
1038 labels = self.obj._get_axis(axis)
/home/MBIAL/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/core/indexing.py in _has_valid_type(self, key, axis)
1390
1391 # TODO: don't check the entire key unless necessary
-> 1392 if len(key) and np.all(ax.get_indexer_for(key) < 0):
1393
1394 raise KeyError("None of [%s] are in the [%s]" %
/home/MBIAL/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/indexes/base.py in get_indexer_for(self, target, **kwargs)
2384 """ guaranteed return of an indexer even when non-unique """
2385 if self.is_unique:
-> 2386 return self.get_indexer(target, **kwargs)
2387 indexer, _ = self.get_indexer_non_unique(target, **kwargs)
2388 return indexer
/home/MBIAL/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/indexes/base.py in get_indexer(self, target, method, limit, tolerance)
2284 'backfill or nearest reindexing')
2285
-> 2286 indexer = self._engine.get_indexer(target._values)
2287
2288 return _ensure_platform_int(indexer)
pandas/index.pyx in pandas.index.IndexEngine.get_indexer (pandas/index.c:6077)()
pandas/src/hashtable_class_helper.pxi in pandas.hashtable.PyObjectHashTable.lookup (pandas/hashtable.c:14050)()
TypeError: unhashable type
Thanks,
Priya
Answer 0 (score: 3)
Use a dictionary comprehension with concat, then filter the columns of the MultiIndex with slicers:
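The answer's original code is not preserved here, so the following is only a minimal sketch of that approach, based on the data layout sketched in the question (the helper name run_to_frame and the 'Value N' column labels are assumptions):

import pandas as pd

# Illustrative input, as in the question: one dictionary per run,
# each mapping a node name to its list of time-step values.
data = [{'Node A': [0, 1, 0, 5, 6, 7, 8, 1, 0, 6],
         'Node B': [1, 2, 3, 4, 5, 7, 6, 8, 9, 1]}]  # run2..run14 omitted

def run_to_frame(run_dict):
    # Rows are nodes, one column per time step, labelled 'Value 1', 'Value 2', ...
    frame = pd.DataFrame.from_dict(run_dict, orient='index')
    frame.columns = ['Value %d' % (i + 1) for i in range(frame.shape[1])]
    return frame

# Dictionary comprehension + concat: the columns become a MultiIndex whose
# first level is the run name and whose second level is the value label.
df = pd.concat({'run%d' % (i + 1): run_to_frame(run)
                for i, run in enumerate(data)}, axis=1)
df = df.sort_index(axis=1)  # a lexsorted column MultiIndex keeps slicers happy

# IndexSlice now works, because the columns really are a MultiIndex.
df_1 = df.loc[:, pd.IndexSlice[:, ['Value 1', 'Value 3', 'Value 5']]]
df_1.to_excel("data.xlsx")

This is presumably also why the question's df_1 = ... line raised the TypeError above: with a flat column index, the IndexSlice tuple cannot be matched against the labels.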
Answer 1 (score: 1)
This will work:
Split the list of values in one column into multiple columns with:
df[['Value1','Value2','Value3','Value4','Value5','Value6']] = pd.DataFrame(df.A.values.tolist(), index= df.index)
Then select the columns you need:
df = df[['Value1','Value3','Value6']]
Write to CSV:
import pandas as pd
df.to_csv("Output.csv")
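Read end to end, a minimal runnable version of this answer might look like the following (the column name 'A', the six-value lists and the node index are assumptions used only for illustration); df.to_excel can replace df.to_csv if an .xlsx file is needed:

import pandas as pd

# Assumed starting point: one run loaded so that column 'A' holds each node's
# full list of time-step values.
df = pd.DataFrame({'A': [[0, 1, 0, 5, 6, 7], [1, 2, 3, 4, 5, 7]]},
                  index=['Node A', 'Node B'])

# Split the lists in column 'A' into one column per value.
df[['Value1', 'Value2', 'Value3', 'Value4', 'Value5', 'Value6']] = pd.DataFrame(df.A.values.tolist(), index=df.index)

# Keep only the columns of interest and write them out.
df = df[['Value1', 'Value3', 'Value6']]
df.to_csv("Output.csv")  # or df.to_excel("Output.xlsx") for an Excel file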
Answer 2 (score: 0)
Since my values had no headers, I used the following code to find the column positions of the node I was interested in:
df_1 = df_1.iloc[:, [node_1]]  # node_1 stands for the column position(s) of that node
When I ran this code, it gave me the start and stop positions for node 1. I then used that line in the code jezrael provided above and saved the result to an Excel sheet. In jezrael's code, this is the line I replaced with the line above:
df = df.loc[:, pd.IndexSlice[:, ['Value 1', 'Value 3', 'Value 5']]]
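In other words, once the column positions of the node of interest are known, the label-based line above can be replaced by a purely positional selection; a minimal sketch, with placeholder positions:

import pandas as pd

# Placeholder frame standing in for the concatenated result, which had no
# meaningful column headers in my case.
df = pd.DataFrame([[0, 1, 0, 5, 6, 7], [1, 2, 3, 4, 5, 7]])

# Hypothetical column positions for node 1 (e.g. its 1st, 3rd and 5th values).
node_1_positions = [0, 2, 4]

df_1 = df.iloc[:, node_1_positions]  # positional selection, no labels needed
df_1.to_excel("data.xlsx")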
@jezrael and Rahul Agarwal, thank you for all your help.