I have some large files containing several category columns. "Category" is a generous term, because the values are basically descriptions / partial sentences.
Here are the unique value counts for each column:
Category 1 = 15
Category 2 = 94
Category 3 = 294
Category 4 = 401
Location 1 = 30
Location 2 = 60
Then there is even duplicated user data (first name, last name, ID, etc.).
I am considering the following options to shrink the file size:
1) Create a file that maps each category to a unique integer
2) Create a map (is there a way to do this by reading another file? As in, could I create a .csv, load it as another dataframe and match against it, or would I have to type it in at the start?)
OR
3) Essentially do a join (a VLOOKUP), then drop the old column with the long object values:
pd.merge(df1, categories, on = 'Category1', how = 'left')
del df1['Category1']
What do people usually do in this situation? The files are huge: 60 columns, and most of the data consists of long, repeated categories and timestamps; there is no numeric data at all. That works fine for me, but with the shared-drive space allocation stretched over a few months, sharing the files is almost impossible.
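To illustrate option 2, here is a minimal sketch of loading a mapping from another file and matching against it (the file contents and the labels below are made up for illustration; a real mapping would be read from a .csv on disk):

```python
import pandas as pd
from io import StringIO

# Stand-in for a mapping .csv on disk: long label -> small integer code.
mapping_csv = StringIO(
    "Category1,code\n"
    "Long description A,0\n"
    "Long description B,1\n"
)
mapping = pd.read_csv(mapping_csv)

df1 = pd.DataFrame({'Category1': ['Long description B',
                                  'Long description A',
                                  'Long description B']})

# Replace the long labels with the int codes by mapping through the file.
df1['Category1'] = df1['Category1'].map(mapping.set_index('Category1')['code'])
print(df1)
```

So the mapping never has to be typed in by hand; any two-column .csv loaded as a dataframe can serve as the lookup.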
Answer 0 (score: 4)
To benefit from the Categorical dtype when saving to csv, you may want to follow the process below, and reverse it when you need to use the files again.
To illustrate the process, make a sample dataframe:
import numpy as np
import pandas as pd

df = pd.DataFrame(index=np.arange(0, 100000))
df.index.name = 'index'
df['categories'] = 'Category'
df['locations'] = 'Location'
n1 = np.tile(np.arange(1, 5), df.shape[0] // 4)
n2 = np.tile(np.arange(1, 3), df.shape[0] // 2)
df['categories'] = df['categories'] + pd.Series(n1).astype(str)
df['locations'] = df['locations'] + pd.Series(n2).astype(str)
print(df.info())
<class 'pandas.core.frame.DataFrame'>
Int64Index: 100000 entries, 0 to 99999
Data columns (total 2 columns):
categories 100000 non-null object
locations 100000 non-null object
dtypes: object(2)
memory usage: 2.3+ MB
None
Note the size: 2.3+ MB, which is roughly the size of the csv file.
Now convert these columns to Categorical:
df['categories'] = df['categories'].astype('category')
df['locations'] = df['locations'].astype('category')
print(df.info())
<class 'pandas.core.frame.DataFrame'>
Int64Index: 100000 entries, 0 to 99999
Data columns (total 2 columns):
categories 100000 non-null category
locations 100000 non-null category
dtypes: category(2)
memory usage: 976.6 KB
None
Note that memory usage dropped to 976.6 KB.
But if you now save it to csv:
df.to_csv('test1.csv')
...you will see this in the file:
index,categories,locations
0,Category1,Location1
1,Category2,Location2
2,Category3,Location1
3,Category4,Location2
This means the Categorical values were converted back to strings for saving in the csv. So let's save the label definitions, then strip the labels from the Categorical data:
categories_details = pd.DataFrame(df.categories.drop_duplicates(), columns=['categories'])
print(categories_details)
categories
index
0 Category1
1 Category2
2 Category3
3 Category4
locations_details = pd.DataFrame(df.locations.drop_duplicates(), columns=['locations'])
print(locations_details)
      locations
index
0     Location1
1     Location2
Now convert the Categorical columns to their int codes:
for col in df.select_dtypes(include=['category']).columns:
df[col] = df[col].cat.codes
print(df.head())
categories locations
index
0 0 0
1 1 1
2 2 0
3 3 1
4 0 0
print(df.info())
<class 'pandas.core.frame.DataFrame'>
Int64Index: 100000 entries, 0 to 99999
Data columns (total 2 columns):
categories 100000 non-null int8
locations 100000 non-null int8
dtypes: int8(2)
memory usage: 976.6 KB
None
Save the converted data to csv and note that the file now contains only the numbers, with no labels. The file size reflects this change.
df.to_csv('test2.csv')
index,categories,locations
0,0,0
1,1,1
2,2,0
3,3,1
Save the definitions:
categories_details.to_csv('categories_details.csv')
locations_details.to_csv('locations_details.csv')
When you need to restore the files, load them back from the csv files:
df2 = pd.read_csv('test2.csv', index_col='index')
print(df2.head())
categories locations
index
0 0 0
1 1 1
2 2 0
3 3 1
4 0 0
print(df2.info())
<class 'pandas.core.frame.DataFrame'>
Int64Index: 100000 entries, 0 to 99999
Data columns (total 2 columns):
categories 100000 non-null int64
locations 100000 non-null int64
dtypes: int64(2)
memory usage: 2.3 MB
None
categories_details2 = pd.read_csv('categories_details.csv', index_col='index')
print(categories_details2.head())
categories
index
0 Category1
1 Category2
2 Category3
3 Category4
print(categories_details2.info())
<class 'pandas.core.frame.DataFrame'>
Int64Index: 4 entries, 0 to 3
Data columns (total 1 columns):
categories 4 non-null object
dtypes: object(1)
memory usage: 64.0+ bytes
None
locations_details2 = pd.read_csv('locations_details.csv', index_col='index')
print(locations_details2.head())
locations
index
0 Location1
1 Location2
print(locations_details2.info())
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2 entries, 0 to 1
Data columns (total 1 columns):
locations 2 non-null object
dtypes: object(1)
memory usage: 32.0+ bytes
None
Now use map to replace the int-encoded data with the category descriptions, and convert the columns back to Categorical:
df2['categories'] = df2.categories.map(categories_details2.to_dict()['categories']).astype('category')
df2['locations'] = df2.locations.map(locations_details2.to_dict()['locations']).astype('category')
print(df2.head())
categories locations
index
0 Category1 Location1
1 Category2 Location2
2 Category3 Location1
3 Category4 Location2
4 Category1 Location1
print(df2.info())
<class 'pandas.core.frame.DataFrame'>
Int64Index: 100000 entries, 0 to 99999
Data columns (total 2 columns):
categories 100000 non-null category
locations 100000 non-null category
dtypes: category(2)
memory usage: 976.6 KB
None
Note the memory usage: back to where we were when we first converted the data to Categorical.
If you need to repeat this process many times, it should not be hard to automate it.
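For example, the encode/decode steps could be wrapped in two small helpers (a sketch; the names encode_categoricals and decode_categoricals are mine, not part of pandas):

```python
import pandas as pd

def encode_categoricals(df, columns):
    """Replace string columns by their small int codes; return df plus
    a code -> label lookup per column, so the labels can be restored."""
    lookups = {}
    for col in columns:
        cat = df[col].astype('category')
        lookups[col] = dict(enumerate(cat.cat.categories))
        df[col] = cat.cat.codes
    return df, lookups

def decode_categoricals(df, lookups):
    """Map the int codes back to labels and restore the Categorical dtype."""
    for col, mapping in lookups.items():
        df[col] = df[col].map(mapping).astype('category')
    return df

# Round trip on a tiny frame:
df = pd.DataFrame({'categories': ['Category1', 'Category2', 'Category1'],
                   'locations': ['Location1', 'Location2', 'Location1']})
df, lookups = encode_categoricals(df, ['categories', 'locations'])
print(df.dtypes)  # int8 code columns, ready for a compact to_csv
df = decode_categoricals(df, lookups)
```

The lookup dicts can be saved with to_csv and reloaded with read_csv exactly as the definition files above.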
Answer 1 (score: 3)
Answer 2 (score: 0)
Here is one way to save a dataframe with categorical columns in a single .csv:
Example:
------ -------
Fatcol Thincol: unique strings once, then numbers
------ -------
"Alberta" "Alberta"
"BC" "BC"
"BC" 2 -- string 2
"Alberta" 1 -- string 1
"BC" 2
...
The "Thincol" on the right can be saved as is in a .csv file,
and expanded to the "Fatcol" on the left after reading it in;
this can halve the size of big .csv s with repeated strings.
Functions
---------
fatcol( col: Thincol ) -> Fatcol, list[ unique str ]
thincol( col: Fatcol ) -> Thincol, dict( unique str -> int ), list[ unique str ]
Here "Fatcol" and "Thincol" are type names for iterators, e.g. lists:
Fatcol: list of strings
Thincol: list of strings or ints or NaN s
If a `col` is a `pandas.Series`, its `.values` are used.
This reduced a 700 MB .csv to 248 MB, but write_csv ran at only ~1 MB/sec on my iMac.
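The answer gives only the signatures of fatcol and thincol; a minimal sketch of one possible implementation, using plain Python lists and the 1-based codes shown in the example, might look like:

```python
def thincol(col):
    """Encode: keep the first occurrence of each string, replace repeats
    with the 1-based number of the string they repeat."""
    codes = {}    # unique str -> 1-based int
    uniques = []  # unique strings in order of first appearance
    thin = []
    for s in col:
        if s in codes:
            thin.append(codes[s])
        else:
            codes[s] = len(uniques) + 1
            uniques.append(s)
            thin.append(s)
    return thin, codes, uniques

def fatcol(col):
    """Decode a thin column back to the full list of strings."""
    uniques = []
    fat = []
    for x in col:
        if isinstance(x, str):
            uniques.append(x)
            fat.append(x)
        else:
            fat.append(uniques[int(x) - 1])
    return fat, uniques

thin, codes, uniques = thincol(["Alberta", "BC", "BC", "Alberta", "BC"])
print(thin)  # ['Alberta', 'BC', 2, 1, 2]
```

Since the unique strings appear exactly once, the thin column round-trips through a .csv at close to the size of the numeric encoding, without needing a separate definitions file.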