I work with data output from SAP, but it is neither CSV, because it does not quote strings that contain its separator, nor fixed-width, because it contains multi-byte characters. It is "fixed-width-ish", so to speak.
To get it into pandas I currently read the file, find the delimiter positions, slice each line around those delimiters, and save the result as a proper CSV, which I can then read without any trouble.
I see that pandas read_csv can take a file buffer. How can I pass the stream to it directly, without saving a CSV file? Should I write a generator? Can I get the csv.writer.writerow output without giving it a file handle?
Here is my code:
import pandas as pd

caminho = r'C:\Users\user\Documents\SAP\Tests\\'
arquivo = "ExpComp_01.txt"

tipo_dado = {"KEY_GUID": "object", "DEL_IND": "object", "HDR_GUID": "object",
             "PRICE": "object", "LEADTIME": "int16", "MANUFACTURER": "object",
             "LOAD_TIME": "object", "APPR_TIME": "object", "SEND_TIME": "object",
             "DESCRIPTION": "object"}

def desmembra(linha, limites):
    # This function receives each delimiter's index and cuts around it
    posicao = limites[0]
    for limite in limites[1:]:
        yield linha[posicao+1:limite]
        posicao = limite

def pre_processa(arquivo):
    import csv
    import os
    # Translates SAP output into standard CSV
    with open(arquivo, "r", encoding="mbcs") as entrada, \
         open(arquivo[:-3] + "csv", "w", newline="", encoding="mbcs") as saida:
        escreve = csv.writer(saida, delimiter=";", quoting=csv.QUOTE_MINIMAL).writerow
        for line in entrada:
            # Find heading
            if line[0] == "|":
                delimitadores = [x for x, v in enumerate(line) if v == '|']
                if line[-2] != "|":
                    delimitadores.append(None)
                cabecalho_teste = line[:50]
                escreve([campo.strip() for campo in desmembra(line, delimitadores)])
                break
        for line in entrada:
            if line[0] == "|" and line[:50] != cabecalho_teste:
                escreve([campo.strip() for campo in desmembra(line, delimitadores)])

pre_processa(caminho + arquivo)
dados = pd.read_csv(caminho + arquivo[:-3] + "csv", sep=";",
                    header=0, encoding="mbcs", dtype=tipo_dado)
Also, if you could share some best practices:
I have weird datetime strings such as 20.120.813.132.432, which I can handle with
dados["SEND_TIME"] = pd.to_datetime(dados["SEND_TIME"], format="%Y%m%d%H%M%S")
but only after stripping the dots with something like
dados["SEND_TIME"].replace(regex=True, inplace=True, to_replace=r'\.', value=r'')
I can't write a single date parser for it, because the dates come in different string formats. Would it be faster to specify converters that do this during the import, or to let pandas handle it afterwards, column by column?
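For reference, a minimal sketch of the converter route, assuming the cleaned CSV produced by pre_processa is being read and that only SEND_TIME and LOAD_TIME need their dots stripped; limpa_pontos is just an illustrative helper name, not something from the original code:

def limpa_pontos(valor):
    # Illustrative converter: read_csv calls it on every raw string of the named columns
    return valor.replace(".", "").strip()

dados = pd.read_csv(caminho + arquivo[:-3] + "csv", sep=";", header=0,
                    encoding="mbcs",
                    converters={"SEND_TIME": limpa_pontos, "LOAD_TIME": limpa_pontos})
dados["SEND_TIME"] = pd.to_datetime(dados["SEND_TIME"], format="%Y%m%d%H%M%S")

Whether this beats a vectorized replace after the import is best settled by timing both on your real files.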
I have a similar problem with codes like 99999999, to which I have to add dots so that they become 99.999.999. I don't know whether I should write a converter for that or wait until after the import and do a df.replace.
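As a sketch of the post-import option, assuming the codes are always eight digits and sit in a hypothetical column named COD, a vectorized regex replace can insert the dots in one pass:

# Hypothetical column: group the eight digits as 2+3+3 and re-join them with dots,
# e.g. "99999999" -> "99.999.999"
dados["COD"] = dados["COD"].str.replace(r'^(\d{2})(\d{3})(\d{3})$',
                                        r'\1.\2.\3', regex=True)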
Edit - sample data:
| KEY_GUID|DEL_IND| HDR_GUID|Prod_CD |DESCRIPTION | PRICE|LEADTIME|MANUFACTURER| LOAD_TIME|APPR_TIME | SEND_TIME|
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|000427507E64FB29E2006281548EB186| |4C1AD7E25DC50D61E10000000A19FF83|75123636|Vneráéíoaeot.sadot.m | 29,55 |30 | |20.120.813.132.432 |20120813132929|20.120.505.010.157 |
|000527507E64FB29E2006281548EB186| |4C1AD7E25DC50D61E10000000A19FF83|75123643|Tnerasodaeot|sadot.m | 122,91 |30 | |20.120.813.132.432 |20120813132929|20.120.505.010.141 |
|0005DB50112F9E69E10000000A1D2028| |384BB350BF56315DE20062700D627978|75123676|Dnerasodáeot.sadot.m |252.446,99 |3 |POLAND |20.121.226.175.640 |20121226183608|20.121.222.000.015 |
|000627507E64FB29E2006281548EB186| |4C1AD7E25DC50D61E10000000A19FF83|75123652|Pner|sodaeot.sadot.m | 657,49 |30 | |20.120.813.132.432 |20120813132929|20.120.505.010.128 |
|000727507E64FB29E2006281548EB186| |4C1AD7E25DC50D61E10000000A19FF83| |Rnerasodaeot.sadot.m | 523,63 |30 | |20.120.813.132.432 |20120813132929|20.120.707.010.119 |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| KEY_GUID|DEL_IND| HDR_GUID|Prod_CD |DESCRIPTION | PRICE|LEADTIME|MANUFACTURER| LOAD_TIME|APPR_TIME | SEND_TIME|
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|000827507E64FB29E2006281548EB186| |4C1AD7E25DC50D61E10000000A19FF83|75123603|Inerasodéeot.sadot.m | 2.073,63 |30 | |20.120.813.132.432 |20120813132929|20.120.505.010.127 |
|000927507E64FB29E2006281548EB186| |4C1AD7E25DC50D61E10000000A19FF83|75123662|Ane|asodaeot.sadot.m | 0,22 |30 | |20.120.813.132.432 |20120813132929|20.120.505.010.135 |
|000A27507E64FB29E2006281548EB186| |4C1AD7E25DC50D61E10000000A19FF83|75123626|Pneraíodaeot.sadot.m | 300,75 |30 | |20.120.813.132.432 |20120813132929|20.120.505.010.140 |
|000B27507E64FB29E2006281548EB186| |4C1AD7E25DC50D61E10000000A19FF83| |Aneraéodaeot.sadot.m | 1,19 |30 | |20.120.813.132.432 |20120813132929|20.120.505.010.131 |
|000C27507E64FB29E2006281548EB186| |4C1AD7E25DC50D61E10000000A19FF83|75123613|Cnerasodaeot.sadot.m | 30,90 |30 | |20.120.813.132.432 |20120813132929|20.120.505.010.144 |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
I will be handling other tables with other fields, and they all have this same general shape. I can only trust the delimiters in the heading, and the heading may be repeated inside the data. It looks like a dot-matrix printer listing.
Answer (score: 2):
If you want to build a DataFrame without writing a CSV first, you don't need pd.read_csv at all. While you could write to an in-memory file-like object such as io.BytesIO or cStringIO.StringIO, it makes little sense to join the iterables of values (such as desmembra(line, delimitadores)) back into single strings only to have pd.read_csv re-parse them.
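For completeness, a minimal sketch of that in-memory detour, assuming Python 3's io.StringIO and the generator version of pre_processa defined further down; it works, but it only re-serializes rows that are already available as lists:

import csv
import io
import pandas as pd

# Serialize the already-split rows into an in-memory text buffer as CSV...
buffer = io.StringIO()
escreve = csv.writer(buffer, delimiter=";", quoting=csv.QUOTE_MINIMAL).writerow
for campos in pre_processa(arquivo, "mbcs"):
    escreve(campos)

# ...then rewind the buffer and let read_csv parse it as if it were a file on disk
buffer.seek(0)
dados = pd.read_csv(buffer, sep=";", header=0)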
Instead, it is more direct to use pd.DataFrame, since pd.DataFrame can accept an iterator of row data.
Manipulating values one at a time in plain Python is usually not the fastest approach; using Pandas functions on whole columns is generally quicker. So I would first parse arquivo into a DataFrame of strings, and then post-process the columns into the correct dtypes and values with Pandas functions.
import pandas as pd
import os
import csv
import io

caminho = r'C:\Users\u5en\Documents\SAP\Testes\\'
arquivo = os.path.join(caminho, "ExpComp_01.txt")
arquivo_csv = os.path.splitext(arquivo)[0] + '.csv'

def desmembra(linha, limites):
    # This function receives each delimiter's index and cuts around it
    return [linha[limites[i]+1:limites[i+1]].strip()
            for i in range(len(limites[:-1]))]

def pre_processa(arquivo, enc):
    # Translates SAP output into an iterator of lists of strings
    with io.open(arquivo, "r", encoding=enc) as entrada:
        for line in entrada:
            # Find heading
            if line[0] == "|":
                delimitadores = [x for x, v in enumerate(line) if v == '|']
                if line[-2] != "|":
                    delimitadores.append(None)
                cabecalho_teste = line[:50]
                yield desmembra(line, delimitadores)
                break
        for line in entrada:
            if line[0] == "|" and line[:50] != cabecalho_teste:
                yield desmembra(line, delimitadores)

def post_process(dados):
    dados['LEADTIME'] = dados['LEADTIME'].astype('int16')
    for col in ('SEND_TIME', 'LOAD_TIME', 'PRICE'):
        dados[col] = dados[col].str.replace(r'\.', '', regex=True)
    for col in ('SEND_TIME', 'LOAD_TIME', 'APPR_TIME'):
        dados[col] = pd.to_datetime(dados[col], format="%Y%m%d%H%M%S")
    return dados

enc = 'mbcs'
saida = pre_processa(arquivo, enc)
header = next(saida)
dados = pd.DataFrame(saida, columns=header)
dados = post_process(dados)
print(dados)
which yields
KEY_GUID DEL_IND HDR_GUID \
0 000427507E64FB29E2006281548EB186 4C1AD7E25DC50D61E10000000A19FF83
1 000527507E64FB29E2006281548EB186 4C1AD7E25DC50D61E10000000A19FF83
2 0005DB50112F9E69E10000000A1D2028 384BB350BF56315DE20062700D627978
3 000627507E64FB29E2006281548EB186 4C1AD7E25DC50D61E10000000A19FF83
4 000727507E64FB29E2006281548EB186 4C1AD7E25DC50D61E10000000A19FF83
5 000927507E64FB29E2006281548EB186 4C1AD7E25DC50D61E10000000A19FF83
6 000A27507E64FB29E2006281548EB186 4C1AD7E25DC50D61E10000000A19FF83
7 000B27507E64FB29E2006281548EB186 4C1AD7E25DC50D61E10000000A19FF83
8 000C27507E64FB29E2006281548EB186 4C1AD7E25DC50D61E10000000A19FF83
Prod_CD DESCRIPTION PRICE LEADTIME MANUFACTURER \
0 75123636 Vneráéíoaeot.sadot.m 29,55 30
1 75123643 Tnerasodaeot|sadot.m 122,91 30
2 75123676 Dnerasodáeot.sadot.m 252446,99 3 POLAND
3 75123652 Pner|sodaeot.sadot.m 657,49 30
4 Rnerasodaeot.sadot.m 523,63 30
5 75123662 Ane|asodaeot.sadot.m 0,22 30
6 75123626 Pneraíodaeot.sadot.m 300,75 30
7 Aneraéodaeot.sadot.m 1,19 30
8 75123613 Cnerasodaeot.sadot.m 30,90 30
LOAD_TIME APPR_TIME SEND_TIME
0 2012-08-13 13:24:32 2012-08-13 13:29:29 2012-05-05 01:01:57
1 2012-08-13 13:24:32 2012-08-13 13:29:29 2012-05-05 01:01:41
2 2012-12-26 17:56:40 2012-12-26 18:36:08 2012-12-22 00:00:15
3 2012-08-13 13:24:32 2012-08-13 13:29:29 2012-05-05 01:01:28
4 2012-08-13 13:24:32 2012-08-13 13:29:29 2012-07-07 01:01:19
5 2012-08-13 13:24:32 2012-08-13 13:29:29 2012-05-05 01:01:35
6 2012-08-13 13:24:32 2012-08-13 13:29:29 2012-05-05 01:01:40
7 2012-08-13 13:24:32 2012-08-13 13:29:29 2012-05-05 01:01:31
8 2012-08-13 13:24:32 2012-08-13 13:29:29 2012-05-05 01:01:44