I don't know how to title this question, so if it should be renamed please let me know and I will.
I am reading a csv file that I saved from a piece of measurement equipment. The data and various other key pieces of information are stored in it. I have been working on this all day and I cannot figure out how to correctly retrieve every piece of information from the file. I need to access all of the data, plot it, and read the various other fields such as the timestamp / timezone / model number / serial number and so on. I apologize if this question is too generic; I am lost as to how to approach it.
I typed up several versions of code, so I will only list what I was able to get working. I don't know why I have to use sep='delimiter'; I thought delimiter=',' would work, but it did not. From my research I found header=None, since my file has no header row.
Canopy told me the 'C' engine would not work, so I specified engine='python'. Judging from the output, this code seems to capture everything, but it tells me I only have one column and I don't know how to separate all of this information out.
Here is part of my csv file:
! FILETYPE CSV
! VERSION 1.0 1
! TIMESTAMP Friday 15 April 2016 04:50:05
! TIMEZONE (GMT+08:00) Kuala Lumpur Singapore
! NAME Keysight Technologies
! MODEL N9917A
! SERIAL US52240515
! FIRMWARE_VERSION A.08.05
! CORRECTION
! Application SA
! Trace TIMESTAMP: 2016-04-15 04:50:05Z
! Trace GPS Info...
! GPS Latitude:
! GPS Longitude:
! GPS Seconds Since Last Read: 0
! CHECKSUM 1559060681
! DATA Freq SA Max Hold SA Blank SA Blank SA Blank
! FREQ UNIT Hz
! DATA UNIT dBm
BEGIN
2000000000,-62.6893499803169,0,0,0
2040000000,-64.1528386206532,0,0,0
2080000000,-63.7751897198055,0,0,0
2120000000,-63.663056855945,0,0,0
2160000000,-64.227155790167,0,0,0
2200000000,-63.874804848758,0,0,0
END
Here is my code:
import pandas as pd

df = pd.read_csv('/Users/_XXXXXXXXX_/Desktop/TP1_041416_C.csv',
                 sep='delimiter',
                 header=None,
                 engine='python')
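A note on why that call behaves this way: pandas treats any sep longer than one character as a regular expression, which only the python engine supports (hence the complaint about the 'C' engine), and since the string 'delimiter' never occurs in the file, every whole line lands in a single column. It is also why a plain sep=',' fails on the full file: the '!' header lines do not have the same number of comma-separated fields as the data rows. A small self-contained sketch of the effect, using an inline two-line sample instead of the real file:

import io
import pandas as pd

sample = "2000000000,-62.68,0,0,0\n2040000000,-64.15,0,0,0\n"

# A multi-character sep is treated as a regex and requires engine='python';
# because 'delimiter' never matches, each whole line becomes a single column.
one_col = pd.read_csv(io.StringIO(sample), sep='delimiter', header=None, engine='python')
print(one_col.shape)    # -> (2, 1)

# With the real single-character delimiter, the five fields split correctly.
five_cols = pd.read_csv(io.StringIO(sample), sep=',', header=None)
print(five_cols.shape)  # -> (2, 5)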
Answer (score: 4)
UPDATE: this version first reads and parses the header section; it builds an info dictionary from the parsed header and prepares a skiprows list for the pd.read_csv() call:
from collections import defaultdict
import io
import re
import pandas as pd
import pprint as pp

fn = r'D:\temp\.data\36671176.data'

def parse_header(filename):
    """
    parses useful (check `header_flt` variable) information from the header
    and saves it into a `defaultdict`,
    generates the `skiprows` list for `pd.read_csv()`,
    breaks after parsing the header, so the data block will NOT be read

    returns: parsed info as a defaultdict obj, skiprows list
    """
    # useful header information that will be saved into the defaultdict
    header_flt = r'TIMESTAMP|TIMEZONE|NAME|MODEL|SERIAL|FIRMWARE_VERSION|Trace TIMESTAMP:'
    with open(filename) as f:
        d = defaultdict(list)
        i = 0
        skiprows = []
        for line in f:
            line = line.strip()
            if line.startswith('!'):
                skiprows.append(i)
                # parse `key` (first RegEx group) and `value` (second RegEx group)
                m = re.search(r'!\s+(' + header_flt + r')\s+(.*)\s*', line)
                if m:
                    # save parsed `key` and `value` into the defaultdict
                    d[m.group(1)] = m.group(2)
            elif line.startswith('BEGIN'):
                skiprows.append(i)
            else:
                # stop the loop if the line doesn't start with '!' or 'BEGIN'
                break
            i += 1
    return d, skiprows

info, skiprows = parse_header(fn)

# parse the data block; the header part will be skipped via `skiprows`
# NOTE: `comment='E'` will skip the footer row: "END"
df = pd.read_csv(fn, header=None, usecols=[0, 1], names=['freq', 'dBm'],
                 skiprows=skiprows, skipinitialspace=True, comment='E',
                 error_bad_lines=False)

print(df)
print('-' * 60)
pp.pprint(info)
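Since the end goal in the question is to plot the data, here is a minimal plotting sketch on top of the df and info produced above, assuming matplotlib is installed:

import matplotlib.pyplot as plt

# frequency in Hz on X, amplitude in dBm on Y (units taken from the file header)
ax = df.plot(x='freq', y='dBm', legend=False)
ax.set_xlabel('Frequency [Hz]')
ax.set_ylabel('Amplitude [dBm]')
ax.set_title('{} {}'.format(info.get('MODEL', ''), info.get('Trace TIMESTAMP:', '')))
plt.show()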
OLD version: reads the whole file into memory and parses it there; this can cause problems for huge files, since memory has to be allocated for the entire file contents:
from collections import defaultdict
import io
import re
import pandas as pd
import pprint as pp

fn = r'D:\temp\.data\36671176.data'

header_pat = r'(TIMESTAMP|TIMEZONE|NAME|MODEL|SERIAL|FIRMWARE_VERSION)\s+([^\r\n]*?)\s*[\r\n]+'

def parse_file(filename):
    with open(filename) as f:
        txt = f.read()

    # extract the data block between the BEGIN and END markers
    m = re.search(r'BEGIN\s*[\r\n]+(.*)[\n\r]+END', txt, flags=re.DOTALL | re.MULTILINE)
    if m:
        data = m.group(1)
        df = pd.read_csv(io.StringIO(data), header=None, usecols=[0, 1], names=['freq', 'dBm'])
    else:
        df = pd.DataFrame()

    # collect the interesting header fields into a dictionary
    d = defaultdict(list)
    for m in re.finditer(header_pat, txt, flags=re.S | re.M):
        d[m.group(1)] = m.group(2)

    return df, d

df, info = parse_file(fn)

print(df)
pp.pprint(info)
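With either version, the metadata asked about in the question (model number, serial number, timestamp, timezone, firmware version) is then available from info under the label used in the file header; the expected values in the comments below are taken from the sample file shown above:

print(info['MODEL'])             # N9917A
print(info['SERIAL'])            # US52240515
print(info['TIMESTAMP'])         # Friday 15 April 2016 04:50:05
print(info['TIMEZONE'])          # (GMT+08:00) Kuala Lumpur Singapore
print(info['FIRMWARE_VERSION'])  # A.08.05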