Reading a dat file in Python

Posted: 2017-03-26 15:00:52

Tags: python matlab pandas numpy

I am converting code written in Matlab to Python. Part of it reads a dat file (which is really a csv file). The file has about 30 columns and thousands of rows containing (only!) decimal numeric data (in Matlab it is read into a matrix of doubles). I am looking for the fastest way to read the dat file, and for the object/array/... most similar to Matlab's double matrix for holding the data.

I have tried reading the file in the following two ways:

import numpy
import pandas as pd

my_data1 = numpy.genfromtxt('FileName.dat', delimiter=',')
my_data2 = pd.read_csv('FileName.dat', delimiter=',')
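
Note that for a purely numeric file with no header row, pd.read_csv by default treats the first line as column names, so the first row of data would be consumed as labels; passing header=None avoids this:

my_data2 = pd.read_csv('FileName.dat', delimiter=',', header=None)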

Is there a better option?

1 answer:

Answer 0 (score: 1)

pd.read_csv is very efficient. To speed it up, you can try loading the data in parallel on multiple cores. Below is some example code where I used pd.read_csv together with joblib when I needed to load and process data faster:

from os import listdir
from os.path import isfile, join
import pandas as pd
import time
# Parallel processing (joblib runs the jobs in separate worker processes)
from joblib import Parallel, delayed
import multiprocessing
# Garbage collector
import gc

# Number of cores
TOTAL_NUM_CORES = multiprocessing.cpu_count()
# Path to the folder with the input data files
DATA_PATH = 'D:\\'
# Path to save the processed files
TARGET_PATH = 'C:\\'

def read_and_convert(f, num_files):
    # Read the file (a headerless CSV of tick data)
    dataframe = pd.read_csv(join(DATA_PATH, f), low_memory=False, header=None,
                            names=['Symbol', 'Date_Time', 'Bid', 'Ask'],
                            index_col=1, parse_dates=True)
    # Process the data (process_data is my own transformation, defined elsewhere)
    data_ask_bid = process_data(dataframe)
    # Store the processed data in the target folder
    data_ask_bid.to_csv(join(TARGET_PATH, f))
    print(f)
    # Garbage collector. I needed this, otherwise my memory would fill up after a few files, but you might not need it.
    gc.collect()

def main():
    start_time = time.time()
    # Get the names of all the data files
    files_names = [f for f in listdir(DATA_PATH) if isfile(join(DATA_PATH, f))]

    # Load and process the files in parallel
    Parallel(n_jobs=TOTAL_NUM_CORES)(delayed(read_and_convert)(f, len(files_names)) for f in files_names)
    # for f in files_names: read_and_convert(f, len(files_names))  # non-parallel version
    print("\nTook %s seconds." % (time.time() - start_time))

if __name__ == "__main__":
    main()
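
For the single-file case in the question, the closest analogue of Matlab's double matrix is a 2-D NumPy array; a minimal sketch (reusing FileName.dat from the question):

import pandas as pd

# Read the headerless numeric CSV with pandas' fast C parser
df = pd.read_csv('FileName.dat', delimiter=',', header=None)
# Extract the underlying 2-D float array, the equivalent of Matlab's double matrix
my_data = df.values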