Loop slows down when using Python instead of a shell script

Time: 2018-11-14 14:02:58

Tags: python shell loops memory slowdown

I have two Python scripts, network.py and analysis.py. The first reads in a time series as an np.array (approx. 180 MB) and, using some parameters, creates predictions, which it saves together with truth as .txt files for later inspection (each file is up to approx. 160 MB). The second script then reads in these files and generates values stored in a multidimensional np.array whose dimensions depend on the number of parameters mentioned before. The parameters were previously passed in as command-line arguments by a shell script and read via sys.
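For context, the earlier file-based handoff presumably looked roughly like the following sketch; the argument order, file names, and placeholder arrays are assumptions for illustration, not the author's actual code:

    #!/usr/bin/env python3
    # Hypothetical sketch of the original file-based handoff between
    # network.py and analysis.py (not the author's exact code).
    import sys
    import numpy as np

    # parameters arrive as command-line strings from the shell script
    size, variables, hiddenLayers, nodesPerLayer, train = sys.argv[1:6]

    # stand-ins for the real model output; in network.py these come from training
    truth = np.zeros((750, 100))
    predictions = np.zeros((750, 100))

    # every iteration of the shell loop pays for this disk round-trip
    np.savetxt('predictions_' + size + '_' + variables + '.txt', predictions)
    np.savetxt('truth_' + size + '_' + variables + '.txt', truth)

    # analysis.py then re-reads the same data from disk:
    truth = np.loadtxt('truth_' + size + '_' + variables + '.txt')
    predictions = np.loadtxt('predictions_' + size + '_' + variables + '.txt')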

I want to retrieve values across the whole parameter space, and used a shell-script loop for this. I let it run overnight and didn't measure the time taken, but it can be estimated at roughly 12 hours.

To do this faster, I did the following: instead of the shell script, I used another Python script that imports the necessary functions from network.py and analysis.py and loops over them (shown below under "The loop"). I now pass truth and predictions from one script to the other in memory, without saving and re-reading them, and I got rid of having to read in the time series (mentioned at the beginning) on every iteration.

I let it run overnight again, and after about 20 hours it had only finished about 65%. Progress was very slow, much slower than at the start. Using top, I saw the process occupying about 26 GB of virtual memory, which seems simply insane to me.
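One thing worth checking with this kind of loop: Keras with the TensorFlow 1.x backend of that era keeps every model ever built in a single global graph, so constructing a fresh Sequential model in each of the 875 iterations can grow memory and slow graph operations over time. Below is a minimal sketch of keeping per-iteration state bounded with keras.backend.clear_session(), assuming that backend; the helper name and toy data are illustrative, not from the original scripts:

    #!/usr/bin/env python3
    # Sketch: bounding per-iteration Keras state (assumes Keras on a TF 1.x backend).
    import numpy as np
    from keras import backend as K
    from keras.models import Sequential
    from keras.layers import Dense

    def build_and_train(x, y, nodes):
        # build, train, and evaluate one throwaway model
        model = Sequential()
        model.add(Dense(nodes, activation='relu', input_dim=x.shape[1]))
        model.add(Dense(y.shape[1], activation='linear'))
        model.compile(optimizer='adam', loss='mse')
        model.fit(x, y, epochs=1, verbose=0)
        return model.predict(x)

    x = np.random.rand(100, 10)
    y = np.random.rand(100, 10)
    for nodes in (50, 100, 250):
        preds = build_and_train(x, y, nodes)
        print(nodes, preds.shape)
        # without this, each iteration's layers stay in the global graph
        K.clear_session()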

Am I missing something fundamental here? Is this a common mistake? I couldn't find an answer. Any help is much appreciated!

Here are the full scripts. Sorry that they're a bit messy; it's all work in progress...

The loop:


    #!/usr/bin/env python3
    import numpy as np
    from kerasNetwork import network
    from analysis import analysis

    thesis_home = '/home/r/Raphael.Kriegmair/uni/master/thesis'
    n = 1
    size = '10000'

    print("INFO: Reading training data")
    inputStates = np.loadtxt(thesis_home + '/trainingData/modifiedShallowWater/mswOutput_'
                             + size + '.txt')

    for variables in 'u', 'h', 'r', 'uh', 'ur', 'hr', 'uhr':
        for hiddenLayers in '1', '2', '3', '4', '5':
            for nodesPerLayer in '50', '100', '250', '500', '750':
                for train in '0.1', '0.3', '0.5', '0.7', '0.9':
                    truth, predictions = network(inputStates, size, variables,
                                                 hiddenLayers, nodesPerLayer, train)
                    analysis(truth, predictions, size, variables,
                             hiddenLayers, nodesPerLayer, train)
                    print('CURRENTLY: ' + str(n) + ' / 875')
                    n = n + 1

network.py

    #!/usr/bin/env python3
    from keras.models import Sequential
    from keras.layers import Dense, Conv1D  #, Activation, BatchNormalization
    #from keras.preprocessing.sequence import TimeseriesGenerator
    import numpy as np

    def network(inputStates, size, variables, hiddenLayers, nodesPerLayer, train):
        nodesPerLayer = int(nodesPerLayer)
        train = float(train)

        print("INFO: Preprocessing training data")
        # input/output state pairs share time index
        outputStates = inputStates[:, 1:]
        inputStates = inputStates[:, :-1]

        # split variables
        u_input = inputStates[0:250, :]
        h_input = inputStates[250:500, :]
        r_input = inputStates[500:750, :]
        u_output = outputStates[0:250, :]
        h_output = outputStates[250:500, :]
        r_output = outputStates[500:750, :]

        numStates = len(inputStates[0, :])

        # normalize data
        u_mean = np.mean(u_input)
        h_mean = np.mean(h_input)
        r_mean = np.mean(r_input)
        u_sigma = np.std(u_input)
        h_sigma = np.std(h_input)
        r_sigma = np.std(r_input)
        u_input, u_output = (u_input - u_mean)/u_sigma, (u_output - u_mean)/u_sigma
        h_input, h_output = (h_input - h_mean)/h_sigma, (h_output - h_mean)/h_sigma
        r_input, r_output = (r_input - r_mean)/r_sigma, (r_output - r_mean)/r_sigma

        # choose variables
        if variables == 'uhr':
            trainInput = np.concatenate((u_input, h_input, r_input), axis=0)
            trainOutput = np.concatenate((u_output, h_output, r_output), axis=0)
        elif variables == 'uh':
            trainInput = np.concatenate((u_input, h_input), axis=0)
            trainOutput = np.concatenate((u_output, h_output), axis=0)
        elif variables == 'ur':
            trainInput = np.concatenate((u_input, r_input), axis=0)
            trainOutput = np.concatenate((u_output, r_output), axis=0)
        elif variables == 'hr':
            trainInput = np.concatenate((h_input, r_input), axis=0)
            trainOutput = np.concatenate((h_output, r_output), axis=0)
        elif variables == 'u':
            trainInput = u_input
            trainOutput = u_output
        elif variables == 'h':
            trainInput = h_input
            trainOutput = h_output
        elif variables == 'r':
            trainInput = r_input
            trainOutput = r_output
        else:
            print('ARGUMENT ERROR: INVALID VARIABLE COMBINATION')

        dim = len(trainInput[:, 0])

        print("INFO: Initializing model")
        # activations: relu, sigmoid, ...
        model = Sequential()
        model.add(Dense(nodesPerLayer, activation='relu', input_dim=dim))
        #BatchNormalization()
        for layers in range(int(hiddenLayers) - 1):
            model.add(Dense(nodesPerLayer, activation='relu'))
        model.add(Dense(dim, activation='linear'))
        model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])

        # Train the model, iterating on the data
        print("INFO: Training")
        val = 1 - train
        model.fit(np.swapaxes(trainInput, 0, 1), np.swapaxes(trainOutput, 0, 1),
                  epochs=15, validation_split=val)

        print("INFO: Generating predictions")
        # generate predictions
        predictionNumStates = int(numStates*val)
        trainNumStates = int(numStates*train)
        predictions = np.empty((1,) + trainInput[:, :predictionNumStates].shape)
        print("Predictions shape: ", predictions.shape)
        predictions[0, :, :] = trainInput[:, trainNumStates+1:]
        for n in range(predictionNumStates-1):
            #print(predictions[:,n].shape)  #, steps=1
            # TODO: why do I need this extra index here???
            predictions[:, :, n] = model.predict(predictions[:, :, n])

        print("INFO: Saving results")
        # compare
        truth = trainOutput[:, trainNumStates+1:]
        predictions = predictions[0, :, :]
        #difference = np.square(truth - predictions[0,:,:])

        return truth, predictions

        # predictions_filename = ('predictions_' + size + '_' + variables + '_'
        #                         + hiddenLayers + '_' + str(nodesPerLayer) + '_'
        #                         + str(train) + '.txt')
        # truth_filename = 'truth_' + size + '_' + variables + '_' + str(train) + '.txt'
        #
        # # these files are moved to "work" directory afterwards by experiments' shell script
        # np.savetxt(thesis_home + '/temporary/' + predictions_filename, predictions)
        # np.savetxt(thesis_home + '/temporary/' + truth_filename, truth)
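A side note on the prediction loop near the end of network(): as written, each column of predictions is read and overwritten exactly once, so the loop amounts to one-step-ahead prediction issued as one model.predict call per time step with a batch of size 1. If one-step-ahead prediction is indeed the intent, a single batched call gives the same result and is typically far faster. A sketch under that assumption, reusing the function's local variables:

    # Possible replacement for the per-step loop inside network(),
    # assuming one-step-ahead prediction is intended (each column is
    # predicted from the stored input, never from a previous prediction):
    inputs = trainInput[:, trainNumStates+1:]    # shape (dim, numSteps)
    predictions = model.predict(inputs.T).T      # one batched call over all steps
    truth = trainOutput[:, trainNumStates+1:]
    return truth, predictions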

analysis.py

0 Answers