Process finished with exit code 134 (interrupted by signal 6: SIGABRT) in Scikit-Optimize

Date: 2019-02-17 15:54:54

Tags: python-3.x tensorflow optimization deep-learning hyperparameters

I am using gp_minimize from the scikit-optimize library to run a hyperparameter optimization task on an LSTM. The logic of my code has no errors and it runs fine on its own. When I run it with the optimization, it processes for a while and then fails with the following error message:


OMP: Error #15: Initializing libiomp5.so, but found libiomp5.so already initialized.
OMP: Hint: This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/


Process finished with exit code 134 (interrupted by signal 6: SIGABRT)

I can't seem to find the cause. Any help is much appreciated. Please note that the code is written in plain TensorFlow.
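If it matters, the unsafe workaround that the message itself mentions would presumably have to be applied before TensorFlow is imported, roughly like the sketch below. I have not relied on it, since the hint warns it may crash or silently produce incorrect results:

import os

# Unsafe, unsupported workaround named in the OMP hint: tolerate the duplicate
# OpenMP runtime instead of aborting the process.
os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'

import tensorflow as tf  # imported only after the variable is set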

My full code is shown below:

import skopt
from skopt import gp_minimize
from skopt.space import Real, Integer
from skopt.utils import use_named_args
import tensorflow as tf
import numpy as np
import pandas as pd
from math import sqrt
import atexit
from time import time, strftime, localtime
from datetime import timedelta
from sklearn.metrics import mean_squared_error
from skopt.plots import plot_convergence



randomState = 46
np.random.seed(randomState)
tf.set_random_seed(randomState)


input_size=1
num_layers=1
features = 2
column_min_max = [[0, 2000],[0,500000000]]
columns = ['Close', 'Volume']


num_steps = None
lstm_size = None
batch_size = None
init_learning_rate = None
learning_rate_decay = None
init_epoch = None
max_epoch = None
dropout_rate = None

lstm_num_steps = Integer(low=2, high=14, name='lstm_num_steps')
size = Integer(low=8, high=200, name='size')
lstm_learning_rate_decay = Real(low=0.7, high=0.99, prior='uniform', name='lstm_learning_rate_decay')
lstm_max_epoch = Integer(low=20, high=200, name='lstm_max_epoch')
lstm_init_epoch = Integer(low=2, high=50, name='lstm_init_epoch')
lstm_batch_size = Integer(low=5, high=100, name='lstm_batch_size')
lstm_dropout_rate = Real(low=0.1, high=0.9, prior='uniform', name='lstm_dropout_rate')
lstm_init_learning_rate = Real(low=1e-4, high=1e-1, prior='log-uniform', name='lstm_init_learning_rate')


dimensions = [lstm_num_steps, size, lstm_init_epoch, lstm_max_epoch,
              lstm_learning_rate_decay, lstm_batch_size, lstm_dropout_rate, lstm_init_learning_rate]

default_parameters = [2,128,3,30,0.99,64,0.2,0.001]
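# Note: these defaults are ordered to match `dimensions` above.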

#------------ to log execution time ---------------------------------

def secondsToStr(elapsed=None):
    if elapsed is None:
        return strftime("%Y-%m-%d %H:%M:%S", localtime())
    else:
        return str(timedelta(seconds=elapsed))


def logger(s, elapsed=None):
    line = "=" * 40
    print(line)
    print(secondsToStr(), '-', s)
    if elapsed:
        print("Elapsed time:", elapsed)
    print(line)
    print()


def endlog():
    end = time()
    elapsed = end - start
    logger("End Program", secondsToStr(elapsed))

#--------------------------------------------------------------------

def generate_batches(train_X, train_y, batch_size):
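    # Yield (X, y) mini-batches in order; the last batch may be smaller than batch_size.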
    num_batches = int(len(train_X)) // batch_size
    if batch_size * num_batches < len(train_X):
        num_batches += 1

    batch_indices = range(num_batches)
    for j in batch_indices:
        batch_X = train_X[j * batch_size: (j + 1) * batch_size]
        batch_y = train_y[j * batch_size: (j + 1) * batch_size]
        yield batch_X, batch_y


def segmentation(data):
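    # Turn the (Close, Volume) series into overlapping windows:
    # X holds num_steps consecutive feature rows, y is the next Close value.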

    seq = [price for tup in data[['Close', 'Volume']].values for price in tup]

    seq = np.array(seq)

    # split into items of features
    seq = [np.array(seq[i * features: (i + 1) * features])
           for i in range(len(seq) // features)]

    # split into groups of num_steps
    X = np.array([seq[i: i + num_steps] for i in range(len(seq) - num_steps)])

    y = np.array([seq[i + num_steps] for i in range(len(seq) - num_steps)])

    # get only sales value
    y = [[y[i][0]] for i in range(len(y))]

    y = np.asarray(y)

    return X, y

def scale(data):
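    # Min-max scale each column into [0, 1] using the fixed ranges in column_min_max.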
    for i in range(len(column_min_max)):
        data[columns[i]] = (data[columns[i]] - column_min_max[i][0]) / ((column_min_max[i][1]) - (column_min_max[i][0]))

    return data


def rescle(test_pred):
    prediction = [(pred * (column_min_max[0][1] - column_min_max[0][0])) + column_min_max[0][0] for pred in test_pred]
    return prediction


def mean_absolute_percentage_error(y_true, y_pred):
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    itemindex = np.where(y_true == 0)
    y_true = np.delete(y_true, itemindex)
    y_pred = np.delete(y_pred, itemindex)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

def RMSPE(y_true, y_pred):
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    itemindex = np.where(y_true == 0)
    y_true = np.delete(y_true, itemindex)
    y_pred = np.delete(y_pred, itemindex)
    return np.sqrt(np.mean(np.square(((y_true - y_pred) / y_true)), axis=0))


def pre_process():
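    # Read the CSV, split it into train/validation sets, scale with the fixed
    # min/max ranges and segment into windows; also keep the unscaled validation targets.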

    stock_data = pd.read_csv('AIG.csv')
    stock_data = stock_data.reindex(index=stock_data.index[::-1])

    vali_ratio = 0.2
    test_ratio = 0.5

    train_size = int(len(stock_data) * (1.0 - vali_ratio))
    temp_len = len(stock_data)-train_size
    validation_len = int(temp_len * (1.0 - test_ratio))
    #the final 5% of the data will be for the test set

    train_data = stock_data[:train_size]
    validation_data = stock_data[(train_size - num_steps): validation_len + train_size]
    original_val_data = validation_data.copy()

    # -------------- processing train data---------------------------------------
    scaled_train_data = scale(train_data)
    train_X, train_y = segmentation(scaled_train_data)

    # -------------- processing validation data---------------------------------------
    scaled_validation_data = scale(validation_data)
    val_X, val_y = segmentation(scaled_validation_data)

    # ----segmenting original validation data-----------------------------------------------
    nonescaled_val_X, nonescaled_val_y = segmentation(original_val_data)


    return train_X, train_y, val_X, val_y, nonescaled_val_y


def setupRNN(inputs,model_dropout_rate):
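    # Single-layer LSTM: run dynamic_rnn, keep the output of the last time step,
    # apply dropout, then a dense layer mapping lstm_size units to one output value.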


    cell = tf.contrib.rnn.LSTMCell(lstm_size, state_is_tuple=True, activation=tf.nn.tanh,use_peepholes=True)

    val1, _ = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

    val = tf.transpose(val1, [1, 0, 2])

    last = tf.gather(val, int(val.get_shape()[0]) - 1, name="last_lstm_output")

    dropout = tf.layers.dropout(last, rate=model_dropout_rate, training=True,seed=46)

    weight = tf.Variable(tf.truncated_normal([lstm_size, input_size]))
    bias = tf.Variable(tf.constant(0.1, shape=[input_size]))

    prediction = tf.matmul(dropout, weight) + bias

    return prediction


@use_named_args(dimensions=dimensions)
def fitness(lstm_num_steps, size, lstm_init_epoch, lstm_max_epoch,
              lstm_learning_rate_decay, lstm_batch_size, lstm_dropout_rate, lstm_init_learning_rate):
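    # Objective for gp_minimize: rebuild and train the graph with the sampled
    # hyperparameters, then return the validation RMSE (to be minimized).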

    global  iteration, num_steps, lstm_size, init_epoch, max_epoch, learning_rate_decay, dropout_rate, init_learning_rate, batch_size

    num_steps = np.int32(lstm_num_steps)
    lstm_size = np.int32(size)
    batch_size = np.int32(lstm_batch_size)
    learning_rate_decay = np.float32(lstm_learning_rate_decay)
    init_epoch = np.int32(lstm_init_epoch)
    max_epoch = np.int32(lstm_max_epoch)
    dropout_rate = np.float32(lstm_dropout_rate)
    init_learning_rate = np.float32(lstm_init_learning_rate)

    tf.reset_default_graph()
    tf.set_random_seed(randomState)
    sess = tf.Session()

    train_X, train_y, val_X, val_y,  nonescaled_val_y = pre_process()

    inputs = tf.placeholder(tf.float32, [None, num_steps, features], name="inputs")
    targets = tf.placeholder(tf.float32, [None, input_size], name="targets")
    model_learning_rate = tf.placeholder(tf.float32, None, name="learning_rate")
    model_dropout_rate = tf.placeholder_with_default(0.0, shape=())
    global_step = tf.Variable(0, trainable=False)

    prediction = setupRNN(inputs,model_dropout_rate)

    model_learning_rate = tf.train.exponential_decay(learning_rate=model_learning_rate, global_step=global_step, decay_rate=learning_rate_decay,
                                               decay_steps=init_epoch, staircase=False)

    with tf.name_scope('loss'):
        model_loss = tf.losses.mean_squared_error(targets, prediction)

    with tf.name_scope('adam_optimizer'):
        train_step = tf.train.AdamOptimizer(model_learning_rate).minimize(model_loss,global_step=global_step)


    sess.run(tf.global_variables_initializer())

    for epoch_step in range(max_epoch):

        for batch_X, batch_y in generate_batches(train_X, train_y, batch_size):
            train_data_feed = {
                inputs: batch_X,
                targets: batch_y,
                model_learning_rate: init_learning_rate,
                model_dropout_rate: dropout_rate
            }
            sess.run(train_step, train_data_feed)

    val_data_feed = {
        inputs: val_X,
    }
    vali_pred = sess.run(prediction, val_data_feed)

    vali_pred_vals = rescle(vali_pred)

    vali_pred_vals = np.array(vali_pred_vals)

    vali_pred_vals = (np.round(vali_pred_vals, 0)).astype(np.int32)

    vali_pred_vals = vali_pred_vals.flatten()

    vali_pred_vals = vali_pred_vals.tolist()

    vali_nonescaled_y = nonescaled_val_y.flatten()

    vali_nonescaled_y = vali_nonescaled_y.tolist()

    val_error = sqrt(mean_squared_error(vali_nonescaled_y, vali_pred_vals))

    return val_error


if __name__ == '__main__':

    start = time()

    search_result = gp_minimize(func=fitness,
                                dimensions=dimensions,
                                acq_func='EI',  # Expected Improvement.
                                n_calls=11,
                                x0=default_parameters,
                                random_state=46)

    print(search_result.x)
    print(search_result.fun)
    plot = plot_convergence(search_result,yscale="log")


    atexit.register(endlog)
    logger("Start Program")

The full stack trace is shown below:


/home/suleka/anaconda3/lib/python3.6/site-packages/sklearn/ensemble/weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release.
  from numpy.core.umath_tests import inner1d
/home/suleka/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
2019-02-17 20:22:25.951102: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
2019-02-17 20:22:25.952049: I tensorflow/core/common_runtime/process_util.cc:69] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
/home/suleka/Documents/RNN models/skopt_stock.py:114: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
  data[columns[i]] = (data[columns[i]] - column_min_max[i][0]) / ((column_min_max[i][1]) - (column_min_max[i][0]))
/home/suleka/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:100: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
/home/suleka/Documents/RNN models/skopt_stock.py:114: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
  data[columns[i]] = (data[columns[i]] - column_min_max[i][0]) / ((column_min_max[i][1]) - (column_min_max[i][0]))
OMP: Error #15: Initializing libiomp5.so, but found libiomp5.so already initialized.
OMP: Hint: This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/

     

Process finished with exit code 134 (interrupted by signal 6: SIGABRT)

0 Answers:

No answers yet.