DataLossError: Unable to open table file in TensorFlow (Python)

Date: 2018-12-05 08:31:11

Tags: python firebase tensorflow deep-learning tensorflow-lite

I have built a DNN with the TensorFlow backend that recognizes 5 categories of images. I want to convert this TensorFlow model to TensorFlow Lite, but when I try the conversion the following error occurs.


    raise type(e)(node_def, op, message)

    DataLossError: Unable to open table file D:\My Projects\FinalProject_Vr_02\snakes-0.001-2conv-basic.model.meta: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator? [[node save_1/RestoreV2 (defined at C:\Users\Asus\Anaconda3\lib\site-packages\tflearn\helpers\trainer.py:147) = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_BOOL, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save_1/Const_0_0, save_1/RestoreV2/tensor_names, save_1/RestoreV2/shape_and_slices)]]


    Caused by op 'save_1/RestoreV2', defined at:
      File "C:\Users\Asus\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "C:\Users\Asus\Anaconda3\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\spyder_kernels\console\__main__.py", line 11, in <module>
        start.main()
      File "C:\Users\Asus\Anaconda3\lib\site-packages\spyder_kernels\console\start.py", line 310, in main
        kernel.start()
      File "C:\Users\Asus\Anaconda3\lib\site-packages\ipykernel\kernelapp.py", line 505, in start
        self.io_loop.start()
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tornado\platform\asyncio.py", line 132, in start
        self.asyncio_loop.run_forever()
      File "C:\Users\Asus\Anaconda3\lib\asyncio\base_events.py", line 427, in run_forever
        self._run_once()
      File "C:\Users\Asus\Anaconda3\lib\asyncio\base_events.py", line 1440, in _run_once
        handle._run()
      File "C:\Users\Asus\Anaconda3\lib\asyncio\events.py", line 145, in _run
        self._callback(*self._args)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tornado\ioloop.py", line 758, in _run_callback
        ret = callback()
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tornado\stack_context.py", line 300, in null_wrapper
        return fn(*args, **kwargs)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tornado\gen.py", line 1233, in inner
        self.run()
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tornado\gen.py", line 1147, in run
        yielded = self.gen.send(value)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 357, in process_one
        yield gen.maybe_future(dispatch(*args))
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tornado\gen.py", line 326, in wrapper
        yielded = next(result)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 267, in dispatch_shell
        yield gen.maybe_future(handler(stream, idents, msg))
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tornado\gen.py", line 326, in wrapper
        yielded = next(result)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 534, in execute_request
        user_expressions, allow_stdin)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tornado\gen.py", line 326, in wrapper
        yielded = next(result)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\ipykernel\ipkernel.py", line 294, in do_execute
        res = shell.run_cell(code, store_history=store_history, silent=silent)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\ipykernel\zmqshell.py", line 536, in run_cell
        return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2819, in run_cell
        raw_cell, store_history, silent, shell_futures)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2845, in _run_cell
        return runner(coro)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\IPython\core\async_helpers.py", line 67, in _pseudo_sync_runner
        coro.send(None)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3020, in run_cell_async
        interactivity=interactivity, compiler=compiler, result=result)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3191, in run_ast_nodes
        if (yield from self.run_code(code, result)):
      File "C:\Users\Asus\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3267, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
      File "", line 1, in <module>
        runfile('D:/My Projects/FinalProject_Vr_02/cnn.py', wdir='D:/My Projects/FinalProject_Vr_02')
      File "C:\Users\Asus\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 704, in runfile
        execfile(filename, namespace)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 108, in execfile
        exec(compile(f.read(), filename, 'exec'), namespace)
      File "D:/My Projects/FinalProject_Vr_02/cnn.py", line 99, in <module>
        model = tflearn.DNN(convnet, tensorboard_dir='log')
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tflearn\models\dnn.py", line 65, in __init__
        best_val_accuracy=best_val_accuracy)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tflearn\helpers\trainer.py", line 147, in __init__
        allow_empty=True)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 1102, in __init__
        self.build()
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 1114, in build
        self._build(self._filename, build_save=True, build_restore=True)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 1151, in _build
        build_save=build_save, build_restore=build_restore)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 795, in _build_internal
        restore_sequentially, reshape)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 406, in _AddRestoreOps
        restore_sequentially)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 862, in bulk_restore
        return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_io_ops.py", line 1465, in restore_v2
        shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
        op_def=op_def)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
        return func(*args, **kwargs)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3274, in create_op
        op_def=op_def)
      File "C:\Users\Asus\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
        self._traceback = tf_stack.extract_stack()


    DataLossError (see above for traceback): Unable to open table file D:\My Projects\FinalProject_Vr_02\snakes-0.001-2conv-basic.model.meta: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator? [[node save_1/RestoreV2 (defined at C:\Users\Asus\Anaconda3\lib\site-packages\tflearn\helpers\trainer.py:147) = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_BOOL, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save_1/Const_0_0, save_1/RestoreV2/tensor_names, save_1/RestoreV2/shape_and_slices)]]
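If I am reading the message correctly, the RestoreV2 op is being handed my .meta graph-definition file where it expects checkpoint data, which seems to be what "not an sstable (bad magic number)" refers to. For comparison, here is a minimal sketch of the restore pattern that I believe tf.train.Saver expects (file names are mine; note that restore() takes the checkpoint prefix without the .meta extension):

    import tensorflow as tf

    with tf.Session() as sess:
        # The .meta file describes only the graph structure...
        saver = tf.train.import_meta_graph('snakes-0.001-2conv-basic.model.meta')
        # ...while restore() takes the checkpoint prefix and reads the
        # .index/.data files that hold the actual weights.
        saver.restore(sess, 'snakes-0.001-2conv-basic.model')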


How can I fix this? My full script is below.

    import cv2                
    import numpy as np    
    import os             
    from random import shuffle
    from tqdm import tqdm     

    TRAIN_DIR = 'D:\\My Projects\\Dataset\\dataset5_for_testing\\train'
    TEST_DIR = 'D:\\My Projects\\Dataset\\dataset5_for_testing\\test'
    IMG_SIZE = 50
    LR = 1e-3

    MODEL_NAME = 'snakes-{}-{}.model'.format(LR, '2conv-basic')
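    # With the values above, this evaluates to 'snakes-0.001-2conv-basic.model'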

    def label_img(img):
        # The class letter is the first character of the part of the file
        # name before the extension, e.g. label_img('A1.jpg') -> [0,0,0,0,1]
        word_label = img.split('.')[-2][:1]

        # One-hot encode the five classes
        if word_label == 'A': return [0,0,0,0,1]

        elif word_label == 'B': return [0,0,0,1,0]

        elif word_label == 'C': return [0,0,1,0,0]

        elif word_label == 'D': return [0,1,0,0,0]

        elif word_label == 'E': return [1,0,0,0,0]

    def create_train_data():
        training_data = []
        for img in tqdm(os.listdir(TRAIN_DIR)):
            label = label_img(img)
            path = os.path.join(TRAIN_DIR,img)
            img = cv2.imread(path,cv2.IMREAD_GRAYSCALE)
            img = cv2.resize(img, (IMG_SIZE,IMG_SIZE))
            training_data.append([np.array(img),np.array(label)])
        shuffle(training_data)
        np.save('train_data.npy', training_data)
        return training_data


    def process_test_data():
        testing_data = []
        for img in tqdm(os.listdir(TEST_DIR)):
            path = os.path.join(TEST_DIR,img)
            img_num = img.split('.')[0]
            img = cv2.imread(path,cv2.IMREAD_GRAYSCALE)
            img = cv2.resize(img, (IMG_SIZE,IMG_SIZE))
            testing_data.append([np.array(img), img_num])
        shuffle(testing_data)
        np.save('test_data.npy', testing_data)
        return testing_data

    train_data = create_train_data()


    import tflearn
    from tflearn.layers.conv import conv_2d, max_pool_2d
    from tflearn.layers.core import input_data, dropout, fully_connected
    from tflearn.layers.estimator import regression




    import tensorflow as tf
    tf.reset_default_graph()

    convnet = input_data(shape=[None, IMG_SIZE, IMG_SIZE, 1], name='input')

    convnet = conv_2d(convnet, 32, 5, activation='relu')
    convnet = max_pool_2d(convnet, 5)

    convnet = conv_2d(convnet, 64, 5, activation='relu')
    convnet = max_pool_2d(convnet, 5)

    convnet = conv_2d(convnet, 128, 5, activation='relu')
    convnet = max_pool_2d(convnet, 5)

    convnet = conv_2d(convnet, 64, 5, activation='relu')
    convnet = max_pool_2d(convnet, 5)

    convnet = conv_2d(convnet, 32, 5, activation='relu')
    convnet = max_pool_2d(convnet, 5)

    convnet = fully_connected(convnet, 1024, activation='relu')
    convnet = dropout(convnet, 0.8)

    convnet = fully_connected(convnet, 5, activation='softmax')
    convnet = regression(convnet, optimizer='adam', learning_rate=LR, loss='categorical_crossentropy', name='targets')

    model = tflearn.DNN(convnet, tensorboard_dir='log')



    if os.path.exists('{}.meta'.format(MODEL_NAME)):
        model.load(MODEL_NAME)
        print('model loaded!')

    #train = train_data[:-500]
    #test = train_data[-500:]

    train = train_data[:-200]
    test = train_data[-200:]

    X = np.array([i[0] for i in train]).reshape(-1,IMG_SIZE,IMG_SIZE,1)
    Y = [i[1] for i in train]

    test_x = np.array([i[0] for i in test]).reshape(-1,IMG_SIZE,IMG_SIZE,1)
    test_y = [i[1] for i in test]

    model.fit({'input': X}, {'targets': Y}, n_epoch=3, validation_set=({'input': test_x}, {'targets': test_y}), 
        snapshot_step=500, show_metric=True, run_id=MODEL_NAME)


    model.save(MODEL_NAME)
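    # As far as I understand, model.save() goes through tf.train.Saver, so it
    # writes several files next to this prefix (MODEL_NAME.meta for the graph,
    # MODEL_NAME.index and MODEL_NAME.data-00000-of-00001 for the weights, and
    # a 'checkpoint' bookkeeping file) rather than a single model file.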



    meta_path = 'snakes-0.001-2conv-basic.model.meta' # Your .meta file
    #meta_path = os.path.join(MODEL_NAME)
    #meta_path = os.path.join('snakes-{}-{}.model') # Your .meta file

    with tf.Session() as sess:

        # Restore the graph. This model.load() call appears to be what raises
        # the DataLossError above; as far as I can tell, tflearn's DNN.load()
        # also returns None rather than a Saver object.
        #saver = tf.train.import_meta_graph(meta_path)
        saver = model.load(meta_path)
        # Load weights
        saver.restore(sess, tf.train.latest_checkpoint('.'))

        # Output nodes
        output_node_names =[n.name for n in tf.get_default_graph().as_graph_def().node]

        # Freeze the graph
        frozen_graph_def = tf.graph_util.convert_variables_to_constants(
            sess,
            sess.graph_def,
            output_node_names)

        # Save the frozen graph
        with open('output_graph.pb', 'wb') as f: 
            f.write(frozen_graph_def.SerializeToString())


    model.save('output_graph.pb')
    tf.contrib.lite.TFLiteConverter
    converter = tf.contrib.lite.TFLiteConverter.from_saved_model('output_graph.pb')
    #converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_NAME)
    tflite_model = converter.convert()
    open("converted_model.tflite", "wb").write(tflite_model)

    #file_writer = tf.summary.FileWriter('/path/to/logs', sess.graph)
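For reference, here is a minimal sketch of how I currently understand the freeze-and-convert flow should look. The node names 'input/X' and 'FullyConnected_1/Softmax' are guesses on my part (what I believe tflearn names the input placeholder and the softmax output) and would need to be checked against the actual graph:

    import tensorflow as tf

    META_PATH = 'snakes-0.001-2conv-basic.model.meta'  # graph definition
    CKPT_PREFIX = 'snakes-0.001-2conv-basic.model'     # checkpoint prefix (no .meta)

    with tf.Session() as sess:
        # Rebuild the graph from the .meta file, then load the weights from
        # the checkpoint files written by model.save().
        saver = tf.train.import_meta_graph(META_PATH)
        saver.restore(sess, CKPT_PREFIX)

        # Freeze: fold the variables into the graph as constants. The output
        # node name is my guess at tflearn's softmax layer name.
        frozen_graph_def = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, ['FullyConnected_1/Softmax'])

        with open('frozen_graph.pb', 'wb') as f:
            f.write(frozen_graph_def.SerializeToString())

    # from_frozen_graph() takes a GraphDef file plus input/output tensor names;
    # 'input/X' is what I believe tflearn names the placeholder created by
    # input_data(name='input').
    converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
        'frozen_graph.pb',
        input_arrays=['input/X'],
        output_arrays=['FullyConnected_1/Softmax'])
    tflite_model = converter.convert()
    with open('converted_model.tflite', 'wb') as f:
        f.write(tflite_model)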

0 Answers:

No answers