How do I deal with excessive GPU memory usage in my Keras code?

Time: 2019-01-15 08:35:55

Tags: python memory split keras

I am working on an image retrieval project, but the query images and reference images together exceed GPU memory (24 GB). My code is below.

def infer(queries, db):

    # Query number: 195
    # Reference(DB) number: 1,127
    # Total (query + reference): 1,322

    queries, query_img, references, reference_img = preprocess(queries, db)

    print('test data load queries {} query_img {} references {} reference_img {}'.
          format(len(queries), len(query_img), len(references), len(reference_img)))

    queries = np.asarray(queries)
    query_img = np.asarray(query_img)
    references = np.asarray(references)
    reference_img = np.asarray(reference_img)

    query_img = query_img.astype('float32')
    query_img /= 255
    reference_img = reference_img.astype('float32')
    reference_img /= 255

    get_feature_layer = K.function([model.layers[0].input] + [K.learning_phase()], [model.layers[-2].output])

    print('inference start')

    # inference
    query_vecs = get_feature_layer([query_img, 0])[0]

    # caching db output, db inference
    db_output = './db_infer.pkl'
    if os.path.exists(db_output):
        with open(db_output, 'rb') as f:
            reference_vecs = pickle.load(f)
    else:
        reference_vecs = get_feature_layer([reference_img, 0])[0]
        with open(db_output, 'wb') as f:
            pickle.dump(reference_vecs, f)

    # l2 normalization
    query_vecs = l2_normalize(query_vecs)
    reference_vecs = l2_normalize(reference_vecs)

    # Calculate cosine similarity
    sim_matrix = np.dot(query_vecs, reference_vecs.T)

    retrieval_results = {}

    for (i, query) in enumerate(queries):
        query = query.split('/')[-1].split('.')[0]
        sim_list = zip(references, sim_matrix[i].tolist())
        sorted_sim_list = sorted(sim_list, key=lambda x: x[1], reverse=True)

        ranked_list = [k.split('/')[-1].split('.')[0] for (k, v) in sorted_sim_list]  # ranked list

        retrieval_results[query] = ranked_list
    print('done')

    return list(zip(range(len(retrieval_results)), retrieval_results.items()))

I searched Google and found a good answer, but I am not a TensorFlow user... I am very new to this field and only use Keras. I want to split the samples into batches and feed them through the model. How can I write the splitting code in Python/Keras to fix infer()? I have been stuck for three days... please help me... I would really appreciate it.
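One common workaround (shown here only as a minimal sketch, not a verified fix) is to push the images through the feature extractor in fixed-size chunks and concatenate the results, so that only one chunk sits on the GPU at a time. It assumes the same get_feature_layer backend function and image arrays that appear in infer() above; the helper name extract_in_batches and the batch size of 64 are placeholders, not part of the original code.

import numpy as np

def extract_in_batches(feature_fn, images, batch_size=64):
    # Run the backend function over fixed-size chunks so the full array
    # never has to fit on the GPU at once. `feature_fn` is assumed to be
    # the K.function(...) built in infer(); batch_size is a placeholder
    # and should be tuned to the available GPU memory.
    chunks = []
    for start in range(0, len(images), batch_size):
        batch = images[start:start + batch_size]
        chunks.append(feature_fn([batch, 0])[0])  # 0 = test-time learning phase
    return np.concatenate(chunks, axis=0)

# Inside infer(), the two whole-array calls would then become:
#     query_vecs = extract_in_batches(get_feature_layer, query_img)
#     reference_vecs = extract_in_batches(get_feature_layer, reference_img)

An alternative with the same effect is to wrap the penultimate layer in a separate keras.models.Model(inputs=model.input, outputs=model.layers[-2].output) and call its predict(..., batch_size=...) method, which batches internally.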

0 Answers:

No answers yet.