How to include certain preprocessing steps in a TensorFlow Serving model

Date: 2020-11-02 16:42:27

Tags: python tensorflow keras tensorflow-serving feature-engineering

I have built a model with several different features. For preprocessing I mostly used feature_columns, for example for bucketizing GEO information or for embedding categorical data with a large number of distinct values. In addition, before using feature_columns, I had to preprocess two of my features:

Feature "STREET"

from tensorflow.keras.preprocessing.text import Tokenizer
import tensorflow as tf

def __preProcessStreet(data, tokenizer=None):

    # Normalize street names (strip common suffixes such as "gasse", "straße", "str.", ...)
    data['STREETPRO'] = data['STREET'].apply(lambda x: __getNormalizedString(x, ["gasse", "straße", "strasse", "str.", "g.", " "], False))

    if tokenizer is None:
        # 'XXX' never occurs in the data, so each normalized street name stays a single token
        tokenizer = Tokenizer(split='XXX')
        tokenizer.fit_on_texts(data['STREETPRO'])

    street_tokenized = tokenizer.texts_to_sequences(data['STREETPRO'])

    # maxlen=1 keeps exactly one token index per street
    data['STREETW'] = tf.keras.preprocessing.sequence.pad_sequences(street_tokenized, maxlen=1)

    return data, tokenizer
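For illustration, here is a minimal, self-contained sketch (with hypothetical, already-normalized street names) of what this Tokenizer/pad_sequences combination produces; with split='XXX' each street name becomes a single token, and maxlen=1 keeps exactly one index per row:

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

streets = ["haupt", "bahnhof", "haupt"]  # hypothetical, already-normalized names

tokenizer = Tokenizer(split='XXX')  # 'XXX' never occurs, so each name stays one token
tokenizer.fit_on_texts(streets)

seqs = tokenizer.texts_to_sequences(streets)  # one integer index per street name
padded = pad_sequences(seqs, maxlen=1)        # shape (3, 1), ready for the 'STREETW' column
```

Identical street names map to identical indices, which is why the same fitted tokenizer has to be reused at serving time.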

As you can see, I perform the preprocessing steps directly on the loaded Pandas DataFrame. Afterwards, I process this new column with the help of the feature_columns mentioned above:

def __getFutureColumnStreet(street_num_words):

    street_voc = tf.feature_column.categorical_column_with_identity(
        key='STREETW', num_buckets=street_num_words)

    dim = __getNumberOfDimensions(street_num_words)

    street_embedding = feature_column.embedding_column(street_voc, dimension=dim)

    return street_embedding

Feature "NAME1"

The preprocessing steps for the NAME1 column are very similar, except that I split the NAME1 field into two separate fields, "NAME1W1" and "NAME1W2", which hold the two most frequent words from the vocabulary:

def __preProcessName(data, tokenizer=None):

    # Normalize names (strip markers such as "(asg)" and "(poasg)")
    data['NAME1PRO'] = data['NAME1'].apply(lambda x: __getNormalizedString(x, ["(asg)", "asg", "(poasg)", "poasg"]))

    if tokenizer is None:
        tokenizer = Tokenizer()
        tokenizer.fit_on_texts(data['NAME1PRO'])

    name1_tokenized = tokenizer.texts_to_sequences(data['NAME1PRO'])

    # maxlen=2 keeps the last two tokens, which become the two new columns
    name1_tokenized_pad = tf.keras.preprocessing.sequence.pad_sequences(name1_tokenized, maxlen=2, truncating='pre')

    data = pd.concat([data, pd.DataFrame(name1_tokenized_pad, columns=['NAME1W1', 'NAME1W2'])], axis=1)

    return data, tokenizer

After that, I again use feature_columns for the word embeddings:

def __getFutureColumnsName(name_num_words):

    namew1_voc = tf.feature_column.categorical_column_with_identity(
        key='NAME1W1', num_buckets=name_num_words)
    namew2_voc = tf.feature_column.categorical_column_with_identity(
        key='NAME1W2', num_buckets=name_num_words)

    dim = __getNumberOfDimensions(name_num_words)

    namew1_embedding = feature_column.embedding_column(namew1_voc, dimension=dim)
    namew2_embedding = feature_column.embedding_column(namew2_voc, dimension=dim)

    return (namew1_embedding, namew2_embedding)
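For context, a tiny self-contained check (with hypothetical data) of how such an identity + embedding column pair behaves inside a DenseFeatures layer:

```python
import tensorflow as tf
from tensorflow import feature_column

# Identity column over token indices, wrapped in a trainable embedding.
voc = feature_column.categorical_column_with_identity(key='NAME1W1', num_buckets=10)
emb = feature_column.embedding_column(voc, dimension=3)

layer = tf.keras.layers.DenseFeatures([emb])
out = layer({'NAME1W1': tf.constant([[1], [7]])})  # two rows of token ids -> shape (2, 3)
```

Each integer index is looked up in a learned embedding table, producing one dense 3-dimensional vector per row.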

Model

I am using TensorFlow's Functional API to build my model:

print("start preprocessing...")
feature_columns = feature_selection.getFutureColumns(data, args.zip, args.sc, bucketSizeGEO, False)
feature_layer = tf.keras.layers.DenseFeatures(feature_columns, trainable=True)
print("preprocessing completed")

…                

print("Step {}/{}".format(currentStep, stepNum))

feature_layer_inputs = feature_selection.getFeatureLayerInputs()
new_layer = feature_layer(feature_layer_inputs)

for _ in range(numLayers):
    new_layer = tf.keras.layers.Dense(numNodes, activation=tf.nn.swish, kernel_regularizer=regularizers.l2(reg), bias_regularizer=regularizers.l2(reg))(new_layer)
    new_layer = tf.keras.layers.Dropout(dropRate)(new_layer)

output_layer = tf.keras.layers.Dense(1, activation=tf.nn.sigmoid, kernel_regularizer=regularizers.l2(reg), bias_regularizer=regularizers.l2(reg))(new_layer)

model = tf.keras.Model(inputs=[v for v in feature_layer_inputs.values()], outputs=output_layer)

model.compile(optimizer=opt,
              loss='binary_crossentropy',
              metrics=['accuracy'])

paramString = "Arg-e{}-b{}-l{}-n{}-o{}-z{}-r{}-d{}".format(args.epoch, args.batchSize, numLayers, numNodes, opt, bucketSizeGEO, reg, dropRate)

log_dir = "logs\\neural\\" + paramString + "\\" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)

print("Start training with the following parameters:", paramString)

model.fit(train_ds,
          validation_data=val_ds,
          epochs=args.epoch,
          callbacks=[tensorboard_callback])

TensorFlow Serving

Logically, the two preprocessing steps that involve the Tokenizer are not part of the model and therefore cannot run during serving, so the POST request to the model server currently looks like this (on Windows):

curl -d "{"""instances""": [{"""NAME1W1""": [12], """NAME1W2""": [2032], """ZIP""": [""1120""], """STREETW""": [1180], """LONGITUDE""": 16.47, """LATITUDE""": 48.22, """AVIS_TYPE""": [""E""],"""ASG""": [0], """SC""": [""101""], """PREDICT""": [0]}]}" -X POST http://localhost:8501/v1/models/my_model:predict

So at the moment I am looking for a way to include these two preprocessing steps in the model, so that the POST request can look like this instead:

curl -d "{"""instances""": [{"""NAME1""": [""Max Mustermann""], """ZIP""": [""1120""], """STREET""": [""Teststraße""], """LONGITUDE""": 16.47, """LATITUDE""": 48.22, """AVIS_TYPE""": [""E""],"""ASG""": [0], """SC""": [""101""], """PREDICT""": [0]}]}" -X POST http://localhost:8501/v1/models/my_model:predict

with the same preprocessing steps now happening inside the model.

I have tried using a map function on the dataset as well as preprocessing layers, without success, since I am not sure whether they can be combined with feature_columns. I have also tried something similar to what is described here: https://keras.io/examples/structured_data/structured_data_classification_from_scratch/
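One way the tokenization could be folded into the model itself (a sketch under assumptions, not the author's code: it uses a StringLookup layer, available as tf.keras.layers.experimental.preprocessing.StringLookup in TF 2.3/2.4 and as tf.keras.layers.StringLookup in newer releases, with a hypothetical vocabulary) is to map the raw string input to indices inside the graph and feed an Embedding layer directly:

```python
import tensorflow as tf

# Hypothetical street vocabulary; in practice call lookup.adapt() on the training data.
street_vocab = ["haupt", "bahnhof", "ring"]

street_input = tf.keras.Input(shape=(1,), dtype=tf.string, name="STREET")
lookup = tf.keras.layers.StringLookup(vocabulary=street_vocab)  # unknown names map to the OOV index
ids = lookup(street_input)

emb = tf.keras.layers.Embedding(input_dim=lookup.vocabulary_size(), output_dim=4)(ids)
flat = tf.keras.layers.Flatten()(emb)
output = tf.keras.layers.Dense(1, activation="sigmoid")(flat)

model = tf.keras.Model(inputs=street_input, outputs=output)
```

A model saved this way accepts the raw STREET string in the serving request, so the Tokenizer no longer has to run outside the model.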

1 answer:

Answer 0: (score: 1)

I think what you need is the TFX Transform component. It is not part of the model but part of the pipeline, so you can easily modify the preprocessing transformations later without affecting the model.

The main piece of this component is the preprocessing_fn, a series of transformations you want to apply to the inputs. The TensorFlow guides provide a better explanation and tutorials for you to try.

Here are some references: