CoreML with custom layers is buggy on devices with an Apple Neural Engine

Date: 2019-05-17 23:24:22

Tags: coreml coremltools

It seems that CoreML models with custom layers are buggy on devices with an Apple Neural Engine.

Bug symptom: on ANE devices such as the iPhone XS, the function `outputShapes` is called before `setWeightData` for custom layers. As a result, a custom layer whose output shape depends on the input weight data can crash. On older devices such as the iPad Air 2, everything works correctly. Normally, `setWeightData` must be called before `outputShapes`.
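The failure mode can be illustrated with a small pure-Python mock of the layer lifecycle (the class and method names below are illustrative, not the CoreML API): a layer whose output shape is derived from its weights breaks when the shape is requested first.

```python
class ShapeFromWeightsLayer:
    """Mock of a custom layer whose output shape depends on its weight data."""

    def __init__(self):
        self.b_shape = None  # unknown until set_weight_data is called

    def set_weight_data(self, weights):
        # weights: matrix B given as a list of rows
        self.b_shape = (len(weights), len(weights[0]))

    def output_shapes(self, input_shape):
        # A(m, k) @ B(k, n) -> (m, n); needs B's shape
        m, k = input_shape
        k_b, n = self.b_shape  # raises TypeError if weights were never set
        return (m, n)


layer = ShapeFromWeightsLayer()
try:
    # ANE-like call order: shape requested before weights are set -> crash
    layer.output_shapes((4, 3))
except TypeError:
    print("crashed: shape requested before weights were set")

# The documented call order works fine
layer.set_weight_data([[0.0] * 5 for _ in range(3)])
print(layer.output_shapes((4, 3)))  # (4, 5)
```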

Related discussion: https://forums.developer.apple.com/thread/113861

2 answers:

Answer 0 (score: 1)

The workaround is to prevent CoreML models with custom layers from running on the ANE. To do this, use https://developer.apple.com/documentation/coreml/mlcomputeunits:

let config = MLModelConfiguration()
config.computeUnits = MLComputeUnits.cpuAndGPU

However, if you have a large model, you can still use the ANE with some CoreML black magic: split the model into two CoreML parts, one without custom layers that can run on the ANE, and another that runs on the CPU or GPU. Then, on the Swift app side, connect the output of the first model to the input of the second.

For example, I have a model that generates captions for images. It consists of two parts: an image feature extractor and a caption generator.

Converting this model to CoreML requires some custom layers for the caption generator, so I split the model into two CoreML parts:

// Standard initialization of the CoreML model
let model_features = CaptionMobile_features()

// Initialize the CoreML model with options:
// prevent the model from running on the ANE, but allow the CPU and GPU
let config = MLModelConfiguration()
config.computeUnits = MLComputeUnits.cpuAndGPU
guard let model_caption = try? CaptionMobile_caption(configuration: config)
else { fatalError("Can't initialize Caption CoreML model") }

As a result, the heavy feature model runs on the ANE, which can be up to 10x faster, while the small model runs on the CPU or GPU.

Answer 1 (score: 1)

Following a suggestion from Matthijs Hollemans, there is another possible solution: if `outputShapes` depends on small data, we can store that data not in the `weights` but in the `parameters`, which are passed to the custom layer's `init`.

Python side (coremltools):

# MatMul - matrix multiplication of two matrices: A * B
def _convert_matmul(**kwargs):
    tf_op = kwargs["op"]
    coreml_nn_builder = kwargs["nn_builder"]
    constant_inputs = kwargs["constant_inputs"]

    params = NeuralNetwork_pb2.CustomLayerParams()
    # The name of the Swift or Obj-C class that implements this layer.
    params.className = 'MatMul'
    params.description = 'Custom layer that corresponds to the MatMul TF op'

    # Get specific parameters (constant inputs) of the operation
    ################
    # The matrix (B) will be stored as a weight parameter, in weights at index [0]
    w = list(constant_inputs.values())[0]

    ########
    # Store the shape of the B matrix in `parameters`, which are passed to init()
    # on the Swift app side, so that the output shape of the matrix multiplication
    # can be computed without access to the weight data
    params.parameters["b_shape_0"].intValue = w.shape[0]
    params.parameters["b_shape_1"].intValue = w.shape[1]
    ########

    # Store the constant input as weights, because it is an array/matrix
    w_as_weights = params.weights.add()
    # Supported types for WeightParams, see:
    # https://github.com/apple/coremltools/blob/5bcd4f8aa55df82792deb9a4491c3f37d5b89149/mlmodel/format/NeuralNetwork.proto#L658
    w_as_weights.floatValue.extend(map(float, w.flatten()))
    ################

    # This operation receives its first input (A) as a standard tensor,
    # and its second input as a constant, which we pass via `weights` (see above)
    input_names = [tf_op.inputs[0].name]
    # Get the list of output tensor names
    output_names = [out_tensor.name for out_tensor in tf_op.outputs]

    coreml_nn_builder.add_custom(name=tf_op.name,
                                 input_names=input_names,
                                 output_names=output_names,
                                 custom_proto_spec=params)
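With B's shape carried in `parameters`, the output shape of the layer is just the shape arithmetic of matrix multiplication. A quick sanity check of that arithmetic in pure Python (the helper name is mine, not part of coremltools):

```python
def matmul_output_shape(a_shape, b_shape):
    """Shape of A @ B: (..., m, k) x (k, n) -> (..., m, n)."""
    if a_shape[-1] != b_shape[0]:
        raise ValueError("inner dimensions must match")
    return list(a_shape[:-1]) + [b_shape[1]]


print(matmul_output_shape([4, 3], [3, 5]))  # [4, 5]
```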

Swift app side:

import CoreML

@objc(MatMul) class MatMul: NSObject, MLCustomLayer {
    private var b_shape = [Int]()

    required init(parameters: [String : Any]) throws {
        //print(String(describing: self), #function, parameters)
        super.init()

        // Parameters came from _convert_matmul() on the Python side
        b_shape.append(parameters["b_shape_0"] as? Int ?? 0)
        b_shape.append(parameters["b_shape_1"] as? Int ?? 0)
    }

    // The output shape now depends only on init-time parameters, so it is
    // safe even if the ANE calls this before setWeightData.
    // Simplified: replaces the last input dimension with B's column count.
    func outputShapes(forInputShapes inputShapes: [[NSNumber]]) throws -> [[NSNumber]] {
        var shape = inputShapes[0]
        shape[shape.count - 1] = NSNumber(value: b_shape[1])
        return [shape]
    }

    // setWeightData(_:) and evaluate(inputs:outputs:) omitted for brevity.
}