I am working on a Unity-Android application that recognizes hand gestures. The images I trained the model on are 50x50 black-and-white images in which the hand is segmented out via HSV values. The same preprocessing is applied when testing the model, but here is the problem: when there is no hand in front of the camera, the inaccurate HSV segmentation still picks something up (as the camera moves), and when such a (hand-less) image is fed to the model, it still reports over 80% confidence and assigns it a random class.
The images the model was trained on and the training code are linked below.
I am using TensorFlowSharp to load my model. For OpenCV, I am using OpenCV for Unity. I have 4 gestures (4 classes), with 4,000-4,500 images per class, about 17k images in total. Sample images:
Class 1
Class 2
Class 3
Class 4
If you need any other information, please let me know. Any help would be greatly appreciated.
using (var graph = new TFGraph())
{
    graph.Import(buffer);
    using (var session = new TFSession(graph))
    {
        Stopwatch sw = new Stopwatch();
        sw.Start();
        var runner = session.GetRunner();

        // Resize the touched region to the 50x50 input the model expects.
        Mat gray = new Mat();
        Mat HSVMat = new Mat();
        Imgproc.resize(touchedRegionRgba, gray, new OpenCVForUnity.Size(50, 50));
        Imgproc.cvtColor(gray, HSVMat, Imgproc.COLOR_RGB2HSV_FULL);
        Imgproc.cvtColor(gray, gray, Imgproc.COLOR_RGBA2GRAY);

        // Zero out every pixel whose HSV value falls outside the detector's
        // bounds, leaving only the (presumed) hand in the grayscale image.
        for (int i = 0; i < gray.rows(); i++)
        {
            for (int j = 0; j < gray.cols(); j++)
            {
                double[] Hvalue = HSVMat.get(i, j);
                if (!((detector.mLowerBound.val[0] <= Hvalue[0] && Hvalue[0] <= detector.mUpperBound.val[0]) &&
                      (detector.mLowerBound.val[1] <= Hvalue[1] && Hvalue[1] <= detector.mUpperBound.val[1]) &&
                      (detector.mLowerBound.val[2] <= Hvalue[2] && Hvalue[2] <= detector.mUpperBound.val[2])))
                {
                    gray.put(i, j, new byte[] { 0 });
                }
            }
        }

        // Feed the segmented image to the model and read back the class scores.
        var tensor = Util.ImageToTensorGrayScale(gray);
        //runner.AddInput(graph["conv1_input"][0], tensor);
        runner.AddInput(graph["zeropadding1_1_input"][0], tensor);
        //runner.Fetch(graph["outputlayer/Sigmoid"][0]);
        runner.Fetch(graph["outputlayer/Softmax"][0]);
        var output = runner.Run();
        var vecResults = output[0].GetValue();
        float[,] results = (float[,])vecResults;
        sw.Stop();
        int result = Util.Quantized(results);
        //numberOfFingersText.text += $"Length={results.Length} Elapsed= {sw.ElapsedMilliseconds} ms, Result={result}, Acc={results[0, result]}";
    }
}
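For reference, the per-pixel loop above amounts to a vectorized HSV range check. A minimal Python/OpenCV sketch of the same preprocessing follows; the HSV bounds here are illustrative placeholders, as the real ones come from detector.mLowerBound / detector.mUpperBound:

# Sketch of the HSV segmentation used to prepare the 50x50 inputs.
# The bounds below are illustrative placeholders, not the app's values.
import cv2
import numpy as np

def segment_hand(frame_rgb, lower_hsv=(0, 40, 60), upper_hsv=(40, 255, 255)):
    small = cv2.resize(frame_rgb, (50, 50))
    hsv = cv2.cvtColor(small, cv2.COLOR_RGB2HSV_FULL)
    gray = cv2.cvtColor(small, cv2.COLOR_RGB2GRAY)
    # Keep only pixels whose HSV value lies inside the hand's color range;
    # everything else is zeroed, mirroring the per-pixel loop above.
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    gray[mask == 0] = 0
    return gray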
# EDITED MODEL, MODEL 1
model = models.Sequential()
model.add(layers.ZeroPadding2D((2, 2), batch_input_shape=(None, 50, 50, 1), name="zeropadding1_1"))
#54x54 fed in due to zero padding
model.add(layers.Conv2D(8, (5, 5), activation='relu', name='conv1_1'))
model.add(layers.ZeroPadding2D((2, 2), name="zeropadding1_2"))
model.add(layers.Conv2D(8, (5, 5), activation='relu', name='conv1_2'))
model.add(layers.MaxPooling2D((2, 2), strides=(2, 2), name="maxpool_1")) #convert 50x50 to 25x25
#25x25 fed in
model.add(layers.ZeroPadding2D((2, 2), name="zeropadding2_1"))
model.add(layers.Conv2D(16, (5, 5), activation='relu', name='conv2_1'))
model.add(layers.ZeroPadding2D((2, 2), name="zeropadding2_2"))
model.add(layers.Conv2D(16, (5, 5), activation='relu', name='conv2_2'))
model.add(layers.MaxPooling2D((5, 5), strides=(5, 5), name="maxpool_2")) #convert 25x25 to 5x5
#5x5 fed in
model.add(layers.ZeroPadding2D((2, 2), name="zeropadding3_1"))
model.add(layers.Conv2D(40, (5, 5), activation='relu', name='conv3_1'))
model.add(layers.ZeroPadding2D((2, 2), name="zeropadding3_2"))
model.add(layers.Conv2D(32, (5, 5), activation='relu', name='conv3_2'))
model.add(layers.Dropout(0.2))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dropout(0.15))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dropout(0.1))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(4, activation='softmax', name="outputlayer"))
# MODEL 2, used a few more that I haven't mentioned
model = models.Sequential()
model.add(layers.ZeroPadding2D((2, 2), batch_input_shape=(None, 50, 50, 1), name="zeropadding1_1"))
#54x54 fed in due to zero padding
model.add(layers.Conv2D(8, (5, 5), activation='relu', name='conv1_1'))
model.add(layers.ZeroPadding2D((2, 2), name="zeropadding1_2"))
model.add(layers.Conv2D(8, (5, 5), activation='relu', name='conv1_2'))
model.add(layers.MaxPooling2D((2, 2), strides=(2, 2), name="maxpool_1")) #convert 50x50 to 25x25
#25x25 fed in
model.add(layers.ZeroPadding2D((2, 2), name="zeropadding2_1"))
model.add(layers.Conv2D(16, (5, 5), activation='relu', name='conv2_1'))
model.add(layers.ZeroPadding2D((2, 2), name="zeropadding2_2"))
model.add(layers.Conv2D(16, (5, 5), activation='relu', name='conv2_2'))
model.add(layers.MaxPooling2D((5, 5), strides=(5, 5), name="maxpool_2")) #convert 25x25 to 5x5
#5x5 fed in
model.add(layers.ZeroPadding2D((2, 2), name="zeropadding3_1"))
model.add(layers.Conv2D(40, (5, 5), activation='relu', name='conv3_1'))
model.add(layers.ZeroPadding2D((2, 2), name="zeropadding3_2"))
model.add(layers.Conv2D(32, (5, 5), activation='relu', name='conv3_2'))
model.add(layers.Dropout(0.2))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='tanh'))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(512, activation='tanh'))
model.add(layers.Dropout(0.15))
model.add(layers.Dense(512, activation='tanh'))
model.add(layers.Dropout(0.1))
model.add(layers.Dense(512, activation='tanh'))
model.add(layers.Dense(512, activation='tanh'))
model.add(layers.Dense(512, activation='tanh'))
model.add(layers.Dense(512, activation='tanh'))
model.add(layers.Dense(512, activation='tanh'))
model.add(layers.Dense(4, activation='sigmoid', name="outputlayer"))
Expected result: high confidence for the 4 classes the model was actually trained on, and low confidence for everything else.
Actual result: high confidence on the actual 4 classes as well as on any other image fed to it.
Answer (score: 1):
As I see it, the basic problem is that you cannot detect whether there is a hand in the image at all. You need to localize the hand first.
First, detect whether a hand is present. You can try a Siamese network for this task; I have used them successfully for detecting skin anomalies. See "One Shot Learning with Siamese Networks using Keras" by Harshall Lamba (https://link.medium.com/xrCQOD8ntV) and "Facial Similarity with Siamese Networks in PyTorch" by Harshvardhan Gupta (https://link.medium.com/htBzNmUCyV).
The network gives a binary output: you will see values close to one when a hand is present, and values close to zero when it is not.
Other ML models such as YOLO handle object localization, but Siamese networks are simple and lean.
A Siamese network runs the same CNN on both of its inputs, which is why it is called Siamese, i.e. conjoined. It measures the absolute difference between the two image embeddings and learns to approximate a similarity function between the images, as the sketch below illustrates.
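Here is a minimal sketch of such a hand / no-hand Siamese detector in Keras. The layer sizes, the pairing scheme, and the 0.5 decision threshold are illustrative assumptions, not tested values:

import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras import backend as K

def build_embedding_net(input_shape=(50, 50, 1)):
    # Shared CNN: both branches of the Siamese network reuse these weights.
    return models.Sequential([
        layers.Conv2D(16, (5, 5), activation='relu', input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (5, 5), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
    ])

def build_siamese(input_shape=(50, 50, 1)):
    base = build_embedding_net(input_shape)  # one CNN, applied twice
    left = layers.Input(shape=input_shape)
    right = layers.Input(shape=input_shape)
    # Absolute difference between the two image embeddings ...
    diff = layers.Lambda(lambda t: K.abs(t[0] - t[1]))([base(left), base(right)])
    # ... squashed to a similarity score in [0, 1].
    out = layers.Dense(1, activation='sigmoid')(diff)
    model = models.Model([left, right], out)
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model

# Usage: compare a segmented camera frame against a known hand image.
siamese = build_siamese()
frame = np.zeros((1, 50, 50, 1), dtype=np.float32)      # camera frame (placeholder)
reference = np.zeros((1, 50, 50, 1), dtype=np.float32)  # reference hand image
similarity = float(siamese.predict([frame, reference])[0, 0])
hand_present = similarity > 0.5  # close to 1 -> hand, close to 0 -> no hand

Training pairs would be image couples labeled 1 when both contain a hand and 0 otherwise; at inference, each camera frame is compared against one or more reference hand images and only passed to the gesture classifier when the similarity clears the threshold.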
Once detection works reliably, classification can follow.