I need to read an image's pixel colors; the image will only contain black and white. If a pixel is white I want to instantiate a white cube, and if it is black, a black cube. This is all new to me, so I did some digging and ended up using System.Drawing and bitmaps. But now I'm stuck: I don't know how to check for a white pixel.
For example:
private void Pixelreader()
{
    Bitmap img = new Bitmap("ImageName.png");
    for (int i = 0; i < img.Width; i++)
    {
        for (int j = 0; j < img.Height; j++)
        {
            System.Drawing.Color pixel = img.GetPixel(i, j);
            if (pixel == *if pixel is white*)
            {
                // instantiate white cube.
            }
        }
    }
}
Is there another way to do this? Thanks!
Answer 0 (score: 2)
If the image really is pure black and white (i.e. every pixel is either System.Drawing.Color.Black or System.Drawing.Color.White), then you can compare against those colors directly. Note, however, that System.Drawing.Color's == operator compares the colors' named-color state rather than just their channel values, and GetPixel returns a color built from raw ARGB values, so compare the ARGB values instead. In the code you posted, that looks like:
if (pixel.ToArgb() == System.Drawing.Color.White.ToArgb())
{
    // instantiate white cube.
}
If the image is part of your Unity assets, a better approach is to read it with Resources. Put the image in the Assets/Resources folder; then you can load it with the following code (note that Resources.Load takes the path without the file extension):
Texture2D image = (Texture2D)Resources.Load("ImageName");
If the image is entirely black or entirely white, there is no need to loop; just check a single pixel:
if (image.GetPixel(0, 0) == Color.white)
{
    // Instantiate white cube
}
else
{
    // Instantiate black cube
}
Answer 1 (score: 0)
You can actually load the image as a resource into a UnityEngine.Texture2D, then use Color.grayscale to check whether the color you get is close enough to white.
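A short sketch of that idea; the 0.5 cutoff is an arbitrary assumption, not part of the answer.

```csharp
using UnityEngine;

public static class PixelClassifier
{
    // Color.grayscale is the color's luminance in [0, 1]; thresholding it
    // tolerates pixels that are only approximately white. The 0.5 cutoff
    // is an assumed value.
    public static bool IsWhite(Color pixel)
    {
        return pixel.grayscale > 0.5f;
    }
}
```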
Answer 2 (score: -1)
It sounds like you are overcomplicating this; you could instead use functionality already built into Unity. Try reading the pixel color during a raycast.
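One way to do that is via RaycastHit.textureCoord, which gives the UV coordinate at the hit point. The sketch below is an assumption about what this answer had in mind; it requires a MeshCollider on the target and a readable texture.

```csharp
using UnityEngine;

public class RaycastPixelReader : MonoBehaviour
{
    void Update()
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        // textureCoord is only populated when the hit collider is a MeshCollider.
        if (Physics.Raycast(ray, out RaycastHit hit) && hit.collider is MeshCollider)
        {
            Texture2D tex = (Texture2D)hit.collider
                .GetComponent<Renderer>().material.mainTexture;
            // Convert the UV coordinate to pixel coordinates and sample.
            Color pixel = tex.GetPixel(
                (int)(hit.textureCoord.x * tex.width),
                (int)(hit.textureCoord.y * tex.height));
            Debug.Log(pixel);
        }
    }
}
```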