StartCoroutine() to fix targetTexture.ReadPixels error

Date: 2019-03-27 13:04:22

标签: c# unity3d hololens

As the title says, I have a question about the error occurring on this line:

targetTexture.ReadPixels(new Rect(0, 0, cameraResolution.width, cameraResolution.height), 0, 0);

Error:

ReadPixels was called to read pixels from system frame buffer, while not inside drawing frame. UnityEngine.Texture2D:ReadPixels(Rect, Int32, Int32)

As I understand from other posts, one way to solve this is to write an IEnumerator method that yield returns new WaitForSeconds (or something similar) and start it with StartCoroutine(methodName), so that the frame is loaded in time and there are pixels to be read.
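A minimal sketch of that coroutine pattern (my assumption of what those posts suggest; WaitForEndOfFrame is used here rather than WaitForSeconds, because ReadPixels needs the frame to have finished rendering):

```csharp
using System.Collections;
using UnityEngine;

public class ScreenshotExample : MonoBehaviour
{
    private IEnumerator CaptureScreen()
    {
        // ReadPixels may only be called once the current frame has
        // finished rendering, so wait for the end of the frame first
        yield return new WaitForEndOfFrame();

        var tex = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
        tex.Apply();
    }

    private void Start()
    {
        // started as described above
        StartCoroutine(CaptureScreen());
    }
}
```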

What I don't get is where in the code below that approach would make the most sense. Which part isn't loaded in time?

    PhotoCapture photoCaptureObject = null;
    Texture2D targetTexture = null;
    public string path = "";
    CameraParameters cameraParameters = new CameraParameters();

private void Awake()
{

    var cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();
    targetTexture = new Texture2D(cameraResolution.width, cameraResolution.height);

    // Create a PhotoCapture object
    PhotoCapture.CreateAsync(false, captureObject =>
    {
        photoCaptureObject = captureObject;
        cameraParameters.hologramOpacity = 0.0f;
        cameraParameters.cameraResolutionWidth = cameraResolution.width;
        cameraParameters.cameraResolutionHeight = cameraResolution.height;
        cameraParameters.pixelFormat = CapturePixelFormat.BGRA32;
    });
}

private void Update()
{
    // if not initialized yet don't take input
    if (photoCaptureObject == null) return;

    if (Input.GetKey("k") || Input.GetKey("k"))
    {
        Debug.Log("k was pressed");

        VuforiaBehaviour.Instance.gameObject.SetActive(false);

        // Activate the camera
        photoCaptureObject.StartPhotoModeAsync(cameraParameters, result =>
        {
            if (result.success)
            {
                // Take a picture
                photoCaptureObject.TakePhotoAsync(OnCapturedPhotoToMemory);
            }
            else
            {
                Debug.LogError("Couldn't start photo mode!", this);
            }
        });
    }
}

private static string FileName(int width, int height)
{
    return $"screen_{width}x{height}_{DateTime.Now:yyyy-MM-dd_HH-mm-ss}.png";
}

private void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame)
{
    // Copy the raw image data into the target texture
    photoCaptureFrame.UploadImageDataToTexture(targetTexture);

    Resolution cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();

    targetTexture.ReadPixels(new Rect(0, 0, cameraResolution.width, cameraResolution.height), 0, 0);
    targetTexture.Apply();

    byte[] bytes = targetTexture.EncodeToPNG();

    string filename = FileName(Convert.ToInt32(targetTexture.width), Convert.ToInt32(targetTexture.height));
    //save to folder under assets
    File.WriteAllBytes(Application.streamingAssetsPath + "/Snapshots/" + filename, bytes);
    Debug.Log("The picture was uploaded");

    // Deactivate the camera
    photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
}

private void OnStoppedPhotoMode(PhotoCapture.PhotoCaptureResult result)
{
    // Shutdown the photo capture resource
    VuforiaBehaviour.Instance.gameObject.SetActive(true);
    photoCaptureObject.Dispose();
    photoCaptureObject = null;


}

Sorry if this counts as a duplicate of e.g. this.


EDIT

this might be useful.

Could it be that I simply don't need these three lines at all?

Resolution cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();
targetTexture.ReadPixels(new Rect(0, 0, cameraResolution.width, cameraResolution.height), 0, 0);
targetTexture.Apply();

As written in the comments, the difference between using these three lines and not is that the saved photo has a black background + the AR-GUI. Without the second of the three lines above, the photo has the AR-GUI but the background is the live stream of my computer's webcam. And I really don't want to see the computer webcam, but what the HoloLens sees.

1 Answer:

Answer 0 (score: 1)

Your three lines

Resolution cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();

targetTexture.ReadPixels(new Rect(0, 0, cameraResolution.width, cameraResolution.height), 0, 0);
targetTexture.Apply();

make little sense to me. Texture2D.ReadPixels is used for creating screenshots, so you would overwrite the texture you just received from the PhotoCapture with a screenshot? (The dimensions are also incorrect, since the camera resolution is most likely != the screen resolution.)
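To make that distinction concrete, here is a hedged sketch of the two data flows (ScreenshotVsUpload is a hypothetical helper name; it is not part of the original question or answer):

```csharp
using UnityEngine;
// PhotoCapture's namespace depends on the Unity version:
// UnityEngine.XR.WSA.WebCam in older versions, UnityEngine.Windows.WebCam later
using UnityEngine.Windows.WebCam;

public static class ScreenshotVsUpload
{
    // ReadPixels copies from the SCREEN (frame buffer) into a texture;
    // it must run at the end of a frame, e.g. inside a coroutine
    public static Texture2D GrabScreen()
    {
        var tex = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
        tex.Apply();
        return tex;
    }

    // UploadImageDataToTexture copies the CAPTURED PHOTO into the texture;
    // after this call targetTexture already holds the image - no ReadPixels needed
    public static void GrabPhoto(PhotoCaptureFrame frame, Texture2D targetTexture)
    {
        frame.UploadImageDataToTexture(targetTexture);
    }
}
```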

That is also the reason for

As written in the comments, the difference between using these three lines and not is that the saved photo has a black background + the AR-GUI.

After doing

photoCaptureFrame.UploadImageDataToTexture(targetTexture);

you have already received the image of the PhotoCapture in the Texture2D targetTexture.

I think you might be confusing it with Texture2D.GetPixels, which is used for getting the pixel data of a given Texture2D.


I want to eventually crop the captured photo from the center, and thought that maybe this line of code could achieve that? Starting the new Rect at some pixels other than 0, 0.

What you actually want is to crop the received Texture2D from the center, as mentioned in the comments. You can do that using GetPixels(int x, int y, int blockWidth, int blockHeight, int miplevel), which is used to cut out a certain area of a given Texture2D:

public static Texture2D CropAroundCenter(Texture2D input, Vector2Int newSize)
{
    if (input.width < newSize.x || input.height < newSize.y)
    {
        Debug.LogError("You can't cut out an area of an image which is bigger than the image itself!");
        return null;
    }

    // get the pixel coordinate of the center of the input texture
    var center = new Vector2Int(input.width / 2, input.height / 2);

    // Get pixels around center
    // GetPixels starts with 0,0 in the bottom left corner
    // so as the name says, center.x, center.y would get the pixel in the center
    // we want to start getting pixels from center - half of the newSize
    //
    // then starting there we want to read newSize pixels in both dimensions
    var pixels = input.GetPixels(center.x - newSize.x / 2, center.y - newSize.y / 2, newSize.x, newSize.y, 0);

    // Create a new texture with newSize
    var output = new Texture2D(newSize.x, newSize.y);
    output.SetPixels(pixels);
    output.Apply();

    return output;
}

For (hopefully) better understanding, here is an illustration of what that overload of GetPixels does for the given values:

(image: illustration of the area GetPixels reads for the given values)

and then use it in

private void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame)
{
    // Copy the raw image data into the target texture
    photoCaptureFrame.UploadImageDataToTexture(targetTexture);

    // for example take only half of the texture's width and height
    targetTexture = CropAroundCenter(targetTexture, new Vector2Int(targetTexture.width / 2, targetTexture.height / 2));

    byte[] bytes = targetTexture.EncodeToPNG();

    string filename = FileName(Convert.ToInt32(targetTexture.width), Convert.ToInt32(targetTexture.height));
    //save to folder under assets
    File.WriteAllBytes(Application.streamingAssetsPath + "/Snapshots/" + filename, bytes);
    Debug.Log("The picture was uploaded");

    // Deactivate the camera
    photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
}

Or you can also separate it into an extension method, e.g. in a static class
public static class Texture2DExtensions
{
    public static void CropAroundCenter(this Texture2D input, Vector2Int newSize)
    {
        if (input.width < newSize.x || input.height < newSize.y)
        {
            Debug.LogError("You can't cut out an area of an image which is bigger than the image itself!");
            return;
        }

        // get the pixel coordinate of the center of the input texture
        var center = new Vector2Int(input.width / 2, input.height / 2);

        // Get pixels around center
        // Get Pixels starts width 0,0 in the bottom left corner
        // so as the name says, center.x,center.y would get the pixel in the center
        // we want to start getting pixels from center - half of the newSize 
        //
        // than from starting there we want to read newSize pixels in both dimensions
        var pixels = input.GetPixels(center.x - newSize.x / 2, center.y - newSize.y / 2, newSize.x, newSize.y, 0);

        // Resize the texture (creating a new one didn't work)
        input.Resize(newSize.x, newSize.y);

        input.SetPixels(pixels);
        input.Apply(true);
    }
}

and use it like

targetTexture.CropAroundCenter(new Vector2Int(targetTexture.width / 2, targetTexture.height / 2));

Note:

UploadImageDataToTexture: You may only use this method if you specified the BGRA32 format in your CameraParameters.

Luckily you use that anyway ;)

Keep in mind that this operation will happen on the main thread and therefore be slow.

However, the only alternative is CopyRawImageDataIntoBuffer and generating the texture in another thread or externally, so I'd say it is okay to stick with UploadImageDataToTexture ;)

The captured image will also appear flipped on the HoloLens. You can reorient the image with a custom shader.

By flipped they actually mean that the texture's Y-Axis is upside down. The X-Axis is correct.

For flipping the texture vertically you can use a second extension method:

public static class Texture2DExtensions
{
    public static void CropAroundCenter(){....}

    public static void FlipVertically(this Texture2D texture)
    {
        var pixels = texture.GetPixels();
        var flippedPixels = new Color[pixels.Length];

        // These for loops are for running through each individual pixel and 
        // write them with inverted Y coordinates into the flippedPixels
        for (var x = 0; x < texture.width; x++)
        {
            for (var y = 0; y < texture.height; y++)
            {
                var pixelIndex = x + y * texture.width;
                var flippedIndex = x  + (texture.height - 1 - y) * texture.width;

                flippedPixels[flippedIndex] = pixels[pixelIndex];
            }
        }

        texture.SetPixels(flippedPixels);
        texture.Apply();
    }
}
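As a side note (my addition, not part of the original answer): the same flip can also be done with in-place row swaps using GetPixels32/SetPixels32, which avoids allocating a second pixel array and skips the float-based Color conversion; a sketch:

```csharp
using UnityEngine;

public static class Texture2DFlipExtensions
{
    // Alternative sketch: swap each row with its vertical mirror in place.
    // Color32 avoids the float conversion that GetPixels/SetPixels perform.
    public static void FlipVerticallyInPlace(this Texture2D texture)
    {
        var pixels = texture.GetPixels32();
        var width = texture.width;
        var height = texture.height;

        for (var y = 0; y < height / 2; y++)
        {
            for (var x = 0; x < width; x++)
            {
                var top = x + y * width;
                var bottom = x + (height - 1 - y) * width;

                // swap the pixel with its vertically mirrored counterpart
                var tmp = pixels[top];
                pixels[top] = pixels[bottom];
                pixels[bottom] = tmp;
            }
        }

        texture.SetPixels32(pixels);
        texture.Apply();
    }
}
```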

and use it like

targetTexture.FlipVertically();

Result: (For this example and the given texture I used FlipVertically every second and also cropped it to half its size, but it should work the same for a taken picture.)

(image: sample texture cropped to half size and flipped vertically)

Image source: http://developer.vuforia.com/sites/default/files/sample-apps/targets/imagetargets_targets.pdf


UPDATE

Your button issue:

Instead of using

if (Input.GetKey("k") || Input.GetKey("k"))

note first that you are checking the exact same condition twice there. Second, GetKey fires every frame while the key stays pressed. Use

if (Input.GetKeyDown("k"))

instead, so it only fires once per press. I guess Vuforia and PhotoCapture ran into trouble because your original version fired very often, so maybe you had some concurrent PhotoCapture processes...