How to take a picture with the camera using ARCore

Date: 2018-01-10 16:05:42

Tags: android camera arcore

The ARCore Camera doesn't seem to support takePicture. https://developers.google.com/ar/reference/java/com/google/ar/core/Camera

Does anyone know how to take a picture with ARCore?

3 Answers:

Answer 0: (score: 5)

I assume you mean a picture of what the camera sees along with the AR objects. At a high level, you need permission to write to external storage so the picture can be saved, copy the frame from OpenGL, and then save it as a PNG (for example). Here are the specifics:

Add the WRITE_EXTERNAL_STORAGE permission to AndroidManifest.xml:

   <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

Then change CameraPermissionHelper to iterate over both the CAMERA and WRITE_EXTERNAL_STORAGE permissions to make sure they are granted:

 private static final String[] REQUIRED_PERMISSIONS = {
          Manifest.permission.WRITE_EXTERNAL_STORAGE,
          Manifest.permission.CAMERA
  };

  /**
   * Check to see we have the necessary permissions for this app.
   */
  public static boolean hasCameraPermission(Activity activity) {
    for (String p : REQUIRED_PERMISSIONS) {
      if (ContextCompat.checkSelfPermission(activity, p) !=
            PackageManager.PERMISSION_GRANTED) {
        return false;
      }
    }
    return true;
  }

  /**
   * Check to see we have the necessary permissions for this app,
   *   and ask for them if we don't.
   */
  public static void requestCameraPermission(Activity activity) {
    ActivityCompat.requestPermissions(activity, REQUIRED_PERMISSIONS,
            CAMERA_PERMISSION_CODE);
  }

  /**
   * Check to see if we need to show the rationale for this permission.
   */
  public static boolean shouldShowRequestPermissionRationale(Activity activity) {
    for (String p : REQUIRED_PERMISSIONS) {
      if (ActivityCompat.shouldShowRequestPermissionRationale(activity, p)) {
        return true;
      }
    }
    return false;
  }

Next, add a couple of fields to HelloARActivity to track the dimensions of the frame, and a boolean flag indicating when to save a picture.

 private int mWidth;
 private int mHeight;
 private boolean capturePicture = false;

Set the width and height in onSurfaceChanged():
 public void onSurfaceChanged(GL10 gl, int width, int height) {
     mDisplayRotationHelper.onSurfaceChanged(width, height);
     GLES20.glViewport(0, 0, width, height);
     mWidth = width;
     mHeight = height;
 }

At the bottom of onDrawFrame(), add a check for the capture flag. This should be done after all the other drawing has happened:

         if (capturePicture) {
             capturePicture = false;
             try {
                 SavePicture();
             } catch (IOException e) {
                 Log.e(TAG, "Failed to save picture", e);
             }
         }

Then add an onClick method for the button that takes the picture, and the actual code that saves the image:

  public void onSavePicture(View view) {
    // Here we just set a flag so we can copy the image
    // from the onDrawFrame() method. This is required
    // for OpenGL so that we are on the rendering thread.
    this.capturePicture = true;
  }

  /**
   * Call from the GLThread to save a picture of the current frame.
   */
  public void SavePicture() throws IOException {
    int[] pixelData = new int[mWidth * mHeight];

    // Read the pixels from the current GL frame.
    IntBuffer buf = IntBuffer.wrap(pixelData);
    buf.position(0);
    GLES20.glReadPixels(0, 0, mWidth, mHeight,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);

    // Create a file in the Pictures/HelloAR album.
    final File out = new File(Environment.getExternalStoragePublicDirectory(
            Environment.DIRECTORY_PICTURES) + "/HelloAR", "Img" +
            Long.toHexString(System.currentTimeMillis()) + ".png");

    // Make sure the directory exists
    if (!out.getParentFile().exists()) {
      out.getParentFile().mkdirs();
    }

    // Convert the pixel data from RGBA to what Android wants, ARGB.
    int[] bitmapData = new int[pixelData.length];
    for (int i = 0; i < mHeight; i++) {
      for (int j = 0; j < mWidth; j++) {
        int p = pixelData[i * mWidth + j];
        int b = (p & 0x00ff0000) >> 16;
        int r = (p & 0x000000ff) << 16;
        int ga = p & 0xff00ff00;
        bitmapData[(mHeight - i - 1) * mWidth + j] = ga | r | b;
      }
    }
    // Create a bitmap.
    Bitmap bmp = Bitmap.createBitmap(bitmapData,
                     mWidth, mHeight, Bitmap.Config.ARGB_8888);

    // Write it to disk.
    FileOutputStream fos = new FileOutputStream(out);
    bmp.compress(Bitmap.CompressFormat.PNG, 100, fos);
    fos.flush();
    fos.close();
    runOnUiThread(new Runnable() {
      @Override
      public void run() {
        showSnackbarMessage("Wrote " + out.getName(), false);
      }
    });
  }
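The RGBA-to-ARGB loop in SavePicture() is easy to get wrong, so here is a self-contained sketch (class and method names are mine, not from the answer) that isolates the same bit operations and checks them against a known pixel. On a little-endian device, glReadPixels fills each int with the bytes R, G, B, A from low to high, so the low byte is R and bits 16-23 are B; the swizzle swaps those two channels into Android's ARGB_8888 layout:

```java
public class SwizzleDemo {
    // Same bit operations as the conversion loop in SavePicture():
    // swap the R and B channels of a little-endian RGBA int (as read
    // by glReadPixels) into Android's ARGB_8888 layout.
    static int rgbaToArgb(int p) {
        int b = (p & 0x00ff0000) >> 16;  // B sits in bits 16-23
        int r = (p & 0x000000ff) << 16;  // R sits in the low byte
        int ga = p & 0xff00ff00;         // G and A stay in place
        return ga | r | b;
    }

    public static void main(String[] args) {
        // Opaque pure red in the GL buffer: A=0xff, B=0x00, G=0x00, R=0xff.
        int glRed = 0xff0000ff;
        System.out.println(Integer.toHexString(rgbaToArgb(glRed))); // ffff0000
    }
}
```

The vertical flip in the original loop (`mHeight - i - 1`) is separate bookkeeping: OpenGL's origin is the bottom-left corner, while Bitmap's is the top-left.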

The final step is to add the button to the end of the activity_main.xml layout:
<Button
    android:id="@+id/fboRecord_button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_alignStart="@+id/surfaceview"
    android:layout_alignTop="@+id/surfaceview"
    android:onClick="onSavePicture"
    android:text="Snap"
    tools:ignore="OnClick"/>

Answer 1: (score: 1)

Sorry for the late answer. You can take a picture in ARCore with the following code:

private String generateFilename() {
    String date =
            new SimpleDateFormat("yyyyMMddHHmmss", java.util.Locale.getDefault()).format(new Date());
    return Environment.getExternalStoragePublicDirectory(
            Environment.DIRECTORY_PICTURES) + File.separator + "Sceneform/" + date + "_screenshot.jpg";
}

private void saveBitmapToDisk(Bitmap bitmap, String filename) throws IOException {

    File out = new File(filename);
    if (!out.getParentFile().exists()) {
        out.getParentFile().mkdirs();
    }
    try (FileOutputStream outputStream = new FileOutputStream(filename);
         ByteArrayOutputStream outputData = new ByteArrayOutputStream()) {
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, outputData);
        outputData.writeTo(outputStream);
        outputStream.flush();
    } catch (IOException ex) {
        throw new IOException("Failed to save bitmap to disk", ex);
    }
}

private void takePhoto() {
    final String filename = generateFilename();
    /*ArSceneView view = fragment.getArSceneView();*/
    mSurfaceView = findViewById(R.id.surfaceview);
    // Create a bitmap the size of the scene view.
    final Bitmap bitmap = Bitmap.createBitmap(mSurfaceView.getWidth(), mSurfaceView.getHeight(),
            Bitmap.Config.ARGB_8888);

    // Create a handler thread to offload the processing of the image.
    final HandlerThread handlerThread = new HandlerThread("PixelCopier");
    handlerThread.start();
    // Make the request to copy.
    PixelCopy.request(mSurfaceView, bitmap, (copyResult) -> {
        if (copyResult == PixelCopy.SUCCESS) {
            try {
                saveBitmapToDisk(bitmap, filename);
            } catch (IOException e) {
                Toast toast = Toast.makeText(DrawAR.this, e.toString(),
                        Toast.LENGTH_LONG);
                toast.show();
                return;
            }
            Snackbar snackbar = Snackbar.make(findViewById(android.R.id.content),
                    "Photo saved", Snackbar.LENGTH_LONG);
            snackbar.setAction("Open in Photos", v -> {
                File photoFile = new File(filename);

                Uri photoURI = FileProvider.getUriForFile(DrawAR.this,
                        DrawAR.this.getPackageName() + ".ar.codelab.name.provider",
                        photoFile);
                Intent intent = new Intent(Intent.ACTION_VIEW, photoURI);
                intent.setDataAndType(photoURI, "image/*");
                intent.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
                startActivity(intent);

            });
            snackbar.show();
        } else {
            Log.d("DrawAR", "Failed to copyPixels: " + copyResult);
            Toast toast = Toast.makeText(DrawAR.this,
                    "Failed to copyPixels: " + copyResult, Toast.LENGTH_LONG);
            toast.show();
        }
        handlerThread.quitSafely();
    }, new Handler(handlerThread.getLooper()));
}

Answer 2: (score: 1)

Getting the image buffer

In the latest ARCore SDK, we can access the image buffer through the public Frame class. Below is sample code that gives us access to the image buffer:

private void onSceneUpdate(FrameTime frameTime) {
    try {
        Frame currentFrame = sceneView.getArFrame();
        Image currentImage = currentFrame.acquireCameraImage();
        int imageFormat = currentImage.getFormat();
        if (imageFormat == ImageFormat.YUV_420_888) {
            Log.d("ImageFormat", "Image format is YUV_420_888");
        }
        // Release the image so later frames can be acquired.
        currentImage.close();
    } catch (NotYetAvailableException e) {
        Log.e("ImageFormat", "Camera image not yet available", e);
    }
}
onSceneUpdate() will be called on every update if you register it with setOnUpdateListener(). The image will be in YUV_420_888 format, but it will have the full field of view of the native high-resolution camera.
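For a sense of the buffer sizes involved: a packed 4:2:0 frame carries one luma byte per pixel plus two chroma samples per 2x2 block, i.e. 1.5 bytes per pixel. This is the total the NV21 conversion below ends up allocating. A small illustrative helper (mine, not part of the answer):

```java
public class Nv21Size {
    // Expected NV21 buffer length for a width x height YUV 4:2:0 frame:
    // a full-resolution Y plane plus interleaved V/U at quarter resolution.
    static int nv21Length(int width, int height) {
        return width * height + 2 * ((width + 1) / 2) * ((height + 1) / 2);
    }

    public static void main(String[] args) {
        System.out.println(nv21Length(640, 480));   // 460800
        System.out.println(nv21Length(1920, 1080)); // 3110400
    }
}
```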

Writing the acquired image buffer to a file

The following implementation converts the YUV buffer to a compressed JPEG byte array:

private static byte[] NV21toJPEG(byte[] nv21, int width, int height) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
    yuv.compressToJpeg(new Rect(0, 0, width, height), 100, out);
    return out.toByteArray();
}

public static void WriteImageInformation(Image image, String path) throws IOException {
    byte[] data = NV21toJPEG(YUV_420_888toNV21(image),
                image.getWidth(), image.getHeight());
    BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(path));
    bos.write(data);
    bos.flush();
    bos.close();
}

private static byte[] YUV_420_888toNV21(Image image) {
    byte[] nv21;
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();

    int ySize = yBuffer.remaining();
    int uSize = uBuffer.remaining();
    int vSize = vBuffer.remaining();

    nv21 = new byte[ySize + uSize + vSize];

    //U and V are swapped
    yBuffer.get(nv21, 0, ySize);
    vBuffer.get(nv21, ySize, vSize);
    uBuffer.get(nv21, ySize + vSize, uSize);

    return nv21;
}
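To make the copy order concrete, here is a tiny self-contained version of the same plane concatenation (names are mine), run on a synthetic 2x2 frame: the Y plane is copied first, then V, then U, which produces the interleaved V/U tail that NV21 expects and is why the comment above says "U and V are swapped". Note this assumes tightly packed planes (pixel stride 1); real YUV_420_888 images may have row and pixel strides that this simple copy ignores:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class Nv21Demo {
    // Same copy order as YUV_420_888toNV21: Y plane, then V, then U.
    static byte[] toNV21(ByteBuffer y, ByteBuffer u, ByteBuffer v) {
        int ySize = y.remaining(), uSize = u.remaining(), vSize = v.remaining();
        byte[] nv21 = new byte[ySize + uSize + vSize];
        y.get(nv21, 0, ySize);
        v.get(nv21, ySize, vSize);            // V comes first in NV21
        u.get(nv21, ySize + vSize, uSize);
        return nv21;
    }

    public static void main(String[] args) {
        ByteBuffer y = ByteBuffer.wrap(new byte[]{1, 2, 3, 4}); // 2x2 luma
        ByteBuffer u = ByteBuffer.wrap(new byte[]{10});          // one U sample
        ByteBuffer v = ByteBuffer.wrap(new byte[]{20});          // one V sample
        System.out.println(Arrays.toString(toNV21(y, u, v)));    // [1, 2, 3, 4, 20, 10]
    }
}
```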