Using WebRTC

Date: 2015-11-04 06:56:10

Tags: java android canvas webrtc

I am following these two code samples: one for sending an image from Android, and another for attaching the received image to a canvas.

Sending an image from Android using a WebRTC data channel:

https://github.com/Temasys/skylink-android-screen-sharing/blob/master/SkylinkShare/app/src/main/java/skylink/temasys/com/sg/skylinkshare/MainActivity.java

Receiving the image on the web and attaching it to a canvas using a WebRTC data channel:

https://io2014codelabs.appspot.com/static/codelabs/webrtc-file-sharing/#7

My use case: I want to continuously send screen images from Android to the web, so that it works like screen sharing from the Android device, with every change on the Android screen shown on the canvas in the web page.

Code on Android

This is the code that starts capturing the Android screen:

public void startProjection() {
    startActivityForResult(projectionManager.createScreenCaptureIntent(), SCREEN_REQUEST_CODE);
}

This is the code that extracts images from the screen capture that was just started:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    switch (requestCode) {
        case SCREEN_REQUEST_CODE:
            mediaProjection = projectionManager.getMediaProjection(resultCode, data);
            if (mediaProjection != null) {

                projectionStarted = true;

                // Initialize the media projection
                DisplayMetrics metrics = getResources().getDisplayMetrics();
                int density = metrics.densityDpi;
                int flags = DisplayManager.VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY
                        | DisplayManager.VIRTUAL_DISPLAY_FLAG_PUBLIC;

                Display display = getWindowManager().getDefaultDisplay();
                Point size = new Point();
                display.getSize(size);

                projectionDisplayWidth = size.x;
                projectionDisplayHeight = size.y;

                imageReader = ImageReader.newInstance(projectionDisplayWidth, projectionDisplayHeight
                        , PixelFormat.RGBA_8888, 2);
                mediaProjection.createVirtualDisplay("screencap",
                        projectionDisplayWidth, projectionDisplayHeight, density,
                        flags, imageReader.getSurface(), null, handler);
                imageReader.setOnImageAvailableListener(new ImageAvailableListener(), handler);
            }
            break;
    }
}

This is the image-available listener class:

private class ImageAvailableListener implements ImageReader.OnImageAvailableListener {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = null;
        FileOutputStream fos = null;
        Bitmap bitmap = null;

        ByteArrayOutputStream stream = null;

        try {
            image = imageReader.acquireLatestImage();
            if (image != null) {
                Image.Plane[] planes = image.getPlanes();
                ByteBuffer buffer = planes[0].getBuffer();
                int pixelStride = planes[0].getPixelStride();
                int rowStride = planes[0].getRowStride();
                int rowPadding = rowStride - pixelStride * projectionDisplayWidth;

                // create bitmap
                bitmap = Bitmap.createBitmap(projectionDisplayWidth + rowPadding / pixelStride,
                        projectionDisplayHeight, Bitmap.Config.ARGB_8888);
                bitmap.copyPixelsFromBuffer(buffer);

                stream = new ByteArrayOutputStream();
                bitmap.compress(Bitmap.CompressFormat.JPEG, 5, stream);


                ByteBuffer byteBuffer = ByteBuffer.wrap(stream.toByteArray());
                DataChannel.Buffer buf = new DataChannel.Buffer(byteBuffer, true);

                Log.w("CONFERENCE_SCREEN", "Image size less than chunk size condition");

                client.sendDataChannelMessage(buf);

                imagesProduced++;
                Log.w("CONFERENCE_SCREEN", "captured image: " + imagesProduced);
            }

        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (fos != null) {
                try {
                    fos.close();
                } catch (IOException ioe) {
                    ioe.printStackTrace();
                }
            }

            if (stream != null) {
                try {
                    stream.close();
                } catch (IOException ioe) {
                    ioe.printStackTrace();
                }
            }

            if (bitmap != null) {
                bitmap.recycle();
            }

            if (image != null) {
                image.close();
            }
        }
    }
}

Web code

Creating the canvas:

var canvas = document.createElement('canvas');
canvas.classList.add('incomingPhoto');
screenAndroidImage.insertBefore(canvas, screenAndroidImage.firstChild); // screenAndroidImage is a div

Whenever an image is sent from Android, I run the following code:

if (data.data.byteLength || typeof data.data !== 'string') {
      var context = canvas.getContext('2d');
      var img = context.createImageData(300, 150);
      img.data.set(data.data);
      context.putImageData(img, 0, 0);
      trace("Image chunk received");
}

In the web console I can see the image data arriving as ArrayBuffer {}, but nothing appears on the canvas.

2 Answers:

Answer 0 (score: 1)

It seems that SkylinkJS does not currently support binary transfers. A workable solution would be to encode the bytes as a Base64 string and send it to the web side as a P2P message. On the web side, convert the Base64 string back into an image and draw it onto the canvas.
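A minimal plain-Java sketch of the Android side of that workaround, using `java.util.Base64` (the `Base64ImageSender` class name is hypothetical, and the exact Skylink send call to pass the resulting string to is not shown here):

```java
import java.util.Arrays;
import java.util.Base64;

public class Base64ImageSender {
    // Encode the compressed JPEG bytes as a Base64 string so they can be
    // carried over a text-only P2P message channel.
    public static String encodeFrame(byte[] jpegBytes) {
        return Base64.getEncoder().encodeToString(jpegBytes);
    }

    // Shown in Java only to demonstrate the round trip is lossless;
    // the web side would decode with atob() or a data URL instead.
    public static byte[] decodeFrame(String base64) {
        return Base64.getDecoder().decode(base64);
    }

    public static void main(String[] args) {
        // Fake JPEG payload: SOI marker, two data bytes, EOI marker.
        byte[] fakeJpeg = {(byte) 0xFF, (byte) 0xD8, 0x01, 0x02, (byte) 0xFF, (byte) 0xD9};
        String encoded = encodeFrame(fakeJpeg);
        System.out.println(encoded);
        System.out.println(Arrays.equals(fakeJpeg, decodeFrame(encoded)));
    }
}
```

On the web side, the received string could then be displayed directly, e.g. `img.src = 'data:image/jpeg;base64,' + message`, without touching raw pixel data at all.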

For the Android SDK, see the doc API: MessagesListener sendP2PMessage. For the Web SDK, see the doc API: incomingMessage.

Answer 1 (score: 0)

I found the error and the correction. First, the ImageAvailableListener class needs to be changed to handle images larger than the WebRTC data channel's byte limit. If the image size exceeds the limit, we split the image into smaller byte chunks.

private class ImageAvailableListener implements ImageReader.OnImageAvailableListener {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = null;
        FileOutputStream fos = null;
        Bitmap bitmap = null;

        ByteArrayOutputStream stream = null;

        try {
            image = imageReader.acquireLatestImage();
            if (image != null) {
                Image.Plane[] planes = image.getPlanes();
                ByteBuffer buffer = planes[0].getBuffer();
                int pixelStride = planes[0].getPixelStride();
                int rowStride = planes[0].getRowStride();
                int rowPadding = rowStride - pixelStride * projectionDisplayWidth;

                // create bitmap
                bitmap = Bitmap.createBitmap(projectionDisplayWidth + rowPadding / pixelStride,
                        projectionDisplayHeight, Bitmap.Config.ARGB_8888);
                bitmap.copyPixelsFromBuffer(buffer);

                stream = new ByteArrayOutputStream();
                bitmap.compress(Bitmap.CompressFormat.JPEG, 5, stream);

                if(stream.toByteArray().length < 16000){
                    ByteBuffer byteBuffer = ByteBuffer.wrap(stream.toByteArray());
                    DataChannel.Buffer buf = new DataChannel.Buffer(byteBuffer, true);

                    Log.w("CONFERENCE_SCREEN", "Image size less than chunk size condition");

                    client.sendDataChannelMessage(buf);

                    client.sendDataChannelMessage(new DataChannel.Buffer(Utility.toByteBuffer("\n"), false));
                } else {
                    // todo break files in pieces here

                    ByteBuffer byteBuffer = ByteBuffer.wrap(stream.toByteArray());
                    DataChannel.Buffer buf = new DataChannel.Buffer(byteBuffer, true);
                    client.sendDataChannelMessage(buf);
                    client.sendDataChannelMessage(new DataChannel.Buffer(Utility.toByteBuffer("\n"), false));
                    //   skylinkConnection.sendData(currentRemotePeerId, stream.toByteArray());
                    Log.w("CONFERENCE_SCREEN", "sending screen data to peer :");
                }

                imagesProduced++;
                Log.w("CONFERENCE_SCREEN", "captured image: " + imagesProduced);
            }

        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (fos != null) {
                try {
                    fos.close();
                } catch (IOException ioe) {
                    ioe.printStackTrace();
                }
            }

            if (stream != null) {
                try {
                    stream.close();
                } catch (IOException ioe) {
                    ioe.printStackTrace();
                }
            }

            if (bitmap != null) {
                bitmap.recycle();
            }

            if (image != null) {
                image.close();
            }
        }
    }
}
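The `// todo break files in pieces here` branch above is never actually implemented; the else branch still sends the whole buffer in one message. A minimal plain-Java sketch of the missing splitter, assuming the same 16000-byte limit used in the condition (the `ChunkSplitter` class name is hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ChunkSplitter {
    // Must stay below the data channel's per-message byte limit.
    static final int CHUNK_SIZE = 16000;

    // Split the compressed image into pieces of at most CHUNK_SIZE bytes;
    // the receiver concatenates them in order until the announced total
    // length has been reached.
    public static List<byte[]> split(byte[] data) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < data.length; offset += CHUNK_SIZE) {
            int end = Math.min(offset + CHUNK_SIZE, data.length);
            chunks.add(Arrays.copyOfRange(data, offset, end));
        }
        return chunks;
    }
}
```

Note that the web-side listener below expects the total byte count to arrive first as a string message, so a complete sender would transmit `String.valueOf(data.length)` before the chunks.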

Web code

The following variables should be declared outside the function that listens for incoming bytes from the data channel:

var buf;
var chunks = [];
var count;

Body of the function that listens on the data channel:

if (typeof data.data === 'string') {
  // First message announces the total size of the incoming image.
  buf = new Uint8ClampedArray(parseInt(data.data));
  count = 0;
  chunks = [];
  console.log('Expecting a total of ' + buf.byteLength + ' bytes');
  return;
}
var imgdata = new Uint8ClampedArray(data.data);
console.log('image chunk');
buf.set(imgdata, count);
chunks[count] = data.data;
count += imgdata.byteLength;
if (count === buf.byteLength) {
  // We're done: all data chunks have been received.
  // buf.type is undefined on a typed array, so pass the MIME type explicitly.
  var builder = new Blob(chunks, {type: 'image/jpeg'});
  console.log('full image received');
  screenViewer.src = URL.createObjectURL(builder);
}

Here, screenViewer is an HTML image element.
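For reference, the reassembly protocol the listener implements (a string header with the total byte count, followed by binary chunks) can be sketched in plain Java as well; the `ChunkAssembler` class name is hypothetical, and the JS version keeps the same state in `buf`, `chunks`, and `count`:

```java
import java.io.ByteArrayOutputStream;

public class ChunkAssembler {
    private ByteArrayOutputStream buf;
    private int expected = -1;

    // The first message announces the total image size as a string.
    public void onHeader(String totalBytes) {
        expected = Integer.parseInt(totalBytes);
        buf = new ByteArrayOutputStream(expected);
    }

    // Subsequent binary messages are appended in arrival order.
    // Returns the complete image once all bytes are in, null otherwise.
    public byte[] onChunk(byte[] chunk) {
        buf.write(chunk, 0, chunk.length);
        if (buf.size() == expected) {
            return buf.toByteArray(); // complete JPEG, ready to display
        }
        return null;
    }
}
```

This relies on data channel ordering: with an ordered (default) RTCDataChannel, chunks arrive in the order they were sent, so simple concatenation is enough.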