Hi, I'm trying to implement video chat between two Android devices using WebRTC over PubNub, following this guide:
https://github.com/GleasonK/android-webrtc-tutorial
My code is very similar, and it connects and gets video working in both directions, but I only get the caller's audio on the callee's phone, never the other way around. I checked the logs on both devices and tried each of them as caller and callee (the devices are a Galaxy S4 and an HTC One). I noticed that on every call the caller produces some log output (not from my code) where the WebRtcAudioRecord class logs "StartRecording" and "StopRecording", but the callee never logs these. Similarly, the callee logs "StartPlayout" and "StopPlayout" from the WebRtcAudioTrack class, but the caller does not. I've pasted the exact log snippets below, since the full logs are long, but I can provide them if you'd like.
Caller:
D/AudioRecordJni: InitRecording@[tid=23807]
D/WebRtcAudioRecord: InitRecording(sampleRate=48000, channels=1)
D/WebRtcAudioRecord: byteBuffer.capacity: 960
D/AudioRecordJni: OnCacheDirectBufferAddress
D/AudioRecordJni: direct buffer capacity: 960
D/WebRtcAudioRecord: AudioRecord.getMinBufferSize: 3840
D/WebRtcAudioRecord: bufferSizeInBytes: 3840
D/WebRtcAudioRecord: AudioRecord session ID: 90, audio format: 2, channels: 1, sample rate: 48000
D/WebRtcAudioRecord: AcousticEchoCanceler.isAvailable: true
D/WebRtcAudioRecord: AcousticEchoCanceler name: Acoustic Echo Canceler, implementor: NXP Software Ltd., uuid: d6dbf400-93ce-11e0-bcd7-0002a5d5c51b
D/WebRtcAudioRecord: AcousticEchoCanceler.getEnabled: true
D/AudioRecordJni: frames_per_buffer: 480
D/AudioManager: IsCommunicationModeEnabled()
W/AudioDeviceTemplate: The application should use MODE_IN_COMMUNICATION audio mode!
D/AudioRecordJni: StartRecording@[tid=23807]
D/WebRtcAudioRecord: StartRecording
D/1077203.VideoChatActi: Debug Message from listener: {"packet":{"sdpMLineIndex":1,"sdpMid":"video","candidate":"candidate:1467089761 1 udp 1686052607 50.59.0.34 26520 typ srflx raddr 10.101.142.190 rport 56379 generation 0"},"id":"","number":"tomchtc"}
D/AICAction: AddIceCandidateAction
D/WebRtcAudioRecord: AudioRecordThread@[name=AudioRecordJavaThread, id=1857]
I/System.out: (HTTPLog)-Static: isSBSettingEnabled false
D/MediaCodecVideo: InitDecode.
and
D/AudioRecordJni: StopRecording@[tid=23807]
D/WebRtcAudioRecord: StopRecording
Callee:
D/AudioTrackJni: InitPlayout@[tid=9936]
D/WebRtcAudioTrack: InitPlayout(sampleRate=48000, channels=1)
D/WebRtcAudioTrack: byteBuffer.capacity: 960
D/AudioTrackJni: OnCacheDirectBufferAddress
D/AudioTrackJni: direct buffer capacity: 960
D/AudioTrackJni: frames_per_buffer: 480
D/WebRtcAudioTrack: AudioTrack.getMinBufferSize: 14336
D/AudioManager: IsCommunicationModeEnabled()
W/AudioDeviceTemplate: The application should use MODE_IN_COMMUNICATION audio mode!
D/AudioTrackJni: StartPlayout@[tid=9936]
D/WebRtcAudioTrack: StartPlayout
D/MediaCodecVideo: DecoderRelease request
D/WebRtcAudioTrack: AudioTrackThread@[name=AudioTrackJavaThread, id=2331]
and
D/AudioTrackJni: StopPlayout@[tid=9936]
D/WebRtcAudioTrack: StopPlayout
If anyone has run into this problem or has some insight into why it's happening, I'd appreciate the help! If you'd like more information or code snippets, I'm happy to provide them.
Answer 0 (score: -1)
I made a small bug fix and it worked for me.
All you need to do is set OfferToReceiveAudio to true in PnSignalingParams:

    pcConstraints.mandatory.add(new MediaConstraints.KeyValuePair("OfferToReceiveAudio", "true"));

This makes the user accepting the call receive audio; previously, OfferToReceiveAudio was set to false.
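For context, here is a minimal sketch of the constraint setup the answer describes, using the standard org.webrtc MediaConstraints API. The helper class name is illustrative, and the OfferToReceiveVideo line is an assumption added for symmetry; only the OfferToReceiveAudio line comes from the answer itself.

    import org.webrtc.MediaConstraints;

    public class PcConstraintsSketch {
        // Builds peer-connection constraints that request incoming audio
        // (and, as an assumption here, incoming video) during negotiation.
        public static MediaConstraints buildPcConstraints() {
            MediaConstraints pcConstraints = new MediaConstraints();
            // The library default had OfferToReceiveAudio set to "false",
            // which is what produced the one-way audio in the question.
            pcConstraints.mandatory.add(
                    new MediaConstraints.KeyValuePair("OfferToReceiveAudio", "true"));
            pcConstraints.mandatory.add(
                    new MediaConstraints.KeyValuePair("OfferToReceiveVideo", "true"));
            return pcConstraints;
        }
    }

OfferToReceiveAudio tells the peer connection to ask for an incoming audio track when the SDP offer/answer is generated, so leaving it false on one side yields exactly the one-way audio seen in the question.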
Or:
"For now you have to clone the repo and add it to your project. A rebuilt artifact for Gradle will be released soon." - GleasonK