Voice calls in Android using OpenSL

Asked: 2013-09-26 17:27:13

Tags: android audio voip opensl

I am working on a VoIP application for my thesis and was wondering whether someone could help me with this. I have two threads, AudioThread and AudioSendThread. The first is a listener that receives audio packets over a DatagramSocket and plays them on the phone. The second is the recorder: it captures 20 milliseconds of sound and sends it to the other device. I have already implemented this in Java, but it is really slow, so I decided to try OpenSL; however, I haven't found any documentation for something like this.

Here is the AudioSendThread:

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

import android.util.Log;

/** Captures 20 ms audio frames through the native recorder and sends them as UDP datagrams. */
public class AudioSendThread implements Runnable {
    private final static String TAG = "AudioSndThread";
    private boolean createdAudioP = false;
    private DatagramSocket audioSndSocket;
    private String ipAddr;
    private byte[] buffer;

    public AudioSendThread(Object o) {
        this.ipAddr = //getting IpAddress
        audioSndSocket = (DatagramSocket) o;
    }

    @Override
    public void run() {
        if (!createdAudioP)
            createdAudioP = createAudioRecorder();
        if (createdAudioP)
            startRecording();
        DatagramPacket packet = null;
        while (true) {
            byte[] buffer = readAudio(20); // read 20 milliseconds of audio; this is the one I would like to implement in OpenSL
            try {
                packet = new DatagramPacket(buffer, buffer.length, InetAddress.getByName(this.ipAddr), PORT.AUDIO);
                audioSndSocket.send(packet);
            } catch (IOException e) {
                Log.e(TAG, e.getMessage());
                return;
            }
        }
    }

    public static native void startRecording();
    public static native boolean createAudioRecorder();
    public static native byte[] readAudio(int millis);

    static {
        System.loadLibrary("SoundUtils");
    }
}

And this is the AudioThread:

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketException;

import android.util.Log;

/** Receives audio datagrams, plays them through the native player, and starts the sender thread. */
public class AudioThread implements Runnable {
    private final static String TAG = "AudioThread";
    private DatagramSocket audioServSock;

    @Override
    public void run() {
        createBufferQueueAudioPlayer();
        DatagramPacket packet = null;
        Thread audioSndThread = null;
        try {
            this.audioServSock = new DatagramSocket(PORT.AUDIO);
        } catch (SocketException e1) {
            e1.printStackTrace();
        }
        if (true) {
            audioSndThread = new Thread(new AudioSendThread(this.audioServSock));
            audioSndThread.start();
        }
        byte[] buffer = new byte[1500]; // random size
        packet = new DatagramPacket(buffer, 1500);
        while (true) {
            try {
                audioServSock.receive(packet);
                playAudio(buffer, packet.getLength()); // other method I would like to implement in OpenSL
            } catch (IOException e) {
                Log.e(TAG, Log.getStackTraceString(e));
                return;
            }
        }
    }

    public static native void createBufferQueueAudioPlayer();
    public static native void playAudio(byte[] buffer, int length);

    /** Load jni .so on initialization */
    static {
        System.loadLibrary("native-audio-jni");
    }
}
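Similarly, a minimal, untested sketch of the native-audio-jni playback side, using the same assumed 8 kHz mono 16-bit PCM format and the same placeholder package com.example.voip. Each library sets up its own engine here, matching the two separate System.loadLibrary calls above; error handling and proper multi-buffer management are left out to keep it short:

// native-audio-jni.c -- untested sketch of the playback side, same assumed PCM format as above.
#include <jni.h>
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

static SLObjectItf engineObject = NULL;
static SLEngineItf engineEngine = NULL;
static SLObjectItf outputMixObject = NULL;
static SLObjectItf playerObject = NULL;
static SLPlayItf playerPlay = NULL;
static SLAndroidSimpleBufferQueueItf playerBufferQueue = NULL;

JNIEXPORT void JNICALL
Java_com_example_voip_AudioThread_createBufferQueueAudioPlayer(JNIEnv *env, jclass clazz) {
    /* Engine and output mix */
    slCreateEngine(&engineObject, 0, NULL, 0, NULL, NULL);
    (*engineObject)->Realize(engineObject, SL_BOOLEAN_FALSE);
    (*engineObject)->GetInterface(engineObject, SL_IID_ENGINE, &engineEngine);
    (*engineEngine)->CreateOutputMix(engineEngine, &outputMixObject, 0, NULL, NULL);
    (*outputMixObject)->Realize(outputMixObject, SL_BOOLEAN_FALSE);

    /* Audio source: a buffer queue fed with 16-bit 8 kHz mono PCM from playAudio() */
    SLDataLocator_AndroidSimpleBufferQueue loc_bq =
            {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
    SLDataFormat_PCM format_pcm = {SL_DATAFORMAT_PCM, 1, SL_SAMPLINGRATE_8,
            SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
            SL_SPEAKER_FRONT_CENTER, SL_BYTEORDER_LITTLEENDIAN};
    SLDataSource audioSrc = {&loc_bq, &format_pcm};

    /* Audio sink: the output mix */
    SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, outputMixObject};
    SLDataSink audioSnk = {&loc_outmix, NULL};

    const SLInterfaceID ids[1] = {SL_IID_BUFFERQUEUE};
    const SLboolean req[1] = {SL_BOOLEAN_TRUE};
    (*engineEngine)->CreateAudioPlayer(engineEngine, &playerObject, &audioSrc, &audioSnk,
                                       1, ids, req);
    (*playerObject)->Realize(playerObject, SL_BOOLEAN_FALSE);
    (*playerObject)->GetInterface(playerObject, SL_IID_PLAY, &playerPlay);
    (*playerObject)->GetInterface(playerObject, SL_IID_BUFFERQUEUE, &playerBufferQueue);
    (*playerPlay)->SetPlayState(playerPlay, SL_PLAYSTATE_PLAYING);
}

/* Copies one received packet out of the Java array and enqueues it for playback. */
JNIEXPORT void JNICALL
Java_com_example_voip_AudioThread_playAudio(JNIEnv *env, jclass clazz,
                                            jbyteArray buffer, jint length) {
    /* A single static buffer keeps the sketch short; a real player should rotate
       through several buffers recycled from the buffer-queue callback. */
    static char pcm[1500];
    if (length > (jint) sizeof(pcm)) length = sizeof(pcm);
    (*env)->GetByteArrayRegion(env, buffer, 0, length, (jbyte *) pcm);
    (*playerBufferQueue)->Enqueue(playerBufferQueue, pcm, (SLuint32) length);
}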

The other native methods are taken from the NativeAudio sample in the NDK.

Thanks in advance for any suggestions!

1 Answer:

Answer 0 (score: 3)

You have tried the native audio sample code that ships with the Android NDK, which means you are familiar with JNI calls. Here is a nice blog post by Victor Lazzarini describing his approach to audio streaming for voice communication with OpenSL ES:

Android audio streaming with OpenSL ES and the NDK.

You can download the source code from here. Follow the instructions to run it on a device.