Playing audio over HTTP

Time: 2019-04-17 08:21:25

Tags: java audio stream

I am working on a small instant-messaging project that implements a custom encryption algorithm I developed. Networking, however, is not my strong suit.

Basically, what I am trying to do here is serve a synchronized audio output stream using a one-to-many architecture.

So far I have managed to output the audio in base64-encoded form over an HTTP connection, but this is where I am stuck.

I cannot figure out how to play the audio back in real time without reading the same audio data twice (overlapping).
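
What I have in mind on the client side is roughly the following. This is only a sketch (audioClientSketch is not code I actually have), and it assumes the clip is plain base64, i.e. the encrypted branch in audioServer is not taken. The duplicate check is crude because it compares the whole payload; a sequence number, as I sketch after the server code below, would probably be cleaner:

package SIM.net.client.networking;

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.URL;
import java.util.Base64;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;

public class audioClientSketch {
    public static void main(String[] args) throws Exception {
        // must match the capture format used in audioServer
        AudioFormat format = new AudioFormat(8000.0f, 16, 1, true, true);
        SourceDataLine speaker = AudioSystem.getSourceDataLine(format);
        speaker.open(format);
        speaker.start();

        String lastClip = "";
        byte[] buf = new byte[4096];
        while (true) {
            // fetch whatever base64 chunk the handler is currently serving
            ByteArrayOutputStream response = new ByteArrayOutputStream();
            InputStream in = new URL("http://localhost:9991/audio").openStream();
            int n;
            while ((n = in.read(buf)) != -1) {
                response.write(buf, 0, n);
            }
            in.close();
            String clip = response.toString();

            // crude de-duplication: only play a chunk we have not played yet
            if (!clip.isEmpty() && !clip.equals(lastClip)) {
                byte[] pcm = Base64.getDecoder().decode(clip);
                speaker.write(pcm, 0, pcm.length);
                lastClip = clip;
            }
            Thread.sleep(50); // poll faster than the server fills a 4096-byte chunk (~256 ms)
        }
    }
}

Is polling like this even a reasonable way to do it, or does playback over HTTP need a completely different approach?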

Audio server

This is my server-side code. Please be kind if I have messed the whole thing up, but I think I have this part working.

/*
 * Decompiled with CFR 0.139.
 */
package SIM.net.client.networking;

import DARTIS.crypt;
import java.io.ByteArrayOutputStream;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

import com.sun.org.apache.xml.internal.security.utils.Base64;

public class audioServer {
    public static void start(String[] key) {
        // 8 kHz, 16-bit, mono, signed, big-endian PCM
        AudioFormat format = new AudioFormat(8000.0f, 16, 1, true, true);
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
        TargetDataLine microphone = null;
        try {
            microphone = (TargetDataLine) AudioSystem.getLine(info);
            microphone.open(format);
        } catch (LineUnavailableException e) {
            e.printStackTrace();
            return;
        }

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int CHUNK_SIZE = 1024;
        byte[] data = new byte[microphone.getBufferSize() / 5];
        microphone.start();

        int bytesRead = 0;
        do {
            if (bytesRead >= 4096) {
                // roughly a quarter second of audio collected: encode and publish it
                byte[] audioData = out.toByteArray();
                String base64img = Base64.encode(audioData);
                String audioclip;
                if (key.length > 9999) {
                    audioclip = crypt.inject(base64img, key);
                } else {
                    audioclip = base64img;
                }
                audioHandler.setdata(audioclip);
                bytesRead = 0;
                out.reset();
            } else {
                // keep pulling 1 KiB blocks from the microphone
                int numBytesRead = microphone.read(data, 0, CHUNK_SIZE);
                System.out.println(bytesRead += numBytesRead);
                out.write(data, 0, numBytesRead);
            }
        } while (true);
    }
}
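
One idea I have for tackling the overlap from the server side (again just a sketch; publish and the "seq:payload" framing are made up and not in my code yet) is to tag every published chunk with an incrementing counter, so a client can tell whether the chunk it just fetched is new:

package SIM.net.client.networking;

public class chunkPublisher {
    private static long seq = 0; // incremented once per published chunk

    // hypothetical replacement for the audioHandler.setdata(audioclip) call in
    // audioServer: prefix the payload with "<sequence>:" so a client can skip
    // anything it has already played
    public static synchronized void publish(String audioclip) {
        audioHandler.setdata(seq++ + ":" + audioclip);
    }
}

The client would then split on the first ':' and only play payloads whose sequence number is higher than the last one it played.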

Audio handler

package SIM.net.client.networking;


import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

public class audioHandler
implements HttpHandler {
    // latest base64-encoded audio chunk published by audioServer
    public static String audiodata;

    public static void setdata(String imgdta) {
        audiodata = imgdta;
    }

    @Override
    public void handle(HttpExchange he) throws IOException {
        // unused stub; requests are served by MyHandler below
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(9991), 0);
        server.createContext("/audio", new MyHandler());
        server.setExecutor(null); // creates a default executor
        server.start();
        audioServer.start(new String[3]);
    }

    static class MyHandler implements HttpHandler {
        @Override
        public void handle(HttpExchange he) throws IOException {
            URI requestedUri = he.getRequestURI();
            String query = requestedUri.getRawQuery();

            // nothing captured yet: send an empty body instead of crashing on null
            String clip = audiodata == null ? "" : audiodata.replace("\n", "").replace("\r", "");
            byte[] body = clip.getBytes();
            he.sendResponseHeaders(200, body.length);
            OutputStream os = he.getResponseBody();
            os.write(body);
            os.close();
        }
    }
}

Please understand that this code was originally written for live-streaming snapshot frames over HTTP, one frame at a time. If that design is not suitable for audio streaming, please point me in the right direction. I usually learn best by running an example, editing it, and watching how the output changes, so any sample/example code would help a lot. (I am not asking you to solve it 100% for me, just some pointers in the right direction and some example code.)
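
For example, I have been wondering whether keeping one HTTP response open per listener and writing raw PCM into it continuously (chunked transfer) would fit audio better than serving one clip at a time. Something like the following is what I mean; streamingAudioSketch, publish and StreamHandler are just names I made up for the sketch:

package SIM.net.client.networking;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

public class streamingAudioSketch {
    // one queue per connected listener; the capture loop fans out into all of them
    static final CopyOnWriteArrayList<BlockingQueue<byte[]>> listeners = new CopyOnWriteArrayList<>();

    // the microphone loop would call this instead of audioHandler.setdata(...)
    public static void publish(byte[] pcmChunk) {
        for (BlockingQueue<byte[]> q : listeners) {
            q.offer(pcmChunk); // drop the chunk for a slow listener rather than stall capture
        }
    }

    static class StreamHandler implements HttpHandler {
        @Override
        public void handle(HttpExchange he) throws IOException {
            BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(64);
            listeners.add(queue);
            he.sendResponseHeaders(200, 0); // length 0 => chunked, the connection stays open
            try (OutputStream os = he.getResponseBody()) {
                while (true) {
                    byte[] chunk = queue.take(); // wait for the next captured chunk
                    os.write(chunk);             // raw PCM, no base64, nothing read twice
                    os.flush();
                }
            } catch (Exception e) {
                // listener disconnected (or the wait was interrupted)
            } finally {
                listeners.remove(queue);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(9991), 0);
        server.createContext("/audio", new StreamHandler());
        server.setExecutor(Executors.newCachedThreadPool()); // one blocking handler per listener
        server.start();
        // the capture loop from audioServer would run here and call publish(chunk)
    }
}

The capture loop would call publish(chunk) instead of audioHandler.setdata(...), every connected client would receive each chunk exactly once, and the overlap problem would not come up at all. Is that the right direction, or is there a standard way to do this?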

0 Answers:

There are no answers yet.