I am trying to call the Watson Speech to Text service from JavaScript, using browser microphone support and a Java backend (implemented with Jetty WebSockets). I am using the Watson Speech to Text Java SDK for the service connection.
Maven dependency:
<dependency>
<groupId>com.ibm.watson</groupId>
<artifactId>speech-to-text</artifactId>
<version>7.3.0</version>
</dependency>
Here is the code:
JS
(function helperDrawingFunctions() {
CanvasRenderingContext2D.prototype.line = function(x1, y1, x2, y2) {
this.lineCap = 'round';
this.beginPath();
this.moveTo(x1, y1);
this.lineTo(x2, y2);
this.closePath();
this.stroke();
}
CanvasRenderingContext2D.prototype.circle = function(x, y, r, fill_opt) {
this.beginPath();
this.arc(x, y, r, 0, Math.PI * 2, true);
this.closePath();
if (fill_opt) {
this.fillStyle = 'rgba(0,0,0,1)';
this.fill();
this.stroke();
} else {
this.stroke();
}
}
CanvasRenderingContext2D.prototype.rectangle = function(x, y, w, h, fill_opt) {
this.beginPath();
this.rect(x, y, w, h);
this.closePath();
if (fill_opt) {
this.fillStyle = 'rgba(0,0,0,1)';
this.fill();
} else {
this.stroke();
}
}
CanvasRenderingContext2D.prototype.triangle = function(p1, p2, p3, fill_opt) {
// Stroked triangle.
this.beginPath();
this.moveTo(p1.x, p1.y);
this.lineTo(p2.x, p2.y);
this.lineTo(p3.x, p3.y);
this.closePath();
if (fill_opt) {
this.fillStyle = 'rgba(0,0,0,1)';
this.fill();
} else {
this.stroke();
}
}
CanvasRenderingContext2D.prototype.clear = function() {
this.clearRect(0, 0, this.canvas.clientWidth, this.canvas.clientHeight);
}
})();
(function playButtonHandler() {
// The play button is the canonical state, which changes via events.
var playButton = document.getElementById('playbutton');
playButton.addEventListener('click', function(e) {
if (this.classList.contains('playing')) {
playButton.dispatchEvent(new Event('pause'));
} else {
playButton.dispatchEvent(new Event('play'));
}
}, true);
// Update the appearance when the state changes
playButton.addEventListener('play', function(e) {
this.classList.add('playing');
});
playButton.addEventListener('pause', function(e) {
this.classList.remove('playing');
});
})();
(function audioInit() {
// Check for non Web Audio API browsers.
if (!window.AudioContext) {
alert("Web Audio isn't available in your browser.");
return;
}
var canvas = document.getElementById('fft');
var ctx = canvas.getContext('2d');
var canvas2 = document.getElementById('fft2');
var ctx2 = canvas2.getContext('2d');
const CANVAS_HEIGHT = canvas.height;
const CANVAS_WIDTH = canvas.width;
var analyser;
function rafCallback(time) {
window.requestAnimationFrame(rafCallback, canvas);
if (!analyser) return;
var freqByteData = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(freqByteData); //analyser.getByteTimeDomainData(freqByteData);
var SPACER_WIDTH = 10;
var BAR_WIDTH = 5;
var OFFSET = 100;
var CUTOFF = 23;
var numBars = Math.round(CANVAS_WIDTH / SPACER_WIDTH);
ctx.clearRect(0, 0, CANVAS_WIDTH, CANVAS_HEIGHT);
ctx.fillStyle = '#F6D565';
ctx.lineCap = 'round';
ctx2.clearRect(0, 0, CANVAS_WIDTH, CANVAS_HEIGHT);
ctx2.fillStyle = '#3A5E8C';
ctx2.lineCap = 'round';
// Draw rectangle for each frequency bin.
for (var i = 0; i < numBars; ++i) {
var magnitude = freqByteData[i + OFFSET];
ctx.fillRect(i * SPACER_WIDTH, CANVAS_HEIGHT, BAR_WIDTH, -magnitude);
ctx2.fillRect(i * SPACER_WIDTH, CANVAS_HEIGHT, BAR_WIDTH, -magnitude);
}
}
rafCallback();
// per https://g.co/cloud/speech/reference/rest/v1beta1/RecognitionConfig
const SAMPLE_RATE = 16000;
const SAMPLE_SIZE = 16;
var playButton = document.getElementById('playbutton');
// Hook up the play/pause state to the microphone context
var context = new AudioContext();
playButton.addEventListener('pause', context.suspend.bind(context));
playButton.addEventListener('play', context.resume.bind(context));
// The first time you hit play, connect to the microphone
playButton.addEventListener('play', function startRecording() {
var audioPromise = navigator.mediaDevices.getUserMedia({
audio: {
echoCancellation: true,
channelCount: 1,
sampleRate: {
ideal: SAMPLE_RATE
},
sampleSize: SAMPLE_SIZE
}
});
audioPromise.then(function(micStream) {
var microphone = context.createMediaStreamSource(micStream);
analyser = context.createAnalyser();
microphone.connect(analyser);
}).catch(console.log.bind(console));
initWebsocket(audioPromise);
}, {once: true});
/**
* Hook up event handlers to create / destroy websockets, and audio nodes to
* transmit audio bytes through it.
*/
function initWebsocket(audioPromise) {
var socket;
var sourceNode;
// Create a node that sends raw bytes across the websocket
var scriptNode = context.createScriptProcessor(4096, 1, 1);
// Need the maximum value for 16-bit signed samples, to convert from float.
const MAX_INT = Math.pow(2, 16 - 1) - 1;
scriptNode.addEventListener('audioprocess', function(e) {
var floatSamples = e.inputBuffer.getChannelData(0);
// The samples are floats in range [-1, 1]. Convert to 16-bit signed
// integer.
socket.send(Int16Array.from(floatSamples.map(function(n) {
return n * MAX_INT;
})));
});
function newWebsocket() {
var websocketPromise = new Promise(function(resolve, reject) {
var socket = new WebSocket('wss://' + location.host + '/transcribe');
//var socket = new WebSocket('wss://localhost:8440/Websocket/socket');
//var socket = new WebSocket('wss://localhost:8442/events/');
socket.addEventListener('open', resolve);
socket.addEventListener('error', reject);
});
Promise.all([audioPromise, websocketPromise]).then(function(values) {
var micStream = values[0];
socket = values[1].target;
console.log("reaches here!!");
// If the socket is closed for whatever reason, pause the mic
socket.addEventListener('close', function(e) {
console.log('Websocket closing..');
playButton.dispatchEvent(new Event('pause'));
});
socket.addEventListener('error', function(e) {
console.log('Error from websocket', e);
playButton.dispatchEvent(new Event('pause'));
});
function startByteStream(e) {
// Hook up the scriptNode to the mic
console.log("reaches here also!!");
sourceNode = context.createMediaStreamSource(micStream);
sourceNode.connect(scriptNode);
scriptNode.connect(context.destination);
}
// Send the initial configuration message. When the server acknowledges
// it, start streaming the audio bytes to the server and listening for
// transcriptions.
socket.addEventListener('message', function(e) {
socket.addEventListener('message', onTranscription);
startByteStream(e);
}, {once: true});
socket.send(JSON.stringify({sampleRate: context.sampleRate}));
}).catch(console.log.bind(console));
}
function closeWebsocket() {
scriptNode.disconnect();
if (sourceNode) sourceNode.disconnect();
if (socket && socket.readyState === socket.OPEN) socket.close();
}
function toggleWebsocket(e) {
var context = e.target;
if (context.state === 'running') {
newWebsocket();
} else if (context.state === 'suspended') {
closeWebsocket();
}
}
var transcript = {
el: document.getElementById('transcript').childNodes[0],
current: document.createElement('div')
};
transcript.el.appendChild(transcript.current);
/**
* This function is called with the transcription result from the server.
*/
function onTranscription(e) {
var result = JSON.parse(e.data);
if (result.alternatives_) {
transcript.current.innerHTML = result.alternatives_[0].transcript_;
}
if (result.isFinal_) {
transcript.current = document.createElement('div');
transcript.el.appendChild(transcript.current);
}
}
// When the mic is resumed or paused, change the state of the websocket too
context.addEventListener('statechange', toggleWebsocket);
// initialize for the current state
toggleWebsocket({target: context});
}
})();
Java
public class JettySocket extends WebSocketAdapter {
private static final Logger logger = Logger.getLogger(JettySocket.class.getName());
private SpeechToText speech;
private RecognizeOptions recognizeOptions;
private ByteArrayOutputStream buffer = new ByteArrayOutputStream();
@Override
public void onWebSocketConnect(Session session) {
System.out.println("Session opened!!");
super.onWebSocketConnect(session);
// Point the SDK at the credentials file before constructing the service;
// setting the property afterwards has no effect on authentication.
System.setProperty("IBM_CREDENTIALS_FILE", "ibm-credentials.env");
speech = new SpeechToText();
}
@Override
public void onWebSocketText(String message) {
logger.info("message received - " + message);
super.onWebSocketText(message);
try {
getRemote().sendString("message");
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
@Override
public void onWebSocketBinary(byte[] msg, int offset, int len) {
logger.info("Byte stream received!!");
super.onWebSocketBinary(msg, offset, len);
// Respect the offset/length that Jetty passes for the binary payload
ByteArrayInputStream stream = new ByteArrayInputStream(msg, offset, len);
this.recognizeOptions = new RecognizeOptions.Builder()
.audio(stream)
//.contentType("audio/wav")
.contentType("audio/l16;rate=48000;endianness=little-endian")
//.contentType("audio/wav;rate=48000")
//.model("en-US_NarrowbandModel")
.model("en-US_BroadbandModel")
//.keywords(Arrays.asList("colorado", "tornado", "tornadoes"))
//.keywordsThreshold((float) 0.5)
//.maxAlternatives(3)
.interimResults(true)
.build();
BaseRecognizeCallback baseRecognizeCallback =
new BaseRecognizeCallback() {
@Override
public void onTranscription
(SpeechRecognitionResults speechRecognitionResults) {
System.out.println(speechRecognitionResults);
}
};
speech.recognizeUsingWebSocket(recognizeOptions,
baseRecognizeCallback);
// wait 10 seconds for the asynchronous response
try {
Thread.sleep(10000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
@Override
public void onWebSocketError(Throwable throwable) {
logger.info("Session error!!");
super.onWebSocketError(throwable);
throwable.printStackTrace();
}
@Override
public void onWebSocketClose(int statusCode, String reason) {
super.onWebSocketClose(statusCode, reason);
logger.info("Session closed - " + reason);
}
}
HTML
<!DOCTYPE html>
<html>
<head>
<title>TODO supply a title</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width,
initial-scale=1.0">
<script type="text/javascript">
var uri = "wss://localhost:8442/transcribe";
var websocket = null;
var message = "";
function openConnection() {
websocket = new WebSocket(uri);
//websocket.binaryType = "arraybuffer";
websocket.onmessage = function (event) {
var node = document.getElementById('fromServer');
var newNode = document.createElement('h1');
if (event.data) {
newNode.appendChild(document.
createTextNode(event.data));
node.appendChild(newNode);
}
else {
newNode.appendChild(document.
createTextNode("Image uploaded"));
node.appendChild(newNode);
}
};
}
function closeConnection() {
websocket.close();
}
function sendMessage() {
var msg = document.getElementById('messageText').value;
websocket.send(msg);
}
function sendFile() {
var file = document.getElementById('filename').files[0];
var reader = new FileReader();
var rawData = new ArrayBuffer();
reader.onloadend = function() {};
reader.onload = function(e) {
rawData = e.target.result;
websocket.binaryType = "arraybuffer";
websocket.send(rawData);
alert("the File has been transferred.")
}
reader.readAsArrayBuffer(file);
}
</script>
</head>
<body onunload="closeConnection();">
<div>
<p>Client Message: <input id="messageText" type="text"/>
<input id="sendButton" type="button" value="Send"
onclick="sendMessage();"/>
<input id="connect" type="button" value="Connect"
onclick="openConnection();"/>
</p>
<p>
Client Upload: <input id="filename" type="file"/>
<input id="uploadButton" type="button" value="Upload"
onclick="sendFile();"/>
</p>
<div id="fromServer"></div>
</div>
</body>
</html>
With the code above, the audio bytes are successfully sent to the backend while I speak into the microphone. However, when the data is forwarded to Watson STT, it returns an empty response, as shown below:
**Response:** { "results": [], "result_index": 0 }
This suggests to me that the bytes are not encoded correctly, or that the audio configuration being used is not right.
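One way to narrow this down is to append every binary WebSocket frame the Jetty handler receives to a file and inspect the capture offline, for example with sox/soxi. Below is a minimal sketch; the dump path and the rate/bit-depth/channel values in the comment are assumptions, not values confirmed by the capture.

import java.io.FileOutputStream;
import java.io.IOException;

public class PcmDump {

    // Append one binary WebSocket frame to a raw PCM file. The capture can
    // then be checked offline, e.g. (rate/bits/channels are assumptions):
    //   sox -t raw -r 48000 -e signed -b 16 -c 1 /tmp/mic-dump.raw dump.wav
    public static void append(byte[] msg, int offset, int len) throws IOException {
        try (FileOutputStream out = new FileOutputStream("/tmp/mic-dump.raw", true)) {
            out.write(msg, offset, len);
        }
    }
}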
If I try the following configuration instead, I get the error shown below it:
Java:
this.recognizeOptions = new RecognizeOptions.Builder()
.audio(stream)
.contentType("audio/l16;rate=48000")
.model("en-US_BroadbandModel")
.interimResults(true)
.build();
SEVERE: Could not detect endianness after looking at the trailing 0 non-zero byte strings in a stream of 8192 bytes. Is the byte stream really PCM data?
java.lang.RuntimeException: Could not detect endianness after looking at the trailing 0 non-zero byte strings in a stream of 8192 bytes. Is the byte stream really PCM data?
at com.ibm.watson.speech_to_text.v1.websocket.SpeechToTextWebSocketListener.onMessage(SpeechToTextWebSocketListener.java:128)
The error above comes from the Watson STT SDK and indicates that something is wrong with how the audio bytes are being transmitted to the STT service API.
I have tried different configuration variations, such as changing the sample rate and the speech model, but nothing seems to help. The same setup works perfectly with Google Speech to Text, where I do get transcripts in the response; I followed the example provided in the Google tutorial below. Google STT tutorial
Please help me figure out what is going wrong here and suggest a fix.
Answer (score: 0)
Per the Watson Speech To Text documentation:
The service automatically detects the endianness of the incoming audio...
I am not sure why that detection fails in your case, but you can always set the endianness explicitly by adding it to the content type: audio/l16;rate=48000;endianness=little-endian
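With the Java SDK used in the question, that looks roughly like the sketch below. It simply reuses the RecognizeOptions builder from the question; the rate value is the one the question already uses and should match the actual capture rate.

RecognizeOptions options = new RecognizeOptions.Builder()
    .audio(stream) // the PCM stream received over the websocket
    // state the byte order explicitly instead of relying on auto-detection
    .contentType("audio/l16;rate=48000;endianness=little-endian")
    .model("en-US_BroadbandModel")
    .interimResults(true)
    .build();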
Update regarding your sample file:
These are the parameters of the file you provided:
$ soxi /run/shm/8_Channel_ID.wav
Input File : '/run/shm/8_Channel_ID.wav'
Channels : 8
Sample Rate : 48000
Precision : 24-bit
Duration : 00:00:08.05 = 386383 samples ~ 603.723 CDDA sectors
File Size : 9.27M
Bit Rate : 9.22M
Sample Encoding: 24-bit Signed Integer PCM
And this is what it should look like:
$ soxi /run/shm/16bit_test.wav
Input File : '/run/shm/16bit_test.wav'
Channels : 1
Sample Rate : 48000
Precision : 16-bit
Duration : 00:00:01.50 = 72000 samples ~ 112.5 CDDA sectors
File Size : 144k
Bit Rate : 768k
Sample Encoding: 16-bit Signed Integer PCM
Your file has 24-bit samples, while Watson expects 16-bit. Your file also has 8 channels; even though Watson reportedly downmixes multi-channel audio to a single channel, I would still try doing the conversion manually first, for example as sketched below.
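A minimal sketch of that conversion with javax.sound.sampled, assuming the JVM's default PCM converter supports this particular format change (it throws IllegalArgumentException if it does not, in which case an external tool such as sox can do the same job). The file names are taken from the soxi output above.

import java.io.File;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class DownmixTo16Bit {
    public static void main(String[] args) throws Exception {
        // 24-bit, 8-channel source file
        try (AudioInputStream in = AudioSystem.getAudioInputStream(new File("8_Channel_ID.wav"))) {
            // Target format: 48 kHz, 16-bit signed PCM, mono, little-endian
            AudioFormat target = new AudioFormat(48000f, 16, 1, true, false);
            // Throws IllegalArgumentException if no installed converter supports this change
            AudioInputStream converted = AudioSystem.getAudioInputStream(target, in);
            AudioSystem.write(converted, AudioFileFormat.Type.WAVE, new File("16bit_test.wav"));
        }
    }
}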