Android - Sending a .wav file with Retrofit2 for speaker recognition

Date: 2016-08-12 12:23:12

Tags: android httprequest speech-recognition retrofit2 voice-recognition

I am trying to send a .wav file with Retrofit2 on Android, in order to use the Create Enrollment request of the Microsoft Speaker Recognition API (https://dev.projectoxford.ai/docs/services/563309b6778daf02acc0a508/operations/56406930e597ed20c8d8549c).

But I always get the following 400 error:

com.mobile.cir.voicerecognition D/OkHttp: <-- 400 Bad Request https://api.projectoxford.ai/spid/v1.0/verificationProfiles/94BC205B-FACD-42A7-9D80-485403106627/enroll (3907ms)
com.mobile.cir.voicerecognition D/OkHttp: Cache-Control: no-cache
com.mobile.cir.voicerecognition D/OkHttp: Pragma: no-cache
com.mobile.cir.voicerecognition D/OkHttp: Content-Length: 123
com.mobile.cir.voicerecognition D/OkHttp: Content-Type: application/json; charset=utf-8
com.mobile.cir.voicerecognition D/OkHttp: Expires: -1
com.mobile.cir.voicerecognition D/OkHttp: X-AspNet-Version: 4.0.30319
com.mobile.cir.voicerecognition D/OkHttp: X-Powered-By: ASP.NET
com.mobile.cir.voicerecognition D/OkHttp: apim-request-id: e5472946-ec90-4662-a3c9-dda62c2c6b27
com.mobile.cir.voicerecognition D/OkHttp: Date: Fri, 12 Aug 2016 11:43:04 GMT
com.mobile.cir.voicerecognition D/OkHttp: }
com.mobile.cir.voicerecognition D/OkHttp: <-- END HTTP (123-byte body)
com.mobile.cir.voicerecognition D/EnableVoiceRecognition: Upload success
com.mobile.cir.voicerecognition D/error message: RequestError{code='null', message='null'}

Here is my ApiClient class:

public static Retrofit getClient() {
    if (retrofit==null) {
        HttpLoggingInterceptor logging = new HttpLoggingInterceptor();
        logging.setLevel(HttpLoggingInterceptor.Level.BODY);
        OkHttpClient httpClient = new OkHttpClient.Builder().addInterceptor(logging).build();
        retrofit = new Retrofit.Builder()
                .baseUrl(BASE_URL)
                .addConverterFactory(GsonConverterFactory.create())
                .client(httpClient)
                .build();
    }
    return retrofit;
}
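As a side note, the subscription key could be attached once, for all requests, with an OkHttp interceptor instead of being passed to every service method. A sketch of that variant of the builder above, assuming the same `logging` interceptor and an `API_KEY` constant like the one used in the call further down:

```java
// Sketch: attach the Ocp-Apim-Subscription-Key header to every request
// via an interceptor, so service methods don't need a @Header parameter.
OkHttpClient httpClient = new OkHttpClient.Builder()
        .addInterceptor(logging)
        .addInterceptor(chain -> chain.proceed(
                chain.request().newBuilder()
                        .header("Ocp-Apim-Subscription-Key", API_KEY)
                        .build()))
        .build();
```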

The POST request:

@Multipart
@POST("verificationProfiles/{VerificationProfileId}/enroll")
Call<EnrollmentResponse> createEnrollment(@Path("VerificationProfileId") String profileId,
                                          @Header("Content-Type") String contentType,
                                          @Header("Ocp-Apim-Subscription-Key") String subscriptionKey,
                                          @Part("file") RequestBody audioFile);
}
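One likely cause of the 400 (a guess, not verified against the service): `@Multipart`/`@Part` wraps the WAV in `multipart/form-data` framing, while the enroll endpoint may expect the raw audio bytes as the request body. Declaring `Content-Type` both via `@Header` and via the part's own `RequestBody` media type can also conflict. A non-multipart sketch of the same method, using `@Body` so Retrofit takes the Content-Type from the `RequestBody`'s `MediaType`:

```java
@POST("verificationProfiles/{verificationProfileId}/enroll")
Call<EnrollmentResponse> createEnrollment(
        @Path("verificationProfileId") String profileId,
        @Header("Ocp-Apim-Subscription-Key") String subscriptionKey,
        @Body RequestBody audioFile);
```

With this shape the explicit `@Header("Content-Type")` parameter is dropped; the body built with `RequestBody.create(MediaType.parse("application/octet-stream"), audioFile)` carries its own Content-Type.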

The call itself:

File audioFile = new File(fileDirectory + "my_voice.wav");
RequestBody requestAudioFile = RequestBody.create(MediaType.parse("application/octet-stream"), audioFile);
Call<EnrollmentResponse> call = apiService.createEnrollment(PROFILE_ID_TEST,"audio/wav; samplerate=1600",API_KEY,requestAudioFile);
call.enqueue(new Callback<EnrollmentResponse>() {
    @Override
    public void onResponse(Call<EnrollmentResponse> call, Response<EnrollmentResponse> response) {
        Log.d(TAG,"Upload success");
        RequestError error = ErrorUtils.parseError(response);
        Log.d("error message", error.toString());
        Log.d(TAG,"Response: " + response.body().toString());
    }

    @Override
    public void onFailure(Call<EnrollmentResponse> call, Throwable t) {
        Log.d(TAG,"Upload error: " + t.getMessage());
    }
});
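Separately, the documentation for enrollment audio asks for PCM WAV at 16 kHz, 16-bit, mono (note that the header in the call above says `samplerate=1600`, which looks like a typo for 16000). A stdlib-only sanity check on the file's header before uploading can rule out format problems. A sketch, assuming the standard canonical 44-byte WAV header:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: verify a WAV byte stream is PCM, mono, 16 kHz, 16-bit by
// reading the canonical 44-byte RIFF header (little-endian fields).
class WavCheck {
    static boolean isPcm16kMono(byte[] wav) {
        if (wav.length < 44) return false;
        ByteBuffer b = ByteBuffer.wrap(wav).order(ByteOrder.LITTLE_ENDIAN);
        byte[] tag = new byte[4];
        b.get(tag);                             // "RIFF"
        if (!new String(tag).equals("RIFF")) return false;
        b.getInt();                             // RIFF chunk size, unused
        b.get(tag);                             // "WAVE"
        if (!new String(tag).equals("WAVE")) return false;
        b.get(tag);                             // "fmt "
        if (!new String(tag).equals("fmt ")) return false;
        b.getInt();                             // fmt chunk size
        short audioFormat = b.getShort();       // 1 = PCM
        short channels = b.getShort();
        int sampleRate = b.getInt();
        b.getInt();                             // byte rate
        b.getShort();                           // block align
        short bitsPerSample = b.getShort();
        return audioFormat == 1 && channels == 1
                && sampleRate == 16000 && bitsPerSample == 16;
    }
}
```

Reading the file into a byte array (e.g. with a `FileInputStream`) and calling `WavCheck.isPcm16kMono` before building the `RequestBody` separates "wrong audio format" from "wrong request framing" when debugging the 400.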

Where am I going wrong?

*Edit*

Microsoft's developers have published an Android library for this API (https://github.com/Microsoft/Cognitive-SpeakerRecognition-Android).

0 Answers:

There are no answers yet