I am using the Flite text-to-speech engine in my project, as stated here. I fetch values from a JSON URL and convert them to speech. However, I want to know how to implement the audioPlayerDidFinishPlaying:successfully: delegate method and have it called in the FliteTTS file so I can play the next chunk. The audio player must play one object after another: after it finishes playing the first value, it must fetch the next value and convert it to speech. The corresponding images and so on must also be loaded at the same time.
Here is the code I have written so far:
// Parse the JSON feed synchronously (blocks the main thread;
// fine for a demo, but a real app should load asynchronously).
SBJSON *json = [[SBJSON alloc] init];
fliteEngine = [[FliteTTS alloc] init];

NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:@"http://www.sampleurl.txt"]];
NSData *response = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil];
NSString *jsonstring = [[NSString alloc] initWithData:response encoding:NSUTF8StringEncoding];

// The top-level JSON object is a dictionary, so declare it as one
// (objectForKey: is a dictionary method, not an array method).
NSDictionary *asanasList = [json objectWithString:jsonstring error:nil];
NSArray *asanas = [asanasList objectForKey:@"yogagurubackpain"];

for (NSDictionary *test in asanas)
{
    UrlValues *myasanas = [[UrlValues alloc] init];
    myasanas.asanatitle = [test objectForKey:@"asanatitle"];
    myasanas.asanatranscript = [test objectForKey:@"asanatranscript"];
    myasanas.asanapicture = [test objectForKey:@"asanapicture"];
    [data.yoga addObject:myasanas];
    [myasanas release];
}

// Show the first asana's title, transcript, and picture.
UrlValues *asana = [[data yoga] objectAtIndex:0];
self.AsanaName.text = [asana asanatitle];
self.AsanaTranscript.text = [asana asanatranscript];

NSString *imageUrl = [asana asanapicture];
NSString *mapUrl = [imageUrl stringByReplacingPercentEscapesUsingEncoding:NSASCIIStringEncoding];
NSData *imageData = [[NSData alloc] initWithContentsOfURL:[NSURL URLWithString:mapUrl]];
UIImage *image = [[UIImage alloc] initWithData:imageData];
self.AsanaImage.image = image;

// Configure the voice and prosody before speaking; setting the
// pitch/speed after speakText: would not affect the current utterance.
NSString *speak = self.AsanaTranscript.text;
[fliteEngine setVoice:@"cmu_us_rms"];
[fliteEngine setPitch:100.0 variance:11.0 speed:0.4];
[fliteEngine speakText:speak];

[imageData release];
[image release];
[jsonstring release];
[json release];
Please help me with some sample code or a tutorial so that I can complete this task.
Answer 0 (score: 1)
The only way the audioPlayerDidFinishPlaying: delegate method can be called is if the text-to-speech engine uses an AVAudioPlayer object to play the sound. If it does not, then obviously the delegate method will never fire. So you have to stop the engine from playing the sound directly and play it through an AVAudioPlayer object instead.
There is an example here:
http://artofsystems.blogspot.com/2009/02/speech-synthesis-on-iphone-with-flite.html
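A minimal sketch of the chaining approach described above, under the assumption that Flite has been modified to write its synthesized audio to a WAV file rather than playing it itself. The class name `AsanaPlayer` and the helper `wavPathForText:` are hypothetical placeholders, not part of the FliteTTS API; only the AVAudioPlayer delegate wiring is the real mechanism:

```objectivec
// Sketch (manual reference counting, matching the question's code):
// play each transcript in sequence by reacting to
// audioPlayerDidFinishPlaying:successfully:.
#import <AVFoundation/AVFoundation.h>

@interface AsanaPlayer : NSObject <AVAudioPlayerDelegate>
@property (nonatomic, retain) NSArray *transcripts;   // strings to speak
@property (nonatomic, assign) NSUInteger currentIndex;
@property (nonatomic, retain) AVAudioPlayer *player;
- (void)start;
@end

@implementation AsanaPlayer
@synthesize transcripts, currentIndex, player;

- (void)start {
    self.currentIndex = 0;
    [self playCurrent];
}

- (void)playCurrent {
    if (self.currentIndex >= [self.transcripts count]) return; // all done

    NSString *text = [self.transcripts objectAtIndex:self.currentIndex];
    // Hypothetical helper: have Flite synthesize `text` into a WAV file
    // and return its path, instead of letting Flite play the audio.
    NSString *wavPath = [self wavPathForText:text];

    NSError *error = nil;
    AVAudioPlayer *p = [[AVAudioPlayer alloc]
        initWithContentsOfURL:[NSURL fileURLWithPath:wavPath]
                        error:&error];
    p.delegate = self;   // so we are told when this chunk finishes
    self.player = p;
    [p release];
    [self.player play];
}

// Called by AVAudioPlayer when the current chunk finishes; advance to
// the next transcript (this is also the place to update the title and
// image views for the next asana before speaking it).
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)audioPlayer
                       successfully:(BOOL)flag {
    self.currentIndex = self.currentIndex + 1;
    [self playCurrent];
}

@end
```

The key design point is that no loop drives the playback: each finished chunk triggers the next one via the delegate callback, so the UI stays responsive between chunks.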