This is my first question here :D. First of all, sorry for my English.
My question is basically how to save a Flash movie clip as an FLV.
The movie clip is generated by the user; it has various sounds and animations, and I need to save it as an FLV so I can send it to YouTube.
What I have tried: I found some questions here about using the Alchemy lib, which I use to grab the MovieClip frame by frame and draw each frame to a BitmapData.
The Alchemy lib converts those frames to FLV like a charm, and it also supports saving chunks of sound passed in as a ByteArray.
My problem in this case is: how do I capture the MovieClip's sound so I can send it to the Alchemy lib? I have tried using:
SoundMixer.computeSpectrum(sndData, false, 2);
which returns a ByteArray in the sndData variable, but since it is designed for rendering an audio waveform on screen, it is useless for encoding.
I also tried using
Sound.extract();
but I believe the Sound class only works with a single loaded MP3 sound, and I need to capture the mixed sound generated by the whole MovieClip.
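For context, Sound.extract() does return raw PCM data, but only from one fully loaded Sound object. A minimal sketch of that per-sound extraction (mySound is a hypothetical, already-loaded Sound instance):

```actionscript
// Sketch: extract raw 44.1 kHz stereo samples from ONE loaded Sound.
// "mySound" is a hypothetical, fully loaded Sound instance.
var pcm:ByteArray = new ByteArray();
// Sound.length is in milliseconds; 44.1 samples per millisecond at 44.1 kHz
var totalSamples:int = int(mySound.length * 44.1);
mySound.extract(pcm, totalSamples);
// pcm now holds interleaved left/right 32-bit floats (8 bytes per sample pair),
// but only for this one Sound, not for the MovieClip's mixed output.
```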
Is there any other way to generate an FLV from a MovieClip?
Some of my code:
My code is based on the tutorial I found at this link: http://www.zeropointnine.com/blog/updated-flv-encoder-alchem/
private const OUTPUT_WIDTH:Number = 550;
private const OUTPUT_HEIGHT:Number = 400;
private const FLV_FRAMERATE:int = 24;
private var _baFlvEncoder:ByteArrayFlvEncoder;
public var anime:MovieClip;
//Starts recording
public function startRecording():void
{
this.addEventListener(Event.ENTER_FRAME, enterFrame);
//Initialize the Alchemy Lib
_baFlvEncoder = new ByteArrayFlvEncoder(stage.frameRate);
_baFlvEncoder.setVideoProperties(OUTPUT_WIDTH, OUTPUT_HEIGHT);
_baFlvEncoder.setAudioProperties(FlvEncoder.SAMPLERATE_22KHZ);
_baFlvEncoder.start();
}
//Stops recording
public function stopRecording():void
{
this.removeEventListener(Event.ENTER_FRAME, enterFrame);
_baFlvEncoder.updateDurationMetadata();
// Save FLV file via FileReference
var fileRef:FileReference = new FileReference();
fileRef.save(_baFlvEncoder.byteArray, "test.flv");
_baFlvEncoder.kill();
}
//The Main Loop activated by StartRecording
public function enterFrame(evt:Event):void
{
var bmpData:BitmapData = new BitmapData(OUTPUT_WIDTH, OUTPUT_HEIGHT, false, 0xFFFFFFFF);
bmpData.draw(anime);
var sndData:ByteArray = new ByteArray();
//This is the part that does not work: computeSpectrum() returns
//waveform/spectrum data meant for visualization, not raw audio samples.
SoundMixer.computeSpectrum(sndData, false, 2);
_baFlvEncoder.addFrame(bmpData, sndData);
bmpData.dispose();
}
Answer (score: 1)
Done it! (Well... almost :D)
It is a bit complicated, but since my problem was only the MovieClip's audio, I created a class that works like a sound mixer.
The mixer is responsible for playing all the sounds through a single SampleDataEvent, mixing the bytes of all my sounds. While recording (after my startRecording function is executed), it also accumulates the mixed sound data into a single ByteArray. It is somewhat complicated to explain, but here is the mixer code:
/**** MySoundMixer initializations omitted ***/
//Generates the sound object and starts the stream
//that plays back the mixed sounds.
public function startStream():void
{
fullStreamSound = new Sound();
fullStreamSoundData = new ByteArray();
//Drive playback (and recording) through a single SampleDataEvent
fullStreamSound.addEventListener(SampleDataEvent.SAMPLE_DATA, processSampleData);
this.fullStreamSoundChannel = this.fullStreamSound.play();
}
//Adds a sound to the sound lib
//(See: MySound object for more details)
public function addSound(sound:MySound, key:String):void
{
sound.initialize();
sounds.push({sound:sound, key:key});
}
//Plays a sound from the sound lib
public function play(key:String):void
{
var found:MySound = null;
for (var i:int = 0; i < sounds.length; i++)
{
if (key == sounds[i].key)
{
found = sounds[i].sound;
break;
}
}
if (found != null)
{
found.play();
}
}
// The SampleDataEvent function to Play the sound and
// if recording is activated record the sound to fullStreamSoundData
public function processSampleData(event:SampleDataEvent):void
{
var pos:int = 0;
var normValue:Number = 1 / this.sounds.length;
while (pos < BUFFER)
{
var leftChannel:Number = 0;
var rightChannel:Number = 0;
for (var i:int = 0; i < this.sounds.length; i++)
{
var currentSound:MySound = this.sounds[i].sound;
var result:Object = currentSound.getSampleData();
leftChannel += result.leftChannel * normValue;
rightChannel += result.rightChannel * normValue;
}
event.data.writeFloat(leftChannel);
event.data.writeFloat(rightChannel);
if (isRecording)
{
fullStreamSoundData.writeFloat(leftChannel);
fullStreamSoundData.writeFloat(rightChannel);
}
pos++;
}
}
//Starts recording
public function startRecording():void
{
this.isRecording = true;
}
//Stops recording
public function stopRecording():void
{
this.isRecording = false;
}
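To show how the pieces fit together, here is a hedged usage sketch of the mixer (the wiring is my reading of the class above; key names like "explosion" are just illustrative):

```actionscript
// Hypothetical wiring of MySoundMixer (illustrative only):
var mixer:MySoundMixer = new MySoundMixer();
mixer.addSound(new MySound(), "explosion"); // register a sound under a key
mixer.startStream();     // begins the single SampleDataEvent playback loop
mixer.startRecording();  // mixed samples now also accumulate in fullStreamSoundData
mixer.play("explosion"); // trigger a registered sound by its key
// ...when the movie clip finishes...
mixer.stopRecording();
```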
The SampleDataEvent is used to play the sound and extract the mixed audio at the same time.
I also had to create a MySound class, extending the Sound object, so that I can read its sample bytes inside processSampleData through a getSampleData() method. When the mixer starts, the MySound instances also begin "playing" (emitting zeroed bytes on both channels), and when the mixer stops they stop too; a sound only starts sending its actual byte data once its play() function is called.
The class I created looks like this:
/**** MySound initializations omitted ***/
public function initialize():void
{
this.extractInformation(null);
}
//Override the play function to avoid normal playback.
//(Actual playback is driven by the MySoundMixer class.)
override public function play(startTime:Number = 0, loops:int = 0, sndTransform:SoundTransform = null):SoundChannel
{
this.isPlaying = true;
this.currentPhase = 0;
return null;
}
//Each time the mixer requests a sample, read the next chunk of sound bytes
public function getSampleData():Object
{
var leftChannel:Number = 0;
var rightChannel:Number = 0;
if (this.isPlaying) {
if (currentPhase < totalPhases)
{
this.soundData.position = currentPhase * 8;
leftChannel = this.soundData.readFloat();
rightChannel = this.soundData.readFloat();
this.currentPhase ++;
} else
{
stopPlaying();
}
}
return { leftChannel:leftChannel, rightChannel:rightChannel };
}
//Extracts the sound object's raw bytes in order to
//split them into several chunks of samples.
public function extractInformation(evt:Event):void
{
trace("Initializing sound " + this.id3);
this.soundData = new ByteArray();
this.extract(soundData, int(this.length * SAMPLE_44HZ + 10000));
this.totalPhases = this.soundData.length / 8;
this.currentPhase = 0;
}
//Stopping playback simply means we stop emitting bytes
public function stopPlaying():void
{
this.isPlaying = false;
}
With this, I generate a single ByteArray object containing the whole sound output of the mixer. I just need to start the mixer when the movie clip starts and stop it when the movie clip stops. The sound ByteArray is then passed, together with each frame's BitmapData, to the Alchemy lib's addFrame(bitmapData, sndData), which records it successfully.
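Note that the encoder expects the audio in per-frame chunks, so the mixed ByteArray has to be sliced to match the video frame rate. A rough sketch, assuming 44.1 kHz stereo float samples and the 24 fps FLV_FRAMERATE from the question:

```actionscript
// Sketch: slice the mixer's recorded bytes into one audio chunk per video frame.
// samples per frame = 44100 / 24; each stereo sample pair is 8 bytes (two floats).
var bytesPerFrame:int = int(44100 / FLV_FRAMERATE) * 8;
var frameAudio:ByteArray = new ByteArray();
var chunk:int = int(Math.min(bytesPerFrame, fullStreamSoundData.bytesAvailable));
fullStreamSoundData.readBytes(frameAudio, 0, chunk);
_baFlvEncoder.addFrame(bmpData, frameAudio);
```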
It works well in my project, but I will probably need to optimize the code.
Thanks to everyone who helped me!