I'm new to the Kinect SDK v1.7.
I'd like to know how to capture motion data from the sample at http://msdn.microsoft.com/en-us/library/jj131041.aspx.
How do I create a procedure that captures the skeleton data to a file (record), and then reads the file back into the sample program and animates the model from it (playback)?
My idea is to record the skeleton data to a file, then load the skeleton data from the file and have the "Avatar" play it back.
I can already do what I want in another sample program (http://msdn.microsoft.com/en-us/library/hh855381), but that sample only draws lines and skeleton points.
For example:
00001 00:00:00.0110006@353,349,354,332,358,249,353,202,310,278,286,349,269,407,266,430,401,279,425,349,445,408,453,433,332,369,301,460,276,539,269,565,372,370,379,466,387,548,389,575,
00002 00:00:00.0150008@352,349,353,332,356,249,352,202,309,278,284,349,266,406,263,430,398,279,424,349,445,408,453,433,331,369,301,461,277,541,271,566,371,371,379,466,387,548,390,575,
[frame number] [timestamp]@[skeleton position coordinates]
In this example, I assume the skeleton positions are in Joint ID order.
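A line in this format can be pulled apart with two splits (first on the space, then on the '@'); below is a minimal sketch, assuming the layout above (the FrameLineParser name is my own):

```csharp
using System;
using System.Globalization;
using System.Linq;

static class FrameLineParser
{
    // Splits "00001 00:00:00.0110006@353,349,..." into its parts:
    // frame number, timestamp, and the flat list of coordinates.
    public static (int Frame, TimeSpan Timestamp, float[] Coords) Parse(string line)
    {
        string[] head = line.Split(' ');              // "00001" | "00:00:00.0110006@353,..."
        string[] stampAndCoords = head[1].Split('@'); // timestamp | coordinate list
        float[] coords = stampAndCoords[1]
            .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
            .Select(c => float.Parse(c, CultureInfo.InvariantCulture))
            .ToArray();
        return (int.Parse(head[0]),
                TimeSpan.Parse(stampAndCoords[0], CultureInfo.InvariantCulture),
                coords);
    }
}
```

The trailing comma in each line produces an empty final field, which `StringSplitOptions.RemoveEmptyEntries` discards.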
Thank you (and forgive my poor English).

Answer 0 (score: 2)
You can use a StreamWriter: initialize it with the path you choose, then for each frame increment a frame counter, write it to the file, write the timestamp, and then loop over the joints and write them to the file. I would do it like this:
using System.IO;
using System.Linq;

StreamWriter writer = new StreamWriter(@path);
int frames = 0;
Skeleton[] skeletons = new Skeleton[6]; // FrameSkeletonArrayLength
...
void AllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    frames++;
    using (SkeletonFrame sFrame = e.OpenSkeletonFrame())
    {
        if (sFrame == null)
            return;
        sFrame.CopySkeletonDataTo(skeletons);
        Skeleton skeleton = (from s in skeletons
                             where s.TrackingState == SkeletonTrackingState.Tracked
                             select s).FirstOrDefault();
        if (skeleton == null)
            return;
        writer.Write("{0} {1}@", frames, sFrame.Timestamp); // format the timestamp however you prefer
        foreach (Joint joint in skeleton.Joints)
        {
            writer.Write(joint.Position.X + "," + joint.Position.Y + "," + joint.Position.Z + ",");
        }
        writer.Write(Environment.NewLine);
    }
}
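One caveat with the writing code above: `float.ToString` uses the current culture, so on a machine whose decimal separator is a comma the positions themselves would contain commas and corrupt the comma-separated format. A small sketch of a culture-safe formatter (the JointWriter/FormatJoint names are my own, not part of the SDK):

```csharp
using System.Globalization;

static class JointWriter
{
    // Formats one X,Y,Z triple with the invariant culture so the decimal
    // separator is always '.', never ',' (which would break the format).
    public static string FormatJoint(float x, float y, float z)
    {
        var inv = CultureInfo.InvariantCulture;
        return string.Format(inv, "{0},{1},{2},", x, y, z);
    }
}
```

Reading the values back with `float.Parse(s, CultureInfo.InvariantCulture)` keeps both sides culture-independent.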
Then to read back from the file:
int frame = 0;
JointCollection joints;
...
string[] lines = File.ReadAllLines(@path);
...
void AllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    canvas.Children.Clear();
    string[] coords = lines[frame].Split('@')[1].Split(',');
    int jointIndex = 0;
    for (int i = 0; i + 2 < coords.Length; i += 3)
    {
        // Joint and SkeletonPoint are structs: copy out, modify, copy back.
        Joint joint = joints[(JointType)jointIndex];
        SkeletonPoint position = joint.Position;
        position.X = float.Parse(coords[i]);
        position.Y = float.Parse(coords[i + 1]);
        position.Z = float.Parse(coords[i + 2]);
        joint.Position = position;
        joints[(JointType)jointIndex] = joint;
        jointIndex++;
    }
    DepthImageFrame depthFrame = e.OpenDepthImageFrame();
    if (depthFrame == null)
        return;
    canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.HipCenter, JointType.Spine, JointType.ShoulderCenter, JointType.Head }, depthFrame, canvas));
    canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.ShoulderCenter, JointType.ShoulderLeft, JointType.ElbowLeft, JointType.WristLeft, JointType.HandLeft }, depthFrame, canvas));
    canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.ShoulderCenter, JointType.ShoulderRight, JointType.ElbowRight, JointType.WristRight, JointType.HandRight }, depthFrame, canvas));
    canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.HipCenter, JointType.HipLeft, JointType.KneeLeft, JointType.AnkleLeft, JointType.FootLeft }, depthFrame, canvas));
    canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.HipCenter, JointType.HipRight, JointType.KneeRight, JointType.AnkleRight, JointType.FootRight }, depthFrame, canvas));
    depthFrame.Dispose();
    frame++;
}
Point GetDisplayPosition(Joint joint, DepthImageFrame depthFrame, Canvas skeleton)
{
    KinectSensor sensor = KinectSensor.KinectSensors[0];
    DepthImageFormat depthImageFormat = sensor.DepthStream.Format;
    DepthImagePoint depthPoint = sensor.CoordinateMapper.MapSkeletonPointToDepthPoint(joint.Position, depthImageFormat);
    // depthPoint is already in depth-pixel coordinates, so just clamp to the 320x240 range
    float depthX = Math.Max(0, Math.Min(depthPoint.X, 320));
    float depthY = Math.Max(0, Math.Min(depthPoint.Y, 240));
    ColorImagePoint colorPoint = sensor.CoordinateMapper.MapDepthPointToColorPoint(depthImageFormat, depthPoint, ColorImageFormat.RgbResolution640x480Fps30);
    int colorX = colorPoint.X;
    int colorY = colorPoint.Y;
    return new System.Windows.Point((int)(skeleton.Width * colorX / 640.0), (int)(skeleton.Height * colorY / 480.0));
}
Polyline GetBodySegment(JointCollection joints, Brush brush, JointType[] ids, DepthImageFrame depthFrame, Canvas canvas)
{
    PointCollection points = new PointCollection(ids.Length);
    for (int i = 0; i < ids.Length; ++i)
    {
        // index by the joint type, not the loop counter
        points.Add(GetDisplayPosition(joints[ids[i]], depthFrame, canvas));
    }
    Polyline polyline = new Polyline();
    polyline.Points = points;
    polyline.Stroke = brush;
    polyline.StrokeThickness = 5;
    return polyline;
}
Of course, this only works in WPF. For the Avateering sample you would just need to change the code that uses it:
DepthImageFrame depthFrame = e.OpenDepthImageFrame();
canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.HipCenter, JointType.Spine, JointType.ShoulderCenter, JointType.Head }, depthFrame, canvas));
canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.ShoulderCenter, JointType.ShoulderLeft, JointType.ElbowLeft, JointType.WristLeft, JointType.HandLeft }, depthFrame, canvas));
canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.ShoulderCenter, JointType.ShoulderRight, JointType.ElbowRight, JointType.WristRight, JointType.HandRight }, depthFrame, canvas));
canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.HipCenter, JointType.HipLeft, JointType.KneeLeft, JointType.AnkleLeft, JointType.FootLeft }, depthFrame, canvas));
canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.HipCenter, JointType.HipRight, JointType.KneeRight, JointType.AnkleRight, JointType.FootRight }, depthFrame, canvas));
depthFrame.Dispose();
If you find where the sample animates the model, you can even create a new Skeleton, copy the joints into Skeleton.Joints, and then pass that skeleton in as the "detected" skeleton. Note that you will need to change any other variables that the functions used in that sample require. I'm not familiar with the sample, so I can't give specific method names, but you can replace the global Skeleton it detects with one you create at the start:

//in the game class (AvateeringXNA.cs)
int frame = 0;
JointCollection joints;
Skeleton recorded = new Skeleton();
...
string[] lines = File.ReadAllLines(@path);
...
void Update(...)
{
    string[] coords = lines[frame].Split('@')[1].Split(',');
    int jointIndex = 0;
    for (int i = 0; i + 2 < coords.Length; i += 3)
    {
        // Joint and SkeletonPoint are structs: copy out, modify, copy back.
        Joint joint = joints[(JointType)jointIndex];
        SkeletonPoint position = joint.Position;
        position.X = float.Parse(coords[i]);
        position.Y = float.Parse(coords[i + 1]);
        position.Z = float.Parse(coords[i + 2]);
        joint.Position = position;
        joints[(JointType)jointIndex] = joint;
        jointIndex++;
    }
    recorded.Joints = joints; // if Joints has no public setter, copy the entries individually instead
    ...
    //perform the necessary methods, except with the recorded skeleton instead of the detected one; I think it is:
    this.animator.CopySkeleton(recorded);
    this.animator.FloorClipPlane = skeletonFrame.FloorClipPlane;
    // Reset the filters if the skeleton was not seen before now
    if (this.skeletonDetected == false)
    {
        this.animator.Reset();
    }
    this.skeletonDetected = true;
    this.animator.SkeletonVisible = true;
    ...
    frame++;
}
and update it every frame.

EDIT:

When you read the initial floor clip plane, your code reads clipPlanes[0], which holds the whole frame info up to the first space. Here is how a frame is laid out:

frame# timestamp@joint1PosX,joint1PosY,joint1PosZ,...jointNPosX,jointNPosY,jointNPosZ floorX floorY floorZ floorW

and here is the array produced by .Split(' '):

["frame#", "timestamp@joint1PosX,joint1PosY,joint1PosZ,...jointNPosX,jointNPosY,jointNPosZ", "floorX", "floorY", "floorZ", "floorW"]

So, for the example input:

00000002 10112@10,10,10... 11 12 13 14

your code would get:

[2, 10112101010..., 11, 12]

With the corrected indexes in my code:

var newFloorClipPlane = Tuple.Create(Single.Parse(clipPlanes[2]), Single.Parse(clipPlanes[3]), Single.Parse(clipPlanes[4]), Single.Parse(clipPlanes[5]));

you get:

[11, 12, 13, 14]

It is also quick to drop this line into a console application and see what it outputs:

Console.WriteLine(Convert.ToSingle("10,10"));

The output is 1010, which creates the wrong floor clip plane for what you are trying to accomplish. You need the proper indexes to achieve what you want.

Note: I changed Convert.ToSingle to Single.Parse because that is better practice; in the stack trace you can see they both perform the same function.
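Putting the corrected indexes together, here is a minimal, self-contained sketch of pulling the floor clip plane out of one recorded line (assuming the line layout described above; FloorClipPlaneParser is my own name):

```csharp
using System;
using System.Globalization;

static class FloorClipPlaneParser
{
    // For a line "frame# timestamp@coords floorX floorY floorZ floorW",
    // Split(' ') puts the four floor values at indexes 2..5.
    public static Tuple<float, float, float, float> Parse(string line)
    {
        var inv = CultureInfo.InvariantCulture;
        string[] parts = line.Split(' ');
        return Tuple.Create(
            Single.Parse(parts[2], inv),
            Single.Parse(parts[3], inv),
            Single.Parse(parts[4], inv),
            Single.Parse(parts[5], inv));
    }
}
```

Passing CultureInfo.InvariantCulture also rules out the "10,10" → 1010 surprise, since the invariant overload is explicit about what counts as a separator.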
Answer 1 (score: 0)
Hey, why not use the CSV approach and write all the joint data into an Excel-readable file? It helps you analyze the data later. I've customized my code to put everything in CSV format, which helps me with post-analysis. You can write a separate method in your project that exports all the skeleton data:
public void CoordinatesExportToCSV(Skeleton data)
{
    if (!TimeRecorded)
    {
        startTime = DateTime.Now;
        TimeRecorded = true;
    }
    recordedSamples[1]++;
    if (!titles)
    {
        // Write the header row once.
        sw1.Write("Counter,Time,Clipped Edges,");
        foreach (Joint joint in data.Joints)
        {
            sw1.Write(joint.JointType.ToString() + ",");
        }
        sw1.WriteLine();
        titles = true;
    }
    double a = DateTime.Now.TimeOfDay.TotalSeconds - startTime.TimeOfDay.TotalSeconds;
    sw1.Write(recordedSamples[1] + "," + a + "," + data.ClippedEdges + ",");
    foreach (Joint joint in data.Joints)
    {
        sw1.Write(joint.Position.X + "|" + joint.Position.Y + "|" + joint.Position.Z + ",");
    }
    sw1.WriteLine();
}
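Since each joint cell packs X|Y|Z into a single CSV field, reading a row back is just two splits. A minimal sketch under that assumption (CsvJointReader is my own name, not part of the SDK):

```csharp
using System.Globalization;
using System.Linq;

static class CsvJointReader
{
    // A data row looks like: counter,time,clippedEdges,x|y|z,x|y|z,...
    // Returns the per-joint (X, Y, Z) triples from one row.
    public static (float X, float Y, float Z)[] ParseRow(string row)
    {
        var inv = CultureInfo.InvariantCulture;
        return row.Split(',')
                  .Skip(3)                              // skip counter, time, clipped edges
                  .Where(cell => cell.Contains("|"))    // ignore the trailing empty field
                  .Select(cell =>
                  {
                      string[] p = cell.Split('|');
                      return (float.Parse(p[0], inv),
                              float.Parse(p[1], inv),
                              float.Parse(p[2], inv));
                  })
                  .ToArray();
    }
}
```

The same triples could then feed the playback code from the first answer, or be pasted straight into Excel for analysis.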