Kinect Overview
Platform for Situated Intelligence supports reading from a Microsoft Kinect V2 Depth Camera (for the new Azure Kinect device, see the Azure Kinect component). This includes capture of video, audio, body tracking, and face tracking streams from the Kinect. A sample application here demonstrates its use.
Please note:
- Support for Kinect is currently limited to Windows only.
- Support for face tracking via Kinect is limited to Windows 64-bit applications.
Basic Kinect capabilities are provided by instantiating a KinectSensor component, which is part of the Microsoft.Psi.Kinect namespace. Support for Kinect face tracking is provided by instantiating a KinectFaceDetector component, which is part of the Microsoft.Psi.Kinect.Face namespace.
The following are some examples of how to use the Kinect sensor in \psi. The first example shows how to create a KinectSensor and receive images from the Kinect's color camera.
using Microsoft.Psi;
using Microsoft.Psi.Kinect;

using (var pipeline = Pipeline.Create())
{
    // Configure the sensor to emit color images.
    var kinectSensorConfig = new KinectSensorConfiguration();
    kinectSensorConfig.OutputColor = true;
    var kinectSensor = new KinectSensor(pipeline, kinectSensorConfig);
    kinectSensor.ColorImage.Do((img, e) =>
    {
        // Do something with the image
    });
    pipeline.Run();
}
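Other streams are enabled in the same way via the corresponding configuration flags. As a minimal sketch of the same pattern applied to depth (this assumes an OutputDepth flag and a DepthImage stream mirroring the color case above, which the original sample does not show):

using Microsoft.Psi;
using Microsoft.Psi.Kinect;

using (var pipeline = Pipeline.Create())
{
    // Assumption: OutputDepth and DepthImage follow the same pattern as OutputColor/ColorImage.
    var kinectSensorConfig = new KinectSensorConfiguration();
    kinectSensorConfig.OutputDepth = true;
    var kinectSensor = new KinectSensor(pipeline, kinectSensorConfig);
    kinectSensor.DepthImage.Do((depthImg, e) =>
    {
        // Do something with the depth image
    });
    pipeline.Run();
}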
The next example shows how to receive audio from the Kinect and convert the audio stream to a 16 kHz, single-channel, 16-bit PCM format.
using Microsoft.Psi;
using Microsoft.Psi.Audio;
using Microsoft.Psi.Kinect;

using (var pipeline = Pipeline.Create())
{
    // Configure the sensor to emit audio.
    var kinectSensorConfig = new KinectSensorConfiguration();
    kinectSensorConfig.OutputAudio = true;
    var kinectSensor = new KinectSensor(pipeline, kinectSensorConfig);
    var convertedAudio = kinectSensor.Audio.Resample(WaveFormat.Create16kHz1Channel16BitPcm());
    convertedAudio.Do((audio, e) =>
    {
        // Do something with the audio block
    });
    pipeline.Run();
}
This final example demonstrates how to use the Kinect to perform face tracking. For each detected face, it prints whether the person's mouth is open or closed. Note that face tracking on the Kinect relies on body tracking, so OutputBodies must also be enabled in the KinectSensorConfiguration.
using System;
using System.Collections.Generic;
using Microsoft.Psi;
using Microsoft.Psi.Kinect;

using (var pipeline = Pipeline.Create())
{
    // Face tracking requires body tracking to be enabled as well.
    var kinectSensorConfig = new KinectSensorConfiguration();
    kinectSensorConfig.OutputFaces = true;
    kinectSensorConfig.OutputBodies = true;
    var kinectSensor = new KinectSensor(pipeline, kinectSensorConfig);
    var faceTracker = new Face.KinectFaceDetector(pipeline, kinectSensor, Face.KinectFaceDetectorConfiguration.Default);
    faceTracker.Faces.Do((List<Face.KinectFace> list) =>
    {
        for (int i = 0; i < list.Count; i++)
        {
            if (list[i] != null)
            {
                string mouthIsOpen = "closed";
                if (list[i].FaceProperties[Microsoft.Kinect.Face.FaceProperty.MouthOpen] == Microsoft.Kinect.DetectionResult.Yes)
                {
                    mouthIsOpen = "open";
                }

                Console.WriteLine($"Person={i} mouth is {mouthIsOpen}");
            }
        }
    });
    pipeline.Run();
}
The KinectSensor component emits all of its calibration, joint, and body orientation information in the coordinate system basis of MathNet.Spatial, which is a different basis assumption from that used by the underlying sensor technology. All coordinate systems are immediately rebased inside the component such that the X-axis represents "forward", the Y-axis represents "left", and the Z-axis represents "up". All coordinate system information emitted by these components adheres to this basis:
      Z
      |   X
      |  /
      | /
Y <---+
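For instance, a body joint position read from this component can be interpreted directly in that basis. The sketch below is a minimal illustration, assuming that enabling OutputBodies yields a Bodies stream of tracked bodies and that each joint pose is exposed as a MathNet.Spatial CoordinateSystem keyed by JointType (the Joints dictionary access shown here is an assumption for illustration, not a verified API):

using System;
using MathNet.Spatial.Euclidean;
using Microsoft.Kinect;
using Microsoft.Psi;
using Microsoft.Psi.Kinect;

using (var pipeline = Pipeline.Create())
{
    var kinectSensorConfig = new KinectSensorConfiguration();
    kinectSensorConfig.OutputBodies = true;
    var kinectSensor = new KinectSensor(pipeline, kinectSensorConfig);
    kinectSensor.Bodies.Do((bodies, e) =>
    {
        if (bodies.Count > 0)
        {
            // Assumption: each joint pose is a MathNet.Spatial CoordinateSystem,
            // already rebased so that X = forward, Y = left, Z = up.
            CoordinateSystem head = bodies[0].Joints[JointType.Head];
            Console.WriteLine($"Head: forward={head.Origin.X:F2} left={head.Origin.Y:F2} up={head.Origin.Z:F2}");
        }
    });
    pipeline.Run();
}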