
Question about feasibility of an idea, re: motion detection #30

Open
towolf opened this issue May 11, 2015 · 15 comments

Comments


towolf commented May 11, 2015

One other project I’m following on Github is https://github.com/dickontoo/omxmotion.

What it does is re-implement the camera interface with OMX instead of MMAL. The clever part is that it reads the motion vectors exported by the hardware H.264 encoder to detect motion. If the vectors sum above a threshold, file writing starts. In parallel, a continuous multicast network stream is sent out.

Now this works as a proof of concept, but I was wondering whether it wouldn't be easier to write a GStreamer module that acts as a "frame valve". It would receive video frames and motion vector data from rpicamsrc, analyze that data, and decide whether to pass video on to its sink, like this:

[rpicamsrc]  → [stuff] → [network streaming]
             ↓
             [omxmotion as frame "valve"] → [movmux] → [multifilesink]

omxmotion uses ffmpeg to implement all the other boxes, and that seems quite hard.

So my idea is a plug-in motion detection module. Would that be feasible in the GStreamer architecture?
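The valve idea, stripped of everything GStreamer-specific, can be sketched in plain Python. This is a conceptual illustration only: the frame objects, per-frame motion scores, and the hold-off parameter are all hypothetical stand-ins, not rpicamsrc or GStreamer API.

```python
# Sketch of the "frame valve" idea: pass frames downstream only while
# motion is active, plus a short hold-off so recordings don't end abruptly.
# Frames and motion scores are hypothetical stand-ins, not GStreamer API.

def frame_valve(frames, motion_scores, threshold=100, holdoff=25):
    """Yield only the frames that belong to a motion event.

    frames        -- iterable of opaque frame objects
    motion_scores -- per-frame sums of motion-vector magnitudes
    threshold     -- score at or above which motion counts as present
    holdoff       -- number of frames to keep passing after motion stops
    """
    remaining = 0
    for frame, score in zip(frames, motion_scores):
        if score >= threshold:
            remaining = holdoff  # motion seen: (re)arm the hold-off window
        if remaining > 0:
            remaining -= 1
            yield frame

scores = [0, 0, 150, 0, 0, 0, 0]
kept = list(frame_valve(range(7), scores, threshold=100, holdoff=3))
# frames 2, 3, 4 pass: the triggering frame plus the hold-off window
```

In a real element this decision would sit on the pad between rpicamsrc and the muxer; the Python generator just shows the gating logic.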


thaytan commented May 12, 2015

I haven't looked at what requesting motion vectors provides in mmal. I thought it just wrote things into the video - does it provide external data? If so, we could attach it to a buffer using a private Meta for a downstream element to access, or we just do the analysis within rpicamsrc and announce motion start/stop events (which we'd also invent) for the downstream valve element to process.


towolf commented May 12, 2015

Yes, they provide raw data. raspivid can write this to a file:

-x, --vectors : Output filename <filename> for inline motion vectors



towolf commented May 12, 2015

Also, would it be possible to have some kind of ring-buffer to enable pre-trigger recording in Gstreamer?


thaytan commented May 12, 2015

Aha - should be doable then. And yes, it's possible to hold a pre-trigger ringbuffer using queue or queue2. Other people have done it, but I'm not sure it's written up anywhere. Doing it on pre-encoded data, you'd want to ensure the output starts on the earliest keyframe before the trigger - a custom buffer probe might be the best way.
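The pre-trigger idea, independent of any particular GStreamer element, can be sketched in plain Python. The (is_keyframe, payload) tuples are hypothetical stand-ins for encoded buffers; in a real pipeline a queue element and a buffer probe would play this role.

```python
from collections import deque

# Conceptual pre-trigger ringbuffer: keep the last `capacity` encoded frames,
# and on trigger flush them starting from the earliest keyframe still held,
# so the recording begins on decodable data. Frames are hypothetical
# (is_keyframe, payload) tuples, not real GStreamer buffers.

class PreTriggerBuffer:
    def __init__(self, capacity):
        self.frames = deque(maxlen=capacity)  # oldest frames fall off the front

    def push(self, is_keyframe, payload):
        self.frames.append((is_keyframe, payload))

    def flush_from_keyframe(self):
        """Return buffered frames starting at the earliest keyframe."""
        frames = list(self.frames)
        self.frames.clear()
        for i, (is_key, _payload) in enumerate(frames):
            if is_key:  # earliest keyframe still in the buffer
                return frames[i:]
        return []  # no keyframe buffered yet; nothing decodable

buf = PreTriggerBuffer(capacity=5)
for n, key in enumerate([True, False, False, True, False, False]):
    buf.push(key, n)
# capacity 5: frames 1..5 remain; the earliest buffered keyframe is frame 3
assert [p for _, p in buf.flush_from_keyframe()] == [3, 4, 5]
```

With queue or queue2 the capacity would be expressed as a time or byte limit rather than a frame count, and the keyframe trimming is what the custom buffer probe would do.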


towolf commented May 16, 2015

Just FYI, for whoever is interested in implementing this: the motion detection on the vector data seems to be fairly straightforward: motion.c#L21.

I'm not sure about the map of weights that is read from an external PNG image.
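The per-frame analysis boils down to counting macroblocks whose vectors exceed a magnitude threshold. A rough Python equivalent, assuming raspivid's inline-vector layout of one 4-byte record per macroblock (signed x, signed y, unsigned 16-bit SAD); treat the exact layout and the threshold values as assumptions, not a transcription of motion.c:

```python
import struct

# Sketch of compressed-domain motion detection on inline vector data.
# Assumed record layout per macroblock: int8 x, int8 y, uint16 SAD
# (little-endian), as written by raspivid -x. Thresholds are made up.

def motion_detected(vector_frame, mag_threshold=2, count_threshold=10):
    """Return True if enough macroblocks moved more than mag_threshold."""
    moving = 0
    for off in range(0, len(vector_frame) - 3, 4):
        x, y, _sad = struct.unpack_from('<bbH', vector_frame, off)
        if x * x + y * y > mag_threshold * mag_threshold:
            moving += 1
    return moving >= count_threshold

# Synthetic frame: 20 still macroblocks followed by 12 moving ones
still = struct.pack('<bbH', 0, 0, 0) * 20
moving = struct.pack('<bbH', 5, -4, 100) * 12
assert motion_detected(still + moving) is True
assert motion_detected(still) is False
```

The weight map from the PNG would slot in here as a per-macroblock multiplier before the threshold comparison.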


towolf commented May 18, 2015

A quick question, that is only tangentially related to this issue.

I was trying out multifilesink for segmented file writing via matroskamux. For some reason it muxes two H264 streams into the files (one with frames, one empty), which seems to annoy some tools like ffmpeg.

Where in this script do two streams come in?

#!/bin/sh
date=$(date +%Y%m%d%H%M)
gst-launch-1.0 -e rpicamsrc bitrate=0 quantisation-parameter=20 metering-mode=matrix preview=false  ! \
                'video/x-h264,profile=high,width=720,height=576,framerate=25/1' ! \
                h264parse ! matroskamux ! \
                multifilesink next-file=max-size max-file-size=10485760 location=/mnt/snopai-$date-%05d.mkv
exit $?

ffprobe info:

[matroska,webm @ 0x1d8e0e0] Read error at pos. 301 (0x12d)
[matroska,webm @ 0x1d8e0e0] Could not find codec parameters for stream 1 (Video: h264, none, 720x576): unspecified pixel format
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Input #0, matroska,webm, from 'snopai-201505181157-00001.mkv':
  Metadata:
    encoder         : GStreamer plugin version 1.4.4
    creation_time   : 2015-05-18 09:57:29
  Duration: N/A, start: 26.679000, bitrate: N/A
    Stream #0:0(eng): Video: h264 (High), yuv420p, 720x576, SAR 1:1 DAR 5:4, 25 fps, 25 tbr, 1k tbn, 2k tbc (default)
    Metadata:
      title           : Video
    Stream #0:1(eng): Video: h264, none, 720x576, SAR 1:1 DAR 5:4, 25 fps, 25 tbr, 1k tbn, 2k tbc (default)
    Metadata:
      title           : Video


thaytan commented May 18, 2015

Try with matroskamux streamable=true?



towolf commented May 18, 2015

Spot on. Thanks!


thaytan commented May 18, 2015

Then it's probably matroskamux trying to seek back and rewrite the headers, which it can't do properly once multifilesink has switched to a new file downstream.


towolf commented May 18, 2015

Actually, scratch what I wrote before. The first chunk has one stream and is accepted as valid; it's the subsequent chunks that have two streams ...

I just looked at the first one.


thaytan commented May 18, 2015

Ah, then I guess that doesn't work because multifilesink doesn't know how to put a valid Matroska header on the files when it switches to a new one.

If you're using GStreamer from git, you can use the splitmuxsink element I added earlier in the year instead, but it hasn't been in any release yet.


towolf commented May 18, 2015

I was just trying to extract libgstmpegtsmux.so to try muxing MPEG-TS. AFAIK that is more easily concatenable? Or wouldn't that work either?

BTW, why do the GStreamer packages have so many dependencies in Debian? I don't really want to pull long chains of libraries onto my slim system. Luckily I found out that extracting the .so files I need is sufficient.


thaytan commented May 18, 2015

Yes, mpegtsmux should be more splittable, at PAT/PMT boundaries.

The packages have a lot of dependencies because Debian packages elements together in large bundles instead of splitting them out into individual packages.

@chetanbnaik

Hi,

It would be nice to have motion vector data from MMAL through rpicamsrc. Your suggestion to include motion vectors as metadata accessible to downstream GStreamer elements is feasible.

Any progress on this enhancement?

Currently, I have implemented the algorithm from https://www.researchgate.net/publication/221315082_Fast_Compressed_Domain_Motion_Detection_in_H264_Video_Streams_for_Video_Surveillance_Applications using picamera. As soon as motion is detected, it switches back to the GStreamer implementation to stream the activity over the network. If rpicamsrc can provide motion vector data, I can build a better implementation.


thaytan commented Jan 21, 2016

I don't know of anyone working on a patch for me. Getting the motion vectors doesn't look that hard - just add an element property and call mmal_port_parameter_set_boolean(encoder_output, MMAL_PARAMETER_VIDEO_ENCODE_INLINE_VECTORS, TRUE);

The open question is how to hand off the motion vectors usefully.
