diff --git a/.github/FUNDING.yml b/.github/FUNDING.yml index 15479c1bb..26d6f75be 100644 --- a/.github/FUNDING.yml +++ b/.github/FUNDING.yml @@ -1,2 +1,2 @@ ko_fi: abhitronix -custom: https://paypal.me/AbhiTronix \ No newline at end of file +liberapay: abhiTronix \ No newline at end of file diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md index 7b1b90e7a..44fd156b5 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -1,7 +1,8 @@ --- name: Bug report about: Create a bug-report for VidGear -labels: "issue: bug" +labels: ':beetle: BUG' +assignees: 'abhiTronix' --- - [ ] I have searched the [issues](https://github.com/abhiTronix/vidgear/issues) for my issue and found nothing related or helpful. -- [ ] I have read the [Documentation](https://abhitronix.github.io/vidgear). -- [ ] I have read the [Issue Guidelines](https://abhitronix.github.io/vidgear/contribution/issue/#submitting-an-issue-guidelines). +- [ ] I have read the [Documentation](https://abhitronix.github.io/vidgear/latest). +- [ ] I have read the [Issue Guidelines](https://abhitronix.github.io/vidgear/latest/contribution/issue/#submitting-an-issue-guidelines). ### Environment diff --git a/.github/ISSUE_TEMPLATE/proposal.md b/.github/ISSUE_TEMPLATE/proposal.md index f4df01214..a75eefaa4 100644 --- a/.github/ISSUE_TEMPLATE/proposal.md +++ b/.github/ISSUE_TEMPLATE/proposal.md @@ -1,7 +1,7 @@ --- name: Proposal about: Suggest an idea for improving VidGear -labels: "issue: proposal" +labels: 'PROPOSAL :envelope_with_arrow:' --- diff --git a/.github/ISSUE_TEMPLATE/question.md b/.github/ISSUE_TEMPLATE/question.md index a7de08f00..3da69a789 100644 --- a/.github/ISSUE_TEMPLATE/question.md +++ b/.github/ISSUE_TEMPLATE/question.md @@ -1,7 +1,7 @@ --- name: Question about: Have any questions regarding VidGear? -labels: "issue: question" +labels: 'QUESTION :question:' --- @@ -18,8 +18,8 @@ _Kindly describe the issue here._ - [ ] I have searched the [issues](https://github.com/abhiTronix/vidgear/issues) for my issue and found nothing related or helpful. -- [ ] I have read the [FAQs](https://abhitronix.github.io/vidgear/help/get_help/#frequently-asked-questions). -- [ ] I have read the [Documentation](https://abhitronix.github.io/vidgear). +- [ ] I have read the [FAQs](https://abhitronix.github.io/vidgear/latest/help/get_help/#frequently-asked-questions). +- [ ] I have read the [Documentation](https://abhitronix.github.io/vidgear/latest). diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index 7f811e4f9..c3c1a176e 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -11,8 +11,8 @@ _Kindly explain the changes you made here._ -- [ ] I have read the [PR Guidelines](https://abhitronix.github.io/vidgear/contribution/PR/#submitting-pull-requestpr-guidelines). -- [ ] I have read the [Documentation](https://abhitronix.github.io/vidgear). +- [ ] I have read the [PR Guidelines](https://abhitronix.github.io/vidgear/latest/contribution/PR/#submitting-pull-requestpr-guidelines). +- [ ] I have read the [Documentation](https://abhitronix.github.io/vidgear/latest). - [ ] I have updated the documentation accordingly(if required). diff --git a/.github/config.yml b/.github/config.yml index 3f8db32a9..b118fb12d 100644 --- a/.github/config.yml +++ b/.github/config.yml @@ -2,9 +2,9 @@ newPRWelcomeComment: | Thanks so much for opening your first PR here, a maintainer will get back to you shortly! 
### In the meantime: - - Read our [Pull Request(PR) Guidelines](https://abhitronix.github.io/vidgear/contribution/PR/#submitting-pull-requestpr-guidelines) for submitting a valid PR for VidGear. - - Submit a [issue](https://abhitronix.github.io/vidgear/contribution/issue/#submitting-an-issue-guidelines) beforehand for your Pull Request. - - Go briefly through our [PR FAQ section](https://abhitronix.github.io/vidgear/contribution/PR/#frequently-asked-questions). + - Read our [Pull Request(PR) Guidelines](https://abhitronix.github.io/vidgear/latest/contribution/PR/#submitting-pull-requestpr-guidelines) for submitting a valid PR for VidGear. + - Submit a [issue](https://abhitronix.github.io/vidgear/latest/contribution/issue/#submitting-an-issue-guidelines) beforehand for your Pull Request. + - Go briefly through our [PR FAQ section](https://abhitronix.github.io/vidgear/latest/contribution/PR/#frequently-asked-questions). firstPRMergeComment: | Congrats on merging your first pull request here! :tada: You're awesome! @@ -14,6 +14,6 @@ newIssueWelcomeComment: | Thanks for opening this issue, a maintainer will get back to you shortly! ### In the meantime: - - Read our [Issue Guidelines](https://abhitronix.github.io/vidgear/contribution/issue/#submitting-an-issue-guidelines), and update your issue accordingly. Please note that your issue will be fixed much faster if you spend about half an hour preparing it, including the exact reproduction steps and a demo. - - Go comprehensively through our dedicated [FAQ & Troubleshooting section](https://abhitronix.github.io/vidgear/help/get_help/#frequently-asked-questions). + - Read our [Issue Guidelines](https://abhitronix.github.io/vidgear/latest/contribution/issue/#submitting-an-issue-guidelines), and update your issue accordingly. Please note that your issue will be fixed much faster if you spend about half an hour preparing it, including the exact reproduction steps and a demo. + - Go comprehensively through our dedicated [FAQ & Troubleshooting section](https://abhitronix.github.io/vidgear/latest/help/get_help/#frequently-asked-questions). - For any quick questions and typos, please refrain from opening an issue, as you can reach us on [Gitter](https://gitter.im/vidgear/community) community channel. diff --git a/.github/needs-more-info.yml b/.github/needs-more-info.yml index e6dea748c..f3e05b1d5 100644 --- a/.github/needs-more-info.yml +++ b/.github/needs-more-info.yml @@ -1,9 +1,10 @@ checkTemplate: true miniTitleLength: 8 -labelToAdd: 'MISSING : INFORMATION :mag:' +labelToAdd: 'MISSING : TEMPLATE :grey_question:' issue: reactions: - eyes + - '-1' badTitles: - update - updates @@ -12,6 +13,7 @@ issue: - debug - demo - new + - help badTitleComment: > @{{ author }} Please re-edit this issue title to provide more relevant info. diff --git a/.github/no-response.yml b/.github/no-response.yml new file mode 100644 index 000000000..deffa6b17 --- /dev/null +++ b/.github/no-response.yml @@ -0,0 +1,13 @@ +# Configuration for probot-no-response - https://github.com/probot/no-response + +# Number of days of inactivity before an Issue is closed for lack of response +daysUntilClose: 1 +# Label requiring a response +responseRequiredLabel: 'MISSING : INFORMATION :mag:' +# Comment to post when closing an Issue for lack of response. Set to `false` to disable +closeComment: > + ### No Response :-1: + + This issue has been automatically closed because there has been no response + to our request for more information from the original author. 
Kindly provide + requested information so that we can investigate this issue further. \ No newline at end of file diff --git a/.github/workflows/ci_linux.yml b/.github/workflows/ci_linux.yml index b95c72a76..dd9dd851f 100644 --- a/.github/workflows/ci_linux.yml +++ b/.github/workflows/ci_linux.yml @@ -1,4 +1,4 @@ -# Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +# Copyright (c) 2019 Abhishek Thakur(@abhiTronix) # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -51,7 +51,7 @@ jobs: pip install -U pip wheel numpy pip install -U .[asyncio] pip uninstall opencv-python -y - pip install -U flake8 six codecov pytest pytest-asyncio pytest-cov youtube-dl mpegdash + pip install -U flake8 six codecov pytest pytest-asyncio pytest-cov youtube-dl mpegdash paramiko m3u8 async-asgi-testclient if: success() - name: run prepare_dataset_script run: bash scripts/bash/prepare_dataset.sh diff --git a/.github/workflows/deploy_docs.yml b/.github/workflows/deploy_docs.yml index 48c92b485..cabaeebae 100644 --- a/.github/workflows/deploy_docs.yml +++ b/.github/workflows/deploy_docs.yml @@ -1,4 +1,4 @@ -# Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +# Copyright (c) 2019 Abhishek Thakur(@abhiTronix) # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -64,7 +64,7 @@ jobs: - name: mike deploy docs release run: | echo "${{ env.NAME_RELEASE }}" - mike deploy --push --update-aliases ${{ env.NAME_RELEASE }} ${{ env.RELEASE_NAME }} + mike deploy --push --update-aliases --no-redirect ${{ env.NAME_RELEASE }} ${{ env.RELEASE_NAME }} --title=${{ env.RELEASE_NAME }} env: NAME_RELEASE: "v${{ env.RELEASE_NAME }}-release" if: success() @@ -101,14 +101,10 @@ jobs: echo "RELEASE_NAME=$(python -c 'import vidgear; print(vidgear.__version__)')" >>$GITHUB_ENV shell: bash if: success() - - name: mike remove previous stable - run: | - mike delete --push latest - if: success() - name: mike deploy docs stable run: | echo "${{ env.NAME_STABLE }}" - mike deploy --push --update-aliases ${{ env.NAME_STABLE }} latest + mike deploy --push --update-aliases --no-redirect ${{ env.NAME_STABLE }} latest --title=latest mike set-default --push latest env: NAME_STABLE: "v${{ env.RELEASE_NAME }}-stable" @@ -150,8 +146,8 @@ jobs: if: success() - name: mike deploy docs dev run: | - echo "${{ env.NAME_DEV }}" - mike deploy --push --update-aliases ${{ env.NAME_DEV }} dev + echo "Releasing ${{ env.NAME_DEV }}" + mike deploy --push --update-aliases --no-redirect ${{ env.NAME_DEV }} dev --title=dev env: NAME_DEV: "v${{ env.RELEASE_NAME }}-dev" if: success() diff --git a/LICENSE b/LICENSE index b37f76f9b..e36f93110 100644 --- a/LICENSE +++ b/LICENSE @@ -186,7 +186,7 @@ same "printed page" as the copyright notice for easier identification within third-party archives. - Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) + Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/README.md b/README.md index f468514bc..b45a5dec1 100644 --- a/README.md +++ b/README.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -29,16 +29,16 @@ limitations under the License. [Releases][release]   |   [Gears][gears]   |   [Documentation][docs]   |   [Installation][installation]   |   [License](#license) -[![Build Status][github-cli]][github-flow] [![Codecov branch][codecov]][code] [![Build Status][appveyor]][app] +[![Build Status][github-cli]][github-flow] [![Codecov branch][codecov]][code] [![Azure DevOps builds (branch)][azure-badge]][azure-pipeline] -[![Azure DevOps builds (branch)][azure-badge]][azure-pipeline] [![PyPi version][pypi-badge]][pypi] [![Glitter chat][gitter-bagde]][gitter] +[![Glitter chat][gitter-bagde]][gitter] [![Build Status][appveyor]][app] [![PyPi version][pypi-badge]][pypi] [![Code Style][black-badge]][black]   -VidGear is a **High-Performance Video Processing Python Library** that provides an easy-to-use, highly extensible, thoroughly optimised **Multi-Threaded + Asyncio Framework** on top of many state-of-the-art specialized libraries like *[OpenCV][opencv], [FFmpeg][ffmpeg], [ZeroMQ][zmq], [picamera][picamera], [starlette][starlette], [streamlink][streamlink], [pafy][pafy], [pyscreenshot][pyscreenshot], [aiortc][aiortc] and [python-mss][mss]* serving at its backend, and enable us to flexibly exploit their internal parameters and methods, while silently delivering **robust error-handling and real-time performance 🔥** +VidGear is a **High-Performance Video Processing Python Library** that provides an easy-to-use, highly extensible, thoroughly optimised **Multi-Threaded + Asyncio API Framework** on top of many state-of-the-art specialized libraries like *[OpenCV][opencv], [FFmpeg][ffmpeg], [ZeroMQ][zmq], [picamera][picamera], [starlette][starlette], [streamlink][streamlink], [pafy][pafy], [pyscreenshot][pyscreenshot], [aiortc][aiortc] and [python-mss][mss]* serving at its backend, and enable us to flexibly exploit their internal parameters and methods, while silently delivering **robust error-handling and real-time performance 🔥** VidGear primarily focuses on simplicity, and thereby lets programmers and software developers to easily integrate and perform Complex Video Processing Tasks, in just a few lines of code. @@ -67,10 +67,10 @@ The following **functional block diagram** clearly depicts the generalized funct * [**WebGear**](#webgear) * [**WebGear_RTC**](#webgear_rtc) * [**NetGear_Async**](#netgear_async) -* [**Community Channel**](#community-channel) -* [**Contributions & Support**](#contributions--support) - * [**Support**](#support) +* [**Contributions & Community Support**](#contributions--community-support) + * [**Community Support**](#community-support) * [**Contributors**](#contributors) +* [**Donations**](#donations) * [**Citation**](#citation) * [**Copyright**](#copyright) @@ -81,21 +81,21 @@ The following **functional block diagram** clearly depicts the generalized funct -# TL;DR +## TL;DR #### What is vidgear? 
-> *"VidGear is a High-Performance Framework that provides an one-stop **Video-Processing** solution for building complex real-time media applications in python."*
+> *"VidGear is a cross-platform High-Performance Framework that provides a one-stop **Video-Processing** solution for building complex real-time media applications in Python."*

#### What does it do?

-> *"VidGear can read, write, process, send & receive video files/frames/streams from/to various devices in real-time, and faster than underline libraries."*
+> *"VidGear can read, write, process, send & receive video files/frames/streams from/to various devices in real-time, and [**faster**][TQM-doc] than the underlying libraries."*

#### What is its purpose?

> *"Write Less and Accomplish More"* — **VidGear's Motto**

-> *"Built with simplicity in mind, VidGear lets programmers and software developers to easily integrate and perform **Complex Video-Processing Tasks** in their existing or newer applications without going through hefty documentation and in just a [few lines of code][switch_from_cv]. Beneficial for both, if you're new to programming with Python language or already a pro at it."*
+> *"Built with simplicity in mind, VidGear lets programmers and software developers easily integrate and perform **Complex Video-Processing Tasks** in their existing or newer applications without going through hefty documentation and in just a [**few lines of code**][switch_from_cv]. Beneficial for both, whether you're new to programming with the Python language or already a pro at it."*

&nbsp;

@@ -105,11 +105,11 @@ The following **functional block diagram** clearly depicts the generalized funct

If this is your first time using VidGear, head straight to the [Installation ➶][installation] to install VidGear.

-Once you have VidGear installed, **Checkout its Well-Documented Function-Specific [Gears ➶][gears]**
+Once you have VidGear installed, **Check out its Well-Documented [Function-Specific Gears ➶][gears]**

Also, if you're already familiar with [OpenCV][opencv] library, then see [Switching from OpenCV Library ➶][switch_from_cv]

-Or, if you're just getting started with OpenCV, then see [here ➶](https://abhitronix.github.io/vidgear/latest/help/general_faqs/#im-new-to-python-programming-or-its-usage-in-computer-vision-how-to-use-vidgear-in-my-projects)
+Or, if you're just getting started with OpenCV-Python programming, then refer to this [FAQ ➶](https://abhitronix.github.io/vidgear/latest/help/general_faqs/#im-new-to-python-programming-or-its-usage-in-opencv-library-how-to-use-vidgear-in-my-projects)

&nbsp;

@@ -406,7 +406,9 @@ stream.stop()

> *WriteGear handles various powerful Video-Writer Tools that provide us the freedom to do almost anything imaginable with multimedia data.*

-WriteGear API provides a complete, flexible, and robust wrapper around [**FFmpeg**][ffmpeg], a leading multimedia framework. WriteGear can process real-time frames into a lossless compressed video-file with any suitable specification _(such as`bitrate, codec, framerate, resolution, subtitles, etc.`)_. It is powerful enough to perform complex tasks such as [Live-Streaming][live-stream] _(such as for Twitch and YouTube)_ and [Multiplexing Video-Audio][live-audio-doc] with real-time frames in way fewer lines of code.
+WriteGear API provides a complete, flexible, and robust wrapper around [**FFmpeg**][ffmpeg], a leading multimedia framework. WriteGear can process real-time frames into a lossless compressed video-file with any suitable specifications _(such as `bitrate, codec, framerate, resolution, subtitles, etc.`)_.
+
+WriteGear also supports streaming with traditional protocols such as RTMP and RTSP/RTP. It is powerful enough to perform complex tasks such as [Live-Streaming][live-stream] _(such as for Twitch, YouTube, etc.)_ and [Multiplexing Video-Audio][live-audio-doc] with real-time frames in just a few lines of code.

Best of all, WriteGear grants users the complete freedom to play with any FFmpeg parameter with its exclusive **Custom Commands function** _(see this [doc][custom-command-doc])_ without relying on any third-party API.

@@ -416,7 +418,7 @@ In addition to this, WriteGear also provides flexible access to [**OpenCV's Vide

  * **Compression Mode:** In this mode, WriteGear utilizes powerful [**FFmpeg**][ffmpeg] inbuilt encoders to encode lossless multimedia files. This mode provides us the ability to exploit almost any parameter available within FFmpeg, effortlessly and flexibly, and while doing that it robustly handles all errors/warnings quietly. **You can find more about this mode [here ➶][cm-writegear-doc]**

-  * **Non-Compression Mode:** In this mode, WriteGear utilizes basic [**OpenCV's inbuilt VideoWriter API**][opencv-vw] tools. This mode also supports all parameters manipulation available within VideoWriter API, but it lacks the ability to manipulate encoding parameters and other important features like video compression, audio encoding, etc. **You can learn about this mode [here ➶][ncm-writegear-doc]**
+  * **Non-Compression Mode:** In this mode, WriteGear utilizes basic [**OpenCV's inbuilt VideoWriter API**][opencv-vw] tools. This mode also supports all parameter transformations available within OpenCV's VideoWriter API, but it lacks the ability to manipulate encoding parameters and other important features like video compression, audio encoding, etc. **You can learn about this mode [here ➶][ncm-writegear-doc]**

### WriteGear API Guide:

@@ -434,22 +436,22 @@ In addition to this, WriteGear also provides flexible access to [**OpenCV's Vide

-> *StreamGear automates transcoding workflow for generating Ultra-Low Latency, High-Quality, Dynamic & Adaptive Streaming Formats (such as MPEG-DASH) in just few lines of python code.*
+
+> *StreamGear automates transcoding workflow for generating Ultra-Low Latency, High-Quality, Dynamic & Adaptive Streaming Formats (such as MPEG-DASH and Apple HLS) in just a few lines of Python code.*

StreamGear provides a standalone, highly extensible, and flexible wrapper around [**FFmpeg**][ffmpeg] multimedia framework for generating chunked-encoded media segments of the content.

-SteamGear easily transcodes source videos/audio files & real-time video-frames and breaks them into a sequence of multiple smaller chunks/segments of fixed length. These segments make it possible to stream videos at different quality levels _(different bitrates or spatial resolutions)_ and can be switched in the middle of a video from one quality level to another – if bandwidth permits – on a per-segment basis. A user can serve these segments on a web server that makes it easier to download them through HTTP standard-compliant GET requests.
+StreamGear is an out-of-the-box solution for transcoding source videos/audio files & real-time video frames and breaking them into a sequence of multiple smaller chunks/segments of suitable lengths. These segments make it possible to stream videos at different quality levels _(different bitrates or spatial resolutions)_ and can be switched in the middle of a video from one quality level to another – if bandwidth permits – on a per-segment basis. A user can serve these segments on a web server that makes it easier to download them through HTTP standard-compliant GET requests.

-SteamGear also creates a Manifest file _(such as MPD in-case of DASH)_ besides segments that describe these segment information _(timing, URL, media characteristics like video resolution and bit rates)_ and is provided to the client before the streaming session.
+StreamGear currently supports [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/) _(Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1)_ and [**Apple HLS**](https://developer.apple.com/documentation/http_live_streaming) _(HTTP Live Streaming)_. However, multiple DRM support is yet to be implemented.

-SteamGear currently only supports [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/) _(Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1)_ but other adaptive streaming technologies such as Apple HLS, Microsoft Smooth Streaming will be added soon.
+StreamGear also creates a Manifest file _(such as MPD in case of DASH)_ or a Master Playlist _(such as M3U8 in case of Apple HLS)_ besides the segments, which describes the segment information _(timing, URL, media characteristics like video resolution and bit rates)_ and is provided to the client before the streaming session.

**StreamGear primarily works in two Independent Modes for transcoding which serves different purposes:**

- * **Single-Source Mode:** In this mode, StreamGear transcodes entire video/audio file _(as opposed to frames by frame)_ into a sequence of multiple smaller chunks/segments for streaming. This mode works exceptionally well, when you're transcoding lossless long-duration videos(with audio) for streaming and required no extra efforts or interruptions. But on the downside, the provided source cannot be changed or manipulated before sending onto FFmpeg Pipeline for processing. This mode can be easily activated by assigning suitable video path as input to `-video_source` attribute, during StreamGear initialization. ***Learn more about this mode [here ➶][ss-mode-doc]***
-
- * **Real-time Frames Mode:** When no valid input is received on `-video_source` attribute, StreamGear API activates this mode where it directly transcodes video-frames _(as opposed to a entire file)_, into a sequence of multiple smaller chunks/segments for streaming. In this mode, StreamGear supports real-time [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) frames, and process them over FFmpeg pipeline. But on the downside, audio has to added manually _(as separate source)_ for streams. ***Learn more about this mode [here ➶][rtf-mode-doc]***
+ * **Single-Source Mode:** In this mode, StreamGear **transcodes the entire video file** _(as opposed to frame-by-frame)_ into a sequence of multiple smaller chunks/segments for streaming. This mode works exceptionally well when you're transcoding long-duration lossless videos (with audio) for streaming that require no interruptions. But on the downside, the provided source cannot be flexibly manipulated or transformed before being sent onto the FFmpeg Pipeline for processing. ***Learn more about this mode [here ➶][ss-mode-doc]***
+ * **Real-time Frames Mode:** In this mode, StreamGear directly **transcodes frame-by-frame** _(as opposed to an entire video file)_, into a sequence of multiple smaller chunks/segments for streaming. This mode works exceptionally well when you want to flexibly manipulate or transform [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) frames in real-time before sending them onto the FFmpeg Pipeline for processing. But on the downside, audio has to be added manually _(as a separate source)_ for streams. ***Learn more about this mode [here ➶][rtf-mode-doc]***

### StreamGear API Guide:

@@ -469,10 +471,12 @@ SteamGear currently only supports [**MPEG-DASH**](https://www.encoding.com/mpeg-

NetGear implements a high-level wrapper around [**PyZmQ**][pyzmq] python library that contains python bindings for [**ZeroMQ**][zmq] - a high-performance asynchronous distributed messaging library.

-NetGear seamlessly supports [**Bidirectional data transmission**][netgear_bidata_doc] along with video-frames between receiver(client) and sender(server).
+NetGear seamlessly supports additional [**bidirectional data transmission**][netgear_bidata_doc] between receiver(client) and sender(server) while transferring video-frames, all in real-time.

NetGear can also robustly handle [**Multiple Server-Systems**][netgear_multi_server_doc] and [**Multiple Client-Systems**][netgear_multi_client_doc] and at once, thereby providing access to a seamless exchange of video-frames & data between multiple devices across the network at the same time.

+NetGear allows remote connection over [**SSH Tunnel**][netgear_sshtunnel_doc], which lets us connect the NetGear client and server via a secure SSH connection over an untrusted network and access its intranet services across firewalls.
+
NetGear also enables real-time [**JPEG Frame Compression**][netgear_compression_doc] capabilities for boosting performance significantly while sending video-frames over the network in real-time.
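As a rough illustration of this send/receive workflow, a minimal NetGear server sketch looks like the following _(the address, port, and source path below are placeholder values)_:

```python
# Minimal NetGear server sketch: reads frames from a video source and sends
# them to a remote client. Address, port, and source are placeholder values.
from vidgear.gears import CamGear, NetGear

stream = CamGear(source="test.mp4").start()  # open any valid video source
server = NetGear(address="192.168.1.10", port="5454", protocol="tcp", receive_mode=False)

while True:
    frame = stream.read()   # read the next frame from the source
    if frame is None:       # source exhausted
        break
    server.send(frame)      # transmit the frame over the network

stream.stop()
server.close()
```

On the receiving end, a client created with `NetGear(receive_mode=True)` simply calls `recv()` in a loop to obtain the transmitted frames.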
For security, NetGear implements easy access to ZeroMQ's powerful, smart & secure Security Layers that enable [**Strong encryption on data**][netgear_security_doc] and unbreakable authentication between the Server and the Client with the help of custom certificates. @@ -502,7 +506,7 @@ WebGear API works on [**Starlette**](https://www.starlette.io/)'s ASGI applicati WebGear API uses an intraframe-only compression scheme under the hood where the sequence of video-frames are first encoded as JPEG-DIB (JPEG with Device-Independent Bit compression) and then streamed over HTTP using Starlette's Multipart [Streaming Response](https://www.starlette.io/responses/#streamingresponse) and a [Uvicorn](https://www.uvicorn.org/#quickstart) ASGI Server. This method imposes lower processing and memory requirements, but the quality is not the best, since JPEG compression is not very efficient for motion video. -In layman's terms, WebGear acts as a powerful **Video Broadcaster** that transmits live video-frames to any web-browser in the network. Additionally, WebGear API also provides a special internal wrapper around [VideoGear](#videogear), which itself provides internal access to both [CamGear](#camgear) and [PiGear](#pigear) APIs, thereby granting it exclusive power of broadcasting frames from any incoming stream. It also allows us to define our custom Server as source to manipulate frames easily before sending them across the network(see this [doc][webgear-cs] example). +In layman's terms, WebGear acts as a powerful **Video Broadcaster** that transmits live video-frames to any web-browser in the network. Additionally, WebGear API also provides a special internal wrapper around [VideoGear](#videogear), which itself provides internal access to both [CamGear](#camgear) and [PiGear](#pigear) APIs, thereby granting it exclusive power of broadcasting frames from any incoming stream. It also allows us to define our custom Server as source to transform frames easily before sending them across the network(see this [doc][webgear-cs] example). **Below is a snapshot of a WebGear Video Server in action on Chrome browser:** @@ -553,7 +557,7 @@ web.shutdown() WebGear_RTC is implemented with the help of [**aiortc**][aiortc] library which is built on top of asynchronous I/O framework for Web Real-Time Communication (WebRTC) and Object Real-Time Communication (ORTC) and supports many features like SDP generation/parsing, Interactive Connectivity Establishment with half-trickle and mDNS support, DTLS key and certificate generation, DTLS handshake, etc. -WebGear_RTC can handle [multiple consumers][webgear_rtc-mc] seamlessly and provides native support for ICE _(Interactive Connectivity Establishment)_ protocol, STUN _(Session Traversal Utilities for NAT)_, and TURN _(Traversal Using Relays around NAT)_ servers that help us to easily establish direct media connection with the remote peers for uninterrupted data flow. It also allows us to define our custom Server as a source to manipulate frames easily before sending them across the network(see this [doc][webgear_rtc-cs] example). +WebGear_RTC can handle [multiple consumers][webgear_rtc-mc] seamlessly and provides native support for ICE _(Interactive Connectivity Establishment)_ protocol, STUN _(Session Traversal Utilities for NAT)_, and TURN _(Traversal Using Relays around NAT)_ servers that help us to easily establish direct media connection with the remote peers for uninterrupted data flow. 
It also allows us to define our custom Server as a source to transform frames easily before sending them across the network(see this [doc][webgear_rtc-cs] example). WebGear_RTC API works in conjunction with [**Starlette**][starlette]'s ASGI application and provides easy access to its complete framework. WebGear_RTC can also flexibly interact with Starlette's ecosystem of shared middleware, mountable applications, [Response classes](https://www.starlette.io/responses/), [Routing tables](https://www.starlette.io/routing/), [Static Files](https://www.starlette.io/staticfiles/), [Templating engine(with Jinja2)](https://www.starlette.io/templates/), etc. @@ -606,11 +610,13 @@ web.shutdown()

.

-> _NetGear_Async can generate the same performance as [NetGear API](#netgear) at about one-third the memory consumption, and also provide complete server-client handling with various options to use variable protocols/patterns similar to NetGear, but it doesn't support any of [NetGear's Exclusive Modes][netgear-exm] yet._
+> _NetGear_Async can generate the same performance as [NetGear API](#netgear) at about one-third the memory consumption, and also provides complete server-client handling with various options to use variable protocols/patterns similar to NetGear, but lacks flexibility as it supports only a few of [NetGear's Exclusive Modes][netgear-exm]._

NetGear_Async is built on [`zmq.asyncio`][asyncio-zmq], and powered by a high-performance asyncio event loop called [**`uvloop`**][uvloop] to achieve unmatchable high-speed and lag-free video streaming over the network with minimal resource constraints. NetGear_Async can transfer thousands of frames in just a few seconds without causing any significant load on your system.

-NetGear_Async provides complete server-client handling and options to use variable protocols/patterns similar to [NetGear API](#netgear) but doesn't support any [NetGear Exclusive modes][netgear-exm] yet. Furthermore, NetGear_Async allows us to define our custom Server as source to manipulate frames easily before sending them across the network(see this [doc][netgear_Async-cs] example).
+NetGear_Async provides complete server-client handling and options to use variable protocols/patterns similar to [NetGear API](#netgear). Furthermore, NetGear_Async allows us to define our custom Server as a source to transform frames easily before sending them across the network (see this [doc][netgear_Async-cs] example).
+
+NetGear_Async now supports additional [**bidirectional data transmission**][btm_netgear_async] between receiver(client) and sender(server) while transferring video-frames. Users can easily build complex applications such as [Real-Time Video Chat][rtvc] in just a few lines of code.

NetGear_Async as of now supports [all four ZeroMQ messaging patterns](#attributes-and-parameters-wrench):
* [**`zmq.PAIR`**][zmq-pair] _(ZMQ Pair Pattern)_
@@ -629,19 +635,17 @@ Whereas supported protocol are: `tcp` and `ipc`.

&nbsp;

-# Contributions & Support
+# Contributions & Community Support

-Contributions are welcome. We'd love to have your contributions to VidGear to fix bugs or to implement new features!
+> Contributions are welcome. We'd love to have your contributions to fix bugs or to implement new features!

Please see our **[Contribution Guidelines](contributing.md)** for more details.

-### Support
-
-PiGear
+### Community Support

-Donations help keep VidGear's Development alive. Giving a little means a lot, even the smallest contribution can make a huge difference.
+We ask contributors to join the Gitter community channel for quick discussions:

-[![ko-fi][kofi-badge]][kofi]
+[![Gitter](https://badges.gitter.im/vidgear/community.svg)](https://gitter.im/vidgear/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)

### Contributors

@@ -655,9 +659,15 @@ Donations help keep VidGear's Development alive. Giving a little means a lot, ev

&nbsp;

-# Community Channel
+# Donations

-If you've come up with some new idea, or looking for the fastest way troubleshoot your problems, then *join our [Gitter community channel ➶][gitter]*
+PiGear
+
+> VidGear is free and open source and will always remain so.
:heart: + +It is (like all open source software) a labour of love and something I am doing with my own free time. If you would like to say thanks, please feel free to make a donation: + +[![ko-fi][kofi-badge]][kofi]   @@ -668,15 +678,27 @@ If you've come up with some new idea, or looking for the fastest way troubleshoo # Citation + + + Here is a Bibtex entry you can use to cite this project in a publication: +[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4718616.svg)](https://doi.org/10.5281/zenodo.4718616) ```BibTeX -@misc{vidgear, - author = {Abhishek Thakur}, - title = {vidgear}, - howpublished = {\url{https://github.com/abhiTronix/vidgear}}, - year = {2019-2021} +@software{vidgear, + author = {Abhishek Thakur and + Christian Clauss and + Christian Hollinger and + Benjamin Lowe and + Mickaël Schoentgen and + Renaud Bouckenooghe}, + title = {abhiTronix/vidgear: VidGear v0.2.2}, + year = 2021 + publisher = {Zenodo}, + version = {vidgear-0.2.2}, + doi = {10.5281/zenodo.4718616}, + url = {https://doi.org/10.5281/zenodo.4718616} } ``` @@ -687,7 +709,7 @@ Here is a Bibtex entry you can use to cite this project in a publication: # Copyright -**Copyright © abhiTronix 2019-2021** +**Copyright © abhiTronix 2019** This library is released under the **[Apache 2.0 License][license]**. @@ -726,6 +748,8 @@ Internal URLs [azure-pipeline]:https://dev.azure.com/abhiuna12/public/_build?definitionId=2 [app]:https://ci.appveyor.com/project/abhiTronix/vidgear [code]:https://codecov.io/gh/abhiTronix/vidgear +[btm_netgear_async]: https://abhitronix.github.io/vidgear/latest/gears/netgear_async/advanced/bidirectional_mode/ +[rtvc]: https://abhitronix.github.io/vidgear/latest/gears/netgear_async/advanced/bidirectional_mode/#using-bidirectional-mode-for-video-frames-transfer [test-4k]:https://github.com/abhiTronix/vidgear/blob/e0843720202b0921d1c26e2ce5b11fadefbec892/vidgear/tests/benchmark_tests/test_benchmark_playback.py#L65 [bs_script_dataset]:https://github.com/abhiTronix/vidgear/blob/testing/scripts/bash/prepare_dataset.sh @@ -746,7 +770,7 @@ Internal URLs [cm-writegear-doc]:https://abhitronix.github.io/vidgear/latest/gears/writegear/compression/overview/ [ncm-writegear-doc]:https://abhitronix.github.io/vidgear/latest/gears/writegear/non_compression/overview/ [screengear-doc]:https://abhitronix.github.io/vidgear/latest/gears/screengear/overview/ -[streamgear-doc]:https://abhitronix.github.io/vidgear/latest/gears/streamgear/overview/ +[streamgear-doc]:https://abhitronix.github.io/vidgear/latest/gears/streamgear/introduction/ [writegear-doc]:https://abhitronix.github.io/vidgear/latest/gears/writegear/introduction/ [netgear-doc]:https://abhitronix.github.io/vidgear/latest/gears/netgear/overview/ [webgear-doc]:https://abhitronix.github.io/vidgear/latest/gears/webgear/overview/ @@ -759,14 +783,15 @@ Internal URLs [netgear_security_doc]:https://abhitronix.github.io/vidgear/latest/gears/netgear/advanced/secure_mode/ [netgear_multi_server_doc]:https://abhitronix.github.io/vidgear/latest/gears/netgear/advanced/multi_server/ [netgear_multi_client_doc]:https://abhitronix.github.io/vidgear/latest/gears/netgear/advanced/multi_client/ +[netgear_sshtunnel_doc]:https://abhitronix.github.io/vidgear/latest/gears/netgear/advanced/ssh_tunnel/ [netgear-exm]: https://abhitronix.github.io/vidgear/latest/gears/netgear/overview/#modes-of-operation [stabilize_webgear_doc]:https://abhitronix.github.io/vidgear/latest/gears/webgear/advanced/#using-webgear-with-real-time-video-stabilization-enabled [netgear_Async-cs]: 
https://abhitronix.github.io/vidgear/latest/gears/netgear_async/usage/#using-netgear_async-with-a-custom-sourceopencv [installation]:https://abhitronix.github.io/vidgear/latest/installation/ [gears]:https://abhitronix.github.io/vidgear/latest/gears [switch_from_cv]:https://abhitronix.github.io/vidgear/latest/switch_from_cv/ -[ss-mode-doc]: https://abhitronix.github.io/vidgear/latest/gears/streamgear/usage/#a-single-source-mode -[rtf-mode-doc]: https://abhitronix.github.io/vidgear/latest/gears/streamgear/usage/#b-real-time-frames-mode +[ss-mode-doc]: https://abhitronix.github.io/vidgear/latest/gears/streamgear/ssm/#overview +[rtf-mode-doc]: https://abhitronix.github.io/vidgear/latest/gears/streamgear/rtfm/#overview [webgear-cs]: https://abhitronix.github.io/vidgear/latest/gears/webgear/advanced/#using-webgear-with-a-custom-sourceopencv [webgear_rtc-cs]: https://abhitronix.github.io/vidgear/latest/gears/webgear_rtc/advanced/#using-webgear_rtc-with-a-custom-sourceopencv [webgear_rtc-mc]: https://abhitronix.github.io/vidgear/latest/gears/webgear_rtc/advanced/#using-webgear_rtc-as-real-time-broadcaster diff --git a/appveyor.yml b/appveyor.yml index 48f535951..45a4c9d3c 100644 --- a/appveyor.yml +++ b/appveyor.yml @@ -1,4 +1,4 @@ -# Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +# Copyright (c) 2019 Abhishek Thakur(@abhiTronix) # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -51,8 +51,8 @@ install: - "SET PATH=%PYTHON%;%PYTHON%\\Scripts;%PATH%" - "python --version" - "python -m pip install --upgrade pip wheel" - - "python -m pip install --upgrade .[asyncio] six codecov pytest pytest-cov pytest-asyncio youtube-dl aiortc" - - "python -m pip install https://github.com/abhiTronix/python-mpegdash/releases/download/0.3.0-dev/mpegdash-0.3.0.dev0-py3-none-any.whl" + - "python -m pip install --upgrade .[asyncio] six codecov pytest pytest-cov pytest-asyncio youtube-dl aiortc paramiko m3u8 async-asgi-testclient" + - "python -m pip install https://github.com/abhiTronix/python-mpegdash/releases/download/0.3.0-dev2/mpegdash-0.3.0.dev2-py3-none-any.whl" - cmd: chmod +x scripts/bash/prepare_dataset.sh - cmd: bash scripts/bash/prepare_dataset.sh diff --git a/azure-pipelines.yml b/azure-pipelines.yml index 0e564d73d..9f7c50f50 100644 --- a/azure-pipelines.yml +++ b/azure-pipelines.yml @@ -1,4 +1,4 @@ -# Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +# Copyright (c) 2019 Abhishek Thakur(@abhiTronix) # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -19,7 +19,7 @@ pr: - testing pool: - vmImage: 'macOS-10.14' + vmImage: 'macOS-latest' strategy: matrix: @@ -55,8 +55,8 @@ steps: - script: | python -m pip install --upgrade pip wheel - pip install --upgrade .[asyncio] six codecov youtube-dl mpegdash - pip install --upgrade pytest pytest-asyncio pytest-cov pytest-azurepipelines + pip install --upgrade .[asyncio] six codecov youtube-dl mpegdash paramiko m3u8 async-asgi-testclient + pip install --upgrade pytest pytest-asyncio pytest-cov pytest-azurepipelines displayName: 'Install pip dependencies' - script: | diff --git a/codecov.yml b/codecov.yml index 306c6c4cc..7cef6987d 100644 --- a/codecov.yml +++ b/codecov.yml @@ -1,4 +1,4 @@ -# Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +# Copyright (c) 2019 Abhishek Thakur(@abhiTronix) # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -28,5 +28,8 @@ coverage: ignore: - "vidgear/tests" + - "docs" + - "scripts" + - "vidgear/gears/__init__.py" #trivial - "vidgear/gears/asyncio/__main__.py" #trivial - "setup.py" \ No newline at end of file diff --git a/contributing.md b/contributing.md index 08b5a7aff..24215443b 100644 --- a/contributing.md +++ b/contributing.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/bonus/TQM.md b/docs/bonus/TQM.md index 3d4c6a6fe..b6e5ca362 100644 --- a/docs/bonus/TQM.md +++ b/docs/bonus/TQM.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -27,28 +27,33 @@ limitations under the License.
Threaded-Queue-Mode: generalized timing diagram
-> Threaded Queue Mode is designed exclusively for VidGear's Videocapture Gears _(namely CamGear, ScreenGear, VideoGear)_ and few Network Gears _(such as NetGear(Client's end))_ for achieving high-performance, synchronized, and error-free video-frames handling with their **Internal Multi-Threaded Frame Extractor Daemons**.
+> Threaded Queue Mode is designed exclusively for VidGear's Videocapture Gears _(namely CamGear, ScreenGear, VideoGear)_ and a few Network Gears _(such as NetGear(Client's end))_ for achieving high-performance, asynchronous, error-free video-frames handling.

-!!! info "Threaded-Queue-Mode is enabled by default, but a user [can disable it](#manually-disabling-threaded-queue-mode), if extremely necessary."
+!!! tip "Threaded-Queue-Mode is enabled by default, but [can be disabled](#manually-disabling-threaded-queue-mode), only if extremely necessary."
+
+!!! info "Threaded-Queue-Mode is **NOT** required, and is thereby automatically disabled, for Live feeds such as Camera Devices/Modules."

&nbsp;

## What does Threaded-Queue-Mode exactly do?

-
-Threaded-Queue-Mode helps VidGear do the Threaded Video-Processing tasks in a well-organized, and most competent way possible:
+Threaded-Queue-Mode helps VidGear do the Threaded Video-Processing tasks in the most optimized, well-organized, and competent way possible:

### A. Enables Multi-Threading

-In case you don't already know, OpenCV's' [`read()`](https://docs.opencv.org/master/d8/dfe/classcv_1_1VideoCapture.html#a473055e77dd7faa4d26d686226b292c1) is a [blocking method](https://luminousmen.com/post/asynchronous-programming-blocking-and-non-blocking) for reading/decoding the next video-frame and consumes much of the I/O bound memory depending upon our Source-properties & System-hardware. This means, it blocks the function from returning until the next frame. As a result, this behavior halts our python script's main thread completely for that moment.
+> In case you don't already know, OpenCV's [`read()`](https://docs.opencv.org/master/d8/dfe/classcv_1_1VideoCapture.html#a473055e77dd7faa4d26d686226b292c1) is a [**Blocking I/O**](https://luminousmen.com/post/asynchronous-programming-blocking-and-non-blocking) function for reading and decoding the next video-frame, and it consumes much of the I/O-bound memory depending upon our video source properties & system hardware. This essentially means that the thread reading from it stays blocked until the next frame is retrieved. As a result, our Python program appears slow and sluggish even without any computationally expensive image-processing operations. This problem is far more severe on low-memory SBCs like Raspberry Pis.
+
+In Threaded-Queue-Mode, VidGear creates several [**Python Threads**](https://docs.python.org/3/library/threading.html) within one process to offload the frame-decoding task to a separate thread. Thereby, VidGear is able to execute different video I/O-bound operations at the same time by overlapping their waiting times. Moreover, the threads are managed by the operating system itself, which can distribute them between the available CPU cores efficiently. In this way, Threaded-Queue-Mode keeps on processing frames faster in the [background](https://en.wikipedia.org/wiki/Daemon_(computing)) without waiting for blocked I/O operations and without being slowed down by sluggishness in our main Python program thread.

-Threaded-Queue-Mode employs [**Multi-Threading**](https://docs.python.org/3/library/threading.html) to separate frame-decoding like tasks to multiple independent threads in layman's word. Multiple-Threads helps it execute different Video Processing I/O-bound operations all at the same time by overlapping the waiting times. In this way, Threaded-Queue-Mode keeps on processing frames faster in the [background(daemon)](https://en.wikipedia.org/wiki/Daemon_(computing)) without waiting for blocked I/O operations and doesn't get affected by how sluggish our main python thread is.
+### B. Utilizes Fixed-Size Queues

-### B. Monitors Fix-Sized Queues
+> Although Multi-threading is fast, easy, and efficient, it can lead to some serious undesired effects like _frame-skipping, GIL, race conditions, etc._ This is because there is no isolation whatsoever between Python threads: a crash in any one thread can bring down not just that thread but the whole process. Worse still, the memory of the process is shared by all of its threads, which may result in frequent process crashes due to unwanted race conditions.

-> Although Multi-threading is fast & easy, it may lead to undesired effects like _frame-skipping, deadlocks, and race conditions, etc._
+These problems are avoided in Threaded-Queue-Mode by utilizing **Thread-Safe, Memory-Efficient, and Fixed-Size [`Queues`](https://docs.python.org/3/library/queue.html#module-queue)** _(with approximately the same O(1) performance in both directions)_, which independently monitor synchronized access to the frame-decoding thread and isolate it from any other parallel threads, which in turn prevents [**Global Interpreter Lock**](https://realpython.com/python-gil/) contention.

-Threaded-Queue-Mode utilizes **Monitored, Thread-Safe, Memory-Efficient, and Fixed-Sized [`Synchronized Queues`](https://docs.python.org/3/library/queue.html#module-queue)** _(with approximately the same O(1) performance in either direction)_, that always maintains a fixed-length of frames buffer in the memory. It blocks the thread if the queue is full or otherwise pops out the frames synchronously and efficiently without any obstructions. Its fixed-length queues stops multiple threads from accessing the same source simultaneously and thus preventing Global Interpreter Lock _(a.k.a GIL)_.
+### C. Accelerates Frame Processing
+
+With queues, VidGear always maintains a fixed-length frame buffer in memory and temporarily blocks the thread if the queue is full to avoid possible frame drops, or otherwise pops out the frames synchronously without any obstructions. This significantly accelerates the frame processing rate (and therefore the overall video-processing pipeline) by dramatically reducing latency: we don't have to wait for the `read()` method to finish reading and decoding a frame; instead, there is always a pre-decoded frame ready for us to process.
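The same producer/consumer pattern can be sketched in a few lines of plain Python _(a simplified standalone illustration using OpenCV and the standard `queue`/`threading` modules, not VidGear's actual internals; the source path and buffer size are placeholders)_:

```python
# Simplified sketch of the Threaded-Queue-Mode pattern: a daemon thread keeps a
# small fixed-size queue filled with pre-decoded frames, so the main thread
# never stalls on a blocking read() call.
import queue
import threading

import cv2

frames = queue.Queue(maxsize=96)  # fixed-length frame buffer


def frame_producer(source):
    cap = cv2.VideoCapture(source)
    while True:
        ok, frame = cap.read()    # blocking read happens only in this thread
        if not ok:
            break
        frames.put(frame)         # blocks (instead of dropping frames) when the buffer is full
    cap.release()
    frames.put(None)              # sentinel signalling end-of-stream


threading.Thread(target=frame_producer, args=("test.mp4",), daemon=True).start()

while True:
    frame = frames.get()          # a pre-decoded frame is usually already waiting
    if frame is None:
        break
    # ... perform any image processing on `frame` here ...
```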
@@ -57,11 +62,13 @@ Threaded-Queue-Mode utilizes **Monitored, Thread-Safe, Memory-Efficient, and Fix

- [x] _Enables Blocking, Sequential and Threaded LIFO Frame Handling._

-- [x] _Sequentially adds and releases frames to/from `deque` and handles the overflow of this queue._
+- [x] _Sequentially adds and releases frames from `queues` and handles the overflow._

- [x] _Utilizes thread-safe, memory efficient `queues` that appends and pops frames with same O(1) performance from either side._

-- [x] _Requires less RAM at due to buffered frames in the `queue`._
+- [x] _Faster frame access due to buffered frames in the `queue`._
+
+- [x] _Provides isolation for the source thread and prevents GIL contention._

&nbsp;

@@ -71,15 +78,16 @@ Threaded-Queue-Mode utilizes **Monitored, Thread-Safe, Memory-Efficient, and Fix

To manually disable Threaded-Queue-Mode, VidGear provides `THREADED_QUEUE_MODE` boolean attribute for `options` dictionary parameter in respective [VideoCapture APIs](../../gears/#a-videocapture-gears):

-!!! warning "Important Warning"
+!!! warning "Important Warnings"
+
+    * Disabling Threaded-Queue-Mode does **NOT disable Multi-Threading.**

-    * This **`THREADED_QUEUE_MODE`** attribute does **NOT** work with Live feed, such as Camera Devices/Modules.
+    * `THREADED_QUEUE_MODE` attribute does **NOT** work with Live feeds, such as Camera Devices/Modules.

-    * This **`THREADED_QUEUE_MODE`** attribute is **NOT** supported by ScreenGear & NetGear APIs, as Threaded Queue Mode is essential for their core operations.
+    * `THREADED_QUEUE_MODE` attribute is **NOT** supported by ScreenGear & NetGear APIs, as Threaded Queue Mode is essential for their core operations.

-    * Disabling Threaded-Queue-Mode will **NOT** disable Multi-Threading.

-    * Disabling Threaded-Queue-Mode may lead to **Random Intermittent Bugs** that can be quite difficult to discover. *More insight can be found [here ➶](https://github.com/abhiTronix/vidgear/issues/20#issue-452339596)*
+!!! danger "Disabling Threaded-Queue-Mode may lead to Random Intermittent Bugs that can be quite difficult to discover. More insight can be found [here ➶](https://github.com/abhiTronix/vidgear/issues/20#issue-452339596)"

**`THREADED_QUEUE_MODE`** _(boolean)_: This attribute can be used to override Threaded-Queue-Mode mode to manually disable it:

diff --git a/docs/bonus/colorspace_manipulation.md b/docs/bonus/colorspace_manipulation.md
index d11e92fd2..c55b7c41a 100644
--- a/docs/bonus/colorspace_manipulation.md
+++ b/docs/bonus/colorspace_manipulation.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -27,7 +27,7 @@ limitations under the License.

## Source ColorSpace manipulation

-> All VidGear's Videocapture Gears _(namely CamGear, ScreenGear, VideoGear)_ and some Streaming Gears _(WebGear)_ and Network Gears _(Client's end)_ - provides exclusive internal support for ==Source [Color Space](https://en.wikipedia.org/wiki/Color_space) manipulation==.
+> All VidGear's Videocapture Gears _(namely CamGear, ScreenGear, VideoGear)_ and some Streaming Gears _(namely WebGear, WebGear_RTC)_ and Network Gears _(Client's end)_ - provide exclusive internal support for ==Source [Color Space](https://en.wikipedia.org/wiki/Color_space) manipulation==.
**There are two ways to alter source colorspace:** diff --git a/docs/bonus/reference/camgear.md b/docs/bonus/reference/camgear.md index ceb18bbb3..83f0b6b73 100644 --- a/docs/bonus/reference/camgear.md +++ b/docs/bonus/reference/camgear.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/bonus/reference/helper.md b/docs/bonus/reference/helper.md index ccc7c4daf..2214e37f6 100644 --- a/docs/bonus/reference/helper.md +++ b/docs/bonus/reference/helper.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -42,7 +42,7 @@ limitations under the License.   -::: vidgear.gears.helper.delete_safe +::: vidgear.gears.helper.delete_ext_safe   @@ -98,6 +98,39 @@ limitations under the License.   +::: vidgear.gears.helper.import_dependency_safe + +  + ::: vidgear.gears.helper.get_video_bitrate -  \ No newline at end of file +  + +::: vidgear.gears.helper.check_WriteAccess + +  + +::: vidgear.gears.helper.check_open_port + +  + +::: vidgear.gears.helper.delete_file_safe + +  + +::: vidgear.gears.helper.get_supported_demuxers + +  + +::: vidgear.gears.helper.get_supported_vencoders + +  + +::: vidgear.gears.helper.youtube_url_validator + +  + + +::: vidgear.gears.helper.validate_auth_keys + +  diff --git a/docs/bonus/reference/helper_async.md b/docs/bonus/reference/helper_async.md index e28f99f3e..8e3e56b87 100644 --- a/docs/bonus/reference/helper_async.md +++ b/docs/bonus/reference/helper_async.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -18,14 +18,6 @@ limitations under the License. =============================================== --> -::: vidgear.gears.asyncio.helper.logger_handler - -  - -::: vidgear.gears.asyncio.helper.mkdir_safe - -  - ::: vidgear.gears.asyncio.helper.reducer   @@ -40,4 +32,8 @@ limitations under the License. ::: vidgear.gears.asyncio.helper.download_webdata -  \ No newline at end of file +  + +::: vidgear.gears.asyncio.helper.validate_webdata + +  diff --git a/docs/bonus/reference/netgear.md b/docs/bonus/reference/netgear.md index 70c6e190c..d4a5bf6f3 100644 --- a/docs/bonus/reference/netgear.md +++ b/docs/bonus/reference/netgear.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/docs/bonus/reference/netgear_async.md b/docs/bonus/reference/netgear_async.md index e94ee0ef4..c59c749af 100644 --- a/docs/bonus/reference/netgear_async.md +++ b/docs/bonus/reference/netgear_async.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/bonus/reference/pigear.md b/docs/bonus/reference/pigear.md index 00c66485e..a42586b71 100644 --- a/docs/bonus/reference/pigear.md +++ b/docs/bonus/reference/pigear.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/bonus/reference/screengear.md b/docs/bonus/reference/screengear.md index 26c31d0c7..a72d21448 100644 --- a/docs/bonus/reference/screengear.md +++ b/docs/bonus/reference/screengear.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/bonus/reference/stabilizer.md b/docs/bonus/reference/stabilizer.md index fc1896cf9..d22edf0a2 100644 --- a/docs/bonus/reference/stabilizer.md +++ b/docs/bonus/reference/stabilizer.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/bonus/reference/streamgear.md b/docs/bonus/reference/streamgear.md index 6571789c8..d2d0820e0 100644 --- a/docs/bonus/reference/streamgear.md +++ b/docs/bonus/reference/streamgear.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -18,11 +18,11 @@ limitations under the License. =============================================== --> -!!! example "StreamGear API usage examples can be found [here ➶](../../../gears/streamgear/usage/)" +!!! example "StreamGear API usage examples for: [Single-Source Mode ➶](../../../gears/streamgear/ssm/usage/) and [Real-time Frames Mode ➶](../../../gears/streamgear/rtfm/usage/)" !!! 
info "StreamGear API parameters are explained [here ➶](../../../gears/streamgear/params/)" -::: vidgear.gears.StreamGear +::: vidgear.gears.StreamGear   \ No newline at end of file diff --git a/docs/bonus/reference/videogear.md b/docs/bonus/reference/videogear.md index 5c64c1bee..8d493a04e 100644 --- a/docs/bonus/reference/videogear.md +++ b/docs/bonus/reference/videogear.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/bonus/reference/webgear.md b/docs/bonus/reference/webgear.md index c0c950ceb..0832e5948 100644 --- a/docs/bonus/reference/webgear.md +++ b/docs/bonus/reference/webgear.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/bonus/reference/webgear_rtc.md b/docs/bonus/reference/webgear_rtc.md index cc75cd117..8a2c69d92 100644 --- a/docs/bonus/reference/webgear_rtc.md +++ b/docs/bonus/reference/webgear_rtc.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/bonus/reference/writegear.md b/docs/bonus/reference/writegear.md index 72b944c5e..545a9329a 100644 --- a/docs/bonus/reference/writegear.md +++ b/docs/bonus/reference/writegear.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/changelog.md b/docs/changelog.md index e0bbe94e0..41d007afa 100644 --- a/docs/changelog.md +++ b/docs/changelog.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -20,46 +20,459 @@ limitations under the License. # Release Notes +## v0.2.2 (2021-09-02) + +??? tip "New Features" + - [x] **StreamGear:** + * Native Support for Apple HLS Multi-Bitrate Streaming format: + + Added support for new [Apple HLS](https://developer.apple.com/documentation/http_live_streaming) _(HTTP Live Streaming)_ HTTP streaming format in StreamGear. + + Implemented default workflow for auto-generating primary HLS stream of same resolution and framerate as source. + + Added HLS support in *Single-Source* and *Real-time Frames* Modes. 
+ + Implemented inherent support for `fmp4` and `mpegts` HLS segment types. + + Added adequate default parameters required for transcoding HLS streams. + + Added native support for HLS live-streaming. + + Added `"hls"` value to `format` parameter for easily selecting HLS format. + + Added HLS support in `-streams` attribute for transcoding additional streams. + + Added support for `.m3u8` and `.ts` extensions in `clear_prev_assets` workflow. + + Added validity check for `.m3u8` extension in output when HLS format is used. + + Separated DASH and HLS command handlers. + + Created HLS format exclusive parameters. + + Implemented `-hls_base_url` FFmpeg parameter support. + * Added support for audio input from external device: + + Implemented support for audio input from external device. + + Users can now easily add audio device and decoder by formatting them as a python list. + + Modified `-audio` parameter to support `list` data type as value. + + Modified `validate_audio` helper function to validate external audio devices. + * Added `-seg_duration` to control segment duration. + - [x] **NetGear:** + * New SSH Tunneling Mode for remote connection: + + New SSH Tunneling Mode for connecting ZMQ sockets across machines via SSH tunneling. + + Added new `ssh_tunnel_mode` attribute to enable SSH tunneling at the provided address at server end only. + + Implemented new `check_open_port` helper method to validate availability of host at given open port. + + Added new attributes `ssh_tunnel_keyfile` and `ssh_tunnel_pwd` to easily validate SSH connection. + + Extended this feature to be compatible with bi-directional mode and auto-reconnection. + + Disabled support for exclusive Multi-Server and Multi-Clients modes. + + Implemented logic to automatically enable `paramiko` support if installed. + + Reserved port-`47` for testing. + * Additional colorspace support for input frames with Frame-Compression enabled: + + Allowed to manually select colorspace on-the-fly with JPEG frame compression. + + Updated `jpeg_compression` dict parameter to support colorspace string values. + + Added all supported colorspace values by the underlying `simplejpeg` library. + + Server enforced frame-compression colorspace on client(s). + + Enabled "BGR" colorspace by default. + + Added example for changing incoming frames colorspace with NetGear's Frame Compression. + + Updated Frame Compression parameters in NetGear docs. + + Updated existing CI tests to cover new frame compression functionality. + - [x] **NetGear_Async:** + * New exclusive Bidirectional Mode for bidirectional data transfer _(see the sketch below)_: + + NetGear_Async's first-ever exclusive Bidirectional mode with pure asyncio implementation. + + :warning: Bidirectional mode is only available with a User-defined Custom Source _(i.e. `source=None`)_. + + Added support for `PAIR` & `REQ/REP` bidirectional patterns for this mode. + + Added powerful `asyncio.Queues` for handling user data and frames in real-time. + + Implemented new `transceive_data` method to Transmit _(in Receive mode)_ and Receive _(in Send mode)_ data in real-time. + + Implemented `terminate_connection` internal asyncio method to safely terminate ZMQ connection and queues. + + Added `msgpack` automatic compression encoding and decoding of data and frames in bidirectional mode. + + Added support for `np.ndarray` video frames. + + Added new `bidirectional_mode` attribute for enabling this mode. + + Added 8-digit random alphanumeric id generator for each device.
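The sketch referenced above is a rough, client-side illustration of the new Bidirectional Mode; the `bidirectional_mode` option and `transceive_data()` are named in the notes above, while the `(data, frame)` yield shape and the event-loop boilerplate are assumptions rather than verified API.

```python
# Rough client-side sketch of NetGear_Async's exclusive Bidirectional Mode.
# `bidirectional_mode` and `transceive_data()` come from the notes above; the
# (data, frame) tuple shape and loop boilerplate are assumptions.
import asyncio
from vidgear.gears.asyncio import NetGear_Async

options = {"bidirectional_mode": True}  # enable the new mode
client = NetGear_Async(receive_mode=True, logging=True, **options).launch()

async def main():
    async for data, frame in client.recv_generator():
        # ... consume `frame` (np.ndarray) and any server `data` here ...
        # send something back to the server in real-time
        await client.transceive_data(data="roger")

if __name__ == "__main__":
    asyncio.set_event_loop(client.loop)
    try:
        client.loop.run_until_complete(main())
    except (KeyboardInterrupt, SystemExit):
        pass
    finally:
        client.close(skip_loop=True)
```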
+ + :warning: NetGear_Async will throw `RuntimeError` if bidirectional mode is disabled at server or client but not both. + * Added new `disable_confirmation` used to force disable termination confirmation from client in `terminate_connection`. + * Added `task_done()` method after every `get()` call to gracefully terminate queues. + * Added new `secrets` and `string` imports. + - [x] **WebGear:** + * Updated JPEG Frame compression with `simplejpeg`: + + Implemented JPEG compression algorithm for 4-5% performance boost at cost of minor loss in quality. + + Utilized `encode_jpeg` and `decode_jpeg` methods to implement turbo-JPEG transcoding with `simplejpeg`. + + Added new options to control JPEG frames *quality*, enable fastest *dct*, fast *upsampling* to boost performance. + + Added new `jpeg_compression`, `jpeg_compression_quality`, `jpeg_compression_fastdct`, `jpeg_compression_fastupsample` attributes. + + Enabled fast dct by default with JPEG frames at `90%`. + + Incremented default frame reduction to `25%`. + + Implemented automated grayscale colorspace frames handling. + + Updated old and added new usage examples. + + Dropped support for depreciated attributes from WebGear and added new attributes. + * Added new WebGear Theme: _(Checkout at https://github.com/abhiTronix/vidgear-vitals)_ + - Added responsive image scaling according to screen aspect ratios. + - Added responsive text scaling. + - Added rounded border and auto-center to image tag. + - Added bootstrap css properties to implement auto-scaling. + - Removed old `resize()` hack. + - Improved text spacing and weight. + - Integrated toggle full-screen to new implementation. + - Hide Scrollbar both in WebGear_RTC and WebGear Themes. + - Beautify files syntax and updated files checksum. + - Refactor files and removed redundant code. + - Bumped theme version to `v0.1.2`. + - [x] **WebGear_RTC:** + * Added native support for middlewares: + + Added new global `middleware` variable for easily defining Middlewares as list. + + Added validity check for Middlewares. + + Added tests for middlewares support. + + Added example for middlewares support. + + Extended middlewares support to WebGear API too. + + Added related imports. + * Added new WebGear_RTC Theme: _(Checkout at https://github.com/abhiTronix/vidgear-vitals)_ + + Implemented new responsive video scaling according to screen aspect ratios. + + Added bootstrap CSS properties to implement auto-scaling. + + Removed old `resize()` hack. + + Beautify files syntax and updated files checksum. + + Refactored files and removed redundant code. + + Bumped theme version to `v0.1.2` + - [x] **Helper:** + * New automated interpolation selection for gears: + + Implemented `retrieve_best_interpolation` method to automatically select best available interpolation within OpenCV. + + Added support for this method in WebGear, WebGear_RTC and Stabilizer Classes/APIs. + + Added new CI tests for this feature. + * Implemented `get_supported_demuxers` method to get list of supported demuxers. + - [x] **CI:** + * Added new `no-response` work-flow for stale issues. + * Added new CI tests for SSH Tunneling Mode. + * Added `paramiko` to CI dependencies. + * Added support for `"hls"` format in existing CI tests. + * Added new functions `check_valid_m3u8` and `extract_meta_video` for validating HLS files. + * Added new `m3u8` dependency to CI workflows. 
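To put the new WebGear `jpeg_compression_*` attributes listed above in context, here is a small hedged sketch; the attribute names mirror the notes above, while the chosen values, source index, and port are illustrative assumptions.

```python
# Small sketch of WebGear with the new simplejpeg-based compression attributes.
# Attribute names follow the notes above; values, source and port are assumptions.
import uvicorn
from vidgear.gears.asyncio import WebGear

options = {
    "frame_size_reduction": 25,            # new default reduction mentioned above
    "jpeg_compression_quality": 90,        # JPEG quality in percent
    "jpeg_compression_fastdct": True,      # fastest DCT, enabled by default
    "jpeg_compression_fastupsample": False,
}
web = WebGear(source=0, logging=True, **options)
uvicorn.run(web(), host="localhost", port=8000)
web.shutdown()
```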
+ * Added complete CI tests for NetGear_Async's new Bidirectional Mode: + + Implemented new exclusive `Custom_Generator` class for testing bidirectional data dynamically on server-end. + + Implemented new exclusive `client_dataframe_iterator` method for testing bidirectional data on client-end. + + Implemented `test_netgear_async_options` and `test_netgear_async_bidirectionalmode` two new tests. + + Added `timeout` value on server end in CI tests. + - [x] **Setup.py:** + * Added new `cython` and `msgpack` dependency. + * Added `msgpack` and `msgpack_numpy` to auto-install latest. + - [x] **BASH:** + * Added new `temp_m3u8` folder for generating M3U8 assets in CI tests. + - [x] **Docs:** + * Added docs for new Apple HLS StreamGear format: + + Added StreamGear HLS transcoding examples for both StreamGear modes. + + Updated StreamGear parameters to w.r.t new HLS configurations. + + Added open-sourced *"Sintel" - project Durian Teaser Demo* with StreamGear's HLS stream using `Clappr` and raw.githack.com. + + Added new HLS chunks at https://github.com/abhiTronix/vidgear-docs-additionals for StreamGear + + Added support for HLS video in Clappr within `custom.js` using HlsjsPlayback plugin. + + Added support for Video Thumbnail preview for HLS video in Clappr within `custom.js` + + Added `hlsjs-playback.min.js` JS script and suitable configuration for HlsjsPlayback plugin. + + Added custom labels for quality levels selector in `custom.js`. + + Added new docs content related to new Apple HLS format. + + Updated DASH chunk folder at https://github.com/abhiTronix/vidgear-docs-additionals. + + Added example for audio input support from external device in StreamGear. + + Added steps for using `-audio` attribute on different OS platforms in StreamGear. + * Added usage examples for NetGear_Async's Bidirectional Mode: + + Added new Usage examples and Reference doc for NetGear_Async's Bidirectional Mode. + + Added new image asset for NetGear_Async's Bidirectional Mode. + + Added NetGear_Async's `option` parameter reference. + + Updated NetGear_Async definition in docs. + + Changed font size for Helper methods. + + Renamed `Bonus` section to `References` in `mkdocs.yml`. + * Added Gitter sidecard embed widget: + + Imported gitter-sidecar script to `main.html`. + + Updated `custom.js` to set global window option. + + Updated Sidecard UI in `custom.css`. + * Added bonus examples to help section: + + Implemented a curated list of more advanced examples with unusual configuration for each API. + * Added several new contents and updated context. + * Added support for search suggestions, search highlighting and search sharing _(i.e. deep linking)_ + * Added more content to docs to make it more user-friendly. + * Added warning that JPEG Frame-Compression is disabled with Custom Source in WebGear. + * Added steps for identifying and specifying sound card on different OS platforms in WriteGear. + * Added Zenodo DOI badge and its reference in BibTex citations. + * Added `extra.homepage` parameter, which allows for setting a dedicated URL for `site_url`. + * Added `pymdownx.striphtml` plugin for stripping comments. + * Added complete docs for SSH Tunneling Mode. + * Added complete docs for NetGear's SSH Tunneling Mode. + * Added `pip` upgrade related docs. + * Added docs for installing vidgear with only selective dependencies + * Added new `advance`/`experiment` admonition with new background color. + * Added new icons SVGs for `advance` and `warning` admonition. + * Added new usage example and related information. 
+ * Added new image assets for ssh tunneling example. + * Added new admonitions. + * Added new FAQs. + + +??? success "Updates/Improvements" + - [x] VidGear Core: + * New behavior to virtually isolate optional API specific dependencies by silencing `ImportError` on all of VidGear's API imports. + * Implemented algorithm to cache all imports on startup but silence any `ImportError` on a missing optional dependency. + * :warning: Now `ImportError` will be raised only when a certain API-specific dependency is missing during the given API's initialization. + * New `import_dependency_safe` to import a specified dependency safely with the `importlib` module. + * Replaced all APIs imports with `import_dependency_safe`. + * Added support for relative imports in `import_dependency_safe`. + * Implemented `error` parameter that by default raises `ImportError` with a meaningful message if a dependency is missing; otherwise, with `error = log` a warning will be logged, and with `error = silent` everything will stay quiet. However, if a dependency is present but older than specified, an error is raised if so specified. + * Implemented behavior that if a dependency is present but older than the specified `min_version`, an error is always raised. + * Implemented `custom_message` to display a custom message on error instead of the default one. + * Implemented separate `import_core_dependency` function to import and check for a specified core dependency. + * `ImportError` will be raised immediately if a core dependency is not found. + - [x] StreamGear: + * Replaced deprecated `-min_seg_duration` flag with `-seg_duration`. + * Removed redundant `-re` flag from RTFM. + * Improved Live-Streaming performance by disabling SegmentTimeline. + * Improved DASH assets detection for removal by using filename prefixes. + - [x] NetGear: + * Replaced `np.newaxis` with `np.expand_dims`. + * Replaced `random` module with `secrets` while generating system ID. + * Updated array indexing with `np.copy`. + - [x] NetGear_Async: + * Improved custom source handling. + * Removed deprecated `loop` parameter from asyncio methods. + * Re-implemented `skip_loop` parameter in `close()` method. + * :warning: `run_until_complete` will not be used if `skip_loop` is enabled. + * :warning: `skip_loop` now creates an asyncio task instead and enables `disable_confirmation` by default. + * Replaced `create_task` with `ensure_future` to ensure backward compatibility with python-3.6 legacies. + * Simplified code for `transceive_data` method. + - [x] WebGear_RTC: + * Improved handling of failed ICE connection. + * Made `is_running` variable globally available for internal use. + - [x] Helper: + * Added `4320p` resolution support to `dimensions_to_resolutions` method. + * Implemented new `delete_file_safe` to safely delete files at a given path. + * Replaced `os.remove` calls with `delete_file_safe`. + * Added support for filename prefixes in `delete_ext_safe` method. + * Improved and simplified `create_blank_frame` function's frame-channels detection. + * Added `logging` parameter to `capPropId` function to forcefully discard any error (if required). + - [x] Setup.py: + * Added patch for `numpy` dependency, as `numpy` recently dropped support for python 3.6.x legacies. See https://github.com/numpy/numpy/releases/tag/v1.20.0 + * Removed version check on certain dependencies. + * Re-added `aiortc` to auto-install latest version. + - [x] Asyncio: + * Changed `asyncio.sleep` value to `0`.
+ + The amount of time sleep is irrelevant; the only purpose await asyncio.sleep() serves is to force asyncio to suspend execution to the event loop, and give other tasks a chance to run. Also, `await asyncio.sleep(0)` will achieve the same effect. https://stackoverflow.com/a/55782965/10158117 + - [x] License: + * Dropped publication year range to avoid confusion. _(Signed and Approved by @abhiTronix)_ + * Updated Vidgear license's year of first publication of the work in accordance with US copyright notices defined by Title 17, Chapter 4(Visually perceptible copies): https://www.copyright.gov/title17/92chap4.html + * Reflected changes in all copyright notices. + - [x] CI: + * Updated macOS VM Image to latest in azure devops. + * Updated VidGear Docs Deployer Workflow. + * Updated WebGear_RTC CI tests. + * Removed redundant code from CI tests. + * Updated tests to increase coverage. + * Enabled Helper tests for python 3.8+ legacies. + * Enabled logging in `validate_video` method. + * Added `-hls_base_url` to streamgear tests. + * Update `mpegdash` dependency to `0.3.0-dev2` version in Appveyor. + * Updated CI tests for new HLS support + * Updated CI tests from scratch for new native HLS support in StreamGear. + * Updated test patch for StreamGear. + * Added exception for RunTimeErrors in NetGear CI tests. + * Added more directories to Codecov ignore list. + * Imported relative `logger_handler` for asyncio tests. + - [x] Docs: + * Re-positioned few docs comments at bottom for easier detection during stripping. + * Updated to new extra `analytics` parameter in Material Mkdocs. + * Updated dark theme to `dark orange`. + * Changed fonts => text: `Muli` & code: `Fira Code` + * Updated fonts to `Source Sans Pro`. + * Updated `setup.py` update-link for modules. + * Re-added missing StreamGear Code docs. + * Several minor tweaks and typos fixed. + * Updated `404.html` page. + * Updated admonitions colors and beautified `custom.css`. + * Replaced VideoGear & CamGear with OpenCV in CPU intensive examples. + * Updated `mkdocs.yml` with new changes and URLs. + * Moved FAQ examples to bonus examples. + * Moved StreamGear primary modes to separate sections for better readability. + * Implemented separate overview and usage example pages for StreamGear primary modes. + * Improved StreamGear docs context and simplified language. + * Renamed StreamGear `overview` page to `introduction`. + * Re-written Threaded-Queue-Mode from scratch with elaborated functioning. + * Replace *Paypal* with *Liberpay* in `FUNDING.yml`. + * Updated FFmpeg Download links. + * Reverted UI change in CSS. + * Updated `changelog.md` and fixed clutter. + * Updated `README.md` and `mkdocs.yml` with new additions + * Updated context for CamGear example. + * Restructured and added more content to docs. + * Updated comments in source code. + * Removed redundant data table tweaks from `custom.css`. + * Re-aligned badges in README.md. + * Beautify `custom.css`. + * Updated `mkdocs.yml`. + * Updated context and fixed typos. + * Added missing helper methods in Reference. + * Updated Admonitions. + * Updates images assets. + * Bumped CodeCov. + - [x] Logging: + * Improved logging level-names. + * Updated logging messages. + - [x] Minor tweaks to `needs-more-info` template. + - [x] Updated issue templates and labels. + - [x] Removed redundant imports. + +??? 
danger "Breaking Updates/Changes" + - [ ] Virtually isolated all API specific dependencies, Now `ImportError` for API-specific dependencies will be raised only when any of them is missing at API's initialization. + - [ ] Renamed `delete_safe` to `delete_ext_safe`. + - [ ] Dropped support for `frame_jpeg_quality`, `frame_jpeg_optimize`, `frame_jpeg_progressive` attributes from WebGear. + +??? bug "Bug-fixes" + - [x] CamGear: + * Hot-fix for Live Camera Streams: + + Added new event flag to keep check on stream read. + + Implemented event wait for `read()` to block it when source stream is busy. + + Added and Linked `THREAD_TIMEOUT` with event wait timout. + + Improved backward compatibility of new additions. + * Enforced logging for YouTube live. + - [x] NetGear: + * Fixed Bidirectional Video-Frame Transfer broken with frame-compression: + + Fixed `return_data` interfering with return JSON-data in receive mode. + + Fixed logic. + * Fixed color-subsampling interfering with colorspace. + * Patched external `simplejpeg` bug. Issue: https://gitlab.com/jfolz/simplejpeg/-/issues/11 + + Added `np.squeeze` to drop grayscale frame's 3rd dimension on Client's end. + * Fixed bug that cause server end frame dimensions differ from client's end when frame compression enabled. + - [X] NetGear_Async: + * Fixed bug related asyncio queue freezing on calling `join()`. + * Fixed ZMQ connection bugs in bidirectional mode. + * Fixed several critical bugs in event loop handling. + * Fixed several bugs in bidirectional mode implementation. + * Fixed missing socket termination in both server and client end. + * Fixed `timeout` parameter logic. + * Fixed typos in error messages. + - [x] WebGear_RTC: + * Fixed stream freezes after web-page reloading: + + Implemented new algorithm to continue stream even when webpage is reloaded. + + Inherit and modified `next_timestamp` VideoStreamTrack method for generating accurate timestamps. + + Implemented `reset_connections` callable to reset all peer connections and recreate Video-Server timestamps. (Implemented by @kpetrykin) + + Added `close_connection` endpoint in JavaScript to inform server page refreshing.(Thanks to @kpetrykin) + + Added exclusive reset connection node `/close_connection` in routes. + + Added `reset()` method to Video-Server class for manually resetting timestamp clock. + + Added `reset_enabled` flag to keep check on reloads. + + Fixed premature webpage auto-reloading. + + Added additional related imports. + * Fixed web-page reloading bug after stream ended: + + Disable webpage reload behavior handling for Live broadcasting. + + Disable reload CI test on Windows machines due to random failures. + + Improved handling of failed ICE connection. + * Fixed Assertion error bug: + + Source must raise MediaStreamError when stream ends instead of returning None-type. + - [x] WebGear + * Removed format specific OpenCV decoding and encoding support for WebGear. + - [x] Helper: + * Regex bugs fixed: + + New improved regex for discovering supported encoders in `get_supported_vencoders`. + + Re-implemented check for extracting only valid output protocols in `is_valid_url`. + + Minor tweaks for better regex compatibility. + * Bugfix related to OpenCV import: + + Bug fixed for OpenCV import comparison test failing with Legacy versions and throwing `ImportError`. + + Replaced `packaging.parse_version` with more robust `distutils.version`. 
+ * Fixed bug with `create_blank_frame` that throws error with gray frames: + + Implemented automatic output channel correction inside `create_blank_frame` function. + + Extended automatic output channel correction support to asyncio package. + * Implemented `RSTP` protocol validation as _demuxer_, since it's not a protocol but a demuxer. + * Removed redundant `logger_handler`, `mkdir_safe`, `retrieve_best_interpolation`, `capPropId` helper functions from asyncio package. Relatively imported helper functions from non-asyncio package. + * Removed unused `aiohttp` dependency. + * Removed `asctime` formatting from logging. + - [x] StreamGear: + * Fixed Multi-Bitrate HLS VOD streams: + + Re-implemented complete workflow for Multi-Bitrate HLS VOD streams. + + Extended support to both *Single-Source* and *Real-time Frames* Modes. + * Fixed bugs with audio-video mapping. + * Fixed master playlist not generating in output. + * Fixed improper `-seg_duration` value resulting in broken pipeline. + * Fixed expected aspect ratio not calculated correctly for additional streams. + * Fixed stream not terminating when provided input from external audio device. + * Fixed bugs related to external audio not mapped correctly in HLS format. + * Fixed OPUS audio fragments not supported with MP4 video in HLS. + * Fixed unsupported high audio bit-rate bug. + - [x] Setup.py: + * Fixed `latest_version` returning incorrect version for some PYPI packages. + * Removed `latest_version` variable support from `simplejpeg`. + * Fixed `streamlink` only supporting requests==2.25.1 on Windows. + * Removed all redundant dependencies like `colorama`, `aiofiles`, `aiohttp`. + * Fixed typos in dependencies. + - [x] Setup.cfg: + * Replaced dashes with underscores to remove warnings. + - [x] CI: + * Replaced buggy `starlette.TestClient` with `async-asgi-testclient` in WebGear_RTC + * Removed `run()` method and replaced with pure asyncio implementation. + * Added new `async-asgi-testclient` CI dependency. + * Fixed `fake_picamera` class logger calling `vidgear` imports prematurely before importing `picamera` class in tests. + + Implemented new `fake_picamera` class logger inherently with `logging` module. + + Moved `sys.module` logic for faking to `init.py`. + + Added `__init__.py` to ignore in Codecov. + * Fixed event loop closing prematurely while reloading: + + Internally disabled suspending event loop while reloading. + * Event Policy Loop patcher added for WebGear_RTC tests. + * Fixed `return_assets_path` path bug. + * Fixed typo in `TimeoutError` exception import. + * Fixed eventloop is already closed bug. + * Fixed eventloop bugs in Helper CI tests. + * Fixed several minor bugs related to new CI tests. + * Fixed bug in PiGear tests. + - [x] Docs: + * Fixed 404 page does not work outside the site root with mkdocs. + * Fixed markdown files comments not stripped when converted to HTML. + * Fixed missing heading in VideoGear. + * Typos in links and code comments fixed. + * Several minor tweaks and typos fixed. + * Fixed improper URLs/Hyperlinks and related typos. + * Fixed typos in usage examples. + * Fixed redundant properties in CSS. + * Fixed bugs in `mkdocs.yml`. + * Fixed docs contexts and typos. + * Fixed `stream.release()` missing in docs. + * Fixed several typos in code comments. + * Removed dead code from docs. + - [x] Refactored Code and reduced redundancy. + - [x] Fixed shutdown in `main.py`. + - [x] Fixed logging comments. + +??? 
question "Pull Requests" + * PR #210 + * PR #215 + * PR #222 + * PR #223 + * PR #227 + * PR #231 + * PR #233 + * PR #237 + * PR #239 + * PR #243 + + +  + +  + + ## v0.2.1 (2021-04-25) ??? tip "New Features" - [x] **WebGear_RTC:** - * [ ] A new API that is similar to WeGear API in all aspects but utilizes WebRTC standard instead of Motion JPEG for streaming. - * [ ] Now it is possible to share data and perform teleconferencing peer-to-peer, without requiring that the user install plugins or any other third-party software. - * [ ] Added a flexible backend for `aiortc` - a python library for Web Real-Time Communication (WebRTC). - * [ ] Integrated all functionality and parameters of WebGear into WebGear_RTC API. - * [ ] Implemented JSON Response with a WebRTC Peer Connection of Video Server. - * [ ] Added a internal `RTC_VideoServer` server on WebGear_RTC, a inherit-class to aiortc's VideoStreamTrack API. - * [ ] New Standalone UI Default theme v0.1.1 for WebGear_RTC from scratch without using 3rd-party assets. (by @abhiTronix) - * [ ] New `custom.js` and `custom.css` for custom responsive behavior. - * [ ] Added WebRTC support to `custom.js` and ensured compatibility with WebGear_RTC. - * [ ] Added example support for ICE framework and STUN protocol like WebRTC features to `custom.js`. - * [ ] Added `resize()` function to `custom.js` to automatically adjust `video` & `img` tags for smaller screens. - * [ ] Added WebGear_RTC support in main.py for easy access through terminal using `--mode` flag. - * [ ] Integrated all WebGear_RTC enhancements to WebGear Themes. - * [ ] Added CI test for WebGear_RTC. - * [ ] Added complete docs for WebGear_RTC API. - * [ ] Added bare-minimum as well as advanced examples usage code. - * [ ] Added new theme images. - * [ ] Added Reference and FAQs. + * A new API that is similar to WeGear API in all aspects but utilizes WebRTC standard instead of Motion JPEG for streaming. + * Now it is possible to share data and perform teleconferencing peer-to-peer, without requiring that the user install plugins or any other third-party software. + * Added a flexible backend for `aiortc` - a python library for Web Real-Time Communication (WebRTC). + * Integrated all functionality and parameters of WebGear into WebGear_RTC API. + * Implemented JSON Response with a WebRTC Peer Connection of Video Server. + * Added a internal `RTC_VideoServer` server on WebGear_RTC, a inherit-class to aiortc's VideoStreamTrack API. + * New Standalone UI Default theme v0.1.1 for WebGear_RTC from scratch without using 3rd-party assets. (by @abhiTronix) + * New `custom.js` and `custom.css` for custom responsive behavior. + * Added WebRTC support to `custom.js` and ensured compatibility with WebGear_RTC. + * Added example support for ICE framework and STUN protocol like WebRTC features to `custom.js`. + * Added `resize()` function to `custom.js` to automatically adjust `video` & `img` tags for smaller screens. + * Added WebGear_RTC support in main.py for easy access through terminal using `--mode` flag. + * Integrated all WebGear_RTC enhancements to WebGear Themes. + * Added CI test for WebGear_RTC. + * Added complete docs for WebGear_RTC API. + * Added bare-minimum as well as advanced examples usage code. + * Added new theme images. + * Added Reference and FAQs. - [x] **CamGear API:** - * [ ] New Improved Pure-Python Multiple-Threaded Implementation: + * New Improved Pure-Python Multiple-Threaded Implementation: + Optimized Threaded-Queue-Mode Performance. 
(PR by @bml1g12) + Replaced regular `queue.full` checks followed by sleep with implicit sleep with blocking `queue.put`. + Replaced regular `queue.empty` checks followed by queue. + Replaced `nowait_get` with a blocking `queue.get` natural empty check. + Up-to 2x performance boost than previous implementations. - * [ ] New `THREAD_TIMEOUT` attribute to prevent deadlocks: + * New `THREAD_TIMEOUT` attribute to prevent deadlocks: + Added support for `THREAD_TIMEOUT` attribute to its `options` parameter. + Updated CI Tests and docs. - [x] **WriteGear API:** - * [ ] New more robust handling of default video-encoder in compression mode: + * New more robust handling of default video-encoder in compression mode: + Implemented auto-switching of default video-encoder automatically based on availability. + API now selects Default encoder based on priority: `"libx264" > "libx265" > "libxvid" > "mpeg4"`. + Added `get_supported_vencoders` Helper method to enumerate Supported Video Encoders. + Added common handler for `-c:v` and `-vcodec` flags. - [x] **NetGear API:** - * [ ] New Turbo-JPEG compression with simplejpeg + * New Turbo-JPEG compression with simplejpeg + Implemented JPEG compression algorithm for 4-5% performance boost at cost of minor loss in quality. + Utilized `encode_jpeg` and `decode_jpeg` methods to implement turbo-JPEG transcoding with `simplejpeg`. + Added options to control JPEG frames quality, enable fastest dct, fast upsampling to boost performance. @@ -67,13 +480,13 @@ limitations under the License. + Enabled fast dct by default with JPEG frames at 90%. + Added Docs for JPEG Frame Compression. - [x] **WebGear API:** - * [ ] New modular and flexible configuration for Custom Sources: + * New modular and flexible configuration for Custom Sources: + Implemented more convenient approach for handling custom source configuration. + Added new `config` global variable for this new behavior. + Now None-type `source` parameter value is allowed for defining own custom sources. + Added new Example case and Updates Docs for this feature. + Added new CI Tests. - * [ ] New Browser UI Updates: + * New Browser UI Updates: + New Standalone UI Default theme v0.1.0 for browser (by @abhiTronix) + Completely rewritten theme from scratch with only local resources. + New `custom.js` and `custom.css` for custom responsive behavior. @@ -82,144 +495,144 @@ limitations under the License. + Removed all third-party theme dependencies. + Update links to new github server `abhiTronix/vidgear-vitals` + Updated docs with new theme's screenshots. - * [ ] Added `enable_infinite_frames` attribute for enabling infinite frames. - * [ ] Added New modular and flexible configuration for Custom Sources. - * [ ] Bumped WebGear Theme Version to v0.1.1. - * [ ] Updated Docs and CI tests. + * Added `enable_infinite_frames` attribute for enabling infinite frames. + * Added New modular and flexible configuration for Custom Sources. + * Bumped WebGear Theme Version to v0.1.1. + * Updated Docs and CI tests. - [x] **ScreenGear API:** - * [ ] Implemented Improved Pure-Python Multiple-Threaded like CamGear. - * [ ] Added support for `THREAD_TIMEOUT` attribute to its `options` parameter. + * Implemented Improved Pure-Python Multiple-Threaded like CamGear. + * Added support for `THREAD_TIMEOUT` attribute to its `options` parameter. - [X] **StreamGear API:** - * [ ] Enabled pseudo live-streaming flag `re` for live content. + * Enabled pseudo live-streaming flag `re` for live content. 
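A hedged sketch of the pseudo live-streaming support noted above, using StreamGear's Real-time Frames Mode; the webcam index, output name, and DASH format choice are assumptions.

```python
# Hedged sketch: pushing frames in real-time to a DASH live-stream with StreamGear.
# Webcam index, output filename and the "-livestream" usage shown are assumptions.
import cv2
from vidgear.gears import StreamGear

stream_params = {"-livestream": True}  # keep only the most recent segments
streamer = StreamGear(output="live.mpd", format="dash", **stream_params)

capture = cv2.VideoCapture(0)
while True:
    grabbed, frame = capture.read()
    if not grabbed:
        break
    streamer.stream(frame)  # Real-time Frames Mode entry point
capture.release()
streamer.terminate()
```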
- [x] **Docs:** - * [ ] Added new native docs versioning to mkdocs-material. - * [ ] Added new examples and few visual tweaks. - * [ ] Updated Stylesheet for versioning. - * [ ] Added new DASH video chunks at https://github.com/abhiTronix/vidgear-docs-additionals for StreamGear and Stabilizer streams. - * [ ] Added open-sourced "Tears of Steel" * [ ] project Mango Teaser video chunks. - * [ ] Added open-sourced "Subspace Video Stabilization" http://web.cecs.pdx.edu/~fliu/project/subspace_stabilization/ video chunks. - * [ ] Added support for DASH Video Thumbnail preview in Clappr within `custom.js`. - * [ ] Added responsive clappr DASH player with bootstrap's `embed-responsive`. - * [ ] Added new permalink icon and slugify to toc. - * [ ] Added "back-to-top" button for easy navigation. + * Added new native docs versioning to mkdocs-material. + * Added new examples and few visual tweaks. + * Updated Stylesheet for versioning. + * Added new DASH video chunks at https://github.com/abhiTronix/vidgear-docs-additionals for StreamGear and Stabilizer streams. + * Added open-sourced "Tears of Steel" * project Mango Teaser video chunks. + * Added open-sourced "Subspace Video Stabilization" http://web.cecs.pdx.edu/~fliu/project/subspace_stabilization/ video chunks. + * Added support for DASH Video Thumbnail preview in Clappr within `custom.js`. + * Added responsive clappr DASH player with bootstrap's `embed-responsive`. + * Added new permalink icon and slugify to toc. + * Added "back-to-top" button for easy navigation. - [x] **Helper:** - * [ ] New GitHub Mirror with latest Auto-built FFmpeg Static Binaries: + * New GitHub Mirror with latest Auto-built FFmpeg Static Binaries: + Replaced new GitHub Mirror `abhiTronix/FFmpeg-Builds` in helper.py + New CI maintained Auto-built FFmpeg Static Binaries. + Removed all 3rd-party and old links for better compatibility and Open-Source reliability. + Updated Related CI tests. - - Added auto-font-scaling for `create_blank_frame` method. - * [ ] Added `c_name` parameter to `generate_webdata` and `download_webdata` to specify class. - * [ ] A more robust Implementation of Downloading Artifacts: - * [ ] Added a custom HTTP `TimeoutHTTPAdapter` Adapter with a default timeout for all HTTP calls based on [this GitHub comment](). - * [ ] Implemented http client and the `send()` method to ensure that the default timeout is used if a timeout argument isn't provided. - * [ ] Implemented Requests session`with` block to exit properly even if there are unhandled exceptions. - * [ ] Add a retry strategy to custom `TimeoutHTTPAdapter` Adapter with max 3 retries and sleep(`backoff_factor=1`) between failed requests. - * [ ] Added `create_blank_frame` method to create bland frames with suitable text. + * Added auto-font-scaling for `create_blank_frame` method. + * Added `c_name` parameter to `generate_webdata` and `download_webdata` to specify class. + * A more robust Implementation of Downloading Artifacts: + + Added a custom HTTP `TimeoutHTTPAdapter` Adapter with a default timeout for all HTTP calls based on [this GitHub comment](). + + Implemented http client and the `send()` method to ensure that the default timeout is used if a timeout argument isn't provided. + + Implemented Requests session`with` block to exit properly even if there are unhandled exceptions. + + Add a retry strategy to custom `TimeoutHTTPAdapter` Adapter with max 3 retries and sleep(`backoff_factor=1`) between failed requests. 
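The download-hardening described above follows a common `requests` pattern; the sketch below mirrors it generically (the class name and retry/backoff defaults come from the notes above, everything else is an assumption) rather than reproducing vidgear's exact helper code.

```python
# Generic sketch of the timeout-plus-retry adapter pattern described above;
# not vidgear's exact code. Timeout value and URL are assumptions.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


class TimeoutHTTPAdapter(HTTPAdapter):
    def __init__(self, *args, timeout=5, **kwargs):
        self.timeout = timeout
        super().__init__(*args, **kwargs)

    def send(self, request, **kwargs):
        # enforce a default timeout whenever the caller omits one
        if kwargs.get("timeout") is None:
            kwargs["timeout"] = self.timeout
        return super().send(request, **kwargs)


retries = Retry(total=3, backoff_factor=1, status_forcelist=[429, 500, 502, 503, 504])
with requests.Session() as session:
    session.mount("https://", TimeoutHTTPAdapter(timeout=5, max_retries=retries))
    response = session.get("https://example.com/asset.bin")
```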
+ * Added `create_blank_frame` method to create bland frames with suitable text. - [x] **[CI] Continuous Integration:** - * [ ] Added new fake frame generated for fake `picamera` class with numpy. - * [ ] Added new `create_bug` parameter to fake `picamera` class for emulating various artificial bugs. - * [ ] Added float/int instance check on `time_delay` for camgear and pigear. - * [ ] Added `EXIT_CODE` to new timeout implementation for pytests to upload codecov report when no timeout. - * [ ] Added auxiliary classes to fake `picamera` for facilitating the emulation. - * [ ] Added new CI tests for PiGear Class for testing on all platforms. - * [ ] Added `shutdown()` function to gracefully terminate WebGear_RTC API. - * [ ] Added new `coreutils` brew dependency. - * [ ] Added handler for variable check on exit and codecov upload. - * [ ] Added `is_running` flag to WebGear_RTC to exit safely. + * Added new fake frame generated for fake `picamera` class with numpy. + * Added new `create_bug` parameter to fake `picamera` class for emulating various artificial bugs. + * Added float/int instance check on `time_delay` for camgear and pigear. + * Added `EXIT_CODE` to new timeout implementation for pytests to upload codecov report when no timeout. + * Added auxiliary classes to fake `picamera` for facilitating the emulation. + * Added new CI tests for PiGear Class for testing on all platforms. + * Added `shutdown()` function to gracefully terminate WebGear_RTC API. + * Added new `coreutils` brew dependency. + * Added handler for variable check on exit and codecov upload. + * Added `is_running` flag to WebGear_RTC to exit safely. - [x] **Setup:** - * [ ] New automated latest version retriever for packages: + * New automated latest version retriever for packages: + Implemented new `latest_version` method to automatically retrieve latest version for packages. + Added Some Dependencies. - * [ ] Added `simplejpeg` package for all platforms. + * Added `simplejpeg` package for all platforms. ??? success "Updates/Improvements" - [x] Added exception for RunTimeErrors in NetGear CI tests. - [x] WriteGear: Critical file write access checking method: - * [ ] Added new `check_WriteAccess` Helper method. - * [ ] Implemented a new robust algorithm to check if given directory has write-access. - * [ ] Removed old behavior which gives irregular results. + * Added new `check_WriteAccess` Helper method. + * Implemented a new robust algorithm to check if given directory has write-access. + * Removed old behavior which gives irregular results. - [x] Helper: Maintenance Updates - * [ ] Added workaround for Python bug. - * [ ] Added `safe_mkdir` to `check_WriteAccess` to automatically create non-existential parent folder in path. - * [ ] Extended `check_WriteAccess` Patch to StreamGear. - * [ ] Simplified `check_WriteAccess` to handle Windows envs easily. - * [ ] Updated FFmpeg Static Download URL for WriteGear. - * [ ] Implemented fallback option for auto-calculating bitrate from extracted audio sample-rate in `validate_audio` method. + * Added workaround for Python bug. + * Added `safe_mkdir` to `check_WriteAccess` to automatically create non-existential parent folder in path. + * Extended `check_WriteAccess` Patch to StreamGear. + * Simplified `check_WriteAccess` to handle Windows envs easily. + * Updated FFmpeg Static Download URL for WriteGear. + * Implemented fallback option for auto-calculating bitrate from extracted audio sample-rate in `validate_audio` method. 
- [x] Docs: General UI Updates - * [ ] Updated Meta tags for og site and twitter cards. - * [ ] Replaced Custom dark theme toggle with mkdocs-material's official Color palette toggle - * [ ] Added example for external audio input and creating segmented MP4 video in WriteGear FAQ. - * [ ] Added example for YouTube streaming with WriteGear. - * [ ] Removed custom `dark-material.js` and `header.html` files from theme. - * [ ] Added blogpost link for detailed information on Stabilizer Working. - * [ ] Updated `mkdocs.yml` and `custom.css` configuration. - * [ ] Remove old hack to resize clappr DASH player with css. - * [ ] Updated Admonitions. - * [ ] Improved docs contexts. - * [ ] Updated CSS for version-selector-button. - * [ ] Adjusted files to match new themes. - * [ ] Updated welcome-bot message for typos. - * [ ] Removed redundant FAQs from NetGear Docs. - * [ ] Updated Assets Images. - * [ ] Updated spacing. + * Updated Meta tags for og site and twitter cards. + * Replaced Custom dark theme toggle with mkdocs-material's official Color palette toggle + * Added example for external audio input and creating segmented MP4 video in WriteGear FAQ. + * Added example for YouTube streaming with WriteGear. + * Removed custom `dark-material.js` and `header.html` files from theme. + * Added blogpost link for detailed information on Stabilizer Working. + * Updated `mkdocs.yml` and `custom.css` configuration. + * Remove old hack to resize clappr DASH player with css. + * Updated Admonitions. + * Improved docs contexts. + * Updated CSS for version-selector-button. + * Adjusted files to match new themes. + * Updated welcome-bot message for typos. + * Removed redundant FAQs from NetGear Docs. + * Updated Assets Images. + * Updated spacing. - [x] CI: - * [ ] Removed unused `github.ref` from yaml. - * [ ] Updated OpenCV Bash Script for Linux envs. - * [ ] Added `timeout-minutes` flag to github-actions workflow. - * [ ] Added `timeout` flag to pytest. - * [ ] Replaced Threaded Gears with OpenCV VideoCapture API. - * [ ] Moved files and Removed redundant code. - * [ ] Replaced grayscale frames with color frames for WebGear tests. - * [ ] Updated pytest timeout value to 15mins. - * [ ] Removed `aiortc` automated install on Windows platform within setup.py. - * [ ] Added new timeout logic to continue to run on external timeout for GitHub Actions Workflows. - * [ ] Removed unreliable old timeout solution from WebGear_RTC. - * [ ] Removed `timeout_decorator` and `asyncio_timeout` dependencies for CI. - * [ ] Removed WebGear_RTC API exception from codecov. - * [ ] Implemented new fake `picamera` class to CI utils for emulating RPi Camera-Module Real-time capabilities. - * [ ] Implemented new `get_RTCPeer_payload` method to receive WebGear_RTC peer payload. - * [ ] Removed PiGear from Codecov exceptions. - * [ ] Disable Frame Compression in few NetGear tests failing on frame matching. - * [ ] Updated NetGear CI tests to support new attributes - * [ ] Removed warnings and updated yaml - * [ ] Added `pytest.ini` to address multiple warnings. - * [ ] Updated azure workflow condition syntax. - * [ ] Update `mike` settings for mkdocs versioning. - * [ ] Updated codecov configurations. - * [ ] Minor logging and docs updates. - * [ ] Implemented pytest timeout for azure pipelines for macOS envs. - * [ ] Added `aiortc` as external dependency in `appveyor.yml`. - * [ ] Re-implemented WebGear_RTC improper offer-answer handshake in CI tests. - * [ ] WebGear_RTC CI Updated with `VideoTransformTrack` to test stream play. 
- * [ ] Implemented fake `AttributeError` for fake picamera class. - * [ ] Updated PiGear CI tests to increment codecov. - * [ ] Update Tests docs and other minor tweaks to increase overall coverage. - * [ ] Enabled debugging and disabled exit 1 on error in azure pipeline. - * [ ] Removed redundant benchmark tests. + * Removed unused `github.ref` from yaml. + * Updated OpenCV Bash Script for Linux envs. + * Added `timeout-minutes` flag to github-actions workflow. + * Added `timeout` flag to pytest. + * Replaced Threaded Gears with OpenCV VideoCapture API. + * Moved files and Removed redundant code. + * Replaced grayscale frames with color frames for WebGear tests. + * Updated pytest timeout value to 15mins. + * Removed `aiortc` automated install on Windows platform within setup.py. + * Added new timeout logic to continue to run on external timeout for GitHub Actions Workflows. + * Removed unreliable old timeout solution from WebGear_RTC. + * Removed `timeout_decorator` and `asyncio_timeout` dependencies for CI. + * Removed WebGear_RTC API exception from codecov. + * Implemented new fake `picamera` class to CI utils for emulating RPi Camera-Module Real-time capabilities. + * Implemented new `get_RTCPeer_payload` method to receive WebGear_RTC peer payload. + * Removed PiGear from Codecov exceptions. + * Disable Frame Compression in few NetGear tests failing on frame matching. + * Updated NetGear CI tests to support new attributes + * Removed warnings and updated yaml + + Added `pytest.ini` to address multiple warnings. + + Updated azure workflow condition syntax. + * Update `mike` settings for mkdocs versioning. + * Updated codecov configurations. + * Minor logging and docs updates. + * Implemented pytest timeout for azure pipelines for macOS envs. + * Added `aiortc` as external dependency in `appveyor.yml`. + * Re-implemented WebGear_RTC improper offer-answer handshake in CI tests. + * WebGear_RTC CI Updated with `VideoTransformTrack` to test stream play. + * Implemented fake `AttributeError` for fake picamera class. + * Updated PiGear CI tests to increment codecov. + * Update Tests docs and other minor tweaks to increase overall coverage. + * Enabled debugging and disabled exit 1 on error in azure pipeline. + * Removed redundant benchmark tests. - [x] Helper: Added missing RSTP URL scheme to `is_valid_url` method. - [x] NetGear_Async: Added fix for uvloop only supporting python>=3.7 legacies. - [x] Extended WebGear's Video-Handler scope to `https`. - [x] CI: Remove all redundant 32-bit Tests from Appveyor: - * [ ] Appveyor 32-bit Windows envs are actually running on 64-bit machines. - * [ ] More information here: https://help.appveyor.com/discussions/questions/20637-is-it-possible-to-force-running-tests-on-both-32-bit-and-64-bit-windows + * Appveyor 32-bit Windows envs are actually running on 64-bit machines. + * More information here: https://help.appveyor.com/discussions/questions/20637-is-it-possible-to-force-running-tests-on-both-32-bit-and-64-bit-windows - [x] Setup: Removed `latest_version` behavior from some packages. - [x] NetGear_Async: Revised logic for handling uvloop for all platforms and legacies. - [x] Setup: Updated logic to install uvloop-"v0.14.0" for python-3.6 legacies. - [x] Removed any redundant code from webgear. - [x] StreamGear: - * [ ] Replaced Ordinary dict with Ordered Dict to use `move_to_end` method. - * [ ] Moved external audio input to output parameters dict. - * [ ] Added additional imports. - * [ ] Updated docs to reflect changes. 
+ * Replaced Ordinary dict with Ordered Dict to use `move_to_end` method. + * Moved external audio input to output parameters dict. + * Added additional imports. + * Updated docs to reflect changes. - [x] Numerous Updates to Readme and `mkdocs.yml`. - [x] Updated font to `FONT_HERSHEY_SCRIPT_COMPLEX` and enabled logging in create_blank_frame. - [x] Separated channels for downloading and storing theme files for WebGear and WebGear_RTC APIs. - [x] Removed `logging` condition to always inform user in a event of FFmpeg binary download failure. - [x] WebGear_RTC: - * [ ] Improved auto internal termination. - * [ ] More Performance updates through `setCodecPreferences`. - * [ ] Moved default Video RTC video launcher to `__offer`. + * Improved auto internal termination. + * More Performance updates through `setCodecPreferences`. + * Moved default Video RTC video launcher to `__offer`. - [x] NetGear_Async: Added timeout to client in CI tests. - [x] Reimplemented and updated `changelog.md`. - [x] Updated code comments. @@ -239,37 +652,37 @@ limitations under the License. - [x] NetGear_Async: Fixed `source` parameter missing `None` as default value. - [x] Fixed uvloops only supporting python>=3.7 in NetGear_Async. - [x] Helper: - * [ ] Fixed Zombie processes in `check_output` method due a hidden bug in python. For reference: https://bugs.python.org/issue37380 - * [ ] Fixed regex in `validate_video` method. + * Fixed Zombie processes in `check_output` method due a hidden bug in python. For reference: https://bugs.python.org/issue37380 + * Fixed regex in `validate_video` method. - [x] Docs: - * [ ] Invalid `site_url` bug patched in mkdocs.yml - * [ ] Remove redundant mike theme support and its files. - * [ ] Fixed video not centered when DASH video in fullscreen mode with clappr. - * [ ] Fixed Incompatible new mkdocs-docs theme. - * [ ] Fixed missing hyperlinks. + * Invalid `site_url` bug patched in mkdocs.yml + * Remove redundant mike theme support and its files. + * Fixed video not centered when DASH video in fullscreen mode with clappr. + * Fixed Incompatible new mkdocs-docs theme. + * Fixed missing hyperlinks. - [x] CI: - * [ ] Fixed NetGear Address bug - * [ ] Fixed bugs related to termination in WebGear_RTC. - * [ ] Fixed random CI test failures and code cleanup. - * [ ] Fixed string formating bug in Helper.py. - * [ ] Fixed F821 undefined name bugs in WebGear_RTC tests. - * [ ] NetGear_Async Tests fixes. - * [ ] Fixed F821 undefined name bugs. - * [ ] Fixed typo bugs in `main.py`. - * [ ] Fixed Relative import bug in PiGear. - * [ ] Fixed regex bug in warning filter. - * [ ] Fixed WebGear_RTC frozen threads on exit. - * [ ] Fixed bugs in codecov bash uploader setting for azure pipelines. - * [ ] Fixed False-positive `picamera` import due to improper sys.module settings. - * [ ] Fixed Frozen Threads on exit in WebGear_RTC API. - * [ ] Fixed deploy error in `VidGear Docs Deployer` workflow - * [ ] Fixed low timeout bug. - * [ ] Fixed bugs in PiGear tests. - * [ ] Patched F821 undefined name bug. + * Fixed NetGear Address bug + * Fixed bugs related to termination in WebGear_RTC. + * Fixed random CI test failures and code cleanup. + * Fixed string formating bug in Helper.py. + * Fixed F821 undefined name bugs in WebGear_RTC tests. + * NetGear_Async Tests fixes. + * Fixed F821 undefined name bugs. + * Fixed typo bugs in `main.py`. + * Fixed Relative import bug in PiGear. + * Fixed regex bug in warning filter. + * Fixed WebGear_RTC frozen threads on exit. 
+ * Fixed bugs in codecov bash uploader setting for azure pipelines. + * Fixed False-positive `picamera` import due to improper sys.module settings. + * Fixed Frozen Threads on exit in WebGear_RTC API. + * Fixed deploy error in `VidGear Docs Deployer` workflow + * Fixed low timeout bug. + * Fixed bugs in PiGear tests. + * Patched F821 undefined name bug. - [x] StreamGear: - * [ ] Fixed StreamGear throwing `Picture size 0x0 is invalid` bug with external audio. - * [ ] Fixed default input framerate value getting discarded in Real-time Frame Mode. - * [ ] Fixed internal list-formatting bug. + * Fixed StreamGear throwing `Picture size 0x0 is invalid` bug with external audio. + * Fixed default input framerate value getting discarded in Real-time Frame Mode. + * Fixed internal list-formatting bug. - [x] Fixed E999 SyntaxError bug in `main.py`. - [x] Fixed Typo in bash script. - [x] Fixed WebGear freeze on reloading bug. @@ -298,20 +711,20 @@ limitations under the License. ??? tip "New Features" - [x] **CamGear API:** - * [ ] Support for various Live-Video-Streaming services: + * Support for various Live-Video-Streaming services: + Added seamless support for live video streaming sites like Twitch, LiveStream, Dailymotion etc. + Implemented flexible framework around `streamlink` python library with easy control over parameters and quality. + Stream Mode can now automatically detects whether `source` belong to YouTube or elsewhere, and handles it with appropriate API. - * [ ] Re-implemented YouTube URLs Handler: + * Re-implemented YouTube URLs Handler: + Re-implemented CamGear's YouTube URLs Handler completely from scratch. + New Robust Logic to flexibly handing video and video-audio streams. + Intelligent stream selector for selecting best possible stream compatible with OpenCV. + Added support for selecting stream qualities and parameters. + Implemented new `get_supported_quality` helper method for handling specified qualities + Fixed Live-Stream URLs not supported by OpenCV's Videocapture and its FFmpeg. - * [ ] Added additional `STREAM_QUALITY` and `STREAM_PARAMS` attributes. + * Added additional `STREAM_QUALITY` and `STREAM_PARAMS` attributes. - [x] **ScreenGear API:** - * [ ] Multiple Backends Support: + * Multiple Backends Support: + Added new multiple backend support with new [`pyscreenshot`](https://github.com/ponty/pyscreenshot) python library. + Made `pyscreenshot` the default API for ScreenGear, replaces `mss`. + Added new `backend` parameter for this feature while retaining previous behavior. @@ -322,90 +735,90 @@ limitations under the License. + Updated ScreenGear Docs. + Updated ScreenGear CI tests. - [X] **StreamGear API:** - * [ ] Changed default behaviour to support complete video transcoding. - * [ ] Added `-livestream` attribute to support live-streaming. - * [ ] Added additional parameters for `-livestream` attribute functionality. - * [ ] Updated StreamGear Tests. - * [ ] Updated StreamGear docs. + * Changed default behaviour to support complete video transcoding. + * Added `-livestream` attribute to support live-streaming. + * Added additional parameters for `-livestream` attribute functionality. + * Updated StreamGear Tests. + * Updated StreamGear docs. - [x] **Stabilizer Class:** - * [ ] New Robust Error Handling with Blank Frames: + * New Robust Error Handling with Blank Frames: + Elegantly handles all crashes due to Empty/Blank/Dark frames. + Stabilizer throws Warning with this new behavior instead of crashing. + Updated CI test for this feature. 
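A hedged sketch of the CamGear Stream Mode and `STREAM_QUALITY` attribute listed above; the URL and quality value are placeholders, not tested endpoints.

```python
# Hedged sketch of CamGear's Stream Mode with the new STREAM_QUALITY attribute.
# The URL and quality value below are placeholder assumptions.
from vidgear.gears import CamGear

options = {"STREAM_QUALITY": "720p"}  # illustrative quality hint
stream = CamGear(
    source="https://youtu.be/xxxxxxxxxxx",  # hypothetical YouTube/Twitch URL
    stream_mode=True,
    logging=True,
    **options
).start()

while True:
    frame = stream.read()
    if frame is None:
        break
    # ... process or display `frame` here ...

stream.stop()
```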
- [x] **Docs:** - * [ ] Automated Docs Versioning: + * Automated Docs Versioning: + Implemented Docs versioning through `mike` API. + Separate new workflow steps to handle different versions. + Updated docs deploy worflow to support `release` and `dev` builds. + Added automatic version extraction from github events. + Added `version-select.js` and `version-select.css` files. - * [ ] Toggleable Dark-White Docs Support: + * Toggleable Dark-White Docs Support: + Toggle-button to easily switch dark, white and preferred theme. + New Updated Assets for dark backgrounds + New css, js files/content to implement this behavior. + New material icons for button. + Updated scheme to `slate` in `mkdocs.yml`. - * [ ] New Theme and assets: + * New Theme and assets: + New `purple` theme with `dark-purple` accent color. + New images assets with updated transparent background. + Support for both dark and white theme. + Increased `rebufferingGoal` for dash videos. + New updated custom 404 page for docs. - * [ ] Issue and PR automated-bots changes + * Issue and PR automated-bots changes + New `need_info.yml` YAML Workflow. + New `needs-more-info.yml` Request-Info template. + Replaced Request-Info templates. + Improved PR and Issue welcome formatting. - * [ ] Added custom HTML pages. - * [ ] Added `show_root_heading` flag to disable headings in References. - * [ ] Added new `inserAfter` function to version-select.js. - * [ ] Adjusted hue for dark-theme for better contrast. - * [ ] New usage examples and FAQs. - * [ ] Added `gitmoji` for commits. + * Added custom HTML pages. + * Added `show_root_heading` flag to disable headings in References. + * Added new `inserAfter` function to version-select.js. + * Adjusted hue for dark-theme for better contrast. + * New usage examples and FAQs. + * Added `gitmoji` for commits. - [x] **Continuous Integration:** - * [ ] Maintenance Updates: + * Maintenance Updates: + Added support for new `VIDGEAR_LOGFILE` environment variable in Travis CI. + Added missing CI tests. + Added logging for helper functions. - * [ ] Azure-Pipeline workflow for MacOS envs + * Azure-Pipeline workflow for MacOS envs + Added Azure-Pipeline Workflow for testing MacOS environment. + Added codecov support. - * [ ] GitHub Actions workflow for Linux envs + * GitHub Actions workflow for Linux envs + Added GitHub Action work-flow for testing Linux environment. - * [ ] New YAML to implement GitHub Action workflow for python 3.6, 3.7, 3,8 & 3.9 matrices. - * [ ] Added Upload coverage to Codecov GitHub Action workflow. - * [ ] New codecov-bash uploader for Azure Pipelines. + * New YAML to implement GitHub Action workflow for python 3.6, 3.7, 3,8 & 3.9 matrices. + * Added Upload coverage to Codecov GitHub Action workflow. + * New codecov-bash uploader for Azure Pipelines. - [x] **Logging:** - * [ ] Added file support + * Added file support + Added `VIDGEAR_LOGFILE` environment variable to manually add file/dir path. + Reworked `logger_handler()` Helper methods (in asyncio too). + Added new formatter and Filehandler for handling logger files. - * [ ] Added `restore_levelnames` auxiliary method for restoring logging levelnames. + * Added `restore_levelnames` auxiliary method for restoring logging levelnames. - [x] Added auto version extraction from package `version.py` in setup.py. ??? success "Updates/Improvements" - [x] Added missing Lazy-pirate auto-reconnection support for Multi-Servers and Multi-Clients Mode in NetGear API. - [x] Added new FFmpeg test path to Bash-Script and updated README broken links. 
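A tiny sketch of the new `VIDGEAR_LOGFILE` file-logging support mentioned above; the path is an assumption, and setting the variable before importing vidgear is the safe ordering assumed here.

```python
# Tiny sketch: route vidgear logs to a file via the new VIDGEAR_LOGFILE variable.
# The path is an assumption; it is set before importing vidgear to be safe.
import os

os.environ["VIDGEAR_LOGFILE"] = "/tmp/vidgear.log"  # file or directory path

from vidgear.gears import CamGear  # APIs imported afterwards log to that file
```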
- [x] Asset Cleanup: - * [ ] Removed all third-party javascripts from projects. - * [ ] Linked all third-party javascript directly. - * [ ] Cleaned up necessary code from CSS and JS files. - * [ ] Removed any copyrighted material or links. + * Removed all third-party javascripts from projects. + * Linked all third-party javascript directly. + * Cleaned up necessary code from CSS and JS files. + * Removed any copyrighted material or links. - [x] Rewritten Docs from scratch: - * [ ] Improved complete docs formatting. - * [ ] Simplified language for easier understanding. - * [ ] Fixed `mkdocstrings` showing root headings. - * [ ] Included all APIs methods to `mkdocstrings` docs. - * [ ] Removed unnecessary information from docs. - * [ ] Corrected Spelling and typos. - * [ ] Fixed context and grammar. - * [ ] Removed `motivation.md`. - * [ ] Renamed many terms. - * [ ] Fixed hyper-links. - * [ ] Reformatted missing or improper information. - * [ ] Fixed context and spellings in Docs files. - * [ ] Simplified language for easy understanding. - * [ ] Updated image sizes for better visibility. + * Improved complete docs formatting. + * Simplified language for easier understanding. + * Fixed `mkdocstrings` showing root headings. + * Included all APIs methods to `mkdocstrings` docs. + * Removed unnecessary information from docs. + * Corrected Spelling and typos. + * Fixed context and grammar. + * Removed `motivation.md`. + * Renamed many terms. + * Fixed hyper-links. + * Reformatted missing or improper information. + * Fixed context and spellings in Docs files. + * Simplified language for easy understanding. + * Updated image sizes for better visibility. - [x] Bash Script: Updated to Latest OpenCV Binaries version and related changes - [x] Docs: Moved version-selector to header and changed default to alias. - [x] Docs: Updated `deploy_docs.yml` for releasing dev, stable, and release versions. @@ -425,8 +838,8 @@ limitations under the License. - [x] Docs: Version Selector UI reworked and other minor changes. ??? danger "Breaking Updates/Changes" - - [ ] :warning: `y_tube` parameter renamed as `stream_mode` in CamGear API! - - [ ] :warning: Removed Travis support and `travis.yml` deleted. + - [ ] :warning: `y_tube` parameter renamed as `stream_mode` in CamGear API! + - [ ] :warning: Removed Travis support and `travis.yml` deleted. ??? bug "Bug-fixes" - [x] Fixed StreamGear API Limited Segments Bug @@ -446,7 +859,7 @@ limitations under the License. - [x] CI: Codecov bugfixes. - [x] Azure-Pipelines Codecov BugFixes. - [x] Fixed `version.json` not detecting properly in `version-select.js`. - - [x] Fixed images not centered inside
tag. + - [x] Fixed images not centered inside `
` tag. - [x] Fixed Asset Colors. - [x] Fixed failing CI tests. - [x] Fixed Several logging bugs. @@ -469,23 +882,23 @@ limitations under the License. ??? tip "New Features" - [x] **StreamGear API:** - * [ ] New API that automates transcoding workflow for generating Ultra-Low Latency, High-Quality, Dynamic & Adaptive Streaming Formats. - * [ ] Implemented multi-platform , standalone, highly extensible and flexible wrapper around FFmpeg for generating chunked-encoded media segments of the media, and easily accessing almost all of its parameters. - * [ ] API automatically transcodes videos/audio files & real-time frames into a sequence of multiple smaller chunks/segments and also creates a Manifest file. - * [ ] Added initial support for [MPEG-DASH](https://www.encoding.com/mpeg-dash/) _(Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1)_. - * [ ] Constructed default behavior in StreamGear, for auto-creating a Primary Stream of same resolution and framerate as source. - * [ ] Added [TQDM](https://github.com/tqdm/tqdm) progress bar in non-debugged output for visual representation of internal processes. - * [ ] Implemented several internal methods for preprocessing FFmpeg and internal parameters for producing streams. - * [ ] Several standalone internal checks to ensure robust performance. - * [ ] New `terminate()` function to terminate StremGear Safely. - * [ ] New StreamGear Dual Modes of Operation: + * New API that automates transcoding workflow for generating Ultra-Low Latency, High-Quality, Dynamic & Adaptive Streaming Formats. + * Implemented multi-platform , standalone, highly extensible and flexible wrapper around FFmpeg for generating chunked-encoded media segments of the media, and easily accessing almost all of its parameters. + * API automatically transcodes videos/audio files & real-time frames into a sequence of multiple smaller chunks/segments and also creates a Manifest file. + * Added initial support for [MPEG-DASH](https://www.encoding.com/mpeg-dash/) _(Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1)_. + * Constructed default behavior in StreamGear, for auto-creating a Primary Stream of same resolution and framerate as source. + * Added [TQDM](https://github.com/tqdm/tqdm) progress bar in non-debugged output for visual representation of internal processes. + * Implemented several internal methods for preprocessing FFmpeg and internal parameters for producing streams. + * Several standalone internal checks to ensure robust performance. + * New `terminate()` function to terminate StremGear Safely. + * New StreamGear Dual Modes of Operation: + Implemented *Single-Source* and *Real-time Frames* like independent Transcoding Modes. + Linked `-video_source` attribute for activating these modes + **Single-Source Mode**, transcodes entire video/audio file _(as opposed to frames by frame)_ into a sequence of multiple smaller segments for streaming + **Real-time Frames Mode**, directly transcodes video-frames _(as opposed to a entire file)_, into a sequence of multiple smaller segments for streaming + Added separate functions, `stream()` for Real-time Frame Mode and `transcode_source()` for Single-Source Mode for easy transcoding. + Included auto-colorspace detection and RGB Mode like features _(extracted from WriteGear)_, into StreamGear. - * [ ] New StreamGear Parameters: + * New StreamGear Parameters: + Developed several new parameters such as: + `output`: handles assets directory + `formats`: handles adaptive HTTP streaming format. 
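For orientation, a hedged sketch of the Single-Source Mode described above (filenames are placeholders and FFmpeg is assumed to be installed); Real-time Frames Mode would instead push frames one-by-one through `stream()`.

```python
# Hedged sketch of StreamGear's Single-Source Mode (placeholder filenames; FFmpeg
# is assumed to be installed and discoverable).
from vidgear.gears import StreamGear

stream_params = {"-video_source": "foo.mp4"}  # assigning `-video_source` activates Single-Source Mode
streamer = StreamGear(output="dash_out.mpd", **stream_params)  # DASH output (the default format)

streamer.transcode_source()  # transcode the whole file into chunked segments + manifest
streamer.terminate()         # terminate safely
```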
@@ -501,7 +914,7 @@ limitations under the License. + `-gop` to manually specify GOP length. + `-ffmpeg_download_path` to handle custom FFmpeg download path on windows. + `-clear_prev_assets` to remove any previous copies of SteamGear Assets. - * [ ] New StreamGear docs, MPEG-DASH demo, and recommended DASH players list: + * New StreamGear docs, MPEG-DASH demo, and recommended DASH players list: + Added new StreamGear docs, usage examples, parameters, references, new FAQs. + Added Several StreamGear usage examples w.r.t Mode of Operation. + Implemented [**Clappr**](https://github.com/clappr/clappr) based on [**Shaka-Player**](https://github.com/google/shaka-player), as Demo Player. @@ -512,65 +925,65 @@ limitations under the License. + Recommended tested Online, Command-line and GUI Adaptive Stream players. + Implemented separate FFmpeg installation doc for StreamGear API. + Reduced `rebufferingGoal` for faster response. - * [ ] New StreamGear CI tests: + * New StreamGear CI tests: + Added IO and API initialization CI tests for its Modes. + Added various mode Streaming check CI tests. - [x] **NetGear_Async API:** - * [ ] Added new `send_terminate_signal` internal method. - * [ ] Added `WindowsSelectorEventLoopPolicy()` for windows 3.8+ envs. - * [ ] Moved Client auto-termination to separate method. - * [ ] Implemented graceful termination with `signal` API on UNIX machines. - * [ ] Added new `timeout` attribute for controlling Timeout in Connections. - * [ ] Added missing termination optimizer (`linger=0`) flag. - * [ ] Several ZMQ Optimizer Flags added to boost performance. + * Added new `send_terminate_signal` internal method. + * Added `WindowsSelectorEventLoopPolicy()` for windows 3.8+ envs. + * Moved Client auto-termination to separate method. + * Implemented graceful termination with `signal` API on UNIX machines. + * Added new `timeout` attribute for controlling Timeout in Connections. + * Added missing termination optimizer (`linger=0`) flag. + * Several ZMQ Optimizer Flags added to boost performance. - [x] **WriteGear API:** - * [ ] Added support for adding duplicate FFmpeg parameters to `output_params`: + * Added support for adding duplicate FFmpeg parameters to `output_params`: + Added new `-clones` attribute in `output_params` parameter for handing this behavior.. + Support to pass FFmpeg parameters as list, while maintaining the exact order it was specified. + Built support for `zmq.REQ/zmq.REP` and `zmq.PUB/zmq.SUB` patterns in this mode. + Added new CI tests debugging this behavior. + Updated docs accordingly. - * [ ] Added support for Networks URLs in Compression Mode: + * Added support for Networks URLs in Compression Mode: + `output_filename` parameter supports Networks URLs in compression modes only + Added automated handling of non path/file Networks URLs as input. + Implemented new `is_valid_url` helper method to easily validate assigned URLs value. + Validates whether the given URL value has scheme/protocol supported by assigned/installed ffmpeg or not. + WriteGear will throw `ValueError` if `-output_filename` is not supported. + Added related CI tests and docs. - * [ ] Added `disable_force_termination` attribute in WriteGear to disable force-termination. + * Added `disable_force_termination` attribute in WriteGear to disable force-termination. 
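A hedged sketch tying the WriteGear additions above together — the RTSP URL is a placeholder, an RTSP server is assumed to be listening there, and the installed FFmpeg is assumed to support that protocol.

```python
# Hedged sketch: `-clones` for ordered duplicate FFmpeg parameters, a network URL
# as `output_filename`, and the new `-disable_force_termination` attribute.
import cv2
from vidgear.gears import WriteGear

output_params = {
    "-vcodec": "libx264",
    "-f": "rtsp",
    "-rtsp_transport": "tcp",
    "-clones": ["-map", "0:v:0", "-map", "0:a?"],  # duplicate params kept in given order
    "-disable_force_termination": True,            # skip force-termination on close
}

# Compression Mode now accepts network URLs for `output_filename`
writer = WriteGear(output_filename="rtsp://localhost:8554/test", logging=True, **output_params)

stream = cv2.VideoCapture("foo.mp4")  # placeholder input file
while True:
    grabbed, frame = stream.read()
    if not grabbed:
        break
    writer.write(frame)

stream.release()
writer.close()
```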
- [x] **NetGear API:** - * [ ] Added option to completely disable Native Frame-Compression: + * Added option to completely disable Native Frame-Compression: + Checks if any Incorrect/Invalid value is assigned on `compression_format` attribute. + Completely disables Native Frame-Compression. + Updated docs accordingly. - [x] **CamGear API:** - * [ ] Added new and robust regex for identifying YouTube URLs. - * [ ] Moved `youtube_url_validator` to Helper. + * Added new and robust regex for identifying YouTube URLs. + * Moved `youtube_url_validator` to Helper. - [x] **New `helper.py` methods:** - * [ ] Added `validate_video` function to validate video_source. - * [ ] Added `extract_time` Extract time from give string value. - * [ ] Added `get_video_bitrate` to calculate video birate from resolution, framerate, bits-per-pixels values. - * [ ] Added `delete_safe` to safely delete files of given extension. - * [ ] Added `validate_audio` to validate audio source. - * [ ] Added new Helper CI tests. + * Added `validate_video` function to validate video_source. + * Added `extract_time` Extract time from give string value. + * Added `get_video_bitrate` to calculate video birate from resolution, framerate, bits-per-pixels values. + * Added `delete_safe` to safely delete files of given extension. + * Added `validate_audio` to validate audio source. + * Added new Helper CI tests. + Added new `check_valid_mpd` function to test MPD files validity. + Added `mpegdash` library to CI requirements. - [x] **Deployed New Docs Upgrades:** - * [ ] Added new assets like _images, gifs, custom scripts, javascripts fonts etc._ for achieving better visual graphics in docs. - * [ ] Added `clappr.min.js`, `dash-shaka-playback.js`, `clappr-level-selector.min.js` third-party javascripts locally. - * [ ] Extended Overview docs Hyperlinks to include all major sub-pages _(such as Usage Examples, Reference, FAQs etc.)_. - * [ ] Replaced GIF with interactive MPEG-DASH Video Example in Stabilizer Docs. - * [ ] Added new `pymdownx.keys` to replace `[Ctrl+C]/[⌘+C]` formats. - * [ ] Added new `custom.css` stylescripts variables for fluid animations in docs. - * [ ] Overridden announce bar and added donation button. - * [ ] Lossless WEBP compressed all PNG assets for faster loading. - * [ ] Enabled lazy-loading for GIFS and Images for performance. - * [ ] Reimplemented Admonitions contexts and added new ones. - * [ ] Added StreamGear and its different modes Docs Assets. - * [ ] Added patch for images & unicodes for PiP flavored markdown in `setup.py`. + * Added new assets like _images, gifs, custom scripts, javascripts fonts etc._ for achieving better visual graphics in docs. + * Added `clappr.min.js`, `dash-shaka-playback.js`, `clappr-level-selector.min.js` third-party javascripts locally. + * Extended Overview docs Hyperlinks to include all major sub-pages _(such as Usage Examples, Reference, FAQs etc.)_. + * Replaced GIF with interactive MPEG-DASH Video Example in Stabilizer Docs. + * Added new `pymdownx.keys` to replace `[Ctrl+C]/[⌘+C]` formats. + * Added new `custom.css` stylescripts variables for fluid animations in docs. + * Overridden announce bar and added donation button. + * Lossless WEBP compressed all PNG assets for faster loading. + * Enabled lazy-loading for GIFS and Images for performance. + * Reimplemented Admonitions contexts and added new ones. + * Added StreamGear and its different modes Docs Assets. + * Added patch for images & unicodes for PiP flavored markdown in `setup.py`. 
- [x] **Added `Request Info` and `Welcome` GitHub Apps to automate PR and issue workflow** - * [ ] Added new `config.yml` for customizations. - * [ ] Added various suitable configurations. + * Added new `config.yml` for customizations. + * Added various suitable configurations. - [x] Added new `-clones` attribute to handle FFmpeg parameter clones in StreamGear and WriteGear API. - [x] Added new Video-only and Audio-Only sources in bash script. - [x] Added new paths in bash script for storing StreamGear & WriteGear assets temporarily. @@ -684,47 +1097,48 @@ limitations under the License. ## v0.1.8 (2020-06-12) ??? tip "New Features" - - [x] **Multiple Clients support in NetGear API:** - * [ ] Implemented support for handling any number of Clients simultaneously with a single Server in this mode. - * [ ] Added new `multiclient_mode` attribute for enabling this mode easily. - * [ ] Built support for `zmq.REQ/zmq.REP` and `zmq.PUB/zmq.SUB` patterns in this mode. - * [ ] Implemented ability to receive data from all Client(s) along with frames with `zmq.REQ/zmq.REP` pattern only. - * [ ] Updated related CI tests - - [x] **Support for robust Lazy Pirate pattern(auto-reconnection) in NetGear API for both server and client ends:** - * [ ] Implemented a algorithm where NetGear rather than doing a blocking receive, will now: - + Poll the socket and receive from it only when it's sure a reply has arrived. - + Attempt to reconnect, if no reply has arrived within a timeout period. - + Abandon the connection if there is still no reply after several requests. - * [ ] Implemented its default support for `REQ/REP` and `PAIR` messaging patterns internally. - * [ ] Added new `max_retries` and `request_timeout`(in seconds) for handling polling. - * [ ] Added `DONTWAIT` flag for interruption-free data receiving. - * [ ] Both Server and Client can now reconnect even after a premature termination. - - [x] **Performance Updates for NetGear API:** - * [ ] Added default Frame Compression support for Bidirectional frame transmission in Bidirectional mode. - * [ ] Added support for `Reducer()` function in Helper.py to aid reducing frame-size on-the-go for more performance. - * [ ] Added small delay in `recv()` function at client's end to reduce system load. - * [ ] Reworked and Optimized NetGear termination, and also removed/changed redundant definitions and flags. - - [x] **Docs Migration to Mkdocs:** - * [ ] Implemented a beautiful, static documentation site based on [MkDocs](https://www.mkdocs.org/) which will then be hosted on GitHub Pages. - * [ ] Crafted base mkdocs with third-party elegant & simplistic [`mkdocs-material`](https://squidfunk.github.io/mkdocs-material/) theme. - * [ ] Implemented new `mkdocs.yml` for Mkdocs with relevant data. - * [ ] Added new `docs` folder to handle markdown pages and its assets. - * [ ] Added new Markdown pages(`.md`) to docs folder, which are carefully crafted documents - [x] based on previous Wiki's docs, and some completely new additions. - * [ ] Added navigation under tabs for easily accessing each document. - * [ ] **New Assets:** + - [x] **NetGear API:** + * Multiple Clients support: + + Implemented support for handling any number of Clients simultaneously with a single Server in this mode. + + Added new `multiclient_mode` attribute for enabling this mode easily. + + Built support for `zmq.REQ/zmq.REP` and `zmq.PUB/zmq.SUB` patterns in this mode. + + Implemented ability to receive data from all Client(s) along with frames with `zmq.REQ/zmq.REP` pattern only. 
+ + Updated related CI tests + * Support for robust Lazy Pirate pattern(auto-reconnection) in NetGear API for both server and client ends: + + Implemented a algorithm where NetGear rather than doing a blocking receive, will now: + + Poll the socket and receive from it only when it's sure a reply has arrived. + + Attempt to reconnect, if no reply has arrived within a timeout period. + + Abandon the connection if there is still no reply after several requests. + + Implemented its default support for `REQ/REP` and `PAIR` messaging patterns internally. + + Added new `max_retries` and `request_timeout`(in seconds) for handling polling. + + Added `DONTWAIT` flag for interruption-free data receiving. + + Both Server and Client can now reconnect even after a premature termination. + * Performance Updates: + + Added default Frame Compression support for Bidirectional frame transmission in Bidirectional mode. + + Added support for `Reducer()` function in Helper.py to aid reducing frame-size on-the-go for more performance. + + Added small delay in `recv()` function at client's end to reduce system load. + + Reworked and Optimized NetGear termination, and also removed/changed redundant definitions and flags. + - [x] **Docs: Migration to Mkdocs** + * Implemented a beautiful, static documentation site based on [MkDocs](https://www.mkdocs.org/) which will then be hosted on GitHub Pages. + * Crafted base mkdocs with third-party elegant & simplistic [`mkdocs-material`](https://squidfunk.github.io/mkdocs-material/) theme. + * Implemented new `mkdocs.yml` for Mkdocs with relevant data. + * Added new `docs` folder to handle markdown pages and its assets. + * Added new Markdown pages(`.md`) to docs folder, which are carefully crafted documents - [x] based on previous Wiki's docs, and some completely new additions. + * Added navigation under tabs for easily accessing each document. + * New Assets: + Added new assets like _gifs, images, custom scripts, favicons, site.webmanifest etc._ for bringing standard and quality to docs visual design. + Designed brand new logo and banner for VidGear Documents. + Deployed all assets under separate [*Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License*](https://creativecommons.org/licenses/by-nc-sa/4.0/). - * [ ] **Added Required Plugins and Extensions:** + * Added Required Plugins and Extensions: + Added support for all [pymarkdown-extensions](https://facelessuser.github.io/pymdown-extensions/). + Added support for some important `admonition`, `attr_list`, `codehilite`, `def_list`, `footnotes`, `meta`, and `toc` like Mkdocs extensions. + Enabled `search`, `minify` and `git-revision-date-localized` plugins support. + Added various VidGear's social links to yaml. + Added support for `en` _(English)_ language. - * [ ] **Auto-Build API Reference with `mkdocstrings:`** + * Auto-Build API Reference with `mkdocstrings:` + Added support for [`mkdocstrings`](https://github.com/pawamoy/mkdocstrings) plugin for auto-building each VidGear's API references. + Added python handler for parsing python source-code to `mkdocstrings`. - * [ ] **Auto-Deploy Docs with GitHub Actions:** + * Auto-Deploy Docs with GitHub Actions: + Implemented Automated Docs Deployment on gh-pages through GitHub Actions workflow. + Added new workflow yaml with minimal configuration for automated docs deployment. + Added all required python dependencies and environment for this workflow. @@ -793,38 +1207,38 @@ limitations under the License. ??? 
tip "New Features" - [x] **WebGear API:** - * [ ] Added a robust Live Video Server API that can transfer live video frames to any web browser on the network in real-time. - * [ ] Implemented a flexible asyncio wrapper around [`starlette`](https://www.starlette.io/) ASGI Application Server. - * [ ] Added seamless access to various starlette's Response classes, Routing tables, Static Files, Template engine(with Jinja2), etc. - * [ ] Added a special internal access to VideoGear API and all its parameters. - * [ ] Implemented a new Auto-Generation Work-flow to generate/download & thereby validate WebGear API data files from its GitHub server automatically. - * [ ] Added on-the-go dictionary parameter in WebGear to tweak performance, Route Tables and other internal properties easily. - * [ ] Added new simple & elegant default Bootstrap Cover Template for WebGear Server. - * [ ] Added `__main__.py` to directly run WebGear Server through the terminal. - * [ ] Added new gif and related docs for WebGear API. - * [ ] Added and Updated various CI tests for this API. + * Added a robust Live Video Server API that can transfer live video frames to any web browser on the network in real-time. + * Implemented a flexible asyncio wrapper around [`starlette`](https://www.starlette.io/) ASGI Application Server. + * Added seamless access to various starlette's Response classes, Routing tables, Static Files, Template engine(with Jinja2), etc. + * Added a special internal access to VideoGear API and all its parameters. + * Implemented a new Auto-Generation Work-flow to generate/download & thereby validate WebGear API data files from its GitHub server automatically. + * Added on-the-go dictionary parameter in WebGear to tweak performance, Route Tables and other internal properties easily. + * Added new simple & elegant default Bootstrap Cover Template for WebGear Server. + * Added `__main__.py` to directly run WebGear Server through the terminal. + * Added new gif and related docs for WebGear API. + * Added and Updated various CI tests for this API. - [x] **NetGear_Async API:** - * [ ] Designed NetGear_Async asynchronous network API built upon ZeroMQ's asyncio API. - * [ ] Implemented support for state-of-the-art asyncio event loop [`uvloop`](https://github.com/MagicStack/uvloop) at its backend. - * [ ] Achieved Unmatchable high-speed and lag-free video streaming over the network with minimal resource constraint. - * [ ] Added exclusive internal wrapper around VideoGear API for this API. - * [ ] Implemented complete server-client handling and options to use variable protocols/patterns for this API. - * [ ] Implemented support for all four ZeroMQ messaging patterns: i.e `zmq.PAIR`, `zmq.REQ/zmq.REP`, `zmq.PUB/zmq.SUB`, and `zmq.PUSH/zmq.PULL`. - * [ ] Implemented initial support for `tcp` and `ipc` protocols. - * [ ] Added new Coverage CI tests for NetGear_Async Network Gear. - * [ ] Added new Benchmark tests for benchmarking NetGear_Async against NetGear. + * Designed NetGear_Async asynchronous network API built upon ZeroMQ's asyncio API. + * Implemented support for state-of-the-art asyncio event loop [`uvloop`](https://github.com/MagicStack/uvloop) at its backend. + * Achieved Unmatchable high-speed and lag-free video streaming over the network with minimal resource constraint. + * Added exclusive internal wrapper around VideoGear API for this API. + * Implemented complete server-client handling and options to use variable protocols/patterns for this API. 
+ * Implemented support for all four ZeroMQ messaging patterns: i.e `zmq.PAIR`, `zmq.REQ/zmq.REP`, `zmq.PUB/zmq.SUB`, and `zmq.PUSH/zmq.PULL`. + * Implemented initial support for `tcp` and `ipc` protocols. + * Added new Coverage CI tests for NetGear_Async Network Gear. + * Added new Benchmark tests for benchmarking NetGear_Async against NetGear. - [x] **Asynchronous Enhancements:** - * [ ] Added `asyncio` package to for handling asynchronous APIs. - * [ ] Moved WebGear API(webgear.py) to `asyncio` and created separate asyncio `helper.py` for it. - * [ ] Various Performance tweaks for Asyncio APIs with concurrency within a single thread. - * [ ] Moved `__main__.py` to asyncio for easier access to WebGear API through the terminal. - * [ ] Updated `setup.py` with new dependencies and separated asyncio dependencies. + * Added `asyncio` package to for handling asynchronous APIs. + * Moved WebGear API(webgear.py) to `asyncio` and created separate asyncio `helper.py` for it. + * Various Performance tweaks for Asyncio APIs with concurrency within a single thread. + * Moved `__main__.py` to asyncio for easier access to WebGear API through the terminal. + * Updated `setup.py` with new dependencies and separated asyncio dependencies. - [x] **General Enhancements:** - * [ ] Added new highly-precise Threaded FPS class for accurate benchmarking with `time.perf_counter` python module. - * [ ] Added a new [Gitter](https://gitter.im/vidgear/community) community channel. - * [ ] Added a new *Reducer* function to reduce the frame size on-the-go. - * [ ] Add *Flake8* tests to Travis CI to find undefined names. (PR by @cclauss) - * [ ] Added a new unified `logging handler` helper function for vidgear. + * Added new highly-precise Threaded FPS class for accurate benchmarking with `time.perf_counter` python module. + * Added a new [Gitter](https://gitter.im/vidgear/community) community channel. + * Added a new *Reducer* function to reduce the frame size on-the-go. + * Add *Flake8* tests to Travis CI to find undefined names. (PR by @cclauss) + * Added a new unified `logging handler` helper function for vidgear. ??? success "Updates/Improvements" - [x] Re-implemented and simplified logic for NetGear Async server-end. @@ -898,49 +1312,49 @@ limitations under the License. ??? tip "New Features" - [x] **NetGear API:** - * [ ] **Added powerful ZMQ Authentication & Data Encryption features for NetGear API:** + * Added powerful ZMQ Authentication & Data Encryption features for NetGear API: + Added exclusive `secure_mode` param for enabling it. + Added support for two most powerful `Stonehouse` & `Ironhouse` ZMQ security mechanisms. + Added smart auth-certificates/key generation and validation features. - * [ ] **Implemented Robust Multi-Servers support for NetGear API:** + * Implemented Robust Multi-Servers support for NetGear API: + Enables Multiple Servers messaging support with a single client. + Added exclusive `multiserver_mode` param for enabling it. + Added support for `REQ/REP` & `PUB/SUB` patterns for this mode. + Added ability to send additional data of any datatype along with the frame in realtime in this mode. - * [ ] **Introducing exclusive Bidirectional Mode for bidirectional data transmission:** + * Introducing exclusive Bidirectional Mode for bidirectional data transmission: + Added new `return_data` parameter to `recv()` function. + Added new `bidirectional_mode` attribute for enabling this mode. 
+ Added support for `PAIR` & `REQ/REP` patterns for this mode + Added support for sending data of any python datatype. + Added support for `message` parameter for non-exclusive primary modes for this mode. - * [ ] **Implemented compression support with on-the-fly flexible frame encoding for the Server-end:** + * Implemented compression support with on-the-fly flexible frame encoding for the Server-end: + Added initial support for `JPEG`, `PNG` & `BMP` encoding formats . + Added exclusive options attribute `compression_format` & `compression_param` to tweak this feature. + Client-end will now decode frame automatically based on the encoding as well as support decoding flags. - * [ ] Added `force_terminate` attribute flag for handling force socket termination at the Server-end if there's latency in the network. - * [ ] Implemented new *Publish/Subscribe(`zmq.PUB/zmq.SUB`)* pattern for seamless Live Streaming in NetGear API. + * Added `force_terminate` attribute flag for handling force socket termination at the Server-end if there's latency in the network. + * Implemented new *Publish/Subscribe(`zmq.PUB/zmq.SUB`)* pattern for seamless Live Streaming in NetGear API. - [x] **PiGear API:** - * [ ] Added new threaded internal timing function for PiGear to handle any hardware failures/frozen threads. - * [ ] PiGear will not exit safely with `SystemError` if Picamera ribbon cable is pulled out to save resources. - * [ ] Added support for new user-defined `HWFAILURE_TIMEOUT` options attribute to alter timeout. + * Added new threaded internal timing function for PiGear to handle any hardware failures/frozen threads. + * PiGear will not exit safely with `SystemError` if Picamera ribbon cable is pulled out to save resources. + * Added support for new user-defined `HWFAILURE_TIMEOUT` options attribute to alter timeout. - [x] **VideoGear API:** - * [ ] Added `framerate` global variable and removed redundant function. - * [ ] Added `CROP_N_ZOOM` attribute in Videogear API for supporting Crop and Zoom stabilizer feature. + * Added `framerate` global variable and removed redundant function. + * Added `CROP_N_ZOOM` attribute in Videogear API for supporting Crop and Zoom stabilizer feature. - [x] **WriteGear API:** - * [ ] Added new `execute_ffmpeg_cmd` function to pass a custom command to its FFmpeg pipeline. + * Added new `execute_ffmpeg_cmd` function to pass a custom command to its FFmpeg pipeline. - [x] **Stabilizer class:** - * [ ] Added new Crop and Zoom feature. + * Added new Crop and Zoom feature. + Added `crop_n_zoom` param for enabling this feature. - * [ ] Updated docs. + * Updated docs. - [x] **CI & Tests updates:** - * [ ] Replaced python 3.5 matrices with latest python 3.8 matrices in Linux environment. - * [ ] Added full support for **Codecov** in all CI environments. - * [ ] Updated OpenCV to v4.2.0-pre(master branch). - * [ ] Added various Netgear API tests. - * [ ] Added initial Screengear API test. - * [ ] More test RTSP feeds added with better error handling in CamGear network test. - * [ ] Added tests for ZMQ authentication certificate generation. - * [ ] Added badge and Minor doc updates. + * Replaced python 3.5 matrices with latest python 3.8 matrices in Linux environment. + * Added full support for **Codecov** in all CI environments. + * Updated OpenCV to v4.2.0-pre(master branch). + * Added various Netgear API tests. + * Added initial Screengear API test. + * More test RTSP feeds added with better error handling in CamGear network test. 
+ * Added tests for ZMQ authentication certificate generation. + * Added badge and Minor doc updates. - [x] Added VidGear's official native support for MacOS environments. @@ -1138,7 +1552,7 @@ limitations under the License. ??? bug "Bug-fixes" - [x] Patched Major PiGear Bug: Incorrect import of PiRGBArray function in PiGear Class - - [x] Several Fixes** for backend `picamera` API handling during frame capture(PiGear) + - [x] Several Fixes for backend `picamera` API handling during frame capture(PiGear) - [x] Fixed missing frame variable initialization. - [x] Fixed minor typos diff --git a/docs/contribution.md b/docs/contribution.md index 84d7c0612..19b50b66b 100644 --- a/docs/contribution.md +++ b/docs/contribution.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -54,7 +54,7 @@ There's no need to contribute for some typos. Just reach us on [Gitter ➶](http ### Found a bug? -If you encountered a bug, you can help us by submitting an issue in our GitHub repository. Even better, you can submit a Pull Request(PR) with a fix, but make sure to read the [guidelines ➶](#submission-guidelines). +If you encountered a bug, you can help us by [submitting an issue](../contribution/issue/) in our GitHub repository. Even better, you can submit a Pull Request(PR) with a fix, but make sure to read the [guidelines ➶](#submission-guidelines). ### Request for a feature/improvement? diff --git a/docs/contribution/PR.md b/docs/contribution/PR.md index cf1d73138..49f2dbcbb 100644 --- a/docs/contribution/PR.md +++ b/docs/contribution/PR.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -106,7 +106,7 @@ You can clone your [**Forked**](https://docs.github.com/en/free-pro-team@latest/ Typically any feature/improvement/bug-fix code flows as follows:
```sh @@ -184,22 +184,22 @@ All Pull Request(s) must be tested, formatted & linted against our library stand Testing VidGear requires additional test dependencies and dataset, which can be handled manually as follows: -* **Install additional python libraries:** +- [x] **Install additional python libraries:** You can easily install these dependencies via pip: - ??? warning "Note for Windows" - The [`mpegdash`](https://github.com/sangwonl/python-mpegdash) library has not yet been updated and bugs on windows machines. Kindly instead try the forked [DEV-version of `mpegdash`](https://github.com/abhiTronix/python-mpegdash) as follows: + ??? info "MPEGDASH for Windows" + The [`mpegdash`](https://github.com/sangwonl/python-mpegdash) library has not yet been updated and bugs on windows machines. Therefore install the forked [DEV-version of `mpegdash`](https://github.com/abhiTronix/python-mpegdash) as follows: ```sh - python -m pip install https://github.com/abhiTronix/python-mpegdash/releases/download/0.3.0-dev/mpegdash-0.3.0.dev0-py3-none-any.whl + python -m pip install https://github.com/abhiTronix/python-mpegdash/releases/download/0.3.0-dev2/mpegdash-0.3.0.dev2-py3-none-any.whl ``` ```sh - pip install --upgrade six, flake8, black, pytest, pytest-asyncio, mpegdash + pip install --upgrade six flake8 black pytest pytest-asyncio mpegdash paramiko m3u8 async-asgi-testclient ``` -* **Download Tests Dataset:** +- [x] **Download Tests Dataset:** To perform tests, you also need to download additional dataset *(to your temp dir)* by running [`prepare_dataset.sh`](https://github.com/abhiTronix/vidgear/blob/master/scripts/bash/prepare_dataset.sh) bash script as follows: @@ -223,13 +223,13 @@ All tests can be run with [`pytest`](https://docs.pytest.org/en/stable/)(*in Vid For formatting and linting, following libraries are used: -* **Flake8:** You must run [`flake8`](https://flake8.pycqa.org/en/latest/manpage.html) linting for checking the code base against the coding style (PEP8), programming errors and other cyclomatic complexity: +- [x] **Flake8:** You must run [`flake8`](https://flake8.pycqa.org/en/latest/manpage.html) linting for checking the code base against the coding style (PEP8), programming errors and other cyclomatic complexity: ```sh - flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics + flake8 {source_file_or_directory} --count --select=E9,F63,F7,F82 --show-source --statistics ``` -* **Black:** Vidgear follows [`black`](https://github.com/psf/black) formatting to make code review faster by producing the smallest diffs possible. You must run it with sensible defaults as follows: +- [x] **Black:** Vidgear follows [`black`](https://github.com/psf/black) formatting to make code review faster by producing the smallest diffs possible. You must run it with sensible defaults as follows: ```sh black {source_file_or_directory} diff --git a/docs/contribution/issue.md b/docs/contribution/issue.md index c111c481c..18571fc53 100644 --- a/docs/contribution/issue.md +++ b/docs/contribution/issue.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
@@ -39,11 +39,11 @@ If you've found a new bug or you've come up with some new feature which can impr * All VidGear APIs provides a `logging` boolean flag in parameters, to log debugged output to terminal. Kindly turn this parameter `True` in the respective API for getting debug output, and paste it with your Issue. * In order to reproduce bugs we will systematically ask you to provide a minimal reproduction code for your report. -* Check and paste, exact VidGear version by running command `python -c "import vidgear; print(vidgear.__version__)"`. +* Check and paste, exact VidGear version by running command `#!python python -c "import vidgear; print(vidgear.__version__)"`. ### Follow the Issue Template -* Please stick to the issue template. +* Please format your issue by choosing the appropriate template. * Any improper/insufficient reports will be marked with **MISSING : INFORMATION :mag:** and **MISSING : TEMPLATE :grey_question:** like labels, and if we don't hear back from you we may close the issue. ### Raise the Issue diff --git a/docs/gears.md b/docs/gears.md index 5437fe71d..c9a934137 100644 --- a/docs/gears.md +++ b/docs/gears.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -22,14 +22,14 @@ limitations under the License.
[figure: "@Vidgear Functional Block Diagram"]
-[figure caption: "Gears: generalized workflow diagram"]
+[figure caption: "Gears: generalized workflow"]
## Gears, What are these? VidGear is built on Standalone APIs - also known as **Gears**, each with some unique functionality. Each Gears is designed exclusively to handle/control/process different data-specific & device-specific video streams, network streams, and media encoders/decoders. -These Gears provides the user an easy-to-use, dynamic, extensible, and exposed Multi-Threaded + Asyncio optimized internal layer above state-of-the-art libraries to work with, while silently delivering robust error-handling. +Gears allows users to work with an inherently optimized, easy-to-use, extensible, and exposed API Framework on top of many state-of-the-art libraries, while silently delivering robust error handling and unmatched real-time performance. ## Gears Classification @@ -54,7 +54,9 @@ These Gears can be classified as follows: > **Basic Function:** Transcodes/Broadcasts files & [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) frames for streaming. -* [StreamGear](streamgear/overview/): Handles Transcoding of High-Quality, Dynamic & Adaptive Streaming Formats. +!!! tip "You can also use [WriteGear](writegear/introduction/) for streaming with traditional protocols such as RTMP, RTSP/RTP." + +* [StreamGear](streamgear/introduction/): Handles Transcoding of High-Quality, Dynamic & Adaptive Streaming Formats. * **Asynchronous I/O Streaming Gear:** @@ -64,7 +66,7 @@ These Gears can be classified as follows: ### D. Network Gears -> **Basic Function:** Sends/Receives [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) frames over connected network. +> **Basic Function:** Sends/Receives [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) frames over connected networks. * [NetGear](netgear/overview/): Handles High-Performance Video-Frames & Data Transfer between interconnecting systems over the network. diff --git a/docs/gears/camgear/advanced/source_params.md b/docs/gears/camgear/advanced/source_params.md index 380a4780d..0c91b9cdb 100644 --- a/docs/gears/camgear/advanced/source_params.md +++ b/docs/gears/camgear/advanced/source_params.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -20,17 +20,22 @@ limitations under the License. # Source Tweak Parameters for CamGear API -  +
+[figure: "Source Tweak Parameters"]
## Overview -The [`options`](../../params/#options) dictionary parameter in CamGear, gives user the ability to alter various **Source Tweak Parameters** available within [OpenCV's VideoCapture Class](https://docs.opencv.org/master/d8/dfe/classcv_1_1VideoCapture.html#a57c0e81e83e60f36c83027dc2a188e80). These tweak parameters can be used to manipulate input source Camera-Device properties _(such as its brightness, saturation, size, iso, gain etc.)_ seamlessly. Thereby, All Source Tweak Parameters supported by CamGear API are disscussed in this document. +The [`options`](../../params/#options) dictionary parameter in CamGear gives user the ability to alter various parameters available within [OpenCV's VideoCapture Class](https://docs.opencv.org/master/d8/dfe/classcv_1_1VideoCapture.html#a57c0e81e83e60f36c83027dc2a188e80). + +These tweak parameters can be used to transform input Camera-Source properties _(such as its brightness, saturation, size, iso, gain etc.)_ seamlessly. All parameters supported by CamGear API are disscussed in this document.   -!!! quote "" - ### Exclusive CamGear Parameters +### Exclusive CamGear Parameters + +!!! quote "" In addition to Source Tweak Parameters, CamGear also provides some exclusive attributes for its [`options`](../../params/#options) dictionary parameters. These attributes are as follows: diff --git a/docs/gears/camgear/overview.md b/docs/gears/camgear/overview.md index 48d08e41b..1d01e6238 100644 --- a/docs/gears/camgear/overview.md +++ b/docs/gears/camgear/overview.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/gears/camgear/params.md b/docs/gears/camgear/params.md index ac6f2d0ae..54fe891ac 100644 --- a/docs/gears/camgear/params.md +++ b/docs/gears/camgear/params.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -115,9 +115,9 @@ Its valid input can be one of the following: This parameter controls the Stream Mode, .i.e if enabled(`stream_mode=True`), the CamGear API will interpret the given `source` input as YouTube URL address. -!!! bug "Due to a [**FFmpeg bug**](https://github.com/abhiTronix/vidgear/issues/133#issuecomment-638263225) that causes video to freeze frequently in OpenCV, It is advised to always use [GStreamer backend _(`backend=cv2.CAP_GSTREAMER`)_](#backend) for any livestreams _(such as Twitch)_." +!!! bug "Due to a [**FFmpeg bug**](https://github.com/abhiTronix/vidgear/issues/133#issuecomment-638263225) that causes video to freeze frequently in OpenCV, It is advised to always use [GStreamer backend](#backend) for any livestreams _(such as Twitch)_." -!!! warning "CamGear automatically enforce GStreamer backend _(backend=`cv2.CAP_GSTREAMER`)_ for YouTube-livestreams!" +!!! warning "CamGear automatically enforce [GStreamer backend](#backend) for YouTube-livestreams!" !!! 
error "CamGear will exit with `RuntimeError` for YouTube livestreams, if OpenCV is not compiled with GStreamer(`>=v1.0.0`) support. Checkout [this FAQ](../../../help/camgear_faqs/#how-to-compile-opencv-with-gstreamer-support) for compiling OpenCV with GStreamer support." @@ -160,7 +160,7 @@ CamGear(source=0, colorspace="COLOR_BGR2HSV") This parameter manually selects the backend for OpenCV's VideoCapture class _(only if specified)_. -!!! warning "To workaround a [**FFmpeg bug**](https://github.com/abhiTronix/vidgear/issues/133#issuecomment-638263225), CamGear automatically enforce GStreamer backend(`backend=cv2.CAP_GSTREAMER`) for YouTube-livestreams in [Stream Mode](#stream_mode). This behavior discards any `backend` parameter value for those streams." +!!! warning "To workaround a [**FFmpeg bug**](https://github.com/abhiTronix/vidgear/issues/133#issuecomment-638263225), CamGear automatically enforce GStreamer backend for YouTube-livestreams in [Stream Mode](#stream_mode). This behavior discards any `backend` parameter value for those streams." **Data-Type:** Integer diff --git a/docs/gears/camgear/usage.md b/docs/gears/camgear/usage.md index 9f6ff90c3..67f8e9b05 100644 --- a/docs/gears/camgear/usage.md +++ b/docs/gears/camgear/usage.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -66,9 +66,12 @@ stream.stop() ## Using Camgear with Streaming Websites -CamGear API provides direct support for piping video streams from various popular streaming services like [Twitch](https://www.twitch.tv/), [Livestream](https://livestream.com/), [Dailymotion](https://www.dailymotion.com/live), and [many more ➶](https://streamlink.github.io/plugin_matrix.html#plugins). All you have to do is to provide the desired Video's URL to its `source` parameter, and enable the [`stream_mode`](../params/#stream_mode) parameter. The complete usage example is as follows: +CamGear API provides direct support for piping video streams from various popular streaming services like [Twitch](https://www.twitch.tv/), [Vimeo](https://vimeo.com/), [Dailymotion](https://www.dailymotion.com), and [many more ➶](https://streamlink.github.io/plugin_matrix.html#plugins). All you have to do is to provide the desired Video's URL to its `source` parameter, and enable the [`stream_mode`](../params/#stream_mode) parameter. The complete usage example is as follows: -!!! bug "To workaround a [**FFmpeg bug**](https://github.com/abhiTronix/vidgear/issues/133#issuecomment-638263225) that causes video to freeze frequently, You must always use [GStreamer backend _(`backend=cv2.CAP_GSTREAMER`)_](../params/#backend) for Livestreams _(such as Twitch URLs)_. Checkout [this FAQ ➶](../../../help/camgear_faqs/#how-to-compile-opencv-with-gstreamer-support) for compiling OpenCV with GStreamer support." +!!! bug "Bug in OpenCV's FFmpeg" + To workaround a [**FFmpeg bug**](https://github.com/abhiTronix/vidgear/issues/133#issuecomment-638263225) that causes video to freeze frequently, You must always use [GStreamer backend](../params/#backend) for Livestreams _(such as Twitch URLs)_. 
+ + **Checkout [this FAQ ➶](../../../help/camgear_faqs/#how-to-compile-opencv-with-gstreamer-support) for compiling OpenCV with GStreamer support.** ???+ info "Exclusive CamGear Attributes" CamGear also provides exclusive attributes: @@ -87,10 +90,10 @@ import cv2 options = {"STREAM_RESOLUTION": "720p"} # Add any desire Video URL as input source -# for e.g https://www.dailymotion.com/video/x7xsoud +# for e.g https://vimeo.com/151666798 # and enable Stream Mode (`stream_mode = True`) stream = CamGear( - source="https://www.dailymotion.com/video/x7xsoud", + source="https://vimeo.com/151666798", stream_mode=True, logging=True, **options @@ -183,7 +186,7 @@ stream.stop() ## Using CamGear with Variable Camera Properties -CamGear API also flexibly support various **Source Tweak Parameters** available within [OpenCV's VideoCapture API](https://docs.opencv.org/master/d4/d15/group__videoio__flags__base.html#gaeb8dd9c89c10a5c63c139bf7c4f5704d). These tweak parameters can be used to manipulate input source Camera-Device properties _(such as its brightness, saturation, size, iso, gain etc.)_ seamlessly, and can be easily applied in CamGear API through its `options` dictionary parameter by formatting them as its attributes. The complete usage example is as follows: +CamGear API also flexibly support various **Source Tweak Parameters** available within [OpenCV's VideoCapture API](https://docs.opencv.org/master/d4/d15/group__videoio__flags__base.html#gaeb8dd9c89c10a5c63c139bf7c4f5704d). These tweak parameters can be used to transform input source Camera-Device properties _(such as its brightness, saturation, size, iso, gain etc.)_ seamlessly, and can be easily applied in CamGear API through its `options` dictionary parameter by formatting them as its attributes. The complete usage example is as follows: !!! tip "All the supported Source Tweak Parameters can be found [here ➶](../advanced/source_params/#source-tweak-parameters-for-camgear-api)" @@ -298,4 +301,10 @@ cv2.destroyAllWindows() stream.stop() ``` +  + +## Bonus Examples + +!!! example "Checkout more advanced CamGear examples with unusual configuration [here ➶](../../../help/camgear_ex/)" +   \ No newline at end of file diff --git a/docs/gears/netgear/advanced/bidirectional_mode.md b/docs/gears/netgear/advanced/bidirectional_mode.md index 1bd28c2ec..cfea17b39 100644 --- a/docs/gears/netgear/advanced/bidirectional_mode.md +++ b/docs/gears/netgear/advanced/bidirectional_mode.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -36,7 +36,7 @@ This mode can be easily activated in NetGear through `bidirectional_mode` attrib   -!!! danger "Important Information" +!!! danger "Important Information regarding Bidirectional Mode" * In Bidirectional Mode, `zmq.PAIR`(ZMQ Pair) & `zmq.REQ/zmq.REP`(ZMQ Request/Reply) are **ONLY** Supported messaging patterns. Accessing this mode with any other messaging pattern, will result in `ValueError`. 
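A bare-minimum, hedged sketch of activating this mode with one of the two supported patterns (`pattern=1`, i.e. `zmq.REQ/zmq.REP`); both ends are shown together only for brevity and would normally run in separate terminals or machines.

```python
# Hedged sketch: Bidirectional Mode works only with PAIR (pattern=0) or
# REQ/REP (pattern=1); any other pattern raises ValueError, as noted above.
from vidgear.gears import NetGear

options = {"bidirectional_mode": True}

# Server end
server = NetGear(pattern=1, logging=True, **options)
# recv_data = server.send(frame, message="any python data")   # reply from Client

# Client end (normally on another terminal/machine)
client = NetGear(receive_mode=True, pattern=1, logging=True, **options)
# server_data, frame = client.recv(return_data="any python data")

server.close()
client.close()
```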
@@ -69,7 +69,7 @@ This mode can be easily activated in NetGear through `bidirectional_mode` attrib   -## Method Parameters +## Exclusive Parameters To send data bidirectionally, NetGear API provides two exclusive parameters for its methods: @@ -291,6 +291,7 @@ Now, Open the terminal on another Server System _(a Raspberry Pi with Camera Mod ```python # import required libraries from vidgear.gears import VideoGear +from vidgear.gears import NetGear from vidgear.gears import PiGear # add various Picamera tweak parameters to dictionary @@ -363,9 +364,9 @@ server.close() In this example we are going to implement a bare-minimum example, where we will be sending video-frames _(3-Dimensional numpy arrays)_ of the same Video bidirectionally at the same time, for testing the real-time performance and synchronization between the Server and the Client using this(Bidirectional) Mode. -!!! tip "This feature is great for building applications like Real-Time Video Chat." +!!! tip "This example is useful for building applications like Real-Time Video Chat." -!!! info "We're also using [`reducer()`](../../../../../bonus/reference/helper/#reducer) method for reducing frame-size on-the-go for additional performance." +!!! info "We're also using [`reducer()`](../../../../../bonus/reference/helper/#vidgear.gears.helper.reducer--reducer) method for reducing frame-size on-the-go for additional performance." !!! warning "Remember, Sending large HQ video-frames may required more network bandwidth and packet size which may lead to video latency!" @@ -377,14 +378,13 @@ Open your favorite terminal and execute the following python code: ```python # import required libraries -from vidgear.gears import VideoGear from vidgear.gears import NetGear from vidgear.gears.helper import reducer import numpy as np import cv2 # open any valid video stream(for e.g `test.mp4` file) -stream = VideoGear(source="test.mp4").start() +stream = cv2.VideoCapture("test.mp4") # activate Bidirectional mode options = {"bidirectional_mode": True} @@ -397,10 +397,10 @@ while True: try: # read frames from stream - frame = stream.read() + (grabbed, frame) = stream.read() - # check for frame if Nonetype - if frame is None: + # check for frame if not grabbed + if not grabbed: break # reducer frames size if you want more performance, otherwise comment this line @@ -427,7 +427,7 @@ while True: break # safely close video stream -stream.stop() +stream.release() # safely close server server.close() @@ -444,7 +444,6 @@ Then open another terminal on the same system and execute the following python c ```python # import required libraries from vidgear.gears import NetGear -from vidgear.gears import VideoGear from vidgear.gears.helper import reducer import cv2 @@ -452,7 +451,7 @@ import cv2 options = {"bidirectional_mode": True} # again open the same video stream -stream = VideoGear(source="test.mp4").start() +stream = cv2.VideoCapture("test.mp4") # define NetGear Client with `receive_mode = True` and defined parameter client = NetGear(receive_mode=True, pattern=1, logging=True, **options) @@ -461,10 +460,10 @@ client = NetGear(receive_mode=True, pattern=1, logging=True, **options) while True: # read frames from stream - frame = stream.read() + (grabbed, frame) = stream.read() - # check for frame if Nonetype - if frame is None: + # check for frame if not grabbed + if not grabbed: break # reducer frames size if you want more performance, otherwise comment this line @@ -502,7 +501,7 @@ while True: cv2.destroyAllWindows() # safely close video stream -stream.stop() 
+stream.release() # safely close client client.close() @@ -513,9 +512,7 @@ client.close() ## Using Bidirectional Mode for Video-Frames Transfer with Frame Compression :fire: - -See complete usage example [here ➶](../../advanced/compression/#using-bidirectional-mode-for-video-frames-transfer-with-frame-compression) - +!!! example "This usage examples can be found [here ➶](../../advanced/compression/#using-bidirectional-mode-for-video-frames-transfer-with-frame-compression)"   diff --git a/docs/gears/netgear/advanced/compression.md b/docs/gears/netgear/advanced/compression.md index d2187598f..6187ddef9 100644 --- a/docs/gears/netgear/advanced/compression.md +++ b/docs/gears/netgear/advanced/compression.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -49,32 +49,49 @@ Frame Compression is enabled by default in NetGear, and can be easily controlled   -## Supported Attributes +## Exclusive Attributes -For implementing Frame Compression, NetGear API currently provide following attribute for its [`options`](../../params/#options) dictionary parameter to leverage performance with Frame Compression: +For implementing Frame Compression, NetGear API currently provide following exclusive attribute for its [`options`](../../params/#options) dictionary parameter to leverage performance with Frame Compression: -* `jpeg_compression` _(bool)_: This attribute can be used to activate(if True)/deactivate(if False) Frame Compression. Its default value is also `True`, and its usage is as follows: +* `jpeg_compression`: _(bool/str)_ This internal attribute is used to activate/deactivate JPEG Frame Compression as well as to specify incoming frames colorspace with compression. Its usage is as follows: + + - [x] **For activating JPEG Frame Compression _(Boolean)_:** - ```python - # disable jpeg encoding - options = {"jpeg_compression": False} - ``` + !!! alert "In this case, colorspace will default to `BGR`." + + !!! note "You can set `jpeg_compression` value to `False` at Server end to completely disable Frame Compression." + + ```python + # enable jpeg encoding + options = {"jpeg_compression": True} + ``` + + - [x] **For specifying Input frames colorspace _(String)_:** + + !!! alert "In this case, JPEG Frame Compression is activated automatically." + + !!! info "Supported colorspace values are `RGB`, `BGR`, `RGBX`, `BGRX`, `XBGR`, `XRGB`, `GRAY`, `RGBA`, `BGRA`, `ABGR`, `ARGB`, `CMYK`. More information can be found [here ➶](https://gitlab.com/jfolz/simplejpeg)" + + ```python + # Specify incoming frames are `grayscale` + options = {"jpeg_compression": "GRAY"} + ``` * ### Performance Attributes :zap: - * `jpeg_compression_quality`: _(int/float)_ It controls the JPEG quantization factor. Its value varies from `10` to `100` (the higher is the better quality but performance will be lower). Its default value is `90`. Its usage is as follows: + * `jpeg_compression_quality`: _(int/float)_ This attribute controls the JPEG quantization factor. Its value varies from `10` to `100` (the higher is the better quality but performance will be lower). Its default value is `90`. 
Its usage is as follows: ```python # activate jpeg encoding and set quality 95% options = {"jpeg_compression": True, "jpeg_compression_quality": 95} ``` - * `jpeg_compression_fastdct` _(bool)_: This attribute if True, NetGear API uses fastest DCT method that speeds up decoding by 4-5% for a minor loss in quality. Its default value is also `True`, and its usage is as follows: + * `jpeg_compression_fastdct`: _(bool)_ This attribute if True, NetGear API uses fastest DCT method that speeds up decoding by 4-5% for a minor loss in quality. Its default value is also `True`, and its usage is as follows: ```python # activate jpeg encoding and enable fast dct options = {"jpeg_compression": True, "jpeg_compression_fastdct": True} ``` - * `jpeg_compression_fastupsample` _(bool)_: This attribute if True, NetGear API use fastest color upsampling method. Its default value is `False`, and its usage is as follows: + * `jpeg_compression_fastupsample`: _(bool)_ This attribute if True, NetGear API use fastest color upsampling method. Its default value is `False`, and its usage is as follows: ```python # activate jpeg encoding and enable fast upsampling @@ -160,7 +177,7 @@ from vidgear.gears import NetGear import cv2 # define NetGear Client with `receive_mode = True` and defined parameter -client = NetGear(receive_mode=True, pattern=1, logging=True, **options) +client = NetGear(receive_mode=True, pattern=1, logging=True) # loop over while True: @@ -194,6 +211,120 @@ client.close()   +### Bare-Minimum Usage with Variable Colorspace + +Frame Compression also supports specify incoming frames colorspace with compression. In following bare-minimum code, we will be sending [**GRAY**](https://en.wikipedia.org/wiki/Grayscale) frames from Server to Client: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. + +!!! example "This example works in conjunction with [Source ColorSpace manipulation for VideoCapture Gears ➶](../../../../../bonus/colorspace_manipulation/#source-colorspace-manipulation)" + +!!! info "Supported colorspace values are `RGB`, `BGR`, `RGBX`, `BGRX`, `XBGR`, `XRGB`, `GRAY`, `RGBA`, `BGRA`, `ABGR`, `ARGB`, `CMYK`. More information can be found [here ➶](https://gitlab.com/jfolz/simplejpeg)" + +#### Server End + +Open your favorite terminal and execute the following python code: + +!!! tip "You can terminate both sides anytime by pressing ++ctrl+"C"++ on your keyboard!" 
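If your source does not already produce single-channel frames, they can be converted on the fly before sending. Below is a minimal OpenCV sketch (the stand-in frame is illustrative); the example that follows instead sets VideoGear's `colorspace` parameter to achieve the same conversion at the source:

```python
import cv2
import numpy as np

# stand-in BGR frame (in practice this comes from your video source)
frame = np.zeros((480, 640, 3), dtype=np.uint8)

# convert to the 2D single-channel grayscale frame expected
# when `jpeg_compression` is set to "GRAY"
gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
```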
+ +```python +# import required libraries +from vidgear.gears import VideoGear +from vidgear.gears import NetGear +import cv2 + +# open any valid video stream(for e.g `test.mp4` file) and change its colorspace to grayscale +stream = VideoGear(source="test.mp4", colorspace="COLOR_BGR2GRAY").start() + +# activate jpeg encoding and specify other related parameters +options = { + "jpeg_compression": "GRAY", # set grayscale + "jpeg_compression_quality": 90, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": True, +} + +# Define NetGear Server with defined parameters +server = NetGear(pattern=1, logging=True, **options) + +# loop over until KeyBoard Interrupted +while True: + + try: + # read grayscale frames from stream + frame = stream.read() + + # check for frame if None-type + if frame is None: + break + + # {do something with the frame here} + + # send grayscale frame to server + server.send(frame) + + except KeyboardInterrupt: + break + +# safely close video stream +stream.stop() + +# safely close server +server.close() +``` + +  + +#### Client End + +Then open another terminal on the same system and execute the following python code and see the output: + +!!! tip "You can terminate client anytime by pressing ++ctrl+"C"++ on your keyboard!" + +!!! note "If compression is enabled at Server, then Client will automatically enforce Frame Compression with its performance attributes." + +!!! info "Client's end also automatically enforces Server's colorspace, there's no need to define it again." + +```python +# import required libraries +from vidgear.gears import NetGear +import cv2 + +# define NetGear Client with `receive_mode = True` and defined parameter +client = NetGear(receive_mode=True, pattern=1, logging=True) + +# loop over +while True: + + # receive grayscale frames from network + frame = client.recv() + + # check for received frame if Nonetype + if frame is None: + break + + # {do something with the grayscale frame here} + + # Show output window + cv2.imshow("Output Grayscale Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + +# close output window +cv2.destroyAllWindows() + +# safely close client +client.close() +``` + +  + +  + ### Using Frame Compression with Variable Parameters @@ -331,7 +462,7 @@ In this example we are going to implement a bare-minimum example, where we will !!! note "This Dual Frame Compression feature also available for [Multi-Clients](../../advanced/multi_client/) Mode." -!!! info "We're also using [`reducer()`](../../../../../bonus/reference/helper/#reducer) Helper method for reducing frame-size on-the-go for additional performance." +!!! info "We're also using [`reducer()`](../../../../../bonus/reference/helper/#vidgear.gears.helper.reducer--reducer) Helper method for reducing frame-size on-the-go for additional performance." !!! success "Remember to define Frame Compression's [performance attributes](#performance-attributes) both on Server and Client ends in Dual Frame Compression to boost performance bidirectionally!" 
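For Dual Frame Compression, both Server and Client ends would typically share an options dictionary along the following lines (a sketch; attribute values are illustrative, not a prescribed configuration):

```python
# activate Bidirectional Mode and JPEG Frame Compression
# with identical performance attributes on both ends (illustrative values)
options = {
    "bidirectional_mode": True,
    "jpeg_compression": True,
    "jpeg_compression_quality": 90,
    "jpeg_compression_fastdct": True,
    "jpeg_compression_fastupsample": True,
}
```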
@@ -344,14 +475,13 @@ Open your favorite terminal and execute the following python code: ```python # import required libraries -from vidgear.gears import VideoGear from vidgear.gears import NetGear from vidgear.gears.helper import reducer import numpy as np import cv2 # open any valid video stream(for e.g `test.mp4` file) -stream = VideoGear(source="test.mp4").start() +stream = cv2.VideoCapture("test.mp4") # activate Bidirectional mode and Frame Compression options = { @@ -370,10 +500,10 @@ while True: try: # read frames from stream - frame = stream.read() + (grabbed, frame) = stream.read() - # check for frame if Nonetype - if frame is None: + # check for frame if not grabbed + if not grabbed: break # reducer frames size if you want even more performance, otherwise comment this line @@ -400,7 +530,7 @@ while True: break # safely close video stream -stream.stop() +stream.release() # safely close server server.close() @@ -417,7 +547,6 @@ Then open another terminal on the same system and execute the following python c ```python # import required libraries from vidgear.gears import NetGear -from vidgear.gears import VideoGear from vidgear.gears.helper import reducer import cv2 @@ -431,7 +560,7 @@ options = { } # again open the same video stream -stream = VideoGear(source="test.mp4").start() +stream = cv2.VideoCapture("test.mp4") # define NetGear Client with `receive_mode = True` and defined parameter client = NetGear(receive_mode=True, pattern=1, logging=True, **options) @@ -440,10 +569,10 @@ client = NetGear(receive_mode=True, pattern=1, logging=True, **options) while True: # read frames from stream - frame = stream.read() + (grabbed, frame) = stream.read() - # check for frame if Nonetype - if frame is None: + # check for frame if not grabbed + if not grabbed: break # reducer frames size if you want even more performance, otherwise comment this line @@ -481,7 +610,7 @@ while True: cv2.destroyAllWindows() # safely close video stream -stream.stop() +stream.release() # safely close client client.close() diff --git a/docs/gears/netgear/advanced/multi_client.md b/docs/gears/netgear/advanced/multi_client.md index ca49d8e40..9c7e6016f 100644 --- a/docs/gears/netgear/advanced/multi_client.md +++ b/docs/gears/netgear/advanced/multi_client.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -20,33 +20,33 @@ limitations under the License. # Multi-Clients Mode for NetGear API - -## Overview -
NetGear's Multi-Clients Mode
+## Overview In Multi-Clients Mode, NetGear robustly handles Multiple Clients at once thereby able to broadcast frames and data across multiple Clients/Consumers in the network at same time. This mode works almost contrary to [Multi-Servers Mode](../multi_server/) but here data transfer works unidirectionally with pattern `1` _(i.e. Request/Reply `zmq.REQ/zmq.REP`)_ only. Every new Client that connects to single Server can be identified by its unique port address on the network. The supported patterns for this mode are Publish/Subscribe (`zmq.PUB/zmq.SUB`) and Request/Reply(`zmq.REQ/zmq.REP`) and can be easily activated in NetGear API through `multiclient_mode` attribute of its [`options`](../../params/#options) dictionary parameter during initialization. -!!! warning "Multi-Clients is best for tranferring **Data with Video-frames** to specific multiple Clients at the same time. But if you're looking for sheer performance for broadcasting see [WebGear API](../../../webgear/overview/)." +!!! tip "Multi-Clients Mode is best for broadcasting **Meta-Data with Video-frames** to specific limited number of clients in real time. But if you're looking to scale broadcast to a very large pool of clients, then see our [WebGear](../../../webgear/overview/) or [WebGear_RTC](../../../webgear_rtc/overview/) APIs."   -!!! danger "Multi-Clients Mode Requirements" +!!! danger "Important Information regarding Multi-Clients Mode" * A unique PORT address **MUST** be assigned to each Client on the network using its [`port`](../../params/#port) parameter. - * A list/tuple of PORT addresses of all unique Cients **MUST** be assigned at Server's end using its [`port`](../../params/#port) parameter for a successful connection. + * A list/tuple of PORT addresses of all unique Clients **MUST** be assigned at Server's end using its [`port`](../../params/#port) parameter for a successful connection. * Patterns `1` _(i.e. Request/Reply `zmq.REQ/zmq.REP`)_ and `2` _(i.e. Publish/Subscribe `zmq.PUB/zmq.SUB`)_ are the only supported pattern values for this Mode. Therefore, calling any other pattern value with is mode will result in `ValueError`. + * Multi-Clients and Multi-Servers exclusive modes **CANNOT** be enabled simultaneously, Otherwise NetGear API will throw `ValueError`. + * The [`address`](../../params/#address) parameter value of each Client **MUST** exactly match the Server.   @@ -67,20 +67,16 @@ The supported patterns for this mode are Publish/Subscribe (`zmq.PUB/zmq.SUB`) a - [x] If the server gets disconnected, all the clients will automatically exit to save resources. -    - ## Usage Examples -!!! info "Important Information" +!!! alert "Important" * ==Frame/Data transmission will **NOT START** until all given Client(s) are connected to the Server.== - * Multi-Clients and Multi-Servers exclusive modes **CANNOT** be enabled simultaneously, Otherwise NetGear API will throw `ValueError`. - * For sake of simplicity, in these examples we will use only two unique Clients, but the number of these Clients can be extended to **SEVERAL** numbers depending upon your Network bandwidth and System Capabilities. @@ -89,7 +85,9 @@ The supported patterns for this mode are Publish/Subscribe (`zmq.PUB/zmq.SUB`) a ### Bare-Minimum Usage -In this example, we will capturing live video-frames from a source _(a.k.a Servers)_ with a webcam connected to it. 
Afterwards, those captured frame will be transferred over the network to a two independent system _(a.k.a Client)_ at the same time, and will be displayed in Output Window at real-time. All this by using this Multi-Clients Mode in NetGear API. +In this example, we will capturing live video-frames from a source _(a.k.a Server)_ with a webcam connected to it. Afterwards, those captured frame will be sent over the network to two independent system _(a.k.a Clients)_ using this Multi-Clients Mode in NetGear API. Finally, both Clients will be displaying recieved frames in Output Windows in real time. + +!!! tip "This example is useful for building applications like Real-Time Video Broadcasting to multiple clients in local network." #### Server's End diff --git a/docs/gears/netgear/advanced/multi_server.md b/docs/gears/netgear/advanced/multi_server.md index 94840f9dc..d0fa4caaa 100644 --- a/docs/gears/netgear/advanced/multi_server.md +++ b/docs/gears/netgear/advanced/multi_server.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -20,13 +20,13 @@ limitations under the License. # Multi-Servers Mode for NetGear API -## Overview -
NetGear's Multi-Servers Mode
+## Overview + In Multi-Servers Mode, NetGear API robustly handles Multiple Servers at once, thereby providing seamless access to frames and unidirectional data transfer across multiple Publishers/Servers in the network at the same time. Each new server connects to a single client can be identified by its unique port address on the network. The supported patterns for this mode are Publish/Subscribe (`zmq.PUB/zmq.SUB`) and Request/Reply(`zmq.REQ/zmq.REP`) and can be easily activated in NetGear API through `multiserver_mode` attribute of its [`options`](../../params/#options) dictionary parameter during initialization. @@ -35,7 +35,7 @@ The supported patterns for this mode are Publish/Subscribe (`zmq.PUB/zmq.SUB`) a   -!!! danger "Multi-Servers Mode Requirements" +!!! danger "Important Information regarding Multi-Servers Mode" * A unique PORT address **MUST** be assigned to each Server on the network using its [`port`](../../params/#port) parameter. @@ -43,6 +43,8 @@ The supported patterns for this mode are Publish/Subscribe (`zmq.PUB/zmq.SUB`) a * Patterns `1` _(i.e. Request/Reply `zmq.REQ/zmq.REP`)_ and `2` _(i.e. Publish/Subscribe `zmq.PUB/zmq.SUB`)_ are the only supported values for this Mode. Therefore, calling any other pattern value with is mode will result in `ValueError`. + * Multi-Servers and Multi-Clients exclusive modes **CANNOT** be enabled simultaneously, Otherwise NetGear API will throw `ValueError`. + * The [`address`](../../params/#address) parameter value of each Server **MUST** exactly match the Client.   @@ -65,29 +67,27 @@ The supported patterns for this mode are Publish/Subscribe (`zmq.PUB/zmq.SUB`) a   -  - - ## Usage Examples -!!! info "Important Information" +!!! alert "Example Assumptions" * For sake of simplicity, in these examples we will use only two unique Servers, but, the number of these Servers can be extended to several numbers depending upon your system hardware limits. - * All of Servers will be transferring frames to a single Client system at the same time, which will be displaying received frames as a montage _(multiple frames concatenated together)_. + * All of Servers will be transferring frames to a single Client system at the same time, which will be displaying received frames as a live montage _(multiple frames concatenated together)_. * For building Frames Montage at Client's end, We are going to use `imutils` python library function to build montages, by concatenating together frames recieved from different servers. Therefore, Kindly install this library with `pip install imutils` terminal command. - * Multi-Servers and Multi-Clients exclusive modes **CANNOT** be enabled simultaneously, Otherwise NetGear API will throw `ValueError`. -   ### Bare-Minimum Usage -In this example, we will capturing live video-frames on two independent sources _(a.k.a Servers)_, each with a webcam connected to it. Then, those frames will be transferred over the network to a single system _(a.k.a Client)_ at the same time, and will be displayed as a real-time montage. All this by using this Multi-Servers Mode in NetGear API. +In this example, we will capturing live video-frames on two independent sources _(a.k.a Servers)_, each with a webcam connected to it. Afterwards, these frames will be sent over the network to a single system _(a.k.a Client)_ using this Multi-Servers Mode in NetGear API in real time, and will be displayed as a live montage. + + +!!! tip "This example is useful for building applications like Real-Time Security System with multiple cameras." 
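The essential wiring in this mode is the port pairing: every Server binds to its own unique port, while the single Client is handed the complete list of those ports. A minimal sketch of that pairing, ahead of the complete example below (IP address and port numbers are illustrative):

```python
from vidgear.gears import NetGear

# enable Multi-Servers Mode on every participant
options = {"multiserver_mode": True}

# On each Server machine: bind to its own unique port,
# with `address` matching the single Client's IP (illustrative values)
server = NetGear(
    address="192.168.x.xxx", port="5566", protocol="tcp", pattern=2, logging=True, **options
)

# On the Client machine: pass the full list of Server ports at once
client = NetGear(
    address="192.168.x.xxx",
    port=["5566", "5567"],
    protocol="tcp",
    pattern=2,
    receive_mode=True,
    logging=True,
    **options
)
```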
#### Client's End diff --git a/docs/gears/netgear/advanced/secure_mode.md b/docs/gears/netgear/advanced/secure_mode.md index da79ae686..200ebd92a 100644 --- a/docs/gears/netgear/advanced/secure_mode.md +++ b/docs/gears/netgear/advanced/secure_mode.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -48,7 +48,7 @@ Secure mode supports the two most powerful ZMQ security layers:   -!!! danger "Secure Mode Requirements" +!!! danger "Important Information regarding Secure Mode" * The `secure_mode` attribute value at the Client's end **MUST** match exactly the Server's end _(i.e. **IronHouse** security layer is only compatible with **IronHouse**, and **NOT** with **StoneHouse**)_. @@ -83,9 +83,9 @@ Secure mode supports the two most powerful ZMQ security layers:   -## Supported Attributes +## Exclusive Attributes -For implementing Secure Mode, NetGear API currently provide following attribute for its [`options`](../../params/#options) dictionary parameter: +For implementing Secure Mode, NetGear API currently provide following exclusive attribute for its [`options`](../../params/#options) dictionary parameter: * `secure_mode` (_integer_) : This attribute activates and sets the ZMQ security Mechanism. Its possible values are: `1`(_StoneHouse_) & `2`(_IronHouse_), and its default value is `0`(_Grassland(no security)_). Its usage is as follows: @@ -125,7 +125,7 @@ For implementing Secure Mode, NetGear API currently provide following attribute Following is the bare-minimum code you need to get started with Secure Mode in NetGear API: -#### Server End +#### Server's End Open your favorite terminal and execute the following python code: @@ -171,7 +171,7 @@ stream.stop() server.close() ``` -#### Client End +#### Client's End Then open another terminal on the same system and execute the following python code and see the output: @@ -282,7 +282,7 @@ client.close()   -#### Server End +#### Server's End Now, Open the terminal on another Server System _(with a webcam connected to it at index `0`)_, and execute the following python code: diff --git a/docs/gears/netgear/advanced/ssh_tunnel.md b/docs/gears/netgear/advanced/ssh_tunnel.md new file mode 100644 index 000000000..8b0d41a5c --- /dev/null +++ b/docs/gears/netgear/advanced/ssh_tunnel.md @@ -0,0 +1,320 @@ + + +# SSH Tunneling Mode for NetGear API + +

+ NetGear's SSH Tunneling Mode

+ + +## Overview + +!!! new "New in v0.2.2" + This document was added in `v0.2.2`. + + +SSH Tunneling Mode allows you to connect NetGear client and server via secure SSH connection over the untrusted network and access its intranet services across firewalls. This mode works with pyzmq's [`zmq.ssh`](https://github.com/zeromq/pyzmq/tree/main/zmq/ssh) module for tunneling ZeroMQ connections over ssh. + +This mode implements [SSH Remote Port Forwarding](https://www.ssh.com/academy/ssh/tunneling/example) which enables accessing Host(client) machine outside the network by exposing port to the public Internet. Thereby, once you have established the tunnel, connections to local machine will actually be connections to remote machine as seen from the server. + +??? danger "Beware ☠️" + Cybercriminals or malware could exploit SSH tunnels to hide their unauthorized communications, or to exfiltrate stolen data from the network. More information can be found [here ➶](https://www.ssh.com/academy/ssh/tunneling) + +All patterns are valid for this mode and it can be easily activated in NetGear API at server end through `ssh_tunnel_mode` string attribute of its [`options`](../../params/#options) dictionary parameter during initialization. + +!!! warning "Important" + * ==SSH tunneling can only be enabled on Server-end to establish remote SSH connection with Client.== + * SSH tunneling Mode is **NOT** compatible with [Multi-Servers](../../advanced/multi_server) and [Multi-Clients](../../advanced/multi_client) Exclusive Modes yet. + +!!! tip "Useful Tips" + * It is advise to use `pattern=2` to overcome random disconnection due to delays in network. + * SSH tunneling Mode is fully supports [Bidirectional Mode](../../advanced/multi_server), [Secure Mode](../../advanced/secure_mode/) and [JPEG-Frame Compression](../../advanced/compression/). + * It is advised to enable logging (`logging = True`) on the first run, to easily identify any runtime errors. + +  + + +## Requirements + +SSH Tunnel Mode requires [`pexpect`](http://www.noah.org/wiki/pexpect) or [`paramiko`](http://www.lag.net/paramiko/) as an additional dependency which is not part of standard VidGear package. It can be easily installed via pypi as follows: + + +=== "Pramiko" + + !!! success "`paramiko` is compatible with all platforms." + + !!! info "`paramiko` support is automatically enabled in ZeroMQ if installed." + + ```sh + # install paramiko + pip install paramiko + ``` + +=== "Pexpect" + + !!! fail "`pexpect` is NOT compatible with Windows Machines." + + ```sh + # install pexpect + pip install pexpect + ``` + +  + + +## Exclusive Attributes + +!!! warning "All these attributes will work on Server end only whereas Client end will simply discard them." + +For implementing SSH Tunneling Mode, NetGear API currently provide following exclusive attribute for its [`options`](../../params/#options) dictionary parameter: + +* **`ssh_tunnel_mode`** (_string_) : This attribute activates SSH Tunneling Mode and sets the fully specified `"@:"` SSH URL for tunneling at Server end. Its usage is as follows: + + !!! fail "On Server end, NetGear automatically validates if the `port` is open at specified SSH URL or not, and if it fails _(i.e. port is closed)_, NetGear will throw `AssertionError`!" + + === "With Default Port" + !!! info "The `port` value in SSH URL is the forwarded port on host(client) machine. Its default value is `22`_(meaning default SSH port is forwarded)_." 
+ + ```python + # activates SSH Tunneling and assign SSH URL + options = {"ssh_tunnel_mode":"userid@52.194.1.73"} + # only connections from the public IP address 52.194.1.73 on default port 22 are allowed + ``` + + === "With Custom Port" + !!! quote "You can also define your custom forwarded port instead." + + ```python + # activates SSH Tunneling and assign SSH URL + options = {"ssh_tunnel_mode":"userid@52.194.1.73:8080"} + # only connections from the public IP address 52.194.1.73 on custom port 8080 are allowed + ``` + +* **`ssh_tunnel_pwd`** (_string_): This attribute sets the password required to authorize Host for SSH Connection at Server end. This password grant access and controls SSH user can access what. It can be used as follows: + + ```python + # set password for our SSH conection + options = { + "ssh_tunnel_mode":"userid@52.194.1.73", + "ssh_tunnel_pwd":"mypasswordstring", + } + ``` + +* **`ssh_tunnel_keyfile`** (_string_): This attribute sets path to Host key that provide another way to authenticate host for SSH Connection at Server end. Its purpose is to prevent man-in-the-middle attacks. Certificate-based host authentication can be a very attractive alternative in large organizations. It allows device authentication keys to be rotated and managed conveniently and every connection to be secured. It can be used as follows: + + !!! tip "You can use [Ssh-keygen](https://www.ssh.com/academy/ssh/keygen) tool for creating new authentication key pairs for SSH Tunneling." + + ```python + # set keyfile path for our SSH conection + options = { + "ssh_tunnel_mode":"userid@52.194.1.73", + "ssh_tunnel_keyfile":"/home/foo/.ssh/id_rsa", + } + ``` + + +  + + +## Usage Example + + +???+ alert "Assumptions for this Example" + + In this particular example, we assume that: + + - **Server:** + * [x] Server end is a **Raspberry Pi** with USB camera connected to it. + * [x] Server is located at remote location and outside the Client's network. + + - **Client:** + * [x] Client end is a **Regular PC/Computer** located at `52.155.1.89` public IP address for displaying frames received from the remote Server. + * [x] This Client is Port Forwarded by its Router to a default SSH Port(22), which allows Server to connect to its TCP port `22` remotely. This connection will then be tunneled back to our PC/Computer(Client) and makes TCP connection to it again via port `22` on localhost(`127.0.0.1`). + * [x] Also, there's a username `test` present on the PC/Computer(Client) to SSH login with password `pas$wd`. + + - **Setup Diagram:** + + Assumed setup can be visualized throw diagram as follows: + + ![Placeholder](../../../../assets/images/ssh_tunnel_ex.png){ loading=lazy } + + + +??? question "How to Port Forward in Router" + + For more information on Forwarding Port in Popular Home Routers. See [this document ➶](https://www.noip.com/support/knowledgebase/general-port-forwarding-guide/)" + + + +#### Client's End + +Open a terminal on Client System _(A Regular PC where you want to display the input frames received from the Server)_ and execute the following python code: + + +!!! 
warning "Prerequisites for Client's End" + + To ensure a successful Remote NetGear Connection with Server: + + * **Install OpenSSH Server: (Tested)** + + === "On Linux" + + ```sh + # Debian-based + sudo apt-get install openssh-server + + # RHEL-based + sudo yum install openssh-server + ``` + + === "On Windows" + + See [this official Microsoft doc ➶](https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse) + + + === "On OSX" + + ```sh + brew install openssh + ``` + + * Make sure to note down the Client's public IP address required by Server end. Use https://www.whatismyip.com/ to determine it. + + * Make sure that Client Machine is Port Forward by its Router to expose it to the public Internet. Also, this forwarded port value is needed at Server end." + + +??? fail "Secsh channel X open FAILED: open failed: Administratively prohibited" + + **Error:** This error means that installed OpenSSH is preventing connections to forwarded ports from outside your Client Machine. + + **Solution:** You need to change `GatewayPorts no` option to `GatewayPorts yes` in the **OpenSSH server configuration file** [`sshd_config`](https://www.ssh.com/ssh/sshd_config/) to allows anyone to connect to the forwarded ports on Client Machine. + + +??? tip "Enabling Dynamic DNS" + SSH tunneling requires public IP address to able to access host on public Internet. Thereby, if it's troublesome to remember Public IP address or your IP address change constantly, then you can use dynamic DNS services like https://www.noip.com/ + +!!! info "You can terminate client anytime by pressing ++ctrl+"C"++ on your keyboard!" + +```python +# import required libraries +from vidgear.gears import NetGear +import cv2 + +# Define NetGear Client at given IP address and define parameters +client = NetGear( + address="127.0.0.1", # don't change this + port="5454", + pattern=2, + receive_mode=True, + logging=True, +) + +# loop over +while True: + + # receive frames from network + frame = client.recv() + + # check for received frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + +# close output window +cv2.destroyAllWindows() + +# safely close client +client.close() +``` + +  + +#### Server's End + +Now, Open the terminal on Remote Server System _(A Raspberry Pi with a webcam connected to it at index `0`)_, and execute the following python code: + +!!! danger "Make sure to replace the SSH URL in following example with yours." + +!!! warning "On Server end, NetGear automatically validates if the `port` is open at specified SSH URL or not, and if it fails _(i.e. port is closed)_, NetGear will throw `AssertionError`!" + +!!! info "You can terminate stream on both side anytime by pressing ++ctrl+"C"++ on your keyboard!" + +```python +# import required libraries +from vidgear.gears import VideoGear +from vidgear.gears import NetGear + +# activate SSH tunneling with SSH URL, and +# [BEWARE!!!] Change SSH URL and SSH password with yours for this example !!! +options = { + "ssh_tunnel_mode": "test@52.155.1.89", # defaults to port 22 + "ssh_tunnel_pwd": "pas$wd", +} + +# Open live video stream on webcam at first index(i.e. 
0) device +stream = VideoGear(source=0).start() + +# Define NetGear server at given IP address and define parameters +server = NetGear( + address="127.0.0.1", # don't change this + port="5454", + pattern=2, + logging=True, + **options +) + +# loop over until KeyBoard Interrupted +while True: + + try: + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # send frame to server + server.send(frame) + + except KeyboardInterrupt: + break + +# safely close video stream +stream.stop() + +# safely close server +server.close() +``` + +  \ No newline at end of file diff --git a/docs/gears/netgear/overview.md b/docs/gears/netgear/overview.md index 86e4a73cc..6c24ce05b 100644 --- a/docs/gears/netgear/overview.md +++ b/docs/gears/netgear/overview.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -77,6 +77,9 @@ In addition to the primary modes, NetGear API also offers applications-specific * **Bidirectional Mode:** _This exclusive mode ==provides seamless support for bidirectional data transmission between between Server and Client along with video frames==. Using this mode, the user can now send or receive any data(of any datatype) between Server and Client easily in real-time. **You can learn more about this mode [here ➶](../advanced/bidirectional_mode/).**_ +* **SSH Tunneling Mode:** _This exclusive mode ==allows you to connect NetGear client and server via secure SSH connection over the untrusted network== and access its intranet services across firewalls. This mode implements SSH Remote Port Forwarding which enables accessing Host(client) machine outside the network by exposing port to the public Internet. **You can learn more about this mode [here ➶](../advanced/ssh_tunnel/).**_ + + * **Secure Mode:** _In this exclusive mode, NetGear API ==provides easy access to powerful, smart & secure ZeroMQ's Security Layers== that enables strong encryption on data, and unbreakable authentication between the Server and Client with the help of custom certificates/keys that brings cheap, standardized privacy and authentication for distributed systems over the network. **You can learn more about this mode [here ➶](../advanced/secure_mode/).**_ diff --git a/docs/gears/netgear/params.md b/docs/gears/netgear/params.md index 1bc72ecfb..c9562e362 100644 --- a/docs/gears/netgear/params.md +++ b/docs/gears/netgear/params.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -149,24 +149,28 @@ This parameter provides the flexibility to alter various NetGear API's internal * **`bidirectional_mode`** (_boolean_) : This internal attribute activates the exclusive [**Bidirectional Mode**](../advanced/bidirectional_mode/), if enabled(`True`). 
+ * **`ssh_tunnel_mode`** (_string_) : This internal attribute activates the exclusive [**SSH Tunneling Mode**](../advanced/ssh_tunnel/) ==at the Server-end only==. + + * **`ssh_tunnel_pwd`** (_string_): In SSH Tunneling Mode, This internal attribute sets the password required to authorize Host for SSH Connection ==at the Server-end only==. More information can be found [here ➶](../advanced/ssh_tunnel/#supported-attributes) + + * **`ssh_tunnel_keyfile`** (_string_): In SSH Tunneling Mode, This internal attribute sets path to Host key that provide another way to authenticate host for SSH Connection ==at the Server-end only==. More information can be found [here ➶](../advanced/ssh_tunnel/#supported-attributes) + * **`custom_cert_location`** (_string_) : In Secure Mode, This internal attribute assigns user-defined location/path to directory for generating/storing Public+Secret Keypair necessary for encryption. More information can be found [here ➶](../advanced/secure_mode/#supported-attributes) * **`overwrite_cert`** (_boolean_) : In Secure Mode, This internal attribute decides whether to overwrite existing Public+Secret Keypair/Certificates or not, ==at the Server-end only==. More information can be found [here ➶](../advanced/secure_mode/#supported-attributes) - * **`jpeg_compression`**(_bool_): This attribute can be used to activate(if True)/deactivate(if False) Frame Compression. Its default value is also `True`. More information can be found [here ➶](../advanced/compression/#supported-attributes) + * **`jpeg_compression`**(_bool/str_): This internal attribute is used to activate(if `True`)/deactivate(if `False`) JPEG Frame Compression as well as to specify incoming frames colorspace with compression. By default colorspace is `BGR` and compression is enabled(`True`). More information can be found [here ➶](../advanced/compression/#supported-attributes) - * **`jpeg_compression_quality`**(_int/float_): It controls the JPEG quantization factor. Its value varies from `10` to `100` (the higher is the better quality but performance will be lower). Its default value is `90`. More information can be found [here ➶](../advanced/compression/#supported-attributes) + * **`jpeg_compression_quality`**(_int/float_): This internal attribute controls the JPEG quantization factor in JPEG Frame Compression. Its value varies from `10` to `100` (the higher is the better quality but performance will be lower). Its default value is `90`. More information can be found [here ➶](../advanced/compression/#supported-attributes) - * **`jpeg_compression_fastdct`**(_bool_): This attribute if True, use fastest DCT method that speeds up decoding by 4-5% for a minor loss in quality. Its default value is also `True`. More information can be found [here ➶](../advanced/compression/#supported-attributes) + * **`jpeg_compression_fastdct`**(_bool_): This internal attributee if True, use fastest DCT method that speeds up decoding by 4-5% for a minor loss in quality in JPEG Frame Compression. Its default value is also `True`. More information can be found [here ➶](../advanced/compression/#supported-attributes) - * **`jpeg_compression_fastupsample`**(_bool_): This attribute if True, use fastest color upsampling method. Its default value is `False`. More information can be found [here ➶](../advanced/compression/#supported-attributes) + * **`jpeg_compression_fastupsample`**(_bool_): This internal attribute if True, use fastest color upsampling method. Its default value is `False`. 
More information can be found [here ➶](../advanced/compression/#supported-attributes) * **`max_retries`**(_integer_): This internal attribute controls the maximum retries before Server/Client exit itself, if it's unable to get any response/reply from the socket before a certain amount of time, when synchronous messaging patterns like (`zmq.PAIR` & `zmq.REQ/zmq.REP`) are being used. It's value can anything greater than `0`, and its default value is `3`. - * **`request_timeout`**(_integer_): This internal attribute controls the timeout value _(in seconds)_, after which the Server/Client exit itself if it's unable to get any response/reply from the socket, when synchronous messaging patterns like (`zmq.PAIR` & `zmq.REQ/zmq.REP`) are being used. It's value can anything greater than `0`, and its default value is `10` seconds. - * **`flag`**(_integer_): This PyZMQ attribute value can be either `0` or `zmq.NOBLOCK`_( i.e. 1)_. More information can be found [here ➶](https://pyzmq.readthedocs.io/en/latest/api/zmq.html). * **`copy`**(_boolean_): This PyZMQ attribute selects if message be received in a copying or non-copying manner. If `False` a object is returned, if `True` a string copy of the message is returned. diff --git a/docs/gears/netgear/usage.md b/docs/gears/netgear/usage.md index 357a49819..e2aaa2d14 100644 --- a/docs/gears/netgear/usage.md +++ b/docs/gears/netgear/usage.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -471,4 +471,10 @@ stream.stop() server.close() ``` -  \ No newline at end of file +  + +## Bonus Examples + +!!! example "Checkout more advanced NetGear examples with unusual configuration [here ➶](../../../help/netgear_ex/)" + +  \ No newline at end of file diff --git a/docs/gears/netgear_async/advanced/bidirectional_mode.md b/docs/gears/netgear_async/advanced/bidirectional_mode.md new file mode 100644 index 000000000..0341f372b --- /dev/null +++ b/docs/gears/netgear_async/advanced/bidirectional_mode.md @@ -0,0 +1,582 @@ + + +# Bidirectional Mode for NetGear_Async API + +

+ NetGear_Async's Bidirectional Mode

+ +## Overview + +!!! new "New in v0.2.2" + This document was added in `v0.2.2`. + +Bidirectional Mode enables seamless support for Bidirectional data transmission between Client and Sender along with video-frames through its synchronous messaging patterns such as `zmq.PAIR` (ZMQ Pair Pattern) & `zmq.REQ/zmq.REP` (ZMQ Request/Reply Pattern) in NetGear_Async API. + +In Bidirectional Mode, we utilizes the NetGear_Async API's [`transceive_data`](../../../../bonus/reference/NetGear_Async/#vidgear.gears.asyncio.netgear_async.NetGear_Async.transceive_data) method for transmitting data _(at Client's end)_ and receiving data _(in Server's end)_ all while transferring frames in real-time. + +This mode can be easily activated in NetGear_Async through `bidirectional_mode` attribute of its [`options`](../../params/#options) dictionary parameter during initialization. + +  + + +!!! danger "Important" + + * In Bidirectional Mode, `zmq.PAIR`(ZMQ Pair) & `zmq.REQ/zmq.REP`(ZMQ Request/Reply) are **ONLY** Supported messaging patterns. Accessing this mode with any other messaging pattern, will result in `ValueError`. + + * Bidirectional Mode ==only works with [**User-defined Custom Source**](../../usage/#using-netgear_async-with-a-custom-sourceopencv) on Server end==. Otherwise, NetGear_Async API will throw `ValueError`. + + * Bidirectional Mode enables you to send data of **ANY**[^1] Data-type along with frame bidirectionally. + + * NetGear_Async API will throw `RuntimeError` if Bidirectional Mode is disabled at Server end or Client end but not both. + + * Bidirectional Mode may lead to additional **LATENCY** depending upon the size of data being transfer bidirectionally. User discretion is advised! + + +  + +  + +## Exclusive Method and Parameter + +To send data bidirectionally, NetGear_Async API provides following exclusive method and parameter: + +!!! alert "`transceive_data` only works when Bidirectional Mode is enabled." + +* [`transceive_data`](../../../../bonus/reference/NetGear_Async/#vidgear.gears.asyncio.netgear_async.NetGear_Async.transceive_data): It's a bidirectional mode exclusive method to transmit data _(in Receive mode)_ and receive data _(in Send mode)_, all while transferring frames in real-time. + + * `data`: In `transceive_data` method, this parameter enables user to inputs data _(of **ANY**[^1] datatype)_ for sending back to Server at Client's end. + +  + +  + + +## Usage Examples + + +!!! warning "For Bidirectional Mode, NetGear_Async must need [User-defined Custom Source](../../usage/#using-netgear_async-with-a-custom-sourceopencv) at its Server end otherwise it will throw ValueError." + + +### Bare-Minimum Usage with OpenCV + +Following is the bare-minimum code you need to get started with Bidirectional Mode over Custom Source Server built using OpenCV and NetGear_Async API: + +#### Server End + +Open your favorite terminal and execute the following python code: + +!!! tip "You can terminate both sides anytime by pressing ++ctrl+"C"++ on your keyboard!" + +```python +# import library +from vidgear.gears.asyncio import NetGear_Async +import cv2, asyncio + +# activate Bidirectional mode +options = {"bidirectional_mode": True} + +# initialize Server without any source +server = NetGear_Async(source=None, logging=True, **options) + +# Create a async frame generator as custom source +async def my_frame_generator(): + + # !!! define your own video source here !!! 
+ # Open any valid video stream(for e.g `foo.mp4` file) + stream = cv2.VideoCapture("foo.mp4") + + # loop over stream until its terminated + while True: + # read frames + (grabbed, frame) = stream.read() + + # check for empty frame + if not grabbed: + break + + # {do something with the frame to be sent here} + + # prepare data to be sent(a simple text in our case) + target_data = "Hello, I am a Server." + + # receive data from Client + recv_data = await server.transceive_data() + + # print data just received from Client + if not (recv_data is None): + print(recv_data) + + # send our frame & data + yield (target_data, frame) + + # sleep for sometime + await asyncio.sleep(0) + + # safely close video stream + stream.release() + + +if __name__ == "__main__": + # set event loop + asyncio.set_event_loop(server.loop) + # Add your custom source generator to Server configuration + server.config["generator"] = my_frame_generator() + # Launch the Server + server.launch() + try: + # run your main function task until it is complete + server.loop.run_until_complete(server.task) + except (KeyboardInterrupt, SystemExit): + # wait for interrupts + pass + finally: + # finally close the server + server.close() +``` + +#### Client End + +Then open another terminal on the same system and execute the following python code and see the output: + +!!! tip "You can terminate client anytime by pressing ++ctrl+"C"++ on your keyboard!" + +```python +# import libraries +from vidgear.gears.asyncio import NetGear_Async +import cv2, asyncio + +# activate Bidirectional mode +options = {"bidirectional_mode": True} + +# define and launch Client with `receive_mode=True` +client = NetGear_Async(receive_mode=True, logging=True, **options).launch() + + +# Create a async function where you want to show/manipulate your received frames +async def main(): + # loop over Client's Asynchronous Frame Generator + async for (data, frame) in client.recv_generator(): + + # do something with receive data from server + if not (data is None): + # let's print it + print(data) + + # {do something with received frames here} + + # Show output window(comment these lines if not required) + cv2.imshow("Output Frame", frame) + cv2.waitKey(1) & 0xFF + + # prepare data to be sent + target_data = "Hi, I am a Client here." + # send our data to server + await client.transceive_data(data=target_data) + + # await before continuing + await asyncio.sleep(0) + + +if __name__ == "__main__": + # Set event loop to client's + asyncio.set_event_loop(client.loop) + try: + # run your main function task until it is complete + client.loop.run_until_complete(main()) + except (KeyboardInterrupt, SystemExit): + # wait for interrupts + pass + + # close all output window + cv2.destroyAllWindows() + + # safely close client + client.close() +``` + +  + +  + + +### Using Bidirectional Mode with Variable Parameters + + +#### Client's End + +Open a terminal on Client System _(where you want to display the input frames received from the Server)_ and execute the following python code: + +!!! info "Note down the IP-address of this system(required at Server's end) by executing the command: `hostname -I` and also replace it in the following code." + +!!! tip "You can terminate client anytime by pressing ++ctrl+"C"++ on your keyboard!" 
+ +```python +# import libraries +from vidgear.gears.asyncio import NetGear_Async +import cv2, asyncio + +# activate Bidirectional mode +options = {"bidirectional_mode": True} + +# Define NetGear_Async Client at given IP address and define parameters +# !!! change following IP address '192.168.x.xxx' with yours !!! +client = NetGear_Async( + address="192.168.x.xxx", + port="5454", + protocol="tcp", + pattern=1, + receive_mode=True, + logging=True, + **options +) + +# Create a async function where you want to show/manipulate your received frames +async def main(): + # loop over Client's Asynchronous Frame Generator + async for (data, frame) in client.recv_generator(): + + # do something with receive data from server + if not (data is None): + # let's print it + print(data) + + # {do something with received frames here} + + # Show output window(comment these lines if not required) + cv2.imshow("Output Frame", frame) + cv2.waitKey(1) & 0xFF + + # prepare data to be sent + target_data = "Hi, I am a Client here." + # send our data to server + await client.transceive_data(data=target_data) + + # await before continuing + await asyncio.sleep(0) + + +if __name__ == "__main__": + # Set event loop to client's + asyncio.set_event_loop(client.loop) + try: + # run your main function task until it is complete + client.loop.run_until_complete(main()) + except (KeyboardInterrupt, SystemExit): + # wait for interrupts + pass + + # close all output window + cv2.destroyAllWindows() + + # safely close client + client.close() +``` + +  + +#### Server End + +Now, Open the terminal on another Server System _(a Raspberry Pi with Camera Module)_, and execute the following python code: + +!!! info "Replace the IP address in the following code with Client's IP address you noted earlier." + +!!! tip "You can terminate stream on both side anytime by pressing ++ctrl+"C"++ on your keyboard!" + +```python +# import library +from vidgear.gears.asyncio import NetGear_Async +from vidgear.gears import VideoGear +import cv2, asyncio + +# activate Bidirectional mode +options = {"bidirectional_mode": True} + +# initialize Server without any source at given IP address and define parameters +# !!! change following IP address '192.168.x.xxx' with client's IP address !!! +server = NetGear_Async( + source=None, + address="192.168.x.xxx", + port="5454", + protocol="tcp", + pattern=1, + logging=True, + **options +) + +# Create a async frame generator as custom source +async def my_frame_generator(): + + # !!! define your own video source here !!! + # Open any video stream such as live webcam + # video stream on first index(i.e. 0) device + # add various Picamera tweak parameters to dictionary + options = { + "hflip": True, + "exposure_mode": "auto", + "iso": 800, + "exposure_compensation": 15, + "awb_mode": "horizon", + "sensor_mode": 0, + } + + # open pi video stream with defined parameters + stream = PiGear(resolution=(640, 480), framerate=60, logging=True, **options).start() + + # loop over stream until its terminated + while True: + # read frames + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame to be sent here} + + # prepare data to be sent(a simple text in our case) + target_data = "Hello, I am a Server." 
+ + # receive data from Client + recv_data = await server.transceive_data() + + # print data just received from Client + if not (recv_data is None): + print(recv_data) + + # send our frame & data + yield (target_data, frame) + + # sleep for sometime + await asyncio.sleep(0) + + # safely close video stream + stream.stop() + + +if __name__ == "__main__": + # set event loop + asyncio.set_event_loop(server.loop) + # Add your custom source generator to Server configuration + server.config["generator"] = my_frame_generator() + # Launch the Server + server.launch() + try: + # run your main function task until it is complete + server.loop.run_until_complete(server.task) + except (KeyboardInterrupt, SystemExit): + # wait for interrupts + pass + finally: + # finally close the server + server.close() +``` + +  + +  + + +### Using Bidirectional Mode for Video-Frames Transfer + + +In this example we are going to implement a bare-minimum example, where we will be sending video-frames _(3-Dimensional numpy arrays)_ of the same Video bidirectionally at the same time, for testing the real-time performance and synchronization between the Server and the Client using this(Bidirectional) Mode. + +!!! tip "This feature is great for building applications like Real-Time Video Chat." + +!!! info "We're also using [`reducer()`](../../../../../bonus/reference/helper/#vidgear.gears.helper.reducer--reducer) method for reducing frame-size on-the-go for additional performance." + +!!! warning "Remember, Sending large HQ video-frames may required more network bandwidth and packet size which may lead to video latency!" + +#### Server End + +Open your favorite terminal and execute the following python code: + +!!! tip "You can terminate both side anytime by pressing ++ctrl+"C"++ on your keyboard!" + +!!! alert "Server end can only send [numpy.ndarray](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) datatype as frame but not as data." + +```python +# import library +from vidgear.gears.asyncio import NetGear_Async +from vidgear.gears.asyncio.helper import reducer +import cv2, asyncio +import numpy as np + +# activate Bidirectional mode +options = {"bidirectional_mode": True} + +# Define NetGear Server without any source and with defined parameters +server = NetGear_Async(source=None, pattern=1, logging=True, **options) + +# Create a async frame generator as custom source +async def my_frame_generator(): + # !!! define your own video source here !!! + # Open any valid video stream(for e.g `foo.mp4` file) + stream = cv2.VideoCapture("foo.mp4") + # loop over stream until its terminated + while True: + + # read frames + (grabbed, frame) = stream.read() + + # check for empty frame + if not grabbed: + break + + # reducer frames size if you want more performance, otherwise comment this line + frame = await reducer(frame, percentage=30) # reduce frame by 30% + + # {do something with the frame to be sent here} + + # send frame & data and also receive data from Client + recv_data = await server.transceive_data() + + # receive data from Client + if not (recv_data is None): + # check data is a numpy frame + if isinstance(recv_data, np.ndarray): + + # {do something with received numpy frame here} + + # Let's show it on output window + cv2.imshow("Received Frame", recv_data) + cv2.waitKey(1) & 0xFF + else: + # otherwise just print data + print(recv_data) + + # prepare data to be sent(a simple text in our case) + target_data = "Hello, I am a Server." 
+ + # send our frame & data to client + yield (target_data, frame) + + # sleep for sometime + await asyncio.sleep(0) + + # safely close video stream + stream.release() + + +if __name__ == "__main__": + # set event loop + asyncio.set_event_loop(server.loop) + # Add your custom source generator to Server configuration + server.config["generator"] = my_frame_generator() + # Launch the Server + server.launch() + try: + # run your main function task until it is complete + server.loop.run_until_complete(server.task) + except (KeyboardInterrupt, SystemExit): + # wait for interrupts + pass + finally: + # finally close the server + server.close() +``` + +  + +#### Client End + +Then open another terminal on the same system and execute the following python code and see the output: + +!!! tip "You can terminate client anytime by pressing ++ctrl+"C"++ on your keyboard!" + +```python +# import libraries +from vidgear.gears.asyncio import NetGear_Async +from vidgear.gears.asyncio.helper import reducer +import cv2, asyncio + +# activate Bidirectional mode +options = {"bidirectional_mode": True} + +# define and launch Client with `receive_mode=True` +client = NetGear_Async(pattern=1, receive_mode=True, logging=True, **options).launch() + +# Create a async function where you want to show/manipulate your received frames +async def main(): + # !!! define your own video source here !!! + # again open the same video stream for comparison + stream = cv2.VideoCapture("foo.mp4") + # loop over Client's Asynchronous Frame Generator + async for (server_data, frame) in client.recv_generator(): + + # check for server data + if not (server_data is None): + + # {do something with the server data here} + + # lets print extracted server data + print(server_data) + + # {do something with received frames here} + + # Show output window + cv2.imshow("Output Frame", frame) + key = cv2.waitKey(1) & 0xFF + + # read frame target data from stream to be sent to server + (grabbed, target_data) = stream.read() + # check for frame + if grabbed: + # reducer frames size if you want more performance, otherwise comment this line + target_data = await reducer( + target_data, percentage=30 + ) # reduce frame by 30% + # send our frame data + await client.transceive_data(data=target_data) + + # await before continuing + await asyncio.sleep(0) + + # safely close video stream + stream.release() + + +if __name__ == "__main__": + # Set event loop to client's + asyncio.set_event_loop(client.loop) + try: + # run your main function task until it is complete + client.loop.run_until_complete(main()) + except (KeyboardInterrupt, SystemExit): + # wait for interrupts + pass + # close all output window + cv2.destroyAllWindows() + # safely close client + client.close() +``` + +  + + +[^1]: + + !!! warning "Additional data of [numpy.ndarray](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) datatype is **ONLY SUPPORTED** at Client's end with [`transceive_data`](../../../../bonus/reference/NetGear_Async/#vidgear.gears.asyncio.netgear_async.NetGear_Async.transceive_data) method using its `data` parameter. Whereas Server end can only send [numpy.ndarray](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) datatype as frame but not as data." 
+ + +  \ No newline at end of file diff --git a/docs/gears/netgear_async/overview.md b/docs/gears/netgear_async/overview.md index fd982bc6d..fc2e505cf 100644 --- a/docs/gears/netgear_async/overview.md +++ b/docs/gears/netgear_async/overview.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -26,11 +26,13 @@ limitations under the License. ## Overview -> _NetGear_Async can generate the same performance as [NetGear API](../../netgear/overview/) at about one-third the memory consumption, and also provide complete server-client handling with various options to use variable protocols/patterns similar to NetGear, but it doesn't support any [NetGear's Exclusive Modes](../../netgear/overview/#exclusive-modes) yet._ +> _NetGear_Async can generate the same performance as [NetGear API](../../netgear/overview/) at about one-third the memory consumption, and also provide complete server-client handling with various options to use variable protocols/patterns similar to NetGear, but lacks in term of flexibility as it supports only a few [NetGear's Exclusive Modes](../../netgear/overview/#exclusive-modes)._ NetGear_Async is built on [`zmq.asyncio`](https://pyzmq.readthedocs.io/en/latest/api/zmq.asyncio.html), and powered by a high-performance asyncio event loop called [**`uvloop`**](https://github.com/MagicStack/uvloop) to achieve unmatchable high-speed and lag-free video streaming over the network with minimal resource constraints. NetGear_Async can transfer thousands of frames in just a few seconds without causing any significant load on your system. -NetGear_Async provides complete server-client handling and options to use variable protocols/patterns similar to [NetGear API](../../netgear/overview/) but doesn't support any [NetGear's Exclusive Modes](../../netgear/overview/#exclusive-modes) yet. Furthermore, NetGear_Async allows us to define our custom Server as source to manipulate frames easily before sending them across the network(see this [doc](../usage/#using-netgear_async-with-a-custom-sourceopencv) example). +NetGear_Async provides complete server-client handling and options to use variable protocols/patterns similar to [NetGear API](../../netgear/overview/). Furthermore, NetGear_Async allows us to define our custom Server as source to transform frames easily before sending them across the network(see this [doc](../usage/#using-netgear_async-with-a-custom-sourceopencv) example). + +NetGear_Async now supports additional [**bidirectional data transmission**](../advanced/bidirectional_mode) between receiver(client) and sender(server) while transferring frames. Users can easily build complex applications such as like [Real-Time Video Chat](../advanced/bidirectional_mode/#using-bidirectional-mode-for-video-frames-transfer) in just few lines of code. In addition to all this, NetGear_Async API also provides internal wrapper around [VideoGear](../../videogear/overview/), which itself provides internal access to both [CamGear](../../camgear/overview/) and [PiGear](../../pigear/overview/) APIs, thereby granting it exclusive power for transferring frames incoming from any source to the network. 
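For illustration, that internal wrapper means a Server can be launched directly from any camera index or video file without writing a custom frame generator. A minimal sketch (the source value is illustrative):

```python
# import required libraries
from vidgear.gears.asyncio import NetGear_Async
import asyncio

# launch a Server that streams frames directly from the first webcam (index 0)
server = NetGear_Async(source=0, logging=True).launch()

if __name__ == "__main__":
    # set event loop and run the server task until it completes
    asyncio.set_event_loop(server.loop)
    try:
        server.loop.run_until_complete(server.task)
    except (KeyboardInterrupt, SystemExit):
        # wait for interrupts
        pass
    finally:
        # safely close server
        server.close()
```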
diff --git a/docs/gears/netgear_async/params.md b/docs/gears/netgear_async/params.md index f727ae5ee..4b1180ba4 100644 --- a/docs/gears/netgear_async/params.md +++ b/docs/gears/netgear_async/params.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -150,6 +150,34 @@ In NetGear_Async, the Receiver-end keeps tracks if frames are received from Serv NetGear_Async(timeout=5.0) # sets 5secs timeout ``` +## **`options`** + +This parameter provides the flexibility to alter various NetGear_Async API's internal properties and modes. + +**Data-Type:** Dictionary + +**Default Value:** Its default value is `{}` + +**Usage:** + + +!!! abstract "Supported dictionary attributes for NetGear_Async API" + + * **`bidirectional_mode`** (_boolean_) : This internal attribute activates the exclusive [**Bidirectional Mode**](../advanced/bidirectional_mode/), if enabled(`True`). + + +The desired attributes can be passed to NetGear_Async API as follows: + +```python +# formatting parameters as dictionary attributes +options = { + "bidirectional_mode": True, +} +# assigning it +NetGear_Async(logging=True, **options) +``` + +     diff --git a/docs/gears/netgear_async/usage.md b/docs/gears/netgear_async/usage.md index 38a1933f7..63323b049 100644 --- a/docs/gears/netgear_async/usage.md +++ b/docs/gears/netgear_async/usage.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -100,7 +100,7 @@ async def main(): key = cv2.waitKey(1) & 0xFF # await before continuing - await asyncio.sleep(0.00001) + await asyncio.sleep(0) if __name__ == "__main__": @@ -162,7 +162,7 @@ async def main(): key = cv2.waitKey(1) & 0xFF # await before continuing - await asyncio.sleep(0.00001) + await asyncio.sleep(0) if __name__ == "__main__": @@ -223,7 +223,9 @@ if __name__ == "__main__": ## Using NetGear_Async with a Custom Source(OpenCV) -NetGear_Async allows you to easily define your own custom Source at Server-end that you want to use to manipulate your frames before sending them onto the network. Let's implement a bare-minimum example with a Custom Source using NetGear_Async API and OpenCV: +NetGear_Async allows you to easily define your own custom Source at Server-end that you want to use to transform your frames before sending them onto the network. + +Let's implement a bare-minimum example with a Custom Source using NetGear_Async API and OpenCV: ### Server's End @@ -237,16 +239,16 @@ from vidgear.gears.asyncio import NetGear_Async import cv2, asyncio # initialize Server without any source -server = NetGear_Async(logging=True) +server = NetGear_Async(source=None, logging=True) + +# !!! define your own video source here !!! +# Open any video stream such as live webcam +# video stream on first index(i.e. 0) device +stream = cv2.VideoCapture(0) # Create a async frame generator as custom source async def my_frame_generator(): - # !!! define your own video source here !!! 
- # Open any video stream such as live webcam - # video stream on first index(i.e. 0) device - stream = cv2.VideoCapture(0) - # loop over stream until its terminated while True: @@ -255,7 +257,6 @@ async def my_frame_generator(): # check if frame empty if not grabbed: - # if True break the infinite loop break # do something with the frame to be sent here @@ -263,7 +264,7 @@ async def my_frame_generator(): # yield frame yield frame # sleep for sometime - await asyncio.sleep(0.00001) + await asyncio.sleep(0) if __name__ == "__main__": @@ -280,6 +281,8 @@ if __name__ == "__main__": # wait for interrupts pass finally: + # close stream + stream.release() # finally close the server server.close() ``` @@ -313,7 +316,7 @@ async def main(): key = cv2.waitKey(1) & 0xFF # await before continuing - await asyncio.sleep(0.01) + await asyncio.sleep(0) if __name__ == "__main__": @@ -371,6 +374,7 @@ if __name__ == "__main__": ``` ### Client's End + Then open another terminal on the same system and execute the following python code and see the output: !!! warning "Client will throw TimeoutError if it fails to connect to the Server in given [`timeout`](../params/#timeout) value!" @@ -404,7 +408,7 @@ async def main(): key = cv2.waitKey(1) & 0xFF # await before continuing - await asyncio.sleep(0.00001) + await asyncio.sleep(0) if __name__ == "__main__": @@ -425,4 +429,10 @@ if __name__ == "__main__": writer.close() ``` -  \ No newline at end of file +  + +## Bonus Examples + +!!! example "Checkout more advanced NetGear_Async examples with unusual configuration [here ➶](../../../help/netgear_async_ex/)" + +  \ No newline at end of file diff --git a/docs/gears/pigear/overview.md b/docs/gears/pigear/overview.md index d5b93d461..2cba27deb 100644 --- a/docs/gears/pigear/overview.md +++ b/docs/gears/pigear/overview.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/gears/pigear/params.md b/docs/gears/pigear/params.md index 9ce6e0f1f..b28e72fd1 100644 --- a/docs/gears/pigear/params.md +++ b/docs/gears/pigear/params.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/gears/pigear/usage.md b/docs/gears/pigear/usage.md index 95de21793..78ec04348 100644 --- a/docs/gears/pigear/usage.md +++ b/docs/gears/pigear/usage.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -204,4 +204,76 @@ cv2.destroyAllWindows() stream.stop() ``` +  + +## Using PiGear with WriteGear API + +PiGear can be easily used with WriteGear API directly without any compatibility issues. 
The suitable example is as follows: + +```python +# import required libraries +from vidgear.gears import PiGear +from vidgear.gears import WriteGear +import cv2 + +# add various Picamera tweak parameters to dictionary +options = { + "hflip": True, + "exposure_mode": "auto", + "iso": 800, + "exposure_compensation": 15, + "awb_mode": "horizon", + "sensor_mode": 0, +} + +# define suitable (Codec,CRF,preset) FFmpeg parameters for writer +output_params = {"-vcodec": "libx264", "-crf": 0, "-preset": "fast"} + +# open pi video stream with defined parameters +stream = PiGear(resolution=(640, 480), framerate=60, logging=True, **options).start() + +# Define writer with defined parameters and suitable output filename for e.g. `Output.mp4` +writer = WriteGear(output_filename="Output.mp4", logging=True, **output_params) + +# loop over +while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + # lets convert frame to gray for this example + gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) + + # write gray frame to writer + writer.write(gray) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + +# close output window +cv2.destroyAllWindows() + +# safely close video stream +stream.stop() + +# safely close writer +writer.close() +``` + +  + +## Bonus Examples + +!!! example "Checkout more advanced PiGear examples with unusual configuration [here ➶](../../../help/pigear_ex/)" +   \ No newline at end of file diff --git a/docs/gears/screengear/overview.md b/docs/gears/screengear/overview.md index 9b1dbec55..e6505cc79 100644 --- a/docs/gears/screengear/overview.md +++ b/docs/gears/screengear/overview.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/gears/screengear/params.md b/docs/gears/screengear/params.md index 0c22d9ae6..bc782b063 100644 --- a/docs/gears/screengear/params.md +++ b/docs/gears/screengear/params.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/gears/screengear/usage.md b/docs/gears/screengear/usage.md index e959a12d5..9dd7c6ce3 100644 --- a/docs/gears/screengear/usage.md +++ b/docs/gears/screengear/usage.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
@@ -122,7 +122,7 @@ from vidgear.gears import ScreenGear import cv2 # open video stream with defined parameters with monitor at index `1` selected -stream = ScreenGear(monitor=1, logging=True, **options).start() +stream = ScreenGear(monitor=1, logging=True).start() # loop over while True: @@ -167,7 +167,7 @@ from vidgear.gears import ScreenGear import cv2 # open video stream with defined parameters and `mss` backend for extracting frames. -stream = ScreenGear(backend="mss", logging=True, **options).start() +stream = ScreenGear(backend="mss", logging=True).start() # loop over while True: @@ -321,4 +321,10 @@ stream.stop() writer.close() ``` -  \ No newline at end of file +  + +## Bonus Examples + +!!! example "Checkout more advanced NetGear examples with unusual configuration [here ➶](../../../help/screengear_ex/)" + +  \ No newline at end of file diff --git a/docs/gears/stabilizer/overview.md b/docs/gears/stabilizer/overview.md index d6fe2f3b7..7d6af0d99 100644 --- a/docs/gears/stabilizer/overview.md +++ b/docs/gears/stabilizer/overview.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -29,7 +29,7 @@ limitations under the License.

VidGear's Stabilizer in Action
(Video Credits @SIGGRAPH2013)

-!!! info "This video is transcoded with [**StreamGear API**](../../streamgear/overview/) and hosted on [GitHub Repository](https://github.com/abhiTronix/vidgear-docs-additionals) and served with [raw.githack.com](https://raw.githack.com)" +!!! info "This video is transcoded with [**StreamGear API**](../../streamgear/introduction/) and hosted on [GitHub Repository](https://github.com/abhiTronix/vidgear-docs-additionals) and served with [raw.githack.com](https://raw.githack.com)" @@ -37,7 +37,7 @@ limitations under the License. > Stabilizer is an auxiliary class that enables Video Stabilization for vidgear with minimalistic latency, and at the expense of little to no additional computational requirements. -The basic idea behind it is to tracks and save the salient feature array for the given number of frames and then uses these anchor point to cancel out all perturbations relative to it for the incoming frames in the queue. This class relies heavily on [**Threaded Queue mode**](../../../bonus/TQM/) for error-free & ultra-fast frame handling. +The basic idea behind it is to tracks and save the salient feature array for the given number of frames and then uses these anchor point to cancel out all perturbations relative to it for the incoming frames in the queue. This class relies on [**Fixed-Size Python Queues**](../../../bonus/TQM/#b-utilizes-fixed-size-queues) for error-free & ultra-fast frame handling. !!! tip "For more detailed information on Stabilizer working, See [this blogpost ➶](https://learnopencv.com/video-stabilization-using-point-feature-matching-in-opencv/)" diff --git a/docs/gears/stabilizer/params.md b/docs/gears/stabilizer/params.md index 8d9560de9..f49b21519 100644 --- a/docs/gears/stabilizer/params.md +++ b/docs/gears/stabilizer/params.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/gears/stabilizer/usage.md b/docs/gears/stabilizer/usage.md index 6931c1c32..acd7ca2ae 100644 --- a/docs/gears/stabilizer/usage.md +++ b/docs/gears/stabilizer/usage.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -67,7 +67,7 @@ while True: if stabilized_frame is None: continue - # {do something with the stabilized_frame frame here} + # {do something with the stabilized frame here} # Show output window cv2.imshow("Output Stabilized Frame", stabilized_frame) @@ -121,7 +121,7 @@ while True: if stabilized_frame is None: continue - # {do something with the frame here} + # {do something with the stabilized frame here} # Show output window cv2.imshow("Stabilized Frame", stabilized_frame) @@ -145,7 +145,7 @@ stream.release() ## Using Stabilizer with Variable Parameters -Stabilizer class provide certain [parameters](../params/) which you can use to manipulate its internal properties. The complete usage example is as follows: +Stabilizer class provide certain [parameters](../params/) which you can use to tweak its internal properties. 
The complete usage example is as follows: ```python # import required libraries @@ -176,7 +176,7 @@ while True: if stabilized_frame is None: continue - # {do something with the stabilized_frame frame here} + # {do something with the stabilized frame here} # Show output window cv2.imshow("Output Stabilized Frame", stabilized_frame) @@ -203,6 +203,8 @@ stream.stop() VideoGear's stabilizer can be used in conjunction with WriteGear API directly without any compatibility issues. The complete usage example is as follows: +!!! tip "You can also add live audio input to WriteGear pipeline. See this [bonus example](../../../help)" + ```python # import required libraries from vidgear.gears.stabilizer import Stabilizer @@ -236,7 +238,7 @@ while True: if stabilized_frame is None: continue - # {do something with the frame here} + # {do something with the stabilized frame here} # write stabilized frame to writer writer.write(stabilized_frame) @@ -271,4 +273,10 @@ writer.close() !!! example "The complete usage example can be found [here ➶](../../videogear/usage/#using-videogear-with-video-stabilizer-backend)" +  + +## Bonus Examples + +!!! example "Checkout more advanced Stabilizer examples with unusual configuration [here ➶](../../../help/stabilizer_ex/)" +   \ No newline at end of file diff --git a/docs/gears/streamgear/ffmpeg_install.md b/docs/gears/streamgear/ffmpeg_install.md index 763489bce..2062b5928 100644 --- a/docs/gears/streamgear/ffmpeg_install.md +++ b/docs/gears/streamgear/ffmpeg_install.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -21,7 +21,7 @@ limitations under the License. # FFmpeg Installation Instructions
- FFmpeg + FFmpeg
@@ -68,7 +68,7 @@ The StreamGear API supports _Auto-Installation_ and _Manual Configuration_ metho !!! quote "This is a recommended approach on Windows Machines" -If StreamGear API not receives any input from the user on [**`custom_ffmpeg`**](../params/#custom_ffmpeg) parameter, then on Windows system StreamGear API **auto-generates** the required FFmpeg Static Binaries, according to your system specifications, into the temporary directory _(for e.g. `C:\Temp`)_ of your machine. +If StreamGear API not receives any input from the user on [**`custom_ffmpeg`**](../params/#custom_ffmpeg) parameter, then on Windows system StreamGear API **auto-generates** the required FFmpeg Static Binaries from a dedicated [**Github Server**](https://github.com/abhiTronix/FFmpeg-Builds) into the temporary directory _(for e.g. `C:\Temp`)_ of your machine. !!! warning Important Information @@ -85,7 +85,7 @@ If StreamGear API not receives any input from the user on [**`custom_ffmpeg`**]( * **Download:** You can also manually download the latest Windows Static Binaries(*based on your machine arch(x86/x64)*) from the link below: - *Windows Static Binaries:* http://ffmpeg.zeranoe.com/builds/ + *Windows Static Binaries:* https://ffmpeg.org/download.html#build-windows * **Assignment:** Then, you can easily assign the custom path to the folder containing FFmpeg executables(`for e.g 'C:/foo/Downloads/ffmpeg/bin'`) or path of `ffmpeg.exe` executable itself to the [**`custom_ffmpeg`**](../params/#custom_ffmpeg) parameter in the StreamGear API. diff --git a/docs/gears/streamgear/introduction.md b/docs/gears/streamgear/introduction.md new file mode 100644 index 000000000..b460a41ee --- /dev/null +++ b/docs/gears/streamgear/introduction.md @@ -0,0 +1,179 @@ + + +# StreamGear API + + +
+ StreamGear Flow Diagram +
StreamGear API's generalized workflow
+
+ + +## Overview + +> StreamGear automates transcoding workflow for generating _Ultra-Low Latency, High-Quality, Dynamic & Adaptive Streaming Formats (such as MPEG-DASH and Apple HLS)_ in just a few lines of python code. + +StreamGear provides a standalone, highly extensible, and flexible wrapper around [**FFmpeg**](https://ffmpeg.org/) multimedia framework for generating chunked-encoded media segments of the content. + +StreamGear is an out-of-the-box solution for transcoding source videos/audio files & real-time video frames and breaking them into a sequence of multiple smaller chunks/segments of suitable lengths. These segments make it possible to stream videos at different quality levels _(different bitrates or spatial resolutions)_ and can be switched in the middle of a video from one quality level to another – if bandwidth permits – on a per-segment basis. A user can serve these segments on a web server that makes it easier to download them through HTTP standard-compliant GET requests. + +StreamGear currently supports [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/) _(Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1)_ and [**Apple HLS**](https://developer.apple.com/documentation/http_live_streaming) _(HTTP Live Streaming)_. + +StreamGear also creates a Manifest file _(such as MPD in-case of DASH)_ or a Master Playlist _(such as M3U8 in-case of Apple HLS)_ besides segments, which describes the segment information _(timing, URL, media characteristics like video resolution and adaptive bit rates)_ and is provided to the client before the streaming session. + +!!! alert "For streaming with older traditional protocols such as RTMP, RTSP/RTP you could use [WriteGear](../../writegear/introduction/) API instead." + +  + +!!! new "New in v0.2.2" + + Apple HLS support was added in `v0.2.2`. + + +!!! danger "Important" + + * StreamGear **MUST** have FFmpeg executables for its core operations. Follow these dedicated [Platform specific Installation Instructions ➶](../ffmpeg_install/) for its installation. + + * :warning: StreamGear API will throw **RuntimeError**, if it fails to detect a valid FFmpeg executable on your system. + + * It is advised to enable logging _([`logging=True`](../params/#logging))_ on the first run for easily identifying any runtime errors. + +!!! tip "Useful Links" + + - Checkout [this detailed blogpost](https://ottverse.com/mpeg-dash-video-streaming-the-complete-guide/) on how MPEG-DASH works. + - Checkout [this detailed blogpost](https://ottverse.com/hls-http-live-streaming-how-does-it-work/) on how HLS works. + - Checkout [this detailed blogpost](https://ottverse.com/hls-http-live-streaming-how-does-it-work/) for HLS vs. MPEG-DASH comparison. + + +  + +## Mode of Operations + +StreamGear primarily operates in the following independent modes for transcoding: + + +??? warning "Real-time Frames Mode is NOT Live-Streaming." + + Rather, you can enable live-streaming in Real-time Frames Mode by using the exclusive [`-livestream`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter in StreamGear API. Checkout [this usage example](../rtfm/usage/#bare-minimum-usage-with-live-streaming) for more information. + + +- [**Single-Source Mode**](../ssm/overview): In this mode, StreamGear **transcodes an entire video file** _(as opposed to frame-by-frame)_ into a sequence of multiple smaller chunks/segments for streaming.
This mode works exceptionally well when you're transcoding long-duration lossless videos (with audio) for streaming that requires no interruptions. But on the downside, the provided source cannot be flexibly manipulated or transformed before sending onto FFmpeg Pipeline for processing. + +- [**Real-time Frames Mode**](../rtfm/overview): In this mode, StreamGear directly **transcodes frame-by-frame** _(as opposed to an entire video file)_, into a sequence of multiple smaller chunks/segments for streaming. This mode works exceptionally well when you desire the flexibility to manipulate or transform [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) frames in real-time before sending them onto FFmpeg Pipeline for processing. But on the downside, audio has to be added manually _(as a separate source)_ for streams. + +  + +## Importing + +You can import StreamGear API in your program as follows: + +```python +from vidgear.gears import StreamGear +``` + +  + + +## Watch Demo + +=== "Watch MPEG-DASH Stream" + + Watch StreamGear transcoded MPEG-DASH Stream: +
+
+
+
+
+
+
+

Powered by clappr & shaka-player

+ + !!! info "This video assets _(Manifest and segments)_ are hosted on [GitHub Repository](https://github.com/abhiTronix/vidgear-docs-additionals) and served with [raw.githack.com](https://raw.githack.com)" + + !!! quote "Video Credits: [**"Tears of Steel"** - Project Mango Teaser](https://mango.blender.org/download/)" + +=== "Watch APPLE HLS Stream" + + Watch StreamGear transcoded APPLE HLS Stream: + +
+
+
+
+
+
+
+

Powered by clappr & HlsjsPlayback

+ + !!! info "This video assets _(Playlist and segments)_ are hosted on [GitHub Repository](https://github.com/abhiTronix/vidgear-docs-additionals) and served with [raw.githack.com](https://raw.githack.com)" + + !!! quote "Video Credits: [**"Sintel"** - Project Durian Teaser](https://durian.blender.org/download/)" + +  + +## Recommended Players + +=== "GUI Players" + - [x] **[MPV Player](https://mpv.io/):** _(recommended)_ MPV is a free, open source, and cross-platform media player. It supports a wide variety of media file formats, audio and video codecs, and subtitle types. + - [x] **[VLC Player](https://www.videolan.org/vlc/releases/3.0.0.html):** VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files as well as DVDs, Audio CDs, VCDs, and various streaming protocols. + - [x] **[Parole](https://docs.xfce.org/apps/parole/start):** _(UNIX only)_ Parole is a modern simple media player based on the GStreamer framework for Unix and Unix-like operating systems. + +=== "Command-Line Players" + - [x] **[MP4Client](https://github.com/gpac/gpac/wiki/MP4Client-Intro):** [GPAC](https://gpac.wp.imt.fr/home/) provides a highly configurable multimedia player called MP4Client. GPAC itself is an open source multimedia framework developed for research and academic purposes, and used in many media production chains. + - [x] **[ffplay](https://ffmpeg.org/ffplay.html):** FFplay is a very simple and portable media player using the FFmpeg libraries and the SDL library. It is mostly used as a testbed for the various FFmpeg APIs. + +=== "Online Players" + !!! alert "To run Online players locally, you'll need a HTTP server. For creating one yourself, See [this well-curated list ➶](https://gist.github.com/abhiTronix/7d2798bc9bc62e9e8f1e88fb601d7e7b)" + + - [x] **[Clapper](https://github.com/clappr/clappr):** Clappr is an extensible media player for the web. + - [x] **[Shaka Player](https://github.com/google/shaka-player):** Shaka Player is an open-source JavaScript library for playing adaptive media in a browser. + - [x] **[MediaElementPlayer](https://github.com/mediaelement/mediaelement):** MediaElementPlayer is a complete HTML/CSS audio/video player. + - [x] **[Native MPEG-Dash + HLS Playback](https://chrome.google.com/webstore/detail/native-mpeg-dash-%20-hls-pl/cjfbmleiaobegagekpmlhmaadepdeedn?hl=en)(Chrome Extension):** Allow the browser to play HLS (m3u8) or MPEG-Dash (mpd) video urls 'natively' on chrome browsers. + +  + +## Parameters + + + +## References + + + + +## FAQs + + + +  + +## Bonus Examples + +!!! example "Checkout more advanced StreamGear examples with unusual configuration [here ➶](../../../help/streamgear_ex/)" + +  \ No newline at end of file diff --git a/docs/gears/streamgear/overview.md b/docs/gears/streamgear/overview.md deleted file mode 100644 index 20fee4db7..000000000 --- a/docs/gears/streamgear/overview.md +++ /dev/null @@ -1,146 +0,0 @@ - - -# StreamGear API - - -
- StreamGear Flow Diagram -
StreamGear API's generalized workflow
-
- - -## Overview - -> StreamGear automates transcoding workflow for generating _Ultra-Low Latency, High-Quality, Dynamic & Adaptive Streaming Formats (such as MPEG-DASH)_ in just few lines of python code. - -StreamGear provides a standalone, highly extensible, and flexible wrapper around [**FFmpeg**](https://ffmpeg.org/) multimedia framework for generating chunked-encoded media segments of the content. - -SteamGear easily transcodes source videos/audio files & real-time video-frames and breaks them into a sequence of multiple smaller chunks/segments of fixed length. These segments make it possible to stream videos at different quality levels _(different bitrates or spatial resolutions)_ and can be switched in the middle of a video from one quality level to another – if bandwidth permits – on a per-segment basis. A user can serve these segments on a web server that makes it easier to download them through HTTP standard-compliant GET requests. - -SteamGear also creates a Manifest file _(such as MPD in-case of DASH)_ besides segments that describe these segment information _(timing, URL, media characteristics like video resolution and bit rates)_ and is provided to the client before the streaming session. - -SteamGear currently only supports [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/) _(Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1)_ , but other adaptive streaming technologies such as Apple HLS, Microsoft Smooth Streaming, will be added soon. Also, Multiple DRM support is yet to be implemented. - -  - -!!! danger "Important" - - * StreamGear **MUST** requires FFmpeg executables for its core operations. Follow these dedicated [Platform specific Installation Instructions ➶](../ffmpeg_install/) for its installation. - - * :warning: StreamGear API will throw **RuntimeError**, if it fails to detect valid FFmpeg executables on your system. - - * It is advised to enable logging _([`logging=True`](../params/#logging))_ on the first run for easily identifying any runtime errors. - -  - -## Mode of Operations - -StreamGear primarily works in two independent modes for transcoding which serves different purposes. These modes are as follows: - -### A. Single-Source Mode - -In this mode, StreamGear transcodes entire video/audio file _(as opposed to frames by frame)_ into a sequence of multiple smaller chunks/segments for streaming. This mode works exceptionally well, when you're transcoding lossless long-duration videos(with audio) for streaming and required no extra efforts or interruptions. But on the downside, the provided source cannot be changed or manipulated before sending onto FFmpeg Pipeline for processing. This mode can be easily activated by assigning suitable video path as input to [`-video_source`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter, during StreamGear initialization. ***Learn more about this mode [here ➶](../usage/#a-single-source-mode)*** - -### B. Real-time Frames Mode - -When no valid input is received on [`-video_source`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter, StreamGear API activates this mode where it directly transcodes video-frames _(as opposed to a entire file)_, into a sequence of multiple smaller chunks/segments for streaming. In this mode, StreamGear supports real-time [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) frames, and process them over FFmpeg pipeline. 
But on the downside, audio has to added manually _(as separate source)_ for streams. ***Learn more about this mode [here ➶](../usage/#b-real-time-frames-mode)*** - - -  - -## Watch Demo - -Watch StreamGear transcoded MPEG-DASH Stream: - -
-
-
-
-
-
-
-

Powered by clappr & shaka-player

- -!!! info "This video assets _(Manifest and segments)_ are hosted on [GitHub Repository](https://github.com/abhiTronix/vidgear-docs-additionals) and served with [raw.githack.com](https://raw.githack.com)" - -!!! quote "Video Credits: [**"Tears of Steel"** - Project Mango Teaser](https://mango.blender.org/download/)" - -  - -## Recommended Stream Players - -### GUI Players - -- [x] **[MPV Player](https://mpv.io/):** _(recommended)_ MPV is a free, open source, and cross-platform media player. It supports a wide variety of media file formats, audio and video codecs, and subtitle types. -- [x] **[VLC Player](https://www.videolan.org/vlc/releases/3.0.0.html):** VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files as well as DVDs, Audio CDs, VCDs, and various streaming protocols. -- [x] **[Parole](https://docs.xfce.org/apps/parole/start):** _(UNIX only)_ Parole is a modern simple media player based on the GStreamer framework for Unix and Unix-like operating systems. - -### Command-Line Players - -- [x] **[MP4Client](https://github.com/gpac/gpac/wiki/MP4Client-Intro):** [GPAC](https://gpac.wp.imt.fr/home/) provides a highly configurable multimedia player called MP4Client. GPAC itself is an open source multimedia framework developed for research and academic purposes, and used in many media production chains. -- [x] **[ffplay](https://ffmpeg.org/ffplay.html):** FFplay is a very simple and portable media player using the FFmpeg libraries and the SDL library. It is mostly used as a testbed for the various FFmpeg APIs. - -### Online Players - -!!! tip "To run Online players locally, you'll need a HTTP server. For creating one yourself, See [this well-curated list ➶](https://gist.github.com/abhiTronix/7d2798bc9bc62e9e8f1e88fb601d7e7b)" - -- [x] **[Clapper](https://github.com/clappr/clappr):** Clappr is an extensible media player for the web. -- [x] **[Shaka Player](https://github.com/google/shaka-player):** Shaka Player is an open-source JavaScript library for playing adaptive media in a browser. -- [x] **[MediaElementPlayer](https://github.com/mediaelement/mediaelement):** MediaElementPlayer is a complete HTML/CSS audio/video player. - -  - -## Importing - -You can import StreamGear API in your program as follows: - -```python -from vidgear.gears import StreamGear -``` - -  - -## Usage Examples - - - -## Parameters - - - -## References - - - - -## FAQs - - - -  \ No newline at end of file diff --git a/docs/gears/streamgear/params.md b/docs/gears/streamgear/params.md index ff411907d..ad1ab9470 100644 --- a/docs/gears/streamgear/params.md +++ b/docs/gears/streamgear/params.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -24,10 +24,12 @@ limitations under the License. ## **`output`** -This parameter sets the valid filename/path for storing the StreamGear assets _(Manifest file(such as Media Presentation Description(MPD) in-case of DASH) & Transcoded sequence of segments)_. +This parameter sets the valid filename/path for storing the StreamGear assets _(Manifest file (such as MPD in-case of DASH) or a Master Playlist (such as M3U8 in-case of Apple HLS) & Transcoded sequence of segments)_. !!! 
warning "StreamGear API will throw `ValueError` if `output` provided is empty or invalid." +!!! error "Make sure to provide _valid filename with valid file-extension_ for selected [`format`](#format) value _(such as `.mpd` in case of MPEG-DASH and `.m3u8` in case of APPLE-HLS)_, otherwise StreamGear will throw `AssertionError`." + !!! note "StreamGear generated sequence of multiple chunks/segments are also stored in the same directory." !!! tip "You can easily delete all previous assets at `output` location, by using [`-clear_prev_assets`](#a-exclusive-parameters) attribute of [`stream_params`](#stream_params) dictionary parameter." @@ -40,41 +42,77 @@ Its valid input can be one of the following: * **Path to directory**: Valid path of the directory. In this case, StreamGear API will automatically assign a unique filename for Manifest file. This can be defined as follows: - ```python - streamer = StreamGear(output = '/home/foo/foo1') # Define streamer with manifest saving directory path - ``` + === "DASH" + + ```python + streamer = StreamGear(output = "/home/foo/foo1") # Define streamer with manifest saving directory path + ``` + + === "HLS" + + ```python + streamer = StreamGear(output = "/home/foo/foo1", format="hls") # Define streamer with playlist saving directory path + ``` * **Filename** _(with/without path)_: Valid filename(_with valid extension_) of the output Manifest file. In case filename is provided without path, then current working directory will be used. - ```python - streamer = StreamGear(output = 'output_foo.mpd') # Define streamer with manifest file name - ``` + === "DASH" - !!! warning "Make sure to provide _valid filename with valid file-extension_ for selected [format](#format) value _(such as `output.mpd` in case of MPEG-DASH)_, otherwise StreamGear will throw `AssertionError`." + ```python + streamer = StreamGear(output = "output_foo.mpd") # Define streamer with manifest file name + ``` + === "HLS" + + ```python + streamer = StreamGear(output = "output_foo.m3u8", format="hls") # Define streamer with playlist file name + ``` * **URL**: Valid URL of a network stream with a protocol supported by installed FFmpeg _(verify with command `ffmpeg -protocols`)_ only. This is useful for directly storing assets to a network server. For example, you can use a `http` protocol URL as follows: - ```python - streamer = StreamGear(output = 'http://195.167.1.101/live/test.mpd') #Define streamer - ``` + + === "DASH" + + ```python + streamer = StreamGear(output = "http://195.167.1.101/live/test.mpd") #Define streamer + ``` + + === "HLS" + + ```python + streamer = StreamGear(output = "http://195.167.1.101/live/test.m3u8", format="hls") #Define streamer + ```   -## **`formats`** +## **`format`** -This parameter select the adaptive HTTP streaming format. HTTP streaming works by breaking the overall stream into a sequence of small HTTP-based file downloads, each downloading one short chunk of an overall potentially unbounded transport stream. For now, the only supported format is: `'dash'` _(i.e [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/))_, but other adaptive streaming technologies such as Apple HLS, Microsoft Smooth Streaming, will be added soon. +This parameter select the adaptive HTTP streaming formats. For now, the supported format are: `dash` _(i.e [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/))_ and `hls` _(i.e [**Apple HLS**](https://developer.apple.com/documentation/http_live_streaming))_. + +!!! 
warning "Any invalid value to `format` parameter will result in ValueError!" + +!!! error "Make sure to provide _valid filename with valid file-extension_ in [`output`](#output) for selected `format` value _(such as `.mpd` in case of MPEG-DASH and `.m3u8` in case of APPLE-HLS)_, otherwise StreamGear will throw `AssertionError`." + **Data-Type:** String -**Default Value:** Its default value is `'dash'` +**Default Value:** Its default value is `dash` **Usage:** -```python -StreamGear(output = 'output_foo.mpd', format="dash") -``` +=== "DASH" + + ```python + StreamGear(output = "output_foo.mpd", format="dash") + ``` + +=== "HLS" + + ```python + StreamGear(output = "output_foo.m3u8", format="hls") + ``` +   @@ -151,7 +189,7 @@ StreamGear API provides some exclusive internal parameters to easily generate St **Usage:** You can easily define any number of streams using `-streams` attribute as follows: - !!! tip "Usage example can be found [here ➶](../usage/#a2-usage-with-additional-streams)" + !!! tip "Usage example can be found [here ➶](../ssm/usage/#usage-with-additional-streams)" ```python stream_params = @@ -164,9 +202,9 @@ StreamGear API provides some exclusive internal parameters to easily generate St   -* **`-video_source`** _(string)_: This attribute takes valid Video path as input and activates [**Single-Source Mode**](../usage/#a-single-source-mode), for transcoding it into multiple smaller chunks/segments for streaming after successful validation. Its value be one of the following: +* **`-video_source`** _(string)_: This attribute takes valid Video path as input and activates [**Single-Source Mode**](../ssm/overview), for transcoding it into multiple smaller chunks/segments for streaming after successful validation. Its value be one of the following: - !!! tip "Usage example can be found [here ➶](../usage/#a1-bare-minimum-usage)" + !!! tip "Usage example can be found [here ➶](../ssm/usage/#bare-minimum-usage)" * **Video Filename**: Valid path to Video file as follows: ```python @@ -183,17 +221,17 @@ StreamGear API provides some exclusive internal parameters to easily generate St   -* **`-audio`** _(dict)_: This attribute takes external custom audio path as audio-input for all StreamGear streams. Its value be one of the following: +* **`-audio`** _(string/list)_: This attribute takes external custom audio path _(as `string`)_ or audio device name followed by suitable demuxer _(as `list`)_ as audio source input for all StreamGear streams. Its value be one of the following: !!! failure "Make sure this audio-source is compatible with provided video -source, otherwise you encounter multiple errors, or even no output at all!" - !!! tip "Usage example can be found [here ➶](../usage/#a3-usage-with-custom-audio)" - - * **Audio Filename**: Valid path to Audio file as follows: + * **Audio Filename** _(string)_: Valid path to Audio file as follows: ```python stream_params = {"-audio": "/home/foo/foo1.aac"} # set input audio source: /home/foo/foo1.aac ``` - * **Audio URL**: Valid URL of a network audio stream as follows: + !!! tip "Usage example can be found [here ➶](../ssm/usage/#usage-with-custom-audio)" + + * **Audio URL** _(string)_: Valid URL of a network audio stream as follows: !!! danger "Make sure given Video URL has protocol that is supported by installed FFmpeg. 
_(verify with `ffmpeg -protocols` terminal command)_" @@ -201,6 +239,15 @@ StreamGear API provides some exclusive internal parameters to easily generate St stream_params = {"-audio": "https://exampleaudio.org/example-160.mp3"} # set input audio source: https://exampleaudio.org/example-160.mp3 ``` + * **Device name and Demuxer** _(list)_: Valid audio device name followed by suitable demuxer as follows: + + ```python + stream_params = {"-audio": "https://exampleaudio.org/example-160.mp3"} # set input audio source: https://exampleaudio.org/example-160.mp3 + ``` + !!! tip "Usage example can be found [here ➶](../rtfm/usage/#usage-with-device-audio--input)" + + +   * **`-livestream`** _(bool)_: ***(optional)*** specifies whether to enable **Livestream Support**_(chunks will contain information for new frames only)_ for the selected mode, or not. You can easily set it to `True` to enable this feature, and default value is `False`. It can be used as follows: @@ -215,7 +262,7 @@ StreamGear API provides some exclusive internal parameters to easily generate St * **`-input_framerate`** _(float/int)_ : ***(optional)*** specifies the assumed input video source framerate, and only works in [Real-time Frames Mode](../usage/#b-real-time-frames-mode). It can be used as follows: - !!! tip "Usage example can be found [here ➶](../usage/#b3-bare-minimum-usage-with-controlled-input-framerate)" + !!! tip "Usage example can be found [here ➶](../rtfm/usage/#bare-minimum-usage-with-controlled-input-framerate)" ```python stream_params = {"-input_framerate": 60.0} # set input video source framerate to 60fps @@ -265,7 +312,9 @@ StreamGear API provides some exclusive internal parameters to easily generate St   -* **`-clear_prev_assets`** _(bool)_: ***(optional)*** specify whether to force-delete any previous copies of StreamGear Assets _(i.e. Manifest files(.mpd) & streaming chunks(.m4s))_ present at path specified by [`output`](#output) parameter. You can easily set it to `True` to enable this feature, and default value is `False`. It can be used as follows: +* **`-clear_prev_assets`** _(bool)_: ***(optional)*** specify whether to force-delete any previous copies of StreamGear Assets _(i.e. Manifest files(.mpd) & streaming chunks(.m4s) etc.)_ present at path specified by [`output`](#output) parameter. You can easily set it to `True` to enable this feature, and default value is `False`. It can be used as follows: + + !!! info "In Single-Source Mode, additional segments _(such as `.webm`, `.mp4` chunks)_ are also cleared automatically." ```python stream_params = {"-clear_prev_assets": True} # will delete all previous assets @@ -279,6 +328,10 @@ Almost all FFmpeg parameter can be passed as dictionary attributes in `stream_pa !!! tip "Kindly check [H.264 docs ➶](https://trac.ffmpeg.org/wiki/Encode/H.264) and other [FFmpeg Docs ➶](https://ffmpeg.org/documentation.html) for more information on these parameters" + +!!! error "All ffmpeg parameters are case-sensitive. Remember to double check every parameter if any error occurs." + + !!! note "In addition to these parameters, almost any FFmpeg parameter _(supported by installed FFmpeg)_ is also supported. But make sure to read [**FFmpeg Docs**](https://ffmpeg.org/documentation.html) carefully first." ```python @@ -291,6 +344,8 @@ stream_params = {"-vcodec":"libx264", "-crf": 0, "-preset": "fast", "-tune": "ze All the encoders and decoders that are compiled with FFmpeg in use, are supported by WriteGear API. 
You can easily check the compiled encoders by running following command in your terminal: +!!! info "Similarly, supported demuxers and filters depend upon the compiled FFmpeg in use." + +```sh +# for checking encoder +ffmpeg -encoders # use `ffmpeg.exe -encoders` on windows diff --git a/docs/gears/streamgear/rtfm/overview.md new file mode 100644 index 000000000..0f8649233 --- /dev/null +++ b/docs/gears/streamgear/rtfm/overview.md @@ -0,0 +1,90 @@ + + +# StreamGear API: Real-time Frames Mode + + +
+ Real-time Frames Mode Flow Diagram +
Real-time Frames Mode generalized workflow
+
+ + +## Overview + +When no valid input is received on [`-video_source`](../../params/#a-exclusive-parameters) attribute of [`stream_params`](../../params/#supported-parameters) dictionary parameter, StreamGear API activates this mode where it directly transcodes real-time [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) video-frames _(as opposed to an entire video file)_ into a sequence of multiple smaller chunks/segments for adaptive streaming. + +This mode works exceptionally well when you desire the flexibility to manipulate or transform video-frames in real-time before sending them onto FFmpeg Pipeline for processing. But on the downside, StreamGear **DOES NOT** automatically map video-source's audio to generated streams with this mode. You need to manually assign a separate audio-source through [`-audio`](../../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter. + +StreamGear supports both [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/) _(Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1)_ and [**Apple HLS**](https://developer.apple.com/documentation/http_live_streaming) _(HTTP Live Streaming)_ with this mode. + +For this mode, StreamGear API provides exclusive [`stream()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) method for directly transcoding video-frames into streamable chunks. + +  + +!!! new "New in v0.2.2" + + Apple HLS support was added in `v0.2.2`. + + +!!! alert "Real-time Frames Mode is NOT Live-Streaming." + + Rather, you can easily enable live-streaming in Real-time Frames Mode by using StreamGear API's exclusive [`-livestream`](../../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter. Checkout its [usage example here](../usage/#bare-minimum-usage-with-live-streaming). + + +!!! danger + + * Using [`transcode_source()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.transcode_source) function instead of [`stream()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) in Real-time Frames Mode will instantly result in **`RuntimeError`**! + + * **NEVER** assign anything to [`-video_source`](../../params/#a-exclusive-parameters) attribute of [`stream_params`](../../params/#supported-parameters) dictionary parameter, otherwise [Single-Source Mode](../#a-single-source-mode) may get activated, and as a result, using [`stream()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) function will throw **`RuntimeError`**! + + * You **MUST** use [`-input_framerate`](../../params/#a-exclusive-parameters) attribute to set the exact value of input framerate when using external audio in this mode, otherwise audio delay will occur in output streams. + + * Input framerate defaults to `25.0` fps if [`-input_framerate`](../../params/#a-exclusive-parameters) attribute value is not defined. + + +  + +## Usage Examples + + + +## Parameters + + + +## References + + + + +## FAQs + + + +  \ No newline at end of file diff --git a/docs/gears/streamgear/rtfm/usage.md new file mode 100644 index 000000000..6008a65a7 --- /dev/null +++ b/docs/gears/streamgear/rtfm/usage.md @@ -0,0 +1,1288 @@ + + +# StreamGear API Usage Examples: Real-time Frames Mode + + +!!! alert "Real-time Frames Mode is NOT Live-Streaming."
+ + Rather you can easily enable live-streaming in Real-time Frames Mode by using StreamGear API's exclusive [`-livestream`](../../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter. Checkout following [usage example](#bare-minimum-usage-with-live-streaming). + +!!! warning "Important Information" + + * StreamGear **MUST** requires FFmpeg executables for its core operations. Follow these dedicated [Platform specific Installation Instructions ➶](../../ffmpeg_install/) for its installation. + + * StreamGear API will throw **RuntimeError**, if it fails to detect valid FFmpeg executables on your system. + + * By default, when no additional streams are defined, ==StreamGear generates a primary stream of same resolution and framerate[^1] as the input video _(at the index `0`)_.== + + * Always use `terminate()` function at the very end of the main code. + + +  + + +## Bare-Minimum Usage + +Following is the bare-minimum code you need to get started with StreamGear API in Real-time Frames Mode: + +!!! note "We are using [CamGear](../../../camgear/overview/) in this Bare-Minimum example, but any [VideoCapture Gear](../../../#a-videocapture-gears) will work in the similar manner." + + +=== "DASH" + + ```python + # import required libraries + from vidgear.gears import CamGear + from vidgear.gears import StreamGear + import cv2 + + # open any valid video stream(for e.g `foo1.mp4` file) + stream = CamGear(source='foo1.mp4').start() + + # describe a suitable manifest-file location/name + streamer = StreamGear(output="dash_out.mpd") + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + + # {do something with the frame here} + + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + +=== "HLS" + + ```python + # import required libraries + from vidgear.gears import CamGear + from vidgear.gears import StreamGear + import cv2 + + # open any valid video stream(for e.g `foo1.mp4` file) + stream = CamGear(source='foo1.mp4').start() + + # describe a suitable manifest-file location/name + streamer = StreamGear(output="hls_out.m3u8", format = "hls") + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + + # {do something with the frame here} + + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + +!!! success "After running this bare-minimum example, StreamGear will produce a Manifest file _(`dash.mpd`)_ with streamable chunks that contains information about a Primary Stream of same resolution and framerate[^1] as input _(without any audio)_." 
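As a quick sanity check after `terminate()` returns, you can list the assets StreamGear wrote next to the manifest/playlist. This is a plain-Python sketch with no VidGear API involved; it assumes the example output names used above and that everything was written to the current working directory, and the segment extensions shown are only typical defaults:

```python
# plain-Python sanity check: list the manifest/playlist and media segments
# produced by the bare-minimum examples above (assumes current working directory)
from pathlib import Path

out_dir = Path(".")

# manifest (DASH) or playlist (HLS) files, e.g. dash_out.mpd / hls_out.m3u8
print(sorted(p.name for p in out_dir.glob("dash_out*")))
print(sorted(p.name for p in out_dir.glob("hls_out*")))

# transcoded media segments/chunks (commonly `.m4s` for DASH, `.ts` for HLS)
print(len(list(out_dir.glob("*.m4s"))), "DASH segments found")
print(len(list(out_dir.glob("*.ts"))), "HLS segments found")
```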
+ + +  + +## Bare-Minimum Usage with Live-Streaming + +You can easily activate ==Low-latency Livestreaming in Real-time Frames Mode==, where chunks will contain information for few new frames only and forgets all previous ones), using exclusive [`-livestream`](../../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter as follows: + +!!! tip "Use `-window_size` & `-extra_window_size` FFmpeg parameters for controlling number of frames to be kept in Chunks. Less these value, less will be latency." + +!!! alert "After every few chunks _(equal to the sum of `-window_size` & `-extra_window_size` values)_, all chunks will be overwritten in Live-Streaming. Thereby, since newer chunks in manifest/playlist will contain NO information of any older ones, and therefore resultant DASH/HLS stream will play only the most recent frames." + +!!! note "In this mode, StreamGear **DOES NOT** automatically maps video-source audio to generated streams. You need to manually assign separate audio-source through [`-audio`](../../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter." + +=== "DASH" + + ```python + # import required libraries + from vidgear.gears import CamGear + from vidgear.gears import StreamGear + import cv2 + + # open any valid video stream(from web-camera attached at index `0`) + stream = CamGear(source=0).start() + + # enable livestreaming and retrieve framerate from CamGear Stream and + # pass it as `-input_framerate` parameter for controlled framerate + stream_params = {"-input_framerate": stream.framerate, "-livestream": True} + + # describe a suitable manifest-file location/name + streamer = StreamGear(output="dash_out.mpd", **stream_params) + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + +=== "HLS" + + ```python + # import required libraries + from vidgear.gears import CamGear + from vidgear.gears import StreamGear + import cv2 + + # open any valid video stream(from web-camera attached at index `0`) + stream = CamGear(source=0).start() + + # enable livestreaming and retrieve framerate from CamGear Stream and + # pass it as `-input_framerate` parameter for controlled framerate + stream_params = {"-input_framerate": stream.framerate, "-livestream": True} + + # describe a suitable manifest-file location/name + streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params) + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + + +  + +## Bare-Minimum Usage with RGB Mode + +In Real-time Frames Mode, StreamGear API provide 
[`rgb_mode`](../../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) boolean parameter with its `stream()` function, which if enabled _(i.e. `rgb_mode=True`)_, specifies that incoming frames are of RGB format _(instead of default BGR format)_, thereby also known as ==RGB Mode==. + +The complete usage example is as follows: + +=== "DASH" + + ```python + # import required libraries + from vidgear.gears import CamGear + from vidgear.gears import StreamGear + import cv2 + + # open any valid video stream(for e.g `foo1.mp4` file) + stream = CamGear(source='foo1.mp4').start() + + # describe a suitable manifest-file location/name + streamer = StreamGear(output="dash_out.mpd") + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + + # {simulating RGB frame for this example} + frame_rgb = frame[:,:,::-1] + + + # send frame to streamer + streamer.stream(frame_rgb, rgb_mode = True) #activate RGB Mode + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + +=== "HLS" + + ```python + # import required libraries + from vidgear.gears import CamGear + from vidgear.gears import StreamGear + import cv2 + + # open any valid video stream(for e.g `foo1.mp4` file) + stream = CamGear(source='foo1.mp4').start() + + # describe a suitable manifest-file location/name + streamer = StreamGear(output="hls_out.m3u8", format = "hls") + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + + # {simulating RGB frame for this example} + frame_rgb = frame[:,:,::-1] + + + # send frame to streamer + streamer.stream(frame_rgb, rgb_mode = True) #activate RGB Mode + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + + +  + +## Bare-Minimum Usage with controlled Input-framerate + +In Real-time Frames Mode, StreamGear API provides exclusive [`-input_framerate`](../../params/#a-exclusive-parameters) attribute for its `stream_params` dictionary parameter, that allow us to set the assumed constant framerate for incoming frames. + +In this example, we will retrieve framerate from webcam video-stream, and set it as value for `-input_framerate` attribute in StreamGear: + +!!! danger "Remember, Input framerate default to `25.0` fps if [`-input_framerate`](../../params/#a-exclusive-parameters) attribute value not defined in Real-time Frames mode." + + +=== "DASH" + + ```python + # import required libraries + from vidgear.gears import CamGear + from vidgear.gears import StreamGear + import cv2 + + # Open live video stream on webcam at first index(i.e. 
0) device + stream = CamGear(source=0).start() + + # retrieve framerate from CamGear Stream and pass it as `-input_framerate` value + stream_params = {"-input_framerate":stream.framerate} + + # describe a suitable manifest-file location/name and assign params + streamer = StreamGear(output="dash_out.mpd", **stream_params) + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + + # {do something with the frame here} + + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + +=== "HLS" + + ```python + # import required libraries + from vidgear.gears import CamGear + from vidgear.gears import StreamGear + import cv2 + + # Open live video stream on webcam at first index(i.e. 0) device + stream = CamGear(source=0).start() + + # retrieve framerate from CamGear Stream and pass it as `-input_framerate` value + stream_params = {"-input_framerate":stream.framerate} + + # describe a suitable manifest-file location/name and assign params + streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params) + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + + # {do something with the frame here} + + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + +  + +## Bare-Minimum Usage with OpenCV + +You can easily use StreamGear API directly with any other Video Processing library(_For e.g. [OpenCV](https://github.com/opencv/opencv) itself_) in Real-time Frames Mode. + +The complete usage example is as follows: + +!!! tip "This just a bare-minimum example with OpenCV, but any other Real-time Frames Mode feature/example will work in the similar manner." + +=== "DASH" + + ```python + # import required libraries + from vidgear.gears import StreamGear + import cv2 + + # Open suitable video stream, such as webcam on first index(i.e. 0) + stream = cv2.VideoCapture(0) + + # describe a suitable manifest-file location/name + streamer = StreamGear(output="dash_out.mpd") + + # loop over + while True: + + # read frames from stream + (grabbed, frame) = stream.read() + + # check for frame if not grabbed + if not grabbed: + break + + # {do something with the frame here} + # lets convert frame to gray for this example + gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) + + + # send frame to streamer + streamer.stream(gray) + + # Show output window + cv2.imshow("Output Gray Frame", gray) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.release() + + # safely close streamer + streamer.terminate() + ``` + +=== "HLS" + + ```python + # import required libraries + from vidgear.gears import StreamGear + import cv2 + + # Open suitable video stream, such as webcam on first index(i.e. 
0) + stream = cv2.VideoCapture(0) + + # describe a suitable manifest-file location/name + streamer = StreamGear(output="hls_out.m3u8", format = "hls") + + # loop over + while True: + + # read frames from stream + (grabbed, frame) = stream.read() + + # check for frame if not grabbed + if not grabbed: + break + + # {do something with the frame here} + # lets convert frame to gray for this example + gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) + + + # send frame to streamer + streamer.stream(gray) + + # Show output window + cv2.imshow("Output Gray Frame", gray) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.release() + + # safely close streamer + streamer.terminate() + ``` + + +  + +## Usage with Additional Streams + +Similar to Single-Source Mode, you can easily generate any number of additional Secondary Streams of variable bitrates or spatial resolutions, using exclusive [`-streams`](../../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter. You just need to add each resolution and bitrate/framerate as list of dictionaries to this attribute, and rest is done automatically. + +!!! info "A more detailed information on `-streams` attribute can be found [here ➶](../../params/#a-exclusive-parameters)" + +The complete example is as follows: + +!!! danger "Important `-streams` attribute Information" + * On top of these additional streams, StreamGear by default, generates a primary stream of same resolution and framerate[^1] as the input, at the index `0`. + * :warning: Make sure your System/Machine/Server/Network is able to handle these additional streams, discretion is advised! + * You **MUST** need to define `-resolution` value for your stream, otherwise stream will be discarded! + * You only need either of `-video_bitrate` or `-framerate` for defining a valid stream. Since with `-framerate` value defined, video-bitrate is calculated automatically. + * If you define both `-video_bitrate` and `-framerate` values at the same time, StreamGear will discard the `-framerate` value automatically. + +!!! fail "Always use `-stream` attribute to define additional streams safely, any duplicate or incorrect definition can break things!" + + +=== "DASH" + + ```python + # import required libraries + from vidgear.gears import CamGear + from vidgear.gears import StreamGear + import cv2 + + # Open suitable video stream, such as webcam on first index(i.e. 
0) + stream = CamGear(source=0).start() + + # define various streams + stream_params = { + "-streams": [ + {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps framerate + {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps framerate + {"-resolution": "320x240", "-video_bitrate": "500k"}, # Stream3: 320x240 at 500kbs bitrate + ], + } + + # describe a suitable manifest-file location/name and assign params + streamer = StreamGear(output="dash_out.mpd") + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + + # {do something with the frame here} + + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + +=== "HLS" + + ```python + # import required libraries + from vidgear.gears import CamGear + from vidgear.gears import StreamGear + import cv2 + + # Open suitable video stream, such as webcam on first index(i.e. 0) + stream = CamGear(source=0).start() + + # define various streams + stream_params = { + "-streams": [ + {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps framerate + {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps framerate + {"-resolution": "320x240", "-video_bitrate": "500k"}, # Stream3: 320x240 at 500kbs bitrate + ], + } + + # describe a suitable manifest-file location/name and assign params + streamer = StreamGear(output="hls_out.m3u8", format = "hls") + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + + # {do something with the frame here} + + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + +  + +## Usage with File Audio-Input + +In Real-time Frames Mode, if you want to add audio to your streams, you've to use exclusive [`-audio`](../../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter. You just need to input the path of your audio file to this attribute as `string` value, and the API will automatically validate as well as maps it to all generated streams. + +The complete example is as follows: + +!!! failure "Make sure this `-audio` audio-source it compatible with provided video-source, otherwise you encounter multiple errors or no output at all." + +!!! warning "You **MUST** use [`-input_framerate`](../../params/#a-exclusive-parameters) attribute to set exact value of input framerate when using external audio in Real-time Frames mode, otherwise audio delay will occur in output streams." + +!!! tip "You can also assign a valid Audio URL as input, rather than filepath. 
More details can be found [here ➶](../../params/#a-exclusive-parameters)" + + +=== "DASH" + + ```python + # import required libraries + from vidgear.gears import CamGear + from vidgear.gears import StreamGear + import cv2 + + # open any valid video stream(for e.g `foo1.mp4` file) + stream = CamGear(source='foo1.mp4').start() + + # add various streams, along with custom audio + stream_params = { + "-streams": [ + {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate + {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps + {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps + ], + "-input_framerate": stream.framerate, # controlled framerate for audio-video sync !!! don't forget this line !!! + "-audio": "/home/foo/foo1.aac" # assigns input audio-source: "/home/foo/foo1.aac" + } + + # describe a suitable manifest-file location/name and assign params + streamer = StreamGear(output="dash_out.mpd", **stream_params) + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + + # {do something with the frame here} + + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + +=== "HLS" + + ```python + # import required libraries + from vidgear.gears import CamGear + from vidgear.gears import StreamGear + import cv2 + + # open any valid video stream(for e.g `foo1.mp4` file) + stream = CamGear(source='foo1.mp4').start() + + # add various streams, along with custom audio + stream_params = { + "-streams": [ + {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate + {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps + {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps + ], + "-input_framerate": stream.framerate, # controlled framerate for audio-video sync !!! don't forget this line !!! + "-audio": "/home/foo/foo1.aac" # assigns input audio-source: "/home/foo/foo1.aac" + } + + # describe a suitable manifest-file location/name and assign params + streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params) + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + + # {do something with the frame here} + + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + +  + +## Usage with Device Audio-Input + +In Real-time Frames Mode, you've can also use exclusive [`-audio`](../../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter for streaming live audio from external device. You just need to format your audio device name followed by suitable demuxer as `list` and assign to this attribute, and the API will automatically validate as well as map it to all generated streams. 
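+
+For instance, on a Windows machine where FFmpeg's `dshow` demuxer exposes an audio device named `"Microphone (USB2.0 Camera)"` _(the same assumed device used in the example below)_, the formatted `list` value would look something like this:
+
+```python
+# device audio-source as a list of FFmpeg arguments: [demuxer flag, demuxer, input flag, device name]
+stream_params = {"-audio": ["-f", "dshow", "-i", "audio=Microphone (USB2.0 Camera)"]}
+```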
+
+The complete example is as follows:
+
+
+!!! alert "Example Assumptions"
+
+    * You're running a Windows machine with all necessary audio drivers and software installed.
+    * There's an audio device named `"Microphone (USB2.0 Camera)"` connected to your Windows machine.
+
+
+??? tip "Using devices with `-audio` attribute on different OS platforms"
+
+    === "On Windows"
+
+        Windows OS users can use the [dshow](https://trac.ffmpeg.org/wiki/DirectShow) (DirectShow) demuxer, which is the preferred option on Windows, to list and capture audio input devices. You can refer to the following steps to identify and specify your sound card:
+
+        - [x] **[OPTIONAL] Enable sound card(if disabled):** First enable your Stereo Mix by opening the "Sound" window and selecting the "Recording" tab, then right-click inside the window and select "Show Disabled Devices" to toggle the Stereo Mix device visibility. **Follow this [post ➶](https://forums.tomshardware.com/threads/no-sound-through-stereo-mix-realtek-hd-audio.1716182/) for more details.**
+
+        - [x] **Identify Sound Card:** You can then locate your sound card using `dshow` as follows:
+
+            ```sh
+            c:\> ffmpeg -list_devices true -f dshow -i dummy
+            ffmpeg version N-45279-g6b86dd5... --enable-runtime-cpudetect
+              libavutil      51. 74.100 / 51. 74.100
+              libavcodec     54. 65.100 / 54. 65.100
+              libavformat    54. 31.100 / 54. 31.100
+              libavdevice    54.  3.100 / 54.  3.100
+              libavfilter     3. 19.102 /  3. 19.102
+              libswscale      2.  1.101 /  2.  1.101
+              libswresample   0. 16.100 /  0. 16.100
+            [dshow @ 03ACF580] DirectShow video devices
+            [dshow @ 03ACF580]  "Integrated Camera"
+            [dshow @ 03ACF580]  "USB2.0 Camera"
+            [dshow @ 03ACF580] DirectShow audio devices
+            [dshow @ 03ACF580]  "Microphone (Realtek High Definition Audio)"
+            [dshow @ 03ACF580]  "Microphone (USB2.0 Camera)"
+            dummy: Immediate exit requested
+            ```
+
+
+        - [x] **Specify Sound Card:** Then, you can specify the located sound card in StreamGear as follows:
+
+            ```python
+            # assign appropriate input audio-source device and demuxer
+            stream_params = {"-audio": ["-f","dshow", "-i", "audio=Microphone (USB2.0 Camera)"]}
+            ```
+
+        !!! fail "If audio still doesn't work, then [check out this troubleshooting guide ➶](https://www.maketecheasier.com/fix-microphone-not-working-windows10/) or reach out to us on the [Gitter ➶](https://gitter.im/vidgear/community) Community channel"
+
+
+    === "On Linux"
+
+        Linux OS users can use the [alsa](https://ffmpeg.org/ffmpeg-all.html#alsa) demuxer to list and capture live audio input devices, such as a webcam microphone. You can refer to the following steps to identify and specify your sound card:
+
+        - [x] **Identify Sound Card:** To get the list of all installed cards on your machine, you can type `arecord -l` or `arecord -L` _(longer output)_.
+
+            ```sh
+            arecord -l
+
+            **** List of CAPTURE Hardware Devices ****
+            card 0: ICH5 [Intel ICH5], device 0: Intel ICH [Intel ICH5]
+              Subdevices: 1/1
+              Subdevice #0: subdevice #0
+            card 0: ICH5 [Intel ICH5], device 1: Intel ICH - MIC ADC [Intel ICH5 - MIC ADC]
+              Subdevices: 1/1
+              Subdevice #0: subdevice #0
+            card 0: ICH5 [Intel ICH5], device 2: Intel ICH - MIC2 ADC [Intel ICH5 - MIC2 ADC]
+              Subdevices: 1/1
+              Subdevice #0: subdevice #0
+            card 0: ICH5 [Intel ICH5], device 3: Intel ICH - ADC2 [Intel ICH5 - ADC2]
+              Subdevices: 1/1
+              Subdevice #0: subdevice #0
+            card 1: U0x46d0x809 [USB Device 0x46d:0x809], device 0: USB Audio [USB Audio]
+              Subdevices: 1/1
+              Subdevice #0: subdevice #0
+            ```
+
+
+        - [x] **Specify Sound Card:** Then, you can specify the located sound card in StreamGear as follows:
+
+            !!! info "The easiest thing to do is to reference the sound card directly, namely "card 0" (Intel ICH5) and "card 1" (the microphone on the USB webcam), as `hw:0` or `hw:1`"
+
+            ```python
+            # assign appropriate input audio-source device and demuxer
+            stream_params = {"-audio": ["-f","alsa", "-i", "hw:1"]}
+            ```
+
+        !!! fail "If audio still doesn't work, then reach out to us on the [Gitter ➶](https://gitter.im/vidgear/community) Community channel"
+
+
+    === "On MacOS"
+
+        macOS users can use the [avfoundation](https://ffmpeg.org/ffmpeg-devices.html#avfoundation) input device to list and grab audio from integrated iSight cameras, as well as cameras connected via USB or FireWire. You can refer to the following steps to identify and specify your sound card on macOS/OSX machines:
+
+
+        - [x] **Identify Sound Card:** You can locate your sound card using `avfoundation` as follows:
+
+            ```sh
+            ffmpeg -f avfoundation -list_devices true -i ""
+            ffmpeg version N-45279-g6b86dd5... --enable-runtime-cpudetect
+              libavutil      51. 74.100 / 51. 74.100
+              libavcodec     54. 65.100 / 54. 65.100
+              libavformat    54. 31.100 / 54. 31.100
+              libavdevice    54.  3.100 / 54.  3.100
+              libavfilter     3. 19.102 /  3. 19.102
+              libswscale      2.  1.101 /  2.  1.101
+              libswresample   0. 16.100 /  0. 16.100
+            [AVFoundation input device @ 0x7f8e2540ef20] AVFoundation video devices:
+            [AVFoundation input device @ 0x7f8e2540ef20] [0] FaceTime HD camera (built-in)
+            [AVFoundation input device @ 0x7f8e2540ef20] [1] Capture screen 0
+            [AVFoundation input device @ 0x7f8e2540ef20] AVFoundation audio devices:
+            [AVFoundation input device @ 0x7f8e2540ef20] [0] Blackmagic Audio
+            [AVFoundation input device @ 0x7f8e2540ef20] [1] Built-in Microphone
+            ```
+
+
+        - [x] **Specify Sound Card:** Then, you can specify the located sound card in StreamGear as follows:
+
+            ```python
+            # assign appropriate input audio-source device and demuxer
+            stream_params = {"-audio": ["-f","avfoundation", "-audio_device_index", "0"]}
+            ```
+
+        !!! fail "If audio still doesn't work, then reach out to us on the [Gitter ➶](https://gitter.im/vidgear/community) Community channel"
+
+
+!!! danger "Make sure this `-audio` audio-source is compatible with the provided video-source, otherwise you may encounter multiple errors, or no output at all."
+
+!!! warning "You **MUST** use the [`-input_framerate`](../../params/#a-exclusive-parameters) attribute to set the exact input framerate when using external audio in Real-time Frames Mode, otherwise audio delay will occur in output streams."
+
+!!! note "It is advised to use this example with live-streaming enabled, by setting the StreamGear API's exclusive [`-livestream`](../../params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter to `True`."
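+
+For reference, a `stream_params` dictionary that follows the above advice could look something like the following minimal sketch _(the `dshow` device name is reused from the Windows example above, and the `30.0` framerate is only an assumed placeholder, so substitute your source's real framerate)_:
+
+```python
+# minimal sketch: low-latency live-streaming combined with a device audio-source
+stream_params = {
+    "-livestream": True,  # enable low-latency live-streaming
+    "-input_framerate": 30.0,  # assumed value; MUST match your actual input framerate
+    "-audio": ["-f", "dshow", "-i", "audio=Microphone (USB2.0 Camera)"],  # assumed device
+}
+```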
+ + +=== "DASH" + + ```python + # import required libraries + from vidgear.gears import CamGear + from vidgear.gears import StreamGear + import cv2 + + # open any valid video stream(for e.g `foo1.mp4` file) + stream = CamGear(source="foo1.mp4").start() + + # add various streams, along with custom audio + stream_params = { + "-streams": [ + { + "-resolution": "1280x720", + "-video_bitrate": "4000k", + }, # Stream1: 1280x720 at 4000kbs bitrate + {"-resolution": "640x360", "-framerate": 30.0}, # Stream2: 640x360 at 30fps + ], + "-input_framerate": stream.framerate, # controlled framerate for audio-video sync !!! don't forget this line !!! + "-audio": [ + "-f", + "dshow", + "-i", + "audio=Microphone (USB2.0 Camera)", + ], # assign appropriate input audio-source device and demuxer + } + + # describe a suitable manifest-file location/name and assign params + streamer = StreamGear(output="dash_out.mpd", **stream_params) + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + +=== "HLS" + + ```python + # import required libraries + from vidgear.gears import CamGear + from vidgear.gears import StreamGear + import cv2 + + # open any valid video stream(for e.g `foo1.mp4` file) + stream = CamGear(source="foo1.mp4").start() + + # add various streams, along with custom audio + stream_params = { + "-streams": [ + { + "-resolution": "1280x720", + "-video_bitrate": "4000k", + }, # Stream1: 1280x720 at 4000kbs bitrate + {"-resolution": "640x360", "-framerate": 30.0}, # Stream2: 640x360 at 30fps + ], + "-input_framerate": stream.framerate, # controlled framerate for audio-video sync !!! don't forget this line !!! + "-audio": [ + "-f", + "dshow", + "-i", + "audio=Microphone (USB2.0 Camera)", + ], # assign appropriate input audio-source device and demuxer + } + + # describe a suitable manifest-file location/name and assign params + streamer = StreamGear(output="dash_out.m3u8", format="hls", **stream_params) + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + +  + +## Usage with Hardware Video-Encoder + + +In Real-time Frames Mode, you can also easily change encoder as per your requirement just by passing `-vcodec` FFmpeg parameter as an attribute in `stream_params` dictionary parameter. In addition to this, you can also specify the additional properties/features/optimizations for your system's GPU similarly. + +In this example, we will be using `h264_vaapi` as our hardware encoder and also optionally be specifying our device hardware's location (i.e. 
`'-vaapi_device':'/dev/dri/renderD128'`) and other features such as `'-vf':'format=nv12,hwupload'` like properties by formatting them as `option` dictionary parameter's attributes, as follows: + +!!! warning "Check VAAPI support" + + **This example is just conveying the idea on how to use FFmpeg's hardware encoders with WriteGear API in Compression mode, which MAY/MAY-NOT suit your system. Kindly use suitable parameters based your supported system and FFmpeg configurations only.** + + To use `h264_vaapi` encoder, remember to check if its available and your FFmpeg compiled with VAAPI support. You can easily do this by executing following one-liner command in your terminal, and observing if output contains something similar as follows: + + ```sh + ffmpeg -hide_banner -encoders | grep vaapi + + V..... h264_vaapi H.264/AVC (VAAPI) (codec h264) + V..... hevc_vaapi H.265/HEVC (VAAPI) (codec hevc) + V..... mjpeg_vaapi MJPEG (VAAPI) (codec mjpeg) + V..... mpeg2_vaapi MPEG-2 (VAAPI) (codec mpeg2video) + V..... vp8_vaapi VP8 (VAAPI) (codec vp8) + ``` + + +=== "DASH" + + ```python + # import required libraries + from vidgear.gears import VideoGear + from vidgear.gears import StreamGear + import cv2 + + # Open suitable video stream, such as webcam on first index(i.e. 0) + stream = VideoGear(source=0).start() + + # add various streams with custom Video Encoder and optimizations + stream_params = { + "-streams": [ + {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate + {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps + {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps + ], + "-vcodec": "h264_vaapi", # define custom Video encoder + "-vaapi_device": "/dev/dri/renderD128", # define device location + "-vf": "format=nv12,hwupload", # define video pixformat + } + + # describe a suitable manifest-file location/name and assign params + streamer = StreamGear(output="dash_out.mpd", **stream_params) + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + + # {do something with the frame here} + + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + +=== "HLS" + + ```python + # import required libraries + from vidgear.gears import VideoGear + from vidgear.gears import StreamGear + import cv2 + + # Open suitable video stream, such as webcam on first index(i.e. 
0) + stream = VideoGear(source=0).start() + + # add various streams with custom Video Encoder and optimizations + stream_params = { + "-streams": [ + {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate + {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps + {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps + ], + "-vcodec": "h264_vaapi", # define custom Video encoder + "-vaapi_device": "/dev/dri/renderD128", # define device location + "-vf": "format=nv12,hwupload", # define video pixformat + } + + # describe a suitable manifest-file location/name and assign params + streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params) + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + + # {do something with the frame here} + + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + +  + +[^1]: + :bulb: In Real-time Frames Mode, the Primary Stream's framerate defaults to [`-input_framerate`](../../params/#a-exclusive-parameters) attribute value, if defined, else it will be 25fps. \ No newline at end of file diff --git a/docs/gears/streamgear/ssm/overview.md b/docs/gears/streamgear/ssm/overview.md new file mode 100644 index 000000000..b71ba4084 --- /dev/null +++ b/docs/gears/streamgear/ssm/overview.md @@ -0,0 +1,81 @@ + + +# StreamGear API: Single-Source Mode + +
+ Single-Source Mode Flow Diagram +
Single-Source Mode generalized workflow
+
+ + +## Overview + +In this mode, StreamGear transcodes entire audio-video file _(as opposed to frames-by-frame)_ into a sequence of multiple smaller chunks/segments for adaptive streaming. + +This mode works exceptionally well when you're transcoding long-duration lossless videos(with audio) files for streaming that requires no interruptions. But on the downside, the provided source cannot be flexibly manipulated or transformed before sending onto FFmpeg Pipeline for processing. + +SteamGear supports both [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/) _(Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1)_ and [**Apple HLS**](https://developer.apple.com/documentation/http_live_streaming) _(HTTP Live Streaming)_ with this mode. + +For this mode, StreamGear API provides exclusive [`transcode_source()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.transcode_source) method to easily process audio-video files into streamable chunks. + +This mode can be easily activated by assigning suitable video path as input to [`-video_source`](../../params/#a-exclusive-parameters) attribute of [`stream_params`](../../params/#stream_params) dictionary parameter, during StreamGear initialization. + +  + +!!! new "New in v0.2.2" + + Apple HLS support was added in `v0.2.2`. + + +!!! warning + + * Using [`stream()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) function instead of [`transcode_source()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.transcode_source) in Single-Source Mode will instantly result in **`RuntimeError`**! + * Any invalid value to the [`-video_source`](../../params/#a-exclusive-parameters) attribute will result in **`AssertionError`**! + +  + +## Usage Examples + + + +## Parameters + + + +## References + + + + +## FAQs + + + + +  \ No newline at end of file diff --git a/docs/gears/streamgear/ssm/usage.md b/docs/gears/streamgear/ssm/usage.md new file mode 100644 index 000000000..1db663992 --- /dev/null +++ b/docs/gears/streamgear/ssm/usage.md @@ -0,0 +1,332 @@ + + +# StreamGear API Usage Examples: Single-Source Mode + +!!! warning "Important Information" + + * StreamGear **MUST** requires FFmpeg executables for its core operations. Follow these dedicated [Platform specific Installation Instructions ➶](../../../ffmpeg_install/) for its installation. + + * StreamGear API will throw **RuntimeError**, if it fails to detect valid FFmpeg executables on your system. + + * By default, when no additional streams are defined, ==StreamGear generates a primary stream of same resolution and framerate[^1] as the input video _(at the index `0`)_.== + + * Always use `terminate()` function at the very end of the main code. + + +  + +## Bare-Minimum Usage + +Following is the bare-minimum code you need to get started with StreamGear API in Single-Source Mode: + +!!! note "If input video-source _(i.e. `-video_source`)_ contains any audio stream/channel, then it automatically gets mapped to all generated streams without any extra efforts." 
+ +=== "DASH" + + ```python + # import required libraries + from vidgear.gears import StreamGear + + # activate Single-Source Mode with valid video input + stream_params = {"-video_source": "foo.mp4"} + # describe a suitable manifest-file location/name and assign params + streamer = StreamGear(output="dash_out.mpd", **stream_params) + # trancode source + streamer.transcode_source() + # terminate + streamer.terminate() + ``` + +=== "HLS" + + ```python + # import required libraries + from vidgear.gears import StreamGear + + # activate Single-Source Mode with valid video input + stream_params = {"-video_source": "foo.mp4"} + # describe a suitable master playlist location/name and assign params + streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params) + # trancode source + streamer.transcode_source() + # terminate + streamer.terminate() + ``` + + +!!! success "After running this bare-minimum example, StreamGear will produce a Manifest file _(`dash.mpd`)_ with streamable chunks that contains information about a Primary Stream of same resolution and framerate as the input." + +  + +## Bare-Minimum Usage with Live-Streaming + +You can easily activate ==Low-latency Livestreaming in Single-Source Mode==, where chunks will contain information for few new frames only and forgets all previous ones), using exclusive [`-livestream`](../../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter as follows: + +!!! tip "Use `-window_size` & `-extra_window_size` FFmpeg parameters for controlling number of frames to be kept in Chunks. Less these value, less will be latency." + +!!! alert "After every few chunks _(equal to the sum of `-window_size` & `-extra_window_size` values)_, all chunks will be overwritten in Live-Streaming. Thereby, since newer chunks in manifest/playlist will contain NO information of any older ones, and therefore resultant DASH/HLS stream will play only the most recent frames." + +!!! note "If input video-source _(i.e. `-video_source`)_ contains any audio stream/channel, then it automatically gets mapped to all generated streams without any extra efforts." + +=== "DASH" + + ```python + # import required libraries + from vidgear.gears import StreamGear + + # activate Single-Source Mode with valid video input and enable livestreaming + stream_params = {"-video_source": 0, "-livestream": True} + # describe a suitable manifest-file location/name and assign params + streamer = StreamGear(output="dash_out.mpd", **stream_params) + # trancode source + streamer.transcode_source() + # terminate + streamer.terminate() + ``` + +=== "HLS" + + ```python + # import required libraries + from vidgear.gears import StreamGear + + # activate Single-Source Mode with valid video input and enable livestreaming + stream_params = {"-video_source": 0, "-livestream": True} + # describe a suitable master playlist location/name and assign params + streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params) + # trancode source + streamer.transcode_source() + # terminate + streamer.terminate() + ``` + +  + +## Usage with Additional Streams + +In addition to Primary Stream, you can easily generate any number of additional Secondary Streams of variable bitrates or spatial resolutions, using exclusive [`-streams`](../../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter. You just need to add each resolution and bitrate/framerate as list of dictionaries to this attribute, and rest is done automatically. + +!!! 
info "More detailed information on the `-streams` attribute can be found [here ➶](../../params/#a-exclusive-parameters)"
+
+The complete example is as follows:
+
+!!! note "If input video-source contains any audio stream/channel, then it automatically gets assigned to all generated streams without any extra efforts."
+
+!!! danger "Important `-streams` attribute Information"
+    * On top of these additional streams, StreamGear, by default, generates a primary stream of the same resolution and framerate as the input, at index `0`.
+    * :warning: Make sure your System/Machine/Server/Network is able to handle these additional streams, discretion is advised!
+    * You **MUST** define the `-resolution` value for each stream, otherwise the stream will be discarded!
+    * You only need either `-video_bitrate` or `-framerate` to define a valid stream, since with a `-framerate` value defined, the video-bitrate is calculated automatically.
+    * If you define both `-video_bitrate` and `-framerate` values at the same time, StreamGear will discard the `-framerate` value automatically.
+
+!!! fail "Always use the `-streams` attribute to define additional streams safely, any duplicate or incorrect definition can break things!"
+
+
+=== "DASH"
+
+    ```python
+    # import required libraries
+    from vidgear.gears import StreamGear
+
+    # activate Single-Source Mode and also define various streams
+    stream_params = {
+        "-video_source": "foo.mp4",
+        "-streams": [
+            {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate
+            {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps framerate
+            {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps framerate
+            {"-resolution": "320x240", "-video_bitrate": "500k"}, # Stream4: 320x240 at 500kbs bitrate
+        ],
+    }
+    # describe a suitable manifest-file location/name and assign params
+    streamer = StreamGear(output="dash_out.mpd", **stream_params)
+    # transcode source
+    streamer.transcode_source()
+    # terminate
+    streamer.terminate()
+    ```
+
+=== "HLS"
+
+    ```python
+    # import required libraries
+    from vidgear.gears import StreamGear
+
+    # activate Single-Source Mode and also define various streams
+    stream_params = {
+        "-video_source": "foo.mp4",
+        "-streams": [
+            {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate
+            {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps framerate
+            {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps framerate
+            {"-resolution": "320x240", "-video_bitrate": "500k"}, # Stream4: 320x240 at 500kbs bitrate
+        ],
+    }
+    # describe a suitable master playlist location/name and assign params
+    streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params)
+    # transcode source
+    streamer.transcode_source()
+    # terminate
+    streamer.terminate()
+    ```
+
+&thinsp;
+
+## Usage with Custom Audio
+
+By default, if the input video-source _(i.e. `-video_source`)_ contains any audio, then it gets automatically mapped to all generated streams. But if you want to add any custom audio, you can easily do so by using the exclusive [`-audio`](../../params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter. You just need to input the path of your audio file to this attribute as a `string` value, and the API will automatically validate as well as map it to all generated streams.
+
+The complete example is as follows:
+
+!!! failure "Make sure this `-audio` audio-source is compatible with the provided video-source, otherwise you may encounter multiple errors, or no output at all."
+
+!!! tip "You can also assign a valid Audio URL as input, rather than a filepath. More details can be found [here ➶](../../params/#a-exclusive-parameters)"
+
+
+=== "DASH"
+
+    ```python
+    # import required libraries
+    from vidgear.gears import StreamGear
+
+    # activate Single-Source Mode and various streams, along with custom audio
+    stream_params = {
+        "-video_source": "foo.mp4",
+        "-streams": [
+            {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate
+            {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps
+            {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps
+        ],
+        "-audio": "/home/foo/foo1.aac" # assigns input audio-source: "/home/foo/foo1.aac"
+    }
+    # describe a suitable manifest-file location/name and assign params
+    streamer = StreamGear(output="dash_out.mpd", **stream_params)
+    # transcode source
+    streamer.transcode_source()
+    # terminate
+    streamer.terminate()
+    ```
+
+=== "HLS"
+
+    ```python
+    # import required libraries
+    from vidgear.gears import StreamGear
+
+    # activate Single-Source Mode and various streams, along with custom audio
+    stream_params = {
+        "-video_source": "foo.mp4",
+        "-streams": [
+            {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate
+            {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps
+            {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps
+        ],
+        "-audio": "/home/foo/foo1.aac" # assigns input audio-source: "/home/foo/foo1.aac"
+    }
+    # describe a suitable master playlist location/name and assign params
+    streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params)
+    # transcode source
+    streamer.transcode_source()
+    # terminate
+    streamer.terminate()
+    ```
+
+
+&thinsp;
+
+
+## Usage with Variable FFmpeg Parameters
+
+For seamlessly generating these streaming assets, StreamGear provides a highly extensible and flexible wrapper around [**FFmpeg**](https://ffmpeg.org/) and access to almost all of its parameters. Thereby, you can access almost any parameter available in FFmpeg itself as dictionary attributes in the [`stream_params` dictionary parameter](../../params/#stream_params), and use them to manipulate transcoding as you like.
+
+For this example, let us use our own [H.265/HEVC](https://trac.ffmpeg.org/wiki/Encode/H.265) video encoder and [AAC](https://trac.ffmpeg.org/wiki/Encode/AAC) audio encoder, set a custom audio bitrate, and apply various other optimizations:
+
+
+!!! tip "This example is just conveying the idea of how to use FFmpeg's encoders/parameters with the StreamGear API. You can use any FFmpeg parameter in a similar manner."
+
+!!! danger "Kindly read the [**FFmpeg Docs**](https://ffmpeg.org/documentation.html) carefully before passing any FFmpeg values to the `stream_params` parameter. Wrong values may result in undesired errors or no output at all."
+
+!!! fail "Always use the `-streams` attribute to define additional streams safely, any duplicate or incorrect stream definition can break things!"
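+
+Before relying on third-party encoders such as `libx265` or `libfdk_aac` in these parameters, it is worth confirming that your FFmpeg build actually provides them, since availability varies between builds _(`libfdk_aac` in particular is absent from many default builds)_. A quick terminal check:
+
+```sh
+# no matching output means that encoder is unavailable in your FFmpeg build
+ffmpeg -hide_banner -encoders | grep -E "libx265|libfdk_aac"
+```
+
+If either encoder is unavailable, either switch to a supported alternative _(for example `libx264` and FFmpeg's built-in `aac` encoder)_ or use an FFmpeg build compiled with the required libraries.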
+ +=== "DASH" + + ```python + # import required libraries + from vidgear.gears import StreamGear + + # activate Single-Source Mode and various other parameters + stream_params = { + "-video_source": "foo.mp4", # define Video-Source + "-vcodec": "libx265", # assigns H.265/HEVC video encoder + "-x265-params": "lossless=1", # enables Lossless encoding + "-crf": 25, # Constant Rate Factor: 25 + "-bpp": "0.15", # Bits-Per-Pixel(BPP), an Internal StreamGear parameter to ensure good quality of high motion scenes + "-streams": [ + {"-resolution": "1280x720", "-video_bitrate": "4000k"}, # Stream1: 1280x720 at 4000kbs bitrate + {"-resolution": "640x360", "-framerate": 60.0}, # Stream2: 640x360 at 60fps + ], + "-audio": "/home/foo/foo1.aac", # define input audio-source: "/home/foo/foo1.aac", + "-acodec": "libfdk_aac", # assign lossless AAC audio encoder + "-vbr": 4, # Variable Bit Rate: `4` + } + + # describe a suitable manifest-file location/name and assign params + streamer = StreamGear(output="dash_out.mpd", logging=True, **stream_params) + # trancode source + streamer.transcode_source() + # terminate + streamer.terminate() + ``` + +=== "HLS" + + ```python + # import required libraries + from vidgear.gears import StreamGear + + # activate Single-Source Mode and various other parameters + stream_params = { + "-video_source": "foo.mp4", # define Video-Source + "-vcodec": "libx265", # assigns H.265/HEVC video encoder + "-x265-params": "lossless=1", # enables Lossless encoding + "-crf": 25, # Constant Rate Factor: 25 + "-bpp": "0.15", # Bits-Per-Pixel(BPP), an Internal StreamGear parameter to ensure good quality of high motion scenes + "-streams": [ + {"-resolution": "1280x720", "-video_bitrate": "4000k"}, # Stream1: 1280x720 at 4000kbs bitrate + {"-resolution": "640x360", "-framerate": 60.0}, # Stream2: 640x360 at 60fps + ], + "-audio": "/home/foo/foo1.aac", # define input audio-source: "/home/foo/foo1.aac", + "-acodec": "libfdk_aac", # assign lossless AAC audio encoder + "-vbr": 4, # Variable Bit Rate: `4` + } + + # describe a suitable master playlist file location/name and assign params + streamer = StreamGear(output="hls_out.m3u8", format = "hls", logging=True, **stream_params) + # trancode source + streamer.transcode_source() + # terminate + streamer.terminate() + ``` + +  + +[^1]: + :bulb: In Real-time Frames Mode, the Primary Stream's framerate defaults to [`-input_framerate`](../../params/#a-exclusive-parameters) attribute value, if defined, else it will be 25fps. \ No newline at end of file diff --git a/docs/gears/streamgear/usage.md b/docs/gears/streamgear/usage.md deleted file mode 100644 index 81682b439..000000000 --- a/docs/gears/streamgear/usage.md +++ /dev/null @@ -1,766 +0,0 @@ - - -# StreamGear API Usage Examples: - - -!!! warning "Important Information" - - * StreamGear **MUST** requires FFmpeg executables for its core operations. Follow these dedicated [Platform specific Installation Instructions ➶](../ffmpeg_install/) for its installation. - - * StreamGear API will throw **RuntimeError**, if it fails to detect valid FFmpeg executables on your system. - - * By default, when no additional streams are defined, ==StreamGear generates a primary stream of same resolution and framerate[^1] as the input video _(at the index `0`)_.== - - * Always use `terminate()` function at the very end of the main code. - - -  - -## A. Single-Source Mode - -
- Single-Source Mode Flow Diagram -
Single-Source Mode generalized workflow
-
- -In this mode, StreamGear transcodes entire video/audio file _(as opposed to frames by frame)_ into a sequence of multiple smaller chunks/segments for streaming. This mode works exceptionally well, when you're transcoding lossless long-duration videos(with audio) for streaming and required no extra efforts or interruptions. But on the downside, the provided source cannot be changed or manipulated before sending onto FFmpeg Pipeline for processing. - -This mode provide [`transcode_source()`](../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.transcode_source) function to process audio-video files into streamable chunks. - -This mode can be easily activated by assigning suitable video path as input to [`-video_source`](../params/#a-exclusive-parameters) attribute of [`stream_params`](../params/#stream_params) dictionary parameter, during StreamGear initialization. - -!!! warning - - * Using [`stream()`](../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) function instead of [`transcode_source()`](../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.transcode_source) in Single-Source Mode will instantly result in **`RuntimeError`**! - * Any invalid value to the [`-video_source`](../params/#a-exclusive-parameters) attribute will result in **`AssertionError`**! - -  - -### A.1 Bare-Minimum Usage - -Following is the bare-minimum code you need to get started with StreamGear API in Single-Source Mode: - -!!! note "If input video-source _(i.e. `-video_source`)_ contains any audio stream/channel, then it automatically gets mapped to all generated streams without any extra efforts." - -```python -# import required libraries -from vidgear.gears import StreamGear - -# activate Single-Source Mode with valid video input -stream_params = {"-video_source": "foo.mp4"} -# describe a suitable manifest-file location/name and assign params -streamer = StreamGear(output="dash_out.mpd", **stream_params) -# trancode source -streamer.transcode_source() -# terminate -streamer.terminate() -``` - -!!! success "After running these bare-minimum commands, StreamGear will produce a Manifest file _(`dash.mpd`)_ with steamable chunks that contains information about a Primary Stream of same resolution and framerate as the input." - -  - -### A.2 Bare-Minimum Usage with Live-Streaming - -If you want to **Livestream in Single-Source Mode** _(chunks will contain information for few new frames only, and forgets all previous ones)_, you can use exclusive [`-livestream`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter as follows: - -!!! tip "Use `-window_size` & `-extra_window_size` FFmpeg parameters for controlling number of frames to be kept in Chunks. Less these value, less will be latency." - -!!! warning "All Chunks will be overwritten in this mode after every few Chunks _(equal to the sum of `-window_size` & `-extra_window_size` values)_, Hence Newer Chunks and Manifest contains NO information of any older video-frames." - -!!! note "If input video-source _(i.e. `-video_source`)_ contains any audio stream/channel, then it automatically gets mapped to all generated streams without any extra efforts." 
- -```python -# import required libraries -from vidgear.gears import StreamGear - -# activate Single-Source Mode with valid video input and enable livestreaming -stream_params = {"-video_source": 0, "-livestream": True} -# describe a suitable manifest-file location/name and assign params -streamer = StreamGear(output="dash_out.mpd", **stream_params) -# trancode source -streamer.transcode_source() -# terminate -streamer.terminate() -``` - -  - -### A.3 Usage with Additional Streams - -In addition to Primary Stream, you can easily generate any number of additional Secondary Streams of variable bitrates or spatial resolutions, using exclusive [`-streams`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter. You just need to add each resolution and bitrate/framerate as list of dictionaries to this attribute, and rest is done automatically _(More detailed information can be found [here ➶](../params/#a-exclusive-parameters))_. The complete example is as follows: - -!!! note "If input video-source contains any audio stream/channel, then it automatically gets assigned to all generated streams without any extra efforts." - -!!! danger "Important `-streams` attribute Information" - * On top of these additional streams, StreamGear by default, generates a primary stream of same resolution and framerate as the input, at the index `0`. - * :warning: Make sure your System/Machine/Server/Network is able to handle these additional streams, discretion is advised! - * You **MUST** need to define `-resolution` value for your stream, otherwise stream will be discarded! - * You only need either of `-video_bitrate` or `-framerate` for defining a valid stream. Since with `-framerate` value defined, video-bitrate is calculated automatically. - * If you define both `-video_bitrate` and `-framerate` values at the same time, StreamGear will discard the `-framerate` value automatically. - -!!! fail "Always use `-stream` attribute to define additional streams safely, any duplicate or incorrect definition can break things!" - -```python -# import required libraries -from vidgear.gears import StreamGear - -# activate Single-Source Mode and also define various streams -stream_params = { - "-video_source": "foo.mp4", - "-streams": [ - {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate - {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps framerate - {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps framerate - {"-resolution": "320x240", "-video_bitrate": "500k"}, # Stream3: 320x240 at 500kbs bitrate - ], -} -# describe a suitable manifest-file location/name and assign params -streamer = StreamGear(output="dash_out.mpd", **stream_params) -# trancode source -streamer.transcode_source() -# terminate -streamer.terminate() -``` - -  - -### A.4 Usage with Custom Audio - -By default, if input video-source _(i.e. `-video_source`)_ contains any audio, then it gets automatically mapped to all generated streams. But, if you want to add any custom audio, you can easily do it by using exclusive [`-audio`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter. You just need to input the path of your audio file to this attribute as string, and StreamGear API will automatically validate and map it to all generated streams. The complete example is as follows: - -!!! 
failure "Make sure this `-audio` audio-source it compatible with provided video-source, otherwise you encounter multiple errors or no output at all." - -!!! tip "You can also assign a valid Audio URL as input, rather than filepath. More details can be found [here ➶](../params/#a-exclusive-parameters)" - -```python -# import required libraries -from vidgear.gears import StreamGear - -# activate Single-Source Mode and various streams, along with custom audio -stream_params = { - "-video_source": "foo.mp4", - "-streams": [ - {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate - {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps - {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps - ], - "-audio": "/home/foo/foo1.aac" # assigns input audio-source: "/home/foo/foo1.aac" -} -# describe a suitable manifest-file location/name and assign params -streamer = StreamGear(output="dash_out.mpd", **stream_params) -# trancode source -streamer.transcode_source() -# terminate -streamer.terminate() -``` - -  - - -### A.5 Usage with Variable FFmpeg Parameters - -For seamlessly generating these streaming assets, StreamGear provides a highly extensible and flexible wrapper around [**FFmpeg**](https://ffmpeg.org/), and access to almost all of its parameter. Hence, you can access almost any parameter available with FFmpeg itself as dictionary attributes in [`stream_params` dictionary parameter](../params/#stream_params), and use it to manipulate transcoding as you like. - -For this example, let us use our own [H.265/HEVC](https://trac.ffmpeg.org/wiki/Encode/H.265) video and [AAC](https://trac.ffmpeg.org/wiki/Encode/AAC) audio encoder, and set custom audio bitrate, and various other optimizations: - - -!!! tip "This example is just conveying the idea on how to use FFmpeg's encoders/parameters with StreamGear API. You can use any FFmpeg parameter in the similar manner." - -!!! danger "Kindly read [**FFmpeg Docs**](https://ffmpeg.org/documentation.html) carefully, before passing any FFmpeg values to `stream_params` parameter. Wrong values may result in undesired errors or no output at all." - -!!! fail "Always use `-streams` attribute to define additional streams safely, any duplicate or incorrect stream definition can break things!" - - -```python -# import required libraries -from vidgear.gears import StreamGear - -# activate Single-Source Mode and various other parameters -stream_params = { - "-video_source": "foo.mp4", # define Video-Source - "-vcodec": "libx265", # assigns H.265/HEVC video encoder - "-x265-params": "lossless=1", # enables Lossless encoding - "-crf": 25, # Constant Rate Factor: 25 - "-bpp": "0.15", # Bits-Per-Pixel(BPP), an Internal StreamGear parameter to ensure good quality of high motion scenes - "-streams": [ - {"-resolution": "1280x720", "-video_bitrate": "4000k"}, # Stream1: 1280x720 at 4000kbs bitrate - {"-resolution": "640x360", "-framerate": 60.0}, # Stream2: 640x360 at 60fps - ], - "-audio": "/home/foo/foo1.aac", # define input audio-source: "/home/foo/foo1.aac", - "-acodec": "libfdk_aac", # assign lossless AAC audio encoder - "-vbr": 4, # Variable Bit Rate: `4` -} - -# describe a suitable manifest-file location/name and assign params -streamer = StreamGear(output="dash_out.mpd", logging=True, **stream_params) -# trancode source -streamer.transcode_source() -# terminate -streamer.terminate() -``` - -  - -  - -## B. Real-time Frames Mode - -
- Real-time Frames Mode Flow Diagram -
Real-time Frames Mode generalized workflow
-
- -When no valid input is received on [`-video_source`](../params/#a-exclusive-parameters) attribute of [`stream_params`](../params/#supported-parameters) dictionary parameter, StreamGear API activates this mode where it directly transcodes real-time [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) video-frames _(as opposed to a entire file)_ into a sequence of multiple smaller chunks/segments for streaming. - -In this mode, StreamGear **DOES NOT** automatically maps video-source audio to generated streams. You need to manually assign separate audio-source through [`-audio`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter. - -This mode provide [`stream()`](../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) function for directly trancoding video-frames into streamable chunks over the FFmpeg pipeline. - - -!!! warning - - * Using [`transcode_source()`](../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.transcode_source) function instead of [`stream()`](../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) in Real-time Frames Mode will instantly result in **`RuntimeError`**! - - * **NEVER** assign anything to [`-video_source`](../params/#a-exclusive-parameters) attribute of [`stream_params`](../params/#supported-parameters) dictionary parameter, otherwise [Single-Source Mode](#a-single-source-mode) may get activated, and as a result, using [`stream()`](../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) function will throw **`RuntimeError`**! - - * You **MUST** use [`-input_framerate`](../params/#a-exclusive-parameters) attribute to set exact value of input framerate when using external audio in this mode, otherwise audio delay will occur in output streams. - - * Input framerate defaults to `25.0` fps if [`-input_framerate`](../params/#a-exclusive-parameters) attribute value not defined. - - - -  - -### B.1 Bare-Minimum Usage - -Following is the bare-minimum code you need to get started with StreamGear API in Real-time Frames Mode: - -!!! note "We are using [CamGear](../../camgear/overview/) in this Bare-Minimum example, but any [VideoCapture Gear](../../#a-videocapture-gears) will work in the similar manner." - -```python -# import required libraries -from vidgear.gears import CamGear -from vidgear.gears import StreamGear -import cv2 - -# open any valid video stream(for e.g `foo1.mp4` file) -stream = CamGear(source='foo1.mp4').start() - -# describe a suitable manifest-file location/name -streamer = StreamGear(output="dash_out.mpd") - -# loop over -while True: - - # read frames from stream - frame = stream.read() - - # check for frame if Nonetype - if frame is None: - break - - - # {do something with the frame here} - - - # send frame to streamer - streamer.stream(frame) - - # Show output window - cv2.imshow("Output Frame", frame) - - # check for 'q' key if pressed - key = cv2.waitKey(1) & 0xFF - if key == ord("q"): - break - -# close output window -cv2.destroyAllWindows() - -# safely close video stream -stream.stop() - -# safely close streamer -streamer.terminate() -``` - -!!! success "After running these bare-minimum commands, StreamGear will produce a Manifest file _(`dash.mpd`)_ with steamable chunks that contains information about a Primary Stream of same resolution and framerate[^1] as input _(without any audio)_." 
- - -  - -### B.2 Bare-Minimum Usage with Live-Streaming - -If you want to **Livestream in Real-time Frames Mode** _(chunks will contain information for few new frames only)_, which is excellent for building Low Latency solutions such as Live Camera Streaming, then you can use exclusive [`-livestream`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter as follows: - -!!! tip "Use `-window_size` & `-extra_window_size` FFmpeg parameters for controlling number of frames to be kept in Chunks. Less these value, less will be latency." - -!!! warning "All Chunks will be overwritten in this mode after every few Chunks _(equal to the sum of `-window_size` & `-extra_window_size` values)_, Hence Newer Chunks and Manifest contains NO information of any older video-frames." - -!!! note "In this mode, StreamGear **DOES NOT** automatically maps video-source audio to generated streams. You need to manually assign separate audio-source through [`-audio`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter." - -```python -# import required libraries -from vidgear.gears import CamGear -from vidgear.gears import StreamGear -import cv2 - -# open any valid video stream(from web-camera attached at index `0`) -stream = CamGear(source=0).start() - -# enable livestreaming and retrieve framerate from CamGear Stream and -# pass it as `-input_framerate` parameter for controlled framerate -stream_params = {"-input_framerate": stream.framerate, "-livestream": True} - -# describe a suitable manifest-file location/name -streamer = StreamGear(output="dash_out.mpd", **stream_params) - -# loop over -while True: - - # read frames from stream - frame = stream.read() - - # check for frame if Nonetype - if frame is None: - break - - # {do something with the frame here} - - # send frame to streamer - streamer.stream(frame) - - # Show output window - cv2.imshow("Output Frame", frame) - - # check for 'q' key if pressed - key = cv2.waitKey(1) & 0xFF - if key == ord("q"): - break - -# close output window -cv2.destroyAllWindows() - -# safely close video stream -stream.stop() - -# safely close streamer -streamer.terminate() -``` - -  - -### B.3 Bare-Minimum Usage with RGB Mode - -In Real-time Frames Mode, StreamGear API provide [`rgb_mode`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) boolean parameter with its `stream()` function, which if enabled _(i.e. `rgb_mode=True`)_, specifies that incoming frames are of RGB format _(instead of default BGR format)_, thereby also known as ==RGB Mode==. 
The complete usage example is as follows: - -```python -# import required libraries -from vidgear.gears import CamGear -from vidgear.gears import StreamGear -import cv2 - -# open any valid video stream(for e.g `foo1.mp4` file) -stream = CamGear(source='foo1.mp4').start() - -# describe a suitable manifest-file location/name -streamer = StreamGear(output="dash_out.mpd") - -# loop over -while True: - - # read frames from stream - frame = stream.read() - - # check for frame if Nonetype - if frame is None: - break - - - # {simulating RGB frame for this example} - frame_rgb = frame[:,:,::-1] - - - # send frame to streamer - streamer.stream(frame_rgb, rgb_mode = True) #activate RGB Mode - - # Show output window - cv2.imshow("Output Frame", frame) - - # check for 'q' key if pressed - key = cv2.waitKey(1) & 0xFF - if key == ord("q"): - break - -# close output window -cv2.destroyAllWindows() - -# safely close video stream -stream.stop() - -# safely close streamer -streamer.terminate() -``` - -  - -### B.4 Bare-Minimum Usage with controlled Input-framerate - -In Real-time Frames Mode, StreamGear API provides exclusive [`-input_framerate`](../params/#a-exclusive-parameters) attribute for its `stream_params` dictionary parameter, that allow us to set the assumed constant framerate for incoming frames. In this example, we will retrieve framerate from webcam video-stream, and set it as value for `-input_framerate` attribute in StreamGear: - -!!! danger "Remember, Input framerate default to `25.0` fps if [`-input_framerate`](../params/#a-exclusive-parameters) attribute value not defined in Real-time Frames mode." - -```python -# import required libraries -from vidgear.gears import CamGear -from vidgear.gears import StreamGear -import cv2 - -# Open live video stream on webcam at first index(i.e. 0) device -stream = CamGear(source=0).start() - -# retrieve framerate from CamGear Stream and pass it as `-input_framerate` value -stream_params = {"-input_framerate":stream.framerate} - -# describe a suitable manifest-file location/name and assign params -streamer = StreamGear(output="dash_out.mpd", **stream_params) - -# loop over -while True: - - # read frames from stream - frame = stream.read() - - # check for frame if Nonetype - if frame is None: - break - - - # {do something with the frame here} - - - # send frame to streamer - streamer.stream(frame) - - # Show output window - cv2.imshow("Output Frame", frame) - - # check for 'q' key if pressed - key = cv2.waitKey(1) & 0xFF - if key == ord("q"): - break - -# close output window -cv2.destroyAllWindows() - -# safely close video stream -stream.stop() - -# safely close streamer -streamer.terminate() -``` - -  - -### B.5 Bare-Minimum Usage with OpenCV - -You can easily use StreamGear API directly with any other Video Processing library(_For e.g. [OpenCV](https://github.com/opencv/opencv) itself_) in Real-time Frames Mode. The complete usage example is as follows: - -!!! tip "This just a bare-minimum example with OpenCV, but any other Real-time Frames Mode feature/example will work in the similar manner." - -```python -# import required libraries -from vidgear.gears import StreamGear -import cv2 - -# Open suitable video stream, such as webcam on first index(i.e. 
0) -stream = cv2.VideoCapture(0) - -# describe a suitable manifest-file location/name -streamer = StreamGear(output="dash_out.mpd") - -# loop over -while True: - - # read frames from stream - (grabbed, frame) = stream.read() - - # check for frame if not grabbed - if not grabbed: - break - - # {do something with the frame here} - # lets convert frame to gray for this example - gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) - - - # send frame to streamer - streamer.stream(gray) - - # Show output window - cv2.imshow("Output Gray Frame", gray) - - # check for 'q' key if pressed - key = cv2.waitKey(1) & 0xFF - if key == ord("q"): - break - -# close output window -cv2.destroyAllWindows() - -# safely close video stream -stream.release() - -# safely close streamer -streamer.terminate() -``` - -  - -### B.6 Usage with Additional Streams - -Similar to Single-Source Mode, you can easily generate any number of additional Secondary Streams of variable bitrates or spatial resolutions, using exclusive [`-streams`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter _(More detailed information can be found [here ➶](../params/#a-exclusive-parameters))_ in Real-time Frames Mode. The complete example is as follows: - -!!! danger "Important `-streams` attribute Information" - * On top of these additional streams, StreamGear by default, generates a primary stream of same resolution and framerate[^1] as the input, at the index `0`. - * :warning: Make sure your System/Machine/Server/Network is able to handle these additional streams, discretion is advised! - * You **MUST** need to define `-resolution` value for your stream, otherwise stream will be discarded! - * You only need either of `-video_bitrate` or `-framerate` for defining a valid stream. Since with `-framerate` value defined, video-bitrate is calculated automatically. - * If you define both `-video_bitrate` and `-framerate` values at the same time, StreamGear will discard the `-framerate` value automatically. - -!!! fail "Always use `-stream` attribute to define additional streams safely, any duplicate or incorrect definition can break things!" - -```python -# import required libraries -from vidgear.gears import CamGear -from vidgear.gears import StreamGear -import cv2 - -# Open suitable video stream, such as webcam on first index(i.e. 
0)
-stream = CamGear(source=0).start()
-
-# define various streams
-stream_params = {
-    "-streams": [
-        {"-resolution": "1280x720", "-framerate": 30.0},  # Stream1: 1280x720 at 30fps framerate
-        {"-resolution": "640x360", "-framerate": 60.0},  # Stream2: 640x360 at 60fps framerate
-        {"-resolution": "320x240", "-video_bitrate": "500k"},  # Stream3: 320x240 at 500kbps bitrate
-    ],
-}
-
-# describe a suitable manifest-file location/name and assign params
-streamer = StreamGear(output="dash_out.mpd", **stream_params)
-
-# loop over
-while True:
-
-    # read frames from stream
-    frame = stream.read()
-
-    # check for frame if Nonetype
-    if frame is None:
-        break
-
-    # {do something with the frame here}
-
-    # send frame to streamer
-    streamer.stream(frame)
-
-    # Show output window
-    cv2.imshow("Output Frame", frame)
-
-    # check for 'q' key if pressed
-    key = cv2.waitKey(1) & 0xFF
-    if key == ord("q"):
-        break
-
-# close output window
-cv2.destroyAllWindows()
-
-# safely close video stream
-stream.stop()
-
-# safely close streamer
-streamer.terminate()
-```
-
-&thinsp;
-
-### B.7 Usage with Audio-Input
-
-In Real-time Frames Mode, if you want to add audio to your streams, you have to use the exclusive [`-audio`](../params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter. You need to input the path of your audio to this attribute as a string value, and StreamGear API will automatically validate and map it to all generated streams. The complete example is as follows:
-
-!!! failure "Make sure this `-audio` audio-source is compatible with the provided video-source, otherwise you may encounter multiple errors, or no output at all."
-
-!!! warning "You **MUST** use the [`-input_framerate`](../params/#a-exclusive-parameters) attribute to set the exact input framerate when using external audio in Real-time Frames mode, otherwise audio delay will occur in the output streams."
-
-!!! tip "You can also assign a valid Audio URL as input, rather than a filepath. More details can be found [here ➶](../params/#a-exclusive-parameters)"
-
-```python
-# import required libraries
-from vidgear.gears import CamGear
-from vidgear.gears import StreamGear
-import cv2
-
-# open any valid video stream(for e.g `foo1.mp4` file)
-stream = CamGear(source='foo1.mp4').start()
-
-# add various streams, along with custom audio
-stream_params = {
-    "-streams": [
-        {"-resolution": "1920x1080", "-video_bitrate": "4000k"},  # Stream1: 1920x1080 at 4000kbps bitrate
-        {"-resolution": "1280x720", "-framerate": 30.0},  # Stream2: 1280x720 at 30fps
-        {"-resolution": "640x360", "-framerate": 60.0},  # Stream3: 640x360 at 60fps
-    ],
-    "-input_framerate": stream.framerate,  # controlled framerate for audio-video sync !!! don't forget this line !!!
-    "-audio": "/home/foo/foo1.aac"  # assigns input audio-source: "/home/foo/foo1.aac"
-}
-
-# describe a suitable manifest-file location/name and assign params
-streamer = StreamGear(output="dash_out.mpd", **stream_params)
-
-# loop over
-while True:
-
-    # read frames from stream
-    frame = stream.read()
-
-    # check for frame if Nonetype
-    if frame is None:
-        break
-
-    # {do something with the frame here}
-
-    # send frame to streamer
-    streamer.stream(frame)
-
-    # Show output window
-    cv2.imshow("Output Frame", frame)
-
-    # check for 'q' key if pressed
-    key = cv2.waitKey(1) & 0xFF
-    if key == ord("q"):
-        break
-
-# close output window
-cv2.destroyAllWindows()
-
-# safely close video stream
-stream.stop()
-
-# safely close streamer
-streamer.terminate()
-```
-
-&thinsp;
-
-### B.8 Usage with Hardware Video-Encoder
-
-In Real-time Frames Mode, you can also easily change the encoder as per your requirement, just by passing the `-vcodec` FFmpeg parameter as an attribute in the `stream_params` dictionary parameter. In addition, you can also specify additional properties/features/optimizations for your system's GPU in the same way.
-
-In this example, we will be using `h264_vaapi` as our hardware encoder, and also optionally specifying our device hardware's location (i.e. `'-vaapi_device':'/dev/dri/renderD128'`) and other features such as `'-vf':'format=nv12,hwupload'`, by formatting them as `stream_params` dictionary parameter attributes, as follows:
-
-!!! warning "Check VAAPI support"
-
-    **This example is just conveying the idea of how to use FFmpeg's hardware encoders with StreamGear API in Real-time Frames Mode, which MAY/MAY-NOT suit your system. Kindly use suitable parameters based on your supported system and FFmpeg configurations only.**
-
-    To use the `h264_vaapi` encoder, remember to check if it is available, and whether your FFmpeg is compiled with VAAPI support. You can easily do this by executing the following one-liner command in your terminal, and observing whether the output contains something similar to the following:
-
-    ```sh
-    ffmpeg -hide_banner -encoders | grep vaapi
-
-     V..... h264_vaapi H.264/AVC (VAAPI) (codec h264)
-     V..... hevc_vaapi H.265/HEVC (VAAPI) (codec hevc)
-     V..... mjpeg_vaapi MJPEG (VAAPI) (codec mjpeg)
-     V..... mpeg2_vaapi MPEG-2 (VAAPI) (codec mpeg2video)
-     V..... vp8_vaapi VP8 (VAAPI) (codec vp8)
-    ```
-
-```python
-# import required libraries
-from vidgear.gears import VideoGear
-from vidgear.gears import StreamGear
-import cv2
-
-# Open suitable video stream, such as webcam on first index(i.e.
0) -stream = VideoGear(source=0).start() - -# add various streams with custom Video Encoder and optimizations -stream_params = { - "-streams": [ - {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate - {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps - {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps - ], - "-vcodec": "h264_vaapi", # define custom Video encoder - "-vaapi_device": "/dev/dri/renderD128", # define device location - "-vf": "format=nv12,hwupload", # define video pixformat -} - -# describe a suitable manifest-file location/name and assign params -streamer = StreamGear(output="dash_out.mpd", **stream_params) - -# loop over -while True: - - # read frames from stream - frame = stream.read() - - # check for frame if Nonetype - if frame is None: - break - - - # {do something with the frame here} - - - # send frame to streamer - streamer.stream(frame) - - # Show output window - cv2.imshow("Output Frame", frame) - - # check for 'q' key if pressed - key = cv2.waitKey(1) & 0xFF - if key == ord("q"): - break - -# close output window -cv2.destroyAllWindows() - -# safely close video stream -stream.stop() - -# safely close streamer -streamer.terminate() -``` - -  - -[^1]: - :bulb: In Real-time Frames Mode, the Primary Stream's framerate defaults to [`-input_framerate`](../params/#a-exclusive-parameters) attribute value, if defined, else it will be 25fps. \ No newline at end of file diff --git a/docs/gears/videogear/overview.md b/docs/gears/videogear/overview.md index 75768e1e7..b13be384a 100644 --- a/docs/gears/videogear/overview.md +++ b/docs/gears/videogear/overview.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -37,13 +37,10 @@ VideoGear is ideal when you need to switch to different video sources without ch !!! tip "Helpful Tips" - * If you're already familar with [OpenCV](https://github.com/opencv/opencv) library, then see [Switching from OpenCV ➶](../../switch_from_cv/#switching-videocapture-apis) + * If you're already familar with [OpenCV](https://github.com/opencv/opencv) library, then see [Switching from OpenCV ➶](../../../switch_from_cv/#switching-videocapture-apis) * It is advised to enable logging(`logging = True`) on the first run for easily identifying any runtime errors. - -!!! warning "Make sure to [enable Raspberry Pi hardware-specific settings](https://picamera.readthedocs.io/en/release-1.13/quickstart.html) prior using PiGear API, otherwise nothing will work." -   ## Importing diff --git a/docs/gears/videogear/params.md b/docs/gears/videogear/params.md index 19cbc529e..2e70add7c 100644 --- a/docs/gears/videogear/params.md +++ b/docs/gears/videogear/params.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/docs/gears/videogear/usage.md b/docs/gears/videogear/usage.md index 2442829de..3f41d1ab8 100644 --- a/docs/gears/videogear/usage.md +++ b/docs/gears/videogear/usage.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -26,6 +26,8 @@ limitations under the License. ## Bare-Minimum Usage with CamGear backend +!!! abstract "VideoGear by default provides direct internal access to [CamGear API](../../camgear/overview/)." + Following is the bare-minimum code you need to access CamGear API with VideoGear: ```python @@ -34,7 +36,7 @@ from vidgear.gears import VideoGear import cv2 -# open any valid video stream(for e.g `myvideo.avi` file +# open any valid video stream(for e.g `myvideo.avi` file) stream = VideoGear(source="myvideo.avi").start() # loop over @@ -69,8 +71,12 @@ stream.stop() ## Bare-Minimum Usage with PiGear backend +!!! abstract "VideoGear contains a special [`enablePiCamera`](../params/#enablepicamera) flag that when `True` provides internal access to [PiGear API](../../pigear/overview/)." + Following is the bare-minimum code you need to access PiGear API with VideoGear: +!!! warning "Make sure to [enable Raspberry Pi hardware-specific settings](https://picamera.readthedocs.io/en/release-1.13/quickstart.html) prior using PiGear Backend, otherwise nothing will work." + ```python # import required libraries from vidgear.gears import VideoGear @@ -111,7 +117,9 @@ stream.stop() ## Using VideoGear with Video Stabilizer backend -VideoGear API provides a special internal wrapper around VidGear's Exclusive [**Video Stabilizer**](../../stabilizer/overview/) class and provides easy way of activating stabilization for various video-streams _(real-time or not)_ with its [`stabilize`](../params/#stabilize) boolean parameter during initialization. The complete usage example is as follows: +!!! abstract "VideoGear API provides a special internal wrapper around VidGear's Exclusive [**Video Stabilizer**](../../stabilizer/overview/) class and provides easy way of activating stabilization for various video-streams _(real-time or not)_ with its [`stabilize`](../params/#stabilize) boolean parameter during initialization." + +The usage example is as follows: !!! tip "For a more detailed information on Video-Stabilizer Class, Read [here ➶](../../stabilizer/overview/)" @@ -155,10 +163,15 @@ stream_stab.stop()   -VideoGear contains a special [`enablePiCamera`](../params/#enablepicamera) flag that provides internal access to both CamGear and PiGear APIs, and thereby only one of them can be accessed at a given instance. Therefore, the additional parameters of VideoGear API are also based on API _([PiGear API](../params/#parameters-with-pigear-backend) or [CamGear API](../params/#parameters-with-camgear-backend))_ being accessed. The complete usage example of VideoGear API with Variable PiCamera Properties is as follows: +## Advanced VideoGear usage with PiGear Backend + +!!! abstract "VideoGear provides internal access to both CamGear and PiGear APIs, and thereby all additional parameters of [PiGear API](../params/#parameters-with-pigear-backend) or [CamGear API](../params/#parameters-with-camgear-backend) are also easily accessible within VideoGear API." 
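Before the Variable PiCamera Properties example that follows, here is a minimal sketch of the equivalent CamGear-backend case, where OpenCV capture properties are forwarded straight through VideoGear's options dictionary _(the webcam index `0` and the property values below are only illustrative assumptions)_:

```python
# import required libraries
from vidgear.gears import VideoGear
import cv2

# illustrative OpenCV capture properties, forwarded to the CamGear backend
options = {
    "CAP_PROP_FRAME_WIDTH": 320,
    "CAP_PROP_FRAME_HEIGHT": 240,
    "CAP_PROP_FPS": 30,
}

# open webcam at index 0 with the CamGear backend (enablePiCamera is False by default)
stream = VideoGear(source=0, logging=True, **options).start()

# loop over frames
while True:
    frame = stream.read()
    if frame is None:
        break

    # {do something with the frame here}

    cv2.imshow("Output Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

# close output window and safely close video stream
cv2.destroyAllWindows()
stream.stop()
```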
+ +The usage example of VideoGear API with Variable PiCamera Properties is as follows: !!! info "This example is basically a VideoGear API implementation of this [PiGear usage example](../../pigear/usage/#using-pigear-with-variable-camera-properties). Thereby, any [CamGear](../../camgear/usage/) or [PiGear](../../pigear/usage/) usage examples can be implemented with VideoGear API in the similar manner." +!!! warning "Make sure to [enable Raspberry Pi hardware-specific settings](https://picamera.readthedocs.io/en/release-1.13/quickstart.html) prior using PiGear Backend, otherwise nothing will work." ```python # import required libraries @@ -212,11 +225,11 @@ stream.stop() ## Using VideoGear with Colorspace Manipulation -VideoGear API also supports **Colorspace Manipulation** but not direct like other VideoCapture Gears. +VideoGear API also supports **Colorspace Manipulation** but **NOT Direct** like other VideoCapture Gears. !!! danger "Important" - * `color_space` global variable is **NOT Supported** in VideoGear API, calling it will result in `AttribueError`. More details can be found [here ➶](../../../bonus/colorspace_manipulation/#using-color_space-global-variable) + * `color_space` global variable is **NOT Supported** in VideoGear API, calling it will result in `AttribueError`. More details can be found [here ➶](../../../bonus/colorspace_manipulation/#source-colorspace-manipulation) * Any incorrect or None-type value on [`colorspace`](../params/#colorspace) parameter will be skipped automatically. @@ -261,4 +274,10 @@ cv2.destroyAllWindows() stream.stop() ``` +  + +## Bonus Examples + +!!! example "Checkout more advanced VideoGear examples with unusual configuration [here ➶](../../../help/videogear_ex/)" +   \ No newline at end of file diff --git a/docs/gears/webgear/advanced.md b/docs/gears/webgear/advanced.md index 28940c432..c0a735366 100644 --- a/docs/gears/webgear/advanced.md +++ b/docs/gears/webgear/advanced.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -23,6 +23,50 @@ limitations under the License. !!! note "This is a continuation of the [WebGear doc ➶](../overview/#webgear-api). Thereby, It's advised to first get familiarize with this API, and its [requirements](../usage/#requirements)." +  + + +### Using WebGear with Variable Colorspace + +WebGear by default only supports "BGR" colorspace frames as input, but you can use [`jpeg_compression_colorspace`](../params/#webgear-specific-attributes) string attribute through its options dictionary parameter to specify incoming frames colorspace. + +Let's implement a bare-minimum example using WebGear, where we will be sending [**GRAY**](https://en.wikipedia.org/wiki/Grayscale) frames to client browser: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. + +!!! example "This example works in conjunction with [Source ColorSpace manipulation for VideoCapture Gears ➶](../../../../bonus/colorspace_manipulation/#source-colorspace-manipulation)" + +!!! info "Supported `jpeg_compression_colorspace` colorspace values are `RGB`, `BGR`, `RGBX`, `BGRX`, `XBGR`, `XRGB`, `GRAY`, `RGBA`, `BGRA`, `ABGR`, `ARGB`, `CMYK`. 
More information can be found [here ➶](https://gitlab.com/jfolz/simplejpeg)" + +```python +# import required libraries +import uvicorn +from vidgear.gears.asyncio import WebGear + +# various performance tweaks and enable grayscale input +options = { + "frame_size_reduction": 25, + "jpeg_compression_colorspace": "GRAY", # set grayscale + "jpeg_compression_quality": 90, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": True, +} + +# initialize WebGear app and change its colorspace to grayscale +web = WebGear( + source="foo.mp4", colorspace="COLOR_BGR2GRAY", logging=True, **options +) + +# run this app on Uvicorn server at address http://0.0.0.0:8000/ +uvicorn.run(web(), host="0.0.0.0", port=8000) + +# close app safely +web.shutdown() +``` + +**And that's all, Now you can see output at [`http://localhost:8000/`](http://localhost:8000/) address on your local machine.** +   ## Using WebGear with a Custom Source(OpenCV) @@ -30,7 +74,9 @@ limitations under the License. !!! new "New in v0.2.1" This example was added in `v0.2.1`. -WebGear allows you to easily define your own custom Source that you want to use to manipulate your frames before sending them onto the browser. +WebGear allows you to easily define your own custom Source that you want to use to transform your frames before sending them onto the browser. + +!!! warning "JPEG Frame-Compression and all of its [performance enhancing attributes](../usage/#performance-enhancements) are disabled with a Custom Source!" Let's implement a bare-minimum example with a Custom Source using WebGear API and OpenCV: @@ -62,12 +108,12 @@ async def my_frame_producer(): # do something with your OpenCV frame here # reducer frames size if you want more performance otherwise comment this line - frame = await reducer(frame, percentage=30) # reduce frame by 30% + frame = await reducer(frame, percentage=30, interpolation=cv2.INTER_AREA) # reduce frame by 30% # handle JPEG encoding encodedImage = cv2.imencode(".jpg", frame)[1].tobytes() # yield frame in byte format - yield (b"--frame\r\nContent-Type:video/jpeg2000\r\n\r\n" + encodedImage + b"\r\n") - await asyncio.sleep(0.00001) + yield (b"--frame\r\nContent-Type:image/jpeg\r\n\r\n" + encodedImage + b"\r\n") + await asyncio.sleep(0) # close stream stream.release() @@ -101,9 +147,9 @@ from vidgear.gears.asyncio import WebGear # various performance tweaks options = { "frame_size_reduction": 40, - "frame_jpeg_quality": 80, - "frame_jpeg_optimize": True, - "frame_jpeg_progressive": False, + "jpeg_compression_quality": 80, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": False, } # initialize WebGear app @@ -172,9 +218,9 @@ async def hello_world(request): # add various performance tweaks as usual options = { "frame_size_reduction": 40, - "frame_jpeg_quality": 80, - "frame_jpeg_optimize": True, - "frame_jpeg_progressive": False, + "jpeg_compression_quality": 80, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": False, } # initialize WebGear app with a valid source @@ -195,56 +241,51 @@ web.shutdown()   -## Rules for Altering WebGear Files and Folders - -WebGear gives us complete freedom of altering data files generated in [**Auto-Generation Process**](../overview/#auto-generation-process), But you've to keep the following rules in mind: +## Using WebGear with MiddleWares -### Rules for Altering Data Files - -- [x] You allowed to alter/change code in all existing [default downloaded files](../overview/#auto-generation-process) at your convenience without any 
restrictions. -- [x] You allowed to delete/rename all existing data files, except remember **NOT** to delete/rename three critical data-files (i.e `index.html`, `404.html` & `500.html`) present in `templates` folder inside the `webgear` directory at the [default location](../overview/#default-location), otherwise, it will trigger [Auto-generation process](../overview/#auto-generation-process), and it will overwrite the existing files with Server ones. -- [x] You're allowed to add your own additional `.html`, `.css`, `.js`, etc. files in the respective folders at the [**default location**](../overview/#default-location) and [custom mounted Data folders](#using-webgear-with-custom-mounting-points). - -### Rules for Altering Data Folders - -- [x] You're allowed to add/mount any number of additional folder as shown in [this example above](#using-webgear-with-custom-mounting-points). -- [x] You're allowed to delete/rename existing folders at the [**default location**](../overview/#default-location) except remember **NOT** to delete/rename `templates` folder in the `webgear` directory where critical data-files (i.e `index.html`, `404.html` & `500.html`) are located, otherwise, it will trigger [Auto-generation process](../overview/#auto-generation-process). +WebGear natively supports ASGI middleware classes with Starlette for implementing behavior that is applied across your entire ASGI application easily. -  +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. -## Bonus Usage Examples +!!! info "All supported middlewares can be found [here ➶](https://www.starlette.io/middleware/)" -Because of WebGear API's flexible internal wapper around [VideoGear](../../videogear/overview/), it can easily access any parameter of [CamGear](#camgear) and [PiGear](#pigear) videocapture APIs. +For this example, let's use [`CORSMiddleware`](https://www.starlette.io/middleware/#corsmiddleware) for implementing appropriate [CORS headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) to outgoing responses in our application in order to allow cross-origin requests from browsers, as follows: -!!! info "Following usage examples are just an idea of what can be done with WebGear API, you can try various [VideoGear](../../videogear/params/), [CamGear](../../camgear/params/) and [PiGear](../../pigear/params/) parameters directly in WebGear API in the similar manner." +!!! danger "The default parameters used by the CORSMiddleware implementation are restrictive by default, so you'll need to explicitly enable particular origins, methods, or headers, in order for browsers to be permitted to use them in a Cross-Domain context." -### Using WebGear with Pi Camera Module - -Here's a bare-minimum example of using WebGear API with the Raspberry Pi camera module while tweaking its various properties in just one-liner: +!!! tip "Starlette provides several arguments for enabling origins, methods, or headers for CORSMiddleware API. 
More information can be found [here ➶](https://www.starlette.io/middleware/#corsmiddleware)" ```python # import libs -import uvicorn +import uvicorn, asyncio +from starlette.middleware import Middleware +from starlette.middleware.cors import CORSMiddleware from vidgear.gears.asyncio import WebGear -# various webgear performance and Raspberry Pi camera tweaks +# add various performance tweaks as usual options = { "frame_size_reduction": 40, - "frame_jpeg_quality": 80, - "frame_jpeg_optimize": True, - "frame_jpeg_progressive": False, - "hflip": True, - "exposure_mode": "auto", - "iso": 800, - "exposure_compensation": 15, - "awb_mode": "horizon", - "sensor_mode": 0, + "jpeg_compression_quality": 80, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": False, } -# initialize WebGear app +# initialize WebGear app with a valid source web = WebGear( - enablePiCamera=True, resolution=(640, 480), framerate=60, logging=True, **options -) + source="/home/foo/foo1.mp4", logging=True, **options +) # enable source i.e. `test.mp4` and enable `logging` for debugging + +# define and assign suitable cors middlewares +web.middleware = [ + Middleware( + CORSMiddleware, + allow_origins=["*"], + allow_credentials=True, + allow_methods=["*"], + allow_headers=["*"], + ) +] # run this app on Uvicorn server at address http://localhost:8000/ uvicorn.run(web(), host="localhost", port=8000) @@ -252,35 +293,29 @@ uvicorn.run(web(), host="localhost", port=8000) # close app safely web.shutdown() ``` +**And that's all, Now you can see output at [`http://localhost:8000`](http://localhost:8000) address.**   -### Using WebGear with real-time Video Stabilization enabled - -Here's an example of using WebGear API with real-time Video Stabilization enabled: +## Rules for Altering WebGear Files and Folders -```python -# import libs -import uvicorn -from vidgear.gears.asyncio import WebGear +WebGear gives us complete freedom of altering data files generated in [**Auto-Generation Process**](../overview/#auto-generation-process), But you've to keep the following rules in mind: -# various webgear performance tweaks -options = { - "frame_size_reduction": 40, - "frame_jpeg_quality": 80, - "frame_jpeg_optimize": True, - "frame_jpeg_progressive": False, -} +### Rules for Altering Data Files + +- [x] You allowed to alter/change code in all existing [default downloaded files](../overview/#auto-generation-process) at your convenience without any restrictions. +- [x] You allowed to delete/rename all existing data files, except remember **NOT** to delete/rename three critical data-files (i.e `index.html`, `404.html` & `500.html`) present in `templates` folder inside the `webgear` directory at the [default location](../overview/#default-location), otherwise, it will trigger [Auto-generation process](../overview/#auto-generation-process), and it will overwrite the existing files with Server ones. +- [x] You're allowed to add your own additional `.html`, `.css`, `.js`, etc. files in the respective folders at the [**default location**](../overview/#default-location) and [custom mounted Data folders](#using-webgear-with-custom-mounting-points). -# initialize WebGear app with a raw source and enable video stabilization(`stabilize=True`) -web = WebGear(source="foo.mp4", stabilize=True, logging=True, **options) +### Rules for Altering Data Folders + +- [x] You're allowed to add/mount any number of additional folder as shown in [this example above](#using-webgear-with-custom-mounting-points). 
+- [x] You're allowed to delete/rename existing folders at the [**default location**](../overview/#default-location) except remember **NOT** to delete/rename `templates` folder in the `webgear` directory where critical data-files (i.e `index.html`, `404.html` & `500.html`) are located, otherwise, it will trigger [Auto-generation process](../overview/#auto-generation-process). -# run this app on Uvicorn server at address http://localhost:8000/ -uvicorn.run(web(), host="localhost", port=8000) +  -# close app safely -web.shutdown() -``` +## Bonus Examples + +!!! example "Checkout more advanced WebGear examples with unusual configuration [here ➶](../../../help/webgear_ex/)"   - \ No newline at end of file diff --git a/docs/gears/webgear/overview.md b/docs/gears/webgear/overview.md index 67267453a..a08a07187 100644 --- a/docs/gears/webgear/overview.md +++ b/docs/gears/webgear/overview.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/gears/webgear/params.md b/docs/gears/webgear/params.md index 95c0c586c..d82839c7d 100644 --- a/docs/gears/webgear/params.md +++ b/docs/gears/webgear/params.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -75,7 +75,7 @@ This parameter can be used to pass user-defined parameter to WebGear API by form WebGear(logging=True, **options) ``` -* **`frame_size_reduction`** _(int/float)_ : This attribute controls the size reduction _(in percentage)_ of the frame to be streamed on Server. The value defaults to `20`, and must be no higher than `90` _(fastest, max compression, Barely Visible frame-size)_ and no lower than `0` _(slowest, no compression, Original frame-size)_. Its recommended value is between `40-60`. Its usage is as follows: +* **`frame_size_reduction`** _(int/float)_ : This attribute controls the size reduction _(in percentage)_ of the frame to be streamed on Server and it has the most significant effect on performance. The value defaults to `25`, and must be no higher than `90` _(fastest, max compression, Barely Visible frame-size)_ and no lower than `0` _(slowest, no compression, Original frame-size)_. Its recommended value is between `40-60`. Its usage is as follows: ```python # frame-size will be reduced by 50% @@ -84,49 +84,67 @@ This parameter can be used to pass user-defined parameter to WebGear API by form WebGear(logging=True, **options) ``` -* **`enable_infinite_frames`** _(boolean)_ : Can be used to continue streaming _(instead of terminating immediately)_ with emulated blank frames with text "No Input", whenever the input source disconnects. Its default value is `False`. Its usage is as follows +* **`jpeg_compression_quality`**: _(int/float)_ This attribute controls the JPEG quantization factor. Its value varies from `10` to `100` (the higher is the better quality but performance will be lower). Its default value is `90`. Its usage is as follows: - !!! 
new "New in v0.2.1" - `enable_infinite_frames` attribute was added in `v0.2.1`. + !!! new "New in v0.2.2" + `enable_infinite_frames` attribute was added in `v0.2.2`. ```python - # emulate infinite frames - options = {"enable_infinite_frames": True} + # activate jpeg encoding and set quality 95% + options = {"jpeg_compression_quality": 95} # assign it WebGear(logging=True, **options) ``` -* **Various Encoding Parameters:** +* **`jpeg_compression_fastdct`**: _(bool)_ This attribute if True, WebGear API uses fastest DCT method that speeds up decoding by 4-5% for a minor loss in quality. Its default value is also `True`, and its usage is as follows: - In WebGear, the input video frames are first encoded into [**Motion JPEG (M-JPEG or MJPEG**)](https://en.wikipedia.org/wiki/Motion_JPEG) video compression format in which each video frame or interlaced field of a digital video sequence is compressed separately as a JPEG image, before sending onto a server. Therefore, WebGear API provides various attributes to have full control over JPEG encoding performance and quality, which are as follows: + !!! new "New in v0.2.2" + `enable_infinite_frames` attribute was added in `v0.2.2`. + ```python + # activate jpeg encoding and enable fast dct + options = {"jpeg_compression_fastdct": True} + # assign it + WebGear(logging=True, **options) + ``` - * **`frame_jpeg_quality`** _(integer)_ : It controls the JPEG encoder quality and value varies from `0` to `100` (the higher is the better quality but performance will be lower). Its default value is `95`. Its usage is as follows: +* **`jpeg_compression_fastupsample`**: _(bool)_ This attribute if True, WebGear API use fastest color upsampling method. Its default value is `False`, and its usage is as follows: - ```python - # JPEG will be encoded at 80% quality - options = {"frame_jpeg_quality": 80} - # assign it - WebGear(logging=True, **options) - ``` + !!! new "New in v0.2.2" + `enable_infinite_frames` attribute was added in `v0.2.2`. - * **`frame_jpeg_optimize`** _(boolean)_ : It enables various JPEG compression optimizations such as Chroma subsampling, Quantization table, etc. Its default value is `False`. Its usage is as follows: + ```python + # activate jpeg encoding and enable fast upsampling + options = {"jpeg_compression_fastupsample": True} + # assign it + WebGear(logging=True, **options) + ``` - ```python - # JPEG optimizations are enabled - options = {"frame_jpeg_optimize": True} - # assign it - WebGear(logging=True, **options) - ``` + * **`jpeg_compression_colorspace`**: _(str)_ This internal attribute is used to specify incoming frames colorspace with compression. Its usage is as follows: - * **`frame_jpeg_progressive`** _(boolean)_ : It enables **Progressive** JPEG encoding instead of the **Baseline**. Progressive Mode. Its default value is `False` means baseline mode is in-use. Its usage is as follows: + !!! info "Supported `jpeg_compression_colorspace` colorspace values are `RGB`, `BGR`, `RGBX`, `BGRX`, `XBGR`, `XRGB`, `GRAY`, `RGBA`, `BGRA`, `ABGR`, `ARGB`, `CMYK`. More information can be found [here ➶](https://gitlab.com/jfolz/simplejpeg)" - ```python - # Progressive JPEG encoding enabled - options = {"frame_jpeg_progressive": True} - # assign it - WebGear(logging=True, **options) - ``` + !!! new "New in v0.2.2" + `enable_infinite_frames` attribute was added in `v0.2.2`. 
+ + ```python + # Specify incoming frames are `grayscale` + options = {"jpeg_compression": "GRAY"} + # assign it + WebGear(logging=True, **options) + ``` + +* **`enable_infinite_frames`** _(boolean)_ : Can be used to continue streaming _(instead of terminating immediately)_ with emulated blank frames with text "No Input", whenever the input source disconnects. Its default value is `False`. Its usage is as follows + + !!! new "New in v0.2.1" + `enable_infinite_frames` attribute was added in `v0.2.1`. + + ```python + # emulate infinite frames + options = {"enable_infinite_frames": True} + # assign it + WebGear(logging=True, **options) + ```   diff --git a/docs/gears/webgear/usage.md b/docs/gears/webgear/usage.md index 2a24c69a3..3d6a66983 100644 --- a/docs/gears/webgear/usage.md +++ b/docs/gears/webgear/usage.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -54,24 +54,27 @@ WebGear provides certain performance enhancing attributes for its [`options`](.. * **Various Encoding Parameters:** - In WebGear API, the input video frames are first encoded into [**Motion JPEG (M-JPEG or MJPEG**)](https://en.wikipedia.org/wiki/Motion_JPEG) compression format, in which each video frame or interlaced field of a digital video sequence is compressed separately as a JPEG image, before sending onto a server. Therefore, WebGear API provides various attributes to have full control over JPEG encoding performance and quality, which are as follows: + In WebGear API, the input video frames are first encoded into [**Motion JPEG (M-JPEG or MJPEG**)](https://en.wikipedia.org/wiki/Motion_JPEG) compression format, in which each video frame or interlaced field of a digital video sequence is compressed separately as a JPEG image using [`simplejpeg`](https://gitlab.com/jfolz/simplejpeg) library, before sending onto a server. Therefore, WebGear API provides various attributes to have full control over JPEG encoding performance and quality, which are as follows: - * **`frame_jpeg_quality`**: _(int)_ It controls the JPEG encoder quality. Its value varies from `0` to `100` (the higher is the better quality but performance will be lower). Its default value is `95`. Its usage is as follows: + * **`jpeg_compression_quality`**: _(int/float)_ This attribute controls the JPEG quantization factor. Its value varies from `10` to `100` (the higher is the better quality but performance will be lower). Its default value is `90`. Its usage is as follows: ```python - options={"frame_jpeg_quality": 80} #JPEG will be encoded at 80% quality. + # activate jpeg encoding and set quality 95% + options = {"jpeg_compression_quality": 95} ``` - * **`frame_jpeg_optimize`**: _(bool)_ It enables various JPEG compression optimizations such as Chroma sub-sampling, Quantization table, etc. These optimizations based on JPEG libs which are used while compiling OpenCV binaries, and recent versions of OpenCV uses [**TurboJPEG library**](https://libjpeg-turbo.org/), which is highly recommended for performance. Its default value is `False`. Its usage is as follows: - + * **`jpeg_compression_fastdct`**: _(bool)_ This attribute if True, WebGear API uses fastest DCT method that speeds up decoding by 4-5% for a minor loss in quality. 
Its default value is also `True`, and its usage is as follows: + ```python - options={"frame_jpeg_optimize": True} #JPEG optimizations are enabled. + # activate jpeg encoding and enable fast dct + options = {"jpeg_compression_fastdct": True} ``` - * **`frame_jpeg_progressive`**: _(bool)_ It enables **Progressive** JPEG encoding instead of the **Baseline**. Progressive Mode, displays an image in such a way that it shows a blurry/low-quality photo in its entirety, and then becomes clearer as the image downloads, whereas in Baseline Mode, an image created using the JPEG compression algorithm that will start to display the image as the data is made available, line by line. Progressive Mode, can drastically improve the performance in WebGear but at the expense of additional CPU load, thereby suitable for powerful systems only. Its default value is `False` meaning baseline mode is in-use. Its usage is as follows: - + * **`jpeg_compression_fastupsample`**: _(bool)_ This attribute if True, WebGear API use fastest color upsampling method. Its default value is `False`, and its usage is as follows: + ```python - options={"frame_jpeg_progressive": True} #Progressive JPEG encoding enabled. + # activate jpeg encoding and enable fast upsampling + options = {"jpeg_compression_fastupsample": True} ```   @@ -85,7 +88,7 @@ Let's implement our Bare-Minimum usage example with these [**Performance Enhanci You can access and run WebGear VideoStreamer Server programmatically in your python script in just a few lines of code, as follows: -!!! tip "For accessing WebGear on different Client Devices on the network, use `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine. More information can be found [here ➶](./../../../help/webgear_faqs/#is-it-possible-to-stream-on-a-different-device-on-the-network-with-webgear)" +!!! tip "For accessing WebGear on different Client Devices on the network, use `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine. More information can be found [here ➶](../../../help/webgear_faqs/#is-it-possible-to-stream-on-a-different-device-on-the-network-with-webgear)" ```python @@ -96,9 +99,9 @@ from vidgear.gears.asyncio import WebGear # various performance tweaks options = { "frame_size_reduction": 40, - "frame_jpeg_quality": 80, - "frame_jpeg_optimize": True, - "frame_jpeg_progressive": False, + "jpeg_compression_quality": 80, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": False, } # initialize WebGear app @@ -111,7 +114,7 @@ uvicorn.run(web(), host="localhost", port=8000) web.shutdown() ``` -which can be accessed on any browser on the network at http://localhost:8000/. +which can be accessed on any browser on your machine at http://localhost:8000/. ### Running from Terminal @@ -123,7 +126,7 @@ You can also access and run WebGear Server directly from the terminal commandlin !!! warning "If you're using `--options/-op` flag, then kindly wrap your dictionary value in single `''` quotes." ```sh -python3 -m vidgear.gears.asyncio --source test.avi --logging True --options '{"frame_size_reduction": 50, "frame_jpeg_quality": 80, "frame_jpeg_optimize": True, "frame_jpeg_progressive": False}' +python3 -m vidgear.gears.asyncio --source test.avi --logging True --options '{"frame_size_reduction": 50, "jpeg_compression_quality": 80, "jpeg_compression_fastdct": True, "jpeg_compression_fastupsample": False}' ``` which can also be accessed on any browser on the network at http://localhost:8000/. 
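As the tip above notes, binding to `"0.0.0.0"` instead of `"localhost"` lets other devices on the same network reach the stream. The following is only a minimal sketch of that variant; the `test.avi` source is just a placeholder:

```python
# import required libraries
import uvicorn
from vidgear.gears.asyncio import WebGear

# same performance tweaks as in the example above
options = {
    "frame_size_reduction": 40,
    "jpeg_compression_quality": 80,
    "jpeg_compression_fastdct": True,
    "jpeg_compression_fastupsample": False,
}

# initialize WebGear app with a valid source
web = WebGear(source="test.avi", logging=True, **options)

# bind to all interfaces so clients can connect at http://<host-machine-ip>:8000/
uvicorn.run(web(), host="0.0.0.0", port=8000)

# close app safely
web.shutdown()
```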
@@ -133,8 +136,6 @@ which can also be accessed on any browser on the network at http://localhost:800 You can run `#!py3 python3 -m vidgear.gears.asyncio -h` help command to see all the advanced settings, as follows: - !!! warning "If you're using `--options/-op` flag, then kindly wrap your dictionary value in single `''` quotes." - ```sh usage: python -m vidgear.gears.asyncio [-h] [-m MODE] [-s SOURCE] [-ep ENABLEPICAMERA] [-S STABILIZE] [-cn CAMERA_NUM] [-yt stream_mode] [-b BACKEND] [-cs COLORSPACE] diff --git a/docs/gears/webgear_rtc/advanced.md b/docs/gears/webgear_rtc/advanced.md index 2a1fc2346..2726cdc64 100644 --- a/docs/gears/webgear_rtc/advanced.md +++ b/docs/gears/webgear_rtc/advanced.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -34,7 +34,7 @@ Let's implement a bare-minimum example using WebGear_RTC as Real-time Broadcaste !!! info "[`enable_infinite_frames`](../params/#webgear_rtc-specific-attributes) is enforced by default with this(`enable_live_broadcast`) attribute." -!!! tip "For accessing WebGear_RTC on different Client Devices on the network, use `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine. More information can be found [here ➶](./../../help/webgear_rtc_faqs/#is-it-possible-to-stream-on-a-different-device-on-the-network-with-webgear_rtc)" +!!! tip "For accessing WebGear_RTC on different Client Devices on the network, we use `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine. More information can be found [here ➶](../../../help/webgear_rtc_faqs/#is-it-possible-to-stream-on-a-different-device-on-the-network-with-webgear_rtc)" ```python # import required libraries @@ -50,21 +50,21 @@ options = { # initialize WebGear_RTC app web = WebGear_RTC(source="foo.mp4", logging=True, **options) -# run this app on Uvicorn server at address http://localhost:8000/ -uvicorn.run(web(), host="localhost", port=8000) +# run this app on Uvicorn server at address http://0.0.0.0:8000/ +uvicorn.run(web(), host="0.0.0.0", port=8000) # close app safely web.shutdown() ``` -**And that's all, Now you can see output at [`http://localhost:8000/`](http://localhost:8000/) address.** +**And that's all, Now you can see output at [`http://localhost:8000/`](http://localhost:8000/) address on your local machine.**   ## Using WebGear_RTC with a Custom Source(OpenCV) -WebGear_RTC allows you to easily define your own Custom Media Server with a custom source that you want to use to manipulate your frames before sending them onto the browser. +WebGear_RTC allows you to easily define your own Custom Media Server with a custom source that you want to use to transform your frames before sending them onto the browser. 
Let's implement a bare-minimum example with a Custom Source using WebGear_RTC API and OpenCV: @@ -77,6 +77,7 @@ Let's implement a bare-minimum example with a Custom Source using WebGear_RTC AP import uvicorn, asyncio, cv2 from av import VideoFrame from aiortc import VideoStreamTrack +from aiortc.mediastreams import MediaStreamError from vidgear.gears.asyncio import WebGear_RTC from vidgear.gears.asyncio.helper import reducer @@ -112,7 +113,7 @@ class Custom_RTCServer(VideoStreamTrack): # if NoneType if not grabbed: - return None + return MediaStreamError # reducer frames size if you want more performance otherwise comment this line frame = await reducer(frame, percentage=30) # reduce frame by 30% @@ -145,7 +146,6 @@ uvicorn.run(web(), host="localhost", port=8000) # close app safely web.shutdown() - ``` **And that's all, Now you can see output at [`http://localhost:8000/`](http://localhost:8000/) address.** @@ -255,53 +255,48 @@ web.shutdown()   -## Rules for Altering WebGear_RTC Files and Folders - -WebGear_RTC gives us complete freedom of altering data files generated in [**Auto-Generation Process**](../overview/#auto-generation-process), But you've to keep the following rules in mind: - -### Rules for Altering Data Files - -- [x] You allowed to alter/change code in all existing [default downloaded files](../overview/#auto-generation-process) at your convenience without any restrictions. -- [x] You allowed to delete/rename all existing data files, except remember **NOT** to delete/rename three critical data-files (i.e `index.html`, `404.html` & `500.html`) present in `templates` folder inside the `webgear_rtc` directory at the [default location](../overview/#default-location), otherwise, it will trigger [Auto-generation process](../overview/#auto-generation-process), and it will overwrite the existing files with Server ones. -- [x] You're allowed to add your own additional `.html`, `.css`, `.js`, etc. files in the respective folders at the [**default location**](../overview/#default-location) and [custom mounted Data folders](#using-webgear_rtc-with-custom-mounting-points). +## Using WebGear_RTC with MiddleWares -### Rules for Altering Data Folders - -- [x] You're allowed to add/mount any number of additional folder as shown in [this example above](#using-webgear_rtc-with-custom-mounting-points). -- [x] You're allowed to delete/rename existing folders at the [**default location**](../overview/#default-location) except remember **NOT** to delete/rename `templates` folder in the `webgear_rtc` directory where critical data-files (i.e `index.html`, `404.html` & `500.html`) are located, otherwise, it will trigger [Auto-generation process](../overview/#auto-generation-process). +WebGear_RTC also natively supports ASGI middleware classes with Starlette for implementing behavior that is applied across your entire ASGI application easily. -  +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. -## Bonus Usage Examples +!!! info "All supported middlewares can be found [here ➶](https://www.starlette.io/middleware/)" -Because of WebGear_RTC API's flexible internal wapper around [VideoGear](../../videogear/overview/), it can easily access any parameter of [CamGear](#camgear) and [PiGear](#pigear) videocapture APIs. 
+For this example, let's use [`CORSMiddleware`](https://www.starlette.io/middleware/#corsmiddleware) for implementing appropriate [CORS headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) to outgoing responses in our application in order to allow cross-origin requests from browsers, as follows: -!!! info "Following usage examples are just an idea of what can be done with WebGear_RTC API, you can try various [VideoGear](../../videogear/params/), [CamGear](../../camgear/params/) and [PiGear](../../pigear/params/) parameters directly in WebGear_RTC API in the similar manner." +!!! danger "The default parameters used by the CORSMiddleware implementation are restrictive by default, so you'll need to explicitly enable particular origins, methods, or headers, in order for browsers to be permitted to use them in a Cross-Domain context." -### Using WebGear_RTC with Pi Camera Module - -Here's a bare-minimum example of using WebGear_RTC API with the Raspberry Pi camera module while tweaking its various properties in just one-liner: +!!! tip "Starlette provides several arguments for enabling origins, methods, or headers for CORSMiddleware API. More information can be found [here ➶](https://www.starlette.io/middleware/#corsmiddleware)" ```python # import libs -import uvicorn +import uvicorn, asyncio +from starlette.middleware import Middleware +from starlette.middleware.cors import CORSMiddleware from vidgear.gears.asyncio import WebGear_RTC -# various webgear_rtc performance and Raspberry Pi camera tweaks +# add various performance tweaks as usual options = { "frame_size_reduction": 25, - "hflip": True, - "exposure_mode": "auto", - "iso": 800, - "exposure_compensation": 15, - "awb_mode": "horizon", - "sensor_mode": 0, } -# initialize WebGear_RTC app +# initialize WebGear_RTC app with a valid source web = WebGear_RTC( - enablePiCamera=True, resolution=(640, 480), framerate=60, logging=True, **options -) + source="/home/foo/foo1.mp4", logging=True, **options +) # enable source i.e. `test.mp4` and enable `logging` for debugging + +# define and assign suitable cors middlewares +web.middleware = [ + Middleware( + CORSMiddleware, + allow_origins=["*"], + allow_credentials=True, + allow_methods=["*"], + allow_headers=["*"], + ) +] # run this app on Uvicorn server at address http://localhost:8000/ uvicorn.run(web(), host="localhost", port=8000) @@ -310,31 +305,29 @@ uvicorn.run(web(), host="localhost", port=8000) web.shutdown() ``` +**And that's all, Now you can see output at [`http://localhost:8000`](http://localhost:8000) address.** +   -### Using WebGear_RTC with real-time Video Stabilization enabled - -Here's an example of using WebGear_RTC API with real-time Video Stabilization enabled: +## Rules for Altering WebGear_RTC Files and Folders -```python -# import libs -import uvicorn -from vidgear.gears.asyncio import WebGear_RTC +WebGear_RTC gives us complete freedom of altering data files generated in [**Auto-Generation Process**](../overview/#auto-generation-process), But you've to keep the following rules in mind: -# various webgear_rtc performance tweaks -options = { - "frame_size_reduction": 25, -} +### Rules for Altering Data Files + +- [x] You allowed to alter/change code in all existing [default downloaded files](../overview/#auto-generation-process) at your convenience without any restrictions. 
+- [x] You allowed to delete/rename all existing data files, except remember **NOT** to delete/rename three critical data-files (i.e `index.html`, `404.html` & `500.html`) present in `templates` folder inside the `webgear_rtc` directory at the [default location](../overview/#default-location), otherwise, it will trigger [Auto-generation process](../overview/#auto-generation-process), and it will overwrite the existing files with Server ones. +- [x] You're allowed to add your own additional `.html`, `.css`, `.js`, etc. files in the respective folders at the [**default location**](../overview/#default-location) and [custom mounted Data folders](#using-webgear_rtc-with-custom-mounting-points). -# initialize WebGear_RTC app with a raw source and enable video stabilization(`stabilize=True`) -web = WebGear_RTC(source="foo.mp4", stabilize=True, logging=True, **options) +### Rules for Altering Data Folders + +- [x] You're allowed to add/mount any number of additional folder as shown in [this example above](#using-webgear_rtc-with-custom-mounting-points). +- [x] You're allowed to delete/rename existing folders at the [**default location**](../overview/#default-location) except remember **NOT** to delete/rename `templates` folder in the `webgear_rtc` directory where critical data-files (i.e `index.html`, `404.html` & `500.html`) are located, otherwise, it will trigger [Auto-generation process](../overview/#auto-generation-process). -# run this app on Uvicorn server at address http://localhost:8000/ -uvicorn.run(web(), host="localhost", port=8000) +  -# close app safely -web.shutdown() -``` +## Bonus Examples -  - \ No newline at end of file +!!! example "Checkout more advanced WebGear_RTC examples with unusual configuration [here ➶](../../../help/webgear_rtc_ex/)" + +  \ No newline at end of file diff --git a/docs/gears/webgear_rtc/overview.md b/docs/gears/webgear_rtc/overview.md index a1541b8e9..d7087f9aa 100644 --- a/docs/gears/webgear_rtc/overview.md +++ b/docs/gears/webgear_rtc/overview.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -34,7 +34,7 @@ limitations under the License. WebGear_RTC is implemented with the help of [**aiortc**](https://aiortc.readthedocs.io/en/latest/) library which is built on top of asynchronous I/O framework for Web Real-Time Communication (WebRTC) and Object Real-Time Communication (ORTC) and supports many features like SDP generation/parsing, Interactive Connectivity Establishment with half-trickle and mDNS support, DTLS key and certificate generation, DTLS handshake, etc. -WebGear_RTC can handle [multiple consumers](../../webgear_rtc/advanced/#using-webgear_rtc-as-real-time-broadcaster) seamlessly and provides native support for ICE _(Interactive Connectivity Establishment)_ protocol, STUN _(Session Traversal Utilities for NAT)_, and TURN _(Traversal Using Relays around NAT)_ servers that help us to easily establish direct media connection with the remote peers for uninterrupted data flow. It also allows us to define our custom Server as a source to manipulate frames easily before sending them across the network(see this [doc](../../webgear_rtc/advanced/#using-webgear_rtc-with-a-custom-sourceopencv) example). 
+WebGear_RTC can handle [multiple consumers](../../webgear_rtc/advanced/#using-webgear_rtc-as-real-time-broadcaster) seamlessly and provides native support for ICE _(Interactive Connectivity Establishment)_ protocol, STUN _(Session Traversal Utilities for NAT)_, and TURN _(Traversal Using Relays around NAT)_ servers that help us to easily establish direct media connection with the remote peers for uninterrupted data flow. It also allows us to define our custom Server as a source to transform frames easily before sending them across the network(see this [doc](../../webgear_rtc/advanced/#using-webgear_rtc-with-a-custom-sourceopencv) example). WebGear_RTC API works in conjunction with [**Starlette**](https://www.starlette.io/) ASGI application and can also flexibly interact with Starlette's ecosystem of shared middleware, mountable applications, [Response classes](https://www.starlette.io/responses/), [Routing tables](https://www.starlette.io/routing/), [Static Files](https://www.starlette.io/staticfiles/), [Templating engine(with Jinja2)](https://www.starlette.io/templates/), etc. diff --git a/docs/gears/webgear_rtc/params.md b/docs/gears/webgear_rtc/params.md index 81ce84e03..ba244f17b 100644 --- a/docs/gears/webgear_rtc/params.md +++ b/docs/gears/webgear_rtc/params.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -75,7 +75,7 @@ This parameter can be used to pass user-defined parameter to WebGear_RTC API by WebGear_RTC(logging=True, **options) ``` -* **`frame_size_reduction`** _(int/float)_ : This attribute controls the size reduction _(in percentage)_ of the frame to be streamed on Server. The value defaults to `20`, and must be no higher than `90` _(fastest, max compression, Barely Visible frame-size)_ and no lower than `0` _(slowest, no compression, Original frame-size)_. Its recommended value is between `40-60`. Its usage is as follows: +* **`frame_size_reduction`** _(int/float)_ : This attribute controls the size reduction _(in percentage)_ of the frame to be streamed on Server and it has the most significant effect on performance. The value defaults to `20`, and must be no higher than `90` _(fastest, max compression, Barely Visible frame-size)_ and no lower than `0` _(slowest, no compression, Original frame-size)_. Its recommended value is between `40-60`. Its usage is as follows: ```python # frame-size will be reduced by 50% @@ -84,6 +84,36 @@ This parameter can be used to pass user-defined parameter to WebGear_RTC API by WebGear_RTC(logging=True, **options) ``` +* **`jpeg_compression_quality`** _(int/float)_ : This attribute controls the JPEG quantization factor. Its value varies from `10` to `100` (the higher is the better quality but performance will be lower). Its default value is `90`. Its usage is as follows: + + ```python + # activate jpeg encoding and set quality 95% + options = {"jpeg_compression": True, "jpeg_compression_quality": 95} + ``` + +* **`jpeg_compression_fastdct`** _(bool)_ : This attribute if True, WebGear API uses fastest DCT method that speeds up decoding by 4-5% for a minor loss in quality. 
Its default value is also `True`, and its usage is as follows: + + ```python + # activate jpeg encoding and enable fast dct + options = {"jpeg_compression": True, "jpeg_compression_fastdct": True} + ``` + +* **`jpeg_compression_fastupsample`** _(bool)_ : This attribute if True, WebGear API use fastest color upsampling method. Its default value is `False`, and its usage is as follows: + + ```python + # activate jpeg encoding and enable fast upsampling + options = {"jpeg_compression": True, "jpeg_compression_fastupsample": True} + ``` + +* **`jpeg_compression_colorspace`** _(str)_ : This internal attribute is used to specify incoming frames colorspace with compression. Its usage is as follows: + + !!! info "Supported colorspace values are `RGB`, `BGR`, `RGBX`, `BGRX`, `XBGR`, `XRGB`, `GRAY`, `RGBA`, `BGRA`, `ABGR`, `ARGB`, `CMYK`. More information can be found [here ➶](https://gitlab.com/jfolz/simplejpeg)" + + ```python + # Specify incoming frames are `grayscale` + options = {"jpeg_compression": "GRAY"} + ``` + * **`enable_live_broadcast`** _(boolean)_ : WebGear_RTC by default only supports one-to-one peer connection with a single consumer/client, Hence this boolean attribute can be used to enable live broadcast to multiple peer consumers/clients at same time. Its default value is `False`. Its usage is as follows: !!! note "`enable_infinite_frames` is enforced by default when this attribute is enabled(`True`)." diff --git a/docs/gears/webgear_rtc/usage.md b/docs/gears/webgear_rtc/usage.md index 6b5a322ce..4ed7f3046 100644 --- a/docs/gears/webgear_rtc/usage.md +++ b/docs/gears/webgear_rtc/usage.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -31,6 +31,30 @@ WebGear_RTC API is the part of `asyncio` package of VidGear, thereby you need to pip install vidgear[asyncio] ``` +### Aiortc + +Must Required only if you're using [WebGear_RTC API](../../gears/webgear_rtc/overview/). You can easily install it via pip: + +??? error "Microsoft Visual C++ 14.0 is required." + + Installing `aiortc` on windows requires Microsoft Build Tools for Visual C++ libraries installed. You can easily fix this error by installing any **ONE** of these choices: + + !!! info "While the error is calling for VC++ 14.0 - but newer versions of Visual C++ libraries works as well." + + - Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=16). + - Alternative link to Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019). + - Offline installer: [vs_buildtools.exe](https://aka.ms/vs/16/release/vs_buildtools.exe) + + Afterwards, Select: Workloads → Desktop development with C++, then for Individual Components, select only: + + - [x] Windows 10 SDK + - [x] C++ x64/x86 build tools + + Finally, proceed installing `aiortc` via pip. + +```sh + pip install aiortc +``` ### ASGI Server @@ -48,7 +72,7 @@ Let's implement a Bare-Minimum usage example: You can access and run WebGear_RTC VideoStreamer Server programmatically in your python script in just a few lines of code, as follows: -!!! 
tip "For accessing WebGear_RTC on different Client Devices on the network, use `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine. More information can be found [here ➶](./../../../help/webgear_rtc_faqs/#is-it-possible-to-stream-on-a-different-device-on-the-network-with-webgear_rtc)" +!!! tip "For accessing WebGear_RTC on different Client Devices on the network, use `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine. More information can be found [here ➶](../../../help/webgear_rtc_faqs/#is-it-possible-to-stream-on-a-different-device-on-the-network-with-webgear_rtc)" !!! info "We are using `frame_size_reduction` attribute for frame size reduction _(in percentage)_ to be streamed with its [`options`](../params/#options) dictionary parameter to cope with performance-throttling in this example." @@ -72,7 +96,7 @@ uvicorn.run(web(), host="localhost", port=8000) web.shutdown() ``` -which can be accessed on any browser on the network at http://localhost:8000/. +which can be accessed on any browser on your machine at http://localhost:8000/. ### Running from Terminal @@ -94,8 +118,6 @@ which can also be accessed on any browser on the network at http://localhost:800 You can run `#!py3 python3 -m vidgear.gears.asyncio -h` help command to see all the advanced settings, as follows: - !!! warning "If you're using `--options/-op` flag, then kindly wrap your dictionary value in single `''` quotes." - ```sh usage: python -m vidgear.gears.asyncio [-h] [-m MODE] [-s SOURCE] [-ep ENABLEPICAMERA] [-S STABILIZE] [-cn CAMERA_NUM] [-yt stream_mode] [-b BACKEND] [-cs COLORSPACE] diff --git a/docs/gears/writegear/compression/advanced/cciw.md b/docs/gears/writegear/compression/advanced/cciw.md index d23d616e6..bdd9a008b 100644 --- a/docs/gears/writegear/compression/advanced/cciw.md +++ b/docs/gears/writegear/compression/advanced/cciw.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -75,7 +75,7 @@ execute_ffmpeg_cmd(ffmpeg_command) ## Usage Examples -!!! tip "Following usage examples is just an idea of what can be done with this powerful function. So just Tinker with various FFmpeg parameters/commands yourself and see it working. Also, if you're unable to run any terminal FFmpeg command, then [report an issue](../../../../../contribution/issue/)." +!!! abstract "Following usage examples is just an idea of what can be done with this powerful function. So just Tinker with various FFmpeg parameters/commands yourself and see it working. Also, if you're unable to run any terminal FFmpeg command, then [report an issue](../../../../../contribution/issue/)." ### Using WriteGear to separate Audio from Video @@ -119,7 +119,7 @@ In this example, we will merge audio with video: !!! tip "You can also directly add external audio input to video-frames in WriteGear. For more information, See [this FAQ example ➶](../../../../../help/writegear_faqs/#how-add-external-audio-file-input-to-video-frames)" -!!! warning "Example Assumptions" +!!! alert "Example Assumptions" * You already have a separate video(i.e `'input-video.mp4'`) and audio(i.e `'input-audio.aac'`) files. 
diff --git a/docs/gears/writegear/compression/advanced/ffmpeg_install.md b/docs/gears/writegear/compression/advanced/ffmpeg_install.md index 94dc3b021..7b0aa9795 100644 --- a/docs/gears/writegear/compression/advanced/ffmpeg_install.md +++ b/docs/gears/writegear/compression/advanced/ffmpeg_install.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -21,7 +21,7 @@ limitations under the License. # FFmpeg Installation Instructions
- [FFmpeg logo image]
+ [FFmpeg logo image]
WriteGear must requires FFmpeg executables for its Compression capabilities in Compression Mode. You can following machine-specific instructions for its installation: @@ -69,7 +69,7 @@ The WriteGear API supports _Auto-Installation_ and _Manual Configuration_ method !!! quote "This is a recommended approach on Windows Machines" -If WriteGear API not receives any input from the user on [**`custom_ffmpeg`**](../../params/#custom_ffmpeg) parameter, then on Windows system WriteGear API **auto-generates** the required FFmpeg Static Binaries, according to your system specifications, into the temporary directory _(for e.g. `C:\Temp`)_ of your machine. +If WriteGear API not receives any input from the user on [**`custom_ffmpeg`**](../../params/#custom_ffmpeg) parameter, then on Windows system WriteGear API **auto-generates** the required FFmpeg Static Binaries from a dedicated [**Github Server**](https://github.com/abhiTronix/FFmpeg-Builds) into the temporary directory _(for e.g. `C:\Temp`)_ of your machine. !!! warning Important Information @@ -86,7 +86,7 @@ If WriteGear API not receives any input from the user on [**`custom_ffmpeg`**](. * **Download:** You can also manually download the latest Windows Static Binaries(*based on your machine arch(x86/x64)*) from the link below: - *Windows Static Binaries:* http://ffmpeg.zeranoe.com/builds/ + *Windows Static Binaries:* https://ffmpeg.org/download.html#build-windows * **Assignment:** Then, you can easily assign the custom path to the folder containing FFmpeg executables(`for e.g 'C:/foo/Downloads/ffmpeg/bin'`) or path of `ffmpeg.exe` executable itself to the [**`custom_ffmpeg`**](../../params/#custom_ffmpeg) parameter in the WriteGear API. diff --git a/docs/gears/writegear/compression/overview.md b/docs/gears/writegear/compression/overview.md index 800d726a6..b7972d359 100644 --- a/docs/gears/writegear/compression/overview.md +++ b/docs/gears/writegear/compression/overview.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/gears/writegear/compression/params.md b/docs/gears/writegear/compression/params.md index b0f3b91d4..76e6d3ed1 100644 --- a/docs/gears/writegear/compression/params.md +++ b/docs/gears/writegear/compression/params.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -118,6 +118,8 @@ This parameter allows us to exploit almost all FFmpeg supported parameters effor !!! warning "While providing additional av-source with `-i` FFmpeg parameter in `output_params` make sure it don't interfere with WriteGear's frame pipeline otherwise it will break things!" + !!! error "All ffmpeg parameters are case-sensitive. Remember to double check every parameter if any error occurs." + !!! 
tip "Kindly check [H.264 docs ➶](https://trac.ffmpeg.org/wiki/Encode/H.264) and other [FFmpeg Docs ➶](https://ffmpeg.org/documentation.html) for more information on these parameters" ```python @@ -173,6 +175,8 @@ This parameter allows us to exploit almost all FFmpeg supported parameters effor All the encoders that are compiled with FFmpeg in use, are supported by WriteGear API. You can easily check the compiled encoders by running following command in your terminal: +!!! info "Similarily, supported demuxers and filters depends upons compiled FFmpeg in use." + ```sh ffmpeg -encoders # use `ffmpeg.exe -encoders` on windows ``` diff --git a/docs/gears/writegear/compression/usage.md b/docs/gears/writegear/compression/usage.md index 7527f9bea..bc3b0c0ea 100644 --- a/docs/gears/writegear/compression/usage.md +++ b/docs/gears/writegear/compression/usage.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -92,7 +92,9 @@ writer.close() ## Using Compression Mode in RGB Mode -In Compression Mode, WriteGear API contains [`rgb_mode`](../../../../bonus/reference/writegear/#vidgear.gears.writegear.WriteGear.write) boolean parameter for RGB Mode, which when enabled _(i.e. `rgb_mode=True`)_, specifies that incoming frames are of RGB format _(instead of default BGR format)_. This mode makes WriteGear directly compatible with libraries that only supports RGB format. The complete usage example is as follows: +In Compression Mode, WriteGear API contains [`rgb_mode`](../../../../bonus/reference/writegear/#vidgear.gears.writegear.WriteGear.write) boolean parameter for RGB Mode, which when enabled _(i.e. `rgb_mode=True`)_, specifies that incoming frames are of RGB format _(instead of default BGR format)_. This mode makes WriteGear directly compatible with libraries that only supports RGB format. + +The complete usage example is as follows: ```python # import required libraries @@ -215,15 +217,15 @@ writer.close() ## Using Compression Mode for Streaming URLs -In Compression Mode, WriteGear can make complex job look easy with FFmpeg. It also allows any URLs _(as output)_ for network streaming with its [`output_filename`](../params/#output_filename) parameter. +In Compression Mode, WriteGear also allows URL strings _(as output)_ for network streaming with its [`output_filename`](../params/#output_filename) parameter. -_In this example, let's stream Live Camera Feed directly to Twitch!_ +In this example, we will stream live camera feed directly to Twitch: -!!! info "YouTube-Live Streaming example code also available in [WriteGear FAQs ➶](../../../../help/writegear_faqs/#is-youtube-live-streaming-possibe-with-writegear)" +!!! info "YouTube-Live Streaming example code also available in [WriteGear FAQs ➶](../../../../help/writegear_ex/#using-writegears-compression-mode-for-youtube-live-streaming)" !!! warning "This example assume you already have a [**Twitch Account**](https://www.twitch.tv/) for publishing video." -!!! danger "Make sure to change [_Twitch Stream Key_](https://www.youtube.com/watch?v=xwOtOfPMIIk) with yours in following code before running!" +!!! alert "Make sure to change [_Twitch Stream Key_](https://www.youtube.com/watch?v=xwOtOfPMIIk) with yours in following code before running!" 
```python # import required libraries @@ -292,16 +294,16 @@ writer.close() ## Using Compression Mode with Hardware encoders -By default, WriteGear API uses *libx264 encoder* for encoding its output files in Compression Mode. But you can easily change encoder to your suitable [supported encoder](../params/#supported-encoders) by passing `-vcodec` FFmpeg parameter as an attribute in its [*output_param*](../params/#output_params) dictionary parameter. In addition to this, you can also specify the additional properties/features of your system's GPU easily. +By default, WriteGear API uses `libx264` encoder for encoding output files in Compression Mode. But you can easily change encoder to your suitable [supported encoder](../params/#supported-encoders) by passing `-vcodec` FFmpeg parameter as an attribute with its [*output_param*](../params/#output_params) dictionary parameter. In addition to this, you can also specify the additional properties/features of your system's GPU easily. ??? warning "User Discretion Advised" - This example is just conveying the idea on how to use FFmpeg's hardware encoders with WriteGear API in Compression mode, which **MAY/MAY NOT** suit your system. Kindly use suitable parameters based your supported system and FFmpeg configurations only. + This example is just conveying the idea on how to use FFmpeg's hardware encoders with WriteGear API in Compression mode, which **MAY/MAY NOT** suit your system. Kindly use suitable parameters based your system hardware settings only. -In this example, we will be using `h264_vaapi` as our hardware encoder and also optionally be specifying our device hardware's location (i.e. `'-vaapi_device':'/dev/dri/renderD128'`) and other features such as `'-vf':'format=nv12,hwupload'` like properties by formatting them as `option` dictionary parameter's attributes, as follows: +In this example, we will be using `h264_vaapi` as our hardware encoder and also optionally be specifying our device hardware's location (i.e. `'-vaapi_device':'/dev/dri/renderD128'`) and other features such as `'-vf':'format=nv12,hwupload'`: -!!! danger "Check VAAPI support" +??? alert "Remember to check VAAPI support" To use `h264_vaapi` encoder, remember to check if its available and your FFmpeg compiled with VAAPI support. You can easily do this by executing following one-liner command in your terminal, and observing if output contains something similar as follows: @@ -427,26 +429,156 @@ writer.close() ## Using Compression Mode with Live Audio Input -In Compression Mode, WriteGear API allows us to exploit almost all FFmpeg supported parameters that you can think of, in its Compression Mode. Hence, processing, encoding, and combining audio with video is pretty much straightforward. +In Compression Mode, WriteGear API allows us to exploit almost all FFmpeg supported parameters that you can think of in its Compression Mode. Hence, combining audio with live video frames is pretty easy. -!!! warning "Example Assumptions" +In this example code, we will merging the audio from a Audio Device _(for e.g. Webcam inbuilt mic)_ to live frames incoming from the Video Source _(for e.g external webcam)_, and save the output as a compressed video file, all in real time: - * You're running are Linux machine. - * You already have appropriate audio & video drivers and softwares installed on your machine. +!!! alert "Example Assumptions" -!!! danger "Locate your Sound Card" + * You're running are Linux machine. 
+ * You already have appropriate audio driver and software installed on your machine. - Remember to locate your Sound Card before running this example: - * Note down the Sound Card value using `arecord -L` command on the your Linux terminal. - * It may be similar to this `plughw:CARD=CAMERA,DEV=0` +??? tip "Identifying and Specifying sound card on different OS platforms" + + === "On Windows" -??? tips + Windows OS users can use the [dshow](https://trac.ffmpeg.org/wiki/DirectShow) (DirectShow) to list audio input device which is the preferred option for Windows users. You can refer following steps to identify and specify your sound card: - The useful audio input options for ALSA input are `-ar` (_audio sample rate_) and `-ac` (_audio channels_). Specifying audio sampling rate/frequency will force the audio card to record the audio at that specified rate. Usually the default value is `"44100"` (Hz) but `"48000"`(Hz) works, so chose wisely. Specifying audio channels will force the audio card to record the audio as mono, stereo or even 2.1, and 5.1(_if supported by your audio card_). Usually the default value is `"1"` (mono) for Mic input and `"2"` (stereo) for Line-In input. Kindly go through [FFmpeg docs](https://ffmpeg.org/ffmpeg.html) for more of such options. + - [x] **[OPTIONAL] Enable sound card(if disabled):** First enable your Stereo Mix by opening the "Sound" window and select the "Recording" tab, then right click on the window and select "Show Disabled Devices" to toggle the Stereo Mix device visibility. **Follow this [post ➶](https://forums.tomshardware.com/threads/no-sound-through-stereo-mix-realtek-hd-audio.1716182/) for more details.** + - [x] **Identify Sound Card:** Then, You can locate your soundcard using `dshow` as follows: + + ```sh + c:\> ffmpeg -list_devices true -f dshow -i dummy + ffmpeg version N-45279-g6b86dd5... --enable-runtime-cpudetect + libavutil 51. 74.100 / 51. 74.100 + libavcodec 54. 65.100 / 54. 65.100 + libavformat 54. 31.100 / 54. 31.100 + libavdevice 54. 3.100 / 54. 3.100 + libavfilter 3. 19.102 / 3. 19.102 + libswscale 2. 1.101 / 2. 1.101 + libswresample 0. 16.100 / 0. 16.100 + [dshow @ 03ACF580] DirectShow video devices + [dshow @ 03ACF580] "Integrated Camera" + [dshow @ 03ACF580] "USB2.0 Camera" + [dshow @ 03ACF580] DirectShow audio devices + [dshow @ 03ACF580] "Microphone (Realtek High Definition Audio)" + [dshow @ 03ACF580] "Microphone (USB2.0 Camera)" + dummy: Immediate exit requested + ``` + + + - [x] **Specify Sound Card:** Then, you can specify your located soundcard in StreamGear as follows: + + ```python + # assign appropriate input audio-source + output_params = { + "-i":"audio=Microphone (USB2.0 Camera)", + "-thread_queue_size": "512", + "-f": "dshow", + "-ac": "2", + "-acodec": "aac", + "-ar": "44100", + } + ``` + + !!! fail "If audio still doesn't work then [checkout this troubleshooting guide ➶](https://www.maketecheasier.com/fix-microphone-not-working-windows10/) or reach us out on [Gitter ➶](https://gitter.im/vidgear/community) Community channel" + + + === "On Linux" + + Linux OS users can use the [alsa](https://ffmpeg.org/ffmpeg-all.html#alsa) to list input device to capture live audio input such as from a webcam. You can refer following steps to identify and specify your sound card: + + - [x] **Identify Sound Card:** To get the list of all installed cards on your machine, you can type `arecord -l` or `arecord -L` _(longer output)_. 
+ + ```sh + arecord -l + + **** List of CAPTURE Hardware Devices **** + card 0: ICH5 [Intel ICH5], device 0: Intel ICH [Intel ICH5] + Subdevices: 1/1 + Subdevice #0: subdevice #0 + card 0: ICH5 [Intel ICH5], device 1: Intel ICH - MIC ADC [Intel ICH5 - MIC ADC] + Subdevices: 1/1 + Subdevice #0: subdevice #0 + card 0: ICH5 [Intel ICH5], device 2: Intel ICH - MIC2 ADC [Intel ICH5 - MIC2 ADC] + Subdevices: 1/1 + Subdevice #0: subdevice #0 + card 0: ICH5 [Intel ICH5], device 3: Intel ICH - ADC2 [Intel ICH5 - ADC2] + Subdevices: 1/1 + Subdevice #0: subdevice #0 + card 1: U0x46d0x809 [USB Device 0x46d:0x809], device 0: USB Audio [USB Audio] + Subdevices: 1/1 + Subdevice #0: subdevice #0 + ``` + + + - [x] **Specify Sound Card:** Then, you can specify your located soundcard in WriteGear as follows: + + !!! info "The easiest thing to do is to reference sound card directly, namely "card 0" (Intel ICH5) and "card 1" (Microphone on the USB web cam), as `hw:0` or `hw:1`" + + ```python + # assign appropriate input audio-source + output_params = { + "-i": "hw:1", + "-thread_queue_size": "512", + "-f": "alsa", + "-ac": "2", + "-acodec": "aac", + "-ar": "44100", + } + ``` + + !!! fail "If audio still doesn't work then reach us out on [Gitter ➶](https://gitter.im/vidgear/community) Community channel" + + + === "On MacOS" + + MAC OS users can use the [avfoundation](https://ffmpeg.org/ffmpeg-devices.html#avfoundation) to list input devices for grabbing audio from integrated iSight cameras as well as cameras connected via USB or FireWire. You can refer following steps to identify and specify your sound card on MacOS/OSX machines: + + + - [x] **Identify Sound Card:** Then, You can locate your soundcard using `avfoundation` as follows: + + ```sh + ffmpeg -f qtkit -list_devices true -i "" + ffmpeg version N-45279-g6b86dd5... --enable-runtime-cpudetect + libavutil 51. 74.100 / 51. 74.100 + libavcodec 54. 65.100 / 54. 65.100 + libavformat 54. 31.100 / 54. 31.100 + libavdevice 54. 3.100 / 54. 3.100 + libavfilter 3. 19.102 / 3. 19.102 + libswscale 2. 1.101 / 2. 1.101 + libswresample 0. 16.100 / 0. 16.100 + [AVFoundation input device @ 0x7f8e2540ef20] AVFoundation video devices: + [AVFoundation input device @ 0x7f8e2540ef20] [0] FaceTime HD camera (built-in) + [AVFoundation input device @ 0x7f8e2540ef20] [1] Capture screen 0 + [AVFoundation input device @ 0x7f8e2540ef20] AVFoundation audio devices: + [AVFoundation input device @ 0x7f8e2540ef20] [0] Blackmagic Audio + [AVFoundation input device @ 0x7f8e2540ef20] [1] Built-in Microphone + ``` + + + - [x] **Specify Sound Card:** Then, you can specify your located soundcard in StreamGear as follows: + + ```python + # assign appropriate input audio-source + output_params = { + "-audio_device_index": "0", + "-thread_queue_size": "512", + "-f": "avfoundation", + "-ac": "2", + "-acodec": "aac", + "-ar": "44100", + } + ``` + + !!! fail "If audio still doesn't work then reach us out on [Gitter ➶](https://gitter.im/vidgear/community) Community channel" + + +!!! danger "Make sure this `-i` audio-source it compatible with provided video-source, otherwise you encounter multiple errors or no output at all." -In this example code, we will merge the audio from a Audio Source _(for e.g. Webcam inbuilt mic)_ to the frames of a Video Source _(for e.g external webcam)_, and save this data as a compressed video file, all in real time: +!!! 
warning "You **MUST** use [`-input_framerate`](../../params/#a-exclusive-parameters) attribute to set exact value of input framerate when using external audio in Real-time Frames mode, otherwise audio delay will occur in output streams." ```python # import required libraries diff --git a/docs/gears/writegear/introduction.md b/docs/gears/writegear/introduction.md index dbe1d5a13..0149e6376 100644 --- a/docs/gears/writegear/introduction.md +++ b/docs/gears/writegear/introduction.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -29,7 +29,9 @@ limitations under the License. > *WriteGear handles various powerful Video-Writer Tools that provide us the freedom to do almost anything imaginable with multimedia data.* -WriteGear API provides a complete, flexible, and robust wrapper around [**FFmpeg**](https://ffmpeg.org/), a leading multimedia framework. WriteGear can process real-time frames into a lossless compressed video-file with any suitable specification _(such as`bitrate, codec, framerate, resolution, subtitles, etc.`)_. It is powerful enough to perform complex tasks such as [Live-Streaming](../compression/usage/#using-compression-mode-for-streaming-urls) _(such as for Twitch)_ and [Multiplexing Video-Audio](../compression/usage/#using-compression-mode-with-live-audio-input) with real-time frames in way fewer lines of code. +WriteGear API provides a complete, flexible, and robust wrapper around [**FFmpeg**](https://ffmpeg.org/), a leading multimedia framework. WriteGear can process real-time frames into a lossless compressed video-file with any suitable specifications _(such as`bitrate, codec, framerate, resolution, subtitles, etc.`)_. + +WriteGear also supports streaming with traditional protocols such as RTMP, RTSP/RTP. It is powerful enough to perform complex tasks such as [Live-Streaming](../compression/usage/#using-compression-mode-for-streaming-urls) _(such as for Twitch, YouTube etc.)_ and [Multiplexing Video-Audio](../compression/usage/#using-compression-mode-with-live-audio-input) with real-time frames in just few lines of code. Best of all, WriteGear grants users the complete freedom to play with any FFmpeg parameter with its exclusive ==Custom Commands function== _(see this [doc](../compression/advanced/cciw/))_ without relying on any third-party API. @@ -43,7 +45,7 @@ WriteGear primarily operates in following modes: * [**Compression Mode**](../compression/overview/): In this mode, WriteGear utilizes powerful **FFmpeg** inbuilt encoders to encode lossless multimedia files. This mode provides us the ability to exploit almost any parameter available within FFmpeg, effortlessly and flexibly, and while doing that it robustly handles all errors/warnings quietly. -* [**Non-Compression Mode**](../non_compression/overview/): In this mode, WriteGear utilizes basic **OpenCV's inbuilt VideoWriter API** tools. This mode also supports all parameters manipulation available within VideoWriter API, but it lacks the ability to manipulate encoding parameters and other important features like video compression, audio encoding, etc. +* [**Non-Compression Mode**](../non_compression/overview/): In this mode, WriteGear utilizes basic **OpenCV's inbuilt VideoWriter API** tools. 
This mode also supports all parameter transformations available within OpenCV's VideoWriter API, but it lacks the ability to manipulate encoding parameters and other important features like video compression, audio encoding, etc.   @@ -73,4 +75,10 @@ from vidgear.gears import WriteGear See here 🚀 -  \ No newline at end of file +  + +## Bonus Examples + +!!! example "Checkout more advanced WriteGear examples with unusual configuration [here ➶](../../../help/writegear_ex/)" + +  \ No newline at end of file diff --git a/docs/gears/writegear/non_compression/overview.md b/docs/gears/writegear/non_compression/overview.md index 7048da12f..553631d3e 100644 --- a/docs/gears/writegear/non_compression/overview.md +++ b/docs/gears/writegear/non_compression/overview.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/gears/writegear/non_compression/params.md b/docs/gears/writegear/non_compression/params.md index f5c21a877..c8f38a17e 100644 --- a/docs/gears/writegear/non_compression/params.md +++ b/docs/gears/writegear/non_compression/params.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/gears/writegear/non_compression/usage.md b/docs/gears/writegear/non_compression/usage.md index f1f1e54d3..f26b7fdcf 100644 --- a/docs/gears/writegear/non_compression/usage.md +++ b/docs/gears/writegear/non_compression/usage.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/help.md b/docs/help.md index 08426b2bd..c43535723 100644 --- a/docs/help.md +++ b/docs/help.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -73,9 +73,11 @@ Let others know how you are using VidGear and why you like it!   -## Help Author +## Helping Author -Donations help keep VidGear's Development alive and motivate me _(author)_. Giving a little means a lot, even the smallest contribution can make a huge difference. You can financially support through ko-fi 🤝: +> Donations help keep VidGear's development alive and motivate me _(as author)_. :heart: + +It is (like all open source software) a labour of love and something I am doing with my own free time. 
If you would like to say thanks, please feel free to make a donation through ko-fi: diff --git a/docs/help/camgear_ex.md b/docs/help/camgear_ex.md new file mode 100644 index 000000000..5a0522d3a --- /dev/null +++ b/docs/help/camgear_ex.md @@ -0,0 +1,243 @@ + + +# CamGear Examples + +  + +## Synchronizing Two Sources in CamGear + +In this example both streams and corresponding frames will be processed synchronously i.e. with no delay: + +!!! danger "Using same source with more than one instances of CamGear can lead to [Global Interpreter Lock (GIL)](https://wiki.python.org/moin/GlobalInterpreterLock#:~:text=In%20CPython%2C%20the%20global%20interpreter,conditions%20and%20ensures%20thread%20safety.&text=The%20GIL%20can%20degrade%20performance%20even%20when%20it%20is%20not%20a%20bottleneck.) that degrades performance even when it is not a bottleneck." + +```python +# import required libraries +from vidgear.gears import CamGear +import cv2 +import time + +# define and start the stream on first source ( For e.g #0 index device) +stream1 = CamGear(source=0, logging=True).start() + +# define and start the stream on second source ( For e.g #1 index device) +stream2 = CamGear(source=1, logging=True).start() + +# infinite loop +while True: + + frameA = stream1.read() + # read frames from stream1 + + frameB = stream2.read() + # read frames from stream2 + + # check if any of two frame is None + if frameA is None or frameB is None: + #if True break the infinite loop + break + + # do something with both frameA and frameB here + cv2.imshow("Output Frame1", frameA) + cv2.imshow("Output Frame2", frameB) + # Show output window of stream1 and stream 2 separately + + key = cv2.waitKey(1) & 0xFF + # check for 'q' key-press + if key == ord("q"): + #if 'q' key-pressed break out + break + + if key == ord("w"): + #if 'w' key-pressed save both frameA and frameB at same time + cv2.imwrite("Image-1.jpg", frameA) + cv2.imwrite("Image-2.jpg", frameB) + #break #uncomment this line to break out after taking images + +cv2.destroyAllWindows() +# close output window + +# safely close both video streams +stream1.stop() +stream2.stop() +``` + +  + +## Using variable Youtube-DL parameters in CamGear + +CamGear provides exclusive attributes `STREAM_RESOLUTION` _(for specifying stream resolution)_ & `STREAM_PARAMS` _(for specifying underlying API(e.g. `youtube-dl`) parameters)_ with its [`options`](../../gears/camgear/params/#options) dictionary parameter. + +The complete usage example is as follows: + +!!! 
tip "More information on `STREAM_RESOLUTION` & `STREAM_PARAMS` attributes can be found [here ➶](../../gears/camgear/advanced/source_params/#exclusive-camgear-parameters)" + +```python +# import required libraries +from vidgear.gears import CamGear +import cv2 + +# specify attributes +options = {"STREAM_RESOLUTION": "720p", "STREAM_PARAMS": {"nocheckcertificate": True}} + +# Add YouTube Video URL as input source (for e.g https://youtu.be/bvetuLwJIkA) +# and enable Stream Mode (`stream_mode = True`) +stream = CamGear( + source="https://youtu.be/bvetuLwJIkA", stream_mode=True, logging=True, **options +).start() + +# loop over +while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # Show output window + cv2.imshow("Output", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + +# close output window +cv2.destroyAllWindows() + +# safely close video stream +stream.stop() +``` + + +  + + +## Using CamGear for capturing RSTP/RTMP URLs + +You can open any network stream _(such as RTSP/RTMP)_ just by providing its URL directly to CamGear's [`source`](../../gears/camgear/params/#source) parameter. + +Here's a high-level wrapper code around CamGear API to enable auto-reconnection during capturing: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. + +??? tip "Enforcing UDP stream" + + You can easily enforce UDP for RSTP streams inplace of default TCP, by putting following lines of code on the top of your existing code: + + ```python + # import required libraries + import os + + # enforce UDP + os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;udp" + ``` + + Finally, use [`backend`](../../gears/camgear/params/#backend) parameter value as `backend=cv2.CAP_FFMPEG` in CamGear. 
+ + +```python +from vidgear.gears import CamGear +import cv2 +import datetime +import time + + +class Reconnecting_CamGear: + def __init__(self, cam_address, reset_attempts=50, reset_delay=5): + self.cam_address = cam_address + self.reset_attempts = reset_attempts + self.reset_delay = reset_delay + self.source = CamGear(source=self.cam_address).start() + self.running = True + + def read(self): + if self.source is None: + return None + if self.running and self.reset_attempts > 0: + frame = self.source.read() + if frame is None: + self.source.stop() + self.reset_attempts -= 1 + print( + "Re-connection Attempt-{} occured at time:{}".format( + str(self.reset_attempts), + datetime.datetime.now().strftime("%m-%d-%Y %I:%M:%S%p"), + ) + ) + time.sleep(self.reset_delay) + self.source = CamGear(source=self.cam_address).start() + # return previous frame + return self.frame + else: + self.frame = frame + return frame + else: + return None + + def stop(self): + self.running = False + self.reset_attempts = 0 + self.frame = None + if not self.source is None: + self.source.stop() + + +if __name__ == "__main__": + # open any valid video stream + stream = Reconnecting_CamGear( + cam_address="rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov", + reset_attempts=20, + reset_delay=5, + ) + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if None-type + if frame is None: + break + + # {do something with the frame here} + + # Show output window + cv2.imshow("Output", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() +``` + +  \ No newline at end of file diff --git a/docs/help/camgear_faqs.md b/docs/help/camgear_faqs.md index cd62bd3a5..4aea0d91b 100644 --- a/docs/help/camgear_faqs.md +++ b/docs/help/camgear_faqs.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -49,67 +49,30 @@ limitations under the License. ## How to compile OpenCV with GStreamer support? -**Answer:** For compiling OpenCV with GSstreamer(`>=v1.0.0`) support, checkout this [tutorial](https://web.archive.org/web/20201225140454/https://medium.com/@galaktyk01/how-to-build-opencv-with-gstreamer-b11668fa09c) for Linux and Windows OSes, and **for MacOS do as follows:** +**Answer:** For compiling OpenCV with GSstreamer(`>=v1.0.0`) support: -**Step-1:** First Brew install GStreamer: +=== "On Linux OSes" -```sh -brew update -brew install gstreamer gst-plugins-base gst-plugins-good gst-plugins-bad gst-plugins-ugly gst-libav -``` + - [x] **Follow [this tutorial ➶](https://medium.com/@galaktyk01/how-to-build-opencv-with-gstreamer-b11668fa09c)** -**Step-2:** Then, Follow [this tutorial ➶](https://www.learnopencv.com/install-opencv-4-on-macos/) +=== "On Windows OSes" + - [x] **Follow [this tutorial ➶](https://medium.com/@galaktyk01/how-to-build-opencv-with-gstreamer-b11668fa09c)** -  - - -## How to change quality and parameters of YouTube Streams with CamGear? - -CamGear provides exclusive attributes `STREAM_RESOLUTION` _(for specifying stream resolution)_ & `STREAM_PARAMS` _(for specifying underlying API(e.g. 
`youtube-dl`) parameters)_ with its [`options`](../../gears/camgear/params/#options) dictionary parameter. The complete usage example is as follows: - -!!! tip "More information on `STREAM_RESOLUTION` & `STREAM_PARAMS` attributes can be found [here ➶](../../gears/camgear/advanced/source_params/#exclusive-camgear-parameters)" - -```python -# import required libraries -from vidgear.gears import CamGear -import cv2 - -# specify attributes -options = {"STREAM_RESOLUTION": "720p", "STREAM_PARAMS": {"nocheckcertificate": True}} - -# Add YouTube Video URL as input source (for e.g https://youtu.be/bvetuLwJIkA) -# and enable Stream Mode (`stream_mode = True`) -stream = CamGear( - source="https://youtu.be/bvetuLwJIkA", stream_mode=True, logging=True, **options -).start() - -# loop over -while True: - - # read frames from stream - frame = stream.read() - - # check for frame if Nonetype - if frame is None: - break +=== "On MAC OSes" + + - [x] **Follow [this tutorial ➶](https://www.learnopencv.com/install-opencv-4-on-macos/) but make sure to brew install GStreamer as follows:** - # {do something with the frame here} + ```sh + brew install gstreamer gst-plugins-base gst-plugins-good gst-plugins-bad gst-plugins-ugly gst-libav + ``` - # Show output window - cv2.imshow("Output", frame) +  - # check for 'q' key if pressed - key = cv2.waitKey(1) & 0xFF - if key == ord("q"): - break -# close output window -cv2.destroyAllWindows() +## How to change quality and parameters of YouTube Streams with CamGear? -# safely close video stream -stream.stop() -``` +**Answer:** CamGear provides exclusive attributes `STREAM_RESOLUTION` _(for specifying stream resolution)_ & `STREAM_PARAMS` _(for specifying underlying API(e.g. `youtube-dl`) parameters)_ with its [`options`](../../gears/camgear/params/#options) dictionary parameter. See [this bonus example ➶](../camgear_ex/#using-variable-youtube-dl-parameters-in-camgear).   @@ -117,57 +80,7 @@ stream.stop() ## How to open RSTP network streams with CamGear? -You can open any local network stream _(such as RTSP)_ just by providing its URL directly to CamGear's [`source`](../../gears/camgear/params/#source) parameter. The complete usage example is as follows: - -??? tip "Enforcing UDP stream" - - You can easily enforce UDP for RSTP streams inplace of default TCP, by putting following lines of code on the top of your existing code: - - ```python - # import required libraries - import os - - # enforce UDP - os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;udp" - ``` - - Finally, use [`backend`](../../gears/camgear/params/#backend) parameter value as `backend="CAP_FFMPEG"` in CamGear. - - -```python -# import required libraries -from vidgear.gears import CamGear -import cv2 - -# open valid network video-stream -stream = CamGear(source="rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov").start() - -# loop over -while True: - - # read frames from stream - frame = stream.read() - - # check for frame if Nonetype - if frame is None: - break - - # {do something with the frame here} - - # Show output window - cv2.imshow("Output", frame) - - # check for 'q' key if pressed - key = cv2.waitKey(1) & 0xFF - if key == ord("q"): - break - -# close output window -cv2.destroyAllWindows() - -# safely close video stream -stream.stop() -``` +**Answer:** You can open any local network stream _(such as RTSP)_ just by providing its URL directly to CamGear's [`source`](../../gears/camgear/params/#source) parameter. 
See [this bonus example ➶](../camgear_ex/#using-camgear-for-capturing-rstprtmp-urls).   @@ -185,7 +98,7 @@ stream.stop() ## How to synchronize between two cameras? -**Answer:** See [this issue comment ➶](https://github.com/abhiTronix/vidgear/issues/1#issuecomment-473943037). +**Answer:** See [this bonus example ➶](../camgear_ex/#synchronizing-two-sources-in-camgear).   diff --git a/docs/help/general_faqs.md b/docs/help/general_faqs.md index 6f6d475b5..dbf4dd344 100644 --- a/docs/help/general_faqs.md +++ b/docs/help/general_faqs.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -24,23 +24,25 @@ limitations under the License.   -## "I'm new to Python Programming or its usage in Computer Vision", How to use vidgear in my projects? +## "I'm new to Python Programming or its usage in OpenCV Library", How to use vidgear in my projects? -**Answer:** It's recommended to first go through the following dedicated tutorials/websites thoroughly, and learn how OpenCV-Python works _(with examples)_: +**Answer:** Before using vidgear, It's recommended to first go through the following dedicated blog sites and learn how OpenCV-Python syntax works _(with examples)_: -- [**PyImageSearch.com** ➶](https://www.pyimagesearch.com/) is the best resource for learning OpenCV and its Python implementation. Adrian Rosebrock provides many practical OpenCV techniques with tutorials, code examples, blogs, and books at PyImageSearch.com. I also learned a lot about computer vision methods and various useful techniques. Highly recommended! +- [**PyImageSearch.com** ➶](https://www.pyimagesearch.com/) is the best resource for learning OpenCV and its Python implementation. Adrian Rosebrock provides many practical OpenCV techniques with tutorials, code examples, blogs, and books at PyImageSearch.com. Highly recommended! - [**learnopencv.com** ➶](https://www.learnopencv.com) Maintained by OpenCV CEO Satya Mallick. This blog is for programmers, hackers, engineers, scientists, students, and self-starters interested in Computer Vision and Machine Learning. -- There's also the official [**OpenCV Tutorials** ➶](https://docs.opencv.org/master/d6/d00/tutorial_py_root.html), provided by the OpenCV folks themselves. +- There's also the official [**OpenCV Tutorials** ➶](https://docs.opencv.org/master/d6/d00/tutorial_py_root.html) curated by the OpenCV developers. -Finally, once done, see [Switching from OpenCV ➶](../../switch_from_cv/) and go through our [Gears ➶](../../gears/#gears-what-are-these) to learn how VidGear works. If you run into any trouble or have any questions, then see [getting help ➶](../get_help) +Once done, visit [Switching from OpenCV ➶](../../switch_from_cv/) to easily replace OpenCV APIs with suitable [Gears ➶](../../gears/#gears-what-are-these) in your project. All the best! :smiley: + +!!! tip "If you run into any trouble or have any questions, then refer our [**Help**](../get_help) section."   ## "VidGear is using Multi-threading, but Python is notorious for its poor performance in multithreading?" 
-**Answer:** See [Threaded-Queue-Mode ➶](../../bonus/TQM/) +**Answer:** Refer vidgear's [Threaded-Queue-Mode ➶](../../bonus/TQM/)   diff --git a/docs/help/get_help.md b/docs/help/get_help.md index 4390d148d..b01b34706 100644 --- a/docs/help/get_help.md +++ b/docs/help/get_help.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -37,7 +37,7 @@ There are several ways to get help with VidGear: > Got a question related to VidGear Working? -Checkout our Frequently Asked Questions, a curated list of all the questions with adequate answer that we commonly receive, for quickly troubleshooting your problems: +Checkout the Frequently Asked Questions, a curated list of all the questions with adequate answer that we commonly receive, for quickly troubleshooting your problems: - [General FAQs ➶](general_faqs.md) - [CamGear FAQs ➶](camgear_faqs.md) @@ -56,6 +56,26 @@ Checkout our Frequently Asked Questions, a curated list of all the questions wit   +## Bonus Examples + +> How we do this with that API? + +Checkout the Bonus Examples, a curated list of all advanced examples with unusual configuration, which isn't available in Vidgear API's usage examples: + +- [CamGear FAQs ➶](camgear_ex.md) +- [PiGear FAQs ➶](pigear_ex.md) +- [ScreenGear FAQs ➶](screengear_ex.md) +- [StreamGear FAQs ➶](streamgear_ex.md) +- [WriteGear FAQs ➶](writegear_ex.md) +- [NetGear FAQs ➶](netgear_ex.md) +- [WebGear FAQs ➶](webgear_ex.md) +- [WebGear_RTC FAQs ➶](webgear_rtc_ex.md) +- [VideoGear FAQs ➶](videogear_ex.md) +- [Stabilizer Class FAQs ➶](stabilizer_ex.md) +- [NetGear_Async FAQs ➶](netgear_async_ex.md) + +  + ## Join our Gitter Community channel > Have you come up with some new idea 💡 or looking for the fastest way troubleshoot your problems @@ -73,7 +93,7 @@ There you can ask quick questions, swiftly troubleshoot your problems, help othe - [x] [Got a question or problem?](../../contribution/#got-a-question-or-problem) - [x] [Found a typo?](../../contribution/#found-a-typo) - [x] [Found a bug?](../../contribution/#found-a-bug) -- [x] [Missing a feature/improvement?](../../contribution/#request-for-a-featureimprovementt) +- [x] [Missing a feature/improvement?](../../contribution/#request-for-a-featureimprovement)   diff --git a/docs/help/netgear_async_ex.md b/docs/help/netgear_async_ex.md new file mode 100644 index 000000000..a46c17c7e --- /dev/null +++ b/docs/help/netgear_async_ex.md @@ -0,0 +1,169 @@ + + +# NetGear_Async Examples + +  + +## Using NetGear_Async with WebGear + +The complete usage example is as follows: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. + +### Client + WebGear Server + +Open a terminal on Client System where you want to display the input frames _(and setup WebGear server)_ received from the Server and execute the following python code: + +!!! danger "After running this code, Make sure to open Browser immediately otherwise NetGear_Async will soon exit with `TimeoutError`. You can also try setting [`timeout`](../../gears/netgear_async/params/#timeout) parameter to a higher value to extend this timeout." + +!!! warning "Make sure you use different `port` value for NetGear_Async and WebGear API." + +!!! alert "High CPU utilization may occur on Client's end. 
User discretion is advised." + +!!! note "Note down the IP-address of this system _(required at Server's end)_ by executing the `hostname -I` command and also replace it in the following code."" + +```python +# import libraries +from vidgear.gears.asyncio import NetGear_Async +from vidgear.gears.asyncio import WebGear +from vidgear.gears.asyncio.helper import reducer +import uvicorn, asyncio, cv2 + +# Define NetGear_Async Client at given IP address and define parameters +# !!! change following IP address '192.168.x.xxx' with yours !!! +client = NetGear_Async( + receive_mode=True, + pattern=1, + logging=True, +).launch() + +# create your own custom frame producer +async def my_frame_producer(): + + # loop over Client's Asynchronous Frame Generator + async for frame in client.recv_generator(): + + # {do something with received frames here} + + # reducer frames size if you want more performance otherwise comment this line + frame = await reducer( + frame, percentage=30, interpolation=cv2.INTER_AREA + ) # reduce frame by 30% + + # handle JPEG encoding + encodedImage = cv2.imencode(".jpg", frame)[1].tobytes() + # yield frame in byte format + yield (b"--frame\r\nContent-Type:image/jpeg\r\n\r\n" + encodedImage + b"\r\n") + await asyncio.sleep(0) + + +if __name__ == "__main__": + # Set event loop to client's + asyncio.set_event_loop(client.loop) + + # initialize WebGear app without any source + web = WebGear(logging=True) + + # add your custom frame producer to config with adequate IP address + web.config["generator"] = my_frame_producer + + # run this app on Uvicorn server at address http://localhost:8000/ + uvicorn.run(web(), host="localhost", port=8000) + + # safely close client + client.close() + + # close app safely + web.shutdown() +``` + +!!! success "On successfully running this code, the output stream will be displayed at address http://localhost:8000/ in your Client's Browser." + +### Server + +Now, Open the terminal on another Server System _(with a webcam connected to it at index 0)_, and execute the following python code: + +!!! note "Replace the IP address in the following code with Client's IP address you noted earlier." + +```python +# import library +from vidgear.gears.asyncio import NetGear_Async +import cv2, asyncio + +# initialize Server without any source +server = NetGear_Async( + source=None, + address="192.168.x.xxx", + port="5454", + protocol="tcp", + pattern=1, + logging=True, +) + +# Create a async frame generator as custom source +async def my_frame_generator(): + + # !!! define your own video source here !!! + # Open any video stream such as live webcam + # video stream on first index(i.e. 
0) device + stream = cv2.VideoCapture(0) + + # loop over stream until its terminated + while True: + + # read frames + (grabbed, frame) = stream.read() + + # check if frame empty + if not grabbed: + break + + # do something with the frame to be sent here + + # yield frame + yield frame + # sleep for sometime + await asyncio.sleep(0) + + # close stream + stream.release() + + +if __name__ == "__main__": + # set event loop + asyncio.set_event_loop(server.loop) + # Add your custom source generator to Server configuration + server.config["generator"] = my_frame_generator() + # Launch the Server + server.launch() + try: + # run your main function task until it is complete + server.loop.run_until_complete(server.task) + except (KeyboardInterrupt, SystemExit): + # wait for interrupts + pass + finally: + # finally close the server + server.close() +``` + +  diff --git a/docs/help/netgear_async_faqs.md b/docs/help/netgear_async_faqs.md index f68ecf407..289ca8d29 100644 --- a/docs/help/netgear_async_faqs.md +++ b/docs/help/netgear_async_faqs.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/help/netgear_ex.md b/docs/help/netgear_ex.md new file mode 100644 index 000000000..ef43baaa8 --- /dev/null +++ b/docs/help/netgear_ex.md @@ -0,0 +1,368 @@ + + +# NetGear Examples + +  + +## Using NetGear with WebGear + +The complete usage example is as follows: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. + +### Client + WebGear Server + +Open a terminal on Client System where you want to display the input frames _(and setup WebGear server)_ received from the Server and execute the following python code: + +!!! danger "After running this code, Make sure to open Browser immediately otherwise NetGear will soon exit with `RuntimeError`. You can also try setting [`max_retries`](../../gears/netgear/params/#options) and [`request_timeout`](../../gears/netgear/params/#options) like attributes to a higher value to avoid this." + +!!! warning "Make sure you use different `port` value for NetGear and WebGear API." + +!!! alert "High CPU utilization may occur on Client's end. User discretion is advised." + +!!! note "Note down the IP-address of this system _(required at Server's end)_ by executing the `hostname -I` command and also replace it in the following code."" + +```python +# import necessary libs +import uvicorn, asyncio, cv2 +from vidgear.gears.asyncio import WebGear +from vidgear.gears.asyncio.helper import reducer + +# initialize WebGear app without any source +web = WebGear(logging=True) + + +# activate jpeg encoding and specify other related parameters +options = { + "jpeg_compression": True, + "jpeg_compression_quality": 90, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": True, +} + +# create your own custom frame producer +async def my_frame_producer(): + # initialize global params + # Define NetGear Client at given IP address and define parameters + # !!! change following IP address '192.168.x.xxx' with yours !!! 
+ client = NetGear( + receive_mode=True, + address="192.168.x.xxx", + port="5454", + protocol="tcp", + pattern=1, + logging=True, + **options, + ) + + # loop over frames + while True: + # receive frames from network + frame = self.client.recv() + + # if NoneType + if frame is None: + return None + + # do something with your OpenCV frame here + + # reducer frames size if you want more performance otherwise comment this line + frame = await reducer( + frame, percentage=30, interpolation=cv2.INTER_AREA + ) # reduce frame by 30% + + # handle JPEG encoding + encodedImage = cv2.imencode(".jpg", frame)[1].tobytes() + # yield frame in byte format + yield (b"--frame\r\nContent-Type:image/jpeg\r\n\r\n" + encodedImage + b"\r\n") + await asyncio.sleep(0) + # close stream + client.close() + + +# add your custom frame producer to config with adequate IP address +web.config["generator"] = my_frame_producer + +# run this app on Uvicorn server at address http://localhost:8000/ +uvicorn.run(web(), host="localhost", port=8000) + +# close app safely +web.shutdown() +``` + +!!! success "On successfully running this code, the output stream will be displayed at address http://localhost:8000/ in your Client's Browser." + + +### Server + +Now, Open the terminal on another Server System _(with a webcam connected to it at index 0)_, and execute the following python code: + +!!! note "Replace the IP address in the following code with Client's IP address you noted earlier." + +```python +# import required libraries +from vidgear.gears import VideoGear +from vidgear.gears import NetGear +import cv2 + +# activate jpeg encoding and specify other related parameters +options = { + "jpeg_compression": True, + "jpeg_compression_quality": 90, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": True, +} + +# Open live video stream on webcam at first index(i.e. 0) device +stream = VideoGear(source=0).start() + +# Define NetGear server at given IP address and define parameters +# !!! change following IP address '192.168.x.xxx' with client's IP address !!! +server = NetGear( + address="192.168.x.xxx", + port="5454", + protocol="tcp", + pattern=1, + logging=True, + **options +) + +# loop over until KeyBoard Interrupted +while True: + + try: + # read frames from stream + frame = stream.read() + + # check for frame if None-type + if frame is None: + break + + # {do something with the frame here} + + # send frame to server + server.send(frame) + + except KeyboardInterrupt: + break + +# safely close video stream +stream.stop() + +# safely close server +server.close() +``` + +  + +## Using NetGear with WebGear_RTC + +The complete usage example is as follows: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. + +### Client + WebGear_RTC Server + +Open a terminal on Client System where you want to display the input frames _(and setup WebGear_RTC server)_ received from the Server and execute the following python code: + +!!! danger "After running this code, Make sure to open Browser immediately otherwise NetGear will soon exit with `RuntimeError`. You can also try setting [`max_retries`](../../gears/netgear/params/#options) and [`request_timeout`](../../gears/netgear/params/#options) like attributes to a higher value to avoid this." + +!!! warning "Make sure you use different `port` value for NetGear and WebGear_RTC API." + +!!! alert "High CPU utilization may occur on Client's end. User discretion is advised." + +!!! 
note "Note down the IP-address of this system _required at Server's end)_ by executing the `hostname -I` command and also replace it in the following code."" + +```python +# import required libraries +import uvicorn, asyncio, cv2 +from av import VideoFrame +from aiortc import VideoStreamTrack +from aiortc.mediastreams import MediaStreamError +from vidgear.gears import NetGear +from vidgear.gears.asyncio import WebGear_RTC +from vidgear.gears.asyncio.helper import reducer + +# initialize WebGear_RTC app without any source +web = WebGear_RTC(logging=True) + +# activate jpeg encoding and specify other related parameters +options = { + "jpeg_compression": True, + "jpeg_compression_quality": 90, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": True, +} + + +# create your own Bare-Minimum Custom Media Server +class Custom_RTCServer(VideoStreamTrack): + """ + Custom Media Server using OpenCV, an inherit-class + to aiortc's VideoStreamTrack. + """ + + def __init__( + self, + address=None, + port="5454", + protocol="tcp", + pattern=1, + logging=True, + options={}, + ): + # don't forget this line! + super().__init__() + + # initialize global params + # Define NetGear Client at given IP address and define parameters + self.client = NetGear( + receive_mode=True, + address=address, + port=protocol, + pattern=pattern, + receive_mode=True, + logging=logging, + **options + ) + + async def recv(self): + """ + A coroutine function that yields `av.frame.Frame`. + """ + # don't forget this function!!! + + # get next timestamp + pts, time_base = await self.next_timestamp() + + # receive frames from network + frame = self.client.recv() + + # if NoneType + if frame is None: + raise MediaStreamError + + # reducer frames size if you want more performance otherwise comment this line + frame = await reducer(frame, percentage=30) # reduce frame by 30% + + # contruct `av.frame.Frame` from `numpy.nd.array` + av_frame = VideoFrame.from_ndarray(frame, format="bgr24") + av_frame.pts = pts + av_frame.time_base = time_base + + # return `av.frame.Frame` + return av_frame + + def terminate(self): + """ + Gracefully terminates VideoGear stream + """ + # don't forget this function!!! + + # terminate + if not (self.client is None): + self.client.close() + self.client = None + + +# assign your custom media server to config with adequate IP address +# !!! change following IP address '192.168.x.xxx' with yours !!! +web.config["server"] = Custom_RTCServer( + address="192.168.x.xxx", + port="5454", + protocol="tcp", + pattern=1, + logging=True, + **options +) + +# run this app on Uvicorn server at address http://localhost:8000/ +uvicorn.run(web(), host="localhost", port=8000) + +# close app safely +web.shutdown() +``` + +!!! success "On successfully running this code, the output stream will be displayed at address http://localhost:8000/ in your Client's Browser." + +### Server + +Now, Open the terminal on another Server System _(with a webcam connected to it at index 0)_, and execute the following python code: + +!!! note "Replace the IP address in the following code with Client's IP address you noted earlier." + +```python +# import required libraries +from vidgear.gears import VideoGear +from vidgear.gears import NetGear +import cv2 + +# activate jpeg encoding and specify other related parameters +options = { + "jpeg_compression": True, + "jpeg_compression_quality": 90, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": True, +} + +# Open live video stream on webcam at first index(i.e. 
0) device +stream = VideoGear(source=0).start() + +# Define NetGear server at given IP address and define parameters +# !!! change following IP address '192.168.x.xxx' with client's IP address !!! +server = NetGear( + address="192.168.x.xxx", + port="5454", + protocol="tcp", + pattern=1, + logging=True, + **options +) + +# loop over until KeyBoard Interrupted +while True: + + try: + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # send frame to server + server.send(frame) + + except KeyboardInterrupt: + break + +# safely close video stream +stream.stop() + +# safely close server +server.close() +``` + +  \ No newline at end of file diff --git a/docs/help/netgear_faqs.md b/docs/help/netgear_faqs.md index 4766cdb17..f8aab71fe 100644 --- a/docs/help/netgear_faqs.md +++ b/docs/help/netgear_faqs.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -39,12 +39,13 @@ limitations under the License. Here's the compatibility chart for NetGear's [Exclusive Modes](../../gears/netgear/overview/#exclusive-modes): -| Exclusive Modes | Multi-Servers | Multi-Clients | Secure | Bidirectional | -| :-------------: | :-----------: | :-----------: | :----: | :-----------: | -| **Multi-Servers** | - | No _(throws error)_ | Yes | No _(disables it)_ | -| **Multi-Clients** | No _(throws error)_ | - | Yes | No _(disables it)_ | -| **Secure** | Yes | Yes | - | Yes | -| **Bidirectional** | No _(disabled)_ | No _(disabled)_ | Yes | - | +| Exclusive Modes | Multi-Servers | Multi-Clients | Secure | Bidirectional | SSH Tunneling | +| :-------------: | :-----------: | :-----------: | :----: | :-----------: | :-----------: | +| **Multi-Servers** | - | No _(throws error)_ | Yes | No _(disables it)_ | No _(throws error)_ | +| **Multi-Clients** | No _(throws error)_ | - | Yes | No _(disables it)_ | No _(throws error)_ | +| **Secure** | Yes | Yes | - | Yes | Yes | +| **Bidirectional** | No _(disabled)_ | No _(disabled)_ | Yes | - | Yes | +| **SSH Tunneling** | No _(throws error)_ | No _(throws error)_ | Yes | Yes | - |   @@ -93,6 +94,13 @@ Here's the compatibility chart for NetGear's [Exclusive Modes](../../gears/netge   + +## How to access NetGear API outside network or remotely? + +**Answer:** See its [SSH Tunneling Mode doc ➶](../../gears/netgear/advanced/ssh_tunnel/). + +  + ## Are there any side-effect of sending data with frames? **Answer:** Yes, it may lead to additional **LATENCY** depending upon the size/amount of the data being transferred. User discretion is advised. diff --git a/docs/help/pigear_ex.md b/docs/help/pigear_ex.md new file mode 100644 index 000000000..03d86f63e --- /dev/null +++ b/docs/help/pigear_ex.md @@ -0,0 +1,75 @@ + + +# PiGear Examples + +  + +## Setting variable `picamera` parameters for Camera Module at runtime + +You can use `stream` global parameter in PiGear to feed any [`picamera`](https://picamera.readthedocs.io/en/release-1.10/api_camera.html) parameters at runtime. 
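For instance, here's a minimal sketch of that pattern on its own (assuming a Raspberry Pi with the Camera Module attached and `picamera` installed) — PiGear exposes the underlying `picamera` object as `stream.stream`, so any of its properties can be read or updated on-the-fly:

```python
# import required library
from vidgear.gears import PiGear

# open pi video stream with brightness initially set through options
stream = PiGear(logging=True, **{"brightness": 80}).start()

# the underlying picamera object is exposed as `stream.stream`,
# so any of its properties can be read or updated on-the-fly
print(stream.stream.brightness)  # read current brightness
stream.stream.brightness = 50    # change brightness at runtime

# safely close video stream
stream.stop()
```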
+ +In this example we will set initial Camera Module's `brightness` value `80`, and will change it `50` when **`z` key** is pressed at runtime: + +```python +# import required libraries +from vidgear.gears import PiGear +import cv2 + +# initial parameters +options = {"brightness": 80} # set brightness to 80 + +# open pi video stream with default parameters +stream = PiGear(logging=True, **options).start() + +# loop over +while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + + # {do something with the frame here} + + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + # check for 'z' key if pressed + if key == ord("z"): + # change brightness to 50 + stream.stream.brightness = 50 + +# close output window +cv2.destroyAllWindows() + +# safely close video stream +stream.stop() +``` + +  \ No newline at end of file diff --git a/docs/help/pigear_faqs.md b/docs/help/pigear_faqs.md index 176e7a6d0..3c24814da 100644 --- a/docs/help/pigear_faqs.md +++ b/docs/help/pigear_faqs.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -67,53 +67,6 @@ limitations under the License. ## How to change `picamera` settings for Camera Module at runtime? -**Answer:** You can use `stream` global parameter in PiGear to feed any `picamera` setting at runtime. See following sample usage example: - -!!! info "" - In this example we will set initial Camera Module's `brightness` value `80`, and will change it `50` when **`z` key** is pressed at runtime. - -```python -# import required libraries -from vidgear.gears import PiGear -import cv2 - -# initial parameters -options = {"brightness": 80} # set brightness to 80 - -# open pi video stream with default parameters -stream = PiGear(logging=True, **options).start() - -# loop over -while True: - - # read frames from stream - frame = stream.read() - - # check for frame if Nonetype - if frame is None: - break - - - # {do something with the frame here} - - - # Show output window - cv2.imshow("Output Frame", frame) - - # check for 'q' key if pressed - key = cv2.waitKey(1) & 0xFF - if key == ord("q"): - break - # check for 'z' key if pressed - if key == ord("z"): - # change brightness to 50 - stream.stream.brightness = 50 - -# close output window -cv2.destroyAllWindows() - -# safely close video stream -stream.stop() -``` +**Answer:** You can use `stream` global parameter in PiGear to feed any `picamera` setting at runtime. See [this bonus example ➶](../pigear_ex/#setting-variable-picamera-parameters-for-camera-module-at-runtime)   \ No newline at end of file diff --git a/docs/help/screengear_ex.md b/docs/help/screengear_ex.md new file mode 100644 index 000000000..80463ee11 --- /dev/null +++ b/docs/help/screengear_ex.md @@ -0,0 +1,149 @@ + + +# ScreenGear Examples + +  + +## Using ScreenGear with NetGear and WriteGear + +The complete usage example is as follows: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. 
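Before wiring the network pieces together, it may help to first confirm that plain screen capture works on the Server machine. Here's a minimal local sketch — the `top`/`left`/`width`/`height` values below are placeholders, so adjust them for your monitor:

```python
# import required libraries
from vidgear.gears import ScreenGear
import cv2

# define a small capture region w.r.t. your monitor (placeholder values)
options = {"top": 40, "left": 0, "width": 100, "height": 100}

# open screen capture stream with defined parameters
stream = ScreenGear(logging=True, **options).start()

# loop over
while True:
    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # Show output window
    cv2.imshow("Screen Capture", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
```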
+ +### Client + WriteGear + +Open a terminal on Client System _(where you want to save the input frames received from the Server)_ and execute the following python code: + +!!! info "Note down the IP-address of this system(required at Server's end) by executing the command: `hostname -I` and also replace it in the following code." + +!!! tip "You can terminate client anytime by pressing ++ctrl+"C"++ on your keyboard!" + +```python +# import required libraries +from vidgear.gears import NetGear +from vidgear.gears import WriteGear +import cv2 + +# define various tweak flags +options = {"flag": 0, "copy": False, "track": False} + +# Define Netgear Client at given IP address and define parameters +# !!! change following IP address '192.168.x.xxx' with yours !!! +client = NetGear( + address="192.168.x.xxx", + port="5454", + protocol="tcp", + pattern=1, + receive_mode=True, + logging=True, + **options +) + +# Define writer with default parameters and suitable output filename for e.g. `Output.mp4` +writer = WriteGear(output_filename="Output.mp4") + +# loop over +while True: + + # receive frames from network + frame = client.recv() + + # check for received frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # write frame to writer + writer.write(frame) + +# close output window +cv2.destroyAllWindows() + +# safely close client +client.close() + +# safely close writer +writer.close() +``` + +### Server + ScreenGear + +Now, Open the terminal on another Server System _(with a montior/display attached to it)_, and execute the following python code: + +!!! info "Replace the IP address in the following code with Client's IP address you noted earlier." + +!!! tip "You can terminate stream on both side anytime by pressing ++ctrl+"C"++ on your keyboard!" + +```python +# import required libraries +from vidgear.gears import VideoGear +from vidgear.gears import NetGear + +# define dimensions of screen w.r.t to given monitor to be captured +options = {"top": 40, "left": 0, "width": 100, "height": 100} + +# open stream with defined parameters +stream = ScreenGear(logging=True, **options).start() + +# define various netgear tweak flags +options = {"flag": 0, "copy": False, "track": False} + +# Define Netgear server at given IP address and define parameters +# !!! change following IP address '192.168.x.xxx' with client's IP address !!! +server = NetGear( + address="192.168.x.xxx", + port="5454", + protocol="tcp", + pattern=1, + logging=True, + **options +) + +# loop over until KeyBoard Interrupted +while True: + + try: + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # send frame to server + server.send(frame) + + except KeyboardInterrupt: + break + +# safely close video stream +stream.stop() + +# safely close server +server.close() +``` + +  + diff --git a/docs/help/screengear_faqs.md b/docs/help/screengear_faqs.md index 63118e583..7fb76d802 100644 --- a/docs/help/screengear_faqs.md +++ b/docs/help/screengear_faqs.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/docs/help/stabilizer_ex.md b/docs/help/stabilizer_ex.md new file mode 100644 index 000000000..8b8636265 --- /dev/null +++ b/docs/help/stabilizer_ex.md @@ -0,0 +1,236 @@ + + +# Stabilizer Class Examples + +  + +## Saving Stabilizer Class output with Live Audio Input + +In this example code, we will merging the audio from a Audio Device _(for e.g. Webcam inbuilt mic input)_ with Stablized frames incoming from the Stabilizer Class _(which is also using same Webcam video input through OpenCV)_, and save the final output as a compressed video file, all in real time: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. + +!!! alert "Example Assumptions" + + * You're running are Linux machine. + * You already have appropriate audio driver and software installed on your machine. + + +??? tip "Identifying and Specifying sound card on different OS platforms" + + === "On Windows" + + Windows OS users can use the [dshow](https://trac.ffmpeg.org/wiki/DirectShow) (DirectShow) to list audio input device which is the preferred option for Windows users. You can refer following steps to identify and specify your sound card: + + - [x] **[OPTIONAL] Enable sound card(if disabled):** First enable your Stereo Mix by opening the "Sound" window and select the "Recording" tab, then right click on the window and select "Show Disabled Devices" to toggle the Stereo Mix device visibility. **Follow this [post ➶](https://forums.tomshardware.com/threads/no-sound-through-stereo-mix-realtek-hd-audio.1716182/) for more details.** + + - [x] **Identify Sound Card:** Then, You can locate your soundcard using `dshow` as follows: + + ```sh + c:\> ffmpeg -list_devices true -f dshow -i dummy + ffmpeg version N-45279-g6b86dd5... --enable-runtime-cpudetect + libavutil 51. 74.100 / 51. 74.100 + libavcodec 54. 65.100 / 54. 65.100 + libavformat 54. 31.100 / 54. 31.100 + libavdevice 54. 3.100 / 54. 3.100 + libavfilter 3. 19.102 / 3. 19.102 + libswscale 2. 1.101 / 2. 1.101 + libswresample 0. 16.100 / 0. 16.100 + [dshow @ 03ACF580] DirectShow video devices + [dshow @ 03ACF580] "Integrated Camera" + [dshow @ 03ACF580] "USB2.0 Camera" + [dshow @ 03ACF580] DirectShow audio devices + [dshow @ 03ACF580] "Microphone (Realtek High Definition Audio)" + [dshow @ 03ACF580] "Microphone (USB2.0 Camera)" + dummy: Immediate exit requested + ``` + + + - [x] **Specify Sound Card:** Then, you can specify your located soundcard in StreamGear as follows: + + ```python + # assign appropriate input audio-source + output_params = { + "-i":"audio=Microphone (USB2.0 Camera)", + "-thread_queue_size": "512", + "-f": "dshow", + "-ac": "2", + "-acodec": "aac", + "-ar": "44100", + } + ``` + + !!! fail "If audio still doesn't work then [checkout this troubleshooting guide ➶](https://www.maketecheasier.com/fix-microphone-not-working-windows10/) or reach us out on [Gitter ➶](https://gitter.im/vidgear/community) Community channel" + + + === "On Linux" + + Linux OS users can use the [alsa](https://ffmpeg.org/ffmpeg-all.html#alsa) to list input device to capture live audio input such as from a webcam. You can refer following steps to identify and specify your sound card: + + - [x] **Identify Sound Card:** To get the list of all installed cards on your machine, you can type `arecord -l` or `arecord -L` _(longer output)_. 
+ + ```sh + arecord -l + + **** List of CAPTURE Hardware Devices **** + card 0: ICH5 [Intel ICH5], device 0: Intel ICH [Intel ICH5] + Subdevices: 1/1 + Subdevice #0: subdevice #0 + card 0: ICH5 [Intel ICH5], device 1: Intel ICH - MIC ADC [Intel ICH5 - MIC ADC] + Subdevices: 1/1 + Subdevice #0: subdevice #0 + card 0: ICH5 [Intel ICH5], device 2: Intel ICH - MIC2 ADC [Intel ICH5 - MIC2 ADC] + Subdevices: 1/1 + Subdevice #0: subdevice #0 + card 0: ICH5 [Intel ICH5], device 3: Intel ICH - ADC2 [Intel ICH5 - ADC2] + Subdevices: 1/1 + Subdevice #0: subdevice #0 + card 1: U0x46d0x809 [USB Device 0x46d:0x809], device 0: USB Audio [USB Audio] + Subdevices: 1/1 + Subdevice #0: subdevice #0 + ``` + + + - [x] **Specify Sound Card:** Then, you can specify your located soundcard in WriteGear as follows: + + !!! info "The easiest thing to do is to reference sound card directly, namely "card 0" (Intel ICH5) and "card 1" (Microphone on the USB web cam), as `hw:0` or `hw:1`" + + ```python + # assign appropriate input audio-source + output_params = { + "-i": "hw:1", + "-thread_queue_size": "512", + "-f": "alsa", + "-ac": "2", + "-acodec": "aac", + "-ar": "44100", + } + ``` + + !!! fail "If audio still doesn't work then reach us out on [Gitter ➶](https://gitter.im/vidgear/community) Community channel" + + + === "On MacOS" + + MAC OS users can use the [avfoundation](https://ffmpeg.org/ffmpeg-devices.html#avfoundation) to list input devices for grabbing audio from integrated iSight cameras as well as cameras connected via USB or FireWire. You can refer following steps to identify and specify your sound card on MacOS/OSX machines: + + + - [x] **Identify Sound Card:** Then, You can locate your soundcard using `avfoundation` as follows: + + ```sh + ffmpeg -f qtkit -list_devices true -i "" + ffmpeg version N-45279-g6b86dd5... --enable-runtime-cpudetect + libavutil 51. 74.100 / 51. 74.100 + libavcodec 54. 65.100 / 54. 65.100 + libavformat 54. 31.100 / 54. 31.100 + libavdevice 54. 3.100 / 54. 3.100 + libavfilter 3. 19.102 / 3. 19.102 + libswscale 2. 1.101 / 2. 1.101 + libswresample 0. 16.100 / 0. 16.100 + [AVFoundation input device @ 0x7f8e2540ef20] AVFoundation video devices: + [AVFoundation input device @ 0x7f8e2540ef20] [0] FaceTime HD camera (built-in) + [AVFoundation input device @ 0x7f8e2540ef20] [1] Capture screen 0 + [AVFoundation input device @ 0x7f8e2540ef20] AVFoundation audio devices: + [AVFoundation input device @ 0x7f8e2540ef20] [0] Blackmagic Audio + [AVFoundation input device @ 0x7f8e2540ef20] [1] Built-in Microphone + ``` + + + - [x] **Specify Sound Card:** Then, you can specify your located soundcard in StreamGear as follows: + + ```python + # assign appropriate input audio-source + output_params = { + "-audio_device_index": "0", + "-thread_queue_size": "512", + "-f": "avfoundation", + "-ac": "2", + "-acodec": "aac", + "-ar": "44100", + } + ``` + + !!! fail "If audio still doesn't work then reach us out on [Gitter ➶](https://gitter.im/vidgear/community) Community channel" + + +!!! danger "Make sure this `-i` audio-source it compatible with provided video-source, otherwise you encounter multiple errors or no output at all." + +!!! warning "You **MUST** use [`-input_framerate`](../../gears/writegear/compression/params/#a-exclusive-parameters) attribute to set exact value of input framerate when using external audio in Real-time Frames mode, otherwise audio delay will occur in output streams." 
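For reference, here's a minimal sketch of how the exact source framerate could be measured with OpenCV and passed through the exclusive `-input_framerate` parameter. It assumes your capture device reports a valid FPS value — some webcams return `0.0`, in which case you'd fall back to a manual value:

```python
import cv2

# probe the same webcam that will feed the stabilizer
probe = cv2.VideoCapture(0)

# query the reported framerate (may be 0.0 on some devices/backends)
measured_fps = probe.get(cv2.CAP_PROP_FPS)
probe.release()

# fall back to a sensible manual value if the device reports nothing useful
input_framerate = measured_fps if measured_fps and measured_fps > 0 else 30

# merge it into WriteGear's FFmpeg parameters alongside the audio options
output_params = {
    "-input_framerate": input_framerate,
    # ... add your audio-related parameters here (see the full example below) ...
}
```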
+ +```python +# import required libraries +from vidgear.gears import WriteGear +from vidgear.gears.stabilizer import Stabilizer +import cv2 + +# Open suitable video stream, such as webcam on first index(i.e. 0) +stream = cv2.VideoCapture(0) + +# initiate stabilizer object with defined parameters +stab = Stabilizer(smoothing_radius=30, crop_n_zoom=True, border_size=5, logging=True) + +# change with your webcam soundcard, plus add additional required FFmpeg parameters for your writer +output_params = { + "-thread_queue_size": "512", + "-f": "alsa", + "-ac": "1", + "-ar": "48000", + "-i": "plughw:CARD=CAMERA,DEV=0", +} + +# Define writer with defined parameters and suitable output filename for e.g. `Output.mp4 +writer = WriteGear(output_filename="Output.mp4", logging=True, **output_params) + +# loop over +while True: + + # read frames from stream + (grabbed, frame) = stream.read() + + # check for frame if not grabbed + if not grabbed: + break + + # send current frame to stabilizer for processing + stabilized_frame = stab.stabilize(frame) + + # wait for stabilizer which still be initializing + if stabilized_frame is None: + continue + + # {do something with the stabilized frame here} + + # write stabilized frame to writer + writer.write(stabilized_frame) + + +# clear stabilizer resources +stab.clean() + +# safely close video stream +stream.release() + +# safely close writer +writer.close() +``` + +  \ No newline at end of file diff --git a/docs/help/stabilizer_faqs.md b/docs/help/stabilizer_faqs.md index 5a33629c1..7406c3795 100644 --- a/docs/help/stabilizer_faqs.md +++ b/docs/help/stabilizer_faqs.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -30,7 +30,7 @@ limitations under the License. ## How much latency you would typically expect with Stabilizer Class? -**Answer:** The stabilizer will be Slower for High-Quality videos-frames. Try reducing frames size _(Use [`reducer()`](../../bonus/reference/helper/#reducer) method)_ before feeding them for reducing latency. Also, see [`smoothing_radius`](../../gears/stabilizer/params/#smoothing_radius) parameter of Stabilizer class that handles the quality of stabilization at the expense of latency and sudden panning. The larger its value, the less will be panning, more will be latency, and vice-versa. +**Answer:** The stabilizer will be Slower for High-Quality videos-frames. Try reducing frames size _(Use [`reducer()`](../../bonus/reference/helper/#vidgear.gears.helper.reducer--reducer) method)_ before feeding them for reducing latency. Also, see [`smoothing_radius`](../../gears/stabilizer/params/#smoothing_radius) parameter of Stabilizer class that handles the quality of stabilization at the expense of latency and sudden panning. The larger its value, the less will be panning, more will be latency, and vice-versa.   
diff --git a/docs/help/streamgear_ex.md b/docs/help/streamgear_ex.md new file mode 100644 index 000000000..d8a83db14 --- /dev/null +++ b/docs/help/streamgear_ex.md @@ -0,0 +1,161 @@ + + +# StreamGear Examples + +  + +## StreamGear Live-Streaming Usage with PiGear + +In this example, we will be Live-Streaming video-frames from Raspberry Pi _(with Camera Module connected)_ using PiGear API and StreamGear API's Real-time Frames Mode: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. + +!!! tip "Use `-window_size` & `-extra_window_size` FFmpeg parameters for controlling number of frames to be kept in Chunks. Less these value, less will be latency." + +!!! alert "After every few chunks _(equal to the sum of `-window_size` & `-extra_window_size` values)_, all chunks will be overwritten in Live-Streaming. Thereby, since newer chunks in manifest/playlist will contain NO information of any older ones, and therefore resultant DASH/HLS stream will play only the most recent frames." + +!!! note "In this mode, StreamGear **DOES NOT** automatically maps video-source audio to generated streams. You need to manually assign separate audio-source through [`-audio`](../../gears/streamgear/params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter." + +=== "DASH" + + ```python + # import required libraries + from vidgear.gears import PiGear + from vidgear.gears import StreamGear + import cv2 + + # add various Picamera tweak parameters to dictionary + options = { + "hflip": True, + "exposure_mode": "auto", + "iso": 800, + "exposure_compensation": 15, + "awb_mode": "horizon", + "sensor_mode": 0, + } + + # open pi video stream with defined parameters + stream = PiGear(resolution=(640, 480), framerate=60, logging=True, **options).start() + + # enable livestreaming and retrieve framerate from CamGear Stream and + # pass it as `-input_framerate` parameter for controlled framerate + stream_params = {"-input_framerate": stream.framerate, "-livestream": True} + + # describe a suitable manifest-file location/name + streamer = StreamGear(output="dash_out.mpd", **stream_params) + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + +=== "HLS" + + ```python + # import required libraries + from vidgear.gears import PiGear + from vidgear.gears import StreamGear + import cv2 + + # add various Picamera tweak parameters to dictionary + options = { + "hflip": True, + "exposure_mode": "auto", + "iso": 800, + "exposure_compensation": 15, + "awb_mode": "horizon", + "sensor_mode": 0, + } + + # open pi video stream with defined parameters + stream = PiGear(resolution=(640, 480), framerate=60, logging=True, **options).start() + + # enable livestreaming and retrieve framerate from CamGear Stream and + # pass it as `-input_framerate` parameter for controlled framerate + stream_params = {"-input_framerate": stream.framerate, "-livestream": True} + + # describe a suitable manifest-file location/name + streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params) + + # loop over + while True: + 
+ # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + + +  \ No newline at end of file diff --git a/docs/help/streamgear_faqs.md b/docs/help/streamgear_faqs.md index c49435926..955b15f76 100644 --- a/docs/help/streamgear_faqs.md +++ b/docs/help/streamgear_faqs.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -24,13 +24,13 @@ limitations under the License. ## What is StreamGear API and what does it do? -**Answer:** StreamGear automates transcoding workflow for generating _Ultra-Low Latency, High-Quality, Dynamic & Adaptive Streaming Formats (such as MPEG-DASH)_ in just few lines of python code. _For more info. see [StreamGear doc ➶](../../gears/streamgear/overview/)_ +**Answer:** StreamGear automates transcoding workflow for generating _Ultra-Low Latency, High-Quality, Dynamic & Adaptive Streaming Formats (such as MPEG-DASH)_ in just few lines of python code. _For more info. see [StreamGear doc ➶](../../gears/streamgear/introduction/)_   ## How to get started with StreamGear API? -**Answer:** See [StreamGear doc ➶](../../gears/streamgear/overview/). Still in doubt, then ask us on [Gitter ➶](https://gitter.im/vidgear/community) Community channel. +**Answer:** See [StreamGear doc ➶](../../gears/streamgear/introduction/). Still in doubt, then ask us on [Gitter ➶](https://gitter.im/vidgear/community) Community channel.   @@ -42,7 +42,7 @@ limitations under the License. ## How to play Streaming Assets created with StreamGear API? -**Answer:** You can easily feed Manifest file(`.mpd`) to DASH Supported Players Input but sure encoded chunks are present along with it. See this list of [recommended players ➶](../../gears/streamgear/overview/#recommended-stream-players) +**Answer:** You can easily feed Manifest file(`.mpd`) to DASH Supported Players Input but sure encoded chunks are present along with it. See this list of [recommended players ➶](../../gears/streamgear/introduction/#recommended-stream-players)   @@ -60,24 +60,34 @@ limitations under the License. ## How to create additional streams in StreamGear API? -**Answer:** [See this example ➶](../../gears/streamgear/usage/#a2-usage-with-additional-streams) +**Answer:** [See this example ➶](../../gears/streamgear/ssm/usage/#usage-with-additional-streams)   -## How to use StreamGear API with real-time frames? -**Answer:** See [Real-time Frames Mode ➶](../../gears/streamgear/usage/#b-real-time-frames-mode) +## How to use StreamGear API with OpenCV? + +**Answer:** [See this example ➶](../../gears/streamgear/rtfm/usage/bare-minimum-usage-with-opencv)   -## How to use StreamGear API with OpenCV? +## How to use StreamGear API with real-time frames? 
-**Answer:** [See this example ➶](../../gears/streamgear/usage/#b4-bare-minimum-usage-with-opencv) +**Answer:** See [Real-time Frames Mode ➶](../../gears/streamgear/rtfm/overview)   +## Is Real-time Frames Mode only used for Live-Streaming? + +**Answer:** Real-time Frame Modes and Live-Streaming are completely different terms and not directly related. + +- **Real-time Frame Mode** is one of [primary mode](./../gears/streamgear/introduction/#mode-of-operations) for directly transcoding real-time [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) video-frames _(as opposed to a entire file)_ into a sequence of multiple smaller chunks/segments for streaming. + +- **Live-Streaming** is feature of StreamGear's primary modes that activates behaviour where chunks will contain information for few new frames only and forgets all previous ones for low latency streaming. It can be activated for any primary mode using exclusive [`-livestream`](../../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter. + + ## How to use Hardware/GPU encoder for StreamGear trancoding? -**Answer:** [See this example ➶](../../gears/streamgear/usage/#b7-usage-with-hardware-video-encoder) +**Answer:** [See this example ➶](../../gears/streamgear/rtfm/usage/#usage-with-hardware-video-encoder)   \ No newline at end of file diff --git a/docs/help/videogear_ex.md b/docs/help/videogear_ex.md new file mode 100644 index 000000000..de8a92053 --- /dev/null +++ b/docs/help/videogear_ex.md @@ -0,0 +1,220 @@ + + +# VideoGear Examples + +  + +## Using VideoGear with ROS(Robot Operating System) + +We will be using [`cv_bridge`](http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython) to convert OpenCV frames to ROS image messages and vice-versa. + +In this example, we'll create a node that convert OpenCV frames into ROS image messages, and then publishes them over ROS. + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. + +!!! note "This example is vidgear implementation of this [wiki example](http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython)." + +```python +# import roslib +import roslib + +roslib.load_manifest("my_package") + +# import other required libraries +import sys +import rospy +import cv2 +from std_msgs.msg import String +from sensor_msgs.msg import Image +from cv_bridge import CvBridge, CvBridgeError +from vidgear.gears import VideoGear + +# custom publisher class +class image_publisher: + def __init__(self, source=0, logging=False): + # create CV bridge + self.bridge = CvBridge() + # define publisher topic + self.image_pub = rospy.Publisher("image_topic_pub", Image) + # open stream with given parameters + self.stream_stab = VideoGear(source=source, logging=logging).start() + # define publisher topic + rospy.Subscriber("image_topic_sub", Image, self.callback) + + def callback(self, data): + + # {do something with received ROS node data here} + + # read stabilized frames + frame = self.stream.read() + # check for stabilized frame if None-type + if not (frame is None): + + # {do something with the frame here} + + # publish our frame + try: + self.image_pub.publish(self.bridge.cv2_to_imgmsg(frame, "bgr8")) + except CvBridgeError as e: + # catch any errors + print(e) + + def close(self): + # stop stream + self.stream_stab.stop() + + +def main(args): + # !!! define your own video source here !!! 
+ # Open any video stream such as live webcam + # video stream on first index(i.e. 0) device + + # define publisher + ic = image_publisher(source=0, logging=True) + # initiate ROS node on publisher + rospy.init_node("image_publisher", anonymous=True) + try: + # run node + rospy.spin() + except KeyboardInterrupt: + print("Shutting down") + finally: + # close publisher + ic.close() + + +if __name__ == "__main__": + main(sys.argv) +``` + +  + +## Using VideoGear for capturing RSTP/RTMP URLs + +Here's a high-level wrapper code around VideoGear API to enable auto-reconnection during capturing, plus stabilization is enabled _(`stabilize=True`)_ in order to stabilize captured frames on-the-go: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. + +??? tip "Enforcing UDP stream" + + You can easily enforce UDP for RSTP streams inplace of default TCP, by putting following lines of code on the top of your existing code: + + ```python + # import required libraries + import os + + # enforce UDP + os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;udp" + ``` + + Finally, use [`backend`](../../gears/videogear/params/#backend) parameter value as `backend=cv2.CAP_FFMPEG` in VideoGear. + + +```python +from vidgear.gears import VideoGear +import cv2 +import datetime +import time + + +class Reconnecting_VideoGear: + def __init__(self, cam_address, stabilize=False, reset_attempts=50, reset_delay=5): + self.cam_address = cam_address + self.stabilize = stabilize + self.reset_attempts = reset_attempts + self.reset_delay = reset_delay + self.source = VideoGear( + source=self.cam_address, stabilize=self.stabilize + ).start() + self.running = True + + def read(self): + if self.source is None: + return None + if self.running and self.reset_attempts > 0: + frame = self.source.read() + if frame is None: + self.source.stop() + self.reset_attempts -= 1 + print( + "Re-connection Attempt-{} occured at time:{}".format( + str(self.reset_attempts), + datetime.datetime.now().strftime("%m-%d-%Y %I:%M:%S%p"), + ) + ) + time.sleep(self.reset_delay) + self.source = VideoGear( + source=self.cam_address, stabilize=self.stabilize + ).start() + # return previous frame + return self.frame + else: + self.frame = frame + return frame + else: + return None + + def stop(self): + self.running = False + self.reset_attempts = 0 + self.frame = None + if not self.source is None: + self.source.stop() + + +if __name__ == "__main__": + # open any valid video stream + stream = Reconnecting_VideoGear( + cam_address="rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov", + reset_attempts=20, + reset_delay=5, + ) + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if None-type + if frame is None: + break + + # {do something with the frame here} + + # Show output window + cv2.imshow("Output", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() +``` + +  \ No newline at end of file diff --git a/docs/help/videogear_faqs.md b/docs/help/videogear_faqs.md index 50be6a58f..67cdb89cb 100644 --- a/docs/help/videogear_faqs.md +++ b/docs/help/videogear_faqs.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, 
Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/help/webgear_ex.md b/docs/help/webgear_ex.md new file mode 100644 index 000000000..05b1dc628 --- /dev/null +++ b/docs/help/webgear_ex.md @@ -0,0 +1,233 @@ + + +# WebGear Examples + +  + +## Using WebGear with RaspberryPi Camera Module + +Because of WebGear API's flexible internal wapper around VideoGear, it can easily access any parameter of CamGear and PiGear videocapture APIs. + +!!! info "Following usage examples are just an idea of what can be done with WebGear API, you can try various [VideoGear](../../gears/videogear/params/), [CamGear](../../gears/camgear/params/) and [PiGear](../../gears/pigear/params/) parameters directly in WebGear API in the similar manner." + +Here's a bare-minimum example of using WebGear API with the Raspberry Pi camera module while tweaking its various properties in just one-liner: + +```python +# import libs +import uvicorn +from vidgear.gears.asyncio import WebGear + +# various webgear performance and Raspberry Pi camera tweaks +options = { + "frame_size_reduction": 40, + "jpeg_compression_quality": 80, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": False, + "hflip": True, + "exposure_mode": "auto", + "iso": 800, + "exposure_compensation": 15, + "awb_mode": "horizon", + "sensor_mode": 0, +} + +# initialize WebGear app +web = WebGear( + enablePiCamera=True, resolution=(640, 480), framerate=60, logging=True, **options +) + +# run this app on Uvicorn server at address http://localhost:8000/ +uvicorn.run(web(), host="localhost", port=8000) + +# close app safely +web.shutdown() +``` + +  + +## Using WebGear with real-time Video Stabilization enabled + +Here's an example of using WebGear API with real-time Video Stabilization enabled: + +```python +# import libs +import uvicorn +from vidgear.gears.asyncio import WebGear + +# various webgear performance tweaks +options = { + "frame_size_reduction": 40, + "jpeg_compression_quality": 80, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": False, +} + +# initialize WebGear app with a raw source and enable video stabilization(`stabilize=True`) +web = WebGear(source="foo.mp4", stabilize=True, logging=True, **options) + +# run this app on Uvicorn server at address http://localhost:8000/ +uvicorn.run(web(), host="localhost", port=8000) + +# close app safely +web.shutdown() +``` + +  + + +## Display Two Sources Simultaneously in WebGear + +In this example, we'll be displaying two video feeds side-by-side simultaneously on browser using WebGear API by defining two separate frame generators: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. 
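Both frame producers built in **Step-3** below follow the same pattern WebGear expects from a custom generator: an `async` generator that yields multipart JPEG byte chunks. As a rough self-contained sketch (the source name is illustrative):

```python
import asyncio, cv2

# illustrative async frame producer skeleton
async def some_frame_producer():
    # open any illustrative OpenCV source
    stream = cv2.VideoCapture("foo.mp4")
    # loop over frames
    while True:
        (grabbed, frame) = stream.read()
        if not grabbed:
            break
        # encode the BGR frame as JPEG and yield it as a multipart chunk
        encodedImage = cv2.imencode(".jpg", frame)[1].tobytes()
        yield (b"--frame\r\nContent-Type:image/jpeg\r\n\r\n" + encodedImage + b"\r\n")
        await asyncio.sleep(0)
    # close stream
    stream.release()
```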
+ +**Step-1 (Trigger Auto-Generation Process):** Firstly, run this bare-minimum code to trigger the [**Auto-generation**](../../gears/webgear/#auto-generation-process) process, this will create `.vidgear` directory at current location _(directory where you'll run this code)_: + +```python +# import required libraries +import uvicorn +from vidgear.gears.asyncio import WebGear + +# provide current directory to save data files +options = {"custom_data_location": "./"} + +# initialize WebGear app +web = WebGear(source=0, logging=True, **options) + +# close app safely +web.shutdown() +``` + +**Step-2 (Replace HTML file):** Now, go inside `.vidgear` :arrow_right: `webgear` :arrow_right: `templates` directory at current location of your machine, and there replace content of `index.html` file with following: + +```html +{% extends "base.html" %} +{% block content %} +

+<h1>WebGear Video Feed</h1>
+<div>
+    <img src="/video" alt="Feed">
+    <img src="/video2" alt="Feed">
+</div>
+{% endblock %} +``` + +**Step-3 (Build your own Frame Producers):** Now, create a python script code with OpenCV source, as follows: + +```python +# import necessary libs +import uvicorn, asyncio, cv2 +from vidgear.gears.asyncio import WebGear +from vidgear.gears.asyncio.helper import reducer +from starlette.responses import StreamingResponse +from starlette.routing import Route + +# provide current directory to load data files +options = {"custom_data_location": "./"} + +# initialize WebGear app without any source +web = WebGear(logging=True, **options) + +# create your own custom frame producer +async def my_frame_producer1(): + + # !!! define your first video source here !!! + # Open any video stream such as "foo1.mp4" + stream = cv2.VideoCapture("foo1.mp4") + # loop over frames + while True: + # read frame from provided source + (grabbed, frame) = stream.read() + # break if NoneType + if not grabbed: + break + + # do something with your OpenCV frame here + + # reducer frames size if you want more performance otherwise comment this line + frame = await reducer(frame, percentage=30) # reduce frame by 30% + # handle JPEG encoding + encodedImage = cv2.imencode(".jpg", frame)[1].tobytes() + # yield frame in byte format + yield (b"--frame\r\nContent-Type:video/jpeg2000\r\n\r\n" + encodedImage + b"\r\n") + await asyncio.sleep(0.00001) + # close stream + stream.release() + + +# create your own custom frame producer +async def my_frame_producer2(): + + # !!! define your second video source here !!! + # Open any video stream such as "foo2.mp4" + stream = cv2.VideoCapture("foo2.mp4") + # loop over frames + while True: + # read frame from provided source + (grabbed, frame) = stream.read() + # break if NoneType + if not grabbed: + break + + # do something with your OpenCV frame here + + # reducer frames size if you want more performance otherwise comment this line + frame = await reducer(frame, percentage=30) # reduce frame by 30% + # handle JPEG encoding + encodedImage = cv2.imencode(".jpg", frame)[1].tobytes() + # yield frame in byte format + yield (b"--frame\r\nContent-Type:video/jpeg2000\r\n\r\n" + encodedImage + b"\r\n") + await asyncio.sleep(0.00001) + # close stream + stream.release() + + +async def custom_video_response(scope): + """ + Return a async video streaming response for `my_frame_producer2` generator + """ + assert scope["type"] in ["http", "https"] + await asyncio.sleep(0.00001) + return StreamingResponse( + my_frame_producer2(), + media_type="multipart/x-mixed-replace; boundary=frame", + ) + + +# add your custom frame producer to config +web.config["generator"] = my_frame_producer1 + +# append new route i.e. new custom route with custom response +web.routes.append( + Route("/video2", endpoint=custom_video_response) + ) + +# run this app on Uvicorn server at address http://localhost:8000/ +uvicorn.run(web(), host="localhost", port=8000) + +# close app safely +web.shutdown() +``` + +!!! success "On successfully running this code, the output stream will be displayed at address http://localhost:8000/ in Browser." 
+ + +  \ No newline at end of file diff --git a/docs/help/webgear_faqs.md b/docs/help/webgear_faqs.md index 6616fcf61..e39194337 100644 --- a/docs/help/webgear_faqs.md +++ b/docs/help/webgear_faqs.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -48,7 +48,7 @@ limitations under the License. ## Is it possible to stream on a different device on the network with WebGear? -!!! note "If you set `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine, then you must still use http://localhost:8000/ to access stream on your host machine browser." +!!! alert "If you set `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine, then you must still use http://localhost:8000/ to access stream on that same host machine browser." For accessing WebGear on different Client Devices on the network, use `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine. Then type the IP-address of source machine followed by the defined `port` value in your desired Client Device's browser (for e.g. http://192.27.0.101:8000) to access the stream. @@ -72,6 +72,12 @@ For accessing WebGear on different Client Devices on the network, use `"0.0.0.0"   +## How can to add CORS headers to WebGear? + +**Answer:** See [this usage example ➶](../../gears/webgear/advanced/#using-webgear-with-middlewares). + +  + ## Can I change the default location? **Answer:** Yes, you can use WebGear's [`custom_data_location`](../../gears/webgear/params/#webgear-specific-attributes) attribute of `option` parameter in WebGear API, to change [default location](../../gears/webgear/overview/#default-location) to somewhere else. diff --git a/docs/help/webgear_rtc_ex.md b/docs/help/webgear_rtc_ex.md new file mode 100644 index 000000000..894599957 --- /dev/null +++ b/docs/help/webgear_rtc_ex.md @@ -0,0 +1,213 @@ + + +# WebGear_RTC_RTC Examples + +  + +## Using WebGear_RTC with RaspberryPi Camera Module + +Because of WebGear_RTC API's flexible internal wapper around VideoGear, it can easily access any parameter of CamGear and PiGear videocapture APIs. + +!!! info "Following usage examples are just an idea of what can be done with WebGear_RTC API, you can try various [VideoGear](../../gears/videogear/params/), [CamGear](../../gears/camgear/params/) and [PiGear](../../gears/pigear/params/) parameters directly in WebGear_RTC API in the similar manner." 
+ +Here's a bare-minimum example of using WebGear_RTC API with the Raspberry Pi camera module while tweaking its various properties in just one-liner: + +```python +# import libs +import uvicorn +from vidgear.gears.asyncio import WebGear_RTC + +# various webgear_rtc performance and Raspberry Pi camera tweaks +options = { + "frame_size_reduction": 25, + "hflip": True, + "exposure_mode": "auto", + "iso": 800, + "exposure_compensation": 15, + "awb_mode": "horizon", + "sensor_mode": 0, +} + +# initialize WebGear_RTC app +web = WebGear_RTC( + enablePiCamera=True, resolution=(640, 480), framerate=60, logging=True, **options +) + +# run this app on Uvicorn server at address http://localhost:8000/ +uvicorn.run(web(), host="localhost", port=8000) + +# close app safely +web.shutdown() +``` + +  + +## Using WebGear_RTC with real-time Video Stabilization enabled + +Here's an example of using WebGear_RTC API with real-time Video Stabilization enabled: + +```python +# import libs +import uvicorn +from vidgear.gears.asyncio import WebGear_RTC + +# various webgear_rtc performance tweaks +options = { + "frame_size_reduction": 25, +} + +# initialize WebGear_RTC app with a raw source and enable video stabilization(`stabilize=True`) +web = WebGear_RTC(source="foo.mp4", stabilize=True, logging=True, **options) + +# run this app on Uvicorn server at address http://localhost:8000/ +uvicorn.run(web(), host="localhost", port=8000) + +# close app safely +web.shutdown() +``` + +  + +## Display Two Sources Simultaneously in WebGear_RTC + +In this example, we'll be displaying two video feeds side-by-side simultaneously on browser using WebGear_RTC API by simply concatenating frames in real-time: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. + +```python +# import necessary libs +import uvicorn, asyncio, cv2 +import numpy as np +from av import VideoFrame +from aiortc import VideoStreamTrack +from vidgear.gears.asyncio import WebGear_RTC +from vidgear.gears.asyncio.helper import reducer + +# initialize WebGear_RTC app without any source +web = WebGear_RTC(logging=True) + +# frame concatenator +def get_conc_frame(frame1, frame2): + h1, w1 = frame1.shape[:2] + h2, w2 = frame2.shape[:2] + + # create empty matrix + vis = np.zeros((max(h1, h2), w1 + w2, 3), np.uint8) + + # combine 2 frames + vis[:h1, :w1, :3] = frame1 + vis[:h2, w1 : w1 + w2, :3] = frame2 + + return vis + + +# create your own Bare-Minimum Custom Media Server +class Custom_RTCServer(VideoStreamTrack): + """ + Custom Media Server using OpenCV, an inherit-class + to aiortc's VideoStreamTrack. + """ + + def __init__(self, source1=None, source2=None): + + # don't forget this line! + super().__init__() + + # check is source are provided + if source1 is None or source2 is None: + raise ValueError("Provide both source") + + # initialize global params + # define both source here + self.stream1 = cv2.VideoCapture(source1) + self.stream2 = cv2.VideoCapture(source2) + + async def recv(self): + """ + A coroutine function that yields `av.frame.Frame`. + """ + # don't forget this function!!! 
+ + # get next timestamp + pts, time_base = await self.next_timestamp() + + # read video frame + (grabbed1, frame1) = self.stream1.read() + (grabbed2, frame2) = self.stream2.read() + + # if NoneType + if not grabbed1 or not grabbed2: + return None + else: + print("Got frames") + + print(frame1.shape) + print(frame2.shape) + + # concatenate frame + frame = get_conc_frame(frame1, frame2) + + print(frame.shape) + + # reduce frame size if you want more performance, otherwise comment this line + # frame = await reducer(frame, percentage=30) # reduce frame by 30% + + # construct `av.frame.Frame` from `numpy.ndarray` + av_frame = VideoFrame.from_ndarray(frame, format="bgr24") + av_frame.pts = pts + av_frame.time_base = time_base + + # return `av.frame.Frame` + return av_frame + + def terminate(self): + """ + Gracefully terminates both VideoCapture streams + """ + # don't forget this function!!! + + # terminate + if not (self.stream1 is None): + self.stream1.release() + self.stream1 = None + + if not (self.stream2 is None): + self.stream2.release() + self.stream2 = None + + +# assign your custom media server to config with both adequate sources (for e.g. foo1.mp4 and foo2.mp4) +web.config["server"] = Custom_RTCServer( + source1="dance_videos/foo1.mp4", source2="dance_videos/foo2.mp4" +) + +# run this app on Uvicorn server at address http://localhost:8000/ +uvicorn.run(web(), host="localhost", port=8000) + +# close app safely +web.shutdown() +``` + +!!! success "On successfully running this code, the output stream will be displayed at address http://localhost:8000/ in Browser." + + +  \ No newline at end of file diff --git a/docs/help/webgear_rtc_faqs.md b/docs/help/webgear_rtc_faqs.md index fd4212254..ac468f546 100644 --- a/docs/help/webgear_rtc_faqs.md +++ b/docs/help/webgear_rtc_faqs.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -84,6 +84,13 @@ For accessing WebGear_RTC on different Client Devices on the network, use `"0.0.   +## How can I add CORS headers to WebGear_RTC? + +**Answer:** See [this usage example ➶](../../gears/webgear_rtc/advanced/#using-webgear_rtc-with-middlewares). + +  + + ## Can I change the default location? **Answer:** Yes, you can use WebGear_RTC's [`custom_data_location`](../../gears/webgear_rtc/params/#webgear_rtc-specific-attributes) attribute of `option` parameter in WebGear_RTC API, to change [default location](../../gears/webgear_rtc/overview/#default-location) to somewhere else. diff --git a/docs/help/writegear_ex.md b/docs/help/writegear_ex.md new file mode 100644 index 000000000..c505a55cb --- /dev/null +++ b/docs/help/writegear_ex.md @@ -0,0 +1,306 @@ + + + +# WriteGear Examples + +  + +## Using WriteGear's Compression Mode for YouTube-Live Streaming + +!!! new "New in v0.2.1" + This example was added in `v0.2.1`. + +!!! alert "This example assumes you already have a [**YouTube Account with Live-Streaming enabled**](https://support.google.com/youtube/answer/2474026#enable) for publishing video." + +!!! danger "Make sure to change [_YouTube-Live Stream Key_](https://support.google.com/youtube/answer/2907883#zippy=%2Cstart-live-streaming-now) with yours in following code before running!"
+ +```python +# import required libraries +from vidgear.gears import CamGear +from vidgear.gears import WriteGear +import cv2 + +# define video source +VIDEO_SOURCE = "/home/foo/foo.mp4" + +# Open stream +stream = CamGear(source=VIDEO_SOURCE, logging=True).start() + +# define required FFmpeg optimizing parameters for your writer +# [NOTE]: Added VIDEO_SOURCE as audio-source, since YouTube rejects audioless streams! +output_params = { + "-i": VIDEO_SOURCE, + "-acodec": "aac", + "-ar": 44100, + "-b:a": 712000, + "-vcodec": "libx264", + "-preset": "medium", + "-b:v": "4500k", + "-bufsize": "512k", + "-pix_fmt": "yuv420p", + "-f": "flv", +} + +# [WARNING] Change your YouTube-Live Stream Key here: +YOUTUBE_STREAM_KEY = "xxxx-xxxx-xxxx-xxxx-xxxx" + +# Define writer with defined parameters and +writer = WriteGear( + output_filename="rtmp://a.rtmp.youtube.com/live2/{}".format(YOUTUBE_STREAM_KEY), + logging=True, + **output_params +) + +# loop over +while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # write frame to writer + writer.write(frame) + +# safely close video stream +stream.stop() + +# safely close writer +writer.close() +``` + +  + + +## Using WriteGear's Compression Mode creating MP4 segments from a video stream + +!!! new "New in v0.2.1" + This example was added in `v0.2.1`. + +```python +# import required libraries +from vidgear.gears import VideoGear +from vidgear.gears import WriteGear +import cv2 + +# Open any video source `foo.mp4` +stream = VideoGear( + source="foo.mp4", logging=True +).start() + +# define required FFmpeg optimizing parameters for your writer +output_params = { + "-c:v": "libx264", + "-crf": 22, + "-map": 0, + "-segment_time": 9, + "-g": 9, + "-sc_threshold": 0, + "-force_key_frames": "expr:gte(t,n_forced*9)", + "-clones": ["-f", "segment"], +} + +# Define writer with defined parameters +writer = WriteGear(output_filename="output%03d.mp4", logging=True, **output_params) + +# loop over +while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # write frame to writer + writer.write(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + +# close output window +cv2.destroyAllWindows() + +# safely close video stream +stream.stop() + +# safely close writer +writer.close() +``` + +  + + +## Using WriteGear's Compression Mode to add external audio file input to video frames + +!!! new "New in v0.2.1" + This example was added in `v0.2.1`. + +!!! failure "Make sure this `-i` audio-source it compatible with provided video-source, otherwise you encounter multiple errors or no output at all." + +```python +# import required libraries +from vidgear.gears import CamGear +from vidgear.gears import WriteGear +import cv2 + +# open any valid video stream(for e.g `foo_video.mp4` file) +stream = CamGear(source="foo_video.mp4").start() + +# add various parameters, along with custom audio +stream_params = { + "-input_framerate": stream.framerate, # controlled framerate for audio-video sync !!! don't forget this line !!! 
+ "-i": "foo_audio.aac", # assigns input audio-source: "foo_audio.aac" +} + +# Define writer with defined parameters +writer = WriteGear(output_filename="Output.mp4", logging=True, **stream_params) + +# loop over +while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # write frame to writer + writer.write(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + +# close output window +cv2.destroyAllWindows() + +# safely close video stream +stream.stop() + +# safely close writer +writer.close() +``` + +  + + +## Using WriteGear with ROS (Robot Operating System) + +We will be using [`cv_bridge`](http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython) to convert OpenCV frames to ROS image messages and vice-versa. + +In this example, we'll create a node that listens to a ROS image message topic, converts the received image messages into OpenCV frames, draws a circle on them, and then processes these frames into a lossless compressed file format in real-time. + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. + +!!! note "This example is a vidgear implementation of this [wiki example](http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython)." + +```python +# import roslib +import roslib + +roslib.load_manifest("my_package") + +# import other required libraries +import sys +import rospy +import cv2 +from std_msgs.msg import String +from sensor_msgs.msg import Image +from cv_bridge import CvBridge, CvBridgeError +from vidgear.gears import WriteGear + +# custom subscriber class +class image_subscriber: + def __init__(self, output_filename="Output.mp4"): + # create CV bridge + self.bridge = CvBridge() + # define subscriber topic + self.image_sub = rospy.Subscriber("image_topic_sub", Image, self.callback) + # Define writer with default parameters + self.writer = WriteGear(output_filename=output_filename) + + def callback(self, data): + # convert received data to frame + try: + cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8") + except CvBridgeError as e: + print(e) + return + + # check if frame is valid + if not (cv_image is None): + + # {do something with the frame here} + + # add circle + (rows, cols, channels) = cv_image.shape + if cols > 60 and rows > 60: + cv2.circle(cv_image, (50, 50), 10, 255) + + # write frame to writer + self.writer.write(cv_image) + + def close(self): + # safely close video stream + self.writer.close() + + +def main(args): + # define subscriber with suitable output filename + # such as `Output.mp4` for saving output + ic = image_subscriber(output_filename="Output.mp4") + # initiate ROS node for subscriber + rospy.init_node("image_subscriber", anonymous=True) + try: + # run node + rospy.spin() + except KeyboardInterrupt: + print("Shutting down") + finally: + # close subscriber + ic.close() + + +if __name__ == "__main__": + main(sys.argv) +``` + +  \ No newline at end of file diff --git a/docs/help/writegear_faqs.md b/docs/help/writegear_faqs.md index 48824a720..bb2764b2c 100644 --- a/docs/help/writegear_faqs.md +++ b/docs/help/writegear_faqs.md @@ -3,7 +3,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License,
Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -39,10 +39,8 @@ limitations under the License. **Answer:** WriteGear will exit with `ValueError` if you feed frames of different dimensions or channels. -   - ## How to install and configure FFmpeg correctly for WriteGear on my machine? **Answer:** Follow these [Installation Instructions ➶](../../gears/writegear/compression/advanced/ffmpeg_install/) for its installation. @@ -109,205 +107,21 @@ limitations under the License. ## Is YouTube-Live Streaming possibe with WriteGear? -**Answer:** Yes, See example below: - -!!! new "New in v0.2.1" - This example was added in `v0.2.1`. - -!!! warning "This example assume you already have a [**YouTube Account with Live-Streaming enabled**](https://support.google.com/youtube/answer/2474026#enable) for publishing video." - -!!! danger "Make sure to change [_YouTube-Live Stream Key_](https://support.google.com/youtube/answer/2907883#zippy=%2Cstart-live-streaming-now) with yours in following code before running!" - -```python -# import required libraries -from vidgear.gears import CamGear -from vidgear.gears import WriteGear -import cv2 - -# define video source -VIDEO_SOURCE = "/home/foo/foo.mp4" - -# Open stream -stream = CamGear(source=VIDEO_SOURCE, logging=True).start() - -# define required FFmpeg optimizing parameters for your writer -# [NOTE]: Added VIDEO_SOURCE as audio-source, since YouTube rejects audioless streams! -output_params = { - "-i": VIDEO_SOURCE, - "-acodec": "aac", - "-ar": 44100, - "-b:a": 712000, - "-vcodec": "libx264", - "-preset": "medium", - "-b:v": "4500k", - "-bufsize": "512k", - "-pix_fmt": "yuv420p", - "-f": "flv", -} - -# [WARNING] Change your YouTube-Live Stream Key here: -YOUTUBE_STREAM_KEY = "xxxx-xxxx-xxxx-xxxx-xxxx" - -# Define writer with defined parameters and -writer = WriteGear( - output_filename="rtmp://a.rtmp.youtube.com/live2/{}".format(YOUTUBE_STREAM_KEY), - logging=True, - **output_params -) - -# loop over -while True: - - # read frames from stream - frame = stream.read() - - # check for frame if Nonetype - if frame is None: - break - - # {do something with the frame here} - - # write frame to writer - writer.write(frame) - -# safely close video stream -stream.stop() - -# safely close writer -writer.close() -``` +**Answer:** Yes, See [this bonus example ➶](../writegear_ex/#using-writegears-compression-mode-for-youtube-live-streaming).   ## How to create MP4 segments from a video stream with WriteGear? -**Answer:** See example below: - -!!! new "New in v0.2.1" - This example was added in `v0.2.1`. 
- -```python -# import required libraries -from vidgear.gears import VideoGear -from vidgear.gears import WriteGear -import cv2 - -# Open any video source `foo.mp4` -stream = VideoGear( - source="foo.mp4", logging=True -).start() - -# define required FFmpeg optimizing parameters for your writer -output_params = { - "-c:v": "libx264", - "-crf": 22, - "-map": 0, - "-segment_time": 9, - "-g": 9, - "-sc_threshold": 0, - "-force_key_frames": "expr:gte(t,n_forced*9)", - "-clones": ["-f", "segment"], -} - -# Define writer with defined parameters -writer = WriteGear(output_filename="output%03d.mp4", logging=True, **output_params) - -# loop over -while True: - - # read frames from stream - frame = stream.read() - - # check for frame if Nonetype - if frame is None: - break - - # {do something with the frame here} - - # write frame to writer - writer.write(frame) - - # Show output window - cv2.imshow("Output Frame", frame) - - # check for 'q' key if pressed - key = cv2.waitKey(1) & 0xFF - if key == ord("q"): - break - -# close output window -cv2.destroyAllWindows() - -# safely close video stream -stream.stop() - -# safely close writer -writer.close() -``` +**Answer:** See [this bonus example ➶](../writegear_ex/#using-writegears-compression-mode-creating-mp4-segments-from-a-video-stream).   ## How add external audio file input to video frames? -**Answer:** See example below: - -!!! new "New in v0.2.1" - This example was added in `v0.2.1`. - -!!! failure "Make sure this `-i` audio-source it compatible with provided video-source, otherwise you encounter multiple errors or no output at all." - -```python -# import required libraries -from vidgear.gears import CamGear -from vidgear.gears import WriteGear -import cv2 - -# open any valid video stream(for e.g `foo_video.mp4` file) -stream = CamGear(source="foo_video.mp4").start() - -# add various parameters, along with custom audio -stream_params = { - "-input_framerate": stream.framerate, # controlled framerate for audio-video sync !!! don't forget this line !!! - "-i": "foo_audio.aac", # assigns input audio-source: "foo_audio.aac" -} - -# Define writer with defined parameters -writer = WriteGear(output_filename="Output.mp4", logging=True, **stream_params) - -# loop over -while True: - - # read frames from stream - frame = stream.read() - - # check for frame if Nonetype - if frame is None: - break - - # {do something with the frame here} - - # write frame to writer - writer.write(frame) - - # Show output window - cv2.imshow("Output Frame", frame) - - # check for 'q' key if pressed - key = cv2.waitKey(1) & 0xFF - if key == ord("q"): - break - -# close output window -cv2.destroyAllWindows() - -# safely close video stream -stream.stop() - -# safely close writer -writer.close() -``` +**Answer:** See [this bonus example ➶](../writegear_ex/#using-writegears-compression-mode-to-add-external-audio-file-input-to-video-frames).   diff --git a/docs/index.md b/docs/index.md index e1a24f30f..d26ec494f 100644 --- a/docs/index.md +++ b/docs/index.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -28,9 +28,9 @@ limitations under the License.   
-> VidGear is a High-Performance **Video-Processing** Framework for building complex real-time media applications in python :fire: +> VidGear is a cross-platform High-Performance **Video-Processing** Framework for building complex real-time media applications in python :fire: -VidGear provides an easy-to-use, highly extensible, **Multi-Threaded + Asyncio Framework** on top of many state-of-the-art specialized libraries like *[OpenCV][opencv], [FFmpeg][ffmpeg], [ZeroMQ][zmq], [picamera][picamera], [starlette][starlette], [streamlink][streamlink], [pafy][pafy], [pyscreenshot][pyscreenshot], [aiortc][aiortc] and [python-mss][mss]* at its backend, and enable us to flexibly exploit their internal parameters and methods, while silently delivering robust error-handling and real-time performance ⚡️. +VidGear provides an easy-to-use, highly extensible, **[Multi-Threaded](bonus/TQM/#threaded-queue-mode) + [Asyncio](https://docs.python.org/3/library/asyncio.html) API Framework** on top of many state-of-the-art specialized libraries like *[OpenCV][opencv], [FFmpeg][ffmpeg], [ZeroMQ][zmq], [picamera][picamera], [starlette][starlette], [streamlink][streamlink], [pafy][pafy], [pyscreenshot][pyscreenshot], [aiortc][aiortc] and [python-mss][mss]* at its backend, and enable us to flexibly exploit their internal parameters and methods, while silently delivering robust error-handling and real-time performance ⚡️. > _"Write Less and Accomplish More"_ — VidGear's Motto @@ -40,13 +40,17 @@ VidGear focuses on simplicity, and thereby lets programmers and software develop ## Getting Started -- [x] If this is your first time using VidGear, head straight to the [Installation ➶](installation.md) to install VidGear. +!!! tip "In case you run into any problems, consult the [Help](help/get_help) section." -- [x] Once you have VidGear installed, **Checkout its Function-Specific [Gears ➶](gears.md)** +- [x] If this is your first time using VidGear, head straight to the [**Installation**](installation.md) to install VidGear. + +- [x] Once you have VidGear installed, check out its **[Function-Specific Gears](gears.md)**. + +- [x] Also, if you're already familiar with [**OpenCV**][opencv] library, then see **[Switching from OpenCV Library](switch_from_cv.md)**. + +!!! alert "If you're just getting started with OpenCV-Python programming, then refer to this [FAQ ➶](help/general_faqs/#im-new-to-python-programming-or-its-usage-in-opencv-library-how-to-use-vidgear-in-my-projects)" -- [x] Also, if you're already familar with [OpenCV][opencv] library, then see [Switching from OpenCV Library ➶](switch_from_cv.md) -- [x] Or, if you're just getting started with OpenCV with Python, then see [here ➶](../help/general_faqs/#im-new-to-python-programming-or-its-usage-in-computer-vision-how-to-use-vidgear-in-my-projects)   @@ -63,7 +67,7 @@ These Gears can be classified as follows: * [CamGear](gears/camgear/overview/): Multi-Threaded API targeting various IP-USB-Cameras/Network-Streams/Streaming-Sites-URLs. * [PiGear](gears/pigear/overview/): Multi-Threaded API targeting various Raspberry-Pi Camera Modules. * [ScreenGear](gears/screengear/overview/): Multi-Threaded API targeting ultra-fast Screencasting. -* [VideoGear](gears/videogear/overview/): Common Video-Capture API with internal [Video Stabilizer](gears/stabilizer/overview/) wrapper. +* [VideoGear](gears/videogear/overview/): Common Video-Capture API with internal [_Video Stabilizer_](gears/stabilizer/overview/) wrapper.
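+
+For a quick, illustrative taste of the common read-loop pattern shared by these VideoCapture Gears, here is a minimal sketch using VideoGear (the `foo.mp4` source below is just a placeholder; the same pattern applies to CamGear, PiGear and ScreenGear):
+
+```python
+# import required libraries
+from vidgear.gears import VideoGear
+import cv2
+
+# open any valid video source (e.g. a placeholder file "foo.mp4")
+stream = VideoGear(source="foo.mp4").start()
+
+# loop over frames until the source is exhausted
+while True:
+
+    # read frames from stream
+    frame = stream.read()
+
+    # check for frame if Nonetype
+    if frame is None:
+        break
+
+    # {do something with the frame here}
+
+    # Show output window
+    cv2.imshow("Output Frame", frame)
+
+    # check for 'q' key if pressed
+    if cv2.waitKey(1) & 0xFF == ord("q"):
+        break
+
+# close output window
+cv2.destroyAllWindows()
+
+# safely close video stream
+stream.stop()
+```
+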
#### VideoWriter Gears @@ -71,7 +75,7 @@ These Gears can be classified as follows: #### Streaming Gears -* [StreamGear](gears/streamgear/overview/): Handles Transcoding of High-Quality, Dynamic & Adaptive Streaming Formats. +* [StreamGear](gears/streamgear/introduction/): Handles Transcoding of High-Quality, Dynamic & Adaptive Streaming Formats. * **Asynchronous I/O Streaming Gear:** @@ -92,29 +96,29 @@ These Gears can be classified as follows: > Contributions are welcome, and greatly appreciated! -Please see our [Contribution Guidelines ➶](contribution.md) for more details. +Please see our [**Contribution Guidelines**](contribution.md) for more details.   ## Community Channel -If you've come up with some new idea, or looking for the fastest way troubleshoot your problems. Please checkout our [Gitter community channel ➶][gitter] +If you've come up with some new idea, or are looking for the fastest way to troubleshoot your problems, please check out our [**Gitter community channel ➶**][gitter]   ## Become a Stargazer -You can be a [Stargazer :star2:][stargazer] by starring us on Github, it helps us a lot and you're making it easier for others to find & trust this library. Thanks! +You can be a [**Stargazer :star2:**][stargazer] by starring us on Github, it helps us a lot and you're making it easier for others to find & trust this library. Thanks!   -## Support Us +## Donations -> VidGear relies on your support :heart: +> VidGear is free and open source and will always remain so. :heart: -Donations help keep VidGear's Open Source Development alive. No amount is too little, even the smallest contributions can make a huge difference. +It is (like all open source software) a labour of love and something I am doing with my own free time. If you would like to say thanks, please feel free to make a donation: - +   @@ -122,13 +126,22 @@ Donations help keep VidGear's Open Source Development alive. No amount is too li Here is a Bibtex entry you can use to cite this project in a publication: +[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4718616.svg)](https://doi.org/10.5281/zenodo.4718616) ```BibTeX -@misc{vidgear, - author = {Abhishek Thakur}, - title = {vidgear}, - howpublished = {\url{https://github.com/abhiTronix/vidgear}}, - year = {2019-2021} +@software{vidgear, + author = {Abhishek Thakur and + Christian Clauss and + Christian Hollinger and + Benjamin Lowe and + Mickaël Schoentgen and + Renaud Bouckenooghe}, + title = {abhiTronix/vidgear: VidGear v0.2.2}, + year = 2021, + publisher = {Zenodo}, + version = {vidgear-0.2.2}, + doi = {10.5281/zenodo.4718616}, + url = {https://doi.org/10.5281/zenodo.4718616} } ``` diff --git a/docs/installation.md b/docs/installation.md index 256ea7c3c..b77f53e44 100644 --- a/docs/installation.md +++ b/docs/installation.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
diff --git a/docs/installation/pip_install.md b/docs/installation/pip_install.md index 2155bfcd5..dfb8c2a3e 100644 --- a/docs/installation/pip_install.md +++ b/docs/installation/pip_install.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -21,85 +21,154 @@ limitations under the License. # Install using pip -> _Best option for quickly getting stable VidGear installed._ +> _Best option for easily getting stable VidGear installed._ ## Prerequisites -When installing VidGear with pip, you need to check manually if following dependencies are installed: +When installing VidGear with [pip](https://pip.pypa.io/en/stable/installing/), you need to check manually if following dependencies are installed: -### OpenCV -Must require OpenCV(3.0+) python binaries installed for all core functions. You easily install it directly via [pip](https://pip.pypa.io/en/stable/installing/): +???+ alert "Upgrade your `pip`" -??? tip "OpenCV installation from source" + It strongly advised to upgrade to latest `pip` before installing vidgear to avoid any undesired installation error(s). There are two mechanisms to upgrade `pip`: - You can also follow online tutorials for building & installing OpenCV on [Windows](https://www.learnopencv.com/install-opencv3-on-windows/), [Linux](https://www.pyimagesearch.com/2018/05/28/ubuntu-18-04-how-to-install-opencv/) and [Raspberry Pi](https://www.pyimagesearch.com/2018/09/26/install-opencv-4-on-your-raspberry-pi/) machines manually from its source. + 1. **`ensurepip`:** Python comes with an [`ensurepip`](https://docs.python.org/3/library/ensurepip.html#module-ensurepip) module[^1], which can easily upgrade/install `pip` in any Python environment. -```sh - pip install -U opencv-python -``` + === "Linux/MacOS" -### FFmpeg + ```sh + python -m ensurepip --upgrade + + ``` -Must require for the video compression and encoding compatibilities within [StreamGear](#streamgear) and [**Compression Mode**](../../gears/writegear/compression/overview/) in [WriteGear](#writegear) API. + === "Windows" -!!! tip "FFmpeg Installation" + ```sh + py -m ensurepip --upgrade + + ``` + 2. **`pip`:** Use can also use existing `pip` to upgrade itself: - Follow this dedicated [**FFmpeg Installation doc**](../../gears/writegear/compression/advanced/ffmpeg_install/) for its installation. + ??? info "Install `pip` if not present" -### Picamera + * Download the script, from https://bootstrap.pypa.io/get-pip.py. + * Open a terminal/command prompt, `cd` to the folder containing the `get-pip.py` file and run: -Must Required if you're using Raspberry Pi Camera Modules with its [PiGear](../../gears/pigear/overview/) API. You can easily install it via pip: + === "Linux/MacOS" + ```sh + python get-pip.py + + ``` -!!! warning "Make sure to [**enable Raspberry Pi hardware-specific settings**](https://picamera.readthedocs.io/en/release-1.13/quickstart.html) prior to using this library, otherwise it won't work." + === "Windows" -```sh - pip install picamera -``` + ```sh + py get-pip.py + + ``` + More details about this script can be found in [pypa/get-pip’s README](https://github.com/pypa/get-pip). -### Aiortc -Must Required only if you're using the [WebGear_RTC API](../../gears/webgear_rtc/overview/). 
You can easily install it via pip: + === "Linux/MacOS" -??? error "Microsoft Visual C++ 14.0 is required." - - Installing `aiortc` on windows requires Microsoft Build Tools for Visual C++ libraries installed. You can easily fix this error by installing any **ONE** of these choices: + ```sh + python -m pip install pip --upgrade + + ``` - !!! info "While the error is calling for VC++ 14.0 - but newer versions of Visual C++ libraries works as well." + === "Windows" - - Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=16). - - Alternative link to Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019). - - Offline installer: [vs_buildtools.exe](https://aka.ms/vs/16/release/vs_buildtools.exe) + ```sh + py -m pip install pip --upgrade + + ``` - Afterwards, Select: Workloads → Desktop development with C++, then for Individual Components, select only: +### Core Prerequisites - - [x] Windows 10 SDK - - [x] C++ x64/x86 build tools +* #### OpenCV - Finally, proceed installing `aiortc` via pip. + Must require OpenCV(3.0+) python binaries installed for all core functions. You easily install it directly via [pip](https://pypi.org/project/opencv-python/): -```sh - pip install aiortc -``` + ??? tip "OpenCV installation from source" -### Uvloop + You can also follow online tutorials for building & installing OpenCV on [Windows](https://www.learnopencv.com/install-opencv3-on-windows/), [Linux](https://www.pyimagesearch.com/2018/05/28/ubuntu-18-04-how-to-install-opencv/), [MacOS](https://www.pyimagesearch.com/2018/08/17/install-opencv-4-on-macos/) and [Raspberry Pi](https://www.pyimagesearch.com/2018/09/26/install-opencv-4-on-your-raspberry-pi/) machines manually from its source. -Must required only if you're using the [NetGear_Async](../../gears/netgear_async/overview/) API on UNIX machines for maximum performance. You can easily install it via pip: + :warning: Make sure not to install both *pip* and *source* version together. Otherwise installation will fail to work! -!!! error "uvloop is **[NOT yet supported on Windows Machines](https://github.com/MagicStack/uvloop/issues/14).**" -!!! warning "Python-3.6 legacies support [**dropped in version `>=1.15.0`**](https://github.com/MagicStack/uvloop/releases/tag/v0.15.0). Kindly install previous `0.14.0` version instead." + ??? info "Other OpenCV binaries" -```sh - pip install uvloop -``` + OpenCV mainainers also provide additional binaries via pip that contains both main modules and contrib/extra modules [`opencv-contrib-python`](https://pypi.org/project/opencv-contrib-python/), and for server (headless) environments like [`opencv-python-headless`](https://pypi.org/project/opencv-python-headless/) and [`opencv-contrib-python-headless`](https://pypi.org/project/opencv-contrib-python-headless/). You can also install ==any one of them== in similar manner. More information can be found [here](https://github.com/opencv/opencv-python#installation-and-usage). + + + ```sh + pip install opencv-python + ``` + + +### API Specific Prerequisites + +* #### FFmpeg + + Require only for the video compression and encoding compatibility within [**StreamGear API**](../../gears/streamgear/overview/) API and [**WriteGear API's Compression Mode**](../../gears/writegear/compression/overview/). + + !!! 
tip "FFmpeg Installation" + + Follow this dedicated [**FFmpeg Installation doc**](../../gears/writegear/compression/advanced/ffmpeg_install/) for its installation. + +* #### Picamera + + Required only if you're using Raspberry Pi Camera Modules with its [**PiGear**](../../gears/pigear/overview/) API. You can easily install it via pip: + + + !!! warning "Make sure to [**enable Raspberry Pi hardware-specific settings**](https://picamera.readthedocs.io/en/release-1.13/quickstart.html) prior to using this library, otherwise it won't work." + + ```sh + pip install picamera + ``` + +* #### Aiortc + + Required only if you're using the [**WebGear_RTC**](../../gears/webgear_rtc/overview/) API. You can easily install it via pip: + + ??? error "Microsoft Visual C++ 14.0 is required." + + Installing `aiortc` on windows may sometimes require Microsoft Build Tools for Visual C++ libraries installed. You can easily fix this error by installing any **ONE** of these choices: + + !!! info "While the error is calling for VC++ 14.0 - but newer versions of Visual C++ libraries works as well." + + - Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=16). + - Alternative link to Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019). + - Offline installer: [vs_buildtools.exe](https://aka.ms/vs/16/release/vs_buildtools.exe) + + Afterwards, Select: Workloads → Desktop development with C++, then for Individual Components, select only: + + - [x] Windows 10 SDK + - [x] C++ x64/x86 build tools + + Finally, proceed installing `aiortc` via pip. + + ```sh + pip install aiortc + ``` + +* #### Uvloop + + Required only if you're using the [**NetGear_Async**](../../gears/netgear_async/overview/) API on UNIX machines for maximum performance. You can easily install it via pip: + + !!! error "uvloop is **[NOT yet supported on Windows Machines](https://github.com/MagicStack/uvloop/issues/14).**" + !!! warning "Python-3.6 legacies support [**dropped in version `>=1.15.0`**](https://github.com/MagicStack/uvloop/releases/tag/v0.15.0). Kindly install previous `0.14.0` version instead." + + ```sh + pip install uvloop + ```   ## Installation -Installation is as simple as: +**Installation is as simple as:** ??? warning "Windows Installation" @@ -108,45 +177,101 @@ Installation is as simple as: A quick solution may be to preface every Python command with `python -m` like this: ```sh - python -m pip install vidgear + python -m pip install vidgear + + # or with asyncio support + python -m pip install vidgear[asyncio] + ``` + + And, If you don't have the privileges to the directory you're installing package. Then use `--user` flag, that makes pip install packages in your home directory instead: + + ``` sh + python -m pip install --user vidgear - # or with asyncio support - python -m pip install vidgear[asyncio] + # or with asyncio support + python -m pip install --user vidgear[asyncio] ``` - If you don't have the privileges to the directory you're installing package. Then use `--user` flag, that makes pip install packages in your home directory instead: + Or, If you're using `py` as alias for installed python, then: ``` sh - python -m pip install --user vidgear + py -m pip install --user vidgear - # or with asyncio support - python -m pip install --user vidgear[asyncio] + # or with asyncio support + py -m pip install --user vidgear[asyncio] ``` +??? 
experiment "Installing vidgear with only selective dependencies" + + Starting with version `v0.2.2`, you can now run any VidGear API by installing only just specific dependencies required by the API in use(except for some Core dependencies). + + This is useful when you want to manually review, select and install minimal API-specific dependencies on bare-minimum vidgear from scratch on your system: + + - To install bare-minimum vidgear without any dependencies, use [`--no-deps`](https://pip.pypa.io/en/stable/cli/pip_install/#cmdoption-no-deps) pip flag as follows: + + ```sh + # Install stable release without any dependencies + pip install --no-deps --upgrade vidgear + ``` + + - Then, you must install all **Core dependencies**: + + ```sh + # Install core dependencies + pip install cython, numpy, requests, tqdm, colorlog + + # Install opencv(only if not installed previously) + pip install opencv-python + ``` + + - Finally, manually install your **API-specific dependencies** as required by your API(in use): + + + | APIs | Dependencies | + |:---:|:---| + | CamGear | `pafy`, `youtube-dl`, `streamlink` | + | PiGear | `picamera` | + | VideoGear | - | + | ScreenGear | `mss`, `pyscreenshot`, `Pillow` | + | WriteGear | **FFmpeg:** See [this doc ➶](https://abhitronix.github.io/vidgear/v0.2.2-dev/gears/writegear/compression/advanced/ffmpeg_install/#ffmpeg-installation-instructions) | + | StreamGear | **FFmpeg:** See [this doc ➶](https://abhitronix.github.io/vidgear/v0.2.2-dev/gears/streamgear/ffmpeg_install/#ffmpeg-installation-instructions) | + | NetGear | `pyzmq`, `simplejpeg` | + | WebGear | `starlette`, `jinja2`, `uvicorn`, `simplejpeg` | + | WebGear_RTC | `aiortc`, `starlette`, `jinja2`, `uvicorn` | + | NetGear_Async | `pyzmq`, `msgpack`, `msgpack_numpy`, `uvloop` | + + ```sh + # Just copy-&-paste from above table + pip install + ``` + + ```sh - # Install stable release - pip install vidgear +# Install latest stable release +pip install -U vidgear - # Or Install stable release with Asyncio support - pip install vidgear[asyncio] +# Or Install latest stable release with Asyncio support +pip install -U vidgear[asyncio] ``` **And if you prefer to install VidGear directly from the repository:** ```sh - pip install git+git://github.com/abhiTronix/vidgear@master#egg=vidgear +pip install git+git://github.com/abhiTronix/vidgear@master#egg=vidgear - # or with asyncio support - pip install git+git://github.com/abhiTronix/vidgear@master#egg=vidgear[asyncio] +# or with asyncio support +pip install git+git://github.com/abhiTronix/vidgear@master#egg=vidgear[asyncio] ``` **Or you can also download its wheel (`.whl`) package from our repository's [releases](https://github.com/abhiTronix/vidgear/releases) section, and thereby can be installed as follows:** ```sh - pip install vidgear-0.2.1-py3-none-any.whl +pip install vidgear-0.2.2-py3-none-any.whl - # or with asyncio support - pip install vidgear-0.2.1-py3-none-any.whl[asyncio] +# or with asyncio support +pip install vidgear-0.2.2-py3-none-any.whl[asyncio] ```   + +[^1]: :warning: The `ensurepip` module is missing/disabled on Ubuntu. Use second method. 
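+
+As a quick sanity check after any of the above installation methods (a suggested step, assuming `pip` itself is available on your PATH), you can ask pip to show the installed package metadata:
+
+```sh
+# confirm vidgear is installed and inspect its version and dependencies
+pip show vidgear
+```
+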
\ No newline at end of file diff --git a/docs/installation/source_install.md b/docs/installation/source_install.md index 0ad39d706..9f1e2cee3 100644 --- a/docs/installation/source_install.md +++ b/docs/installation/source_install.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -26,54 +26,114 @@ limitations under the License. ## Prerequisites -When installing VidGear from source, FFmpeg and Aiortc is the only dependency you need to install manually: +When installing VidGear from source, FFmpeg and Aiortc are the only two API specific dependencies you need to install manually: !!! question "What about rest of the dependencies?" - Any other python dependencies will be automatically installed based on your OS specifications. + Any other python dependencies _(Core/API specific)_ will be automatically installed based on your OS specifications. + -### FFmpeg +???+ alert "Upgrade your `pip`" -Must require for the video compression and encoding compatibilities within [StreamGear](#streamgear) and [**Compression Mode**](../../gears/writegear/compression/overview/) in [WriteGear](#writegear) API. + It strongly advised to upgrade to latest `pip` before installing vidgear to avoid any undesired installation error(s). There are two mechanisms to upgrade `pip`: -!!! tip "FFmpeg Installation" + 1. **`ensurepip`:** Python comes with an [`ensurepip`](https://docs.python.org/3/library/ensurepip.html#module-ensurepip) module[^1], which can easily upgrade/install `pip` in any Python environment. - Follow this dedicated [**FFmpeg Installation doc**](../../gears/writegear/compression/advanced/ffmpeg_install/) for its installation. + === "Linux/MacOS" + ```sh + python -m ensurepip --upgrade + + ``` -### Aiortc + === "Windows" -Must Required only if you're using the [WebGear_RTC API](../../gears/webgear_rtc/overview/). You can easily install it via pip: + ```sh + py -m ensurepip --upgrade + + ``` + 2. **`pip`:** Use can also use existing `pip` to upgrade itself: -??? error "Microsoft Visual C++ 14.0 is required." - - Installing `aiortc` on windows requires Microsoft Build Tools for Visual C++ libraries installed. You can easily fix this error by installing any **ONE** of these choices: + ??? info "Install `pip` if not present" - !!! info "While the error is calling for VC++ 14.0 - but newer versions of Visual C++ libraries works as well." + * Download the script, from https://bootstrap.pypa.io/get-pip.py. + * Open a terminal/command prompt, `cd` to the folder containing the `get-pip.py` file and run: - - Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=16). - - Alternative link to Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019). - - Offline installer: [vs_buildtools.exe](https://aka.ms/vs/16/release/vs_buildtools.exe) + === "Linux/MacOS" - Afterwards, Select: Workloads → Desktop development with C++, then for Individual Components, select only: + ```sh + python get-pip.py + + ``` - - [x] Windows 10 SDK - - [x] C++ x64/x86 build tools + === "Windows" - Finally, proceed installing `aiortc` via pip. 
+ ```sh + py get-pip.py + + ``` + More details about this script can be found in [pypa/get-pip’s README](https://github.com/pypa/get-pip). -```sh - pip install aiortc -``` + + === "Linux/MacOS" + + ```sh + python -m pip install pip --upgrade + + ``` + + === "Windows" + + ```sh + py -m pip install pip --upgrade + + ``` + +### API Specific Prerequisites + +* #### FFmpeg + + Require only for the video compression and encoding compatibility within [**StreamGear API**](../../gears/streamgear/overview/) API and [**WriteGear API's Compression Mode**](../../gears/writegear/compression/overview/). + + !!! tip "FFmpeg Installation" + + Follow this dedicated [**FFmpeg Installation doc**](../../gears/writegear/compression/advanced/ffmpeg_install/) for its installation. +* #### Aiortc + + Required only if you're using the [**WebGear_RTC**](../../gears/webgear_rtc/overview/) API. You can easily install it via pip: + + ??? error "Microsoft Visual C++ 14.0 is required." + + Installing `aiortc` on windows may sometimes requires Microsoft Build Tools for Visual C++ libraries installed. You can easily fix this error by installing any **ONE** of these choices: + + !!! info "While the error is calling for VC++ 14.0 - but newer versions of Visual C++ libraries works as well." + + - Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=16). + - Alternative link to Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019). + - Offline installer: [vs_buildtools.exe](https://aka.ms/vs/16/release/vs_buildtools.exe) + + Afterwards, Select: Workloads → Desktop development with C++, then for Individual Components, select only: + + - [x] Windows 10 SDK + - [x] C++ x64/x86 build tools + + Finally, proceed installing `aiortc` via pip. + + ```sh + pip install aiortc + ``` +   ## Installation -If you want to just install and try out the checkout the latest beta [`testing`](https://github.com/abhiTronix/vidgear/tree/testing) branch , you can do so with the following command. This can be useful if you want to provide feedback for a new feature or want to confirm if a bug you have encountered is fixed in the `testing` branch. +**If you want to just install and try out the checkout the latest beta [`testing`](https://github.com/abhiTronix/vidgear/tree/testing) branch , you can do so with the following command:** -!!! warning "DO NOT clone or install `development` branch, as it is not tested with CI environments and is possibly very unstable or unusable." +!!! info "This can be useful if you want to provide feedback for a new feature or want to confirm if a bug you have encountered is fixed in the `testing` branch." + +!!! warning "DO NOT clone or install `development` branch unless advised, as it is not tested with CI environments and possibly very unstable or unusable." ??? 
tip "Windows Installation" @@ -81,7 +141,7 @@ If you want to just install and try out the checkout the latest beta [`testing`] * Use following commands to clone and install VidGear: - ```sh + ```sh # clone the repository and get inside git clone https://github.com/abhiTronix/vidgear.git && cd vidgear @@ -93,7 +153,73 @@ If you want to just install and try out the checkout the latest beta [`testing`] # OR install with asyncio support python - m pip install .[asyncio] - ``` + ``` + + * If you're using `py` as alias for installed python, then: + + ``` sh + # clone the repository and get inside + git clone https://github.com/abhiTronix/vidgear.git && cd vidgear + + # checkout the latest testing branch + git checkout testing + + # install normally + python -m pip install . + + # OR install with asyncio support + python - m pip install .[asyncio] + ``` + +??? experiment "Installing vidgear with only selective dependencies" + + Starting with version `v0.2.2`, you can now run any VidGear API by installing only just specific dependencies required by the API in use(except for some Core dependencies). + + This is useful when you want to manually review, select and install minimal API-specific dependencies on bare-minimum vidgear from scratch on your system: + + - To install bare-minimum vidgear without any dependencies, use [`--no-deps`](https://pip.pypa.io/en/stable/cli/pip_install/#cmdoption-no-deps) pip flag as follows: + + ```sh + # clone the repository and get inside + git clone https://github.com/abhiTronix/vidgear.git && cd vidgear + + # checkout the latest testing branch + git checkout testing + + # Install without any dependencies + pip install --no-deps . + ``` + + - Then, you must install all **Core dependencies**: + + ```sh + # Install core dependencies + pip install cython, numpy, requests, tqdm, colorlog + + # Install opencv(only if not installed previously) + pip install opencv-python + ``` + + - Finally, manually install your **API-specific dependencies** as required by your API(in use): + + + | APIs | Dependencies | + |:---:|:---| + | CamGear | `pafy`, `youtube-dl`, `streamlink` | + | PiGear | `picamera` | + | VideoGear | - | + | ScreenGear | `mss`, `pyscreenshot`, `Pillow` | + | WriteGear | **FFmpeg:** See [this doc ➶](https://abhitronix.github.io/vidgear/v0.2.2-dev/gears/writegear/compression/advanced/ffmpeg_install/#ffmpeg-installation-instructions) | + | StreamGear | **FFmpeg:** See [this doc ➶](https://abhitronix.github.io/vidgear/v0.2.2-dev/gears/streamgear/ffmpeg_install/#ffmpeg-installation-instructions) | + | NetGear | `pyzmq`, `simplejpeg` | + | WebGear | `starlette`, `jinja2`, `uvicorn`, `simplejpeg` | + | WebGear_RTC | `aiortc`, `starlette`, `jinja2`, `uvicorn` | + | NetGear_Async | `pyzmq`, `msgpack`, `msgpack_numpy`, `uvloop` | + + ```sh + # Just copy-&-paste from above table + pip install + ``` ```sh # clone the repository and get inside @@ -119,3 +245,6 @@ pip install git+git://github.com/abhiTronix/vidgear@testing#egg=vidgear[asyncio] ```   + + +[^1]: The `ensurepip` module was added to the Python standard library in Python 3.4. 
diff --git a/docs/license.md b/docs/license.md index 6e972a8b2..b65ef570a 100644 --- a/docs/license.md +++ b/docs/license.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -24,7 +24,7 @@ This library is released under the **[Apache 2.0 License](https://github.com/abh ## Copyright Notice - Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) + Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/docs/overrides/404.html b/docs/overrides/404.html index be90f793f..ab2502822 100644 --- a/docs/overrides/404.html +++ b/docs/overrides/404.html @@ -1,8 +1,62 @@ +{% extends "main.html" %} {% block content %} +

+404
+UH OH! You're lost.
+The page you are looking for does not exist. How you got here is a mystery. But you can click the button below to go back to the homepage.
+{% endblock %}
-{% extends "main.html" %}
-{% block content %}
-404
-Lost
-Look like you're lost
-the page you are looking for is not available!
-{% endblock %} +--> \ No newline at end of file diff --git a/docs/overrides/assets/images/bidir_async.png b/docs/overrides/assets/images/bidir_async.png new file mode 100644 index 000000000..8f158c65b Binary files /dev/null and b/docs/overrides/assets/images/bidir_async.png differ diff --git a/docs/overrides/assets/images/branch_flow.svg b/docs/overrides/assets/images/branch_flow.svg new file mode 100644 index 000000000..0331653c4 --- /dev/null +++ b/docs/overrides/assets/images/branch_flow.svg @@ -0,0 +1,44 @@ +DevelopmentUnstableTestingCI Tested (latest)MasterStableReleasecommitsPull-Request (PR)CI test (passed)PR commits (external)merge \ No newline at end of file diff --git a/docs/overrides/assets/images/multi_client.png b/docs/overrides/assets/images/multi_client.png old mode 100755 new mode 100644 index 7ddd323d2..9cfe8eea7 Binary files a/docs/overrides/assets/images/multi_client.png and b/docs/overrides/assets/images/multi_client.png differ diff --git a/docs/overrides/assets/images/multi_server.png b/docs/overrides/assets/images/multi_server.png old mode 100755 new mode 100644 index 243d9c0bf..fba54eecd Binary files a/docs/overrides/assets/images/multi_server.png and b/docs/overrides/assets/images/multi_server.png differ diff --git a/docs/overrides/assets/images/ssh_tunnel.png b/docs/overrides/assets/images/ssh_tunnel.png new file mode 100644 index 000000000..26945c041 Binary files /dev/null and b/docs/overrides/assets/images/ssh_tunnel.png differ diff --git a/docs/overrides/assets/images/ssh_tunnel_ex.png b/docs/overrides/assets/images/ssh_tunnel_ex.png new file mode 100644 index 000000000..423fc7c64 Binary files /dev/null and b/docs/overrides/assets/images/ssh_tunnel_ex.png differ diff --git a/docs/overrides/assets/images/stream_tweak.png b/docs/overrides/assets/images/stream_tweak.png new file mode 100644 index 000000000..6a956fd32 Binary files /dev/null and b/docs/overrides/assets/images/stream_tweak.png differ diff --git a/docs/overrides/assets/javascripts/extra.js b/docs/overrides/assets/javascripts/extra.js index 7d1c71690..a09882de3 100755 --- a/docs/overrides/assets/javascripts/extra.js +++ b/docs/overrides/assets/javascripts/extra.js @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -18,8 +18,9 @@ limitations under the License. =============================================== */ -var player = new Clappr.Player({ - source: 'https://rawcdn.githack.com/abhiTronix/vidgear-docs-additionals/fbcf0377b171b777db5e0b3b939138df35a90676/streamgear_video_chunks/streamgear_dash.mpd', +// DASH StreamGear demo +var player_dash = new Clappr.Player({ + source: 'https://rawcdn.githack.com/abhiTronix/vidgear-docs-additionals/dca65250d95eeeb87d594686c2f2c2208a015486/streamgear_video_segments/DASH/streamgear_dash.mpd', plugins: [DashShakaPlayback, LevelSelector], shakaConfiguration: { streaming: { @@ -29,13 +30,61 @@ var player = new Clappr.Player({ shakaOnBeforeLoad: function(shaka_player) { // shaka_player.getNetworkingEngine().registerRequestFilter() ... 
}, + levelSelectorConfig: { + title: 'Quality', + labels: { + 2: 'High', // 500kbps + 1: 'Med', // 240kbps + 0: 'Low', // 120kbps + }, + labelCallback: function(playbackLevel, customLabel) { + return customLabel; // High 720p + } + }, + width: '100%', + height: '100%', + parentId: '#player_dash', + poster: 'https://rawcdn.githack.com/abhiTronix/vidgear-docs-additionals/dca65250d95eeeb87d594686c2f2c2208a015486/streamgear_video_segments/DASH/hd_thumbnail.jpg', + preload: 'metadata', +}); + +// HLS StremGear demo +var player_hls = new Clappr.Player({ + source: 'https://rawcdn.githack.com/abhiTronix/vidgear-docs-additionals/abc0c193ab26e21f97fa30c9267de6beb8a72295/streamgear_video_segments/HLS/streamgear_hls.m3u8', + plugins: [HlsjsPlayback, LevelSelector], + hlsUseNextLevel: false, + hlsMinimumDvrSize: 60, + hlsRecoverAttempts: 16, + hlsPlayback: { + preload: true, + customListeners: [], + }, + playback: { + extrapolatedWindowNumSegments: 2, + triggerFatalErrorOnResourceDenied: false, + hlsjsConfig: { + // hls.js specific options + }, + }, + levelSelectorConfig: { + title: 'Quality', + labels: { + 2: 'High', // 500kbps + 1: 'Med', // 240kbps + 0: 'Low', // 120kbps + }, + labelCallback: function(playbackLevel, customLabel) { + return customLabel; // High 720p + } + }, width: '100%', height: '100%', - parentId: '#player', - poster: 'https://rawcdn.githack.com/abhiTronix/vidgear-docs-additionals/674250e6c0387d0d0528406eec35bc580ceafee3/streamgear_video_chunks/hd_thumbnail.jpg', + parentId: '#player_hls', + poster: 'https://rawcdn.githack.com/abhiTronix/vidgear-docs-additionals/abc0c193ab26e21f97fa30c9267de6beb8a72295/streamgear_video_segments/HLS/hd_thumbnail.jpg', preload: 'metadata', }); +// DASH Stabilizer demo var player_stab = new Clappr.Player({ source: 'https://rawcdn.githack.com/abhiTronix/vidgear-docs-additionals/fbcf0377b171b777db5e0b3b939138df35a90676/stabilizer_video_chunks/stabilizer_dash.mpd', plugins: [DashShakaPlayback], @@ -50,6 +99,11 @@ var player_stab = new Clappr.Player({ width: '100%', height: '100%', parentId: '#player_stab', - poster: 'https://rawcdn.githack.com/abhiTronix/vidgear-docs-additionals/674250e6c0387d0d0528406eec35bc580ceafee3/stabilizer_video_chunks/hd_thumbnail.png', + poster: 'https://rawcdn.githack.com/abhiTronix/vidgear-docs-additionals/94bf767c28bf2fe61b9c327625af8e22745f9fdf/stabilizer_video_chunks/hd_thumbnail_2.png', preload: 'metadata', }); + +// gitter sidecard +((window.gitter = {}).chat = {}).options = { + room: 'vidgear/community' +}; \ No newline at end of file diff --git a/docs/overrides/assets/stylesheets/custom.css b/docs/overrides/assets/stylesheets/custom.css index 7972c0fb3..a04895b69 100755 --- a/docs/overrides/assets/stylesheets/custom.css +++ b/docs/overrides/assets/stylesheets/custom.css @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -19,115 +19,274 @@ limitations under the License. 
*/ :root { - --md-admonition-icon--new: url('data:image/svg+xml;charset=utf-8,') + --md-admonition-icon--new: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M13 2V3H12V9H11V10H9V11H8V12H7V13H5V12H4V11H3V9H2V15H3V16H4V17H5V18H6V22H8V21H7V20H8V19H9V18H10V19H11V22H13V21H12V17H13V16H14V15H15V12H16V13H17V11H15V9H20V8H17V7H22V3H21V2M14 3H15V4H14Z' /%3E%3C/svg%3E"); + --md-admonition-icon--alert: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M6,6.9L3.87,4.78L5.28,3.37L7.4,5.5L6,6.9M13,1V4H11V1H13M20.13,4.78L18,6.9L16.6,5.5L18.72,3.37L20.13,4.78M4.5,10.5V12.5H1.5V10.5H4.5M19.5,10.5H22.5V12.5H19.5V10.5M6,20H18A2,2 0 0,1 20,22H4A2,2 0 0,1 6,20M12,5A6,6 0 0,1 18,11V19H6V11A6,6 0 0,1 12,5Z' /%3E%3C/svg%3E"); + --md-admonition-icon--xquote: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M20 2H4C2.9 2 2 2.9 2 4V16C2 17.1 2.9 18 4 18H8V21C8 21.6 8.4 22 9 22H9.5C9.7 22 10 21.9 10.2 21.7L13.9 18H20C21.1 18 22 17.1 22 16V4C22 2.9 21.1 2 20 2M11 13H7V8.8L8.3 6H10.3L8.9 9H11V13M17 13H13V8.8L14.3 6H16.3L14.9 9H17V13Z' /%3E%3C/svg%3E"); + --md-admonition-icon--xwarning: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath d='M23,12L20.56,9.22L20.9,5.54L17.29,4.72L15.4,1.54L12,3L8.6,1.54L6.71,4.72L3.1,5.53L3.44,9.21L1,12L3.44,14.78L3.1,18.47L6.71,19.29L8.6,22.47L12,21L15.4,22.46L17.29,19.28L20.9,18.46L20.56,14.78L23,12M13,17H11V15H13V17M13,13H11V7H13V13Z' /%3E%3C/svg%3E"); + --md-admonition-icon--xdanger: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M12,2A9,9 0 0,0 3,11C3,14.03 4.53,16.82 7,18.47V22H9V19H11V22H13V19H15V22H17V18.46C19.47,16.81 21,14 21,11A9,9 0 0,0 12,2M8,11A2,2 0 0,1 10,13A2,2 0 0,1 8,15A2,2 0 0,1 6,13A2,2 0 0,1 8,11M16,11A2,2 0 0,1 18,13A2,2 0 0,1 16,15A2,2 0 0,1 14,13A2,2 0 0,1 16,11M12,14L13.5,17H10.5L12,14Z' /%3E%3C/svg%3E"); + --md-admonition-icon--xtip: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 
24 24'%3E%3Cpath fill='%23000000' d='M12,6A6,6 0 0,1 18,12C18,14.22 16.79,16.16 15,17.2V19A1,1 0 0,1 14,20H10A1,1 0 0,1 9,19V17.2C7.21,16.16 6,14.22 6,12A6,6 0 0,1 12,6M14,21V22A1,1 0 0,1 13,23H11A1,1 0 0,1 10,22V21H14M20,11H23V13H20V11M1,11H4V13H1V11M13,1V4H11V1H13M4.92,3.5L7.05,5.64L5.63,7.05L3.5,4.93L4.92,3.5M16.95,5.63L19.07,3.5L20.5,4.93L18.37,7.05L16.95,5.63Z' /%3E%3C/svg%3E"); + --md-admonition-icon--xfail: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M8.27,3L3,8.27V15.73L8.27,21H15.73L21,15.73V8.27L15.73,3M8.41,7L12,10.59L15.59,7L17,8.41L13.41,12L17,15.59L15.59,17L12,13.41L8.41,17L7,15.59L10.59,12L7,8.41' /%3E%3C/svg%3E"); + --md-admonition-icon--xsuccess: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M13.13 22.19L11.5 18.36C13.07 17.78 14.54 17 15.9 16.09L13.13 22.19M5.64 12.5L1.81 10.87L7.91 8.1C7 9.46 6.22 10.93 5.64 12.5M21.61 2.39C21.61 2.39 16.66 .269 11 5.93C8.81 8.12 7.5 10.53 6.65 12.64C6.37 13.39 6.56 14.21 7.11 14.77L9.24 16.89C9.79 17.45 10.61 17.63 11.36 17.35C13.5 16.53 15.88 15.19 18.07 13C23.73 7.34 21.61 2.39 21.61 2.39M14.54 9.46C13.76 8.68 13.76 7.41 14.54 6.63S16.59 5.85 17.37 6.63C18.14 7.41 18.15 8.68 17.37 9.46C16.59 10.24 15.32 10.24 14.54 9.46M8.88 16.53L7.47 15.12L8.88 16.53M6.24 22L9.88 18.36C9.54 18.27 9.21 18.12 8.91 17.91L4.83 22H6.24M2 22H3.41L8.18 17.24L6.76 15.83L2 20.59V22M2 19.17L6.09 15.09C5.88 14.79 5.73 14.47 5.64 14.12L2 17.76V19.17Z' /%3E%3C/svg%3E"); + --md-admonition-icon--xexample: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M5,9.5L7.5,14H2.5L5,9.5M3,4H7V8H3V4M5,20A2,2 0 0,0 7,18A2,2 0 0,0 5,16A2,2 0 0,0 3,18A2,2 0 0,0 5,20M9,5V7H21V5H9M9,19H21V17H9V19M9,13H21V11H9V13Z' /%3E%3C/svg%3E"); + --md-admonition-icon--xquestion: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M20 4H18V3H20.5C20.78 3 21 3.22 21 3.5V5.5C21 5.78 20.78 6 20.5 6H20V7H19V5H20V4M19 9H20V8H19V9M17 3H16V7H17V3M23 15V18C23 18.55 22.55 19 22 19H21V20C21 21.11 20.11 22 19 22H5C3.9 22 3 21.11 3 20V19H2C1.45 19 1 18.55 1 18V15C1 14.45 1.45 14 2 14H3C3 10.13 6.13 7 10 7H11V5.73C10.4 5.39 10 4.74 10 4C10 2.9 10.9 2 12 2S14 2.9 14 4C14 4.74 13.6 5.39 13 5.73V7H14C14.34 7 14.67 7.03 15 7.08V10H19.74C20.53 11.13 21 12.5 21 14H22C22.55 14 23 14.45 23 15M10 15.5C10 14.12 8.88 13 7.5 13S5 14.12 5 15.5 6.12 18 7.5 18 10 16.88 10 15.5M19 15.5C19 14.12 17.88 13 16.5 13S14 14.12 14 15.5 15.12 18 16.5 18 19 16.88 19 
15.5M17 8H16V9H17V8Z' /%3E%3C/svg%3E"); + --md-admonition-icon--xbug: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M13 2V7.08A5.47 5.47 0 0 0 12 7A5.47 5.47 0 0 0 11 7.08V2M16.9 15A5 5 0 0 1 16.73 15.55L20 17.42V22H18V18.58L15.74 17.29A4.94 4.94 0 0 1 8.26 17.29L6 18.58V22H4V17.42L7.27 15.55A5 5 0 0 1 7.1 15H5.3L2.55 16.83L1.45 15.17L4.7 13H7.1A5 5 0 0 1 7.37 12.12L5.81 11.12L2.24 12L1.76 10L6.19 8.92L8.5 10.45A5 5 0 0 1 15.5 10.45L17.77 8.92L22.24 10L21.76 12L18.19 11.11L16.63 12.11A5 5 0 0 1 16.9 13H19.3L22.55 15.16L21.45 16.82L18.7 15M11 14A1 1 0 1 0 10 15A1 1 0 0 0 11 14M15 14A1 1 0 1 0 14 15A1 1 0 0 0 15 14Z' /%3E%3C/svg%3E"); + --md-admonition-icon--xabstract: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M3,3H21V5H3V3M3,7H15V9H3V7M3,11H21V13H3V11M3,15H15V17H3V15M3,19H21V21H3V19Z' /%3E%3C/svg%3E"); + --md-admonition-icon--xnote: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M20.71,7.04C20.37,7.38 20.04,7.71 20.03,8.04C20,8.36 20.34,8.69 20.66,9C21.14,9.5 21.61,9.95 21.59,10.44C21.57,10.93 21.06,11.44 20.55,11.94L16.42,16.08L15,14.66L19.25,10.42L18.29,9.46L16.87,10.87L13.12,7.12L16.96,3.29C17.35,2.9 18,2.9 18.37,3.29L20.71,5.63C21.1,6 21.1,6.65 20.71,7.04M3,17.25L12.56,7.68L16.31,11.43L6.75,21H3V17.25Z' /%3E%3C/svg%3E"); + --md-admonition-icon--xinfo: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M18 2H12V9L9.5 7.5L7 9V2H6C4.9 2 4 2.9 4 4V20C4 21.1 4.9 22 6 22H18C19.1 22 20 21.1 20 20V4C20 2.89 19.1 2 18 2M17.68 18.41C17.57 18.5 16.47 19.25 16.05 19.5C15.63 19.79 14 20.72 14.26 18.92C14.89 15.28 16.11 13.12 14.65 14.06C14.27 14.29 14.05 14.43 13.91 14.5C13.78 14.61 13.79 14.6 13.68 14.41S13.53 14.23 13.67 14.13C13.67 14.13 15.9 12.34 16.72 12.28C17.5 12.21 17.31 13.17 17.24 13.61C16.78 15.46 15.94 18.15 16.07 18.54C16.18 18.93 17 18.31 17.44 18C17.44 18 17.5 17.93 17.61 18.05C17.72 18.22 17.83 18.3 17.68 18.41M16.97 11.06C16.4 11.06 15.94 10.6 15.94 10.03C15.94 9.46 16.4 9 16.97 9C17.54 9 18 9.46 18 10.03C18 10.6 17.54 11.06 16.97 11.06Z' /%3E%3C/svg%3E"); + --md-admonition-icon--xadvance: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath 
d='M7,2V4H8V18A4,4 0 0,0 12,22A4,4 0 0,0 16,18V4H17V2H7M11,16C10.4,16 10,15.6 10,15C10,14.4 10.4,14 11,14C11.6,14 12,14.4 12,15C12,15.6 11.6,16 11,16M13,12C12.4,12 12,11.6 12,11C12,10.4 12.4,10 13,10C13.6,10 14,10.4 14,11C14,11.6 13.6,12 13,12M14,7H10V4H14V7Z' /%3E%3C/svg%3E"); } + +.md-typeset .admonition.advance, +.md-typeset details.advance { + border-color: rgb(27, 77, 62); +} + .md-typeset .admonition.new, .md-typeset details.new { - border-color: rgb(43, 155, 70); + border-color: rgb(57,255,20); +} + +.md-typeset .admonition.alert, +.md-typeset details.alert { + border-color: rgb(255, 0, 255); } + .md-typeset .new > .admonition-title, .md-typeset .new > summary { - background-color: rgba(43, 155, 70, 0.1); - border-color: rgb(43, 155, 70); + background-color: rgb(57,255,20,0.1); + border-color: rgb(57,255,20); } + .md-typeset .new > .admonition-title::before, .md-typeset .new > summary::before { - background-color: rgb(43, 155, 70); - -webkit-mask-image: var(--md-admonition-icon--new); - mask-image: var(--md-admonition-icon--new); + background-color: rgb(57,255,20); + -webkit-mask-image: var(--md-admonition-icon--new); + mask-image: var(--md-admonition-icon--new); +} + +.md-typeset .alert > .admonition-title, +.md-typeset .alert > summary { + background-color: rgba(255, 0, 255, 0.1); + border-color: rgb(255, 0, 255); } -code { -word-break: keep-all !important; +.md-typeset .alert > .admonition-title::before, +.md-typeset .alert > summary::before { + background-color: rgb(255, 0, 255); + -webkit-mask-image: var(--md-admonition-icon--alert); + mask-image: var(--md-admonition-icon--alert); } -td { -vertical-align: middle !important; +.md-typeset .advance > .admonition-title, +.md-typeset .advance > summary, +.md-typeset .experiment > .admonition-title, +.md-typeset .experiment > summary { + background-color: rgba(0, 57, 166, 0.1); + border-color: rgb(0, 57, 166); } -th { - font-weight: bold !important; - text-align: center !important; +.md-typeset .advance > .admonition-title::before, +.md-typeset .advance > summary::before, +.md-typeset .experiment > .admonition-title::before, +.md-typeset .experiment > summary::before { + background-color: rgb(0, 57, 166); + -webkit-mask-image: var(--md-admonition-icon--xadvance); + mask-image: var(--md-admonition-icon--xadvance); +} + +.md-typeset .attention > .admonition-title::before, +.md-typeset .attention > summary::before, +.md-typeset .caution > .admonition-title::before, +.md-typeset .caution > summary::before, +.md-typeset .warning > .admonition-title::before, +.md-typeset .warning > summary::before { + -webkit-mask-image: var(--md-admonition-icon--xwarning); + mask-image: var(--md-admonition-icon--xwarning); +} + +.md-typeset .hint > .admonition-title::before, +.md-typeset .hint > summary::before, +.md-typeset .important > .admonition-title::before, +.md-typeset .important > summary::before, +.md-typeset .tip > .admonition-title::before, +.md-typeset .tip > summary::before { + -webkit-mask-image: var(--md-admonition-icon--xtip) !important; + mask-image: var(--md-admonition-icon--xtip) !important; +} + +.md-typeset .info > .admonition-title::before, +.md-typeset .info > summary::before, +.md-typeset .todo > .admonition-title::before, +.md-typeset .todo > summary::before { + -webkit-mask-image: var(--md-admonition-icon--xinfo); + mask-image: var(--md-admonition-icon--xinfo); +} + +.md-typeset .danger > .admonition-title::before, +.md-typeset .danger > summary::before, +.md-typeset .error > .admonition-title::before, +.md-typeset .error > 
summary::before { + -webkit-mask-image: var(--md-admonition-icon--xdanger); + mask-image: var(--md-admonition-icon--xdanger); +} + +.md-typeset .note > .admonition-title::before, +.md-typeset .note > summary::before { + -webkit-mask-image: var(--md-admonition-icon--xnote); + mask-image: var(--md-admonition-icon--xnote); +} + +.md-typeset .abstract > .admonition-title::before, +.md-typeset .abstract > summary::before, +.md-typeset .summary > .admonition-title::before, +.md-typeset .summary > summary::before, +.md-typeset .tldr > .admonition-title::before, +.md-typeset .tldr > summary::before { + -webkit-mask-image: var(--md-admonition-icon--xabstract); + mask-image: var(--md-admonition-icon--xabstract); +} + +.md-typeset .faq > .admonition-title::before, +.md-typeset .faq > summary::before, +.md-typeset .help > .admonition-title::before, +.md-typeset .help > summary::before, +.md-typeset .question > .admonition-title::before, +.md-typeset .question > summary::before { + -webkit-mask-image: var(--md-admonition-icon--xquestion); + mask-image: var(--md-admonition-icon--xquestion); +} + +.md-typeset .check > .admonition-title::before, +.md-typeset .check > summary::before, +.md-typeset .done > .admonition-title::before, +.md-typeset .done > summary::before, +.md-typeset .success > .admonition-title::before, +.md-typeset .success > summary::before { + -webkit-mask-image: var(--md-admonition-icon--xsuccess); + mask-image: var(--md-admonition-icon--xsuccess); +} + +.md-typeset .fail > .admonition-title::before, +.md-typeset .fail > summary::before, +.md-typeset .failure > .admonition-title::before, +.md-typeset .failure > summary::before, +.md-typeset .missing > .admonition-title::before, +.md-typeset .missing > summary::before { + -webkit-mask-image: var(--md-admonition-icon--xfail); + mask-image: var(--md-admonition-icon--xfail); +} + +.md-typeset .bug > .admonition-title::before, +.md-typeset .bug > summary::before { + -webkit-mask-image: var(--md-admonition-icon--xbug); + mask-image: var(--md-admonition-icon--xbug); +} + +.md-typeset .example > .admonition-title::before, +.md-typeset .example > summary::before { + -webkit-mask-image: var(--md-admonition-icon--xexample); + mask-image: var(--md-admonition-icon--xexample); +} + +.md-typeset .cite > .admonition-title::before, +.md-typeset .cite > summary::before, +.md-typeset .quote > .admonition-title::before, +.md-typeset .quote > summary::before { + -webkit-mask-image: var(--md-admonition-icon--xquote); + mask-image: var(--md-admonition-icon--xquote); } .md-nav__item--active > .md-nav__link { - font-weight: bold; + font-weight: bold; } .center { - display: block; - margin-left: auto; - margin-right: auto; - width: 80%; + display: block; + margin-left: auto; + margin-right: auto; + width: 80%; +} + +/* Handles Gitter Sidecard UI */ +.gitter-open-chat-button { + background-color: var(--md-primary-fg-color) !important; + font-family: inherit !important; + font-size: 12px; } .center-small { - display: block; - margin-left: auto; - margin-right: auto; - width: 90%; + display: block; + margin-left: auto; + margin-right: auto; + width: 90%; } .md-tabs__link--active { - font-weight: bold; + font-weight: bold; } .md-nav__title { - font-size: 1rem !important; + font-size: 1rem !important; } .md-version__link { - overflow: hidden; + overflow: hidden; } -.md-version__current{ - text-transform: uppercase; - font-weight: bolder; +.md-version__current { + text-transform: uppercase; + font-weight: bolder; } .md-typeset .task-list-control 
.task-list-indicator::before { - background-color: #FF0000; + background-color: #ff0000; + -webkit-mask-image: var(--md-admonition-icon--failure); + mask-image: var(--md-admonition-icon--failure); } blockquote { - padding: 0.5em 10px; - quotes: "\201C""\201D""\2018""\2019"; + padding: 0.5em 10px; + quotes: "\201C""\201D""\2018""\2019"; } + blockquote:before { - color: #ccc; - content: open-quote; - font-size: 4em; - line-height: 0.1em; - margin-right: 0.25em; - vertical-align: -0.4em; + color: #ccc; + content: open-quote; + font-size: 4em; + line-height: 0.1em; + margin-right: 0.25em; + vertical-align: -0.4em; } + blockquote:after { - visibility: hidden; - content: close-quote; + visibility: hidden; + content: close-quote; } + blockquote p { - display: inline; + display: inline; } - /* Handles Responive Video tags (from bootstrap) */ + .video { - padding: 0; - margin: 0; - list-style: none; - display: flex; - justify-content: center; + padding: 0; + margin: 0; + list-style: none; + display: flex; + justify-content: center; } + .embed-responsive { - position: relative; - display: block; - width: 100%; - padding: 0; - overflow: hidden; + position: relative; + display: block; + width: 100%; + padding: 0; + overflow: hidden; } .embed-responsive::before { - display: block; - content: ""; + display: block; + content: ""; } .embed-responsive .embed-responsive-item, @@ -135,49 +294,149 @@ blockquote p { .embed-responsive embed, .embed-responsive object, .embed-responsive video { - position: absolute; - top: 0; - bottom: 0; - left: 0; - width: 100%; - height: 100%; - border: 0; + position: absolute; + top: 0; + bottom: 0; + left: 0; + width: 100%; + height: 100%; + border: 0; } .embed-responsive-21by9::before { - padding-top: 42.857143%; + padding-top: 42.857143%; } .embed-responsive-16by9::before { - padding-top: 56.25%; + padding-top: 56.25%; } .embed-responsive-4by3::before { - padding-top: 75%; + padding-top: 75%; } .embed-responsive-1by1::before { - padding-top: 100%; + padding-top: 100%; } - /* ends */ - footer.sponsorship { - text-align: center; +footer.sponsorship { + text-align: center; } - footer.sponsorship hr { - display: inline-block; - width: px2rem(32px); - margin: 0 px2rem(14px); - vertical-align: middle; - border-bottom: 2px solid var(--md-default-fg-color--lighter); + +footer.sponsorship hr { + display: inline-block; + width: 2rem; + margin: 0.875rem; + vertical-align: middle; + border-bottom: 2px solid var(--md-default-fg-color--lighter); } - footer.sponsorship:hover hr { - border-color: var(--md-accent-fg-color); + +footer.sponsorship:hover hr { + border-color: var(--md-accent-fg-color); } - footer.sponsorship:not(:hover) .twemoji.heart-throb-hover svg { - color: var(--md-default-fg-color--lighter) !important; + +footer.sponsorship:not(:hover) .twemoji.heart-throb-hover svg { + color: var(--md-default-fg-color--lighter) !important; } + .doc-heading { - padding-top: 50px; + padding-top: 50px; } + +.btn { + z-index: 1; + overflow: hidden; + background: transparent; + position: relative; + padding: 8px 50px; + border-radius: 30px; + cursor: pointer; + font-size: 1em; + letter-spacing: 2px; + transition: 0.2s ease; + font-weight: bold; + margin: 5px 0px; +} + +.btn.bcolor { + border: 4px solid var(--md-typeset-a-color); + color: var(--blue); +} + +.btn.bcolor:before { + content: ""; + position: absolute; + left: 0; + top: 0; + width: 0%; + height: 100%; + background: var(--md-typeset-a-color); + z-index: -1; + transition: 0.2s ease; +} + +.btn.bcolor:hover { + color: var(--white); + 
background: var(--md-typeset-a-color); + transition: 0.2s ease; +} + +.btn.bcolor:hover:before { + width: 100%; +} + +main #g6219 { + transform-origin: 85px 4px; + animation: an1 12s 0.5s infinite ease-out; +} + +@keyframes an1 { + 0% { + transform: rotate(0); + } + + 5% { + transform: rotate(3deg); + } + + 15% { + transform: rotate(-2.5deg); + } + + 25% { + transform: rotate(2deg); + } + + 35% { + transform: rotate(-1.5deg); + } + + 45% { + transform: rotate(1deg); + } + + 55% { + transform: rotate(-1.5deg); + } + + 65% { + transform: rotate(2deg); + } + + 75% { + transform: rotate(-2deg); + } + + 85% { + transform: rotate(2.5deg); + } + + 95% { + transform: rotate(-3deg); + } + + 100% { + transform: rotate(0); + } +} \ No newline at end of file diff --git a/docs/overrides/main.html b/docs/overrides/main.html index f42807eba..1c8d80468 100644 --- a/docs/overrides/main.html +++ b/docs/overrides/main.html @@ -1,23 +1,3 @@ - - {% extends "base.html" %} {% block extrahead %} {% set title = config.site_name %} @@ -46,4 +26,26 @@ -{% endblock %} \ No newline at end of file + + +{% endblock %} + + \ No newline at end of file diff --git a/docs/switch_from_cv.md b/docs/switch_from_cv.md index 5ce5dc5fc..e01aa7b2c 100644 --- a/docs/switch_from_cv.md +++ b/docs/switch_from_cv.md @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -26,21 +26,30 @@ limitations under the License. 
# Switching from OpenCV Library -Switching OpenCV with VidGear APIs is usually a fairly painless process, and will just require changing a few lines in your python script. +Switching OpenCV with VidGear APIs is a fairly painless process, and will just require changing a few lines in your python script. !!! abstract "This document is intended to software developers who want to migrate their python code from OpenCV Library to VidGear APIs." !!! warning "Prior knowledge of Python or OpenCV won't be covered in this guide. Proficiency with OpenCV-Python _(Python API for OpenCV)_ is a must in order understand this document." -!!! tip "If you're just getting started with OpenCV-Python, then see [here ➶](../help/general_faqs/#im-new-to-python-programming-or-its-usage-in-computer-vision-how-to-use-vidgear-in-my-projects)" +!!! tip "If you're just getting started with OpenCV-Python programming, then refer to this [FAQ ➶](../help/general_faqs/#im-new-to-python-programming-or-its-usage-in-opencv-library-how-to-use-vidgear-in-my-projects)"   ## Why VidGear is better than OpenCV? -!!! info "Learn about OpenCV see [here➶](https://software.intel.com/content/www/us/en/develop/articles/what-is-opencv.html)" +!!! info "Learn more about OpenCV [here ➶](https://software.intel.com/content/www/us/en/develop/articles/what-is-opencv.html)" -VidGear employs OpenCV at its backend and enhances its existing capabilities even further by introducing many new state-of-the-art features on top of it like multi-threading for performance, real-time Stabilization, inherit support for multiple devices and screen-casting, live network-streaming, plus [way much more ➶](../gears). 
Vidgear offers all this while maintaining the same standard OpenCV-Python _(Python API for OpenCV)_ coding syntax for all of its APIs, thereby making it even easier to implement Complex OpenCV applications in fewer lines of python code. +VidGear employs OpenCV at its backend and enhances its existing capabilities even further by introducing many new state-of-the-art functionalities such as: + +- [x] Accelerated [Multi-Threaded](../bonus/TQM/#what-does-threaded-queue-mode-exactly-do) Performance. +- [x] Out-of-the-box support for OpenCV APIs. +- [x] Real-time [Stabilization](../gears/stabilizer/overview/) ready. +- [x] Lossless hardware enabled video [encoding](../gears/writegear/compression/usage/#using-compression-mode-with-hardware-encoders) and [transcoding](../gears/streamgear/rtfm/usage/#usage-with-hardware-video-encoder). +- [x] Inherited multi-backend support for various video sources and devices. +- [x] Screen-casting, Multi-bitrate network-streaming, and [way much more ➶](../gears) + +Vidgear offers all this at once while maintaining the same standard OpenCV-Python _(Python API for OpenCV)_ coding syntax for all of its APIs, thereby making it even easier to implement complex real-time OpenCV applications in python code without changing things much.   @@ -143,7 +152,7 @@ Let's breakdown a few noteworthy difference in both syntaxes: | Terminating | `#!python stream.release()` | `#!python stream.stop()` | -!!! success "Now, checkout other [VideoCapture Gears ➶](../gears/#a-videocapture-gears)" +!!! success "Now checkout other [VideoCapture Gears ➶](../gears/#a-videocapture-gears)"   @@ -267,6 +276,6 @@ Let's breakdown a few noteworthy difference in both syntaxes: | Writing frames | `#!python writer.write(frame)` | `#!python writer.write(frame)` | | Terminating | `#!python writer.release()` | `#!python writer.close()` | -!!! success "Now, checkout more examples of WriteGear API _(with FFmpeg backend)_ [here ➶](../gears/writegear/compression/usage/)" +!!! success "Now checkout more about WriteGear API [here ➶](../gears/writegear/introduction/)"   \ No newline at end of file diff --git a/mkdocs.yml b/mkdocs.yml index 3ea353ec9..b819890b4 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -1,4 +1,4 @@ -# Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +# Copyright (c) 2019 Abhishek Thakur(@abhiTronix) # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
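For quick reference, a minimal, hedged migration sketch in the spirit of the switch_from_cv.md comparison tables above (it is an illustration, not part of the patch; the webcam at source `0` and the display loop are assumptions, and only the standard CamGear `start()`/`read()`/`stop()` calls are used):

```python
# Minimal migration sketch: the cv2.VideoCapture loop from the left-hand
# column of the comparison tables maps onto CamGear as shown below.
import cv2
from vidgear.gears import CamGear

# OpenCV: stream = cv2.VideoCapture(0)
stream = CamGear(source=0).start()  # threaded capture of the same source

while True:
    # OpenCV: (grabbed, frame) = stream.read()
    frame = stream.read()  # returns None when the stream ends
    if frame is None:
        break
    cv2.imshow("Output Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()
# OpenCV: stream.release()
stream.stop()
```

The same `read()`/`stop()` loop carries over unchanged to the other VideoCapture Gears referenced in those tables.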
@@ -24,27 +24,23 @@ repo_name: abhiTronix/vidgear repo_url: https://github.com/abhiTronix/vidgear edit_uri: "" -# Google analytics -google_analytics: ['UA-131929464-1', 'abhitronix.github.io'] - # Copyright -copyright: Copyright © 2019 - 2021 Abhishek Thakur(@abhiTronix) +copyright: Copyright © 2019 Abhishek Thakur(@abhiTronix) # Configuration theme: name: material custom_dir: docs/overrides - # Don't include MkDocs' JavaScript - include_search_page: false - search_index_only: true - # Default values, taken from mkdocs_theme.yml language: en features: - header.autohide - navigation.tabs - navigation.top + - search.suggest + - search.highlight + - search.share palette: # Light mode - media: "(prefers-color-scheme: light)" @@ -57,14 +53,14 @@ theme: # Dark mode - media: "(prefers-color-scheme: dark)" scheme: slate - primary: red - accent: red + primary: deep orange + accent: orange toggle: icon: material/weather-night name: Switch to light mode font: - text: Nunito - code: IBM Plex + text: Muli + code: Fira Code icon: logo: logo logo: assets/images/logo.svg @@ -103,7 +99,9 @@ extra: manifest: site.webmanifest version: provider: mike - + analytics: # Google analytics + provider: google + property: UA-131929464-1 extra_css: - assets/stylesheets/custom.css @@ -118,8 +116,8 @@ markdown_extensions: - attr_list - def_list - footnotes - - meta - md_in_html + - meta - toc: permalink: ⚓ slugify: !!python/name:pymdownx.slugs.uslugify @@ -150,6 +148,8 @@ markdown_extensions: - pymdownx.tasklist: custom_checkbox: true - pymdownx.tilde + - pymdownx.striphtml: + strip_comments: true # Page tree nav: @@ -210,20 +210,26 @@ nav: - References: bonus/reference/writegear.md - FAQs: help/writegear_faqs.md - StreamGear: - - Overview: gears/streamgear/overview.md - - Usage Examples: gears/streamgear/usage.md - - Advanced: - - FFmpeg Installation: gears/streamgear/ffmpeg_install.md + - Introduction: gears/streamgear/introduction.md + - Single-Source Mode: + - Overview: gears/streamgear/ssm/overview.md + - Usage Examples: gears/streamgear/ssm/usage.md + - Real-time Frames Mode: + - Overview: gears/streamgear/rtfm/overview.md + - Usage Examples: gears/streamgear/rtfm/usage.md + - Extras: + - FFmpeg Installation: gears/streamgear/ffmpeg_install.md - Parameters: gears/streamgear/params.md - References: bonus/reference/streamgear.md - FAQs: help/streamgear_faqs.md - NetGear: - Overview: gears/netgear/overview.md - Usage Examples: gears/netgear/usage.md - - Advanced Usage: + - Advanced Usages: - Multi-Servers Mode: gears/netgear/advanced/multi_server.md - Multi-Clients Mode: gears/netgear/advanced/multi_client.md - Bidirectional Mode: gears/netgear/advanced/bidirectional_mode.md + - SSH Tunneling Mode: gears/netgear/advanced/ssh_tunnel.md - Secure Mode: gears/netgear/advanced/secure_mode.md - Frame Compression: gears/netgear/advanced/compression.md - Parameters: gears/netgear/params.md @@ -246,6 +252,8 @@ nav: - NetGear_Async: - Overview: gears/netgear_async/overview.md - Usage Examples: gears/netgear_async/usage.md + - Advanced Usages: + - Bidirectional Mode: gears/netgear_async/advanced/bidirectional_mode.md - Parameters: gears/netgear_async/params.md - References: bonus/reference/netgear_async.md - FAQs: help/netgear_async_faqs.md @@ -255,7 +263,7 @@ nav: - Parameters: gears/stabilizer/params.md - References: bonus/reference/stabilizer.md - FAQs: help/stabilizer_faqs.md - - Bonus: + - References: - API References: - vidgear.gears: - CamGear API: bonus/reference/camgear.md @@ -280,7 +288,6 @@ nav: - Getting Help: 
help/get_help.md - Frequently Asked Questions: - General FAQs: help/general_faqs.md - - Stabilizer Class FAQs: help/stabilizer_faqs.md - CamGear FAQs: help/camgear_faqs.md - PiGear FAQs: help/pigear_faqs.md - VideoGear FAQs: help/videogear_faqs.md @@ -291,3 +298,16 @@ nav: - WebGear FAQs: help/webgear_faqs.md - WebGear_RTC FAQs: help/webgear_rtc_faqs.md - NetGear_Async FAQs: help/netgear_async_faqs.md + - Stabilizer Class FAQs: help/stabilizer_faqs.md + - Bonus Examples: + - CamGear Examples: help/camgear_ex.md + - PiGear Examples: help/pigear_ex.md + - VideoGear Examples: help/videogear_ex.md + - ScreenGear Examples: help/screengear_ex.md + - WriteGear Examples: help/writegear_ex.md + - StreamGear Examples: help/streamgear_ex.md + - NetGear Examples: help/netgear_ex.md + - WebGear Examples: help/webgear_ex.md + - WebGear_RTC Examples: help/webgear_rtc_ex.md + - NetGear_Async Examples: help/netgear_async_ex.md + - Stabilizer Class Examples: help/stabilizer_ex.md \ No newline at end of file diff --git a/pytest.ini b/pytest.ini index ad0cc59c3..915b55546 100644 --- a/pytest.ini +++ b/pytest.ini @@ -1,4 +1,4 @@ -# Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +# Copyright (c) 2019 Abhishek Thakur(@abhiTronix) # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/scripts/bash/install_opencv.sh b/scripts/bash/install_opencv.sh index ccdb7a9d5..fccf20a92 100644 --- a/scripts/bash/install_opencv.sh +++ b/scripts/bash/install_opencv.sh @@ -1,6 +1,6 @@ #!/bin/sh -# Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +# Copyright (c) 2019 Abhishek Thakur(@abhiTronix) # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/scripts/bash/prepare_dataset.sh b/scripts/bash/prepare_dataset.sh index 7b3a31934..ee83101d0 100644 --- a/scripts/bash/prepare_dataset.sh +++ b/scripts/bash/prepare_dataset.sh @@ -1,6 +1,6 @@ #!/bin/sh -# Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +# Copyright (c) 2019 Abhishek Thakur(@abhiTronix) # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -19,6 +19,7 @@ TMPFOLDER=$(python -c 'import tempfile; print(tempfile.gettempdir())') # Creating necessary directories mkdir -p "$TMPFOLDER"/temp_mpd # MPD assets temp path +mkdir -p "$TMPFOLDER"/temp_m3u8 # M3U8 assets temp path mkdir -p "$TMPFOLDER"/temp_write # For testing WriteGear Assets. mkdir -p "$TMPFOLDER"/temp_ffmpeg # For downloading FFmpeg Static Binary Assets. mkdir -p "$TMPFOLDER"/Downloads diff --git a/setup.cfg b/setup.cfg index af5dc27dd..ea5b40312 100644 --- a/setup.cfg +++ b/setup.cfg @@ -2,7 +2,7 @@ # This includes the license file(s) in the wheel. 
# https://wheel.readthedocs.io/en/stable/user_guide.html#including-license-files-in-the-generated-wheel-file license_files = LICENSE -description-file = README.md +description_file = README.md [bdist_wheel] # This flag says to generate wheels that support both Python 2 and Python diff --git a/setup.py b/setup.py index 501c78a02..69e8c1714 100644 --- a/setup.py +++ b/setup.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -24,7 +24,7 @@ import setuptools import urllib.request -from pkg_resources import parse_version +from distutils.version import LooseVersion from distutils.util import convert_path from setuptools import setup @@ -40,10 +40,10 @@ def test_opencv(): import cv2 # check whether OpenCV Binaries are 3.x+ - if parse_version(cv2.__version__) < parse_version("3"): + if LooseVersion(cv2.__version__) < LooseVersion("3"): raise ImportError( "Incompatible (< 3.0) OpenCV version-{} Installation found on this machine!".format( - parse_version(cv2.__version__) + LooseVersion(cv2.__version__) ) ) except ImportError: @@ -59,8 +59,8 @@ def latest_version(package_name): try: response = urllib.request.urlopen(urllib.request.Request(url), timeout=1) data = json.load(response) - versions = data["releases"].keys() - versions = sorted(versions) + versions = list(data["releases"].keys()) + versions.sort(key=LooseVersion) return ">={}".format(versions[-1]) except: pass @@ -75,7 +75,7 @@ def latest_version(package_name): with open("README.md", "r", encoding="utf-8") as fh: long_description = fh.read() long_description = long_description.replace( # patch for images - "docs/overrides/assets", "https://abhitronix.github.io/vidgear/assets" + "docs/overrides/assets", "https://abhitronix.github.io/vidgear/latest/assets" ) # patch for unicodes long_description = long_description.replace("➶", ">>") @@ -90,15 +90,15 @@ def latest_version(package_name): author="Abhishek Thakur", install_requires=[ "pafy{}".format(latest_version("pafy")), + "youtube-dl{}".format(latest_version("youtube-dl")), # pafy backend "mss{}".format(latest_version("mss")), + "cython", # helper for numpy install "numpy", - "youtube-dl{}".format(latest_version("youtube-dl")), - "streamlink{}".format(latest_version("streamlink")), - "requests{}".format(latest_version("requests")), + "streamlink", + "requests", "pyzmq{}".format(latest_version("pyzmq")), - "simplejpeg".format(latest_version("simplejpeg")), + "simplejpeg{}".format(latest_version("simplejpeg")), "colorlog", - "colorama", "tqdm", "Pillow", "pyscreenshot{}".format(latest_version("pyscreenshot")), @@ -112,21 +112,16 @@ def latest_version(package_name): extras_require={ "asyncio": [ "starlette{}".format(latest_version("starlette")), - "aiofiles", "jinja2", - "aiohttp", "uvicorn{}".format(latest_version("uvicorn")), - "msgpack_numpy", + "msgpack{}".format(latest_version("msgpack")), + "msgpack_numpy{}".format(latest_version("msgpack_numpy")), + "aiortc{}".format(latest_version("aiortc")), ] - + ( - ["aiortc{}".format(latest_version("aiortc"))] - if (platform.system() != "Windows") - else [] - ) + ( ( ["uvloop{}".format(latest_version("uvloop"))] - if sys.version_info[:2] >= (3, 7) + if sys.version_info[:2] >= (3, 7) # dropped support for 3.6.x legacies else 
["uvloop==0.14.0"] ) if (platform.system() != "Windows") @@ -166,8 +161,8 @@ def latest_version(package_name): "Topic :: Multimedia :: Video", "Topic :: Scientific/Engineering", "Intended Audience :: Developers", - 'Intended Audience :: Science/Research', - 'Intended Audience :: Education', + "Intended Audience :: Science/Research", + "Intended Audience :: Education", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3.6", "Programming Language :: Python :: 3.7", diff --git a/vidgear/gears/__init__.py b/vidgear/gears/__init__.py index 766af43ce..f7a36e768 100644 --- a/vidgear/gears/__init__.py +++ b/vidgear/gears/__init__.py @@ -1,8 +1,168 @@ # import the necessary packages -from .pigear import PiGear +import sys +import types +import logging +import importlib +from distutils.version import LooseVersion + +# define custom logger +FORMAT = "%(name)s :: %(levelname)s :: %(message)s" +logging.basicConfig(format=FORMAT) +logger = logging.getLogger("VidGear CORE") +logger.propagate = False + + +def get_module_version(module=None): + """ + ## get_module_version + + Retrieves version of specified module + + Parameters: + name (ModuleType): module of datatype `ModuleType`. + + **Returns:** version of specified module as string + """ + # check if module type is valid + assert not (module is None) and isinstance( + module, types.ModuleType + ), "[VidGear CORE:ERROR] :: Invalid module!" + + # get version from attribute + version = getattr(module, "__version__", None) + # retry if failed + if version is None: + # some modules uses a capitalized attribute name + version = getattr(module, "__VERSION__", None) + # raise if still failed + if version is None: + raise ImportError( + "[VidGear CORE:ERROR] :: Can't determine version for module: `{}`!".format( + module.__name__ + ) + ) + return str(version) + + +def import_core_dependency( + name, pkg_name=None, custom_message=None, version=None, mode="gte" +): + """ + ## import_core_dependency + + Imports specified core dependency. By default(`error = raise`), if a dependency is missing, + an ImportError with a meaningful message will be raised. Also, If a dependency is present, + but version is different than specified, an error is raised. + + Parameters: + name (string): name of dependency to be imported. + pkg_name (string): (Optional) package name of dependency(if different `pip` name). Otherwise `name` will be used. + custom_message (string): (Optional) custom Import error message to be raised or logged. + version (string): (Optional) required minimum/maximum version of the dependency to be imported. + mode (boolean): (Optional) Possible values "gte"(greater then equal), "lte"(less then equal), "exact"(exact). Default is "gte". + + **Returns:** `None` + """ + # check specified parameters + assert name and isinstance( + name, str + ), "[VidGear CORE:ERROR] :: Kindly provide name of the dependency." + + # extract name in case of relative import + sub_class = "" + name = name.strip() + if name.startswith("from"): + name = name.split(" ") + name, sub_class = (name[1].strip(), name[-1].strip()) + + # check mode of operation + assert mode in ["gte", "lte", "exact"], "[VidGear CORE:ERROR] :: Invalid mode!" + + # specify package name of dependency(if defined). Otherwise use name + install_name = pkg_name if not (pkg_name is None) else name + + # create message + msg = ( + custom_message + if not (custom_message is None) + else "Failed to find its core dependency '{}'. 
Install it with `pip install {}` command.".format( + name, install_name + ) + ) + # try importing dependency + try: + module = importlib.import_module(name) + if sub_class: + module = getattr(module, sub_class) + except ImportError: + # raise + raise ImportError(msg) from None + + # check if minimum required version + if not (version) is None: + # Handle submodules + parent_module = name.split(".")[0] + if parent_module != name: + # grab parent module + module_to_get = sys.modules[parent_module] + else: + module_to_get = module + + # extract version + module_version = get_module_version(module_to_get) + # verify + if mode == "exact": + if LooseVersion(module_version) != LooseVersion(version): + # create message + msg = "Unsupported version '{}' found. Vidgear requires '{}' dependency with exact version '{}' installed!".format( + module_version, parent_module, version + ) + # raise + raise ImportError(msg) + elif mode == "lte": + if LooseVersion(module_version) > LooseVersion(version): + # create message + msg = "Unsupported version '{}' found. Vidgear requires '{}' dependency installed with older version '{}' or smaller!".format( + module_version, parent_module, version + ) + # raise + raise ImportError(msg) + else: + if LooseVersion(module_version) < LooseVersion(version): + # create message + msg = "Unsupported version '{}' found. Vidgear requires '{}' dependency installed with newer version '{}' or greater!".format( + module_version, parent_module, version + ) + # raise + raise ImportError(msg) + return module + + +# import core dependencies +import_core_dependency( + "cv2", + pkg_name="opencv-python", + version="3", + custom_message="Failed to find core dependency '{}'. Install it with `pip install opencv-python` command.", +) +import_core_dependency( + "numpy", + version="1.19.5" + if sys.version_info[:2] < (3, 7) + else None, # dropped support for 3.6.x legacies + mode="lte", +) +import_core_dependency( + "colorlog", +) +import_core_dependency("requests") +import_core_dependency("from tqdm import tqdm", pkg_name="tqdm") + +# import all APIs from .camgear import CamGear -from .netgear import NetGear +from .pigear import PiGear from .videogear import VideoGear +from .netgear import NetGear from .writegear import WriteGear from .screengear import ScreenGear from .streamgear import StreamGear diff --git a/vidgear/gears/asyncio/__init__.py b/vidgear/gears/asyncio/__init__.py index 2b6a0e845..0febd8916 100644 --- a/vidgear/gears/asyncio/__init__.py +++ b/vidgear/gears/asyncio/__init__.py @@ -1,3 +1,4 @@ +# import all APIs from .webgear import WebGear from .webgear_rtc import WebGear_RTC from .netgear_async import NetGear_Async diff --git a/vidgear/gears/asyncio/__main__.py b/vidgear/gears/asyncio/__main__.py index 7f4d8ffa8..cd8bcef52 100644 --- a/vidgear/gears/asyncio/__main__.py +++ b/vidgear/gears/asyncio/__main__.py @@ -1,196 +1,195 @@ -""" -=============================================== -vidgear library source-code is deployed under the Apache 2.0 License: - -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-See the License for the specific language governing permissions and -limitations under the License. -=============================================== -""" - -if __name__ == "__main__": - # import libs - import yaml - import argparse - - try: - import uvicorn - except ImportError: - raise ImportError( - "[VidGear:ERROR] :: Failed to detect correct uvicorn executables, install it with `pip3 install uvicorn` command." - ) - - # define argument parser and parse command line arguments - usage = """python -m vidgear.gears.asyncio [-h] [-m MODE] [-s SOURCE] [-ep ENABLEPICAMERA] [-S STABILIZE] - [-cn CAMERA_NUM] [-yt stream_mode] [-b BACKEND] [-cs COLORSPACE] - [-r RESOLUTION] [-f FRAMERATE] [-td TIME_DELAY] - [-ip IPADDRESS] [-pt PORT] [-l LOGGING] [-op OPTIONS]""" - - ap = argparse.ArgumentParser( - usage=usage, - description="Runs WebGear/WebGear_RTC Video Server through terminal.", - ) - ap.add_argument( - "-m", - "--mode", - type=str, - default="mjpeg", - choices=["mjpeg", "webrtc"], - help='Whether to use "MJPEG" or "WebRTC" mode for streaming.', - ) - # VideoGear API specific params - ap.add_argument( - "-s", - "--source", - default=0, - type=str, - help="Path to input source for CamGear API.", - ) - ap.add_argument( - "-ep", - "--enablePiCamera", - type=bool, - default=False, - help="Sets the flag to access PiGear(if True) or otherwise CamGear API respectively.", - ) - ap.add_argument( - "-S", - "--stabilize", - type=bool, - default=False, - help="Enables/disables real-time video stabilization.", - ) - ap.add_argument( - "-cn", - "--camera_num", - default=0, - help="Sets the camera module index that will be used by PiGear API.", - ) - ap.add_argument( - "-yt", - "--stream_mode", - default=False, - type=bool, - help="Enables YouTube Mode in CamGear API.", - ) - ap.add_argument( - "-b", - "--backend", - default=0, - type=int, - help="Sets the backend of the video source in CamGear API.", - ) - ap.add_argument( - "-cs", - "--colorspace", - type=str, - help="Sets the colorspace of the output video stream.", - ) - ap.add_argument( - "-r", - "--resolution", - default=(640, 480), - help="Sets the resolution (width,height) for camera module in PiGear API.", - ) - ap.add_argument( - "-f", - "--framerate", - default=30, - type=int, - help="Sets the framerate for camera module in PiGear API.", - ) - ap.add_argument( - "-td", - "--time_delay", - default=0, - help="Sets the time delay(in seconds) before start reading the frames.", - ) - # define WebGear exclusive params - ap.add_argument( - "-ip", - "--ipaddress", - type=str, - default="0.0.0.0", - help="Uvicorn binds the socket to this ipaddress.", - ) - ap.add_argument( - "-pt", - "--port", - type=int, - default=8000, - help="Uvicorn binds the socket to this port.", - ) - # define common params - ap.add_argument( - "-l", - "--logging", - type=bool, - default=False, - help="Enables/disables error logging, essential for debugging.", - ) - ap.add_argument( - "-op", - "--options", - type=str, - help="Sets the parameters supported by APIs(whichever being accessed) to the input videostream, \ - But make sure to wrap your dict value in single or double quotes.", - ) - args = vars(ap.parse_args()) - - options = {} - # handle `options` params - if not (args["options"] is None): - options = yaml.safe_load(args["options"]) - - if args["mode"] == "mjpeg": - from .webgear import WebGear - - # initialize WebGear object - web = WebGear( - enablePiCamera=args["enablePiCamera"], - stabilize=args["stabilize"], - source=args["source"], - camera_num=args["camera_num"], 
- stream_mode=args["stream_mode"], - backend=args["backend"], - colorspace=args["colorspace"], - resolution=args["resolution"], - framerate=args["framerate"], - logging=args["logging"], - time_delay=args["time_delay"], - **options - ) - else: - from .webgear_rtc import WebGear_RTC - - # initialize WebGear object - web = WebGear_RTC( - enablePiCamera=args["enablePiCamera"], - stabilize=args["stabilize"], - source=args["source"], - camera_num=args["camera_num"], - stream_mode=args["stream_mode"], - backend=args["backend"], - colorspace=args["colorspace"], - resolution=args["resolution"], - framerate=args["framerate"], - logging=args["logging"], - time_delay=args["time_delay"], - **options - ) - # run this object on Uvicorn server - uvicorn.run(web(), host=args["ipaddress"], port=args["port"]) - - if args["mode"] == "mjpeg": - # close app safely - web.shutdown() +""" +=============================================== +vidgear library source-code is deployed under the Apache 2.0 License: + +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +=============================================== +""" + +if __name__ == "__main__": + # import neccessary libs + import yaml + import argparse + + try: + import uvicorn + except ImportError: + raise ImportError( + "[VidGear:ERROR] :: Failed to detect correct uvicorn executables, install it with `pip3 install uvicorn` command." 
+ ) + + # define argument parser and parse command line arguments + usage = """python -m vidgear.gears.asyncio [-h] [-m MODE] [-s SOURCE] [-ep ENABLEPICAMERA] [-S STABILIZE] + [-cn CAMERA_NUM] [-yt stream_mode] [-b BACKEND] [-cs COLORSPACE] + [-r RESOLUTION] [-f FRAMERATE] [-td TIME_DELAY] + [-ip IPADDRESS] [-pt PORT] [-l LOGGING] [-op OPTIONS]""" + + ap = argparse.ArgumentParser( + usage=usage, + description="Runs WebGear/WebGear_RTC Video Server through terminal.", + ) + ap.add_argument( + "-m", + "--mode", + type=str, + default="mjpeg", + choices=["mjpeg", "webrtc"], + help='Whether to use "MJPEG" or "WebRTC" mode for streaming.', + ) + # VideoGear API specific params + ap.add_argument( + "-s", + "--source", + default=0, + type=str, + help="Path to input source for CamGear API.", + ) + ap.add_argument( + "-ep", + "--enablePiCamera", + type=bool, + default=False, + help="Sets the flag to access PiGear(if True) or otherwise CamGear API respectively.", + ) + ap.add_argument( + "-S", + "--stabilize", + type=bool, + default=False, + help="Enables/disables real-time video stabilization.", + ) + ap.add_argument( + "-cn", + "--camera_num", + default=0, + help="Sets the camera module index that will be used by PiGear API.", + ) + ap.add_argument( + "-yt", + "--stream_mode", + default=False, + type=bool, + help="Enables YouTube Mode in CamGear API.", + ) + ap.add_argument( + "-b", + "--backend", + default=0, + type=int, + help="Sets the backend of the video source in CamGear API.", + ) + ap.add_argument( + "-cs", + "--colorspace", + type=str, + help="Sets the colorspace of the output video stream.", + ) + ap.add_argument( + "-r", + "--resolution", + default=(640, 480), + help="Sets the resolution (width,height) for camera module in PiGear API.", + ) + ap.add_argument( + "-f", + "--framerate", + default=30, + type=int, + help="Sets the framerate for camera module in PiGear API.", + ) + ap.add_argument( + "-td", + "--time_delay", + default=0, + help="Sets the time delay(in seconds) before start reading the frames.", + ) + # define WebGear exclusive params + ap.add_argument( + "-ip", + "--ipaddress", + type=str, + default="0.0.0.0", + help="Uvicorn binds the socket to this ipaddress.", + ) + ap.add_argument( + "-pt", + "--port", + type=int, + default=8000, + help="Uvicorn binds the socket to this port.", + ) + # define common params + ap.add_argument( + "-l", + "--logging", + type=bool, + default=False, + help="Enables/disables error logging, essential for debugging.", + ) + ap.add_argument( + "-op", + "--options", + type=str, + help="Sets the parameters supported by APIs(whichever being accessed) to the input videostream, \ + But make sure to wrap your dict value in single or double quotes.", + ) + args = vars(ap.parse_args()) + + options = {} + # handle `options` params + if not (args["options"] is None): + options = yaml.safe_load(args["options"]) + + if args["mode"] == "mjpeg": + from .webgear import WebGear + + # initialize WebGear object + web = WebGear( + enablePiCamera=args["enablePiCamera"], + stabilize=args["stabilize"], + source=args["source"], + camera_num=args["camera_num"], + stream_mode=args["stream_mode"], + backend=args["backend"], + colorspace=args["colorspace"], + resolution=args["resolution"], + framerate=args["framerate"], + logging=args["logging"], + time_delay=args["time_delay"], + **options + ) + else: + from .webgear_rtc import WebGear_RTC + + # initialize WebGear object + web = WebGear_RTC( + enablePiCamera=args["enablePiCamera"], + stabilize=args["stabilize"], + 
source=args["source"], + camera_num=args["camera_num"], + stream_mode=args["stream_mode"], + backend=args["backend"], + colorspace=args["colorspace"], + resolution=args["resolution"], + framerate=args["framerate"], + logging=args["logging"], + time_delay=args["time_delay"], + **options + ) + # run this object on Uvicorn server + uvicorn.run(web(), host=args["ipaddress"], port=args["port"]) + + # close app safely + web.shutdown() diff --git a/vidgear/gears/asyncio/helper.py b/vidgear/gears/asyncio/helper.py index 14503746a..ebe3ce722 100755 --- a/vidgear/gears/asyncio/helper.py +++ b/vidgear/gears/asyncio/helper.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -21,68 +21,22 @@ # Contains all the support functions/modules required by Vidgear Asyncio packages # import the necessary packages - import os import cv2 import sys import errno import numpy as np -import aiohttp import asyncio import logging as log import platform import requests from tqdm import tqdm from colorlog import ColoredFormatter -from pkg_resources import parse_version from requests.adapters import HTTPAdapter from requests.packages.urllib3.util.retry import Retry - -def logger_handler(): - """ - ### logger_handler - - Returns the logger handler - - **Returns:** A logger handler - """ - # logging formatter - formatter = ColoredFormatter( - "%(bold_cyan)s%(asctime)s :: %(bold_blue)s%(name)s%(reset)s :: %(log_color)s%(levelname)s%(reset)s :: %(message)s", - datefmt="%H:%M:%S", - reset=False, - log_colors={ - "INFO": "bold_green", - "DEBUG": "bold_yellow", - "WARNING": "bold_purple", - "ERROR": "bold_red", - "CRITICAL": "bold_red,bg_white", - }, - ) - # check if VIDGEAR_LOGFILE defined - file_mode = os.environ.get("VIDGEAR_LOGFILE", False) - # define handler - handler = log.StreamHandler() - if file_mode and isinstance(file_mode, str): - file_path = os.path.abspath(file_mode) - if (os.name == "nt" or os.access in os.supports_effective_ids) and os.access( - os.path.dirname(file_path), os.W_OK - ): - file_path = ( - os.path.join(file_path, "vidgear.log") - if os.path.isdir(file_path) - else file_path - ) - handler = log.FileHandler(file_path, mode="a") - formatter = log.Formatter( - "%(asctime)s :: %(name)s :: %(levelname)s :: %(message)s", - datefmt="%H:%M:%S", - ) - - handler.setFormatter(formatter) - return handler - +# import helper packages +from ..helper import logger_handler, mkdir_safe # define logger logger = log.getLogger("Helper Asyncio") @@ -114,30 +68,9 @@ def send(self, request, **kwargs): return super().send(request, **kwargs) -def mkdir_safe(dir, logging=False): - """ - ### mkdir_safe - - Safely creates directory at given path. 
- - Parameters: - logging (bool): enables logging for its operations - - """ - try: - os.makedirs(dir) - if logging: - logger.debug("Created directory at `{}`".format(dir)) - except OSError as e: - if e.errno != errno.EEXIST: - raise - if logging: - logger.debug("Directory already exists at `{}`".format(dir)) - - def create_blank_frame(frame=None, text="", logging=False): """ - ### create_blank_frame + ## create_blank_frame Create blank frames of given frame size with text @@ -147,12 +80,12 @@ def create_blank_frame(frame=None, text="", logging=False): **Returns:** A reduced numpy ndarray array. """ # check if frame is valid - if frame is None: - raise ValueError("[Helper:ERROR] :: Input frame cannot be NoneType!") + if frame is None or not (isinstance(frame, np.ndarray)): + raise ValueError("[Helper:ERROR] :: Input frame is invalid!") # grab the frame size (height, width) = frame.shape[:2] # create blank frame - blank_frame = np.zeros((height, width, 3), np.uint8) + blank_frame = np.zeros(frame.shape, frame.dtype) # setup text if text and isinstance(text, str): if logging: @@ -169,19 +102,21 @@ def create_blank_frame(frame=None, text="", logging=False): cv2.putText( blank_frame, text, (textX, textY), font, fontScale, (125, 125, 125), 6 ) + # return frame return blank_frame -async def reducer(frame=None, percentage=0): +async def reducer(frame=None, percentage=0, interpolation=cv2.INTER_LANCZOS4): """ - ### reducer + ## reducer Asynchronous method that reduces frame size by given percentage. Parameters: frame (numpy.ndarray): inputs numpy array(frame). percentage (int/float): inputs size-reduction percentage. + interpolation (int): Change resize interpolation. **Returns:** A reduced numpy ndarray array. """ @@ -195,6 +130,11 @@ async def reducer(frame=None, percentage=0): "[Helper:ERROR] :: Given frame-size reduction percentage is invalid, Kindly refer docs." ) + if not (isinstance(interpolation, int)): + raise ValueError( + "[Helper:ERROR] :: Given interpolation is invalid, Kindly refer docs." + ) + # grab the frame size (height, width) = frame.shape[:2] @@ -205,12 +145,12 @@ async def reducer(frame=None, percentage=0): dimensions = (int(reduction), int(height * ratio)) # return the resized frame - return cv2.resize(frame, dimensions, interpolation=cv2.INTER_LANCZOS4) + return cv2.resize(frame, dimensions, interpolation=interpolation) def generate_webdata(path, c_name="webgear", overwrite_default=False, logging=False): """ - ### generate_webdata + ## generate_webdata Auto-Generates, and Auto-validates default data for WebGear API. @@ -284,7 +224,7 @@ def generate_webdata(path, c_name="webgear", overwrite_default=False, logging=Fa def download_webdata(path, c_name="webgear", files=[], logging=False): """ - ### download_webdata + ## download_webdata Downloads given list of files for WebGear API(if not available) from GitHub Server, and also Validates them. @@ -352,7 +292,7 @@ def download_webdata(path, c_name="webgear", files=[], logging=False): def validate_webdata(path, files=[], logging=False): """ - ### validate_auth_keys + ## validate_auth_keys Validates, and also maintains downloaded list of files. 
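The helper.py hunks above give the asynchronous `reducer` a configurable `interpolation` argument; the short sketch below is based only on the signature visible in this diff, and the dummy frame plus the `cv2.INTER_AREA` choice are illustrative assumptions:

```python
# Usage sketch for the async reducer shown above: shrink a frame by 40%,
# overriding the default cv2.INTER_LANCZOS4 interpolation with cv2.INTER_AREA.
import asyncio
import cv2
import numpy as np
from vidgear.gears.asyncio.helper import reducer

async def shrink_frame():
    # stand-in frame; in real use this would come from a VideoCapture Gear's read()
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    smaller = await reducer(frame, percentage=40, interpolation=cv2.INTER_AREA)
    print(frame.shape, "->", smaller.shape)

asyncio.run(shrink_frame())
```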
diff --git a/vidgear/gears/asyncio/netgear_async.py b/vidgear/gears/asyncio/netgear_async.py index c91d33f2e..d4564e03a 100755 --- a/vidgear/gears/asyncio/netgear_async.py +++ b/vidgear/gears/asyncio/netgear_async.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -18,23 +18,31 @@ =============================================== """ # import the necessary packages - import cv2 import sys -import zmq import numpy as np import asyncio import inspect import logging as log -import msgpack +import string +import secrets import platform -import zmq.asyncio -import msgpack_numpy as m from collections import deque -from .helper import logger_handler +# import helper packages +from ..helper import logger_handler, import_dependency_safe + +# import additional API(s) from ..videogear import VideoGear +# safe import critical Class modules +zmq = import_dependency_safe("zmq", pkg_name="pyzmq", error="silent", min_version="4.0") +if not (zmq is None): + import zmq.asyncio +msgpack = import_dependency_safe("msgpack", error="silent") +m = import_dependency_safe("msgpack_numpy", error="silent") +uvloop = import_dependency_safe("uvloop", error="silent") + # define logger logger = log.getLogger("NetGear_Async") logger.propagate = False @@ -45,14 +53,17 @@ class NetGear_Async: """ NetGear_Async can generate the same performance as NetGear API at about one-third the memory consumption, and also provide complete server-client handling with various - options to use variable protocols/patterns similar to NetGear, but it doesn't support any of yet. + options to use variable protocols/patterns similar to NetGear, but lacks in terms of flexibility as it supports only a few of NetGear's Exclusive Modes. - NetGear_Async is built on zmq.asyncio, and powered by a high-performance asyncio event loop called uvloop to achieve unmatchable high-speed and lag-free video streaming + NetGear_Async is built on `zmq.asyncio`, and powered by a high-performance asyncio event loop called uvloop to achieve unmatchable high-speed and lag-free video streaming over the network with minimal resource constraints. NetGear_Async can transfer thousands of frames in just a few seconds without causing any significant load on your system. - NetGear_Async provides complete server-client handling and options to use variable protocols/patterns similar to NetGear API but doesn't support any NetGear's Exclusive - Modes yet. Furthermore, NetGear_Async allows us to define our custom Server as source to manipulate frames easily before sending them across the network. + NetGear_Async provides complete server-client handling and options to use variable protocols/patterns similar to NetGear API. Furthermore, NetGear_Async allows us to define + our custom Server as source to transform frames easily before sending them across the network. + + NetGear_Async now supports additional [**bidirectional data transmission**](../advanced/bidirectional_mode) between receiver(client) and sender(server) while transferring frames. + Users can easily build complex applications such as [Real-Time Video Chat](../advanced/bidirectional_mode/#using-bidirectional-mode-for-video-frames-transfer) in just a few lines of code. 
In addition to all this, NetGear_Async API also provides internal wrapper around VideoGear, which itself provides internal access to both CamGear and PiGear APIs, thereby granting it exclusive power for transferring frames incoming from any source to the network. @@ -100,7 +111,7 @@ def __init__( port (str): sets the valid Network Port of the Server/Client. protocol (str): sets the valid messaging protocol between Server/Client. pattern (int): sets the supported messaging pattern(flow of communication) between Server/Client - receive_mode (bool): select the Netgear's Mode of operation. + receive_mode (bool): select the NetGear_Async's Mode of operation. timeout (int/float): controls the maximum waiting time(in sec) after which Client throws `TimeoutError`. enablePiCamera (bool): provide access to PiGear(if True) or CamGear(if False) APIs respectively. stabilize (bool): enable access to Stabilizer Class for stabilizing frames. @@ -113,8 +124,12 @@ def __init__( colorspace (str): selects the colorspace of the input stream. logging (bool): enables/disables logging. time_delay (int): time delay (in sec) before start reading the frames. - options (dict): provides ability to alter Tweak Parameters of NetGear, CamGear, PiGear & Stabilizer. + options (dict): provides ability to alter Tweak Parameters of NetGear_Async, CamGear, PiGear & Stabilizer. """ + # raise error(s) for critical Class imports + import_dependency_safe("zmq" if zmq is None else "", min_version="4.0") + import_dependency_safe("msgpack" if msgpack is None else "") + import_dependency_safe("msgpack_numpy" if m is None else "") # enable logging if specified self.__logging = logging @@ -161,20 +176,70 @@ def __init__( self.__stream = None # initialize Messaging Socket self.__msg_socket = None - # initialize NetGear's configuration dictionary + # initialize NetGear_Async's configuration dictionary self.config = {} + # asyncio queue handler + self.__queue = None + # define Bidirectional mode + self.__bi_mode = False # handles Bidirectional mode state # assign timeout for Receiver end - if timeout > 0 and isinstance(timeout, (int, float)): + if timeout and isinstance(timeout, (int, float)): self.__timeout = float(timeout) else: self.__timeout = 15.0 + + # generate 8-digit random system id + self.__id = "".join( + secrets.choice(string.ascii_uppercase + string.digits) for i in range(8) + ) + + # Handle user-defined options dictionary values + # reformat dictionary + options = {str(k).strip(): v for k, v in options.items()} + # handle bidirectional mode + if "bidirectional_mode" in options: + value = options["bidirectional_mode"] + # also check if pattern and source is valid + if isinstance(value, bool) and pattern < 2 and source is None: + # activate Bidirectional mode if specified + self.__bi_mode = value + else: + # otherwise disable it + self.__bi_mode = False + logger.warning("Bidirectional data transmission is disabled!") + # handle errors and logging + if pattern >= 2: + # raise error + raise ValueError( + "[NetGear_Async:ERROR] :: `{}` pattern is not valid when Bidirectional Mode is enabled. Kindly refer Docs for more Information!".format( + pattern + ) + ) + elif not (source is None): + raise ValueError( + "[NetGear_Async:ERROR] :: Custom source must be used when Bidirectional Mode is enabled. 
Kindly refer Docs for more Information!".format( + pattern + ) + ) + elif isinstance(value, bool) and self.__logging: + # log Bidirectional mode activation + logger.debug( + "Bidirectional Data Transmission is {} for this connection!".format( + "enabled" if value else "disabled" + ) + ) + else: + logger.error("`bidirectional_mode` value is invalid!") + # clean + del options["bidirectional_mode"] + # define messaging asynchronous Context self.__msg_context = zmq.asyncio.Context() # check whether `Receive Mode` is enabled if receive_mode: - # assign local ip address if None + # assign local IP address if None if address is None: self.__address = "*" # define address else: @@ -229,18 +294,18 @@ def __init__( if sys.version_info[:2] >= (3, 8): asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) else: - try: - # import library - import uvloop - - # Latest uvloop eventloop is only available for UNIX machines & python>=3.7. + if not (uvloop is None): + # Latest uvloop eventloop is only available for UNIX machines. asyncio.set_event_loop_policy(uvloop.EventLoopPolicy()) - except ImportError: - pass + else: + # log if not present + import_dependency_safe("uvloop", error="log") # Retrieve event loop and assign it self.loop = asyncio.get_event_loop() - + # create asyncio queue if bidirectional mode activated + self.__queue = asyncio.Queue() if self.__bi_mode else None + # log eventloop for debugging if self.__logging: # debugging logger.info( @@ -256,25 +321,23 @@ def launch(self): # check if receive mode enabled if self.__receive_mode: if self.__logging: - logger.debug("Launching NetGear asynchronous generator!") + logger.debug("Launching NetGear_Async asynchronous generator!") # run loop executor for Receiver asynchronous generator self.loop.run_in_executor(None, self.recv_generator) - # return instance - return self else: # Otherwise launch Server handler if self.__logging: - logger.debug("Creating NetGear asynchronous server handler!") + logger.debug("Creating NetGear_Async asynchronous server handler!") # create task for Server Handler - self.task = asyncio.ensure_future(self.__server_handler(), loop=self.loop) - # return instance - return self + self.task = asyncio.ensure_future(self.__server_handler()) + # return instance + return self async def __server_handler(self): """ Handles various Server-end processes/tasks. """ - # validate assigned frame generator in NetGear configuration + # validate assigned frame generator in NetGear_Async configuration if isinstance(self.config, dict) and "generator" in self.config: # check if its assigned value is a asynchronous generator if self.config["generator"] is None or not inspect.isasyncgen( @@ -287,7 +350,7 @@ async def __server_handler(self): else: # raise error if validation fails raise RuntimeError( - "[NetGear_Async:ERROR] :: Assigned NetGear configuration is invalid!" + "[NetGear_Async:ERROR] :: Assigned NetGear_Async configuration is invalid!" ) # define our messaging socket @@ -321,12 +384,16 @@ async def __server_handler(self): self.__msg_pattern, ) ) - logger.debug( - "Send Mode is successfully activated and ready to send data!" - ) + logger.critical( + "Send Mode is successfully activated and ready to send data!" + ) except Exception as e: # log ad raise error if failed logger.exception(str(e)) + if self.__bi_mode: + logger.error( + "Failed to activate Bidirectional Mode for this connection!" 
+ ) raise ValueError( "[NetGear_Async:ERROR] :: Failed to connect address: {} and pattern: {}!".format( ( @@ -341,41 +408,101 @@ async def __server_handler(self): ) # loop over our Asynchronous frame generator - async for frame in self.config["generator"]: + async for dataframe in self.config["generator"]: + # extract data if bidirectional mode + if self.__bi_mode and len(dataframe) == 2: + (data, frame) = dataframe + if not (data is None) and isinstance(data, np.ndarray): + logger.warning( + "Skipped unsupported `data` of datatype: {}!".format( + type(data).__name__ + ) + ) + data = None + assert isinstance( + frame, np.ndarray + ), "[NetGear_Async:ERROR] :: Invalid data received from server end!" + elif self.__bi_mode: + # raise error for invalid data + raise ValueError( + "[NetGear_Async:ERROR] :: Send Mode only accepts tuple(data, frame) as input in Bidirectional Mode. \ + Kindly refer vidgear docs!" + ) + else: + # otherwise just make a copy of frame + frame = np.copy(dataframe) + data = None + # check if retrieved frame is `CONTIGUOUS` if not (frame.flags["C_CONTIGUOUS"]): # otherwise make it frame = np.ascontiguousarray(frame, dtype=frame.dtype) - # encode message - msg_enc = msgpack.packb(frame, default=m.encode) - # send it over network - await self.__msg_socket.send_multipart([msg_enc]) - # check if bidirectional patterns + + # create data dict + data_dict = dict( + terminate=False, + bi_mode=self.__bi_mode, + data=data if not (data is None) else "", + ) + # encode it + data_enc = msgpack.packb(data_dict) + # send the encoded data with correct flags + await self.__msg_socket.send(data_enc, flags=zmq.SNDMORE) + + # encode frame + frame_enc = msgpack.packb(frame, default=m.encode) + # send the encoded frame + await self.__msg_socket.send_multipart([frame_enc]) + + # check if bidirectional patterns used if self.__msg_pattern < 2: - # then receive and log confirmation - recv_confirmation = await self.__msg_socket.recv_multipart() - if self.__logging: - logger.debug(recv_confirmation) - - # send `exit` flag when done! - await self.__msg_socket.send_multipart([b"exit"]) - # check if bidirectional patterns - if self.__msg_pattern < 2: - # then receive and log confirmation - recv_confirmation = await self.__msg_socket.recv_multipart() - if self.__logging: - logger.debug(recv_confirmation) + # handle bidirectional data transfer if enabled + if self.__bi_mode: + # get receiver encoded message withing timeout limit + recvdmsg_encoded = await asyncio.wait_for( + self.__msg_socket.recv(), timeout=self.__timeout + ) + # retrieve receiver data from encoded message + recvd_data = msgpack.unpackb(recvdmsg_encoded, use_list=False) + # check message type + if recvd_data["return_type"] == "ndarray": # numpy.ndarray + # get encoded frame from receiver + recvdframe_encoded = await asyncio.wait_for( + self.__msg_socket.recv_multipart(), timeout=self.__timeout + ) + # retrieve frame and put in queue + await self.__queue.put( + msgpack.unpackb( + recvdframe_encoded[0], + use_list=False, + object_hook=m.decode, + ) + ) + else: + # otherwise put data directly in queue + await self.__queue.put( + recvd_data["return_data"] + if recvd_data["return_data"] + else None + ) + else: + # otherwise log received confirmation + recv_confirmation = await asyncio.wait_for( + self.__msg_socket.recv(), timeout=self.__timeout + ) + if self.__logging: + logger.debug(recv_confirmation) async def recv_generator(self): """ - A default Asynchronous Frame Generator for NetGear's Receiver-end. 
+ A default Asynchronous Frame Generator for NetGear_Async's Receiver-end. """ # check whether `receive mode` is activated if not (self.__receive_mode): # raise Value error and exit self.__terminate = True raise ValueError( - "[NetGear:ERROR] :: `recv_generator()` function cannot be accessed while `receive_mode` is disabled. Kindly refer vidgear docs!" + "[NetGear_Async:ERROR] :: `recv_generator()` function cannot be accessed while `receive_mode` is disabled. Kindly refer vidgear docs!" ) # initialize and define messaging socket @@ -394,7 +521,7 @@ async def recv_generator(self): # finally log progress if self.__logging: logger.debug( - "Successfully Binded to address: {} with pattern: {}.".format( + "Successfully binded to address: {} with pattern: {}.".format( ( self.__protocol + "://" @@ -405,11 +532,11 @@ async def recv_generator(self): self.__msg_pattern, ) ) - logger.debug("Receive Mode is activated successfully!") + logger.critical("Receive Mode is activated successfully!") except Exception as e: logger.exception(str(e)) - raise ValueError( - "[NetGear:ERROR] :: Failed to bind address: {} and pattern: {}!".format( + raise RuntimeError( + "[NetGear_Async:ERROR] :: Failed to bind address: {} and pattern: {}{}!".format( ( self.__protocol + "://" @@ -418,32 +545,117 @@ async def recv_generator(self): + str(self.__port) ), self.__msg_pattern, + " and Bidirectional Mode enabled" if self.__bi_mode else "", ) ) # loop until terminated while not self.__terminate: - # get message withing timeout limit - recvd_msg = await asyncio.wait_for( + # get encoded data message from server withing timeout limit + datamsg_encoded = await asyncio.wait_for( + self.__msg_socket.recv(), timeout=self.__timeout + ) + # retrieve data from message + data = msgpack.unpackb(datamsg_encoded, use_list=False) + # terminate if exit` flag received from server + if data["terminate"]: + # send confirmation message to server if bidirectional patterns + if self.__msg_pattern < 2: + # create termination confirmation message + return_dict = dict( + terminated="Client-`{}` successfully terminated!".format( + self.__id + ), + ) + # encode message + retdata_enc = msgpack.packb(return_dict) + # send message back to server + await self.__msg_socket.send(retdata_enc) + if self.__logging: + logger.info("Termination signal received from server!") + # break loop and terminate + self.__terminate = True + break + # get encoded frame message from server withing timeout limit + framemsg_encoded = await asyncio.wait_for( self.__msg_socket.recv_multipart(), timeout=self.__timeout ) + # retrieve frame from message + frame = msgpack.unpackb( + framemsg_encoded[0], use_list=False, object_hook=m.decode + ) + # check if bidirectional patterns if self.__msg_pattern < 2: - # send confirmation - await self.__msg_socket.send_multipart([b"Message Received!"]) - # terminate if exit` flag received - if recvd_msg[0] == b"exit": - break - # retrieve frame from message - frame = msgpack.unpackb(recvd_msg[0], object_hook=m.decode) - # yield received frame - yield frame + # handle bidirectional data transfer if enabled + if self.__bi_mode and data["bi_mode"]: + # handle empty queue + if not self.__queue.empty(): + return_data = await self.__queue.get() + self.__queue.task_done() + else: + return_data = None + # check if we are returning `ndarray` frames + if not (return_data is None) and isinstance( + return_data, np.ndarray + ): + # check whether the incoming frame is contiguous + if not (return_data.flags["C_CONTIGUOUS"]): + return_data = 
np.ascontiguousarray( + return_data, dtype=return_data.dtype + ) + + # create return type dict without data + rettype_dict = dict( + return_type=(type(return_data).__name__), + return_data=None, + ) + # encode it + rettype_enc = msgpack.packb(rettype_dict) + # send it to server with correct flags + await self.__msg_socket.send(rettype_enc, flags=zmq.SNDMORE) + + # encode return ndarray data + retframe_enc = msgpack.packb(return_data, default=m.encode) + # send it over network to server + await self.__msg_socket.send_multipart([retframe_enc]) + else: + # otherwise create type and data dict + return_dict = dict( + return_type=(type(return_data).__name__), + return_data=return_data + if not (return_data is None) + else "", + ) + # encode it + retdata_enc = msgpack.packb(return_dict) + # send it over network to server + await self.__msg_socket.send(retdata_enc) + elif self.__bi_mode or data["bi_mode"]: + # raise error if bidirectional mode is disabled at server or client but not both + raise RuntimeError( + "[NetGear_Async:ERROR] :: Invalid configuration! Bidirectional Mode is not activate on {} end.".format( + "client" if self.__bi_mode else "server" + ) + ) + else: + # otherwise just send confirmation message to server + await self.__msg_socket.send( + bytes( + "Data received on client: {} !".format(self.__id), "utf-8" + ) + ) + # yield received tuple(data-frame) if bidirectional mode or else just frame + if self.__bi_mode: + yield (data["data"], frame) if data["data"] else (None, frame) + else: + yield frame # sleep for sometime - await asyncio.sleep(0.00001) + await asyncio.sleep(0) async def __frame_generator(self): """ - Returns a default frame-generator for NetGear's Server Handler. + Returns a default frame-generator for NetGear_Async's Server Handler. """ # start stream self.__stream.start() @@ -457,23 +669,46 @@ async def __frame_generator(self): # yield frame yield frame # sleep for sometime - await asyncio.sleep(0.00001) + await asyncio.sleep(0) - def close(self, skip_loop=False): + async def transceive_data(self, data=None): """ - Terminates all NetGear Asynchronous processes gracefully. + Bidirectional Mode exclusive method to Transmit data _(in Receive mode)_ and Receive data _(in Send mode)_. Parameters: - skip_loop (Boolean): (optional)used only if closing executor loop throws an error. + data (any): inputs data _(of any datatype)_ for sending back to Server. + """ + recvd_data = None + if not self.__terminate: + if self.__bi_mode: + if self.__receive_mode: + await self.__queue.put(data) + else: + if not self.__queue.empty(): + recvd_data = await self.__queue.get() + self.__queue.task_done() + else: + logger.error( + "`transceive_data()` function cannot be used when Bidirectional Mode is disabled." + ) + return recvd_data + + async def __terminate_connection(self, disable_confirmation=False): + """ + Internal asyncio method to safely terminate ZMQ connection and queues + + Parameters: + disable_confirmation (boolean): Force disable termination confirmation from client in bidirectional patterns. """ # log termination if self.__logging: logger.debug( - "Terminating various {} Processes.".format( + "Terminating various {} Processes. 
Please wait.".format( "Receive Mode" if self.__receive_mode else "Send Mode" ) ) - # whether `receive_mode` is enabled or not + + # check whether `receive_mode` is enabled or not if self.__receive_mode: # indicate that process should be terminated self.__terminate = True @@ -483,7 +718,52 @@ def close(self, skip_loop=False): # terminate stream if not (self.__stream is None): self.__stream.stop() + # signal `exit` flag for termination! + data_dict = dict(terminate=True) + data_enc = msgpack.packb(data_dict) + await self.__msg_socket.send(data_enc) + # check if bidirectional patterns + if self.__msg_pattern < 2 and not disable_confirmation: + # then receive and log confirmation + recv_confirmation = await self.__msg_socket.recv() + recvd_conf = msgpack.unpackb(recv_confirmation, use_list=False) + if self.__logging and "terminated" in recvd_conf: + logger.debug(recvd_conf["terminated"]) + # close socket + self.__msg_socket.setsockopt(zmq.LINGER, 0) + self.__msg_socket.close() + # handle asyncio queues in bidirectional mode + if self.__bi_mode: + # empty queue if not + while not self.__queue.empty(): + try: + self.__queue.get_nowait() + except asyncio.QueueEmpty: + continue + self.__queue.task_done() + # join queues + await self.__queue.join() + + logger.critical( + "{} successfully terminated!".format( + "Receive Mode" if self.__receive_mode else "Send Mode" + ) + ) + + def close(self, skip_loop=False): + """ + Terminates all NetGear_Async Asynchronous processes gracefully. + Parameters: + skip_loop (Boolean): (optional)used only if don't want to close eventloop(required in pytest). + """ # close event loop if specified if not (skip_loop): + # close connection gracefully + self.loop.run_until_complete(self.__terminate_connection()) self.loop.close() + else: + # otherwise create a task + asyncio.ensure_future( + self.__terminate_connection(disable_confirmation=True) + ) diff --git a/vidgear/gears/asyncio/webgear.py b/vidgear/gears/asyncio/webgear.py index ecd7e9518..be6e3f634 100755 --- a/vidgear/gears/asyncio/webgear.py +++ b/vidgear/gears/asyncio/webgear.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
@@ -18,23 +18,38 @@ =============================================== """ # import the necessary packages - import os import cv2 import sys import asyncio import inspect +import numpy as np import logging as log from collections import deque -from starlette.routing import Mount, Route -from starlette.responses import StreamingResponse -from starlette.templating import Jinja2Templates -from starlette.staticfiles import StaticFiles -from starlette.applications import Starlette +from os.path import expanduser + +# import helper packages +from .helper import ( + reducer, + generate_webdata, + create_blank_frame, +) +from ..helper import logger_handler, retrieve_best_interpolation, import_dependency_safe -from .helper import reducer, logger_handler, generate_webdata, create_blank_frame +# import additional API(s) from ..videogear import VideoGear +# safe import critical Class modules +starlette = import_dependency_safe("starlette", error="silent") +if not (starlette is None): + from starlette.routing import Mount, Route + from starlette.responses import StreamingResponse + from starlette.templating import Jinja2Templates + from starlette.staticfiles import StaticFiles + from starlette.applications import Starlette + from starlette.middleware import Middleware +simplejpeg = import_dependency_safe("simplejpeg", error="silent", min_version="1.6.1") + # define logger logger = log.getLogger("WebGear") logger.propagate = False @@ -91,13 +106,24 @@ def __init__( time_delay (int): time delay (in sec) before start reading the frames. options (dict): provides ability to alter Tweak Parameters of WebGear, CamGear, PiGear & Stabilizer. """ + # raise error(s) for critical Class imports + import_dependency_safe("starlette" if starlette is None else "") + import_dependency_safe( + "simplejpeg" if simplejpeg is None else "", min_version="1.6.1" + ) # initialize global params - self.__jpeg_quality = 90 # 90% quality - self.__jpeg_optimize = 0 # optimization off - self.__jpeg_progressive = 0 # jpeg will be baseline instead - self.__frame_size_reduction = 20 # 20% reduction + # define frame-compression handler + self.__jpeg_compression_quality = 90 # 90% quality + self.__jpeg_compression_fastdct = True # fastest DCT on by default + self.__jpeg_compression_fastupsample = False # fastupsample off by default + self.__jpeg_compression_colorspace = "BGR" # use BGR colorspace by default self.__logging = logging + self.__frame_size_reduction = 25 # use 25% reduction + # retrieve interpolation for reduction + self.__interpolation = retrieve_best_interpolation( + ["INTER_LINEAR_EXACT", "INTER_LINEAR", "INTER_AREA"] + ) custom_data_location = "" # path to save data-files to custom location data_path = "" # path to WebGear data-files @@ -109,37 +135,66 @@ def __init__( # assign values to global variables if specified and valid if options: - if "frame_size_reduction" in options: - value = options["frame_size_reduction"] - if isinstance(value, (int, float)) and value >= 0 and value <= 90: - self.__frame_size_reduction = value + if "jpeg_compression_colorspace" in options: + value = options["jpeg_compression_colorspace"] + if isinstance(value, str) and value.strip().upper() in [ + "RGB", + "BGR", + "RGBX", + "BGRX", + "XBGR", + "XRGB", + "GRAY", + "RGBA", + "BGRA", + "ABGR", + "ARGB", + "CMYK", + ]: + # set encoding colorspace + self.__jpeg_compression_colorspace = value.strip().upper() else: - logger.warning("Skipped invalid `frame_size_reduction` value!") - del options["frame_size_reduction"] # clean + logger.warning( + 
"Skipped invalid `jpeg_compression_colorspace` value!" + ) + del options["jpeg_compression_colorspace"] # clean - if "frame_jpeg_quality" in options: - value = options["frame_jpeg_quality"] - if isinstance(value, (int, float)) and value >= 10 and value <= 95: - self.__jpeg_quality = int(value) + if "jpeg_compression_quality" in options: + value = options["jpeg_compression_quality"] + # set valid jpeg quality + if isinstance(value, (int, float)) and value >= 10 and value <= 100: + self.__jpeg_compression_quality = int(value) else: - logger.warning("Skipped invalid `frame_jpeg_quality` value!") - del options["frame_jpeg_quality"] # clean + logger.warning("Skipped invalid `jpeg_compression_quality` value!") + del options["jpeg_compression_quality"] # clean - if "frame_jpeg_optimize" in options: - value = options["frame_jpeg_optimize"] + if "jpeg_compression_fastdct" in options: + value = options["jpeg_compression_fastdct"] + # enable jpeg fastdct if isinstance(value, bool): - self.__jpeg_optimize = int(value) + self.__jpeg_compression_fastdct = value else: - logger.warning("Skipped invalid `frame_jpeg_optimize` value!") - del options["frame_jpeg_optimize"] # clean + logger.warning("Skipped invalid `jpeg_compression_fastdct` value!") + del options["jpeg_compression_fastdct"] # clean - if "frame_jpeg_progressive" in options: - value = options["frame_jpeg_progressive"] + if "jpeg_compression_fastupsample" in options: + value = options["jpeg_compression_fastupsample"] + # enable jpeg fastupsample if isinstance(value, bool): - self.__jpeg_progressive = int(value) + self.__jpeg_compression_fastupsample = value + else: + logger.warning( + "Skipped invalid `jpeg_compression_fastupsample` value!" + ) + del options["jpeg_compression_fastupsample"] # clean + + if "frame_size_reduction" in options: + value = options["frame_size_reduction"] + if isinstance(value, (int, float)) and value >= 0 and value <= 90: + self.__frame_size_reduction = value else: - logger.warning("Skipped invalid `frame_jpeg_progressive` value!") - del options["frame_jpeg_progressive"] # clean + logger.warning("Skipped invalid `frame_size_reduction` value!") + del options["frame_size_reduction"] # clean if "custom_data_location" in options: value = options["custom_data_location"] @@ -183,8 +238,6 @@ def __init__( ) else: # otherwise generate suitable path - from os.path import expanduser - data_path = generate_webdata( os.path.join(expanduser("~"), ".vidgear"), c_name="webgear", @@ -199,15 +252,6 @@ def __init__( data_path ) ) - logger.debug( - "Setting params:: Size Reduction:{}%, JPEG quality:{}%, JPEG optimizations:{}, JPEG progressive:{}{}.".format( - self.__frame_size_reduction, - self.__jpeg_quality, - bool(self.__jpeg_optimize), - bool(self.__jpeg_progressive), - " and emulating infinite frames" if self.__enable_inf else "", - ) - ) # define Jinja2 templates handler self.__templates = Jinja2Templates(directory="{}/templates".format(data_path)) @@ -224,12 +268,12 @@ def __init__( name="static", ), ] + # define middleware support + self.middleware = [] # Handle video source if source is None: self.config = {"generator": None} self.__stream = None - if self.__logging: - logger.warning("Given source is of NoneType!") else: # define stream with necessary params self.__stream = VideoGear( @@ -248,6 +292,25 @@ def __init__( ) # define default frame generator in configuration self.config = {"generator": self.__producer} + + # log if specified + if self.__logging: + if source is None: + logger.warning( + "Given source is of NoneType. 
Therefore, JPEG Frame-Compression is disabled!" + ) + else: + logger.debug( + "Enabling JPEG Frame-Compression with Colorspace:`{}`, Quality:`{}`%, Fastdct:`{}`, and Fastupsample:`{}`.".format( + self.__jpeg_compression_colorspace, + self.__jpeg_compression_quality, + "enabled" if self.__jpeg_compression_fastdct else "disabled", + "enabled" + if self.__jpeg_compression_fastupsample + else "disabled", + ) + ) + # copying original routing tables for further validation self.__rt_org_copy = self.routes[:] # initialize blank frame @@ -266,6 +329,14 @@ def __call__(self): ): raise RuntimeError("[WebGear:ERROR] :: Routing tables are not valid!") + # validate middlewares + assert not (self.middleware is None), "Middlewares are NoneType!" + if self.middleware and ( + not isinstance(self.middleware, list) + or not all(isinstance(x, Middleware) for x in self.middleware) + ): + raise RuntimeError("[WebGear:ERROR] :: Middlewares are not valid!") + # validate assigned frame generator in WebGear configuration if isinstance(self.config, dict) and "generator" in self.config: # check if its assigned value is a asynchronous generator @@ -291,6 +362,7 @@ def __call__(self): return Starlette( debug=(True if self.__logging else False), routes=self.routes, + middleware=self.middleware, exception_handlers=self.__exception_handlers, on_shutdown=[self.shutdown], ) @@ -324,32 +396,44 @@ async def __producer(self): # reducer frames size if specified if self.__frame_size_reduction: - frame = await reducer(frame, percentage=self.__frame_size_reduction) + frame = await reducer( + frame, + percentage=self.__frame_size_reduction, + interpolation=self.__interpolation, + ) + # handle JPEG encoding - encodedImage = cv2.imencode( - ".jpg", - frame, - [ - cv2.IMWRITE_JPEG_QUALITY, - self.__jpeg_quality, - cv2.IMWRITE_JPEG_PROGRESSIVE, - self.__jpeg_progressive, - cv2.IMWRITE_JPEG_OPTIMIZE, - self.__jpeg_optimize, - ], - )[1].tobytes() + if self.__jpeg_compression_colorspace == "GRAY": + if frame.ndim == 2: + # patch for https://gitlab.com/jfolz/simplejpeg/-/issues/11 + frame = np.expand_dims(frame, axis=2) + encodedImage = simplejpeg.encode_jpeg( + frame, + quality=self.__jpeg_compression_quality, + colorspace=self.__jpeg_compression_colorspace, + fastdct=self.__jpeg_compression_fastdct, + ) + else: + encodedImage = simplejpeg.encode_jpeg( + frame, + quality=self.__jpeg_compression_quality, + colorspace=self.__jpeg_compression_colorspace, + colorsubsampling="422", + fastdct=self.__jpeg_compression_fastdct, + ) + # yield frame in byte format yield ( b"--frame\r\nContent-Type:image/jpeg\r\n\r\n" + encodedImage + b"\r\n" ) - await asyncio.sleep(0.00001) + # sleep for sometime. + await asyncio.sleep(0) async def __video(self, scope): """ Return a async video streaming response. """ assert scope["type"] in ["http", "https"] - await asyncio.sleep(0.00001) return StreamingResponse( self.config["generator"](), media_type="multipart/x-mixed-replace; boundary=frame", diff --git a/vidgear/gears/asyncio/webgear_rtc.py b/vidgear/gears/asyncio/webgear_rtc.py index 24727305b..86cfc2a37 100644 --- a/vidgear/gears/asyncio/webgear_rtc.py +++ b/vidgear/gears/asyncio/webgear_rtc.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
@@ -18,41 +18,61 @@ =============================================== """ # import the necessary packages - import os import cv2 import sys +import time +import fractions import asyncio import logging as log from collections import deque -from starlette.routing import Mount, Route -from starlette.templating import Jinja2Templates -from starlette.staticfiles import StaticFiles -from starlette.applications import Starlette -from starlette.responses import JSONResponse - -from aiortc.rtcrtpsender import RTCRtpSender -from aiortc import RTCPeerConnection, RTCSessionDescription, VideoStreamTrack -from aiortc.contrib.media import MediaRelay -from av import VideoFrame - +from os.path import expanduser +# import helper packages from .helper import ( reducer, - logger_handler, generate_webdata, create_blank_frame, ) +from ..helper import logger_handler, retrieve_best_interpolation, import_dependency_safe + +# import additional API(s) from ..videogear import VideoGear +# safe import critical Class modules +starlette = import_dependency_safe("starlette", error="silent") +if not (starlette is None): + from starlette.routing import Mount, Route + from starlette.templating import Jinja2Templates + from starlette.staticfiles import StaticFiles + from starlette.applications import Starlette + from starlette.middleware import Middleware + from starlette.responses import JSONResponse, PlainTextResponse +aiortc = import_dependency_safe("aiortc", error="silent") +if not (aiortc is None): + from aiortc.rtcrtpsender import RTCRtpSender + from aiortc import ( + RTCPeerConnection, + RTCSessionDescription, + VideoStreamTrack, + ) + from aiortc.contrib.media import MediaRelay + from aiortc.mediastreams import MediaStreamError + from av import VideoFrame # aiortc dependency + # define logger -logger = log.getLogger("WeGear_RTC") +logger = log.getLogger("WebGear_RTC") if logger.hasHandlers(): logger.handlers.clear() logger.propagate = False logger.addHandler(logger_handler()) logger.setLevel(log.DEBUG) +# add global vars +VIDEO_CLOCK_RATE = 90000 +VIDEO_PTIME = 1 / 30 # 30fps +VIDEO_TIME_BASE = fractions.Fraction(1, VIDEO_CLOCK_RATE) + class RTC_VideoServer(VideoStreamTrack): """ @@ -90,17 +110,25 @@ def __init__( colorspace (str): selects the colorspace of the input stream. logging (bool): enables/disables logging. time_delay (int): time delay (in sec) before start reading the frames. - options (dict): provides ability to alter Tweak Parameters of WeGear_RTC, CamGear, PiGear & Stabilizer. + options (dict): provides ability to alter Tweak Parameters of WebGear_RTC, CamGear, PiGear & Stabilizer. """ super().__init__() # don't forget this! + # raise error(s) for critical Class import + import_dependency_safe("aiortc" if aiortc is None else "") + # initialize global params self.__logging = logging self.__enable_inf = False # continue frames even when video ends. 
- self.__frame_size_reduction = 20 # 20% reduction - self.is_running = True # check if running self.is_launched = False # check if launched already + self.is_running = False # check if running + + self.__frame_size_reduction = 20 # 20% reduction + # retrieve interpolation for reduction + self.__interpolation = retrieve_best_interpolation( + ["INTER_LINEAR_EXACT", "INTER_LINEAR", "INTER_AREA"] + ) if options: if "frame_size_reduction" in options: @@ -146,6 +174,9 @@ def __init__( # initialize blank frame self.blank_frame = None + # handles reset signal + self.__reset_enabled = False + def launch(self): """ Launches VideoGear stream @@ -153,8 +184,35 @@ def launch(self): if self.__logging: logger.debug("Launching Internal RTC Video-Server") self.is_launched = True + self.is_running = True self.__stream.start() + async def next_timestamp(self): + """ + VideoStreamTrack internal method for generating accurate timestamp. + """ + # check if ready state not live + if self.readyState != "live": + # otherwise reset + self.stop() + if hasattr(self, "_timestamp") and not self.__reset_enabled: + self._timestamp += int(VIDEO_PTIME * VIDEO_CLOCK_RATE) + wait = self._start + (self._timestamp / VIDEO_CLOCK_RATE) - time.time() + await asyncio.sleep(wait) + else: + if self.__logging: + logger.debug( + "{} timestamps".format( + "Resetting" if self.__reset_enabled else "Setting" + ) + ) + self._start = time.time() + self._timestamp = 0 + if self.__reset_enabled: + self.__reset_enabled = False + self.is_running = True + return self._timestamp, VIDEO_TIME_BASE + async def recv(self): """ A coroutine function that yields `av.frame.Frame`. @@ -165,17 +223,19 @@ async def recv(self): # read video frame f_stream = None if self.__stream is None: - return None + raise MediaStreamError else: f_stream = self.__stream.read() # display blank if NoneType if f_stream is None: if self.blank_frame is None or not self.is_running: - return None + raise MediaStreamError else: f_stream = self.blank_frame[:] - if not self.__enable_inf: + if not self.__enable_inf and not self.__reset_enabled: + if self.__logging: + logger.debug("Video-Stream Ended.") self.terminate() else: # create blank @@ -188,7 +248,11 @@ async def recv(self): # reducer frames size if specified if self.__frame_size_reduction: - f_stream = await reducer(f_stream, percentage=self.__frame_size_reduction) + f_stream = await reducer( + f_stream, + percentage=self.__frame_size_reduction, + interpolation=self.__interpolation, + ) # construct `av.frame.Frame` from `numpy.nd.array` frame = VideoFrame.from_ndarray(f_stream, format="bgr24") @@ -198,15 +262,20 @@ async def recv(self): # return `av.frame.Frame` return frame + async def reset(self): + """ + Resets timestamp clock + """ + self.__reset_enabled = True + self.is_running = False + def terminate(self): """ Gracefully terminates VideoGear stream """ - # log if not (self.__stream is None): # terminate running flag self.is_running = False - self.is_launched = False if self.__logging: logger.debug("Terminating Internal RTC Video-Server") # terminate @@ -227,7 +296,7 @@ class WebGear_RTC: WebGear_RTC can handle multiple consumers seamlessly and provides native support for ICE (Interactive Connectivity Establishment) protocol, STUN (Session Traversal Utilities for NAT), and TURN (Traversal Using Relays around NAT) servers that help us to easily establish direct media connection with the remote peers for uninterrupted data flow. 
It also allows us to define our custom Server - as a source to manipulate frames easily before sending them across the network(see this doc example). + as a source to transform frames easily before sending them across the network(see this doc example). WebGear_RTC API works in conjunction with Starlette ASGI application and can also flexibly interact with Starlette's ecosystem of shared middleware, mountable applications, Response classes, Routing tables, Static Files, Templating engine(with Jinja2), etc. @@ -253,7 +322,7 @@ def __init__( ): """ - This constructor method initializes the object state and attributes of the WeGear_RTC class. + This constructor method initializes the object state and attributes of the WebGear_RTC class. Parameters: enablePiCamera (bool): provide access to PiGear(if True) or CamGear(if False) APIs respectively. @@ -267,14 +336,17 @@ def __init__( colorspace (str): selects the colorspace of the input stream. logging (bool): enables/disables logging. time_delay (int): time delay (in sec) before start reading the frames. - options (dict): provides ability to alter Tweak Parameters of WeGear_RTC, CamGear, PiGear & Stabilizer. + options (dict): provides ability to alter Tweak Parameters of WebGear_RTC, CamGear, PiGear & Stabilizer. """ + # raise error(s) for critical Class imports + import_dependency_safe("starlette" if starlette is None else "") + import_dependency_safe("aiortc" if aiortc is None else "") # initialize global params self.__logging = logging custom_data_location = "" # path to save data-files to custom location - data_path = "" # path to WeGear_RTC data-files + data_path = "" # path to WebGear_RTC data-files overwrite_default = False self.__relay = None # act as broadcaster @@ -288,12 +360,12 @@ def __init__( if isinstance(value, str): assert os.access( value, os.W_OK - ), "[WeGear_RTC:ERROR] :: Permission Denied!, cannot write WeGear_RTC data-files to '{}' directory!".format( + ), "[WebGear_RTC:ERROR] :: Permission Denied!, cannot write WebGear_RTC data-files to '{}' directory!".format( value ) assert os.path.isdir( os.path.abspath(value) - ), "[WeGear_RTC:ERROR] :: `custom_data_location` value must be the path to a directory and not to a file!" + ), "[WebGear_RTC:ERROR] :: `custom_data_location` value must be the path to a directory and not to a file!" custom_data_location = os.path.abspath(value) else: logger.warning("Skipped invalid `custom_data_location` value!") @@ -316,7 +388,7 @@ def __init__( "enable_infinite_frames" ] = True # enforce infinite frames logger.critical( - "Enabled live broadcasting with emulated infinite frames." + "Enabled live broadcasting for Peer connection(s)." 
) else: None @@ -334,8 +406,6 @@ def __init__( ) else: # otherwise generate suitable path - from os.path import expanduser - data_path = generate_webdata( os.path.join(expanduser("~"), ".vidgear"), c_name="webgear_rtc", @@ -346,7 +416,7 @@ def __init__( # log it if self.__logging: logger.debug( - "`{}` is the default location for saving WeGear_RTC data-files.".format( + "`{}` is the default location for saving WebGear_RTC data-files.".format( data_path ) ) @@ -367,6 +437,9 @@ def __init__( ), ] + # define middleware support + self.middleware = [] + # Handle RTC video server if source is None: self.config = {"server": None} @@ -391,27 +464,35 @@ def __init__( ) # define default frame generator in configuration self.config = {"server": self.__default_rtc_server} - + # add exclusive reset connection node + self.routes.append( + Route("/close_connection", self.__reset_connections, methods=["POST"]) + ) # copying original routing tables for further validation self.__rt_org_copy = self.routes[:] - # keeps check if producer loop should be running - self.__isrunning = True # collects peer RTC connections self.__pcs = set() def __call__(self): """ - Implements a custom Callable method for WeGear_RTC application. + Implements a custom Callable method for WebGear_RTC application. """ # validate routing tables - # validate routing tables assert not (self.routes is None), "Routing tables are NoneType!" if not isinstance(self.routes, list) or not all( x in self.routes for x in self.__rt_org_copy ): - raise RuntimeError("[WeGear_RTC:ERROR] :: Routing tables are not valid!") + raise RuntimeError("[WebGear_RTC:ERROR] :: Routing tables are not valid!") + + # validate middlewares + assert not (self.middleware is None), "Middlewares are NoneType!" + if self.middleware and ( + not isinstance(self.middleware, list) + or not all(isinstance(x, Middleware) for x in self.middleware) + ): + raise RuntimeError("[WebGear_RTC:ERROR] :: Middlewares are not valid!") - # validate assigned RTC video-server in WeGear_RTC configuration + # validate assigned RTC video-server in WebGear_RTC configuration if isinstance(self.config, dict) and "server" in self.config: # check if assigned RTC server class is inherit from `VideoStreamTrack` API.i if self.config["server"] is None or not issubclass( @@ -419,7 +500,7 @@ def __call__(self): ): # otherwise raise error raise ValueError( - "[WeGear_RTC:ERROR] :: Invalid configuration. {}. Refer Docs for more information!".format( + "[WebGear_RTC:ERROR] :: Invalid configuration. {}. Refer Docs for more information!".format( "Video-Server not assigned" if self.config["server"] is None else "Assigned Video-Server class must be inherit from `aiortc.VideoStreamTrack` only" @@ -432,12 +513,12 @@ def __call__(self): ): # otherwise raise error raise ValueError( - "[WeGear_RTC:ERROR] :: Invalid configuration. Assigned Video-Server Class must have `terminate` method defined. Refer Docs for more information!" + "[WebGear_RTC:ERROR] :: Invalid configuration. Assigned Video-Server Class must have `terminate` method defined. Refer Docs for more information!" ) else: # raise error if validation fails raise RuntimeError( - "[WeGear_RTC:ERROR] :: Assigned configuration is invalid!" + "[WebGear_RTC:ERROR] :: Assigned configuration is invalid!" 
) # return Starlette application if self.__logging: @@ -445,6 +526,7 @@ def __call__(self): return Starlette( debug=(True if self.__logging else False), routes=self.routes, + middleware=self.middleware, exception_handlers=self.__exception_handlers, on_shutdown=[self.__on_shutdown], ) @@ -477,9 +559,12 @@ async def __offer(self, request): async def on_iceconnectionstatechange(): logger.debug("ICE connection state is %s" % pc.iceConnectionState) if pc.iceConnectionState == "failed": - logger.error("ICE connection state failed!") - await pc.close() - self.__pcs.discard(pc) + logger.error("ICE connection state failed.") + # check if Live Broadcasting is enabled + if self.__relay is None: + # if not, close connection. + await pc.close() + self.__pcs.discard(pc) # Change the remote description associated with the connection. await pc.setRemoteDescription(offer) @@ -530,6 +615,30 @@ async def __server_error(self, request, exc): "500.html", {"request": request}, status_code=500 ) + async def __reset_connections(self, request): + """ + Resets all connections and recreates VideoServer timestamps + """ + # get additional parameter + parameter = await request.json() + # check if Live Broadcasting is enabled + if ( + self.__relay is None + and not (self.__default_rtc_server is None) + and (self.__default_rtc_server.is_running) + ): + logger.critical("Resetting Server") + # close old peer connections + if parameter != 0: # disable if specified explicitly + coros = [pc.close() for pc in self.__pcs] + await asyncio.gather(*coros) + self.__pcs.clear() + await self.__default_rtc_server.reset() + return PlainTextResponse("OK") + else: + # if does, then do nothing + return PlainTextResponse("DISABLED") + async def __on_shutdown(self): """ Implements a Callable to be run on application shutdown diff --git a/vidgear/gears/camgear.py b/vidgear/gears/camgear.py index b973e11ce..badc22b7d 100644 --- a/vidgear/gears/camgear.py +++ b/vidgear/gears/camgear.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -17,15 +17,15 @@ limitations under the License. =============================================== """ -# import the necessary packages +# import the necessary packages import cv2 import time import queue import logging as log from threading import Thread, Event -from pkg_resources import parse_version +# import helper packages from .helper import ( capPropId, logger_handler, @@ -35,6 +35,7 @@ get_supported_resolution, check_gstreamer_support, dimensions_to_resolutions, + import_dependency_safe, ) # define logger @@ -105,8 +106,7 @@ def __init__( video_url = youtube_url_validator(source) if video_url: # import backend library - import pafy - + pafy = import_dependency_safe("pafy") logger.info("Using Youtube-dl Backend") # create new pafy object source_object = pafy.new(video_url, ydl_opts=stream_params) @@ -129,10 +129,9 @@ def __init__( # handle live-streams if is_live: # Enforce GStreamer backend for YouTube-livestreams - if logging: - logger.critical( - "YouTube livestream URL detected. Enforcing GStreamer backend." - ) + logger.critical( + "YouTube livestream URL detected. Enforcing GStreamer backend." 
+ ) backend = cv2.CAP_GSTREAMER # convert stream dimensions to streams resolutions available_streams = dimensions_to_resolutions( @@ -184,8 +183,9 @@ def __init__( ) else: # import backend library - from streamlink import Streamlink - + Streamlink = import_dependency_safe( + "from streamlink import Streamlink" + ) restore_levelnames() logger.info("Using Streamlink Backend") # check session @@ -247,12 +247,6 @@ def __init__( logger.debug( "Enabling Threaded Queue Mode for the current video source!" ) - if self.__thread_timeout: - logger.debug( - "Setting Video-Thread Timeout to {}s.".format( - self.__thread_timeout - ) - ) else: # otherwise disable it self.__threaded_queue_mode = False @@ -262,6 +256,11 @@ def __init__( "Threaded Queue Mode is disabled for the current video source!" ) + if self.__thread_timeout: + logger.debug( + "Setting Video-Thread Timeout to {}s.".format(self.__thread_timeout) + ) + # stream variable initialization self.stream = None @@ -322,7 +321,7 @@ def __init__( self.__queue.put(self.frame) else: raise RuntimeError( - "[CamGear:ERROR] :: Source is invalid, CamGear failed to intitialize stream on this source!" + "[CamGear:ERROR] :: Source is invalid, CamGear failed to initialize stream on this source!" ) # thread initialization @@ -331,6 +330,9 @@ def __init__( # initialize termination flag event self.__terminate = Event() + # initialize stream read flag event + self.__stream_read = Event() + def start(self): """ Launches the internal *Threaded Frames Extractor* daemon. @@ -349,16 +351,24 @@ def __update(self): until the thread is terminated, or frames runs out. """ - # keep iterating infinitely until the thread is terminated or frames runs out + # keep iterating infinitely + # until the thread is terminated + # or frames runs out while True: # if the thread indicator variable is set, stop the thread if self.__terminate.is_set(): break + # stream not read yet + self.__stream_read.clear() + # otherwise, read the next frame from the stream (grabbed, frame) = self.stream.read() - # check for valid frames + # stream read completed + self.__stream_read.set() + + # check for valid frame if received if not grabbed: # no frames received, then safely exit if self.__threaded_queue_mode: @@ -398,8 +408,10 @@ def __update(self): if self.__threaded_queue_mode: self.__queue.put(self.frame) + # indicate immediate termination self.__threaded_queue_mode = False - self.frame = None + self.__terminate.set() + self.__stream_read.set() # release resources self.stream.release() @@ -412,7 +424,14 @@ def read(self): """ while self.__threaded_queue_mode: return self.__queue.get(timeout=self.__thread_timeout) - return self.frame + # return current frame + # only after stream is read + return ( + self.frame + if not self.__terminate.is_set() # check if already terminated + and self.__stream_read.wait(timeout=self.__thread_timeout) # wait for it + else None + ) def stop(self): """ @@ -424,8 +443,10 @@ def stop(self): if self.__threaded_queue_mode: self.__threaded_queue_mode = False - # indicate that the thread should be terminate + # indicate that the thread + # should be terminated immediately self.__terminate.set() + self.__stream_read.set() # wait until stream resources are released (producer thread might be still grabbing frame) if self.__thread is not None: diff --git a/vidgear/gears/helper.py b/vidgear/gears/helper.py index a406ad167..b5ade1d8f 100755 --- a/vidgear/gears/helper.py +++ b/vidgear/gears/helper.py @@ -2,7 +2,7 @@ =============================================== vidgear 
library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -21,40 +21,31 @@ # Contains all the support functions/modules required by Vidgear packages # import the necessary packages - import os import re import sys +import cv2 +import types import errno import shutil +import importlib +import requests import numpy as np import logging as log import platform -import requests +import socket from tqdm import tqdm +from contextlib import closing +from pathlib import Path from colorlog import ColoredFormatter -from pkg_resources import parse_version +from distutils.version import LooseVersion from requests.adapters import HTTPAdapter from requests.packages.urllib3.util.retry import Retry -try: - # import OpenCV Binaries - import cv2 - - # check whether OpenCV Binaries are 3.x+ - if parse_version(cv2.__version__) < parse_version("3"): - raise ImportError( - "[Vidgear:ERROR] :: Installed OpenCV API version(< 3.0) is not supported!" - ) -except ImportError: - raise ImportError( - "[Vidgear:ERROR] :: Failed to detect correct OpenCV executables, install it with `pip3 install opencv-python` command." - ) - def logger_handler(): """ - ### logger_handler + ## logger_handler Returns the logger handler @@ -62,7 +53,7 @@ def logger_handler(): """ # logging formatter formatter = ColoredFormatter( - "%(bold_cyan)s%(asctime)s :: %(bold_blue)s%(name)s%(reset)s :: %(log_color)s%(levelname)s%(reset)s :: %(message)s", + "%(bold_blue)s%(name)s%(reset)s :: %(log_color)s%(levelname)s%(reset)s :: %(message)s", datefmt="%H:%M:%S", reset=True, log_colors={ @@ -103,6 +94,134 @@ def logger_handler(): logger.addHandler(logger_handler()) logger.setLevel(log.DEBUG) + +def get_module_version(module=None): + """ + ## get_module_version + + Retrieves version of specified module + + Parameters: + module (ModuleType): module of datatype `ModuleType`. + + **Returns:** version of specified module as string + """ + # check if module type is valid + assert not (module is None) and isinstance( + module, types.ModuleType + ), "[Vidgear:ERROR] :: Invalid module!" + + # get version from attribute + version = getattr(module, "__version__", None) + # retry if failed + if version is None: + # some modules use a capitalized attribute name + version = getattr(module, "__VERSION__", None) + # raise if still failed + if version is None: + raise ImportError( + "[Vidgear:ERROR] :: Can't determine version for module: `{}`!".format( + module.__name__ + ) + ) + return str(version) + + +def import_dependency_safe( + name, + error="raise", + pkg_name=None, + min_version=None, + custom_message=None, +): + """ + ## import_dependency_safe + + Imports specified dependency safely. By default(`error = raise`), if a dependency is missing, + an ImportError with a meaningful message will be raised. Otherwise, if `error = log`, a warning + will be logged, and if `error = silent`, the failure is ignored and `None` is returned. If a dependency is present + but older than the specified minimum version, an error is raised. + + Parameters: + name (string): name of dependency to be imported. + error (string): whether to raise, log, or silence the ImportError. Possible values are `"raise"`, `"log"` and `"silent"`. Default is `"raise"`. + pkg_name (string): (Optional) package name of dependency(if different `pip` name). Otherwise `name` will be used.
+ min_version (string): (Optional) required minimum version of the dependency to be imported. + custom_message (string): (Optional) custom Import error message to be raised or logged. + + **Returns:** The imported module, when found and the version is correct (if specified). Otherwise `None`. + """ + # check specified parameters + sub_class = "" + if not name or not isinstance(name, str): + return None + else: + # extract name in case of relative import + name = name.strip() + if name.startswith("from"): + name = name.split(" ") + name, sub_class = (name[1].strip(), name[-1].strip()) + + assert error in [ + "raise", + "log", + "silent", + ], "[Vidgear:ERROR] :: Invalid value at `error` parameter." + + # specify package name of dependency(if defined). Otherwise use name + install_name = pkg_name if not (pkg_name is None) else name + + # create message + msg = ( + custom_message + if not (custom_message is None) + else "Failed to find required dependency '{}'. Install it with `pip install {}` command.".format( + name, install_name + ) + ) + # try importing dependency + try: + module = importlib.import_module(name) + if sub_class: + module = getattr(module, sub_class) + except Exception: + # handle errors. + if error == "raise": + raise ImportError(msg) from None + elif error == "log": + logger.error(msg) + return None + else: + return None + + # check if minimum required version + if not (min_version) is None: + # Handle submodules + parent_module = name.split(".")[0] + if parent_module != name: + # grab parent module + module_to_get = sys.modules[parent_module] + else: + module_to_get = module + # extract version + version = get_module_version(module_to_get) + # verify + if LooseVersion(version) < LooseVersion(min_version): + # create message + msg = """Unsupported version '{}' found. Vidgear requires '{}' dependency installed with version '{}' or greater. + Update it with `pip install -U {}` command.""".format( + version, parent_module, min_version, install_name + ) + # handle errors. + if error == "silent": + return None + else: + # raise + raise ImportError(msg) + + return module + + # set default timer for download requests DEFAULT_TIMEOUT = 3 @@ -128,7 +247,7 @@ def send(self, request, **kwargs): def restore_levelnames(): """ - ### restore_levelnames + ## restore_levelnames Auxiliary method to restore logger levelnames. """ @@ -145,19 +264,40 @@ def restore_levelnames(): def check_CV_version(): """ - ### check_CV_version + ## check_CV_version **Returns:** OpenCV's version first bit """ - if parse_version(cv2.__version__) >= parse_version("4"): + if LooseVersion(cv2.__version__) >= LooseVersion("4"): return 4 else: return 3 +def check_open_port(address, port=22): + """ + ## check_open_port + + Checks whether specified port is open at given IP address. + + Parameters: + address (string): given IP address. + port (int): check if port is open at given address. + + **Returns:** A boolean value, confirming whether given port is open at given IP address. + """ + if not address: + return False + with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as sock: + if sock.connect_ex((address, port)) == 0: + return True + else: + return False + + def check_WriteAccess(path, is_windows=False): """ - ### check_WriteAccess + ## check_WriteAccess Checks whether given path directory has Write-Access.
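Illustration: a minimal sketch of how the new `import_dependency_safe` helper above is consumed by the API modules patched earlier in this diff. Optional dependencies are imported silently at module level, and a descriptive ImportError is raised only when the class that needs them is actually constructed:

    from vidgear.gears.helper import import_dependency_safe

    # silently returns None if simplejpeg is missing or older than 1.6.1
    simplejpeg = import_dependency_safe("simplejpeg", error="silent", min_version="1.6.1")

    # later, inside the constructor that actually requires it:
    # raises a descriptive ImportError only if the dependency is still missing
    import_dependency_safe("simplejpeg" if simplejpeg is None else "", min_version="1.6.1")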
@@ -182,13 +322,13 @@ def check_WriteAccess(path, is_windows=False): write_accessible = False finally: if os.path.exists(temp_fname): - os.remove(temp_fname) + delete_file_safe(temp_fname) return write_accessible def check_gstreamer_support(logging=False): """ - ### check_gstreamer_support + ## check_gstreamer_support Checks whether OpenCV is compiled with Gstreamer(`>=1.0.0`) support. @@ -215,7 +355,7 @@ def check_gstreamer_support(logging=False): def get_supported_resolution(value, logging=False): """ - ### get_supported_resolution + ## get_supported_resolution Parameters: value (string): value to be validated @@ -261,7 +401,7 @@ def get_supported_resolution(value, logging=False): def dimensions_to_resolutions(value): """ - ### dimensions_to_resolutions + ## dimensions_to_resolutions Parameters: value (list): list of dimensions (e.g. `640x360`) @@ -277,6 +417,7 @@ def dimensions_to_resolutions(value): "1920x1080": "1080p", "2560x1440": "1440p", "3840x2160": "2160p", + "7680x4320": "4320p", } return ( list(map(supported_resolutions.get, value, value)) @@ -287,7 +428,7 @@ def dimensions_to_resolutions(value): def get_supported_vencoders(path): """ - ### get_supported_vencoders + ## get_supported_vencoders Find and returns FFmpeg's supported video encoders @@ -305,16 +446,38 @@ def get_supported_vencoders(path): if x.decode("utf-8").strip().startswith("V") ] # compile regex - finder = re.compile(r"\.\.\s[a-z0-9_-]+") + finder = re.compile(r"[A-Z]*[\.]+[A-Z]*\s[a-z0-9_-]*") # find all outputs outputs = finder.findall("\n".join(supported_vencoders)) - # return outputs - return [s.replace(".. ", "") for s in outputs] + # return output findings + return [[s for s in o.split(" ")][-1] for o in outputs] + + +def get_supported_demuxers(path): + """ + ## get_supported_demuxers + + Find and returns FFmpeg's supported demuxers + + Parameters: + path (string): absolute path of FFmpeg binaries + + **Returns:** List of supported demuxers. 
+ """ + demuxers = check_output([path, "-hide_banner", "-demuxers"]) + splitted = [x.decode("utf-8").strip() for x in demuxers.split(b"\n")] + supported_demuxers = splitted[splitted.index("--") + 1 : len(splitted) - 1] + # compile regex + finder = re.compile(r"\s\s[a-z0-9_,-]+\s+") + # find all outputs + outputs = finder.findall("\n".join(supported_demuxers)) + # return output findings + return [o.strip() for o in outputs] def is_valid_url(path, url=None, logging=False): """ - ### is_valid_url + ## is_valid_url Checks URL validity by testing its scheme against FFmpeg's supported protocols @@ -333,11 +496,10 @@ def is_valid_url(path, url=None, logging=False): extracted_scheme_url = url.split("://", 1)[0] # extract all FFmpeg supported protocols protocols = check_output([path, "-hide_banner", "-protocols"]) - splitted = protocols.split(b"\n") - supported_protocols = [ - x.decode("utf-8").strip() for x in splitted[2 : len(splitted) - 1] - ] - supported_protocols += ["rtsp"] # rtsp not included somehow + splitted = [x.decode("utf-8").strip() for x in protocols.split(b"\n")] + supported_protocols = splitted[splitted.index("Output:") + 1 : len(splitted) - 1] + # rtsp is a demuxer somehow + supported_protocols += ["rtsp"] if "rtsp" in get_supported_demuxers(path) else [] # Test and return result whether scheme is supported if extracted_scheme_url and extracted_scheme_url in supported_protocols: if logging: @@ -352,9 +514,9 @@ def is_valid_url(path, url=None, logging=False): return False -def validate_video(path, video_path=None): +def validate_video(path, video_path=None, logging=False): """ - ### validate_video + ## validate_video Validates video by retrieving resolution/size and framerate from file. @@ -374,6 +536,8 @@ def validate_video(path, video_path=None): ) # clean and search stripped_data = [x.decode("utf-8").strip() for x in metadata.split(b"\n")] + if logging: + logger.debug(stripped_data) result = {} for data in stripped_data: output_a = re.findall(r"([1-9]\d+)x([1-9]\d+)", data) @@ -391,7 +555,7 @@ def validate_video(path, video_path=None): def create_blank_frame(frame=None, text="", logging=False): """ - ### create_blank_frame + ## create_blank_frame Create blank frames of given frame size with text @@ -401,12 +565,12 @@ def create_blank_frame(frame=None, text="", logging=False): **Returns:** A reduced numpy ndarray array. """ # check if frame is valid - if frame is None: - raise ValueError("[Helper:ERROR] :: Input frame cannot be NoneType!") + if frame is None or not (isinstance(frame, np.ndarray)): + raise ValueError("[Helper:ERROR] :: Input frame is invalid!") # grab the frame size (height, width) = frame.shape[:2] # create blank frame - blank_frame = np.zeros((height, width, 3), np.uint8) + blank_frame = np.zeros(frame.shape, frame.dtype) # setup text if text and isinstance(text, str): if logging: @@ -423,13 +587,14 @@ def create_blank_frame(frame=None, text="", logging=False): cv2.putText( blank_frame, text, (textX, textY), font, fontScale, (125, 125, 125), 6 ) + # return frame return blank_frame def extract_time(value): """ - ### extract_time + ## extract_time Extract time from give string value. @@ -456,31 +621,36 @@ def extract_time(value): ) -def validate_audio(path, file_path=None): +def validate_audio(path, source=None): """ - ### validate_audio + ## validate_audio Validates audio by retrieving audio-bitrate from file. Parameters: path (string): absolute path of FFmpeg binaries - file_path (string): absolute path to file to be validated. 
+ source (string/list): source to be validated. **Returns:** A string value, confirming whether audio is present, or not?. """ - if file_path is None or not (file_path): - logger.warning("File path is empty!") + if source is None or not (source): + logger.warning("Audio input source is empty!") return "" - # extract audio sample-rate from metadata - metadata = check_output( - [path, "-hide_banner", "-i", file_path], force_retrieve_stderr=True + # create ffmpeg command + cmd = [path, "-hide_banner"] + ( + source if isinstance(source, list) else ["-i", source] ) + # extract audio sample-rate from metadata + metadata = check_output(cmd, force_retrieve_stderr=True) audio_bitrate = re.findall(r"fltp,\s[0-9]+\s\w\w[/]s", metadata.decode("utf-8")) + sample_rate_identifiers = ["Audio", "Hz"] + ( + ["fltp"] if isinstance(source, str) else [] + ) audio_sample_rate = [ line.strip() for line in metadata.decode("utf-8").split("\n") - if all(x in line for x in ["Audio", "Hz", "fltp"]) + if all(x in line for x in sample_rate_identifiers) ] if audio_bitrate: filtered = audio_bitrate[0].split(" ")[1:3] @@ -502,7 +672,7 @@ def validate_audio(path, file_path=None): def get_video_bitrate(width, height, fps, bpp): """ - ### get_video_bitrate + ## get_video_bitrate Calculate optimum Bitrate from resolution, framerate, bits-per-pixels values @@ -517,9 +687,29 @@ def get_video_bitrate(width, height, fps, bpp): return round((width * height * bpp * fps) / 1000) +def delete_file_safe(file_path): + """ + ## delete_ext_safe + + Safely deletes files at given path. + + Parameters: + file_path (string): path to the file + """ + try: + dfile = Path(file_path) + if sys.version_info >= (3, 8, 0): + dfile.unlink(missing_ok=True) + else: + if dfile.exists(): + dfile.unlink() + except Exception as e: + logger.exception(e) + + def mkdir_safe(dir_path, logging=False): """ - ### mkdir_safe + ## mkdir_safe Safely creates directory at given path. @@ -539,9 +729,9 @@ def mkdir_safe(dir_path, logging=False): logger.debug("Directory already exists at `{}`".format(dir_path)) -def delete_safe(dir_path, extensions=[], logging=False): +def delete_ext_safe(dir_path, extensions=[], logging=False): """ - ### delete_safe + ## delete_ext_safe Safely deletes files with given extensions at given path. @@ -555,27 +745,36 @@ def delete_safe(dir_path, extensions=[], logging=False): logger.warning("Invalid input provided for deleting!") return - if logging: - logger.debug("Clearing Assets at `{}`!".format(dir_path)) + logger.critical("Clearing Assets at `{}`!".format(dir_path)) for ext in extensions: - files_ext = [ - os.path.join(dir_path, f) for f in os.listdir(dir_path) if f.endswith(ext) - ] + if len(ext) == 2: + files_ext = [ + os.path.join(dir_path, f) + for f in os.listdir(dir_path) + if f.startswith(ext[0]) and f.endswith(ext[1]) + ] + else: + files_ext = [ + os.path.join(dir_path, f) + for f in os.listdir(dir_path) + if f.endswith(ext) + ] for file in files_ext: - os.remove(file) + delete_file_safe(file) if logging: logger.debug("Deleted file: `{}`".format(file)) -def capPropId(property): +def capPropId(property, logging=True): """ - ### capPropId + ## capPropId Retrieves the OpenCV property's Integer(Actual) value from string. Parameters: property (string): inputs OpenCV property as string. + logging (bool): enables logging for its operations **Returns:** Resultant integer value. 
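As a quick sanity check of the bitrate heuristic retained above, `round((width * height * bpp * fps) / 1000)` puts a 1080p/30fps stream at roughly 6.2 Mbps for a typical 0.1 bits-per-pixel:

```python
from vidgear.gears.helper import get_video_bitrate

# round((1920 * 1080 * 0.1 * 30) / 1000) == 6221 kbps (~6.2 Mbps)
print(get_video_bitrate(1920, 1080, 30, 0.1))  # -> 6221
```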
""" @@ -583,15 +782,33 @@ def capPropId(property): try: integer_value = getattr(cv2, property) except Exception as e: - logger.exception(str(e)) - logger.critical("`{}` is not a valid OpenCV property!".format(property)) + if logging: + logger.exception(str(e)) + logger.critical("`{}` is not a valid OpenCV property!".format(property)) return None return integer_value +def retrieve_best_interpolation(interpolations): + """ + ## retrieve_best_interpolation + Retrieves best interpolation for resizing + + Parameters: + interpolations (list): list of interpolations as string. + **Returns:** Resultant integer value of found interpolation. + """ + if isinstance(interpolations, list): + for intp in interpolations: + interpolation = capPropId(intp, logging=False) + if not (interpolation is None): + return interpolation + return None + + def youtube_url_validator(url): """ - ### youtube_url_validator + ## youtube_url_validator Validates & extracts Youtube video ID from URL. @@ -612,15 +829,16 @@ def youtube_url_validator(url): return "" -def reducer(frame=None, percentage=0): +def reducer(frame=None, percentage=0, interpolation=cv2.INTER_LANCZOS4): """ - ### reducer + ## reducer Reduces frame size by given percentage Parameters: frame (numpy.ndarray): inputs numpy array(frame). percentage (int/float): inputs size-reduction percentage. + interpolation (int): Change resize interpolation. **Returns:** A reduced numpy ndarray array. """ @@ -634,6 +852,11 @@ def reducer(frame=None, percentage=0): "[Helper:ERROR] :: Given frame-size reduction percentage is invalid, Kindly refer docs." ) + if not (isinstance(interpolation, int)): + raise ValueError( + "[Helper:ERROR] :: Given interpolation is invalid, Kindly refer docs." + ) + # grab the frame size (height, width) = frame.shape[:2] @@ -644,12 +867,12 @@ def reducer(frame=None, percentage=0): dimensions = (int(reduction), int(height * ratio)) # return the resized frame - return cv2.resize(frame, dimensions, interpolation=cv2.INTER_LANCZOS4) + return cv2.resize(frame, dimensions, interpolation=interpolation) def dict2Args(param_dict): """ - ### dict2Args + ## dict2Args Converts dictionary attributes to list(args) @@ -680,7 +903,7 @@ def get_valid_ffmpeg_path( custom_ffmpeg="", is_windows=False, ffmpeg_download_path="", logging=False ): """ - ### get_valid_ffmpeg_path + ## get_valid_ffmpeg_path Validate the given FFmpeg path/binaries, and returns a valid FFmpeg executable path. @@ -773,7 +996,7 @@ def get_valid_ffmpeg_path( def download_ffmpeg_binaries(path, os_windows=False, os_bit=""): """ - ### download_ffmpeg_binaries + ## download_ffmpeg_binaries Generates FFmpeg Static Binaries for windows(if not available) @@ -813,7 +1036,7 @@ def download_ffmpeg_binaries(path, os_windows=False, os_bit=""): ) # remove leftovers if exists if os.path.isfile(file_name): - os.remove(file_name) + delete_file_safe(file_name) # download and write file to the given path with open(file_name, "wb") as f: logger.debug( @@ -847,7 +1070,7 @@ def download_ffmpeg_binaries(path, os_windows=False, os_bit=""): zip_fname, _ = os.path.split(zip_ref.infolist()[0].filename) zip_ref.extractall(base_path) # perform cleaning - os.remove(file_name) + delete_file_safe(file_name) logger.debug("FFmpeg binaries for Windows configured successfully!") final_path += file_path # return final path @@ -856,7 +1079,7 @@ def download_ffmpeg_binaries(path, os_windows=False, os_bit=""): def validate_ffmpeg(path, logging=False): """ - ### validate_ffmpeg + ## validate_ffmpeg Validate FFmeg Binaries. 
returns `True` if tests are passed. @@ -890,7 +1113,7 @@ def validate_ffmpeg(path, logging=False): def check_output(*args, **kwargs): """ - ### check_output + ## check_output Returns stdin output from subprocess module """ @@ -910,7 +1133,7 @@ def check_output(*args, **kwargs): stdout=sp.PIPE, stderr=sp.DEVNULL if not (retrieve_stderr) else sp.PIPE, *args, - **kwargs + **kwargs, ) output, stderr = process.communicate() retcode = process.poll() @@ -929,7 +1152,7 @@ def check_output(*args, **kwargs): def generate_auth_certificates(path, overwrite=False, logging=False): """ - ### generate_auth_certificates + ## generate_auth_certificates Auto-Generates, and Auto-validates CURVE ZMQ key-pairs for NetGear API's Secure Mode. @@ -981,7 +1204,7 @@ def generate_auth_certificates(path, overwrite=False, logging=False): # clean redundant keys if present redundant_key = os.path.join(keys_dir, key_file) if os.path.isfile(redundant_key): - os.remove(redundant_key) + delete_file_safe(redundant_key) else: # otherwise validate available keys status_public_keys = validate_auth_keys(public_keys_dir, ".key") @@ -1021,7 +1244,7 @@ def generate_auth_certificates(path, overwrite=False, logging=False): # clean redundant keys if present redundant_key = os.path.join(keys_dir, key_file) if os.path.isfile(redundant_key): - os.remove(redundant_key) + delete_file_safe(redundant_key) # validate newly generated keys status_public_keys = validate_auth_keys(public_keys_dir, ".key") @@ -1041,7 +1264,7 @@ def generate_auth_certificates(path, overwrite=False, logging=False): def validate_auth_keys(path, extension): """ - ### validate_auth_keys + ## validate_auth_keys Validates, and also maintains generated ZMQ CURVE Key-pairs. @@ -1070,7 +1293,7 @@ def validate_auth_keys(path, extension): # remove invalid keys if found if len(keys_buffer) == 1: - os.remove(os.path.join(path, keys_buffer[0])) + delete_file_safe(os.path.join(path, keys_buffer[0])) # return results return True if (len(keys_buffer) == 2) else False diff --git a/vidgear/gears/netgear.py b/vidgear/gears/netgear.py index 1163d8f94..53be57f89 100644 --- a/vidgear/gears/netgear.py +++ b/vidgear/gears/netgear.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
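The CURVE key-pair helpers patched above (now cleaning up stray keys via `delete_file_safe`) feed NetGear's secure mode below; a rough stand-alone sketch of regenerating and validating the certificates under a throwaway directory (assuming pyzmq is installed):

```python
import tempfile
from vidgear.gears.helper import generate_auth_certificates, validate_auth_keys

# regenerate CURVE key-pairs under a temporary location
keys_dir, secret_keys_dir, public_keys_dir = generate_auth_certificates(
    tempfile.mkdtemp(), overwrite=True, logging=True
)

# both key directories should now hold a valid server/client pair
print(validate_auth_keys(public_keys_dir, ".key"))         # expected: True
print(validate_auth_keys(secret_keys_dir, ".key_secret"))  # expected: True
```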
@@ -18,19 +18,35 @@ =============================================== """ # import the necessary packages - import os import cv2 import time -import simplejpeg +import string +import secrets import numpy as np -import random import logging as log from threading import Thread from collections import deque -from pkg_resources import parse_version - -from .helper import logger_handler, generate_auth_certificates, check_WriteAccess +from os.path import expanduser + +# import helper packages +from .helper import ( + logger_handler, + generate_auth_certificates, + check_WriteAccess, + check_open_port, + import_dependency_safe, +) + +# safe import critical Class modules +zmq = import_dependency_safe("zmq", pkg_name="pyzmq", error="silent", min_version="4.0") +if not (zmq is None): + from zmq import ssh + from zmq import auth + from zmq.auth.thread import ThreadAuthenticator + from zmq.error import ZMQError +simplejpeg = import_dependency_safe("simplejpeg", error="silent", min_version="1.6.1") +paramiko = import_dependency_safe("paramiko", error="silent") # define logger logger = log.getLogger("NetGear") @@ -116,21 +132,11 @@ def __init__( logging (bool): enables/disables logging. options (dict): provides the flexibility to alter various NetGear internal properties. """ - - try: - # import PyZMQ library - import zmq - from zmq.error import ZMQError - - # assign values to global variable for further use - self.__zmq = zmq - self.__ZMQError = ZMQError - - except ImportError as error: - # raise error - raise ImportError( - "[NetGear:ERROR] :: pyzmq python library not installed. Kindly install it with `pip install pyzmq` command." - ) + # raise error(s) for critical Class imports + import_dependency_safe("zmq" if zmq is None else "", min_version="4.0") + import_dependency_safe( + "simplejpeg" if simplejpeg is None else "", error="log", min_version="1.6.1" + ) # enable logging if specified self.__logging = True if logging else False @@ -177,14 +183,20 @@ def __init__( # Handle NetGear's internal exclusive modes and params + # define SSH Tunneling Mode + self.__ssh_tunnel_mode = None # handles ssh_tunneling mode state + self.__ssh_tunnel_pwd = None + self.__ssh_tunnel_keyfile = None + self.__paramiko_present = False if paramiko is None else True + # define Multi-Server mode - self.__multiserver_mode = False # handles multi-server_mode state + self.__multiserver_mode = False # handles multi-server mode state # define Multi-Client mode - self.__multiclient_mode = False # handles multi-client_mode state + self.__multiclient_mode = False # handles multi-client mode state - # define Bi-directional mode - self.__bi_mode = False # handles bi-directional mode state + # define Bidirectional mode + self.__bi_mode = False # handles Bidirectional mode state # define Secure mode valid_security_mech = {0: "Grasslands", 1: "StoneHouse", 2: "IronHouse"} @@ -196,10 +208,13 @@ def __init__( custom_cert_location = "" # handles custom ZMQ certificates path # define frame-compression handler - self.__jpeg_compression = True # enabled by default for all connections + self.__jpeg_compression = ( + True if not (simplejpeg is None) else False + ) # enabled by default for all connections if simplejpeg is installed self.__jpeg_compression_quality = 90 # 90% quality self.__jpeg_compression_fastdct = True # fastest DCT on by default self.__jpeg_compression_fastupsample = False # fastupsample off by default + self.__jpeg_compression_colorspace = "BGR" # use BGR colorspace by default # defines frame compression on return data 
self.__ex_compression_params = None @@ -207,8 +222,10 @@ def __init__( # define receiver return data handler self.__return_data = None - # generate random system id - self.__id = "".join(random.choice("0123456789ABCDEF") for i in range(5)) + # generate 8-digit random system id + self.__id = "".join( + secrets.choice(string.ascii_uppercase + string.digits) for i in range(8) + ) # define termination flag self.__terminate = False @@ -223,13 +240,12 @@ def __init__( self.__request_timeout = 4000 # 4 secs # Handle user-defined options dictionary values - # reformat dictionary options = {str(k).strip(): v for k, v in options.items()} # loop over dictionary key & values and assign to global variables if valid for key, value in options.items(): - + # handle multi-server mode if key == "multiserver_mode" and isinstance(value, bool): # check if valid pattern assigned if pattern > 0: @@ -245,7 +261,8 @@ def __init__( ) ) - if key == "multiclient_mode" and isinstance(value, bool): + # handle multi-client mode + elif key == "multiclient_mode" and isinstance(value, bool): # check if valid pattern assigned if pattern > 0: # activate Multi-client mode @@ -260,17 +277,28 @@ def __init__( ) ) + # handle bidirectional mode + elif key == "bidirectional_mode" and isinstance(value, bool): + # check if pattern is valid + if pattern < 2: + # activate Bidirectional mode if specified + self.__bi_mode = value + else: + # otherwise disable it and raise error + self.__bi_mode = False + logger.warning("Bidirectional data transmission is disabled!") + raise ValueError( + "[NetGear:ERROR] :: `{}` pattern is not valid when Bidirectional Mode is enabled. Kindly refer Docs for more Information!".format( + pattern + ) + ) + + # handle secure mode elif ( key == "secure_mode" and isinstance(value, int) and (value in valid_security_mech) ): - # check if installed libzmq version is valid - assert zmq.zmq_version_info() >= ( - 4, - 0, - ), "[NetGear:ERROR] :: ZMQ Security feature is not supported in libzmq version < 4.0." - # assign valid mode self.__secure_mode = value elif key == "custom_cert_location" and isinstance(value, str): @@ -285,29 +313,54 @@ def __init__( ), "[NetGear:ERROR] :: Permission Denied!, cannot write ZMQ authentication certificates to '{}' directory!".format( value ) - elif key == "overwrite_cert" and isinstance(value, bool): # enable/disable auth certificate overwriting in secure mode overwrite_cert = value - elif key == "bidirectional_mode" and isinstance(value, bool): - # check if pattern is valid - if pattern < 2: - # activate bi-directional mode if specified - self.__bi_mode = value - else: - # otherwise disable it and raise error - self.__bi_mode = False - logger.critical("Bi-Directional data transmission is disabled!") - raise ValueError( - "[NetGear:ERROR] :: `{}` pattern is not valid when Bi-Directional Mode is enabled. 
Kindly refer Docs for more Information!".format( - pattern + # handle ssh-tunneling mode + elif key == "ssh_tunnel_mode" and isinstance(value, str): + # enable SSH Tunneling Mode + self.__ssh_tunnel_mode = value.strip() + elif key == "ssh_tunnel_pwd" and isinstance(value, str): + # add valid SSH Tunneling password + self.__ssh_tunnel_pwd = value + elif key == "ssh_tunnel_keyfile" and isinstance(value, str): + # add valid SSH Tunneling key-file + self.__ssh_tunnel_keyfile = value if os.path.isfile(value) else None + if self.__ssh_tunnel_keyfile is None: + logger.warning( + "Discarded invalid or non-existential SSH Tunnel Key-file at {}!".format( + value ) ) - elif key == "jpeg_compression" and isinstance(value, bool): - # enable frame-compression encoding value - self.__jpeg_compression = value + # handle jpeg compression + elif ( + key == "jpeg_compression" + and not (simplejpeg is None) + and isinstance(value, (bool, str)) + ): + if isinstance(value, str) and value.strip().upper() in [ + "RGB", + "BGR", + "RGBX", + "BGRX", + "XBGR", + "XRGB", + "GRAY", + "RGBA", + "BGRA", + "ABGR", + "ARGB", + "CMYK", + ]: + # set encoding colorspace + self.__jpeg_compression_colorspace = value.strip().upper() + # enable frame-compression encoding value + self.__jpeg_compression = True + else: + # enable frame-compression encoding value + self.__jpeg_compression = value elif key == "jpeg_compression_quality" and isinstance(value, (int, float)): # set valid jpeg quality if value >= 10 and value <= 100: @@ -334,7 +387,7 @@ def __init__( else: logger.warning("Invalid `request_timeout` value skipped!") - # assign ZMQ flags + # handle ZMQ flags elif key == "flag" and isinstance(value, int): self.__msg_flag = value elif key == "copy" and isinstance(value, bool): @@ -347,10 +400,6 @@ def __init__( # Handle Secure mode if self.__secure_mode: - # import required libs - import zmq.auth - from zmq.auth.thread import ThreadAuthenticator - # activate and log if overwriting is enabled if overwrite_cert: if not receive_mode: @@ -378,8 +427,6 @@ def __init__( ) else: # otherwise auto-generate suitable path - from os.path import expanduser - ( auth_cert_dir, self.__auth_secretkeys_dir, @@ -404,27 +451,74 @@ def __init__( "ZMQ Security Mechanism is disabled for this connection due to errors!" ) - # Handle multiple exclusive modes if enabled + # Handle ssh tunneling if enabled + if not (self.__ssh_tunnel_mode is None): + # SSH Tunnel Mode only available for server mode + if receive_mode: + logger.error("SSH Tunneling cannot be enabled for Client-end!") + else: + # check if SSH tunneling possible + ssh_address = self.__ssh_tunnel_mode + ssh_address, ssh_port = ( + ssh_address.split(":") + if ":" in ssh_address + else [ssh_address, "22"] + ) # default to port 22 + if "47" in ssh_port: + self.__ssh_tunnel_mode = self.__ssh_tunnel_mode.replace( + ":47", "" + ) # port-47 is reserved for testing + else: + # extract ip for validation + ssh_user, ssh_ip = ( + ssh_address.split("@") + if "@" in ssh_address + else ["", ssh_address] + ) + # validate ip specified port + assert check_open_port( + ssh_ip, port=int(ssh_port) + ), "[NetGear:ERROR] :: Host `{}` is not available for SSH Tunneling at port-{}!".format( + ssh_address, ssh_port + ) + # Handle multiple exclusive modes if enabled if self.__multiclient_mode and self.__multiserver_mode: raise ValueError( "[NetGear:ERROR] :: Multi-Client and Multi-Server Mode cannot be enabled simultaneously!" 
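The SSH-tunnelling options parsed above take the usual `[user@]host[:port]` form (port 22 by default) and are honoured only on the sending end; a hedged sketch of a server-side options dictionary (host, user and key path are placeholders, the host must be reachable over SSH, and either paramiko or pexpect must be installed before this will run):

```python
from vidgear.gears import NetGear

# placeholder values: substitute a reachable SSH host before running
options = {
    "ssh_tunnel_mode": "user@203.0.113.10:22",       # "[user@]host[:port]"
    "ssh_tunnel_keyfile": "/home/user/.ssh/id_rsa",  # or use "ssh_tunnel_pwd" instead
}

# tunnelling is rejected for receive_mode=True, so this only makes sense on the server
server = NetGear(receive_mode=False, pattern=1, logging=True, **options)
server.close()
```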
) elif self.__multiserver_mode or self.__multiclient_mode: - # check if Bi-directional Mode also enabled + # check if Bidirectional Mode also enabled if self.__bi_mode: # disable bi_mode if enabled self.__bi_mode = False logger.warning( - "Bi-Directional Data Transmission is disabled when {} Mode is Enabled due to incompatibility!".format( + "Bidirectional Data Transmission is disabled when {} Mode is Enabled due to incompatibility!".format( + "Multi-Server" if self.__multiserver_mode else "Multi-Client" + ) + ) + # check if SSH Tunneling Mode also enabled + if self.__ssh_tunnel_mode: + # raise error + raise ValueError( + "[NetGear:ERROR] :: SSH Tunneling and {} Mode cannot be enabled simultaneously. Kindly refer docs!".format( "Multi-Server" if self.__multiserver_mode else "Multi-Client" ) ) elif self.__bi_mode: - # log Bi-directional mode activation + # log Bidirectional mode activation + if self.__logging: + logger.debug( + "Bidirectional Data Transmission is enabled for this connection!" + ) + elif self.__ssh_tunnel_mode: + # log Bidirectional mode activation if self.__logging: logger.debug( - "Bi-Directional Data Transmission is enabled for this connection!" + "SSH Tunneling is enabled for host:`{}` with `{}` back-end.".format( + self.__ssh_tunnel_mode, + "paramiko" if self.__paramiko_present else "pexpect", + ) ) # define messaging context instance @@ -481,20 +575,20 @@ def __init__( # activate secure_mode threaded authenticator if self.__secure_mode > 0: # start an authenticator for this context - auth = ThreadAuthenticator(self.__msg_context) - auth.start() - auth.allow(str(address)) # allow current address + z_auth = ThreadAuthenticator(self.__msg_context) + z_auth.start() + z_auth.allow(str(address)) # allow current address # check if `IronHouse` is activated if self.__secure_mode == 2: # tell authenticator to use the certificate from given valid dir - auth.configure_curve( + z_auth.configure_curve( domain="*", location=self.__auth_publickeys_dir ) else: # otherwise tell the authenticator how to handle the CURVE requests, if `StoneHouse` is activated - auth.configure_curve( - domain="*", location=zmq.auth.CURVE_ALLOW_ANY + z_auth.configure_curve( + domain="*", location=auth.CURVE_ALLOW_ANY ) # define thread-safe messaging socket @@ -510,7 +604,7 @@ def __init__( server_secret_file = os.path.join( self.__auth_secretkeys_dir, "server.key_secret" ) - server_public, server_secret = zmq.auth.load_certificate( + server_public, server_secret = auth.load_certificate( server_secret_file ) # load all CURVE keys @@ -580,7 +674,7 @@ def __init__( else: if self.__bi_mode: logger.critical( - "Failed to activate Bi-Directional Mode for this connection!" + "Failed to activate Bidirectional Mode for this connection!" ) raise RuntimeError( "[NetGear:ERROR] :: Receive Mode failed to bind address: {} and pattern: {}! 
Kindly recheck all parameters.".format( @@ -611,7 +705,8 @@ def __init__( ) if self.__jpeg_compression: logger.debug( - "JPEG Frame-Compression is activated for this connection with Quality:`{}`%, Fastdct:`{}`, and Fastupsample:`{}`.".format( + "JPEG Frame-Compression is activated for this connection with Colorspace:`{}`, Quality:`{}`%, Fastdct:`{}`, and Fastupsample:`{}`.".format( + self.__jpeg_compression_colorspace, self.__jpeg_compression_quality, "enabled" if self.__jpeg_compression_fastdct @@ -680,20 +775,20 @@ def __init__( # activate secure_mode threaded authenticator if self.__secure_mode > 0: # start an authenticator for this context - auth = ThreadAuthenticator(self.__msg_context) - auth.start() - auth.allow(str(address)) # allow current address + z_auth = ThreadAuthenticator(self.__msg_context) + z_auth.start() + z_auth.allow(str(address)) # allow current address # check if `IronHouse` is activated if self.__secure_mode == 2: # tell authenticator to use the certificate from given valid dir - auth.configure_curve( + z_auth.configure_curve( domain="*", location=self.__auth_publickeys_dir ) else: # otherwise tell the authenticator how to handle the CURVE requests, if `StoneHouse` is activated - auth.configure_curve( - domain="*", location=zmq.auth.CURVE_ALLOW_ANY + z_auth.configure_curve( + domain="*", location=auth.CURVE_ALLOW_ANY ) # define thread-safe messaging socket @@ -714,7 +809,7 @@ def __init__( client_secret_file = os.path.join( self.__auth_secretkeys_dir, "client.key_secret" ) - client_public, client_secret = zmq.auth.load_certificate( + client_public, client_secret = auth.load_certificate( client_secret_file ) # load all CURVE keys @@ -724,7 +819,7 @@ def __init__( server_public_file = os.path.join( self.__auth_publickeys_dir, "server.key" ) - server_public, _ = zmq.auth.load_certificate(server_public_file) + server_public, _ = auth.load_certificate(server_public_file) # inject public key to make a CURVE connection. self.__msg_socket.curve_serverkey = server_public @@ -736,10 +831,22 @@ def __init__( protocol + "://" + str(address) + ":" + str(pt) ) else: - # connect socket to given protocol, address and port - self.__msg_socket.connect( - protocol + "://" + str(address) + ":" + str(port) - ) + # handle SSH tuneling if enabled + if self.__ssh_tunnel_mode: + # establish tunnel connection + ssh.tunnel_connection( + self.__msg_socket, + protocol + "://" + str(address) + ":" + str(port), + self.__ssh_tunnel_mode, + keyfile=self.__ssh_tunnel_keyfile, + password=self.__ssh_tunnel_pwd, + paramiko=self.__paramiko_present, + ) + else: + # connect socket to given protocol, address and port + self.__msg_socket.connect( + protocol + "://" + str(address) + ":" + str(port) + ) # additional settings if pattern < 2: @@ -785,7 +892,13 @@ def __init__( else: if self.__bi_mode: logger.critical( - "Failed to activate Bi-Directional Mode for this connection!" + "Failed to activate Bidirectional Mode for this connection!" + ) + if self.__ssh_tunnel_mode: + logger.critical( + "Failed to initiate SSH Tunneling Mode for this server with `{}` back-end!".format( + "paramiko" if self.__paramiko_present else "pexpect" + ) ) raise RuntimeError( "[NetGear:ERROR] :: Send Mode failed to connect address: {} and pattern: {}! 
Kindly recheck all parameters.".format( @@ -802,7 +915,8 @@ def __init__( ) if self.__jpeg_compression: logger.debug( - "JPEG Frame-Compression is activated for this connection with Quality:`{}`%, Fastdct:`{}`, and Fastupsample:`{}`.".format( + "JPEG Frame-Compression is activated for this connection with Colorspace:`{}`, Quality:`{}`%, Fastdct:`{}`, and Fastupsample:`{}`.".format( + self.__jpeg_compression_colorspace, self.__jpeg_compression_quality, "enabled" if self.__jpeg_compression_fastdct @@ -843,9 +957,9 @@ def __recv_handler(self): if self.__pattern < 2: socks = dict(self.__poll.poll(self.__request_timeout * 3)) - if socks.get(self.__msg_socket) == self.__zmq.POLLIN: + if socks.get(self.__msg_socket) == zmq.POLLIN: msg_json = self.__msg_socket.recv_json( - flags=self.__msg_flag | self.__zmq.DONTWAIT + flags=self.__msg_flag | zmq.DONTWAIT ) else: logger.critical("No response from Server(s), Reconnecting again...") @@ -875,7 +989,7 @@ def __recv_handler(self): logger.exception(str(e)) self.__terminate = True raise RuntimeError("API failed to restart the Client-end!") - self.__poll.register(self.__msg_socket, self.__zmq.POLLIN) + self.__poll.register(self.__msg_socket, zmq.POLLIN) continue else: @@ -919,21 +1033,20 @@ def __recv_handler(self): continue msg_data = self.__msg_socket.recv( - flags=self.__msg_flag | self.__zmq.DONTWAIT, + flags=self.__msg_flag | zmq.DONTWAIT, copy=self.__msg_copy, track=self.__msg_track, ) + # handle data transfer in synchronous modes. if self.__pattern < 2: - if self.__bi_mode or self.__multiclient_mode: - + # check if we are returning `ndarray` frames if not (self.__return_data is None) and isinstance( self.__return_data, np.ndarray ): - - # handle return data - return_data = self.__return_data[:] + # handle return data for compression + return_data = np.copy(self.__return_data) # check whether exit_flag is False if not (return_data.flags["C_CONTIGUOUS"]): @@ -944,52 +1057,52 @@ def __recv_handler(self): # handle jpeg-compression encoding if self.__jpeg_compression: - return_data = simplejpeg.encode_jpeg( - return_data, - quality=self.__jpeg_compression_quality, - colorspace="BGR", - colorsubsampling="422", - fastdct=self.__jpeg_compression_fastdct, - ) + if self.__jpeg_compression_colorspace == "GRAY": + if return_data.ndim == 2: + # patch for https://gitlab.com/jfolz/simplejpeg/-/issues/11 + return_data = return_data[:, :, np.newaxis] + return_data = simplejpeg.encode_jpeg( + return_data, + quality=self.__jpeg_compression_quality, + colorspace=self.__jpeg_compression_colorspace, + fastdct=self.__jpeg_compression_fastdct, + ) + else: + return_data = simplejpeg.encode_jpeg( + return_data, + quality=self.__jpeg_compression_quality, + colorspace=self.__jpeg_compression_colorspace, + colorsubsampling="422", + fastdct=self.__jpeg_compression_fastdct, + ) - if self.__bi_mode: - return_dict = dict( - return_type=(type(return_data).__name__), - compression={ - "dct": self.__jpeg_compression_fastdct, - "ups": self.__jpeg_compression_fastupsample, - } - if self.__jpeg_compression - else False, - array_dtype=str(return_data.dtype) - if not (self.__jpeg_compression) - else "", - array_shape=return_data.shape - if not (self.__jpeg_compression) - else "", - data=None, - ) - else: - return_dict = dict( - port=self.__port, - return_type=(type(return_data).__name__), + return_dict = ( + dict() if self.__bi_mode else dict(port=self.__port) + ) + + return_dict.update( + dict( + return_type=(type(self.__return_data).__name__), compression={ "dct": 
self.__jpeg_compression_fastdct, "ups": self.__jpeg_compression_fastupsample, + "colorspace": self.__jpeg_compression_colorspace, } if self.__jpeg_compression else False, - array_dtype=str(return_data.dtype) + array_dtype=str(self.__return_data.dtype) if not (self.__jpeg_compression) else "", - array_shape=return_data.shape + array_shape=self.__return_data.shape if not (self.__jpeg_compression) else "", data=None, ) + ) + # send the json dict self.__msg_socket.send_json( - return_dict, self.__msg_flag | self.__zmq.SNDMORE + return_dict, self.__msg_flag | zmq.SNDMORE ) # send the array with correct flags self.__msg_socket.send( @@ -999,17 +1112,15 @@ def __recv_handler(self): track=self.__msg_track, ) else: - if self.__bi_mode: - return_dict = dict( - return_type=(type(self.__return_data).__name__), - data=self.__return_data, - ) - else: - return_dict = dict( - port=self.__port, + return_dict = ( + dict() if self.__bi_mode else dict(port=self.__port) + ) + return_dict.update( + dict( return_type=(type(self.__return_data).__name__), data=self.__return_data, ) + ) self.__msg_socket.send_json(return_dict, self.__msg_flag) else: # send confirmation message to server @@ -1017,7 +1128,8 @@ def __recv_handler(self): "Data received on device: {} !".format(self.__id) ) else: - if self.__return_data and self.__logging: + # else raise warning + if self.__return_data: logger.warning("`return_data` is disabled for this pattern!") # check if encoding was enabled @@ -1025,7 +1137,7 @@ def __recv_handler(self): # decode JPEG frame frame = simplejpeg.decode_jpeg( msg_data, - colorspace="BGR", + colorspace=msg_json["compression"]["colorspace"], fastdct=self.__jpeg_compression_fastdct or msg_json["compression"]["dct"], fastupsample=self.__jpeg_compression_fastupsample @@ -1038,6 +1150,9 @@ def __recv_handler(self): raise RuntimeError( "[NetGear:ERROR] :: Received compressed JPEG frame decoding failed" ) + if msg_json["compression"]["colorspace"] == "GRAY" and frame.ndim == 3: + # patch for https://gitlab.com/jfolz/simplejpeg/-/issues/11 + frame = np.squeeze(frame, axis=2) else: # recover and reshape frame from buffer frame_buffer = np.frombuffer(msg_data, dtype=msg_json["dtype"]) @@ -1054,7 +1169,7 @@ def __recv_handler(self): else: # append recovered unique port and frame to queue self.__queue.append((msg_json["port"], frame)) - # extract if any message from server if Bi-Directional Mode is enabled + # extract if any message from server if Bidirectional Mode is enabled elif self.__bi_mode: if msg_json["message"]: # append grouped frame and data to queue @@ -1083,7 +1198,7 @@ def recv(self, return_data=None): "[NetGear:ERROR] :: `recv()` function cannot be used while receive_mode is disabled. Kindly refer vidgear docs!" 
) - # handle bi-directional return data + # handle Bidirectional return data if (self.__bi_mode or self.__multiclient_mode) and not (return_data is None): self.__return_data = return_data @@ -1137,40 +1252,38 @@ def send(self, frame, message=None): # check whether the incoming frame is contiguous frame = np.ascontiguousarray(frame, dtype=frame.dtype) - # handle JPEG compresssion encoding + # handle JPEG compression encoding if self.__jpeg_compression: - frame = simplejpeg.encode_jpeg( - frame, - quality=self.__jpeg_compression_quality, - colorspace="BGR", - colorsubsampling="422", - fastdct=self.__jpeg_compression_fastdct, - ) + if self.__jpeg_compression_colorspace == "GRAY": + if frame.ndim == 2: + # patch for https://gitlab.com/jfolz/simplejpeg/-/issues/11 + frame = np.expand_dims(frame, axis=2) + frame = simplejpeg.encode_jpeg( + frame, + quality=self.__jpeg_compression_quality, + colorspace=self.__jpeg_compression_colorspace, + fastdct=self.__jpeg_compression_fastdct, + ) + else: + frame = simplejpeg.encode_jpeg( + frame, + quality=self.__jpeg_compression_quality, + colorspace=self.__jpeg_compression_colorspace, + colorsubsampling="422", + fastdct=self.__jpeg_compression_fastdct, + ) - # check if multiserver_mode is activated - if self.__multiserver_mode: - # prepare the exclusive json dict and assign values with unique port - msg_dict = dict( - terminate_flag=exit_flag, - compression={ - "dct": self.__jpeg_compression_fastdct, - "ups": self.__jpeg_compression_fastupsample, - } - if self.__jpeg_compression - else False, - port=self.__port, - pattern=str(self.__pattern), - message=message, - dtype=str(frame.dtype) if not (self.__jpeg_compression) else "", - shape=frame.shape if not (self.__jpeg_compression) else "", - ) - else: - # otherwise prepare normal json dict and assign values - msg_dict = dict( + # check if multiserver_mode is activated and assign values with unique port + msg_dict = dict(port=self.__port) if self.__multiserver_mode else dict() + + # prepare the exclusive json dict + msg_dict.update( + dict( terminate_flag=exit_flag, compression={ "dct": self.__jpeg_compression_fastdct, "ups": self.__jpeg_compression_fastupsample, + "colorspace": self.__jpeg_compression_colorspace, } if self.__jpeg_compression else False, @@ -1179,9 +1292,10 @@ def send(self, frame, message=None): dtype=str(frame.dtype) if not (self.__jpeg_compression) else "", shape=frame.shape if not (self.__jpeg_compression) else "", ) + ) # send the json dict - self.__msg_socket.send_json(msg_dict, self.__msg_flag | self.__zmq.SNDMORE) + self.__msg_socket.send_json(msg_dict, self.__msg_flag | zmq.SNDMORE) # send the frame array with correct flags self.__msg_socket.send( frame, flags=self.__msg_flag, copy=self.__msg_copy, track=self.__msg_track @@ -1189,20 +1303,20 @@ def send(self, frame, message=None): # check if synchronous patterns, then wait for confirmation if self.__pattern < 2: - # check if bi-directional data transmission is enabled + # check if Bidirectional data transmission is enabled if self.__bi_mode or self.__multiclient_mode: # handles return data recvd_data = None socks = dict(self.__poll.poll(self.__request_timeout)) - if socks.get(self.__msg_socket) == self.__zmq.POLLIN: + if socks.get(self.__msg_socket) == zmq.POLLIN: # handle return data recv_json = self.__msg_socket.recv_json(flags=self.__msg_flag) else: logger.critical("No response from Client, Reconnecting again...") # Socket is confused. Close and remove it. 
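Pulling the pieces above together, a rough single-process sketch of Bidirectional mode combined with the new `GRAY` JPEG colorspace (both ends would normally live in separate scripts; default address/port are used, and simplejpeg must be installed for the compression path to be exercised):

```python
import numpy as np
from vidgear.gears import NetGear

options = {"bidirectional_mode": True, "jpeg_compression": "GRAY"}

# both ends in one process purely for illustration
client = NetGear(receive_mode=True, pattern=1, logging=False, **options)
server = NetGear(receive_mode=False, pattern=1, logging=False, **options)

gray_frame = np.random.randint(0, 255, (240, 320), dtype=np.uint8)  # 2D grayscale frame
server.send(gray_frame, message="hello client")        # returns the client's reply data, if any
data, frame = client.recv(return_data="hello server")  # (server message, frame) in Bidirectional mode

print(data)         # -> "hello client"
print(frame.shape)  # -> (240, 320): squeezed back to 2D by the GRAY patch above

server.close()
client.close()
```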
- self.__msg_socket.setsockopt(self.__zmq.LINGER, 0) + self.__msg_socket.setsockopt(zmq.LINGER, 0) self.__msg_socket.close() self.__poll.unregister(self.__msg_socket) self.__max_retries -= 1 @@ -1223,15 +1337,26 @@ def send(self, frame, message=None): # Create new connection self.__msg_socket = self.__msg_context.socket(self.__msg_pattern) - if isinstance(self.__connection_address, list): for _connection in self.__connection_address: self.__msg_socket.connect(_connection) else: - self.__msg_socket.connect(self.__connection_address) - - self.__poll.register(self.__msg_socket, self.__zmq.POLLIN) - + # handle SSH tunneling if enabled + if self.__ssh_tunnel_mode: + # establish tunnel connection + ssh.tunnel_connection( + self.__msg_socket, + self.__connection_address, + self.__ssh_tunnel_mode, + keyfile=self.__ssh_tunnel_keyfile, + password=self.__ssh_tunnel_pwd, + paramiko=self.__paramiko_present, + ) + else: + # connect normally + self.__msg_socket.connect(self.__connection_address) + self.__poll.register(self.__msg_socket, zmq.POLLIN) + # return None for mean-time return None # save the unique port addresses @@ -1252,7 +1377,7 @@ def send(self, frame, message=None): # decode JPEG frame recvd_data = simplejpeg.decode_jpeg( recv_array, - colorspace="BGR", + colorspace=recv_json["compression"]["colorspace"], fastdct=self.__jpeg_compression_fastdct or recv_json["compression"]["dct"], fastupsample=self.__jpeg_compression_fastupsample @@ -1268,6 +1393,13 @@ def send(self, frame, message=None): self.__ex_compression_params, ) ) + + if ( + recv_json["compression"]["colorspace"] == "GRAY" + and recvd_data.ndim == 3 + ): + # patch for https://gitlab.com/jfolz/simplejpeg/-/issues/11 + recvd_data = np.squeeze(recvd_data, axis=2) else: recvd_data = np.frombuffer( recv_array, dtype=recv_json["array_dtype"] @@ -1283,12 +1415,12 @@ def send(self, frame, message=None): else: # otherwise log normally socks = dict(self.__poll.poll(self.__request_timeout)) - if socks.get(self.__msg_socket) == self.__zmq.POLLIN: + if socks.get(self.__msg_socket) == zmq.POLLIN: recv_confirmation = self.__msg_socket.recv() else: logger.critical("No response from Client, Reconnecting again...") # Socket is confused. Close and remove it. 
- self.__msg_socket.setsockopt(self.__zmq.LINGER, 0) + self.__msg_socket.setsockopt(zmq.LINGER, 0) self.__msg_socket.close() self.__poll.unregister(self.__msg_socket) self.__max_retries -= 1 @@ -1302,8 +1434,21 @@ def send(self, frame, message=None): # Create new connection self.__msg_socket = self.__msg_context.socket(self.__msg_pattern) - self.__msg_socket.connect(self.__connection_address) - self.__poll.register(self.__msg_socket, self.__zmq.POLLIN) + # handle SSH tunneling if enabled + if self.__ssh_tunnel_mode: + # establish tunnel connection + ssh.tunnel_connection( + self.__msg_socket, + self.__connection_address, + self.__ssh_tunnel_mode, + keyfile=self.__ssh_tunnel_keyfile, + password=self.__ssh_tunnel_pwd, + paramiko=self.__paramiko_present, + ) + else: + # connect normally + self.__msg_socket.connect(self.__connection_address) + self.__poll.register(self.__msg_socket, zmq.POLLIN) return None @@ -1351,11 +1496,12 @@ def close(self): ): try: # properly close the socket - self.__msg_socket.setsockopt(self.__zmq.LINGER, 0) + self.__msg_socket.setsockopt(zmq.LINGER, 0) self.__msg_socket.close() - except self.__ZMQError: + except ZMQError: pass finally: + # exit return if self.__multiserver_mode: @@ -1368,35 +1514,23 @@ def close(self): try: if self.__multiclient_mode: - if self.__port_buffer: - for _ in self.__port_buffer: - self.__msg_socket.send_json(term_dict) - - # check for confirmation if available within half timeout - if self.__pattern < 2: - if self.__logging: - logger.debug("Terminating. Please wait...") - if self.__msg_socket.poll( - self.__request_timeout // 5, self.__zmq.POLLIN - ): - self.__msg_socket.recv() + for _ in self.__port_buffer: + self.__msg_socket.send_json(term_dict) else: self.__msg_socket.send_json(term_dict) - # check for confirmation if available within half timeout - if self.__pattern < 2: - if self.__logging: - logger.debug("Terminating. Please wait...") - if self.__msg_socket.poll( - self.__request_timeout // 5, self.__zmq.POLLIN - ): - self.__msg_socket.recv() + # check for confirmation if available within 1/5 timeout + if self.__pattern < 2: + if self.__logging: + logger.debug("Terminating. Please wait...") + if self.__msg_socket.poll(self.__request_timeout // 5, zmq.POLLIN): + self.__msg_socket.recv() except Exception as e: - if not isinstance(e, self.__ZMQError): + if not isinstance(e, ZMQError): logger.exception(str(e)) finally: # properly close the socket - self.__msg_socket.setsockopt(self.__zmq.LINGER, 0) + self.__msg_socket.setsockopt(zmq.LINGER, 0) self.__msg_socket.close() if self.__logging: logger.debug("Terminated Successfully!") diff --git a/vidgear/gears/pigear.py b/vidgear/gears/pigear.py index f5757671a..975284406 100644 --- a/vidgear/gears/pigear.py +++ b/vidgear/gears/pigear.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -17,17 +17,21 @@ limitations under the License. 
=============================================== """ - -# import the packages - +# import the necessary packages import cv2 import sys import time import logging as log from threading import Thread -from pkg_resources import parse_version -from .helper import capPropId, logger_handler +# import helper packages +from .helper import capPropId, logger_handler, import_dependency_safe + +# safe import critical Class modules +picamera = import_dependency_safe("picamera", error="silent") +if not (picamera is None): + from picamera import PiCamera + from picamera.array import PiRGBArray # define logger logger = log.getLogger("PiGear") @@ -70,22 +74,10 @@ def __init__( time_delay (int): time delay (in sec) before start reading the frames. options (dict): provides ability to alter Source Tweak Parameters. """ - - try: - import picamera - from picamera import PiCamera - from picamera.array import PiRGBArray - except Exception as error: - if isinstance(error, ImportError): - # Output expected ImportErrors. - raise ImportError( - '[PiGear:ERROR] :: Failed to detect Picamera executables, install it with "pip3 install picamera" command.' - ) - else: - # Handle any API errors - raise RuntimeError( - "[PiGear:ERROR] :: Picamera API failure: {}".format(error) - ) + # raise error(s) for critical Class imports + import_dependency_safe( + "picamera" if picamera is None else "", + ) # enable logging if specified self.__logging = False diff --git a/vidgear/gears/screengear.py b/vidgear/gears/screengear.py index 5d7805469..094dd1d0b 100644 --- a/vidgear/gears/screengear.py +++ b/vidgear/gears/screengear.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -18,21 +18,24 @@ =============================================== """ # import the necessary packages - import cv2 import time import queue import numpy as np import logging as log -from mss import mss -import pyscreenshot as pysct from threading import Thread, Event from collections import deque, OrderedDict -from pkg_resources import parse_version -from mss.exception import ScreenShotError -from pyscreenshot.err import FailedBackendError -from .helper import capPropId, logger_handler +# import helper packages +from .helper import import_dependency_safe, capPropId, logger_handler + +# safe import critical Class modules +mss = import_dependency_safe("from mss import mss", error="silent") +if not (mss is None): + from mss.exception import ScreenShotError +pysct = import_dependency_safe("pyscreenshot", error="silent") +if not (pysct is None): + from pyscreenshot.err import FailedBackendError # define logger logger = log.getLogger("ScreenGear") @@ -62,6 +65,10 @@ def __init__( logging (bool): enables/disables logging. options (dict): provides the flexibility to manually set the dimensions of capture screen area. 
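With the lazy imports above, a missing backend no longer breaks `import vidgear` itself; the hard failure is deferred until the class is constructed. A tiny sketch of what that looks like for PiGear (the resolution and framerate values are arbitrary):

```python
from vidgear.gears import PiGear

try:
    # on machines without picamera this now raises the helper's descriptive ImportError
    stream = PiGear(resolution=(640, 480), framerate=30, logging=True).start()
except ImportError as err:
    print("PiGear unavailable:", err)
```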
""" + # raise error(s) for critical Class imports + import_dependency_safe("mss.mss" if mss is None else "") + import_dependency_safe("pyscreenshot" if pysct is None else "") + # enable logging if specified: self.__logging = logging if isinstance(logging, bool) else False diff --git a/vidgear/gears/stabilizer.py b/vidgear/gears/stabilizer.py index 2f61c24da..2bf34e6eb 100644 --- a/vidgear/gears/stabilizer.py +++ b/vidgear/gears/stabilizer.py @@ -5,7 +5,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -21,13 +21,13 @@ =============================================== """ # import the necessary packages - import cv2 import numpy as np import logging as log from collections import deque -from .helper import logger_handler, check_CV_version +# import helper packages +from .helper import logger_handler, check_CV_version, retrieve_best_interpolation # define logger logger = log.getLogger("Stabilizer") @@ -138,6 +138,11 @@ def __init__( # define OpenCV version self.__cv2_version = check_CV_version() + # retrieve best interpolation + self.__interpolation = retrieve_best_interpolation( + ["INTER_LINEAR_EXACT", "INTER_LINEAR", "INTER_AREA"] + ) + # define normalized box filter self.__box_filter = np.ones(smoothing_radius) / smoothing_radius @@ -374,11 +379,10 @@ def __apply_transformations(self): self.__crop_n_zoom : -self.__crop_n_zoom, ] # zoom stabilized frame - interpolation = ( - cv2.INTER_CUBIC if (self.__cv2_version < 4) else cv2.INTER_LINEAR_EXACT - ) frame_stabilized = cv2.resize( - frame_cropped, self.__frame_size[::-1], interpolation=interpolation + frame_cropped, + self.__frame_size[::-1], + interpolation=self.__interpolation, ) # finally return stabilized frame diff --git a/vidgear/gears/streamgear.py b/vidgear/gears/streamgear.py index 8e6f00338..e1ef5cb33 100644 --- a/vidgear/gears/streamgear.py +++ b/vidgear/gears/streamgear.py @@ -18,23 +18,23 @@ =============================================== """ # import the necessary packages - import os import cv2 import sys import time +import math import difflib import logging as log import subprocess as sp from tqdm import tqdm from fractions import Fraction -from pkg_resources import parse_version from collections import OrderedDict +# import helper packages from .helper import ( capPropId, dict2Args, - delete_safe, + delete_ext_safe, extract_time, is_valid_url, logger_handler, @@ -47,30 +47,39 @@ # define logger logger = log.getLogger("StreamGear") +logger.propagate = False logger.addHandler(logger_handler()) logger.setLevel(log.DEBUG) class StreamGear: """ - StreamGear automates transcoding workflow for generating Ultra-Low Latency, High-Quality, Dynamic & Adaptive Streaming Formats (such as MPEG-DASH) in just few lines of python code. + StreamGear automates transcoding workflow for generating Ultra-Low Latency, High-Quality, Dynamic & Adaptive Streaming Formats (such as MPEG-DASH and HLS) in just few lines of python code. StreamGear provides a standalone, highly extensible, and flexible wrapper around FFmpeg multimedia framework for generating chunked-encoded media segments of the content. 
- SteamGear easily transcodes source videos/audio files & real-time video-frames and breaks them into a sequence of multiple smaller chunks/segments of fixed length. These segments make it + SteamGear easily transcodes source videos/audio files & real-time video-frames and breaks them into a sequence of multiple smaller chunks/segments of suitable length. These segments make it possible to stream videos at different quality levels (different bitrates or spatial resolutions) and can be switched in the middle of a video from one quality level to another – if bandwidth permits – on a per-segment basis. A user can serve these segments on a web server that makes it easier to download them through HTTP standard-compliant GET requests. - SteamGear also creates a Manifest file (such as MPD in-case of DASH) besides segments that describe these segment information (timing, URL, media characteristics like video resolution and bit rates) + SteamGear also creates a Manifest/Playlist file (such as MPD in-case of DASH and M3U8 in-case of HLS) besides segments that describe these segment information (timing, URL, media characteristics like video resolution and bit rates) and is provided to the client before the streaming session. - SteamGear currently only supports MPEG-DASH (Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1) , but other adaptive streaming technologies such as Apple HLS, Microsoft Smooth Streaming, will be - added soon. Also, Multiple DRM support is yet to be implemented. + SteamGear currently supports MPEG-DASH (Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1) and Apple HLS (HTTP live streaming). """ def __init__( self, output="", format="dash", custom_ffmpeg="", logging=False, **stream_params ): + """ + This constructor method initializes the object state and attributes of the StreamGear class. + Parameters: + output (str): sets the valid filename/path for storing the StreamGear assets. + format (str): select the adaptive HTTP streaming format(DASH and HLS). + custom_ffmpeg (str): assigns the location of custom path/directory for custom FFmpeg executables. + logging (bool): enables/disables logging. + stream_params (dict): provides the flexibility to control supported internal parameters and FFmpeg properities. + """ # checks if machine in-use is running windows os or not self.__os_windows = True if os.name == "nt" else False # enable logging if specified @@ -134,6 +143,8 @@ def __init__( self.__audio = audio else: self.__audio = "" + elif audio and isinstance(audio, list): + self.__audio = audio else: self.__audio = "" @@ -173,7 +184,7 @@ def __init__( ) ) else: - logger.warning("No valid video_source detected.") + logger.warning("No valid video_source provided.") else: self.__video_source = "" @@ -199,12 +210,17 @@ def __init__( self.__livestreaming = False # handle Streaming formats - supported_formats = ["dash"] # will be extended in future + supported_formats = ["dash", "hls"] # will be extended in future # Validate if not (format is None) and format and isinstance(format, str): _format = format.strip().lower() if _format in supported_formats: self.__format = _format + logger.info( + "StreamGear will generate files for {} HTTP streaming format.".format( + self.__format.upper() + ) + ) elif difflib.get_close_matches(_format, supported_formats): raise ValueError( "[StreamGear:ERROR] :: Incorrect format! 
Did you mean `{}`?".format( @@ -236,9 +252,23 @@ def __init__( ): # check if given path is directory valid_extension = "mpd" if self.__format == "dash" else "m3u8" + # get all assets extensions + assets_exts = [ + ("chunk-stream", ".m4s"), # filename prefix, extension + ("chunk-stream", ".ts"), # filename prefix, extension + ".{}".format(valid_extension), + ] + # add source file extension too + if self.__video_source: + assets_exts.append( + ( + "chunk-stream", + os.path.splitext(self.__video_source)[1], + ) # filename prefix, extension + ) if os.path.isdir(abs_path): if self.__clear_assets: - delete_safe(abs_path, [".m4s", ".mpd"], logging=self.__logging) + delete_ext_safe(abs_path, assets_exts, logging=self.__logging) abs_path = os.path.join( abs_path, "{}-{}.{}".format( @@ -248,9 +278,9 @@ def __init__( ), ) # auto-assign valid name and adds it to path elif self.__clear_assets and os.path.isfile(abs_path): - delete_safe( + delete_ext_safe( os.path.dirname(abs_path), - [".m4s", ".mpd"], + assets_exts, logging=self.__logging, ) # check if path has valid file extension @@ -285,7 +315,7 @@ def __init__( ) ) # log Mode of operation - logger.critical( + logger.info( "StreamGear has been successfully configured for {} Mode.".format( "Single-Source" if self.__video_source else "Real-time Frames" ) @@ -395,14 +425,14 @@ def __PreProcess(self, channels=0, rgb=False): "libx265", "libvpx-vp9", ]: - output_parameters["-crf"] = self.__params.pop("-crf", "18") + output_parameters["-crf"] = self.__params.pop("-crf", "20") if output_parameters["-vcodec"] in ["libx264", "libx264rgb"]: if not (self.__video_source): output_parameters["-profile:v"] = self.__params.pop( "-profile:v", "high" ) output_parameters["-tune"] = self.__params.pop("-tune", "zerolatency") - output_parameters["-preset"] = self.__params.pop("-preset", "ultrafast") + output_parameters["-preset"] = self.__params.pop("-preset", "veryfast") if output_parameters["-vcodec"] == "libx265": output_parameters["-x265-params"] = self.__params.pop( "-x265-params", "lossless=1" @@ -410,28 +440,40 @@ def __PreProcess(self, channels=0, rgb=False): # enable audio (if present) if self.__audio: # validate audio source - bitrate = validate_audio(self.__ffmpeg, file_path=self.__audio) + bitrate = validate_audio(self.__ffmpeg, source=self.__audio) if bitrate: logger.info( "Detected External Audio Source is valid, and will be used for streams." 
) - # assign audio - output_parameters["-i"] = self.__audio + + # assign audio source + output_parameters[ + "{}".format( + "-core_asource" if isinstance(self.__audio, list) else "-i" + ) + ] = self.__audio + # assign audio codec - output_parameters["-acodec"] = self.__params.pop("-acodec", "copy") + output_parameters["-acodec"] = self.__params.pop( + "-acodec", "aac" if isinstance(self.__audio, list) else "copy" + ) output_parameters["a_bitrate"] = bitrate # temporary handler - output_parameters["-core_audio"] = ["-map", "1:a:0"] + output_parameters["-core_audio"] = ( + ["-map", "1:a:0"] if self.__format == "dash" else [] + ) else: - logger.critical( + logger.warning( "Audio source `{}` is not valid, Skipped!".format(self.__audio) ) elif self.__video_source: # validate audio source - bitrate = validate_audio(self.__ffmpeg, file_path=self.__video_source) + bitrate = validate_audio(self.__ffmpeg, source=self.__video_source) if bitrate: logger.info("Source Audio will be used for streams.") # assign audio codec - output_parameters["-acodec"] = "copy" + output_parameters["-acodec"] = ( + "aac" if self.__format == "hls" else "copy" + ) output_parameters["a_bitrate"] = bitrate # temporary handler else: logger.warning( @@ -472,12 +514,9 @@ def __PreProcess(self, channels=0, rgb=False): "[StreamGear:ERROR] :: Frames with channels outside range 1-to-4 are not supported!" ) # process assigned format parameters - process_params = None - if self.__format == "dash": - process_params = self.__generate_dash_stream( - input_params=input_parameters, - output_params=output_parameters, - ) + process_params = self.__handle_streams( + input_params=input_parameters, output_params=output_parameters + ) # check if processing completed successfully assert not ( process_params is None @@ -487,22 +526,130 @@ def __PreProcess(self, channels=0, rgb=False): # Finally start FFmpef pipline and process everything self.__Build_n_Execute(process_params[0], process_params[1]) + def __handle_streams(self, input_params, output_params): + """ + An internal function that parses various streams and its parameters. 
+ + Parameters: + input_params (dict): Input FFmpeg parameters + output_params (dict): Output FFmpeg parameters + """ + # handle bit-per-pixels + bpp = self.__params.pop("-bpp", 0.1000) + if isinstance(bpp, (float, int)) and bpp > 0.0: + bpp = float(bpp) if (bpp > 0.001) else 0.1000 + else: + # reset to defaut if invalid + bpp = 0.1000 + # log it + if self.__logging: + logger.debug("Setting bit-per-pixels: {} for this stream.".format(bpp)) + + # handle gop + gop = self.__params.pop("-gop", 0) + if isinstance(gop, (int, float)) and gop > 0: + gop = int(gop) + else: + # reset to some recommended value + gop = 2 * int(self.__sourceframerate) + # log it + if self.__logging: + logger.debug("Setting GOP: {} for this stream.".format(gop)) + + # define and map default stream + if self.__format != "hls": + output_params["-map"] = 0 + else: + output_params["-corev0"] = ["-map", "0:v"] + if "-acodec" in output_params: + output_params["-corea0"] = [ + "-map", + "{}:a".format(1 if "-core_audio" in output_params else 0), + ] + # assign resolution + if "-s:v:0" in self.__params: + # prevent duplicates + del self.__params["-s:v:0"] + output_params["-s:v:0"] = "{}x{}".format(self.__inputwidth, self.__inputheight) + # assign video-bitrate + if "-b:v:0" in self.__params: + # prevent duplicates + del self.__params["-b:v:0"] + output_params["-b:v:0"] = ( + str( + get_video_bitrate( + int(self.__inputwidth), + int(self.__inputheight), + self.__sourceframerate, + bpp, + ) + ) + + "k" + ) + # assign audio-bitrate + if "-b:a:0" in self.__params: + # prevent duplicates + del self.__params["-b:a:0"] + # extract audio-bitrate from temporary handler + a_bitrate = output_params.pop("a_bitrate", "") + if "-acodec" in output_params and a_bitrate: + output_params["-b:a:0"] = a_bitrate + + # handle user-defined streams + streams = self.__params.pop("-streams", {}) + output_params = self.__evaluate_streams(streams, output_params, bpp) + + # define additional stream optimization parameters + if output_params["-vcodec"] in ["libx264", "libx264rgb"]: + if not "-bf" in self.__params: + output_params["-bf"] = 1 + if not "-sc_threshold" in self.__params: + output_params["-sc_threshold"] = 0 + if not "-keyint_min" in self.__params: + output_params["-keyint_min"] = gop + if output_params["-vcodec"] in ["libx264", "libx264rgb", "libvpx-vp9"]: + if not "-g" in self.__params: + output_params["-g"] = gop + if output_params["-vcodec"] == "libx265": + output_params["-core_x265"] = [ + "-x265-params", + "keyint={}:min-keyint={}".format(gop, gop), + ] + + # process given dash/hls stream + processed_params = None + if self.__format == "dash": + processed_params = self.__generate_dash_stream( + input_params=input_params, + output_params=output_params, + ) + else: + processed_params = self.__generate_hls_stream( + input_params=input_params, + output_params=output_params, + ) + + return processed_params + def __evaluate_streams(self, streams, output_params, bpp): """ Internal function that Extracts, Evaluates & Validates user-defined streams Parameters: streams (dict): Indivisual streams formatted as list of dict. - rgb_mode (boolean): activates RGB mode _(if enabled)_. 
+ output_params (dict): Output FFmpeg parameters """ + # temporary streams count variable + output_params["stream_count"] = 0 # default is 0 + # check if streams are empty if not streams: - logger.warning("No `-streams` are provided for this stream.") + logger.warning("No `-streams` are provided!") return output_params - # check if data is valid + # check if streams are valid if isinstance(streams, list) and all(isinstance(x, dict) for x in streams): - stream_num = 1 # keep track of streams + stream_count = 1 # keep track of streams # calculate source aspect-ratio source_aspect_ratio = self.__inputwidth / self.__inputheight # log the process @@ -514,7 +661,15 @@ def __evaluate_streams(self, streams, output_params, bpp): intermediate_dict = {} # handles intermediate stream data as dictionary # define and map stream to intermediate dict - intermediate_dict["-core{}".format(stream_num)] = ["-map", "0"] + if self.__format != "hls": + intermediate_dict["-core{}".format(stream_count)] = ["-map", "0"] + else: + intermediate_dict["-corev{}".format(stream_count)] = ["-map", "0:v"] + if "-acodec" in output_params: + intermediate_dict["-corea{}".format(stream_count)] = [ + "-map", + "{}:a".format(1 if "-core_audio" in output_params else 0), + ] # extract resolution & indivisual dimension of stream resolution = stream.pop("-resolution", "") @@ -530,15 +685,17 @@ def __evaluate_streams(self, streams, output_params, bpp): and dimensions[1].isnumeric() ): # verify resolution is w.r.t source aspect-ratio - expected_width = self.__inputheight * source_aspect_ratio - if int(dimensions[0]) != round(expected_width): + expected_width = math.floor( + int(dimensions[1]) * source_aspect_ratio + ) + if int(dimensions[0]) != expected_width: logger.warning( - "Given Stream Resolution `{}` is not in accordance with the Source Aspect-Ratio. Stream Output may appear Distorted!".format( + "Given stream resolution `{}` is not in accordance with the Source Aspect-Ratio. 
Stream Output may appear Distorted!".format( resolution ) ) # assign stream resolution to intermediate dict - intermediate_dict["-s:v:{}".format(stream_num)] = resolution + intermediate_dict["-s:v:{}".format(stream_count)] = resolution else: # otherwise log error and skip stream logger.error( @@ -556,12 +713,14 @@ def __evaluate_streams(self, streams, output_params, bpp): and video_bitrate.endswith(("k", "M")) ): # assign it - intermediate_dict["-b:v:{}".format(stream_num)] = video_bitrate + intermediate_dict["-b:v:{}".format(stream_count)] = video_bitrate else: # otherwise calculate video-bitrate fps = stream.pop("-framerate", 0.0) if dimensions and isinstance(fps, (float, int)) and fps > 0: - intermediate_dict["-b:v:{}".format(stream_num)] = "{}k".format( + intermediate_dict[ + "-b:v:{}".format(stream_count) + ] = "{}k".format( get_video_bitrate( int(dimensions[0]), int(dimensions[1]), fps, bpp ) @@ -578,13 +737,15 @@ def __evaluate_streams(self, streams, output_params, bpp): audio_bitrate = stream.pop("-audio_bitrate", "") if "-acodec" in output_params: if audio_bitrate and audio_bitrate.endswith(("k", "M")): - intermediate_dict["-b:a:{}".format(stream_num)] = audio_bitrate + intermediate_dict[ + "-b:a:{}".format(stream_count) + ] = audio_bitrate else: # otherwise calculate audio-bitrate if dimensions: aspect_width = int(dimensions[0]) intermediate_dict[ - "-b:a:{}".format(stream_num) + "-b:a:{}".format(stream_count) ] = "{}k".format(128 if (aspect_width > 800) else 96) # update output parameters output_params.update(intermediate_dict) @@ -593,96 +754,80 @@ def __evaluate_streams(self, streams, output_params, bpp): # clear stream copy stream_copy.clear() # increment to next stream - stream_num += 1 + stream_count += 1 + output_params["stream_count"] = stream_count if self.__logging: logger.debug("All streams processed successfully!") else: - logger.critical("Invalid `-streams` values skipped!") + logger.warning("Invalid type `-streams` skipped!") return output_params - def __generate_dash_stream(self, input_params, output_params): + def __generate_hls_stream(self, input_params, output_params): """ An internal function that parses user-defined parameters and generates - suitable FFmpeg Terminal Command for transcoding input into MPEG-dash Stream. + suitable FFmpeg Terminal Command for transcoding input into HLS Stream. Parameters: input_params (dict): Input FFmpeg parameters output_params (dict): Output FFmpeg parameters """ - # handle bit-per-pixels - bpp = self.__params.pop("-bpp", 0.1000) - if isinstance(bpp, (float, int)) and bpp > 0.0: - bpp = float(bpp) if (bpp > 0.001) else 0.1000 - else: - # reset to defaut if invalid - bpp = 0.1000 - # log it - if self.__logging: - logger.debug("Setting bit-per-pixels: {} for this stream.".format(bpp)) + # Check if live-streaming or not? 
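The `-streams` evaluation above expects a list of dictionaries, one per extra variant, each carrying its own `-resolution`, an explicit `-video_bitrate`, or a `-framerate` from which a bitrate is derived via the `-bpp` value. A minimal usage sketch in Real-time Frames mode follows; the file names and values are illustrative assumptions, not taken from the patch:

import cv2
from vidgear.gears import StreamGear

stream_params = {
    "-streams": [
        {"-resolution": "1280x720", "-framerate": 30.0},       # bitrate derived from "-bpp"
        {"-resolution": "640x360", "-video_bitrate": "800k"},  # explicit bitrate
    ],
}
# "hls" is the output format this patch introduces; "dash" works the same way
streamer = StreamGear(output="hls_out.m3u8", format="hls", **stream_params)
cap = cv2.VideoCapture("test.mp4")
while True:
    grabbed, frame = cap.read()
    if not grabbed:
        break
    # each frame is encoded into the primary stream plus the variants declared above
    streamer.stream(frame)
cap.release()
streamer.terminate()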
- # handle gop - gop = self.__params.pop("-gop", 0) - if isinstance(gop, (int, float)) and gop > 0: - gop = int(gop) + # validate `hls_segment_type` + default_hls_segment_type = self.__params.pop("-hls_segment_type", "mpegts") + if isinstance( + default_hls_segment_type, int + ) and default_hls_segment_type.strip() in ["fmp4", "mpegts"]: + output_params["-hls_segment_type"] = default_hls_segment_type.strip() else: - # reset to some recommended value - gop = 2 * int(self.__sourceframerate) - # log it - if self.__logging: - logger.debug("Setting GOP: {} for this stream.".format(gop)) + output_params["-hls_segment_type"] = "mpegts" - # define and map default stream - output_params["-map"] = 0 - # assign resolution - if "-s:v:0" in self.__params: - # prevent duplicates - del self.__params["-s:v:0"] - output_params["-s:v:0"] = "{}x{}".format(self.__inputwidth, self.__inputheight) - # assign video-bitrate - if "-b:v:0" in self.__params: - # prevent duplicates - del self.__params["-b:v:0"] - output_params["-b:v:0"] = ( - str( - get_video_bitrate( - int(self.__inputwidth), - int(self.__inputheight), - self.__sourceframerate, - bpp, - ) + # gather required parameters + if self.__livestreaming: + # `hls_list_size` must be greater than 0 + default_hls_list_size = self.__params.pop("-hls_list_size", 6) + if isinstance(default_hls_list_size, int) and default_hls_list_size > 0: + output_params["-hls_list_size"] = default_hls_list_size + else: + # otherwise reset to default + output_params["-hls_list_size"] = 6 + # default behaviour + output_params["-hls_init_time"] = self.__params.pop("-hls_init_time", 4) + output_params["-hls_time"] = self.__params.pop("-hls_time", 6) + output_params["-hls_flags"] = self.__params.pop( + "-hls_flags", "delete_segments+discont_start+split_by_time" ) - + "k" + # clean everything at exit? + output_params["-remove_at_exit"] = self.__params.pop("-remove_at_exit", 0) + else: + # enforce "contain all the segments" + output_params["-hls_list_size"] = 0 + output_params["-hls_playlist_type"] = "vod" + + # handle base URL for absolute paths + output_params["-hls_base_url"] = self.__params.pop("-hls_base_url", "") + + # Finally, some hardcoded HLS parameters (Refer FFmpeg docs for more info.) + output_params["-allowed_extensions"] = "ALL" + output_params["-hls_segment_filename"] = "{}-stream%v-%03d.{}".format( + os.path.join(os.path.dirname(self.__out_file), "chunk"), + "m4s" if output_params["-hls_segment_type"] == "fmp4" else "ts", ) - # assign audio-bitrate - if "-b:a:0" in self.__params: - # prevent duplicates - del self.__params["-b:a:0"] - # extract audio-bitrate from temporary handler - a_bitrate = output_params.pop("a_bitrate", "") - if "-acodec" in output_params and a_bitrate: - output_params["-b:a:0"] = a_bitrate + output_params["-hls_allow_cache"] = 0 + # enable hls formatting + output_params["-f"] = "hls" + return (input_params, output_params) - # handle user-defined streams - streams = self.__params.pop("-streams", {}) - output_params = self.__evaluate_streams(streams, output_params, bpp) + def __generate_dash_stream(self, input_params, output_params): + """ + An internal function that parses user-defined parameters and generates + suitable FFmpeg Terminal Command for transcoding input into MPEG-dash Stream. 
- # define additional stream optimization parameters - if output_params["-vcodec"] in ["libx264", "libx264rgb"]: - if not "-bf" in self.__params: - output_params["-bf"] = 1 - if not "-sc_threshold" in self.__params: - output_params["-sc_threshold"] = 0 - if not "-keyint_min" in self.__params: - output_params["-keyint_min"] = gop - if output_params["-vcodec"] in ["libx264", "libx264rgb", "libvpx-vp9"]: - if not "-g" in self.__params: - output_params["-g"] = gop - if output_params["-vcodec"] == "libx265": - output_params["-core_x265"] = [ - "-x265-params", - "keyint={}:min-keyint={}".format(gop, gop), - ] + Parameters: + input_params (dict): Input FFmpeg parameters + output_params (dict): Output FFmpeg parameters + """ # Check if live-streaming or not? if self.__livestreaming: @@ -692,17 +837,20 @@ def __generate_dash_stream(self, input_params, output_params): ) # clean everything at exit? output_params["-remove_at_exit"] = self.__params.pop("-remove_at_exit", 0) + # default behaviour + output_params["-seg_duration"] = self.__params.pop("-seg_duration", 20) + # Disable (0) the use of a SegmentTimline inside a SegmentTemplate. + output_params["-use_timeline"] = 0 else: # default behaviour - output_params["-min_seg_duration"] = self.__params.pop( - "-min_seg_duration", 5000000 - ) + output_params["-seg_duration"] = self.__params.pop("-seg_duration", 5) + # Enable (1) the use of a SegmentTimline inside a SegmentTemplate. + output_params["-use_timeline"] = 1 # Finally, some hardcoded DASH parameters (Refer FFmpeg docs for more info.) - output_params["-use_timeline"] = 1 output_params["-use_template"] = 1 - output_params["-adaptation_sets"] = "id=0,streams=v{}".format( - " id=1,streams=a" if ("-acodec" in output_params) else "" + output_params["-adaptation_sets"] = "id=0,streams=v {}".format( + "id=1,streams=a" if ("-acodec" in output_params) else "" ) # enable dash formatting output_params["-f"] = "dash" @@ -717,17 +865,42 @@ def __Build_n_Execute(self, input_params, output_params): input_params (dict): Input FFmpeg parameters output_params (dict): Output FFmpeg parameters """ + # handle audio source if present + if "-core_asource" in output_params: + output_params.move_to_end("-core_asource", last=False) + # finally handle `-i` if "-i" in output_params: output_params.move_to_end("-i", last=False) + # copy streams count + stream_count = output_params.pop("stream_count", 0) + # convert input parameters to list input_commands = dict2Args(input_params) # convert output parameters to list output_commands = dict2Args(output_params) # convert any additional parameters to list stream_commands = dict2Args(self.__params) - # log it + + # create exclusive HLS params + hls_commands = [] + # handle HLS multi-bitrate according to stream count + if self.__format == "hls" and stream_count > 0: + stream_map = "" + for count in range(0, stream_count): + stream_map += "v:{}{} ".format( + count, ",a:{}".format(count) if "-acodec" in output_params else "," + ) + hls_commands += [ + "-master_pl_name", + os.path.basename(self.__out_file), + "-var_stream_map", + stream_map.strip(), + os.path.join(os.path.dirname(self.__out_file), "stream_%v.m3u8"), + ] + + # log it if enabled if self.__logging: logger.debug( "User-Defined Output parameters: `{}`".format( @@ -743,8 +916,8 @@ def __Build_n_Execute(self, input_params, output_params): ffmpeg_cmd = None hide_banner = ( [] if self.__logging else ["-hide_banner"] - ) # ensure less cluterring - # format command + ) # ensuring less cluterring if specified + # format commands 
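For HLS multi-bitrate output, the command builder above adds one `-var_stream_map` entry per declared stream and points FFmpeg at a variant playlist template plus the master playlist. A small sketch mirroring that mapping logic, assuming two variants with an audio track and an example output path:

import os

stream_count, has_audio = 2, True
out_file = "/streams/hls_out.m3u8"  # example output path
stream_map = ""
for count in range(stream_count):
    stream_map += "v:{}{} ".format(count, ",a:{}".format(count) if has_audio else ",")

hls_commands = [
    "-master_pl_name", os.path.basename(out_file),             # -> "hls_out.m3u8"
    "-var_stream_map", stream_map.strip(),                      # -> "v:0,a:0 v:1,a:1"
    os.path.join(os.path.dirname(out_file), "stream_%v.m3u8"),  # one playlist per variant
]
print(hls_commands)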
if self.__video_source: ffmpeg_cmd = ( [self.__ffmpeg, "-y"] @@ -754,20 +927,19 @@ def __Build_n_Execute(self, input_params, output_params): + input_commands + output_commands + stream_commands - + [self.__out_file] ) else: ffmpeg_cmd = ( [self.__ffmpeg, "-y"] - + ["-re"] # pseudo live-streaming + hide_banner + ["-f", "rawvideo", "-vcodec", "rawvideo"] + input_commands + ["-i", "-"] + output_commands + stream_commands - + [self.__out_file] ) + # format outputs + ffmpeg_cmd.extend([self.__out_file] if not (hls_commands) else hls_commands) # Launch the FFmpeg pipeline with built command logger.critical("Transcoding streaming chunks. Please wait...") # log it self.__process = sp.Popen( @@ -844,6 +1016,9 @@ def terminate(self): # close `stdin` output if self.__process.stdin: self.__process.stdin.close() + # force terminate if external audio source + if isinstance(self.__audio, list): + self.__process.terminate() # wait if still process is still processing some information self.__process.wait() self.__process = None diff --git a/vidgear/gears/videogear.py b/vidgear/gears/videogear.py index 10dc4e639..bef898dc5 100644 --- a/vidgear/gears/videogear.py +++ b/vidgear/gears/videogear.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -21,7 +21,10 @@ # import the necessary packages import logging as log +# import helper packages from .helper import logger_handler + +# import additional API(s) from .camgear import CamGear # define logger diff --git a/vidgear/gears/writegear.py b/vidgear/gears/writegear.py index becbadd39..a6d8e1ba9 100644 --- a/vidgear/gears/writegear.py +++ b/vidgear/gears/writegear.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -18,15 +18,14 @@ =============================================== """ # import the necessary packages - import os import cv2 import sys import time import logging as log import subprocess as sp -from pkg_resources import parse_version +# import helper packages from .helper import ( capPropId, dict2Args, @@ -47,13 +46,13 @@ class WriteGear: """ - WriteGear handles various powerful Video-Writer Tools that provide us the freedom to do almost anything imaginable with multimedia data. + WriteGear handles various powerful Video-Writer Tools that provide us the freedom to do almost anything imaginable with multimedia data. - WriteGear API provides a complete, flexible, and robust wrapper around FFmpeg, a leading multimedia framework. WriteGear can process real-time frames into a lossless - compressed video-file with any suitable specification (such asbitrate, codec, framerate, resolution, subtitles, etc.). It is powerful enough to perform complex tasks such as + WriteGear API provides a complete, flexible, and robust wrapper around FFmpeg, a leading multimedia framework. 
WriteGear can process real-time frames into a lossless + compressed video-file with any suitable specification (such asbitrate, codec, framerate, resolution, subtitles, etc.). It is powerful enough to perform complex tasks such as Live-Streaming (such as for Twitch) and Multiplexing Video-Audio with real-time frames in way fewer lines of code. - Best of all, WriteGear grants users the complete freedom to play with any FFmpeg parameter with its exclusive Custom Commands function without relying on any + Best of all, WriteGear grants users the complete freedom to play with any FFmpeg parameter with its exclusive Custom Commands function without relying on any third-party API. In addition to this, WriteGear also provides flexible access to OpenCV's VideoWriter API tools for video-frames encoding without compression. @@ -89,7 +88,7 @@ def __init__( compression_mode (bool): selects the WriteGear's Primary Mode of Operation. custom_ffmpeg (str): assigns the location of custom path/directory for custom FFmpeg executables. logging (bool): enables/disables logging. - output_params (dict): provides the flexibility to control supported internal parameters and properities. + output_params (dict): provides the flexibility to control supported internal parameters and FFmpeg properities. """ # assign parameter values to class variables @@ -231,7 +230,7 @@ def __init__( # display confirmation if logging is enabled/disabled if self.__compression and self.__ffmpeg: - # check whether is valid url instead + # check whether url is valid instead if self.__out_file is None: if is_valid_url( self.__ffmpeg, url=output_filename, logging=self.__logging diff --git a/vidgear/tests/__init__.py b/vidgear/tests/__init__.py index d4ca79583..0907cd103 100644 --- a/vidgear/tests/__init__.py +++ b/vidgear/tests/__init__.py @@ -1 +1,8 @@ -__author__ = "Abhishek Thakur (@abhiTronix) " +# Faking +import sys +from .utils import fake_picamera + +sys.modules["picamera"] = fake_picamera.picamera +sys.modules["picamera.array"] = fake_picamera.picamera.array + +__author__ = "Abhishek Thakur (@abhiTronix) " \ No newline at end of file diff --git a/vidgear/tests/network_tests/asyncio_tests/test_helper.py b/vidgear/tests/network_tests/asyncio_tests/test_helper.py index 75901725f..cdfed9f98 100644 --- a/vidgear/tests/network_tests/asyncio_tests/test_helper.py +++ b/vidgear/tests/network_tests/asyncio_tests/test_helper.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
@@ -20,11 +20,17 @@ # import the necessary packages import sys +import cv2 +import asyncio import numpy as np import pytest import logging as log -from vidgear.gears.asyncio.helper import reducer, create_blank_frame, logger_handler +from vidgear.gears.asyncio.helper import ( + reducer, + create_blank_frame, +) +from vidgear.gears.helper import logger_handler, retrieve_best_interpolation # define test logger logger = log.getLogger("Test_Asyncio_Helper") @@ -33,6 +39,14 @@ logger.setLevel(log.DEBUG) +@pytest.fixture +def event_loop(): + """Create an instance of the default event loop for each test case.""" + loop = asyncio.SelectorEventLoop() + yield loop + loop.close() + + def getframe(): """ returns empty numpy frame/array of dimensions: (500,800,3) @@ -40,25 +54,25 @@ def getframe(): return (np.random.standard_normal([500, 800, 3]) * 255).astype(np.uint8) -pytestmark = pytest.mark.asyncio - - -@pytest.mark.skipif( - sys.version_info >= (3, 8), - reason="python3.8 is not supported yet by pytest-asyncio", -) +@pytest.mark.asyncio @pytest.mark.parametrize( - "frame , percentage, result", - [(getframe(), 85, True), (None, 80, False), (getframe(), 95, False)], + "frame , percentage, interpolation, result", + [ + (getframe(), 85, cv2.INTER_AREA, True), + (None, 80, cv2.INTER_AREA, False), + (getframe(), 95, cv2.INTER_AREA, False), + (getframe(), 80, "invalid", False), + (getframe(), 80, 797, False), + ], ) -async def test_reducer_asyncio(frame, percentage, result): +async def test_reducer_asyncio(frame, percentage, interpolation, result): """ Testing frame size reducer function """ if not (frame is None): org_size = frame.shape[:2] try: - reduced_frame = await reducer(frame, percentage) + reduced_frame = await reducer(frame, percentage, interpolation) logger.debug(reduced_frame.shape) assert not (reduced_frame is None) reduced_frame_size = reduced_frame.shape[:2] @@ -69,28 +83,50 @@ async def test_reducer_asyncio(frame, percentage, result): 100 * reduced_frame_size[1] // (100 - percentage) == org_size[1] ) # cross-check height except Exception as e: - if isinstance(e, ValueError) and not (result): - pass + if not (result): + pytest.xfail(str(e)) else: pytest.fail(str(e)) -@pytest.mark.skipif( - sys.version_info >= (3, 8), - reason="python3.8 is not supported yet by pytest-asyncio", -) +@pytest.mark.asyncio @pytest.mark.parametrize( "frame , text", - [(getframe(), "ok"), (None, ""), (getframe(), 123)], + [ + (getframe(), "ok"), + (cv2.cvtColor(getframe(), cv2.COLOR_BGR2BGRA), "ok"), + (None, ""), + (getframe(), 123), + ], ) async def test_create_blank_frame_asyncio(frame, text): """ - Testing frame size reducer function + Testing create_blank_frame function """ try: - text_frame = create_blank_frame(frame=frame, text=text) + text_frame = create_blank_frame(frame=frame, text=text, logging=True) logger.debug(text_frame.shape) assert not (text_frame is None) except Exception as e: if not (frame is None): pytest.fail(str(e)) + + +@pytest.mark.parametrize( + "interpolations", + [ + "invalid", + ["invalid", "invalid2", "INTER_LANCZOS4"], + ["INTER_NEAREST_EXACT", "INTER_LINEAR_EXACT", "INTER_LANCZOS4"], + ], +) +def test_retrieve_best_interpolation(interpolations): + """ + Testing retrieve_best_interpolation method + """ + try: + output = retrieve_best_interpolation(interpolations) + if interpolations != "invalid": + assert output, "Test failed" + except Exception as e: + pytest.fail(str(e)) diff --git a/vidgear/tests/network_tests/asyncio_tests/test_netgear_async.py 
b/vidgear/tests/network_tests/asyncio_tests/test_netgear_async.py index 7bb889e69..3d7a8c6a3 100644 --- a/vidgear/tests/network_tests/asyncio_tests/test_netgear_async.py +++ b/vidgear/tests/network_tests/asyncio_tests/test_netgear_async.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -32,7 +32,7 @@ import tempfile from vidgear.gears.asyncio import NetGear_Async -from vidgear.gears.asyncio.helper import logger_handler +from vidgear.gears.helper import logger_handler # define test logger logger = log.getLogger("Test_NetGear_Async") @@ -41,6 +41,14 @@ logger.setLevel(log.DEBUG) +@pytest.fixture(scope="module") +def event_loop(): + """Create an instance of the default event loop for each test case.""" + loop = asyncio.SelectorEventLoop() + yield loop + loop.close() + + def return_testvideo_path(): """ returns Test Video path @@ -65,33 +73,89 @@ async def custom_frame_generator(): # yield frame yield frame # sleep for sometime - await asyncio.sleep(0.000001) + await asyncio.sleep(0) + # close stream stream.release() +class Custom_Generator: + """ + Custom Generator using OpenCV, for testing bidirectional mode. + """ + + def __init__(self, server=None, data=""): + # initialize global params + assert not (server is None), "Invalid Value" + # assign server + self.server = server + # data + self.data = data + + # Create a async data and frame generator as custom source + async def custom_dataframe_generator(self): + # loop over stream until its terminated + stream = cv2.VideoCapture(return_testvideo_path()) + while True: + # read frames + (grabbed, frame) = stream.read() + + # check if frame empty + if not grabbed: + break + + # recieve client's data + recv_data = await self.server.transceive_data() + if not (recv_data is None): + if isinstance(recv_data, np.ndarray): + assert not ( + recv_data is None or np.shape(recv_data) == () + ), "Failed Test" + else: + logger.debug(recv_data) + + # yield data and frame + yield (self.data, frame) + # sleep for sometime + await asyncio.sleep(0) + stream.release() + + # Create a async function where you want to show/manipulate your received frames -async def client_iterator(client): +async def client_iterator(client, data=False): # loop over Client's Asynchronous Frame Generator async for frame in client.recv_generator(): # test frame validity assert not (frame is None or np.shape(frame) == ()), "Failed Test" + if data: + # send data + await client.transceive_data(data="invalid") # await before continuing - await asyncio.sleep(0.000001) + await asyncio.sleep(0) -@pytest.fixture -def event_loop(): - """Create an instance of the default event loop for each test case.""" - loop = asyncio.SelectorEventLoop() - yield loop - loop.close() +# Create a async function made to test bidirectional mode +async def client_dataframe_iterator(client, data=""): + # loop over Client's Asynchronous Data and Frame Generator + async for (recvd_data, frame) in client.recv_generator(): + if not (recvd_data is None): + # {do something with received server recv_data here} + logger.debug(recvd_data) + + # {do something with received frames here} + + # test frame validity + assert not (frame is None or np.shape(frame) == ()), "Failed Test" + # send data + await 
client.transceive_data(data=data) + # await before continuing + await asyncio.sleep(0) @pytest.mark.asyncio @pytest.mark.parametrize( "pattern", - [0, 2, 3, 4], + [1, 2, 3, 4], ) async def test_netgear_async_playback(pattern): try: @@ -103,11 +167,15 @@ async def test_netgear_async_playback(pattern): server = NetGear_Async( source=return_testvideo_path(), pattern=pattern, + timeout=7.0 if pattern == 4 else 0, logging=True, **options_gear ).launch() # gather and run tasks - input_coroutines = [server.task, client_iterator(client)] + input_coroutines = [ + server.task, + client_iterator(client, data=True if pattern == 4 else False), + ] res = await asyncio.gather(*input_coroutines, return_exceptions=True) except Exception as e: if isinstance(e, queue.Empty): @@ -128,7 +196,9 @@ async def test_netgear_async_playback(pattern): @pytest.mark.parametrize("generator, result", test_data_class) async def test_netgear_async_custom_server_generator(generator, result): try: - server = NetGear_Async(protocol="udp", logging=True) # invalid protocol + server = NetGear_Async( + protocol="udp", timeout=5.0, logging=True + ) # invalid protocol server.config["generator"] = generator server.launch() # define and launch Client with `receive_mode = True` and timeout = 5.0 @@ -139,50 +209,184 @@ async def test_netgear_async_custom_server_generator(generator, result): except Exception as e: if result: pytest.fail(str(e)) + else: + pytest.xfail(str(e)) finally: + server.close(skip_loop=True) + client.close(skip_loop=True) + + +test_data_class = [ + ( + custom_frame_generator(), + "Hi", + {"bidirectional_mode": True}, + {"bidirectional_mode": True}, + False, + ), + ( + [], + 444404444, + {"bidirectional_mode": True}, + {"bidirectional_mode": False}, + False, + ), + ( + [], + [1, "string", ["list"]], + {"bidirectional_mode": True}, + {"bidirectional_mode": True}, + True, + ), + ( + [], + (np.random.random(size=(480, 640, 3)) * 255).astype(np.uint8), + {"bidirectional_mode": True}, + {"bidirectional_mode": True}, + True, + ), +] + + +@pytest.mark.asyncio +@pytest.mark.parametrize( + "generator, data, options_server, options_client, result", + test_data_class, +) +async def test_netgear_async_bidirectionalmode( + generator, data, options_server, options_client, result +): + try: + server = NetGear_Async(logging=True, timeout=5.0, **options_server) + if not generator: + cg = Custom_Generator(server, data=data) + generator = cg.custom_dataframe_generator() + server.config["generator"] = generator + server.launch() + # define and launch Client with `receive_mode = True` and timeout = 5.0 + client = NetGear_Async( + logging=True, receive_mode=True, timeout=5.0, **options_client + ).launch() + # gather and run tasks + input_coroutines = [server.task, client_dataframe_iterator(client, data=data)] + res = await asyncio.gather(*input_coroutines, return_exceptions=True) + except Exception as e: if result: - server.close(skip_loop=True) - client.close(skip_loop=True) + pytest.fail(str(e)) + else: + pytest.xfail(str(e)) + finally: + server.close(skip_loop=True) + client.close(skip_loop=True) @pytest.mark.asyncio -@pytest.mark.parametrize("address, port", [("172.31.11.15.77", "5555"), (None, "5555")]) +@pytest.mark.parametrize( + "address, port", + [("172.31.11.15.77", "5555"), ("172.31.11.33.44", "5555"), (None, "5555")], +) async def test_netgear_async_addresses(address, port): + server = None try: # define and launch Client with `receive_mode = True` client = NetGear_Async( address=address, port=port, logging=True, timeout=5.0, 
receive_mode=True ).launch() + options_gear = {"THREAD_TIMEOUT": 60} if address is None: - options_gear = {"THREAD_TIMEOUT": 60} server = NetGear_Async( source=return_testvideo_path(), address=address, port=port, + timeout=5.0, logging=True, **options_gear ).launch() # gather and run tasks input_coroutines = [server.task, client_iterator(client)] await asyncio.gather(*input_coroutines, return_exceptions=True) + elif address == "172.31.11.33.44": + options_gear["bidirectional_mode"] = True + server = NetGear_Async( + source=return_testvideo_path(), + address=address, + port=port, + logging=True, + timeout=5.0, + **options_gear + ).launch() + await asyncio.ensure_future(server.task) else: await asyncio.ensure_future(client_iterator(client)) except Exception as e: - if address == "172.31.11.15.77" or isinstance(e, queue.Empty): + if address in ["172.31.11.15.77", "172.31.11.33.44"] or isinstance( + e, queue.Empty + ): logger.exception(str(e)) else: pytest.fail(str(e)) finally: - if address is None: + if (address is None or address == "172.31.11.33.44") and not (server is None): server.close(skip_loop=True) client.close(skip_loop=True) @pytest.mark.asyncio -@pytest.mark.xfail(raises=ValueError) async def test_netgear_async_recv_generator(): - # define and launch server - server = NetGear_Async(source=return_testvideo_path(), logging=True) - async for frame in server.recv_generator(): - logger.error("Failed") - server.close(skip_loop=True) + server = None + try: + # define and launch server + server = NetGear_Async( + source=return_testvideo_path(), timeout=5.0, logging=True + ) + async for frame in server.recv_generator(): + logger.warning("Failed") + except Exception as e: + if isinstance(e, (ValueError, asyncio.TimeoutError)): + pytest.xfail(str(e)) + else: + pytest.fail(str(e)) + finally: + if not (server is None): + server.close(skip_loop=True) + + +@pytest.mark.asyncio +@pytest.mark.parametrize( + "pattern, options", + [ + (0, {"bidirectional_mode": True}), + (0, {"bidirectional_mode": False}), + (1, {"bidirectional_mode": "invalid"}), + (2, {"bidirectional_mode": True}), + ], +) +async def test_netgear_async_options(pattern, options): + client = None + try: + # define and launch server + client = NetGear_Async( + source=None + if options["bidirectional_mode"] != True + else return_testvideo_path(), + receive_mode=True, + timeout=5.0, + pattern=pattern, + logging=True, + **options + ) + async for frame in client.recv_generator(): + if not options["bidirectional_mode"]: + # create target data + target_data = "Client here." + # send it + await client.transceive_data(data=target_data) + logger.warning("Failed") + except Exception as e: + if isinstance(e, (ValueError, asyncio.TimeoutError)): + pytest.xfail(str(e)) + else: + pytest.fail(str(e)) + finally: + if not (client is None): + client.close(skip_loop=True) diff --git a/vidgear/tests/network_tests/test_netgear.py b/vidgear/tests/network_tests/test_netgear.py index a72b07d4f..23ab48996 100644 --- a/vidgear/tests/network_tests/test_netgear.py +++ b/vidgear/tests/network_tests/test_netgear.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
@@ -20,6 +20,7 @@ # import the necessary packages import os +import platform import queue import cv2 import numpy as np @@ -153,7 +154,7 @@ def test_patterns(pattern): assert not (frame_server is None) # send frame over network server.send(frame_server) - frame_client = client.recv() + frame_client = client.recv(return_data=[1, 2, 3] if pattern == 2 else None) # check if received frame exactly matches input frame assert np.array_equal(frame_server, frame_client) except Exception as e: @@ -172,28 +173,30 @@ def test_patterns(pattern): @pytest.mark.parametrize( - "options_client", + "options_server", [ - {"compression_format": None, "compression_param": cv2.IMREAD_UNCHANGED}, { - "compression_format": ".jpg", - "compression_param": [cv2.IMWRITE_JPEG_QUALITY, 80], + "jpeg_compression": "invalid", + "jpeg_compression_quality": 5, + }, + { + "jpeg_compression": " gray ", + "jpeg_compression_quality": 50, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": True, + }, + { + "jpeg_compression": True, + "jpeg_compression_quality": 55.55, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": True, }, ], ) -def test_compression(options_client): +def test_compression(options_server): """ Testing NetGear's real-time frame compression capabilities """ - options = { - "compression_format": ".jpg", - "compression_param": [ - cv2.IMWRITE_JPEG_QUALITY, - 20, - cv2.IMWRITE_JPEG_OPTIMIZE, - True, - ], - } # JPEG compression # initialize stream = None server = None @@ -201,9 +204,17 @@ def test_compression(options_client): try: # open streams options_gear = {"THREAD_TIMEOUT": 60} - stream = VideoGear(source=return_testvideo_path(), **options_gear).start() - client = NetGear(pattern=0, receive_mode=True, logging=True, **options_client) - server = NetGear(pattern=0, logging=True, **options) + colorspace = ( + "COLOR_BGR2GRAY" + if isinstance(options_server["jpeg_compression"], str) + and options_server["jpeg_compression"].strip().upper() == "GRAY" + else None + ) + stream = VideoGear( + source=return_testvideo_path(), colorspace=colorspace, **options_gear + ).start() + client = NetGear(pattern=0, receive_mode=True, logging=True) + server = NetGear(pattern=0, logging=True, **options_server) # send over network while True: frame_server = stream.read() @@ -211,6 +222,13 @@ def test_compression(options_client): break server.send(frame_server) frame_client = client.recv() + if ( + isinstance(options_server["jpeg_compression"], str) + and options_server["jpeg_compression"].strip().upper() == "GRAY" + ): + assert ( + frame_server.ndim == frame_client.ndim + ), "Grayscale frame Test Failed!" 
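The compression test above sets the new frame-compression options on the server end only, while the plain client still decodes the received frames. A minimal sketch of those options; the values are examples chosen for illustration:

from vidgear.gears import NetGear

options = {
    "jpeg_compression": True,             # or a colorspace string such as "GRAY"
    "jpeg_compression_quality": 80,       # JPEG quality
    "jpeg_compression_fastdct": True,     # favour speed over accuracy
    "jpeg_compression_fastupsample": True,
}
server = NetGear(pattern=0, logging=True, **options)
client = NetGear(pattern=0, receive_mode=True, logging=True)
# ...server.send(frame) / frame = client.recv() exactly as in the test loop above...
server.close()
client.close()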
except Exception as e: if isinstance(e, (ZMQError, ValueError, RuntimeError, queue.Empty)): logger.exception(str(e)) @@ -229,7 +247,14 @@ def test_compression(options_client): test_data_class = [ (0, 1, tempfile.gettempdir(), True), (0, 1, ["invalid"], True), - (1, 1, "unknown://invalid.com/", False), + ( + 1, + 2, + os.path.abspath(os.sep) + if platform.system() == "Linux" + else "unknown://invalid.com/", + False, + ), ] @@ -287,26 +312,40 @@ def test_secure_mode(pattern, security_mech, custom_cert_location, overwrite_cer @pytest.mark.parametrize( "pattern, target_data, options", [ - (0, [1, "string", ["list"]], {"bidirectional_mode": True}), ( - 1, - (np.random.random(size=(480, 640, 3)) * 255).astype(np.uint8), + 0, + [1, "string", ["list"]], { "bidirectional_mode": True, - "jpeg_compression_quality": 55.0, - "jpeg_compression_fastdct": True, - "jpeg_compression_fastupsample": True, + "jpeg_compression": ["invalid"], }, ), ( - 2, + 1, { - "jpeg_compression": False, 1: "apple", 2: "cat", - "jpeg_compression_quality": 5, }, - {"bidirectional_mode": True}, + { + "bidirectional_mode": True, + "jpeg_compression": False, + "jpeg_compression_quality": 55, + "jpeg_compression_fastdct": False, + "jpeg_compression_fastupsample": False, + }, + ), + ( + 1, + (np.random.random(size=(480, 640, 3)) * 255).astype(np.uint8), + {"bidirectional_mode": True, "jpeg_compression": "GRAY"}, + ), + ( + 2, + (np.random.random(size=(480, 640, 3)) * 255).astype(np.uint8), + { + "bidirectional_mode": True, + "jpeg_compression": True, + }, ), ], ) @@ -319,16 +358,29 @@ def test_bidirectional_mode(pattern, target_data, options): server = None client = None try: - logger.debug("Given Input Data: {}".format(target_data)) + logger.debug( + "Given Input Data: {}".format( + target_data if not isinstance(target_data, np.ndarray) else "IMAGE" + ) + ) # open stream options_gear = {"THREAD_TIMEOUT": 60} - stream = VideoGear(source=return_testvideo_path(), **options_gear).start() + # change colorspace + colorspace = ( + "COLOR_BGR2GRAY" + if isinstance(options["jpeg_compression"], str) + and options["jpeg_compression"].strip().upper() == "GRAY" + else None + ) + if colorspace == "COLOR_BGR2GRAY" and isinstance(target_data, np.ndarray): + target_data = cv2.cvtColor(target_data, cv2.COLOR_BGR2GRAY) + + stream = VideoGear( + source=return_testvideo_path(), colorspace=colorspace, **options_gear + ).start() # define params - client = NetGear(pattern=pattern, receive_mode=True, **options) - server = NetGear(pattern=pattern, **options) - # get frame from stream - frame_server = stream.read() - assert not (frame_server is None) + client = NetGear(pattern=pattern, receive_mode=True, logging=True, **options) + server = NetGear(pattern=pattern, logging=True, **options) # check if target data is numpy ndarray if isinstance(target_data, np.ndarray): # sent frame and data from server to client @@ -336,13 +388,13 @@ def test_bidirectional_mode(pattern, target_data, options): # client receives the data and frame and send its data server_data, frame_client = client.recv(return_data=target_data) # server receives the data and cycle continues - client_data = server.send(target_data, message=target_data) - # logger.debug data received at client-end and server-end - logger.debug("Data received at Server-end: {}".format(frame_client)) - logger.debug("Data received at Client-end: {}".format(client_data)) - if "jpeg_compression" in options and options["jpeg_compression"] == False: - assert np.array_equal(client_data, frame_client) + client_data = 
server.send(target_data) + # test if recieved successfully + assert not (client_data is None), "Test Failed!" else: + # get frame from stream + frame_server = stream.read() + assert not (frame_server is None) # sent frame and data from server to client server.send(frame_server, message=target_data) # client receives the data and frame and send its data @@ -350,7 +402,7 @@ def test_bidirectional_mode(pattern, target_data, options): # server receives the data and cycle continues client_data = server.send(frame_server, message=target_data) # check if received frame exactly matches input frame - if "jpeg_compression" in options and options["jpeg_compression"] == False: + if not options["jpeg_compression"] in [True, "GRAY", ["invalid"]]: assert np.array_equal(frame_server, frame_client) # logger.debug data received at client-end and server-end logger.debug("Data received at Server-end: {}".format(server_data)) @@ -398,6 +450,13 @@ def test_bidirectional_mode(pattern, target_data, options): "bidirectional_mode": True, }, ), + ( + 2, + { + "multiserver_mode": True, + "ssh_tunnel_mode": "new@sdf.org", + }, + ), ], ) def test_multiserver_mode(pattern, options): @@ -556,15 +615,22 @@ def test_multiclient_mode(pattern): @pytest.mark.parametrize( "options", [ - {"max_retries": -1, "request_timeout": 3}, - {"max_retries": 2, "request_timeout": 4, "bidirectional_mode": True}, + {"max_retries": -1, "request_timeout": 2}, + { + "max_retries": 2, + "request_timeout": 2, + "bidirectional_mode": True, + "ssh_tunnel_mode": " new@sdf.org ", + "ssh_tunnel_pwd": "xyz", + "ssh_tunnel_keyfile": "ok.txt", + }, {"max_retries": 2, "request_timeout": 4, "multiclient_mode": True}, - {"max_retries": 2, "request_timeout": 4, "multiserver_mode": True}, + {"max_retries": 2, "request_timeout": -1, "multiserver_mode": True}, ], ) def test_client_reliablity(options): """ - Testing validation function of WebGear API + Testing validation function of NetGear API """ client = None frame_client = None @@ -596,14 +662,31 @@ def test_client_reliablity(options): @pytest.mark.parametrize( "options", [ - {"max_retries": 2, "request_timeout": 4, "bidirectional_mode": True}, - {"max_retries": 2, "request_timeout": 4, "multiserver_mode": True}, - {"max_retries": 2, "request_timeout": 4, "multiclient_mode": True}, + {"max_retries": 2, "request_timeout": 2, "bidirectional_mode": True}, + {"max_retries": 2, "request_timeout": 2, "multiserver_mode": True}, + {"max_retries": 2, "request_timeout": 2, "multiclient_mode": True}, + { + "ssh_tunnel_mode": "localhost", + }, + { + "ssh_tunnel_mode": "localhost:47", + }, + { + "max_retries": 2, + "request_timeout": 2, + "bidirectional_mode": True, + "ssh_tunnel_mode": "git@github.com", + }, + { + "max_retries": 2, + "request_timeout": 2, + "ssh_tunnel_mode": "git@github.com", + }, ], ) def test_server_reliablity(options): """ - Testing validation function of WebGear API + Testing validation function of NetGear API """ server = None stream = None @@ -611,6 +694,7 @@ def test_server_reliablity(options): try: # define params server = NetGear( + address="127.0.0.1" if "ssh_tunnel_mode" in options else None, pattern=1, port=[5585] if "multiclient_mode" in options.keys() else 6654, logging=True, @@ -637,3 +721,25 @@ def test_server_reliablity(options): stream.release() if not (server is None): server.close() + + +@pytest.mark.parametrize( + "server_ports, client_ports, options", + [ + (0, 5555, {"multiserver_mode": True}), + (5555, 0, {"multiclient_mode": True}), + ], +) 
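The new server-reliability cases also exercise NetGear's SSH-tunneling options on the server end. A hedged sketch with placeholder host, password, and keyfile values:

from vidgear.gears import NetGear

options = {
    "ssh_tunnel_mode": "user@remote-gateway.example.com",  # placeholder gateway
    "ssh_tunnel_pwd": "password",                          # or authenticate with a keyfile
    "ssh_tunnel_keyfile": "/path/to/id_rsa",
}
# frames sent from this server are forwarded through the SSH tunnel
server = NetGear(address="127.0.0.1", port=6654, pattern=1, logging=True, **options)
# server.send(frame) ... then
server.close()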
+@pytest.mark.xfail(raises=ValueError) +def test_ports(server_ports, client_ports, options): + """ + Test made to fail on wrong port values + """ + if server_ports: + server = NetGear(pattern=1, port=server_ports, logging=True, **options) + server.close() + else: + client = NetGear( + port=client_ports, pattern=1, receive_mode=True, logging=True, **options + ) + client.close() diff --git a/vidgear/tests/streamer_tests/asyncio_tests/test_webgear.py b/vidgear/tests/streamer_tests/asyncio_tests/test_webgear.py index 281f09287..e05d5d426 100644 --- a/vidgear/tests/streamer_tests/asyncio_tests/test_webgear.py +++ b/vidgear/tests/streamer_tests/asyncio_tests/test_webgear.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -27,11 +27,13 @@ import requests import tempfile from starlette.routing import Route +from starlette.middleware import Middleware +from starlette.middleware.cors import CORSMiddleware from starlette.responses import PlainTextResponse from starlette.testclient import TestClient from vidgear.gears.asyncio import WebGear -from vidgear.gears.asyncio.helper import logger_handler +from vidgear.gears.helper import logger_handler # define test logger logger = log.getLogger("Test_webgear") @@ -72,7 +74,7 @@ async def custom_frame_generator(): encodedImage = cv2.imencode(".jpg", frame)[1].tobytes() # yield frame in byte format yield (b"--frame\r\nContent-Type:image/jpeg\r\n\r\n" + encodedImage + b"\r\n") - await asyncio.sleep(0.00001) + await asyncio.sleep(0) # close stream stream.release() @@ -106,36 +108,58 @@ def test_webgear_class(source, stabilize, colorspace, time_delay): pytest.fail(str(e)) -test_data = [ - { - "frame_size_reduction": 47, - "frame_jpeg_quality": 88, - "frame_jpeg_optimize": True, - "frame_jpeg_progressive": False, - "overwrite_default_files": "invalid_value", - "enable_infinite_frames": "invalid_value", - "custom_data_location": True, - }, - { - "frame_size_reduction": "invalid_value", - "frame_jpeg_quality": "invalid_value", - "frame_jpeg_optimize": "invalid_value", - "frame_jpeg_progressive": "invalid_value", - "overwrite_default_files": True, - "enable_infinite_frames": False, - "custom_data_location": "im_wrong", - }, - {"custom_data_location": tempfile.gettempdir()}, -] - - -@pytest.mark.parametrize("options", test_data) +@pytest.mark.parametrize( + "options", + [ + { + "jpeg_compression_colorspace": "invalid", + "jpeg_compression_quality": 5, + "custom_data_location": True, + "jpeg_compression_fastdct": "invalid", + "jpeg_compression_fastupsample": "invalid", + "frame_size_reduction": "invalid", + "overwrite_default_files": "invalid", + "enable_infinite_frames": "invalid", + }, + { + "jpeg_compression_colorspace": " gray ", + "jpeg_compression_quality": 50, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": True, + "overwrite_default_files": True, + "enable_infinite_frames": False, + "custom_data_location": tempfile.gettempdir(), + }, + { + "jpeg_compression_quality": 55.55, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": True, + "custom_data_location": "im_wrong", + }, + { + "enable_infinite_frames": True, + "custom_data_location": return_testvideo_path(), + }, + ], +) def 
test_webgear_options(options): """ Test for various WebGear API internal options """ try: - web = WebGear(source=return_testvideo_path(), logging=True, **options) + colorspace = ( + "COLOR_BGR2GRAY" + if "jpeg_compression_colorspace" in options + and isinstance(options["jpeg_compression_colorspace"], str) + and options["jpeg_compression_colorspace"].strip().upper() == "GRAY" + else None + ) + web = WebGear( + source=return_testvideo_path(), + colorspace=colorspace, + logging=True, + **options + ) client = TestClient(web(), raise_server_exceptions=True) response = client.get("/") assert response.status_code == 200 @@ -143,8 +167,8 @@ def test_webgear_options(options): assert response_video.status_code == 200 web.shutdown() except Exception as e: - if isinstance(e, AssertionError): - logger.exception(str(e)) + if isinstance(e, AssertionError) or isinstance(e, os.access): + pytest.xfail(str(e)) elif isinstance(e, requests.exceptions.Timeout): logger.exceptions(str(e)) else: @@ -175,6 +199,30 @@ def test_webgear_custom_server_generator(generator, result): pytest.fail(str(e)) +test_data_class = [ + (None, False), + ([Middleware(CORSMiddleware, allow_origins=["*"])], True), + ([Route("/hello", endpoint=hello_webpage)], False), # invalid value +] + + +@pytest.mark.parametrize("middleware, result", test_data_class) +def test_webgear_custom_middleware(middleware, result): + """ + Test for WebGear API's custom middleware + """ + try: + web = WebGear(source=return_testvideo_path(), logging=True) + web.middleware = middleware + client = TestClient(web(), raise_server_exceptions=True) + response = client.get("/") + assert response.status_code == 200 + web.shutdown() + except Exception as e: + if result: + pytest.fail(str(e)) + + def test_webgear_routes(): """ Test for WebGear API's custom routes @@ -183,9 +231,9 @@ def test_webgear_routes(): # add various performance tweaks as usual options = { "frame_size_reduction": 40, - "frame_jpeg_quality": 80, - "frame_jpeg_optimize": True, - "frame_jpeg_progressive": False, + "jpeg_compression_quality": 80, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": False, } # initialize WebGear app web = WebGear(source=return_testvideo_path(), logging=True, **options) diff --git a/vidgear/tests/streamer_tests/asyncio_tests/test_webgear_rtc.py b/vidgear/tests/streamer_tests/asyncio_tests/test_webgear_rtc.py index 1c868d0f2..a150c20ff 100644 --- a/vidgear/tests/streamer_tests/asyncio_tests/test_webgear_rtc.py +++ b/vidgear/tests/streamer_tests/asyncio_tests/test_webgear_rtc.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
@@ -22,14 +22,18 @@ import os import cv2 import pytest +import sys import asyncio +import platform import logging as log import requests import tempfile import json, time from starlette.routing import Route from starlette.responses import PlainTextResponse -from starlette.testclient import TestClient +from starlette.middleware import Middleware +from starlette.middleware.cors import CORSMiddleware +from async_asgi_testclient import TestClient from aiortc import ( MediaStreamTrack, RTCPeerConnection, @@ -39,8 +43,9 @@ RTCSessionDescription, ) from av import VideoFrame +from aiortc.mediastreams import MediaStreamError from vidgear.gears.asyncio import WebGear_RTC -from vidgear.gears.asyncio.helper import logger_handler +from vidgear.gears.helper import logger_handler # define test logger @@ -50,6 +55,14 @@ logger.setLevel(log.DEBUG) +@pytest.fixture +def event_loop(): + """Create an instance of the default event loop for each test case.""" + loop = asyncio.SelectorEventLoop() + yield loop + loop.close() + + def return_testvideo_path(): """ returns Test Video path @@ -60,10 +73,6 @@ def return_testvideo_path(): return os.path.abspath(path) -def run(coro): - return asyncio.get_event_loop().run_until_complete(coro) - - class VideoTransformTrack(MediaStreamTrack): """ A video stream track that transforms frames from an another track. @@ -80,53 +89,24 @@ async def recv(self): return frame -def track_states(pc): - states = { - "connectionState": [pc.connectionState], - "iceConnectionState": [pc.iceConnectionState], - "iceGatheringState": [pc.iceGatheringState], - "signalingState": [pc.signalingState], - } - - @pc.on("connectionstatechange") - def connectionstatechange(): - states["connectionState"].append(pc.connectionState) - - @pc.on("iceconnectionstatechange") - def iceconnectionstatechange(): - states["iceConnectionState"].append(pc.iceConnectionState) - - @pc.on("icegatheringstatechange") - def icegatheringstatechange(): - states["iceGatheringState"].append(pc.iceGatheringState) - - @pc.on("signalingstatechange") - def signalingstatechange(): - states["signalingState"].append(pc.signalingState) - - return states - - -def get_RTCPeer_payload(): +async def get_RTCPeer_payload(): pc = RTCPeerConnection( RTCConfiguration(iceServers=[RTCIceServer("stun:stun.l.google.com:19302")]) ) - track_states(pc) - @pc.on("track") - def on_track(track): + async def on_track(track): logger.debug("Receiving %s" % track.kind) if track.kind == "video": pc.addTrack(VideoTransformTrack(track)) @track.on("ended") - def on_ended(): + async def on_ended(): logger.info("Track %s ended", track.kind) pc.addTransceiver("video", direction="recvonly") - offer = run(pc.createOffer()) - run(pc.setLocalDescription(offer)) + offer = await pc.createOffer() + await pc.setLocalDescription(offer) new_offer = pc.localDescription payload = {"sdp": new_offer.sdp, "type": new_offer.type} return (pc, json.dumps(payload, separators=(",", ":"))) @@ -145,7 +125,6 @@ class Custom_RTCServer(VideoStreamTrack): """ def __init__(self, source=None): - # don't forget this line! super().__init__() @@ -164,7 +143,7 @@ async def recv(self): # if NoneType if not grabbed: - return None + raise MediaStreamError # contruct `av.frame.Frame` from `numpy.nd.array` av_frame = VideoFrame.from_ndarray(frame, format="bgr24") @@ -190,7 +169,6 @@ class Invalid_Custom_RTCServer_1(VideoStreamTrack): """ def __init__(self, source=None): - # don't forget this line! 
super().__init__() @@ -210,7 +188,7 @@ async def recv(self): # if NoneType if not grabbed: - return None + raise MediaStreamError # contruct `av.frame.Frame` from `numpy.nd.array` av_frame = VideoFrame.from_ndarray(frame, format="bgr24") @@ -238,8 +216,9 @@ def __init__(self, source=None): ] +@pytest.mark.asyncio @pytest.mark.parametrize("source, stabilize, colorspace, time_delay", test_data) -def test_webgear_rtc_class(source, stabilize, colorspace, time_delay): +async def test_webgear_rtc_class(source, stabilize, colorspace, time_delay): """ Test for various WebGear_RTC API parameters """ @@ -251,30 +230,31 @@ def test_webgear_rtc_class(source, stabilize, colorspace, time_delay): time_delay=time_delay, logging=True, ) - client = TestClient(web(), raise_server_exceptions=True) - response = client.get("/") - assert response.status_code == 200 - response_404 = client.get("/test") - assert response_404.status_code == 404 - (offer_pc, data) = get_RTCPeer_payload() - response_rtc_answer = client.post( - "/offer", - data=data, - headers={"Content-Type": "application/json"}, - ) - params = response_rtc_answer.json() - answer = RTCSessionDescription(sdp=params["sdp"], type=params["type"]) - run(offer_pc.setRemoteDescription(answer)) - response_rtc_offer = client.get( - "/offer", - data=data, - headers={"Content-Type": "application/json"}, - ) - assert response_rtc_offer.status_code == 200 - run(offer_pc.close()) + async with TestClient(web()) as client: + response = await client.get("/") + assert response.status_code == 200 + response_404 = await client.get("/test") + assert response_404.status_code == 404 + (offer_pc, data) = await get_RTCPeer_payload() + response_rtc_answer = await client.post( + "/offer", + data=data, + headers={"Content-Type": "application/json"}, + ) + params = response_rtc_answer.json() + answer = RTCSessionDescription(sdp=params["sdp"], type=params["type"]) + await offer_pc.setRemoteDescription(answer) + response_rtc_offer = await client.get( + "/offer", + data=data, + headers={"Content-Type": "application/json"}, + ) + assert response_rtc_offer.status_code == 200 + await offer_pc.close() web.shutdown() except Exception as e: - pytest.fail(str(e)) + if not isinstance(e, MediaStreamError): + pytest.fail(str(e)) test_data = [ @@ -287,44 +267,56 @@ def test_webgear_rtc_class(source, stabilize, colorspace, time_delay): }, { "frame_size_reduction": "invalid_value", - "overwrite_default_files": True, - "enable_infinite_frames": False, "enable_live_broadcast": False, "custom_data_location": "im_wrong", }, - {"custom_data_location": tempfile.gettempdir()}, + { + "custom_data_location": tempfile.gettempdir(), + "enable_infinite_frames": False, + }, + { + "overwrite_default_files": True, + "enable_live_broadcast": True, + "frame_size_reduction": 99, + }, ] +@pytest.mark.asyncio @pytest.mark.parametrize("options", test_data) -def test_webgear_rtc_options(options): +async def test_webgear_rtc_options(options): """ Test for various WebGear_RTC API internal options """ + web = None try: web = WebGear_RTC(source=return_testvideo_path(), logging=True, **options) - client = TestClient(web(), raise_server_exceptions=True) - response = client.get("/") - assert response.status_code == 200 - (offer_pc, data) = get_RTCPeer_payload() - response_rtc_answer = client.post( - "/offer", - data=data, - headers={"Content-Type": "application/json"}, - ) - params = response_rtc_answer.json() - answer = RTCSessionDescription(sdp=params["sdp"], type=params["type"]) - 
run(offer_pc.setRemoteDescription(answer)) - response_rtc_offer = client.get( - "/offer", - data=data, - headers={"Content-Type": "application/json"}, - ) - assert response_rtc_offer.status_code == 200 - run(offer_pc.close()) + async with TestClient(web()) as client: + response = await client.get("/") + assert response.status_code == 200 + if ( + not "enable_live_broadcast" in options + or options["enable_live_broadcast"] == False + ): + (offer_pc, data) = await get_RTCPeer_payload() + response_rtc_answer = await client.post( + "/offer", + data=data, + headers={"Content-Type": "application/json"}, + ) + params = response_rtc_answer.json() + answer = RTCSessionDescription(sdp=params["sdp"], type=params["type"]) + await offer_pc.setRemoteDescription(answer) + response_rtc_offer = await client.get( + "/offer", + data=data, + headers={"Content-Type": "application/json"}, + ) + assert response_rtc_offer.status_code == 200 + await offer_pc.close() web.shutdown() except Exception as e: - if isinstance(e, AssertionError): + if isinstance(e, (AssertionError, MediaStreamError)): logger.exception(str(e)) elif isinstance(e, requests.exceptions.Timeout): logger.exceptions(str(e)) @@ -332,6 +324,88 @@ def test_webgear_rtc_options(options): pytest.fail(str(e)) +test_data = [ + { + "frame_size_reduction": 40, + }, + { + "enable_live_broadcast": True, + "frame_size_reduction": 40, + }, +] + + +@pytest.mark.skipif((platform.system() == "Windows"), reason="Random Failures!") +@pytest.mark.asyncio +@pytest.mark.parametrize("options", test_data) +async def test_webpage_reload(options): + """ + Test for testing WebGear_RTC API against Webpage reload + disruptions + """ + web = WebGear_RTC(source=return_testvideo_path(), logging=True, **options) + try: + # run webgear_rtc + async with TestClient(web()) as client: + response = await client.get("/") + assert response.status_code == 200 + + # create offer and receive + (offer_pc, data) = await get_RTCPeer_payload() + response_rtc_answer = await client.post( + "/offer", + data=data, + headers={"Content-Type": "application/json"}, + ) + params = response_rtc_answer.json() + answer = RTCSessionDescription(sdp=params["sdp"], type=params["type"]) + await offer_pc.setRemoteDescription(answer) + response_rtc_offer = await client.get( + "/offer", + data=data, + headers={"Content-Type": "application/json"}, + ) + assert response_rtc_offer.status_code == 200 + # simulate webpage reload + response_rtc_reload = await client.post( + "/close_connection", + data="0", + ) + # close offer + await offer_pc.close() + offer_pc = None + data = None + # verify response + logger.debug(response_rtc_reload.text) + assert response_rtc_reload.text == "OK", "Test Failed!" 
+ + # recreate offer and continue receive + (offer_pc, data) = await get_RTCPeer_payload() + response_rtc_answer = await client.post( + "/offer", + data=data, + headers={"Content-Type": "application/json"}, + ) + params = response_rtc_answer.json() + answer = RTCSessionDescription(sdp=params["sdp"], type=params["type"]) + await offer_pc.setRemoteDescription(answer) + response_rtc_offer = await client.get( + "/offer", + data=data, + headers={"Content-Type": "application/json"}, + ) + assert response_rtc_offer.status_code == 200 + # shutdown + await offer_pc.close() + except Exception as e: + if "enable_live_broadcast" in options and isinstance(e, (AssertionError, MediaStreamError)): + pytest.xfail("Test Passed") + else: + pytest.fail(str(e)) + finally: + web.shutdown() + + test_data_class = [ (None, False), ("Invalid", False), @@ -341,19 +415,47 @@ def test_webgear_rtc_options(options): ] -@pytest.mark.xfail(raises=ValueError) +@pytest.mark.asyncio +@pytest.mark.xfail(raises=(ValueError, MediaStreamError)) @pytest.mark.parametrize("server, result", test_data_class) -def test_webgear_rtc_custom_server_generator(server, result): +async def test_webgear_rtc_custom_server_generator(server, result): """ Test for WebGear_RTC API's custom source """ web = WebGear_RTC(logging=True) web.config["server"] = server - client = TestClient(web(), raise_server_exceptions=True) + async with TestClient(web()) as client: + pass web.shutdown() -def test_webgear_rtc_routes(): +test_data_class = [ + (None, False), + ([Middleware(CORSMiddleware, allow_origins=["*"])], True), + ([Route("/hello", endpoint=hello_webpage)], False), # invalid value +] + + +@pytest.mark.asyncio +@pytest.mark.parametrize("middleware, result", test_data_class) +async def test_webgear_rtc_custom_middleware(middleware, result): + """ + Test for WebGear_RTC API's custom middleware + """ + try: + web = WebGear_RTC(source=return_testvideo_path(), logging=True) + web.middleware = middleware + async with TestClient(web()) as client: + response = await client.get("/") + assert response.status_code == 200 + web.shutdown() + except Exception as e: + if result and not isinstance(e, MediaStreamError): + pytest.fail(str(e)) + + +@pytest.mark.asyncio +async def test_webgear_rtc_routes(): """ Test for WebGear_RTC API's custom routes """ @@ -369,34 +471,39 @@ def test_webgear_rtc_routes(): web.routes.append(Route("/hello", endpoint=hello_webpage)) # test - client = TestClient(web(), raise_server_exceptions=True) - response = client.get("/") - assert response.status_code == 200 - response_hello = client.get("/hello") - assert response_hello.status_code == 200 - (offer_pc, data) = get_RTCPeer_payload() - response_rtc_answer = client.post( - "/offer", - data=data, - headers={"Content-Type": "application/json"}, - ) - params = response_rtc_answer.json() - answer = RTCSessionDescription(sdp=params["sdp"], type=params["type"]) - run(offer_pc.setRemoteDescription(answer)) - response_rtc_offer = client.get( - "/offer", - data=data, - headers={"Content-Type": "application/json"}, - ) - assert response_rtc_offer.status_code == 200 - run(offer_pc.close()) + async with TestClient(web()) as client: + response = await client.get("/") + assert response.status_code == 200 + response_hello = await client.get("/hello") + assert response_hello.status_code == 200 + (offer_pc, data) = await get_RTCPeer_payload() + response_rtc_answer = await client.post( + "/offer", + data=data, + headers={"Content-Type": "application/json"}, + ) + params = response_rtc_answer.json() + 
answer = RTCSessionDescription(sdp=params["sdp"], type=params["type"]) + await offer_pc.setRemoteDescription(answer) + response_rtc_offer = await client.get( + "/offer", + data=data, + headers={"Content-Type": "application/json"}, + ) + assert response_rtc_offer.status_code == 200 + # shutdown + await offer_pc.close() web.shutdown() except Exception as e: - pytest.fail(str(e)) + if not isinstance(e, MediaStreamError): + pytest.fail(str(e)) -@pytest.mark.xfail(raises=RuntimeError) -def test_webgear_rtc_routes_validity(): +@pytest.mark.asyncio +async def test_webgear_rtc_routes_validity(): + """ + Test WebGear_RTC Routes + """ # add various tweaks for testing only options = { "enable_infinite_frames": False, @@ -404,8 +511,17 @@ def test_webgear_rtc_routes_validity(): } # initialize WebGear_RTC app web = WebGear_RTC(source=return_testvideo_path(), logging=True) - # modify route - web.routes.clear() - # test - client = TestClient(web(), raise_server_exceptions=True) - web.shutdown() + try: + # modify route + web.routes.clear() + # test + async with TestClient(web()) as client: + pass + except Exception as e: + if isinstance(e, (RuntimeError, MediaStreamError)): + pytest.xfail(str(e)) + else: + pytest.fail(str(e)) + finally: + # close + web.shutdown() diff --git a/vidgear/tests/streamer_tests/test_IO_rtf.py b/vidgear/tests/streamer_tests/test_IO_rtf.py index df19ee1b2..377c0dec0 100644 --- a/vidgear/tests/streamer_tests/test_IO_rtf.py +++ b/vidgear/tests/streamer_tests/test_IO_rtf.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -83,7 +83,8 @@ def test_method_call_rtf(): @pytest.mark.xfail(raises=ValueError) -def test_invalid_params_rtf(): +@pytest.mark.parametrize("format", ["dash", "hls"]) +def test_invalid_params_rtf(format): """ Invalid parameter Failure Test - Made to fail by calling invalid parameters """ @@ -93,7 +94,12 @@ def test_invalid_params_rtf(): input_data = random_data.astype(np.uint8) stream_params = {"-vcodec": "unknown"} - streamer = StreamGear(output="output.mpd", logging=True, **stream_params) + streamer = StreamGear( + output="output{}".format(".mpd" if format == "dash" else ".m3u8"), + format=format, + logging=True, + **stream_params + ) streamer.stream(input_data) streamer.stream(input_data) streamer.terminate() diff --git a/vidgear/tests/streamer_tests/test_IO_ss.py b/vidgear/tests/streamer_tests/test_IO_ss.py index f2ac47769..217ad6fe3 100644 --- a/vidgear/tests/streamer_tests/test_IO_ss.py +++ b/vidgear/tests/streamer_tests/test_IO_ss.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
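# ----------------------------------------------------------------------
# Illustrative sketch, not part of the patch: the WebGear_RTC test hunks
# above replace Starlette's synchronous TestClient and the old
# run()/get_event_loop() helper with pytest-asyncio plus
# async_asgi_testclient, so every request is awaited on the same loop
# that serves the app. A minimal standalone form of that pattern,
# assuming pytest-asyncio and async-asgi-testclient are installed and
# "/path/to/test.mp4" is only a placeholder source:
import pytest
from async_asgi_testclient import TestClient
from vidgear.gears.asyncio import WebGear_RTC


@pytest.mark.asyncio
async def test_webgear_rtc_homepage():
    # build the ASGI app from a WebGear_RTC instance
    web = WebGear_RTC(source="/path/to/test.mp4", logging=True)
    try:
        # TestClient drives the app in-process; no running server needed
        async with TestClient(web()) as client:
            response = await client.get("/")
            assert response.status_code == 200
    finally:
        # always release the internal source/stream
        web.shutdown()
# ----------------------------------------------------------------------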
@@ -38,16 +38,16 @@ def return_testvideo_path(): return os.path.abspath(path) -def test_failedextension(): +@pytest.mark.xfail(raises=(AssertionError, ValueError)) +@pytest.mark.parametrize("output", ["garbage.garbage", "output.m3u8"]) +def test_failedextension(output): """ IO Test - made to fail with filename with wrong extension """ - # 'garbage' extension does not exist - with pytest.raises(AssertionError): - stream_params = {"-video_source": return_testvideo_path()} - streamer = StreamGear(output="garbage.garbage", logging=True, **stream_params) - streamer.transcode_source() - streamer.terminate() + stream_params = {"-video_source": return_testvideo_path()} + streamer = StreamGear(output=output, logging=True, **stream_params) + streamer.transcode_source() + streamer.terminate() def test_failedextensionsource(): @@ -63,20 +63,21 @@ def test_failedextensionsource(): @pytest.mark.parametrize( - "path", + "path, format", [ - "rtmp://live.twitch.tv/output.mpd", - "unknown://invalid.com/output.mpd", + ("rtmp://live.twitch.tv/output.mpd", "dash"), + ("rtmp://live.twitch.tv/output.m3u8", "hls"), + ("unknown://invalid.com/output.mpd", "dash"), ], ) -def test_paths_ss(path): +def test_paths_ss(path, format): """ Paths Test - Test various paths/urls supported by StreamGear. """ streamer = None try: stream_params = {"-video_source": return_testvideo_path()} - streamer = StreamGear(output=path, logging=True, **stream_params) + streamer = StreamGear(output=path, format=format, logging=True, **stream_params) except Exception as e: if isinstance(e, ValueError): pytest.xfail("Test Passed!") @@ -110,11 +111,17 @@ def test_method_call_ss(): @pytest.mark.xfail(raises=subprocess.CalledProcessError) -def test_invalid_params_ss(): +@pytest.mark.parametrize("format", ["dash", "hls"]) +def test_invalid_params_ss(format): """ Method calling Test - Made to fail by calling method in the wrong context. """ stream_params = {"-video_source": return_testvideo_path(), "-vcodec": "unknown"} - streamer = StreamGear(output="output.mpd", logging=True, **stream_params) + streamer = StreamGear( + output="output{}".format(".mpd" if format == "dash" else ".m3u8"), + format=format, + logging=True, + **stream_params + ) streamer.transcode_source() streamer.terminate() diff --git a/vidgear/tests/streamer_tests/test_init.py b/vidgear/tests/streamer_tests/test_init.py index 029d61339..06cec587a 100644 --- a/vidgear/tests/streamer_tests/test_init.py +++ b/vidgear/tests/streamer_tests/test_init.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
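# ----------------------------------------------------------------------
# Illustrative sketch, not part of the patch: the StreamGear test hunks
# above now pair each `format` value with a matching manifest extension
# ("dash" -> .mpd, "hls" -> .m3u8), since a mismatched pair is rejected.
# A hedged standalone example of that pairing, with "/path/to/input.mp4"
# and the output directory as placeholders:
import os
from vidgear.gears import StreamGear


def transcode(source, out_dir, format="dash"):
    # choose the manifest extension that matches the requested format
    manifest = "stream.mpd" if format == "dash" else "stream.m3u8"
    stream_params = {"-video_source": source}
    streamer = StreamGear(
        output=os.path.join(out_dir, manifest),
        format=format,
        logging=True,
        **stream_params
    )
    # Single-Source Mode: transcode the whole -video_source in one go
    streamer.transcode_source()
    streamer.terminate()


# transcode("/path/to/input.mp4", "/tmp/assets", format="hls")
# ----------------------------------------------------------------------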
@@ -66,8 +66,8 @@ def test_custom_ffmpeg(c_ffmpeg): streamer.terminate() -@pytest.mark.xfail(raises=ValueError) -@pytest.mark.parametrize("format", ["mash", "unknown", 1234, None]) +@pytest.mark.xfail(raises=(AssertionError, ValueError)) +@pytest.mark.parametrize("format", ["hls", "mash", 1234, None]) def test_formats(format): """ Testing different formats for StreamGear @@ -77,7 +77,8 @@ def test_formats(format): @pytest.mark.parametrize( - "output", [None, "output.mpd", os.path.join(expanduser("~"), "test_mpd")] + "output", + [None, "output.mpd", "output.m3u8"], ) def test_outputs(output): """ @@ -89,10 +90,15 @@ def test_outputs(output): else {"-clear_prev_assets": "invalid"} ) try: - streamer = StreamGear(output=output, logging=True, **stream_params) + streamer = StreamGear( + output=output, + format="hls" if output == "output.m3u8" else "dash", + logging=True, + **stream_params + ) streamer.terminate() except Exception as e: - if output is None: + if output is None or output.endswith("m3u8"): pytest.xfail(str(e)) else: pytest.fail(str(e)) diff --git a/vidgear/tests/streamer_tests/test_streamgear_modes.py b/vidgear/tests/streamer_tests/test_streamgear_modes.py index b402f967b..55a9f6230 100644 --- a/vidgear/tests/streamer_tests/test_streamgear_modes.py +++ b/vidgear/tests/streamer_tests/test_streamgear_modes.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -22,7 +22,9 @@ import os import cv2 import queue +import time import pytest +import m3u8 import logging as log import platform import tempfile @@ -30,7 +32,7 @@ from mpegdash.parser import MPEGDASHParser from vidgear.gears import CamGear, StreamGear -from vidgear.gears.helper import logger_handler +from vidgear.gears.helper import logger_handler, validate_video # define test logger logger = log.getLogger("Test_Streamgear") @@ -58,6 +60,26 @@ def return_testvideo_path(fmt="av"): return os.path.abspath(path) +def return_static_ffmpeg(): + """ + returns system specific FFmpeg static path + """ + path = "" + if platform.system() == "Windows": + path += os.path.join( + tempfile.gettempdir(), "Downloads/FFmpeg_static/ffmpeg/bin/ffmpeg.exe" + ) + elif platform.system() == "Darwin": + path += os.path.join( + tempfile.gettempdir(), "Downloads/FFmpeg_static/ffmpeg/bin/ffmpeg" + ) + else: + path += os.path.join( + tempfile.gettempdir(), "Downloads/FFmpeg_static/ffmpeg/ffmpeg" + ) + return os.path.abspath(path) + + def check_valid_mpd(file="", exp_reps=1): """ checks if given file is a valid MPD(MPEG-DASH Manifest file) @@ -79,6 +101,39 @@ def check_valid_mpd(file="", exp_reps=1): return (all_adapts, all_reprs) if (len(all_reprs) >= exp_reps) else False +def extract_meta_video(file): + """ + Extracts metadata from a valid video file + """ + logger.debug("Extracting Metadata from {}".format(file)) + meta = validate_video(return_static_ffmpeg(), file, logging=True) + return meta + + +def check_valid_m3u8(file=""): + """ + checks if given file is a valid M3U8 file + """ + if not file or not os.path.isfile(file): + return False + metas = [] + try: + playlist = m3u8.load(file) + if playlist.is_variant: + for pl in playlist.playlists: + meta = {} + meta["resolution"] = pl.stream_info.resolution + meta["framerate"] = 
pl.stream_info.frame_rate + metas.append(meta) + else: + for seg in playlist.segments: + metas.append(extract_meta_video(seg)) + except Exception as e: + logger.error(str(e)) + return False + return metas + + def extract_meta_mpd(file): """ Extracts metadata from a valid MPD(MPEG-DASH Manifest file) @@ -107,11 +162,11 @@ def extract_meta_mpd(file): return [] -def return_mpd_path(): +def return_assets_path(hls=False): """ - returns MPD assets temp path + returns assets temp path """ - return os.path.join(tempfile.gettempdir(), "temp_mpd") + return os.path.join(tempfile.gettempdir(), "temp_m3u8" if hls else "temp_mpd") def string_to_float(value): @@ -134,10 +189,8 @@ def extract_resolutions(source, streams): return {} results = {} assert os.path.isfile(source), "Not a valid source" - s_cv = cv2.VideoCapture(source) - results[int(s_cv.get(cv2.CAP_PROP_FRAME_WIDTH))] = int( - s_cv.get(cv2.CAP_PROP_FRAME_HEIGHT) - ) + results["source"] = extract_meta_video(source) + num = 0 for stream in streams: if "-resolution" in stream: try: @@ -145,7 +198,8 @@ def extract_resolutions(source, streams): assert len(res) == 2 width, height = (res[0].strip(), res[1].strip()) assert width.isnumeric() and height.isnumeric() - results[int(width)] = int(height) + results["streams{}".format(num)] = {"resolution": (width, height)} + num += 1 except Exception as e: logger.error(str(e)) continue @@ -154,48 +208,76 @@ def extract_resolutions(source, streams): return results -def test_ss_stream(): +@pytest.mark.parametrize("format", ["dash", "hls"]) +def test_ss_stream(format): """ Testing Single-Source Mode """ - mpd_file_path = os.path.join(return_mpd_path(), "dash_test.mpd") + assets_file_path = os.path.join( + return_assets_path(False if format == "dash" else True), + "format_test{}".format(".mpd" if format == "dash" else ".m3u8"), + ) try: stream_params = { "-video_source": return_testvideo_path(), "-clear_prev_assets": True, } - streamer = StreamGear(output=mpd_file_path, logging=True, **stream_params) + if format == "hls": + stream_params.update( + { + "-hls_base_url": return_assets_path( + False if format == "dash" else True + ) + + os.sep + } + ) + streamer = StreamGear( + output=assets_file_path, format=format, logging=True, **stream_params + ) streamer.transcode_source() streamer.terminate() - assert check_valid_mpd(mpd_file_path) + if format == "dash": + assert check_valid_mpd(assets_file_path), "Test Failed!" + else: + assert extract_meta_video(assets_file_path), "Test Failed!" except Exception as e: pytest.fail(str(e)) -def test_ss_livestream(): +@pytest.mark.parametrize("format", ["dash", "hls"]) +def test_ss_livestream(format): """ Testing Single-Source Mode with livestream. 
""" - mpd_file_path = os.path.join(return_mpd_path(), "dash_test.mpd") + assets_file_path = os.path.join( + return_assets_path(False if format == "dash" else True), + "format_test{}".format(".mpd" if format == "dash" else ".m3u8"), + ) try: stream_params = { "-video_source": return_testvideo_path(), "-livestream": True, "-remove_at_exit": 1, } - streamer = StreamGear(output=mpd_file_path, logging=True, **stream_params) + streamer = StreamGear( + output=assets_file_path, format=format, logging=True, **stream_params + ) streamer.transcode_source() streamer.terminate() except Exception as e: pytest.fail(str(e)) -@pytest.mark.parametrize("conversion", [None, "COLOR_BGR2GRAY", "COLOR_BGR2BGRA"]) -def test_rtf_stream(conversion): +@pytest.mark.parametrize( + "conversion, format", + [(None, "dash"), ("COLOR_BGR2GRAY", "hls"), ("COLOR_BGR2BGRA", "dash")], +) +def test_rtf_stream(conversion, format): """ Testing Real-Time Frames Mode """ - mpd_file_path = return_mpd_path() + assets_file_path = return_assets_path(False if format == "dash" else True) + try: # Open stream options = {"THREAD_TIMEOUT": 300} @@ -206,7 +288,16 @@ def test_rtf_stream(conversion): "-clear_prev_assets": True, "-input_framerate": "invalid", } - streamer = StreamGear(output=mpd_file_path, **stream_params) + if format == "hls": + stream_params.update( + { + "-hls_base_url": return_assets_path( + False if format == "dash" else True + ) + + os.sep + } + ) + streamer = StreamGear(output=assets_file_path, format=format, **stream_params) while True: frame = stream.read() # check if frame is None @@ -218,23 +309,28 @@ def test_rtf_stream(conversion): streamer.stream(frame) stream.stop() streamer.terminate() - mpd_file = [ - os.path.join(mpd_file_path, f) - for f in os.listdir(mpd_file_path) - if f.endswith(".mpd") + asset_file = [ + os.path.join(assets_file_path, f) + for f in os.listdir(assets_file_path) + if f.endswith(".mpd" if format == "dash" else ".m3u8") ] - assert len(mpd_file) == 1, "Failed to create MPD file!" - assert check_valid_mpd(mpd_file[0]) + assert len(asset_file) == 1, "Failed to create asset file!" + if format == "dash": + assert check_valid_mpd(asset_file[0]), "Test Failed!" + else: + assert extract_meta_video(asset_file[0]), "Test Failed!" except Exception as e: if not isinstance(e, queue.Empty): pytest.fail(str(e)) -def test_rtf_livestream(): +@pytest.mark.parametrize("format", ["dash", "hls"]) +def test_rtf_livestream(format): """ Testing Real-Time Frames Mode with livestream. 
""" - mpd_file_path = return_mpd_path() + assets_file_path = return_assets_path(False if format == "dash" else True) + try: # Open stream options = {"THREAD_TIMEOUT": 300} @@ -242,7 +338,7 @@ def test_rtf_livestream(): stream_params = { "-livestream": True, } - streamer = StreamGear(output=mpd_file_path, **stream_params) + streamer = StreamGear(output=assets_file_path, format=format, **stream_params) while True: frame = stream.read() # check if frame is None @@ -256,19 +352,34 @@ def test_rtf_livestream(): pytest.fail(str(e)) -def test_input_framerate_rtf(): +@pytest.mark.parametrize("format", ["dash", "hls"]) +def test_input_framerate_rtf(format): """ Testing "-input_framerate" parameter provided by StreamGear """ try: - mpd_file_path = os.path.join(return_mpd_path(), "dash_test.mpd") + assets_file_path = os.path.join( + return_assets_path(False if format == "dash" else True), + "format_test{}".format(".mpd" if format == "dash" else ".m3u8"), + ) stream = cv2.VideoCapture(return_testvideo_path()) # Open stream test_framerate = stream.get(cv2.CAP_PROP_FPS) stream_params = { "-clear_prev_assets": True, "-input_framerate": test_framerate, } - streamer = StreamGear(output=mpd_file_path, logging=True, **stream_params) + if format == "hls": + stream_params.update( + { + "-hls_base_url": return_assets_path( + False if format == "dash" else True + ) + + os.sep + } + ) + streamer = StreamGear( + output=assets_file_path, format=format, logging=True, **stream_params + ) while True: (grabbed, frame) = stream.read() if not grabbed: @@ -276,37 +387,93 @@ def test_input_framerate_rtf(): streamer.stream(frame) stream.release() streamer.terminate() - meta_data = extract_meta_mpd(mpd_file_path) - assert meta_data and len(meta_data) > 0, "Test Failed!" - framerate_mpd = string_to_float(meta_data[0]["framerate"]) - assert framerate_mpd > 0.0 and isinstance(framerate_mpd, float), "Test Failed!" - assert round(framerate_mpd) == round(test_framerate), "Test Failed!" + if format == "dash": + meta_data = extract_meta_mpd(assets_file_path) + assert meta_data and len(meta_data) > 0, "Test Failed!" + framerate_mpd = string_to_float(meta_data[0]["framerate"]) + assert framerate_mpd > 0.0 and isinstance( + framerate_mpd, float + ), "Test Failed!" + assert round(framerate_mpd) == round(test_framerate), "Test Failed!" + else: + meta_data = extract_meta_video(assets_file_path) + assert meta_data and "framerate" in meta_data, "Test Failed!" + framerate_m3u8 = float(meta_data["framerate"]) + assert framerate_m3u8 > 0.0 and isinstance( + framerate_m3u8, float + ), "Test Failed!" + assert round(framerate_m3u8) == round(test_framerate), "Test Failed!" 
except Exception as e: pytest.fail(str(e)) @pytest.mark.parametrize( - "stream_params", + "stream_params, format", [ - {"-clear_prev_assets": True, "-bpp": 0.2000, "-gop": 125, "-vcodec": "libx265"}, - { - "-clear_prev_assets": True, - "-bpp": "unknown", - "-gop": "unknown", - "-s:v:0": "unknown", - "-b:v:0": "unknown", - "-b:a:0": "unknown", - }, + ( + { + "-clear_prev_assets": True, + "-bpp": 0.2000, + "-gop": 125, + "-vcodec": "libx265", + }, + "hls", + ), + ( + { + "-clear_prev_assets": True, + "-bpp": "unknown", + "-gop": "unknown", + "-s:v:0": "unknown", + "-b:v:0": "unknown", + "-b:a:0": "unknown", + }, + "hls", + ), + ( + { + "-clear_prev_assets": True, + "-bpp": 0.2000, + "-gop": 125, + "-vcodec": "libx265", + }, + "dash", + ), + ( + { + "-clear_prev_assets": True, + "-bpp": "unknown", + "-gop": "unknown", + "-s:v:0": "unknown", + "-b:v:0": "unknown", + "-b:a:0": "unknown", + }, + "dash", + ), ], ) -def test_params(stream_params): +def test_params(stream_params, format): """ - Testing "-input_framerate" parameter provided by StreamGear + Testing "-stream_params" parameters by StreamGear """ try: - mpd_file_path = os.path.join(return_mpd_path(), "dash_test.mpd") + assets_file_path = os.path.join( + return_assets_path(False if format == "dash" else True), + "format_test{}".format(".mpd" if format == "dash" else ".m3u8"), + ) + if format == "hls": + stream_params.update( + { + "-hls_base_url": return_assets_path( + False if format == "dash" else True + ) + + os.sep + } + ) stream = cv2.VideoCapture(return_testvideo_path()) # Open stream - streamer = StreamGear(output=mpd_file_path, logging=True, **stream_params) + streamer = StreamGear( + output=assets_file_path, format=format, logging=True, **stream_params + ) while True: (grabbed, frame) = stream.read() if not grabbed: @@ -314,119 +481,280 @@ def test_params(stream_params): streamer.stream(frame) stream.release() streamer.terminate() - assert check_valid_mpd(mpd_file_path) + if format == "dash": + assert check_valid_mpd(assets_file_path), "Test Failed!" + else: + assert extract_meta_video(assets_file_path), "Test Failed!" 
except Exception as e: pytest.fail(str(e)) @pytest.mark.parametrize( - "stream_params", + "stream_params, format", [ - { - "-clear_prev_assets": True, - "-video_source": return_testvideo_path(fmt="vo"), - "-audio": "https://raw.githubusercontent.com/abhiTronix/Imbakup/master/Images/invalid.aac", - }, - { - "-clear_prev_assets": True, - "-video_source": return_testvideo_path(fmt="vo"), - "-audio": return_testvideo_path(fmt="ao"), - }, - { - "-clear_prev_assets": True, - "-video_source": return_testvideo_path(fmt="vo"), - "-audio": "https://raw.githubusercontent.com/abhiTronix/Imbakup/master/Images/big_buck_bunny_720p_1mb_ao.aac", - }, + ( + { + "-clear_prev_assets": True, + "-video_source": return_testvideo_path(fmt="vo"), + "-audio": "https://raw.githubusercontent.com/abhiTronix/Imbakup/master/Images/invalid.aac", + }, + "dash", + ), + ( + { + "-clear_prev_assets": True, + "-video_source": return_testvideo_path(fmt="vo"), + "-audio": return_testvideo_path(fmt="ao"), + }, + "dash", + ), + ( + { + "-clear_prev_assets": True, + "-video_source": return_testvideo_path(fmt="vo"), + "-audio": "https://raw.githubusercontent.com/abhiTronix/Imbakup/master/Images/big_buck_bunny_720p_1mb_ao.aac", + }, + "dash", + ), + ( + { + "-clear_prev_assets": True, + "-video_source": return_testvideo_path(fmt="vo"), + "-audio": "https://raw.githubusercontent.com/abhiTronix/Imbakup/master/Images/invalid.aac", + }, + "hls", + ), + ( + { + "-clear_prev_assets": True, + "-video_source": return_testvideo_path(fmt="vo"), + "-audio": return_testvideo_path(fmt="ao"), + }, + "hls", + ), + ( + { + "-clear_prev_assets": True, + "-video_source": return_testvideo_path(fmt="vo"), + "-audio": "https://raw.githubusercontent.com/abhiTronix/Imbakup/master/Images/big_buck_bunny_720p_1mb_ao.aac", + }, + "hls", + ), ], ) -def test_audio(stream_params): +def test_audio(stream_params, format): """ - Testing Single-Source Mode + Testing external and audio audio for stream. """ - mpd_file_path = os.path.join(return_mpd_path(), "dash_test.mpd") + assets_file_path = os.path.join( + return_assets_path(False if format == "dash" else True), + "format_test{}".format(".mpd" if format == "dash" else ".m3u8"), + ) try: - streamer = StreamGear(output=mpd_file_path, logging=True, **stream_params) + if format == "hls": + stream_params.update( + { + "-hls_base_url": return_assets_path( + False if format == "dash" else True + ) + + os.sep + } + ) + streamer = StreamGear( + output=assets_file_path, format=format, logging=True, **stream_params + ) streamer.transcode_source() streamer.terminate() - assert check_valid_mpd(mpd_file_path) + if format == "dash": + assert check_valid_mpd(assets_file_path), "Test Failed!" + else: + assert extract_meta_video(assets_file_path), "Test Failed!" 
except Exception as e: pytest.fail(str(e)) @pytest.mark.parametrize( - "stream_params", + "format, stream_params", [ - { - "-clear_prev_assets": True, - "-video_source": return_testvideo_path(fmt="vo"), - "-streams": [ - { - "-video_bitrate": "unknown", - }, # Invalid Stream 1 - { - "-resolution": "unxun", - }, # Invalid Stream 2 - { - "-resolution": "640x480", - "-video_bitrate": "unknown", - }, # Invalid Stream 3 - { - "-resolution": "640x480", - "-framerate": "unknown", - }, # Invalid Stream 4 - { - "-resolution": "320x240", - "-framerate": 20.0, - }, # Stream: 320x240 at 20fps framerate - ], - }, - { - "-clear_prev_assets": True, - "-video_source": return_testvideo_path(fmt="vo"), - "-audio": return_testvideo_path(fmt="ao"), - "-streams": [ - { - "-resolution": "640x480", - "-video_bitrate": "850k", - "-audio_bitrate": "128k", - }, # Stream1: 640x480 at 850kbps bitrate - { - "-resolution": "320x240", - "-framerate": 20.0, - }, # Stream2: 320x240 at 20fps framerate - ], - }, - { - "-clear_prev_assets": True, - "-video_source": return_testvideo_path(), - "-streams": [ - { - "-resolution": "960x540", - "-video_bitrate": "1350k", - }, # Stream1: 960x540 at 1350kbps bitrate - ], - }, + ( + "dash", + { + "-clear_prev_assets": True, + "-video_source": return_testvideo_path(fmt="vo"), + "-streams": [ + { + "-video_bitrate": "unknown", + }, # Invalid Stream 1 + { + "-resolution": "unxun", + }, # Invalid Stream 2 + { + "-resolution": "640x480", + "-video_bitrate": "unknown", + }, # Invalid Stream 3 + { + "-resolution": "640x480", + "-framerate": "unknown", + }, # Invalid Stream 4 + { + "-resolution": "320x240", + "-framerate": 20.0, + }, # Stream: 320x240 at 20fps framerate + ], + }, + ), + ( + "hls", + { + "-clear_prev_assets": True, + "-video_source": return_testvideo_path(fmt="vo"), + "-streams": [ + { + "-video_bitrate": "unknown", + }, # Invalid Stream 1 + { + "-resolution": "unxun", + }, # Invalid Stream 2 + { + "-resolution": "640x480", + "-video_bitrate": "unknown", + }, # Invalid Stream 3 + { + "-resolution": "640x480", + "-framerate": "unknown", + }, # Invalid Stream 4 + { + "-resolution": "320x240", + "-framerate": 20.0, + }, # Stream: 320x240 at 20fps framerate + ], + }, + ), + ( + "dash", + { + "-clear_prev_assets": True, + "-video_source": return_testvideo_path(fmt="vo"), + "-audio": return_testvideo_path(fmt="ao"), + "-streams": [ + { + "-resolution": "640x480", + "-video_bitrate": "850k", + "-audio_bitrate": "128k", + }, # Stream1: 640x480 at 850kbps bitrate + { + "-resolution": "320x240", + "-framerate": 20.0, + }, # Stream2: 320x240 at 20fps framerate + ], + }, + ), + ( + "hls", + { + "-clear_prev_assets": True, + "-video_source": return_testvideo_path(fmt="vo"), + "-audio": return_testvideo_path(fmt="ao"), + "-streams": [ + { + "-resolution": "640x480", + "-video_bitrate": "850k", + "-audio_bitrate": "128k", + }, # Stream1: 640x480 at 850kbps bitrate + { + "-resolution": "320x240", + "-framerate": 20.0, + }, # Stream2: 320x240 at 20fps framerate + ], + }, + ), + ( + "dash", + { + "-clear_prev_assets": True, + "-video_source": return_testvideo_path(), + "-streams": [ + { + "-resolution": "960x540", + "-video_bitrate": "1350k", + }, # Stream1: 960x540 at 1350kbps bitrate + ], + }, + ), + ( + "hls", + { + "-clear_prev_assets": True, + "-video_source": return_testvideo_path(), + "-streams": [ + { + "-resolution": "960x540", + "-video_bitrate": "1350k", + }, # Stream1: 960x540 at 1350kbps bitrate + ], + }, + ), ], ) -def test_multistreams(stream_params): +def 
test_multistreams(format, stream_params): """ Testing Support for additional Secondary Streams of variable bitrates or spatial resolutions. """ - mpd_file_path = os.path.join(return_mpd_path(), "dash_test.mpd") + assets_file_path = os.path.join( + return_assets_path(False if format == "dash" else True), + "asset_test.{}".format("mpd" if format == "dash" else "m3u8"), + ) results = extract_resolutions( stream_params["-video_source"], stream_params["-streams"] ) try: - streamer = StreamGear(output=mpd_file_path, logging=True, **stream_params) + streamer = StreamGear( + output=assets_file_path, format=format, logging=True, **stream_params + ) streamer.transcode_source() streamer.terminate() - metadata = extract_meta_mpd(mpd_file_path) - meta_videos = [x for x in metadata if x["mime_type"].startswith("video")] - assert meta_videos and (len(meta_videos) <= len(results)), "Test Failed!" - for s_v in meta_videos: - assert int(s_v["width"]) in results, "Width check failed!" - assert ( - int(s_v["height"]) == results[int(s_v["width"])] - ), "Height check failed!" + if format == "dash": + metadata = extract_meta_mpd(assets_file_path) + meta_videos = [x for x in metadata if x["mime_type"].startswith("video")] + assert meta_videos and (len(meta_videos) <= len(results)), "Test Failed!" + if len(meta_videos) == len(results): + for m_v, s_v in zip(meta_videos, list(results.values())): + assert int(m_v["width"]) == int( + s_v["resolution"][0] + ), "Width check failed!" + assert int(m_v["height"]) == int( + s_v["resolution"][1] + ), "Height check failed!" + else: + valid_widths = [int(x["resolution"][0]) for x in list(results.values())] + valid_heights = [ + int(x["resolution"][1]) for x in list(results.values()) + ] + for m_v in meta_videos: + assert int(m_v["width"]) in valid_widths, "Width check failed!" + assert int(m_v["height"]) in valid_heights, "Height check failed!" + else: + meta_videos = check_valid_m3u8(assets_file_path) + assert meta_videos and (len(meta_videos) <= len(results)), "Test Failed!" + if len(meta_videos) == len(results): + for m_v, s_v in zip(meta_videos, list(results.values())): + assert int(m_v["resolution"][0]) == int( + s_v["resolution"][0] + ), "Width check failed!" + assert int(m_v["resolution"][1]) == int( + s_v["resolution"][1] + ), "Height check failed!" + else: + valid_widths = [int(x["resolution"][0]) for x in list(results.values())] + valid_heights = [ + int(x["resolution"][1]) for x in list(results.values()) + ] + for m_v in meta_videos: + assert ( + int(m_v["resolution"][0]) in valid_widths + ), "Width check failed!" + assert ( + int(m_v["resolution"][1]) in valid_heights + ), "Height check failed!" except Exception as e: pytest.fail(str(e)) diff --git a/vidgear/tests/test_helper.py b/vidgear/tests/test_helper.py index 6ec1817ee..79e0ab9c6 100644 --- a/vidgear/tests/test_helper.py +++ b/vidgear/tests/test_helper.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
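# ----------------------------------------------------------------------
# Illustrative sketch, not part of the patch: the new HLS assertions
# above read the generated master playlist with the `m3u8` package, the
# same way check_valid_m3u8() does. Roughly, that validation reduces to
# the following, with "master.m3u8" as a placeholder path:
import m3u8


def summarize_m3u8(path="master.m3u8"):
    playlist = m3u8.load(path)
    streams = []
    if playlist.is_variant:
        # master playlist: one entry per variant stream
        for pl in playlist.playlists:
            streams.append(
                {
                    "resolution": pl.stream_info.resolution,
                    "framerate": pl.stream_info.frame_rate,
                }
            )
    else:
        # media playlist: just collect the segment URIs
        streams = [seg.uri for seg in playlist.segments]
    return streams
# ----------------------------------------------------------------------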
@@ -35,12 +35,13 @@ reducer, dict2Args, mkdir_safe, - delete_safe, + delete_ext_safe, check_output, extract_time, create_blank_frame, is_valid_url, logger_handler, + delete_file_safe, validate_audio, validate_video, validate_ffmpeg, @@ -52,6 +53,7 @@ generate_auth_certificates, get_supported_resolution, dimensions_to_resolutions, + retrieve_best_interpolation, ) from vidgear.gears.asyncio.helper import generate_webdata, validate_webdata @@ -320,17 +322,23 @@ def test_check_output(): @pytest.mark.parametrize( - "frame , percentage, result", - [(getframe(), 85, True), (None, 80, False), (getframe(), 95, False)], + "frame , percentage, interpolation, result", + [ + (getframe(), 85, cv2.INTER_AREA, True), + (None, 80, cv2.INTER_AREA, False), + (getframe(), 95, cv2.INTER_AREA, False), + (getframe(), 80, "invalid", False), + (getframe(), 80, 797, False), + ], ) -def test_reducer(frame, percentage, result): +def test_reducer(frame, percentage, interpolation, result): """ Testing frame size reducer function """ if not (frame is None): org_size = frame.shape[:2] try: - reduced_frame = reducer(frame, percentage) + reduced_frame = reducer(frame, percentage, interpolation) logger.debug(reduced_frame.shape) assert not (reduced_frame is None) reduced_frame_size = reduced_frame.shape[:2] @@ -341,8 +349,8 @@ def test_reducer(frame, percentage, result): 100 * reduced_frame_size[1] // (100 - percentage) == org_size[1] ) # cross-check height except Exception as e: - if isinstance(e, ValueError) and not (result): - pass + if not (result): + pytest.xfail(str(e)) else: pytest.fail(str(e)) @@ -398,7 +406,7 @@ def test_validate_audio(path, result): Testing validate_audio function """ try: - results = validate_audio(return_static_ffmpeg(), file_path=path) + results = validate_audio(return_static_ffmpeg(), source=path) if result: assert results, "Audio path validity test Failed!" except Exception as e: @@ -407,14 +415,19 @@ def test_validate_audio(path, result): @pytest.mark.parametrize( "frame , text", - [(getframe(), "ok"), (None, ""), (getframe(), 123)], + [ + (getframe(), "ok"), + (cv2.cvtColor(getframe(), cv2.COLOR_BGR2BGRA), "ok"), + (None, ""), + (cv2.cvtColor(getframe(), cv2.COLOR_BGR2GRAY), 123), + ], ) def test_create_blank_frame(frame, text): """ - Testing frame size reducer function + Testing create_blank_frame function """ try: - text_frame = create_blank_frame(frame=frame, text=text) + text_frame = create_blank_frame(frame=frame, text=text, logging=True) logger.debug(text_frame.shape) assert not (text_frame is None) except Exception as e: @@ -507,9 +520,9 @@ def test_check_gstreamer_support(): ([], False), ], ) -def test_delete_safe(ext, result): +def test_delete_ext_safe(ext, result): """ - Testing delete_safe function + Testing delete_ext_safe function """ try: path = os.path.join(expanduser("~"), "test_mpd") @@ -527,11 +540,41 @@ def test_delete_safe(ext, result): streamer.transcode_source() streamer.terminate() assert check_valid_mpd(mpd_file_path) - delete_safe(path, ext, logging=True) - assert not os.listdir(path), "`delete_safe` Test failed!" + delete_ext_safe(path, ext, logging=True) + assert not os.listdir(path), "`delete_ext_safe` Test failed!" 
# cleanup if os.path.isdir(path): shutil.rmtree(path) except Exception as e: if result: pytest.fail(str(e)) + + +@pytest.mark.parametrize( + "interpolations", + [ + "invalid", + ["invalid", "invalid2", "INTER_LANCZOS4"], + ["INTER_NEAREST_EXACT", "INTER_LINEAR_EXACT", "INTER_LANCZOS4"], + ], +) +def test_retrieve_best_interpolation(interpolations): + """ + Testing retrieve_best_interpolation method + """ + try: + output = retrieve_best_interpolation(interpolations) + if interpolations != "invalid": + assert output, "Test failed" + except Exception as e: + pytest.fail(str(e)) + + +def test_delete_file_safe(): + """ + Testing delete_file_safe method + """ + try: + delete_file_safe(os.path.join(expanduser("~"), "invalid")) + except Exception as e: + pytest.fail(str(e)) diff --git a/vidgear/tests/utils/fake_picamera/__init__.py b/vidgear/tests/utils/fake_picamera/__init__.py index 1a7f31292..039bb91ff 100644 --- a/vidgear/tests/utils/fake_picamera/__init__.py +++ b/vidgear/tests/utils/fake_picamera/__init__.py @@ -1,4 +1,3 @@ -# import the necessary packages from .picamera import PiCamera __author__ = "Abhishek Thakur (@abhiTronix) " diff --git a/vidgear/tests/utils/fake_picamera/picamera.py b/vidgear/tests/utils/fake_picamera/picamera.py index eaea53f44..753fb220c 100644 --- a/vidgear/tests/utils/fake_picamera/picamera.py +++ b/vidgear/tests/utils/fake_picamera/picamera.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -17,18 +17,17 @@ limitations under the License. =============================================== """ - # import the packages import time import numpy as np -import logging as log -from vidgear.gears.helper import logger_handler +import logging -# define test logger -logger = log.getLogger("Fake_Picamera") +# define custom logger +FORMAT = "%(name)s :: %(levelname)s :: %(message)s" +logging.basicConfig(format=FORMAT) +logger = logging.getLogger("Fake_Picamera") logger.propagate = False -logger.addHandler(logger_handler()) -logger.setLevel(log.DEBUG) +logger.setLevel(logging.DEBUG) class Warn(object): diff --git a/vidgear/tests/utils/fps.py b/vidgear/tests/utils/fps.py index c30e5befc..85908181b 100644 --- a/vidgear/tests/utils/fps.py +++ b/vidgear/tests/utils/fps.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/vidgear/tests/videocapture_tests/test_camgear.py b/vidgear/tests/videocapture_tests/test_camgear.py index 1760a9a1e..18f58cfe2 100644 --- a/vidgear/tests/videocapture_tests/test_camgear.py +++ b/vidgear/tests/videocapture_tests/test_camgear.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
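# ----------------------------------------------------------------------
# Illustrative sketch, not part of the patch: the updated reducer tests
# above pass an explicit OpenCV interpolation flag, and the new
# retrieve_best_interpolation helper picks the first interpolation name
# that the installed OpenCV build actually exposes. A minimal combined
# usage, with a synthetic frame standing in for real input:
import numpy as np
from vidgear.gears.helper import reducer, retrieve_best_interpolation

# prefer the exact resamplers, fall back to Lanczos if unavailable
interpolation = retrieve_best_interpolation(
    ["INTER_NEAREST_EXACT", "INTER_LINEAR_EXACT", "INTER_LANCZOS4"]
)

frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
# shrink the frame by 40% using the chosen interpolation
smaller = reducer(frame, 40, interpolation)
print(frame.shape, "->", smaller.shape)
# ----------------------------------------------------------------------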
diff --git a/vidgear/tests/videocapture_tests/test_pigear.py b/vidgear/tests/videocapture_tests/test_pigear.py index 600370b14..ace1a5b0e 100644 --- a/vidgear/tests/videocapture_tests/test_pigear.py +++ b/vidgear/tests/videocapture_tests/test_pigear.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -26,14 +26,6 @@ import pytest import logging as log import platform - -# Faking -import sys -from ..utils import fake_picamera - -sys.modules["picamera"] = fake_picamera.picamera -sys.modules["picamera.array"] = fake_picamera.picamera.array - from vidgear.gears.helper import logger_handler # define test logger diff --git a/vidgear/tests/videocapture_tests/test_screengear.py b/vidgear/tests/videocapture_tests/test_screengear.py index 202a193d0..e9eb5de91 100644 --- a/vidgear/tests/videocapture_tests/test_screengear.py +++ b/vidgear/tests/videocapture_tests/test_screengear.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/vidgear/tests/videocapture_tests/test_videogear.py b/vidgear/tests/videocapture_tests/test_videogear.py index 1a4a9e49f..560249b16 100644 --- a/vidgear/tests/videocapture_tests/test_videogear.py +++ b/vidgear/tests/videocapture_tests/test_videogear.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/vidgear/tests/writer_tests/test_IO.py b/vidgear/tests/writer_tests/test_IO.py index b2079e205..26ec86a7c 100644 --- a/vidgear/tests/writer_tests/test_IO.py +++ b/vidgear/tests/writer_tests/test_IO.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/vidgear/tests/writer_tests/test_compression_mode.py b/vidgear/tests/writer_tests/test_compression_mode.py index 55dfdda02..06a1719e6 100644 --- a/vidgear/tests/writer_tests/test_compression_mode.py +++ b/vidgear/tests/writer_tests/test_compression_mode.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/vidgear/tests/writer_tests/test_non_compression_mode.py b/vidgear/tests/writer_tests/test_non_compression_mode.py index 34edaf7c9..98d975988 100644 --- a/vidgear/tests/writer_tests/test_non_compression_mode.py +++ b/vidgear/tests/writer_tests/test_non_compression_mode.py @@ -2,7 +2,7 @@ =============================================== vidgear library source-code is deployed under the Apache 2.0 License: -Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix) +Copyright (c) 2019 Abhishek Thakur(@abhiTronix) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/vidgear/version.py b/vidgear/version.py index 77648b6b7..984fc572f 100644 --- a/vidgear/version.py +++ b/vidgear/version.py @@ -1 +1 @@ -__version__ = "0.2.1" \ No newline at end of file +__version__ = "0.2.2" \ No newline at end of file
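# ----------------------------------------------------------------------
# Illustrative sketch, not part of the patch: the final hunk bumps the
# package version string to 0.2.2; a quick sanity check against the
# installed copy could look like this:
from vidgear.version import __version__

assert __version__ == "0.2.2", "installed vidgear does not match the patched version"
# ----------------------------------------------------------------------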