diff --git a/.github/FUNDING.yml b/.github/FUNDING.yml
index 15479c1bb..26d6f75be 100644
--- a/.github/FUNDING.yml
+++ b/.github/FUNDING.yml
@@ -1,2 +1,2 @@
ko_fi: abhitronix
-custom: https://paypal.me/AbhiTronix
\ No newline at end of file
+liberapay: abhiTronix
\ No newline at end of file
diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md
index 7b1b90e7a..44fd156b5 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.md
+++ b/.github/ISSUE_TEMPLATE/bug_report.md
@@ -1,7 +1,8 @@
---
name: Bug report
about: Create a bug-report for VidGear
-labels: "issue: bug"
+labels: ':beetle: BUG'
+assignees: 'abhiTronix'
---
- [ ] I have searched the [issues](https://github.com/abhiTronix/vidgear/issues) for my issue and found nothing related or helpful.
-- [ ] I have read the [Documentation](https://abhitronix.github.io/vidgear).
-- [ ] I have read the [Issue Guidelines](https://abhitronix.github.io/vidgear/contribution/issue/#submitting-an-issue-guidelines).
+- [ ] I have read the [Documentation](https://abhitronix.github.io/vidgear/latest).
+- [ ] I have read the [Issue Guidelines](https://abhitronix.github.io/vidgear/latest/contribution/issue/#submitting-an-issue-guidelines).
### Environment
diff --git a/.github/ISSUE_TEMPLATE/proposal.md b/.github/ISSUE_TEMPLATE/proposal.md
index f4df01214..a75eefaa4 100644
--- a/.github/ISSUE_TEMPLATE/proposal.md
+++ b/.github/ISSUE_TEMPLATE/proposal.md
@@ -1,7 +1,7 @@
---
name: Proposal
about: Suggest an idea for improving VidGear
-labels: "issue: proposal"
+labels: 'PROPOSAL :envelope_with_arrow:'
---
diff --git a/.github/ISSUE_TEMPLATE/question.md b/.github/ISSUE_TEMPLATE/question.md
index a7de08f00..3da69a789 100644
--- a/.github/ISSUE_TEMPLATE/question.md
+++ b/.github/ISSUE_TEMPLATE/question.md
@@ -1,7 +1,7 @@
---
name: Question
about: Have any questions regarding VidGear?
-labels: "issue: question"
+labels: 'QUESTION :question:'
---
@@ -18,8 +18,8 @@ _Kindly describe the issue here._
- [ ] I have searched the [issues](https://github.com/abhiTronix/vidgear/issues) for my issue and found nothing related or helpful.
-- [ ] I have read the [FAQs](https://abhitronix.github.io/vidgear/help/get_help/#frequently-asked-questions).
-- [ ] I have read the [Documentation](https://abhitronix.github.io/vidgear).
+- [ ] I have read the [FAQs](https://abhitronix.github.io/vidgear/latest/help/get_help/#frequently-asked-questions).
+- [ ] I have read the [Documentation](https://abhitronix.github.io/vidgear/latest).
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 7f811e4f9..c3c1a176e 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -11,8 +11,8 @@ _Kindly explain the changes you made here._
-- [ ] I have read the [PR Guidelines](https://abhitronix.github.io/vidgear/contribution/PR/#submitting-pull-requestpr-guidelines).
-- [ ] I have read the [Documentation](https://abhitronix.github.io/vidgear).
+- [ ] I have read the [PR Guidelines](https://abhitronix.github.io/vidgear/latest/contribution/PR/#submitting-pull-requestpr-guidelines).
+- [ ] I have read the [Documentation](https://abhitronix.github.io/vidgear/latest).
- [ ] I have updated the documentation accordingly(if required).
diff --git a/.github/config.yml b/.github/config.yml
index 3f8db32a9..b118fb12d 100644
--- a/.github/config.yml
+++ b/.github/config.yml
@@ -2,9 +2,9 @@ newPRWelcomeComment: |
Thanks so much for opening your first PR here, a maintainer will get back to you shortly!
### In the meantime:
- - Read our [Pull Request(PR) Guidelines](https://abhitronix.github.io/vidgear/contribution/PR/#submitting-pull-requestpr-guidelines) for submitting a valid PR for VidGear.
- - Submit a [issue](https://abhitronix.github.io/vidgear/contribution/issue/#submitting-an-issue-guidelines) beforehand for your Pull Request.
- - Go briefly through our [PR FAQ section](https://abhitronix.github.io/vidgear/contribution/PR/#frequently-asked-questions).
+ - Read our [Pull Request(PR) Guidelines](https://abhitronix.github.io/vidgear/latest/contribution/PR/#submitting-pull-requestpr-guidelines) for submitting a valid PR for VidGear.
+  - Submit an [issue](https://abhitronix.github.io/vidgear/latest/contribution/issue/#submitting-an-issue-guidelines) beforehand for your Pull Request.
+ - Go briefly through our [PR FAQ section](https://abhitronix.github.io/vidgear/latest/contribution/PR/#frequently-asked-questions).
firstPRMergeComment: |
Congrats on merging your first pull request here! :tada: You're awesome!
@@ -14,6 +14,6 @@ newIssueWelcomeComment: |
Thanks for opening this issue, a maintainer will get back to you shortly!
### In the meantime:
- - Read our [Issue Guidelines](https://abhitronix.github.io/vidgear/contribution/issue/#submitting-an-issue-guidelines), and update your issue accordingly. Please note that your issue will be fixed much faster if you spend about half an hour preparing it, including the exact reproduction steps and a demo.
- - Go comprehensively through our dedicated [FAQ & Troubleshooting section](https://abhitronix.github.io/vidgear/help/get_help/#frequently-asked-questions).
+ - Read our [Issue Guidelines](https://abhitronix.github.io/vidgear/latest/contribution/issue/#submitting-an-issue-guidelines), and update your issue accordingly. Please note that your issue will be fixed much faster if you spend about half an hour preparing it, including the exact reproduction steps and a demo.
+ - Go comprehensively through our dedicated [FAQ & Troubleshooting section](https://abhitronix.github.io/vidgear/latest/help/get_help/#frequently-asked-questions).
- For any quick questions and typos, please refrain from opening an issue, as you can reach us on [Gitter](https://gitter.im/vidgear/community) community channel.
diff --git a/.github/needs-more-info.yml b/.github/needs-more-info.yml
index e6dea748c..f3e05b1d5 100644
--- a/.github/needs-more-info.yml
+++ b/.github/needs-more-info.yml
@@ -1,9 +1,10 @@
checkTemplate: true
miniTitleLength: 8
-labelToAdd: 'MISSING : INFORMATION :mag:'
+labelToAdd: 'MISSING : TEMPLATE :grey_question:'
issue:
reactions:
- eyes
+ - '-1'
badTitles:
- update
- updates
@@ -12,6 +13,7 @@ issue:
- debug
- demo
- new
+ - help
badTitleComment: >
@{{ author }} Please re-edit this issue title to provide more relevant info.
diff --git a/.github/no-response.yml b/.github/no-response.yml
new file mode 100644
index 000000000..deffa6b17
--- /dev/null
+++ b/.github/no-response.yml
@@ -0,0 +1,13 @@
+# Configuration for probot-no-response - https://github.com/probot/no-response
+
+# Number of days of inactivity before an Issue is closed for lack of response
+daysUntilClose: 1
+# Label requiring a response
+responseRequiredLabel: 'MISSING : INFORMATION :mag:'
+# Comment to post when closing an Issue for lack of response. Set to `false` to disable
+closeComment: >
+ ### No Response :-1:
+
+ This issue has been automatically closed because there has been no response
+  to our request for more information from the original author. Kindly provide
+  the requested information so that we can investigate this issue further.
\ No newline at end of file
diff --git a/.github/workflows/ci_linux.yml b/.github/workflows/ci_linux.yml
index b95c72a76..dd9dd851f 100644
--- a/.github/workflows/ci_linux.yml
+++ b/.github/workflows/ci_linux.yml
@@ -1,4 +1,4 @@
-# Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+# Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -51,7 +51,7 @@ jobs:
pip install -U pip wheel numpy
pip install -U .[asyncio]
pip uninstall opencv-python -y
- pip install -U flake8 six codecov pytest pytest-asyncio pytest-cov youtube-dl mpegdash
+ pip install -U flake8 six codecov pytest pytest-asyncio pytest-cov youtube-dl mpegdash paramiko m3u8 async-asgi-testclient
if: success()
- name: run prepare_dataset_script
run: bash scripts/bash/prepare_dataset.sh
diff --git a/.github/workflows/deploy_docs.yml b/.github/workflows/deploy_docs.yml
index 48c92b485..cabaeebae 100644
--- a/.github/workflows/deploy_docs.yml
+++ b/.github/workflows/deploy_docs.yml
@@ -1,4 +1,4 @@
-# Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+# Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -64,7 +64,7 @@ jobs:
- name: mike deploy docs release
run: |
echo "${{ env.NAME_RELEASE }}"
- mike deploy --push --update-aliases ${{ env.NAME_RELEASE }} ${{ env.RELEASE_NAME }}
+ mike deploy --push --update-aliases --no-redirect ${{ env.NAME_RELEASE }} ${{ env.RELEASE_NAME }} --title=${{ env.RELEASE_NAME }}
env:
NAME_RELEASE: "v${{ env.RELEASE_NAME }}-release"
if: success()
@@ -101,14 +101,10 @@ jobs:
echo "RELEASE_NAME=$(python -c 'import vidgear; print(vidgear.__version__)')" >>$GITHUB_ENV
shell: bash
if: success()
- - name: mike remove previous stable
- run: |
- mike delete --push latest
- if: success()
- name: mike deploy docs stable
run: |
echo "${{ env.NAME_STABLE }}"
- mike deploy --push --update-aliases ${{ env.NAME_STABLE }} latest
+ mike deploy --push --update-aliases --no-redirect ${{ env.NAME_STABLE }} latest --title=latest
mike set-default --push latest
env:
NAME_STABLE: "v${{ env.RELEASE_NAME }}-stable"
@@ -150,8 +146,8 @@ jobs:
if: success()
- name: mike deploy docs dev
run: |
- echo "${{ env.NAME_DEV }}"
- mike deploy --push --update-aliases ${{ env.NAME_DEV }} dev
+ echo "Releasing ${{ env.NAME_DEV }}"
+ mike deploy --push --update-aliases --no-redirect ${{ env.NAME_DEV }} dev --title=dev
env:
NAME_DEV: "v${{ env.RELEASE_NAME }}-dev"
if: success()
diff --git a/LICENSE b/LICENSE
index b37f76f9b..e36f93110 100644
--- a/LICENSE
+++ b/LICENSE
@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
- Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+ Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/README.md b/README.md
index f468514bc..b45a5dec1 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -29,16 +29,16 @@ limitations under the License.
[Releases][release] | [Gears][gears] | [Documentation][docs] | [Installation][installation] | [License](#license)
-[![Build Status][github-cli]][github-flow] [![Codecov branch][codecov]][code] [![Build Status][appveyor]][app]
+[![Build Status][github-cli]][github-flow] [![Codecov branch][codecov]][code] [![Azure DevOps builds (branch)][azure-badge]][azure-pipeline]
-[![Azure DevOps builds (branch)][azure-badge]][azure-pipeline] [![PyPi version][pypi-badge]][pypi] [![Glitter chat][gitter-bagde]][gitter]
+[![Gitter chat][gitter-bagde]][gitter] [![Build Status][appveyor]][app] [![PyPi version][pypi-badge]][pypi]
[![Code Style][black-badge]][black]
-VidGear is a **High-Performance Video Processing Python Library** that provides an easy-to-use, highly extensible, thoroughly optimised **Multi-Threaded + Asyncio Framework** on top of many state-of-the-art specialized libraries like *[OpenCV][opencv], [FFmpeg][ffmpeg], [ZeroMQ][zmq], [picamera][picamera], [starlette][starlette], [streamlink][streamlink], [pafy][pafy], [pyscreenshot][pyscreenshot], [aiortc][aiortc] and [python-mss][mss]* serving at its backend, and enable us to flexibly exploit their internal parameters and methods, while silently delivering **robust error-handling and real-time performance 🔥**
+VidGear is a **High-Performance Video Processing Python Library** that provides an easy-to-use, highly extensible, thoroughly optimised **Multi-Threaded + Asyncio API Framework** on top of many state-of-the-art specialized libraries like *[OpenCV][opencv], [FFmpeg][ffmpeg], [ZeroMQ][zmq], [picamera][picamera], [starlette][starlette], [streamlink][streamlink], [pafy][pafy], [pyscreenshot][pyscreenshot], [aiortc][aiortc] and [python-mss][mss]* serving at its backend, and enables us to flexibly exploit their internal parameters and methods, while silently delivering **robust error-handling and real-time performance 🔥**
VidGear primarily focuses on simplicity, and thereby lets programmers and software developers to easily integrate and perform Complex Video Processing Tasks, in just a few lines of code.
@@ -67,10 +67,10 @@ The following **functional block diagram** clearly depicts the generalized funct
* [**WebGear**](#webgear)
* [**WebGear_RTC**](#webgear_rtc)
* [**NetGear_Async**](#netgear_async)
-* [**Community Channel**](#community-channel)
-* [**Contributions & Support**](#contributions--support)
- * [**Support**](#support)
+* [**Contributions & Community Support**](#contributions--community-support)
+ * [**Community Support**](#community-support)
* [**Contributors**](#contributors)
+* [**Donations**](#donations)
* [**Citation**](#citation)
* [**Copyright**](#copyright)
@@ -81,21 +81,21 @@ The following **functional block diagram** clearly depicts the generalized funct
-# TL;DR
+## TL;DR
#### What is vidgear?
-> *"VidGear is a High-Performance Framework that provides an one-stop **Video-Processing** solution for building complex real-time media applications in python."*
+> *"VidGear is a cross-platform High-Performance Framework that provides an one-stop **Video-Processing** solution for building complex real-time media applications in python."*
#### What does it do?
-> *"VidGear can read, write, process, send & receive video files/frames/streams from/to various devices in real-time, and faster than underline libraries."*
+> *"VidGear can read, write, process, send & receive video files/frames/streams from/to various devices in real-time, and [**faster**][TQM-doc] than underline libraries."*
#### What is its purpose?
> *"Write Less and Accomplish More"* — **VidGear's Motto**
-> *"Built with simplicity in mind, VidGear lets programmers and software developers to easily integrate and perform **Complex Video-Processing Tasks** in their existing or newer applications without going through hefty documentation and in just a [few lines of code][switch_from_cv]. Beneficial for both, if you're new to programming with Python language or already a pro at it."*
+> *"Built with simplicity in mind, VidGear lets programmers and software developers to easily integrate and perform **Complex Video-Processing Tasks** in their existing or newer applications without going through hefty documentation and in just a [**few lines of code**][switch_from_cv]. Beneficial for both, if you're new to programming with Python language or already a pro at it."*
@@ -105,11 +105,11 @@ The following **functional block diagram** clearly depicts the generalized funct
If this is your first time using VidGear, head straight to the [Installation ➶][installation] to install VidGear.
-Once you have VidGear installed, **Checkout its Well-Documented Function-Specific [Gears ➶][gears]**
+Once you have VidGear installed, **Check out its Well-Documented [Function-Specific Gears ➶][gears]**
Also, if you're already familiar with [OpenCV][opencv] library, then see [Switching from OpenCV Library ➶][switch_from_cv]
-Or, if you're just getting started with OpenCV, then see [here ➶](https://abhitronix.github.io/vidgear/latest/help/general_faqs/#im-new-to-python-programming-or-its-usage-in-computer-vision-how-to-use-vidgear-in-my-projects)
+Or, if you're just getting started with OpenCV-Python programming, then refer to this [FAQ ➶](https://abhitronix.github.io/vidgear/latest/help/general_faqs/#im-new-to-python-programming-or-its-usage-in-opencv-library-how-to-use-vidgear-in-my-projects)
@@ -406,7 +406,9 @@ stream.stop()
> *WriteGear handles various powerful Video-Writer Tools that provide us the freedom to do almost anything imaginable with multimedia data.*
-WriteGear API provides a complete, flexible, and robust wrapper around [**FFmpeg**][ffmpeg], a leading multimedia framework. WriteGear can process real-time frames into a lossless compressed video-file with any suitable specification _(such as`bitrate, codec, framerate, resolution, subtitles, etc.`)_. It is powerful enough to perform complex tasks such as [Live-Streaming][live-stream] _(such as for Twitch and YouTube)_ and [Multiplexing Video-Audio][live-audio-doc] with real-time frames in way fewer lines of code.
+WriteGear API provides a complete, flexible, and robust wrapper around [**FFmpeg**][ffmpeg], a leading multimedia framework. WriteGear can process real-time frames into a lossless compressed video-file with any suitable specifications _(such as `bitrate, codec, framerate, resolution, subtitles, etc.`)_.
+
+WriteGear also supports streaming with traditional protocols such as RTMP and RTSP/RTP. It is powerful enough to perform complex tasks such as [Live-Streaming][live-stream] _(such as for Twitch, YouTube, etc.)_ and [Multiplexing Video-Audio][live-audio-doc] with real-time frames in just a few lines of code.
Best of all, WriteGear grants users the complete freedom to play with any FFmpeg parameter with its exclusive **Custom Commands function** _(see this [doc][custom-command-doc])_ without relying on any third-party API.
@@ -416,7 +418,7 @@ In addition to this, WriteGear also provides flexible access to [**OpenCV's Vide
* **Compression Mode:** In this mode, WriteGear utilizes powerful [**FFmpeg**][ffmpeg] inbuilt encoders to encode lossless multimedia files. This mode provides us the ability to exploit almost any parameter available within FFmpeg, effortlessly and flexibly, and while doing that it robustly handles all errors/warnings quietly. **You can find more about this mode [here ➶][cm-writegear-doc]**
- * **Non-Compression Mode:** In this mode, WriteGear utilizes basic [**OpenCV's inbuilt VideoWriter API**][opencv-vw] tools. This mode also supports all parameters manipulation available within VideoWriter API, but it lacks the ability to manipulate encoding parameters and other important features like video compression, audio encoding, etc. **You can learn about this mode [here ➶][ncm-writegear-doc]**
+ * **Non-Compression Mode:** In this mode, WriteGear utilizes basic [**OpenCV's inbuilt VideoWriter API**][opencv-vw] tools. This mode also supports all parameter transformations available within OpenCV's VideoWriter API, but it lacks the ability to manipulate encoding parameters and other important features like video compression, audio encoding, etc. **You can learn about this mode [here ➶][ncm-writegear-doc]**
### WriteGear API Guide:
@@ -434,22 +436,22 @@ In addition to this, WriteGear also provides flexible access to [**OpenCV's Vide
+
+
+## Overview
+
+!!! new "New in v0.2.2"
+ This document was added in `v0.2.2`.
+
+
+SSH Tunneling Mode allows you to connect the NetGear client and server via a secure SSH connection over an untrusted network and access its intranet services across firewalls. This mode works with pyzmq's [`zmq.ssh`](https://github.com/zeromq/pyzmq/tree/main/zmq/ssh) module for tunneling ZeroMQ connections over ssh.
+
+This mode implements [SSH Remote Port Forwarding](https://www.ssh.com/academy/ssh/tunneling/example), which enables accessing the Host (client) machine outside the network by exposing a port to the public Internet. Thereby, once you have established the tunnel, connections to the local machine will actually be connections to the remote machine as seen from the server.
+
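Conceptually, what this mode automates is classic OpenSSH remote port forwarding (`ssh -R`). The following sketch only builds and prints the equivalent command for an assumed setup _(the user `test`, host `52.155.1.89`, and ZeroMQ port `5454` are hypothetical, borrowed from the usage example later in this document)_; it does not open any connection:

```sh
# Illustrative only: construct (but do not run) the plain-OpenSSH command
# equivalent to remote port forwarding, where a port on the remote (client)
# machine is forwarded back to the local (server) machine.
SSH_URL="test@52.155.1.89"   # hypothetical client SSH URL
PORT=5454                    # hypothetical ZeroMQ port
CMD="ssh -f -N -R ${PORT}:localhost:${PORT} ${SSH_URL}"
echo "${CMD}"
```

Running the printed command by hand (with your own URL and port) is a quick way to sanity-check that SSH access to the client machine works before involving NetGear at all.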
+??? danger "Beware ☠️"
+ Cybercriminals or malware could exploit SSH tunnels to hide their unauthorized communications, or to exfiltrate stolen data from the network. More information can be found [here ➶](https://www.ssh.com/academy/ssh/tunneling)
+
+All patterns are valid for this mode, and it can be easily activated in NetGear API at the Server end through the `ssh_tunnel_mode` string attribute of its [`options`](../../params/#options) dictionary parameter during initialization.
+
+!!! warning "Important"
+    * ==SSH tunneling can only be enabled on the Server end to establish a remote SSH connection with the Client.==
+ * SSH tunneling Mode is **NOT** compatible with [Multi-Servers](../../advanced/multi_server) and [Multi-Clients](../../advanced/multi_client) Exclusive Modes yet.
+
+!!! tip "Useful Tips"
+    * It is advised to use `pattern=2` to overcome random disconnections caused by network delays.
+    * SSH Tunneling Mode fully supports [Bidirectional Mode](../../advanced/bidirectional_mode/), [Secure Mode](../../advanced/secure_mode/) and [JPEG-Frame Compression](../../advanced/compression/).
+ * It is advised to enable logging (`logging = True`) on the first run, to easily identify any runtime errors.
+
+
+
+
+## Requirements
+
+SSH Tunnel Mode requires [`pexpect`](http://www.noah.org/wiki/pexpect) or [`paramiko`](http://www.lag.net/paramiko/) as an additional dependency, which is not part of the standard VidGear package. It can be easily installed via PyPI as follows:
+
+
+=== "Pramiko"
+
+ !!! success "`paramiko` is compatible with all platforms."
+
+ !!! info "`paramiko` support is automatically enabled in ZeroMQ if installed."
+
+ ```sh
+ # install paramiko
+ pip install paramiko
+ ```
+
+=== "Pexpect"
+
+ !!! fail "`pexpect` is NOT compatible with Windows Machines."
+
+ ```sh
+ # install pexpect
+ pip install pexpect
+ ```
+
+
+
+
+## Exclusive Attributes
+
+!!! warning "All these attributes will work on Server end only whereas Client end will simply discard them."
+
+For implementing SSH Tunneling Mode, NetGear API currently provides the following exclusive attributes for its [`options`](../../params/#options) dictionary parameter:
+
+* **`ssh_tunnel_mode`** (_string_) : This attribute activates SSH Tunneling Mode and sets the fully specified `"username@hostname:port"` SSH URL for tunneling at the Server end. Its usage is as follows:
+
+ !!! fail "On Server end, NetGear automatically validates if the `port` is open at specified SSH URL or not, and if it fails _(i.e. port is closed)_, NetGear will throw `AssertionError`!"
+
+ === "With Default Port"
+        !!! info "The `port` value in the SSH URL is the forwarded port on the host (client) machine. Its default value is `22` _(meaning the default SSH port is forwarded)_."
+
+ ```python
+ # activates SSH Tunneling and assign SSH URL
+ options = {"ssh_tunnel_mode":"userid@52.194.1.73"}
+ # only connections from the public IP address 52.194.1.73 on default port 22 are allowed
+ ```
+
+ === "With Custom Port"
+ !!! quote "You can also define your custom forwarded port instead."
+
+ ```python
+ # activates SSH Tunneling and assign SSH URL
+ options = {"ssh_tunnel_mode":"userid@52.194.1.73:8080"}
+ # only connections from the public IP address 52.194.1.73 on custom port 8080 are allowed
+ ```
+
+* **`ssh_tunnel_pwd`** (_string_): This attribute sets the password required to authorize the Host for SSH Connection at the Server end. This password grants access and controls what the SSH user can access. It can be used as follows:
+
+ ```python
+    # set password for our SSH connection
+ options = {
+ "ssh_tunnel_mode":"userid@52.194.1.73",
+ "ssh_tunnel_pwd":"mypasswordstring",
+ }
+ ```
+
+* **`ssh_tunnel_keyfile`** (_string_): This attribute sets the path to the Host key, which provides another way to authenticate the host for SSH Connection at the Server end. Its purpose is to prevent man-in-the-middle attacks. Certificate-based host authentication can be a very attractive alternative in large organizations. It allows device authentication keys to be rotated and managed conveniently, and every connection to be secured. It can be used as follows:
+
+ !!! tip "You can use [Ssh-keygen](https://www.ssh.com/academy/ssh/keygen) tool for creating new authentication key pairs for SSH Tunneling."
+
+ ```python
+    # set keyfile path for our SSH connection
+ options = {
+ "ssh_tunnel_mode":"userid@52.194.1.73",
+ "ssh_tunnel_keyfile":"/home/foo/.ssh/id_rsa",
+ }
+ ```
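Following the `ssh-keygen` tip above, here is a minimal sketch of generating a fresh keypair. The temporary directory is purely illustrative; in practice you would store the keys under `~/.ssh/`, install the public key on the client host, and point `ssh_tunnel_keyfile` at the private key:

```sh
# Generate a dedicated 2048-bit RSA keypair for tunneling (no passphrase)
# into a throwaway directory, then list the resulting key files.
KEYDIR="$(mktemp -d)"
ssh-keygen -q -t rsa -b 2048 -N "" -f "${KEYDIR}/id_rsa"
ls "${KEYDIR}"
```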
+
+
+
+
+
+## Usage Example
+
+
+???+ alert "Assumptions for this Example"
+
+ In this particular example, we assume that:
+
+ - **Server:**
+ * [x] Server end is a **Raspberry Pi** with USB camera connected to it.
+ * [x] Server is located at remote location and outside the Client's network.
+
+ - **Client:**
+ * [x] Client end is a **Regular PC/Computer** located at `52.155.1.89` public IP address for displaying frames received from the remote Server.
+        * [x] This Client is Port Forwarded by its Router to the default SSH Port (22), which allows the Server to connect to its TCP port `22` remotely. This connection will then be tunneled back to our PC/Computer (Client), which makes a TCP connection to it again via port `22` on localhost (`127.0.0.1`).
+ * [x] Also, there's a username `test` present on the PC/Computer(Client) to SSH login with password `pas$wd`.
+
+ - **Setup Diagram:**
+
+        The assumed setup can be visualized through the diagram as follows:
+
+ ![Placeholder](../../../../assets/images/ssh_tunnel_ex.png){ loading=lazy }
+
+
+
+??? question "How to Port Forward in Router"
+
+    For more information on Forwarding Port in Popular Home Routers, see [this document ➶](https://www.noip.com/support/knowledgebase/general-port-forwarding-guide/)
+
+
+
+#### Client's End
+
+Open a terminal on the Client System _(a Regular PC where you want to display the input frames received from the Server)_ and execute the following Python code:
+
+
+!!! warning "Prerequisites for Client's End"
+
+ To ensure a successful Remote NetGear Connection with Server:
+
+ * **Install OpenSSH Server: (Tested)**
+
+ === "On Linux"
+
+ ```sh
+ # Debian-based
+ sudo apt-get install openssh-server
+
+ # RHEL-based
+ sudo yum install openssh-server
+ ```
+
+ === "On Windows"
+
+ See [this official Microsoft doc ➶](https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse)
+
+
+ === "On OSX"
+
+ ```sh
+ brew install openssh
+ ```
+
+    * Make sure to note down the Client's public IP address, which is required by the Server end. Use https://www.whatismyip.com/ to determine it.
+
+    * Make sure that the Client Machine is Port Forwarded by its Router to expose it to the public Internet. Also, this forwarded port value is needed at the Server end.
+
+
+??? fail "Secsh channel X open FAILED: open failed: Administratively prohibited"
+
+    **Error:** This error means that the installed OpenSSH server is preventing connections to forwarded ports from outside your Client Machine.
+
+    **Solution:** You need to change the `GatewayPorts no` option to `GatewayPorts yes` in the **OpenSSH server configuration file** [`sshd_config`](https://www.ssh.com/ssh/sshd_config/) to allow anyone to connect to the forwarded ports on the Client Machine.
+
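The solution above amounts to a one-line change in the Client machine's OpenSSH server configuration _(the path shown is the typical Linux location; yours may differ)_:

```
# /etc/ssh/sshd_config  (on the Client machine)
GatewayPorts yes
```

Remember to restart the SSH daemon afterwards _(e.g. `sudo systemctl restart sshd` on systemd-based distros)_ for the change to take effect.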
+
+??? tip "Enabling Dynamic DNS"
+    SSH tunneling requires a public IP address to be able to access the host on the public Internet. Thereby, if it's troublesome to remember your public IP address, or if your IP address changes constantly, then you can use dynamic DNS services like https://www.noip.com/
+
+!!! info "You can terminate client anytime by pressing ++ctrl+"C"++ on your keyboard!"
+
+```python
+# import required libraries
+from vidgear.gears import NetGear
+import cv2
+
+# Define NetGear Client at given IP address and define parameters
+client = NetGear(
+ address="127.0.0.1", # don't change this
+ port="5454",
+ pattern=2,
+ receive_mode=True,
+ logging=True,
+)
+
+# loop over
+while True:
+
+ # receive frames from network
+ frame = client.recv()
+
+ # check for received frame if Nonetype
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+# close output window
+cv2.destroyAllWindows()
+
+# safely close client
+client.close()
+```
+
+
+
+#### Server's End
+
+Now, open the terminal on the Remote Server System _(a Raspberry Pi with a webcam connected to it at index `0`)_, and execute the following Python code:
+
+!!! danger "Make sure to replace the SSH URL in the following example with yours."
+
+!!! warning "On Server end, NetGear automatically validates if the `port` is open at specified SSH URL or not, and if it fails _(i.e. port is closed)_, NetGear will throw `AssertionError`!"
+
+!!! info "You can terminate stream on both side anytime by pressing ++ctrl+"C"++ on your keyboard!"
+
+```python
+# import required libraries
+from vidgear.gears import VideoGear
+from vidgear.gears import NetGear
+
+# activate SSH tunneling with SSH URL, and
+# [BEWARE!!!] Change SSH URL and SSH password with yours for this example !!!
+options = {
+ "ssh_tunnel_mode": "test@52.155.1.89", # defaults to port 22
+ "ssh_tunnel_pwd": "pas$wd",
+}
+
+# Open live video stream on webcam at first index(i.e. 0) device
+stream = VideoGear(source=0).start()
+
+# Define NetGear server at given IP address and define parameters
+server = NetGear(
+ address="127.0.0.1", # don't change this
+ port="5454",
+ pattern=2,
+ logging=True,
+ **options
+)
+
+# loop over until KeyBoard Interrupted
+while True:
+
+ try:
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # send frame to server
+ server.send(frame)
+
+ except KeyboardInterrupt:
+ break
+
+# safely close video stream
+stream.stop()
+
+# safely close server
+server.close()
+```
+
+
\ No newline at end of file
diff --git a/docs/gears/netgear/overview.md b/docs/gears/netgear/overview.md
index 86e4a73cc..6c24ce05b 100644
--- a/docs/gears/netgear/overview.md
+++ b/docs/gears/netgear/overview.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -77,6 +77,9 @@ In addition to the primary modes, NetGear API also offers applications-specific
* **Bidirectional Mode:** _This exclusive mode ==provides seamless support for bidirectional data transmission between between Server and Client along with video frames==. Using this mode, the user can now send or receive any data(of any datatype) between Server and Client easily in real-time. **You can learn more about this mode [here ➶](../advanced/bidirectional_mode/).**_
+* **SSH Tunneling Mode:** _This exclusive mode ==allows you to connect the NetGear client and server via a secure SSH connection over an untrusted network== and access its intranet services across firewalls. This mode implements SSH Remote Port Forwarding, which enables accessing the Host (client) machine outside the network by exposing a port to the public Internet. **You can learn more about this mode [here ➶](../advanced/ssh_tunnel/).**_
+
+
* **Secure Mode:** _In this exclusive mode, NetGear API ==provides easy access to powerful, smart & secure ZeroMQ's Security Layers== that enables strong encryption on data, and unbreakable authentication between the Server and Client with the help of custom certificates/keys that brings cheap, standardized privacy and authentication for distributed systems over the network. **You can learn more about this mode [here ➶](../advanced/secure_mode/).**_
diff --git a/docs/gears/netgear/params.md b/docs/gears/netgear/params.md
index 1bc72ecfb..c9562e362 100644
--- a/docs/gears/netgear/params.md
+++ b/docs/gears/netgear/params.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -149,24 +149,28 @@ This parameter provides the flexibility to alter various NetGear API's internal
* **`bidirectional_mode`** (_boolean_) : This internal attribute activates the exclusive [**Bidirectional Mode**](../advanced/bidirectional_mode/), if enabled(`True`).
+ * **`ssh_tunnel_mode`** (_string_) : This internal attribute activates the exclusive [**SSH Tunneling Mode**](../advanced/ssh_tunnel/) ==at the Server-end only==.
+
+    * **`ssh_tunnel_pwd`** (_string_): In SSH Tunneling Mode, this internal attribute sets the password required to authorize the Host for SSH Connection ==at the Server-end only==. More information can be found [here ➶](../advanced/ssh_tunnel/#supported-attributes)
+
+    * **`ssh_tunnel_keyfile`** (_string_): In SSH Tunneling Mode, this internal attribute sets the path to the Host key that provides another way to authenticate the Host for SSH Connection ==at the Server-end only==. More information can be found [here ➶](../advanced/ssh_tunnel/#supported-attributes)
+
* **`custom_cert_location`** (_string_) : In Secure Mode, This internal attribute assigns user-defined location/path to directory for generating/storing Public+Secret Keypair necessary for encryption. More information can be found [here ➶](../advanced/secure_mode/#supported-attributes)
* **`overwrite_cert`** (_boolean_) : In Secure Mode, This internal attribute decides whether to overwrite existing Public+Secret Keypair/Certificates or not, ==at the Server-end only==. More information can be found [here ➶](../advanced/secure_mode/#supported-attributes)
- * **`jpeg_compression`**(_bool_): This attribute can be used to activate(if True)/deactivate(if False) Frame Compression. Its default value is also `True`. More information can be found [here ➶](../advanced/compression/#supported-attributes)
+    * **`jpeg_compression`**(_bool/str_): This internal attribute is used to activate(if `True`)/deactivate(if `False`) JPEG Frame Compression, as well as to specify the incoming frames' colorspace with compression. By default, the colorspace is `BGR` and compression is enabled(`True`). More information can be found [here ➶](../advanced/compression/#supported-attributes)
- * **`jpeg_compression_quality`**(_int/float_): It controls the JPEG quantization factor. Its value varies from `10` to `100` (the higher is the better quality but performance will be lower). Its default value is `90`. More information can be found [here ➶](../advanced/compression/#supported-attributes)
+    * **`jpeg_compression_quality`**(_int/float_): This internal attribute controls the JPEG quantization factor in JPEG Frame Compression. Its value varies from `10` to `100` (the higher the value, the better the quality but the lower the performance). Its default value is `90`. More information can be found [here ➶](../advanced/compression/#supported-attributes)
- * **`jpeg_compression_fastdct`**(_bool_): This attribute if True, use fastest DCT method that speeds up decoding by 4-5% for a minor loss in quality. Its default value is also `True`. More information can be found [here ➶](../advanced/compression/#supported-attributes)
+    * **`jpeg_compression_fastdct`**(_bool_): This internal attribute, if `True`, enables the fastest DCT method in JPEG Frame Compression, which speeds up decoding by 4-5% for a minor loss in quality. Its default value is also `True`. More information can be found [here ➶](../advanced/compression/#supported-attributes)
- * **`jpeg_compression_fastupsample`**(_bool_): This attribute if True, use fastest color upsampling method. Its default value is `False`. More information can be found [here ➶](../advanced/compression/#supported-attributes)
+    * **`jpeg_compression_fastupsample`**(_bool_): This internal attribute, if `True`, enables the fastest color upsampling method. Its default value is `False`. More information can be found [here ➶](../advanced/compression/#supported-attributes)
* **`max_retries`**(_integer_): This internal attribute controls the maximum retries before the Server/Client exits itself, if it's unable to get any response/reply from the socket before a certain amount of time, when synchronous messaging patterns like (`zmq.PAIR` & `zmq.REQ/zmq.REP`) are being used. Its value can be anything greater than `0`, and its default value is `3`.
-
* **`request_timeout`**(_integer_): This internal attribute controls the timeout value _(in seconds)_, after which the Server/Client exits itself if it's unable to get any response/reply from the socket, when synchronous messaging patterns like (`zmq.PAIR` & `zmq.REQ/zmq.REP`) are being used. Its value can be anything greater than `0`, and its default value is `10` seconds.
-
* **`flag`**(_integer_): This PyZMQ attribute value can be either `0` or `zmq.NOBLOCK`_( i.e. 1)_. More information can be found [here ➶](https://pyzmq.readthedocs.io/en/latest/api/zmq.html).
* **`copy`**(_boolean_): This PyZMQ attribute selects whether the message is received in a copying or non-copying manner. If `False`, an object is returned; if `True`, a string copy of the message is returned.
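
As a minimal sketch, the attributes above combine into a single `options` dictionary; the values below are illustrative assumptions, not recommendations:

```python
# A hedged sketch: a NetGear `options` dictionary combining the internal
# attributes documented above. All values are illustrative assumptions.
options = {
    "jpeg_compression": "GRAY",              # JPEG compression on, frames as grayscale
    "jpeg_compression_quality": 80,          # quantization factor, valid range 10-100
    "jpeg_compression_fastdct": True,        # fastest DCT: faster decode, minor quality loss
    "jpeg_compression_fastupsample": False,  # keep default color upsampling
    "max_retries": 5,                        # retries before exit (synchronous patterns only)
    "request_timeout": 4,                    # seconds to wait for a reply before exiting
}

# sanity-check the documented value ranges before handing the dict to the API
assert 10 <= options["jpeg_compression_quality"] <= 100
assert options["max_retries"] > 0 and options["request_timeout"] > 0

# the dict would then be unpacked at initialization, e.g.:
# from vidgear.gears import NetGear
# client = NetGear(receive_mode=True, pattern=1, logging=True, **options)
```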
diff --git a/docs/gears/netgear/usage.md b/docs/gears/netgear/usage.md
index 357a49819..e2aaa2d14 100644
--- a/docs/gears/netgear/usage.md
+++ b/docs/gears/netgear/usage.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -471,4 +471,10 @@ stream.stop()
server.close()
```
-
\ No newline at end of file
+
+
+## Bonus Examples
+
+!!! example "Checkout more advanced NetGear examples with unusual configuration [here ➶](../../../help/netgear_ex/)"
+
+
\ No newline at end of file
diff --git a/docs/gears/netgear_async/advanced/bidirectional_mode.md b/docs/gears/netgear_async/advanced/bidirectional_mode.md
new file mode 100644
index 000000000..0341f372b
--- /dev/null
+++ b/docs/gears/netgear_async/advanced/bidirectional_mode.md
@@ -0,0 +1,582 @@
+
+
+# Bidirectional Mode for NetGear_Async API
+
+
+
+ NetGear_Async's Bidirectional Mode
+
+
+## Overview
+
+!!! new "New in v0.2.2"
+ This document was added in `v0.2.2`.
+
+Bidirectional Mode enables seamless support for bidirectional data transmission between the Server and Client along with video-frames through its synchronous messaging patterns such as `zmq.PAIR` (ZMQ Pair Pattern) & `zmq.REQ/zmq.REP` (ZMQ Request/Reply Pattern) in NetGear_Async API.
+
+In Bidirectional Mode, we utilize the NetGear_Async API's [`transceive_data`](../../../../bonus/reference/NetGear_Async/#vidgear.gears.asyncio.netgear_async.NetGear_Async.transceive_data) method for transmitting data _(at Client's end)_ and receiving data _(at Server's end)_, all while transferring frames in real-time.
+
+This mode can be easily activated in NetGear_Async through `bidirectional_mode` attribute of its [`options`](../../params/#options) dictionary parameter during initialization.
+
+
+
+
+!!! danger "Important"
+
+ * In Bidirectional Mode, `zmq.PAIR`(ZMQ Pair) & `zmq.REQ/zmq.REP`(ZMQ Request/Reply) are **ONLY** Supported messaging patterns. Accessing this mode with any other messaging pattern, will result in `ValueError`.
+
+ * Bidirectional Mode ==only works with [**User-defined Custom Source**](../../usage/#using-netgear_async-with-a-custom-sourceopencv) on Server end==. Otherwise, NetGear_Async API will throw `ValueError`.
+
+ * Bidirectional Mode enables you to send data of **ANY**[^1] Data-type along with frame bidirectionally.
+
+ * NetGear_Async API will throw `RuntimeError` if Bidirectional Mode is disabled at Server end or Client end but not both.
+
+    * Bidirectional Mode may lead to additional **LATENCY** depending upon the size of the data being transferred bidirectionally. User discretion is advised!
+
+
+
+
+
+
+## Exclusive Method and Parameter
+
+To send data bidirectionally, NetGear_Async API provides the following exclusive method and parameter:
+
+!!! alert "`transceive_data` only works when Bidirectional Mode is enabled."
+
+* [`transceive_data`](../../../../bonus/reference/NetGear_Async/#vidgear.gears.asyncio.netgear_async.NetGear_Async.transceive_data): It's a bidirectional mode exclusive method to transmit data _(in Receive mode)_ and receive data _(in Send mode)_, all while transferring frames in real-time.
+
+    * `data`: In the `transceive_data` method, this parameter enables the user to input data _(of **ANY**[^1] datatype)_ for sending back to the Server at the Client's end.
+
+
+
+
+
+
+## Usage Examples
+
+
+!!! warning "For Bidirectional Mode, NetGear_Async requires a [User-defined Custom Source](../../usage/#using-netgear_async-with-a-custom-sourceopencv) at its Server end, otherwise it will throw `ValueError`."
+
+
+### Bare-Minimum Usage with OpenCV
+
+Following is the bare-minimum code you need to get started with Bidirectional Mode over Custom Source Server built using OpenCV and NetGear_Async API:
+
+#### Server End
+
+Open your favorite terminal and execute the following python code:
+
+!!! tip "You can terminate both sides anytime by pressing ++ctrl+"C"++ on your keyboard!"
+
+```python
+# import library
+from vidgear.gears.asyncio import NetGear_Async
+import cv2, asyncio
+
+# activate Bidirectional mode
+options = {"bidirectional_mode": True}
+
+# initialize Server without any source
+server = NetGear_Async(source=None, logging=True, **options)
+
+# Create a async frame generator as custom source
+async def my_frame_generator():
+
+ # !!! define your own video source here !!!
+ # Open any valid video stream(for e.g `foo.mp4` file)
+ stream = cv2.VideoCapture("foo.mp4")
+
+ # loop over stream until its terminated
+ while True:
+ # read frames
+ (grabbed, frame) = stream.read()
+
+ # check for empty frame
+ if not grabbed:
+ break
+
+ # {do something with the frame to be sent here}
+
+ # prepare data to be sent(a simple text in our case)
+ target_data = "Hello, I am a Server."
+
+ # receive data from Client
+ recv_data = await server.transceive_data()
+
+ # print data just received from Client
+ if not (recv_data is None):
+ print(recv_data)
+
+ # send our frame & data
+ yield (target_data, frame)
+
+ # sleep for sometime
+ await asyncio.sleep(0)
+
+ # safely close video stream
+ stream.release()
+
+
+if __name__ == "__main__":
+ # set event loop
+ asyncio.set_event_loop(server.loop)
+ # Add your custom source generator to Server configuration
+ server.config["generator"] = my_frame_generator()
+ # Launch the Server
+ server.launch()
+ try:
+ # run your main function task until it is complete
+ server.loop.run_until_complete(server.task)
+ except (KeyboardInterrupt, SystemExit):
+ # wait for interrupts
+ pass
+ finally:
+ # finally close the server
+ server.close()
+```
+
+#### Client End
+
+Then open another terminal on the same system and execute the following python code and see the output:
+
+!!! tip "You can terminate client anytime by pressing ++ctrl+"C"++ on your keyboard!"
+
+```python
+# import libraries
+from vidgear.gears.asyncio import NetGear_Async
+import cv2, asyncio
+
+# activate Bidirectional mode
+options = {"bidirectional_mode": True}
+
+# define and launch Client with `receive_mode=True`
+client = NetGear_Async(receive_mode=True, logging=True, **options).launch()
+
+
+# Create a async function where you want to show/manipulate your received frames
+async def main():
+ # loop over Client's Asynchronous Frame Generator
+ async for (data, frame) in client.recv_generator():
+
+ # do something with receive data from server
+ if not (data is None):
+ # let's print it
+ print(data)
+
+ # {do something with received frames here}
+
+ # Show output window(comment these lines if not required)
+ cv2.imshow("Output Frame", frame)
+ cv2.waitKey(1) & 0xFF
+
+ # prepare data to be sent
+ target_data = "Hi, I am a Client here."
+ # send our data to server
+ await client.transceive_data(data=target_data)
+
+ # await before continuing
+ await asyncio.sleep(0)
+
+
+if __name__ == "__main__":
+ # Set event loop to client's
+ asyncio.set_event_loop(client.loop)
+ try:
+ # run your main function task until it is complete
+ client.loop.run_until_complete(main())
+ except (KeyboardInterrupt, SystemExit):
+ # wait for interrupts
+ pass
+
+ # close all output window
+ cv2.destroyAllWindows()
+
+ # safely close client
+ client.close()
+```
+
+
+
+
+
+
+### Using Bidirectional Mode with Variable Parameters
+
+
+#### Client's End
+
+Open a terminal on Client System _(where you want to display the input frames received from the Server)_ and execute the following python code:
+
+!!! info "Note down the IP-address of this system(required at Server's end) by executing the command: `hostname -I` and also replace it in the following code."
+
+!!! tip "You can terminate client anytime by pressing ++ctrl+"C"++ on your keyboard!"
+
+```python
+# import libraries
+from vidgear.gears.asyncio import NetGear_Async
+import cv2, asyncio
+
+# activate Bidirectional mode
+options = {"bidirectional_mode": True}
+
+# Define NetGear_Async Client at given IP address and define parameters
+# !!! change following IP address '192.168.x.xxx' with yours !!!
+client = NetGear_Async(
+ address="192.168.x.xxx",
+ port="5454",
+ protocol="tcp",
+ pattern=1,
+ receive_mode=True,
+ logging=True,
+ **options
+)
+
+# Create a async function where you want to show/manipulate your received frames
+async def main():
+ # loop over Client's Asynchronous Frame Generator
+ async for (data, frame) in client.recv_generator():
+
+ # do something with receive data from server
+ if not (data is None):
+ # let's print it
+ print(data)
+
+ # {do something with received frames here}
+
+ # Show output window(comment these lines if not required)
+ cv2.imshow("Output Frame", frame)
+ cv2.waitKey(1) & 0xFF
+
+ # prepare data to be sent
+ target_data = "Hi, I am a Client here."
+ # send our data to server
+ await client.transceive_data(data=target_data)
+
+ # await before continuing
+ await asyncio.sleep(0)
+
+
+if __name__ == "__main__":
+ # Set event loop to client's
+ asyncio.set_event_loop(client.loop)
+ try:
+ # run your main function task until it is complete
+ client.loop.run_until_complete(main())
+ except (KeyboardInterrupt, SystemExit):
+ # wait for interrupts
+ pass
+
+ # close all output window
+ cv2.destroyAllWindows()
+
+ # safely close client
+ client.close()
+```
+
+
+
+#### Server End
+
+Now, open a terminal on the Server System _(a Raspberry Pi with Camera Module)_, and execute the following python code:
+
+!!! info "Replace the IP address in the following code with Client's IP address you noted earlier."
+
+!!! tip "You can terminate stream on both side anytime by pressing ++ctrl+"C"++ on your keyboard!"
+
+```python
+# import library
+from vidgear.gears.asyncio import NetGear_Async
+from vidgear.gears import PiGear
+import cv2, asyncio
+
+# activate Bidirectional mode
+options = {"bidirectional_mode": True}
+
+# initialize Server without any source at given IP address and define parameters
+# !!! change following IP address '192.168.x.xxx' with client's IP address !!!
+server = NetGear_Async(
+ source=None,
+ address="192.168.x.xxx",
+ port="5454",
+ protocol="tcp",
+ pattern=1,
+ logging=True,
+ **options
+)
+
+# Create a async frame generator as custom source
+async def my_frame_generator():
+
+ # !!! define your own video source here !!!
+    # Open the Pi camera module video stream with PiGear
+ # add various Picamera tweak parameters to dictionary
+ options = {
+ "hflip": True,
+ "exposure_mode": "auto",
+ "iso": 800,
+ "exposure_compensation": 15,
+ "awb_mode": "horizon",
+ "sensor_mode": 0,
+ }
+
+ # open pi video stream with defined parameters
+ stream = PiGear(resolution=(640, 480), framerate=60, logging=True, **options).start()
+
+ # loop over stream until its terminated
+ while True:
+ # read frames
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+ # {do something with the frame to be sent here}
+
+ # prepare data to be sent(a simple text in our case)
+ target_data = "Hello, I am a Server."
+
+ # receive data from Client
+ recv_data = await server.transceive_data()
+
+ # print data just received from Client
+ if not (recv_data is None):
+ print(recv_data)
+
+ # send our frame & data
+ yield (target_data, frame)
+
+ # sleep for sometime
+ await asyncio.sleep(0)
+
+ # safely close video stream
+ stream.stop()
+
+
+if __name__ == "__main__":
+ # set event loop
+ asyncio.set_event_loop(server.loop)
+ # Add your custom source generator to Server configuration
+ server.config["generator"] = my_frame_generator()
+ # Launch the Server
+ server.launch()
+ try:
+ # run your main function task until it is complete
+ server.loop.run_until_complete(server.task)
+ except (KeyboardInterrupt, SystemExit):
+ # wait for interrupts
+ pass
+ finally:
+ # finally close the server
+ server.close()
+```
+
+
+
+
+
+
+### Using Bidirectional Mode for Video-Frames Transfer
+
+
+In this bare-minimum example, we will be sending video-frames _(3-Dimensional numpy arrays)_ of the same video bidirectionally at the same time, to test the real-time performance and synchronization between the Server and the Client using this (Bidirectional) Mode.
+
+!!! tip "This feature is great for building applications like Real-Time Video Chat."
+
+!!! info "We're also using [`reducer()`](../../../../../bonus/reference/helper/#vidgear.gears.helper.reducer--reducer) method for reducing frame-size on-the-go for additional performance."
+
+!!! warning "Remember, sending large HQ video-frames requires more network bandwidth and larger packet sizes, which may lead to video latency!"
+
+#### Server End
+
+Open your favorite terminal and execute the following python code:
+
+!!! tip "You can terminate both side anytime by pressing ++ctrl+"C"++ on your keyboard!"
+
+!!! alert "Server end can only send [numpy.ndarray](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) datatype as frame but not as data."
+
+```python
+# import library
+from vidgear.gears.asyncio import NetGear_Async
+from vidgear.gears.asyncio.helper import reducer
+import cv2, asyncio
+import numpy as np
+
+# activate Bidirectional mode
+options = {"bidirectional_mode": True}
+
+# Define NetGear Server without any source and with defined parameters
+server = NetGear_Async(source=None, pattern=1, logging=True, **options)
+
+# Create a async frame generator as custom source
+async def my_frame_generator():
+ # !!! define your own video source here !!!
+ # Open any valid video stream(for e.g `foo.mp4` file)
+ stream = cv2.VideoCapture("foo.mp4")
+ # loop over stream until its terminated
+ while True:
+
+ # read frames
+ (grabbed, frame) = stream.read()
+
+ # check for empty frame
+ if not grabbed:
+ break
+
+ # reducer frames size if you want more performance, otherwise comment this line
+ frame = await reducer(frame, percentage=30) # reduce frame by 30%
+
+ # {do something with the frame to be sent here}
+
+        # receive data from Client
+        recv_data = await server.transceive_data()
+
+        # check if any data was received from Client
+        if not (recv_data is None):
+ # check data is a numpy frame
+ if isinstance(recv_data, np.ndarray):
+
+ # {do something with received numpy frame here}
+
+ # Let's show it on output window
+ cv2.imshow("Received Frame", recv_data)
+ cv2.waitKey(1) & 0xFF
+ else:
+ # otherwise just print data
+ print(recv_data)
+
+ # prepare data to be sent(a simple text in our case)
+ target_data = "Hello, I am a Server."
+
+ # send our frame & data to client
+ yield (target_data, frame)
+
+ # sleep for sometime
+ await asyncio.sleep(0)
+
+ # safely close video stream
+ stream.release()
+
+
+if __name__ == "__main__":
+ # set event loop
+ asyncio.set_event_loop(server.loop)
+ # Add your custom source generator to Server configuration
+ server.config["generator"] = my_frame_generator()
+ # Launch the Server
+ server.launch()
+ try:
+ # run your main function task until it is complete
+ server.loop.run_until_complete(server.task)
+ except (KeyboardInterrupt, SystemExit):
+ # wait for interrupts
+ pass
+ finally:
+ # finally close the server
+ server.close()
+```
+
+
+
+#### Client End
+
+Then open another terminal on the same system and execute the following python code and see the output:
+
+!!! tip "You can terminate client anytime by pressing ++ctrl+"C"++ on your keyboard!"
+
+```python
+# import libraries
+from vidgear.gears.asyncio import NetGear_Async
+from vidgear.gears.asyncio.helper import reducer
+import cv2, asyncio
+
+# activate Bidirectional mode
+options = {"bidirectional_mode": True}
+
+# define and launch Client with `receive_mode=True`
+client = NetGear_Async(pattern=1, receive_mode=True, logging=True, **options).launch()
+
+# Create a async function where you want to show/manipulate your received frames
+async def main():
+ # !!! define your own video source here !!!
+ # again open the same video stream for comparison
+ stream = cv2.VideoCapture("foo.mp4")
+ # loop over Client's Asynchronous Frame Generator
+ async for (server_data, frame) in client.recv_generator():
+
+ # check for server data
+ if not (server_data is None):
+
+ # {do something with the server data here}
+
+ # lets print extracted server data
+ print(server_data)
+
+ # {do something with received frames here}
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+ key = cv2.waitKey(1) & 0xFF
+
+ # read frame target data from stream to be sent to server
+ (grabbed, target_data) = stream.read()
+ # check for frame
+ if grabbed:
+ # reducer frames size if you want more performance, otherwise comment this line
+ target_data = await reducer(
+ target_data, percentage=30
+ ) # reduce frame by 30%
+ # send our frame data
+ await client.transceive_data(data=target_data)
+
+ # await before continuing
+ await asyncio.sleep(0)
+
+ # safely close video stream
+ stream.release()
+
+
+if __name__ == "__main__":
+ # Set event loop to client's
+ asyncio.set_event_loop(client.loop)
+ try:
+ # run your main function task until it is complete
+ client.loop.run_until_complete(main())
+ except (KeyboardInterrupt, SystemExit):
+ # wait for interrupts
+ pass
+ # close all output window
+ cv2.destroyAllWindows()
+ # safely close client
+ client.close()
+```
+
+
+
+
+[^1]:
+
+ !!! warning "Additional data of [numpy.ndarray](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) datatype is **ONLY SUPPORTED** at Client's end with [`transceive_data`](../../../../bonus/reference/NetGear_Async/#vidgear.gears.asyncio.netgear_async.NetGear_Async.transceive_data) method using its `data` parameter. Whereas Server end can only send [numpy.ndarray](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) datatype as frame but not as data."
+
+
+
\ No newline at end of file
diff --git a/docs/gears/netgear_async/overview.md b/docs/gears/netgear_async/overview.md
index fd982bc6d..fc2e505cf 100644
--- a/docs/gears/netgear_async/overview.md
+++ b/docs/gears/netgear_async/overview.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -26,11 +26,13 @@ limitations under the License.
## Overview
-> _NetGear_Async can generate the same performance as [NetGear API](../../netgear/overview/) at about one-third the memory consumption, and also provide complete server-client handling with various options to use variable protocols/patterns similar to NetGear, but it doesn't support any [NetGear's Exclusive Modes](../../netgear/overview/#exclusive-modes) yet._
+> _NetGear_Async can generate the same performance as [NetGear API](../../netgear/overview/) at about one-third the memory consumption, and also provides complete server-client handling with various options to use variable protocols/patterns similar to NetGear, but lacks in terms of flexibility as it supports only a few [NetGear's Exclusive Modes](../../netgear/overview/#exclusive-modes)._
NetGear_Async is built on [`zmq.asyncio`](https://pyzmq.readthedocs.io/en/latest/api/zmq.asyncio.html), and powered by a high-performance asyncio event loop called [**`uvloop`**](https://github.com/MagicStack/uvloop) to achieve unmatchable high-speed and lag-free video streaming over the network with minimal resource constraints. NetGear_Async can transfer thousands of frames in just a few seconds without causing any significant load on your system.
-NetGear_Async provides complete server-client handling and options to use variable protocols/patterns similar to [NetGear API](../../netgear/overview/) but doesn't support any [NetGear's Exclusive Modes](../../netgear/overview/#exclusive-modes) yet. Furthermore, NetGear_Async allows us to define our custom Server as source to manipulate frames easily before sending them across the network(see this [doc](../usage/#using-netgear_async-with-a-custom-sourceopencv) example).
+NetGear_Async provides complete server-client handling and options to use variable protocols/patterns similar to [NetGear API](../../netgear/overview/). Furthermore, NetGear_Async allows us to define our custom Server as source to transform frames easily before sending them across the network(see this [doc](../usage/#using-netgear_async-with-a-custom-sourceopencv) example).
+
+NetGear_Async now supports additional [**bidirectional data transmission**](../advanced/bidirectional_mode) between receiver(client) and sender(server) while transferring frames. Users can easily build complex applications such as [Real-Time Video Chat](../advanced/bidirectional_mode/#using-bidirectional-mode-for-video-frames-transfer) in just a few lines of code.
In addition to all this, NetGear_Async API also provides internal wrapper around [VideoGear](../../videogear/overview/), which itself provides internal access to both [CamGear](../../camgear/overview/) and [PiGear](../../pigear/overview/) APIs, thereby granting it exclusive power for transferring frames incoming from any source to the network.
diff --git a/docs/gears/netgear_async/params.md b/docs/gears/netgear_async/params.md
index f727ae5ee..4b1180ba4 100644
--- a/docs/gears/netgear_async/params.md
+++ b/docs/gears/netgear_async/params.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -150,6 +150,34 @@ In NetGear_Async, the Receiver-end keeps tracks if frames are received from Serv
NetGear_Async(timeout=5.0) # sets 5secs timeout
```
+## **`options`**
+
+This parameter provides the flexibility to alter various NetGear_Async API's internal properties and modes.
+
+**Data-Type:** Dictionary
+
+**Default Value:** Its default value is `{}`
+
+**Usage:**
+
+
+!!! abstract "Supported dictionary attributes for NetGear_Async API"
+
+ * **`bidirectional_mode`** (_boolean_) : This internal attribute activates the exclusive [**Bidirectional Mode**](../advanced/bidirectional_mode/), if enabled(`True`).
+
+
+The desired attributes can be passed to NetGear_Async API as follows:
+
+```python
+# formatting parameters as dictionary attributes
+options = {
+ "bidirectional_mode": True,
+}
+# assigning it
+NetGear_Async(logging=True, **options)
+```
+
+
diff --git a/docs/gears/netgear_async/usage.md b/docs/gears/netgear_async/usage.md
index 38a1933f7..63323b049 100644
--- a/docs/gears/netgear_async/usage.md
+++ b/docs/gears/netgear_async/usage.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -100,7 +100,7 @@ async def main():
key = cv2.waitKey(1) & 0xFF
# await before continuing
- await asyncio.sleep(0.00001)
+ await asyncio.sleep(0)
if __name__ == "__main__":
@@ -162,7 +162,7 @@ async def main():
key = cv2.waitKey(1) & 0xFF
# await before continuing
- await asyncio.sleep(0.00001)
+ await asyncio.sleep(0)
if __name__ == "__main__":
@@ -223,7 +223,9 @@ if __name__ == "__main__":
## Using NetGear_Async with a Custom Source(OpenCV)
-NetGear_Async allows you to easily define your own custom Source at Server-end that you want to use to manipulate your frames before sending them onto the network. Let's implement a bare-minimum example with a Custom Source using NetGear_Async API and OpenCV:
+NetGear_Async allows you to easily define your own custom Source at Server-end that you want to use to transform your frames before sending them onto the network.
+
+Let's implement a bare-minimum example with a Custom Source using NetGear_Async API and OpenCV:
### Server's End
@@ -237,16 +239,16 @@ from vidgear.gears.asyncio import NetGear_Async
import cv2, asyncio
# initialize Server without any source
-server = NetGear_Async(logging=True)
+server = NetGear_Async(source=None, logging=True)
+
+# !!! define your own video source here !!!
+# Open any video stream such as live webcam
+# video stream on first index(i.e. 0) device
+stream = cv2.VideoCapture(0)
# Create a async frame generator as custom source
async def my_frame_generator():
- # !!! define your own video source here !!!
- # Open any video stream such as live webcam
- # video stream on first index(i.e. 0) device
- stream = cv2.VideoCapture(0)
-
# loop over stream until its terminated
while True:
@@ -255,7 +257,6 @@ async def my_frame_generator():
# check if frame empty
if not grabbed:
- # if True break the infinite loop
break
# do something with the frame to be sent here
@@ -263,7 +264,7 @@ async def my_frame_generator():
# yield frame
yield frame
# sleep for sometime
- await asyncio.sleep(0.00001)
+ await asyncio.sleep(0)
if __name__ == "__main__":
@@ -280,6 +281,8 @@ if __name__ == "__main__":
# wait for interrupts
pass
finally:
+ # close stream
+ stream.release()
# finally close the server
server.close()
```
@@ -313,7 +316,7 @@ async def main():
key = cv2.waitKey(1) & 0xFF
# await before continuing
- await asyncio.sleep(0.01)
+ await asyncio.sleep(0)
if __name__ == "__main__":
@@ -371,6 +374,7 @@ if __name__ == "__main__":
```
### Client's End
+
Then open another terminal on the same system and execute the following python code and see the output:
!!! warning "Client will throw TimeoutError if it fails to connect to the Server in given [`timeout`](../params/#timeout) value!"
@@ -404,7 +408,7 @@ async def main():
key = cv2.waitKey(1) & 0xFF
# await before continuing
- await asyncio.sleep(0.00001)
+ await asyncio.sleep(0)
if __name__ == "__main__":
@@ -425,4 +429,10 @@ if __name__ == "__main__":
writer.close()
```
-
\ No newline at end of file
+
+
+## Bonus Examples
+
+!!! example "Checkout more advanced NetGear_Async examples with unusual configuration [here ➶](../../../help/netgear_async_ex/)"
+
+
\ No newline at end of file
diff --git a/docs/gears/pigear/overview.md b/docs/gears/pigear/overview.md
index d5b93d461..2cba27deb 100644
--- a/docs/gears/pigear/overview.md
+++ b/docs/gears/pigear/overview.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/docs/gears/pigear/params.md b/docs/gears/pigear/params.md
index 9ce6e0f1f..b28e72fd1 100644
--- a/docs/gears/pigear/params.md
+++ b/docs/gears/pigear/params.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/docs/gears/pigear/usage.md b/docs/gears/pigear/usage.md
index 95de21793..78ec04348 100644
--- a/docs/gears/pigear/usage.md
+++ b/docs/gears/pigear/usage.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -204,4 +204,76 @@ cv2.destroyAllWindows()
stream.stop()
```
+
+
+## Using PiGear with WriteGear API
+
+PiGear can be easily used with WriteGear API directly without any compatibility issues. The complete usage example is as follows:
+
+```python
+# import required libraries
+from vidgear.gears import PiGear
+from vidgear.gears import WriteGear
+import cv2
+
+# add various Picamera tweak parameters to dictionary
+options = {
+ "hflip": True,
+ "exposure_mode": "auto",
+ "iso": 800,
+ "exposure_compensation": 15,
+ "awb_mode": "horizon",
+ "sensor_mode": 0,
+}
+
+# define suitable (Codec,CRF,preset) FFmpeg parameters for writer
+output_params = {"-vcodec": "libx264", "-crf": 0, "-preset": "fast"}
+
+# open pi video stream with defined parameters
+stream = PiGear(resolution=(640, 480), framerate=60, logging=True, **options).start()
+
+# Define writer with defined parameters and suitable output filename for e.g. `Output.mp4`
+writer = WriteGear(output_filename="Output.mp4", logging=True, **output_params)
+
+# loop over
+while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+ # lets convert frame to gray for this example
+ gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
+
+ # write gray frame to writer
+ writer.write(gray)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+# close output window
+cv2.destroyAllWindows()
+
+# safely close video stream
+stream.stop()
+
+# safely close writer
+writer.close()
+```
+
+
+
+## Bonus Examples
+
+!!! example "Checkout more advanced PiGear examples with unusual configuration [here ➶](../../../help/pigear_ex/)"
+
\ No newline at end of file
diff --git a/docs/gears/screengear/overview.md b/docs/gears/screengear/overview.md
index 9b1dbec55..e6505cc79 100644
--- a/docs/gears/screengear/overview.md
+++ b/docs/gears/screengear/overview.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/docs/gears/screengear/params.md b/docs/gears/screengear/params.md
index 0c22d9ae6..bc782b063 100644
--- a/docs/gears/screengear/params.md
+++ b/docs/gears/screengear/params.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/docs/gears/screengear/usage.md b/docs/gears/screengear/usage.md
index e959a12d5..9dd7c6ce3 100644
--- a/docs/gears/screengear/usage.md
+++ b/docs/gears/screengear/usage.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -122,7 +122,7 @@ from vidgear.gears import ScreenGear
import cv2
# open video stream with defined parameters with monitor at index `1` selected
-stream = ScreenGear(monitor=1, logging=True, **options).start()
+stream = ScreenGear(monitor=1, logging=True).start()
# loop over
while True:
@@ -167,7 +167,7 @@ from vidgear.gears import ScreenGear
import cv2
# open video stream with defined parameters and `mss` backend for extracting frames.
-stream = ScreenGear(backend="mss", logging=True, **options).start()
+stream = ScreenGear(backend="mss", logging=True).start()
# loop over
while True:
@@ -321,4 +321,10 @@ stream.stop()
writer.close()
```
-
\ No newline at end of file
+
+
+## Bonus Examples
+
+!!! example "Checkout more advanced ScreenGear examples with unusual configuration [here ➶](../../../help/screengear_ex/)"
+
+
\ No newline at end of file
diff --git a/docs/gears/stabilizer/overview.md b/docs/gears/stabilizer/overview.md
index d6fe2f3b7..7d6af0d99 100644
--- a/docs/gears/stabilizer/overview.md
+++ b/docs/gears/stabilizer/overview.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -29,7 +29,7 @@ limitations under the License.
VidGear's Stabilizer in Action (Video Credits @SIGGRAPH2013)
-!!! info "This video is transcoded with [**StreamGear API**](../../streamgear/overview/) and hosted on [GitHub Repository](https://github.com/abhiTronix/vidgear-docs-additionals) and served with [raw.githack.com](https://raw.githack.com)"
+!!! info "This video is transcoded with [**StreamGear API**](../../streamgear/introduction/) and hosted on [GitHub Repository](https://github.com/abhiTronix/vidgear-docs-additionals) and served with [raw.githack.com](https://raw.githack.com)"
@@ -37,7 +37,7 @@ limitations under the License.
> Stabilizer is an auxiliary class that enables Video Stabilization for vidgear with minimalistic latency, and at the expense of little to no additional computational requirements.
-The basic idea behind it is to tracks and save the salient feature array for the given number of frames and then uses these anchor point to cancel out all perturbations relative to it for the incoming frames in the queue. This class relies heavily on [**Threaded Queue mode**](../../../bonus/TQM/) for error-free & ultra-fast frame handling.
+The basic idea behind it is to track and save the salient feature array for a given number of frames, and then use these anchor points to cancel out all perturbations relative to them for the incoming frames in the queue. This class relies on [**Fixed-Size Python Queues**](../../../bonus/TQM/#b-utilizes-fixed-size-queues) for error-free & ultra-fast frame handling.
!!! tip "For more detailed information on Stabilizer working, See [this blogpost ➶](https://learnopencv.com/video-stabilization-using-point-feature-matching-in-opencv/)"
diff --git a/docs/gears/stabilizer/params.md b/docs/gears/stabilizer/params.md
index 8d9560de9..f49b21519 100644
--- a/docs/gears/stabilizer/params.md
+++ b/docs/gears/stabilizer/params.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/docs/gears/stabilizer/usage.md b/docs/gears/stabilizer/usage.md
index 6931c1c32..acd7ca2ae 100644
--- a/docs/gears/stabilizer/usage.md
+++ b/docs/gears/stabilizer/usage.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -67,7 +67,7 @@ while True:
if stabilized_frame is None:
continue
- # {do something with the stabilized_frame frame here}
+ # {do something with the stabilized frame here}
# Show output window
cv2.imshow("Output Stabilized Frame", stabilized_frame)
@@ -121,7 +121,7 @@ while True:
if stabilized_frame is None:
continue
- # {do something with the frame here}
+ # {do something with the stabilized frame here}
# Show output window
cv2.imshow("Stabilized Frame", stabilized_frame)
@@ -145,7 +145,7 @@ stream.release()
## Using Stabilizer with Variable Parameters
-Stabilizer class provide certain [parameters](../params/) which you can use to manipulate its internal properties. The complete usage example is as follows:
+Stabilizer class provide certain [parameters](../params/) which you can use to tweak its internal properties. The complete usage example is as follows:
```python
# import required libraries
@@ -176,7 +176,7 @@ while True:
if stabilized_frame is None:
continue
- # {do something with the stabilized_frame frame here}
+ # {do something with the stabilized frame here}
# Show output window
cv2.imshow("Output Stabilized Frame", stabilized_frame)
@@ -203,6 +203,8 @@ stream.stop()
VideoGear's stabilizer can be used in conjunction with WriteGear API directly without any compatibility issues. The complete usage example is as follows:
+!!! tip "You can also add live audio input to the WriteGear pipeline. See this [bonus example](../../../help)"
+
```python
# import required libraries
from vidgear.gears.stabilizer import Stabilizer
@@ -236,7 +238,7 @@ while True:
if stabilized_frame is None:
continue
- # {do something with the frame here}
+ # {do something with the stabilized frame here}
# write stabilized frame to writer
writer.write(stabilized_frame)
@@ -271,4 +273,10 @@ writer.close()
!!! example "The complete usage example can be found [here ➶](../../videogear/usage/#using-videogear-with-video-stabilizer-backend)"
+
+
+## Bonus Examples
+
+!!! example "Checkout more advanced Stabilizer examples with unusual configuration [here ➶](../../../help/stabilizer_ex/)"
+
\ No newline at end of file
diff --git a/docs/gears/streamgear/ffmpeg_install.md b/docs/gears/streamgear/ffmpeg_install.md
index 763489bce..2062b5928 100644
--- a/docs/gears/streamgear/ffmpeg_install.md
+++ b/docs/gears/streamgear/ffmpeg_install.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -21,7 +21,7 @@ limitations under the License.
# FFmpeg Installation Instructions
-
+
@@ -68,7 +68,7 @@ The StreamGear API supports _Auto-Installation_ and _Manual Configuration_ metho
!!! quote "This is a recommended approach on Windows Machines"
-If StreamGear API not receives any input from the user on [**`custom_ffmpeg`**](../params/#custom_ffmpeg) parameter, then on Windows system StreamGear API **auto-generates** the required FFmpeg Static Binaries, according to your system specifications, into the temporary directory _(for e.g. `C:\Temp`)_ of your machine.
+If the StreamGear API does not receive any input from the user on the [**`custom_ffmpeg`**](../params/#custom_ffmpeg) parameter, then on Windows systems the StreamGear API **auto-generates** the required FFmpeg Static Binaries from a dedicated [**Github Server**](https://github.com/abhiTronix/FFmpeg-Builds) into the temporary directory _(for e.g. `C:\Temp`)_ of your machine.
!!! warning Important Information
@@ -85,7 +85,7 @@ If StreamGear API not receives any input from the user on [**`custom_ffmpeg`**](
* **Download:** You can also manually download the latest Windows Static Binaries(*based on your machine arch(x86/x64)*) from the link below:
- *Windows Static Binaries:* http://ffmpeg.zeranoe.com/builds/
+ *Windows Static Binaries:* https://ffmpeg.org/download.html#build-windows
* **Assignment:** Then, you can easily assign the custom path to the folder containing FFmpeg executables(`for e.g 'C:/foo/Downloads/ffmpeg/bin'`) or path of `ffmpeg.exe` executable itself to the [**`custom_ffmpeg`**](../params/#custom_ffmpeg) parameter in the StreamGear API.
diff --git a/docs/gears/streamgear/introduction.md b/docs/gears/streamgear/introduction.md
new file mode 100644
index 000000000..b460a41ee
--- /dev/null
+++ b/docs/gears/streamgear/introduction.md
@@ -0,0 +1,179 @@
+
+
+# StreamGear API
+
+
+
+
+ StreamGear API's generalized workflow
+
+
+
+## Overview
+
+> StreamGear automates the transcoding workflow for generating _Ultra-Low Latency, High-Quality, Dynamic & Adaptive Streaming Formats (such as MPEG-DASH and Apple HLS)_ in just a few lines of Python code.
+
+StreamGear provides a standalone, highly extensible, and flexible wrapper around [**FFmpeg**](https://ffmpeg.org/) multimedia framework for generating chunked-encoded media segments of the content.
+
+StreamGear is an out-of-the-box solution for transcoding source videos/audio files & real-time video frames and breaking them into a sequence of multiple smaller chunks/segments of suitable lengths. These segments make it possible to stream videos at different quality levels _(different bitrates or spatial resolutions)_ and can be switched in the middle of a video from one quality level to another – if bandwidth permits – on a per-segment basis. A user can serve these segments on a web server that makes it easier to download them through HTTP standard-compliant GET requests.
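The per-segment quality switching described above can be sketched as a simple bandwidth-driven rendition picker. The rendition ladder and bitrate figures below are illustrative assumptions, not values produced by StreamGear:

```python
# hypothetical rendition ladder: (label, required bandwidth in kbps)
RENDITIONS = [("240p", 400), ("480p", 1000), ("720p", 2500), ("1080p", 5000)]

def pick_rendition(measured_kbps):
    # choose the highest-bitrate rendition the measured bandwidth can sustain,
    # falling back to the lowest rendition when even that does not fit
    viable = [r for r in RENDITIONS if r[1] <= measured_kbps]
    return max(viable, key=lambda r: r[1]) if viable else RENDITIONS[0]

# one independent decision per segment as bandwidth fluctuates mid-stream
choices = [pick_rendition(kbps)[0] for kbps in (3000, 5200, 800, 2600)]
print(choices)
```

Because every segment is a self-contained download, the client can drop to a lower rendition the moment bandwidth dips and climb back up afterwards.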
+
+StreamGear currently supports [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/) _(Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1)_ and [**Apple HLS**](https://developer.apple.com/documentation/http_live_streaming) _(HTTP Live Streaming)_.
+
+StreamGear also creates a Manifest file _(such as MPD in-case of DASH)_ or a Master Playlist _(such as M3U8 in-case of Apple HLS)_ besides segments, which describes segment information _(timing, URL, media characteristics like video resolution and adaptive bit rates)_ and is provided to the client before the streaming session.
+
+!!! alert "For streaming with older traditional protocols such as RTMP, RTSP/RTP you could use [WriteGear](../../writegear/introduction/) API instead."
+
+
+
+!!! new "New in v0.2.2"
+
+ Apple HLS support was added in `v0.2.2`.
+
+
+!!! danger "Important"
+
+ * StreamGear **requires** FFmpeg executables for its core operations. Follow these dedicated [Platform specific Installation Instructions ➶](../ffmpeg_install/) for its installation.
+
+ * :warning: StreamGear API will throw a **RuntimeError** if it fails to detect a valid FFmpeg executable on your system.
+
+ * It is advised to enable logging _([`logging=True`](../params/#logging))_ on the first run for easily identifying any runtime errors.
+
+!!! tip "Useful Links"
+
+ - Checkout [this detailed blogpost](https://ottverse.com/mpeg-dash-video-streaming-the-complete-guide/) on how MPEG-DASH works.
+ - Checkout [this detailed blogpost](https://ottverse.com/hls-http-live-streaming-how-does-it-work/) on how HLS works.
+ - Checkout [this detailed blogpost](https://ottverse.com/hls-http-live-streaming-how-does-it-work/) for HLS vs. MPEG-DASH comparison.
+
+
+
+
+## Mode of Operations
+
+StreamGear primarily operates in the following independent modes for transcoding:
+
+
+??? warning "Real-time Frames Mode is NOT Live-Streaming."
+
+ Rather, you can enable live-streaming in Real-time Frames Mode by using the exclusive [`-livestream`](../params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter in StreamGear API. Checkout [this usage example](../rtfm/usage/#bare-minimum-usage-with-live-streaming) for more information.
+
+
+- [**Single-Source Mode**](../ssm/overview): In this mode, StreamGear **transcodes an entire video file** _(as opposed to frame-by-frame)_ into a sequence of multiple smaller chunks/segments for streaming. This mode works exceptionally well when you're transcoding long-duration lossless videos (with audio) for streaming that require no interruptions. But on the downside, the provided source cannot be flexibly manipulated or transformed before being sent onto the FFmpeg pipeline for processing.
+
+- [**Real-time Frames Mode**](../rtfm/overview): In this mode, StreamGear directly **transcodes frame-by-frame** _(as opposed to an entire video file)_ into a sequence of multiple smaller chunks/segments for streaming. This mode works exceptionally well when you want to flexibly manipulate or transform [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) frames in real-time before sending them onto the FFmpeg pipeline for processing. But on the downside, audio has to be added manually _(as a separate source)_ for streams.
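The mode selection boils down to whether a valid `-video_source` attribute is present in the `stream_params` dictionary. A plain-Python sketch of that dispatch (illustrative only — the real API also validates the source path internally):

```python
def select_mode(stream_params):
    # Single-Source Mode is activated by a valid `-video_source` attribute;
    # otherwise StreamGear falls back to Real-time Frames Mode
    if stream_params.get("-video_source"):
        return "Single-Source Mode"
    return "Real-time Frames Mode"

print(select_mode({"-video_source": "input.mp4"}))  # file input -> whole-file transcode
print(select_mode({}))                              # no source  -> frame-by-frame transcode
```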
+
+
+
+## Importing
+
+You can import StreamGear API in your program as follows:
+
+```python
+from vidgear.gears import StreamGear
+```
+
+
+
+
+## Watch Demo
+
+=== "Watch MPEG-DASH Stream"
+
+ Watch StreamGear transcoded MPEG-DASH Stream:
+
+
+
+ !!! info "These video assets _(Manifest and segments)_ are hosted on [GitHub Repository](https://github.com/abhiTronix/vidgear-docs-additionals) and served with [raw.githack.com](https://raw.githack.com)"
+
+ !!! quote "Video Credits: [**"Tears of Steel"** - Project Mango Teaser](https://mango.blender.org/download/)"
+
+=== "Watch APPLE HLS Stream"
+
+ Watch StreamGear transcoded APPLE HLS Stream:
+
+
+
+ !!! info "These video assets _(Playlist and segments)_ are hosted on [GitHub Repository](https://github.com/abhiTronix/vidgear-docs-additionals) and served with [raw.githack.com](https://raw.githack.com)"
+
+ !!! quote "Video Credits: [**"Sintel"** - Project Durian Teaser](https://durian.blender.org/download/)"
+
+
+
+## Recommended Players
+
+=== "GUI Players"
+ - [x] **[MPV Player](https://mpv.io/):** _(recommended)_ MPV is a free, open source, and cross-platform media player. It supports a wide variety of media file formats, audio and video codecs, and subtitle types.
+ - [x] **[VLC Player](https://www.videolan.org/vlc/releases/3.0.0.html):** VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files as well as DVDs, Audio CDs, VCDs, and various streaming protocols.
+ - [x] **[Parole](https://docs.xfce.org/apps/parole/start):** _(UNIX only)_ Parole is a modern simple media player based on the GStreamer framework for Unix and Unix-like operating systems.
+
+=== "Command-Line Players"
+ - [x] **[MP4Client](https://github.com/gpac/gpac/wiki/MP4Client-Intro):** [GPAC](https://gpac.wp.imt.fr/home/) provides a highly configurable multimedia player called MP4Client. GPAC itself is an open source multimedia framework developed for research and academic purposes, and used in many media production chains.
+ - [x] **[ffplay](https://ffmpeg.org/ffplay.html):** FFplay is a very simple and portable media player using the FFmpeg libraries and the SDL library. It is mostly used as a testbed for the various FFmpeg APIs.
+
+=== "Online Players"
+ !!! alert "To run Online players locally, you'll need a HTTP server. For creating one yourself, See [this well-curated list ➶](https://gist.github.com/abhiTronix/7d2798bc9bc62e9e8f1e88fb601d7e7b)"
+
+ - [x] **[Clapper](https://github.com/clappr/clappr):** Clappr is an extensible media player for the web.
+ - [x] **[Shaka Player](https://github.com/google/shaka-player):** Shaka Player is an open-source JavaScript library for playing adaptive media in a browser.
+ - [x] **[MediaElementPlayer](https://github.com/mediaelement/mediaelement):** MediaElementPlayer is a complete HTML/CSS audio/video player.
+ - [x] **[Native MPEG-Dash + HLS Playback](https://chrome.google.com/webstore/detail/native-mpeg-dash-%20-hls-pl/cjfbmleiaobegagekpmlhmaadepdeedn?hl=en)(Chrome Extension):** Allow the browser to play HLS (m3u8) or MPEG-Dash (mpd) video urls 'natively' on chrome browsers.
+
+
+
+## Parameters
+
+
+
+
+
+## Bonus Examples
+
+!!! example "Checkout more advanced StreamGear examples with unusual configuration [here ➶](../../../help/streamgear_ex/)"
+
+
\ No newline at end of file
diff --git a/docs/gears/streamgear/overview.md b/docs/gears/streamgear/overview.md
deleted file mode 100644
index 20fee4db7..000000000
--- a/docs/gears/streamgear/overview.md
+++ /dev/null
@@ -1,146 +0,0 @@
-
-
-# StreamGear API
-
-
-
-
- StreamGear API's generalized workflow
-
-
-
-## Overview
-
-> StreamGear automates transcoding workflow for generating _Ultra-Low Latency, High-Quality, Dynamic & Adaptive Streaming Formats (such as MPEG-DASH)_ in just few lines of python code.
-
-StreamGear provides a standalone, highly extensible, and flexible wrapper around [**FFmpeg**](https://ffmpeg.org/) multimedia framework for generating chunked-encoded media segments of the content.
-
-SteamGear easily transcodes source videos/audio files & real-time video-frames and breaks them into a sequence of multiple smaller chunks/segments of fixed length. These segments make it possible to stream videos at different quality levels _(different bitrates or spatial resolutions)_ and can be switched in the middle of a video from one quality level to another – if bandwidth permits – on a per-segment basis. A user can serve these segments on a web server that makes it easier to download them through HTTP standard-compliant GET requests.
-
-SteamGear also creates a Manifest file _(such as MPD in-case of DASH)_ besides segments that describe these segment information _(timing, URL, media characteristics like video resolution and bit rates)_ and is provided to the client before the streaming session.
-
-SteamGear currently only supports [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/) _(Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1)_ , but other adaptive streaming technologies such as Apple HLS, Microsoft Smooth Streaming, will be added soon. Also, Multiple DRM support is yet to be implemented.
-
-
-
-!!! danger "Important"
-
- * StreamGear **MUST** requires FFmpeg executables for its core operations. Follow these dedicated [Platform specific Installation Instructions ➶](../ffmpeg_install/) for its installation.
-
- * :warning: StreamGear API will throw **RuntimeError**, if it fails to detect valid FFmpeg executables on your system.
-
- * It is advised to enable logging _([`logging=True`](../params/#logging))_ on the first run for easily identifying any runtime errors.
-
-
-
-## Mode of Operations
-
-StreamGear primarily works in two independent modes for transcoding which serves different purposes. These modes are as follows:
-
-### A. Single-Source Mode
-
-In this mode, StreamGear transcodes entire video/audio file _(as opposed to frames by frame)_ into a sequence of multiple smaller chunks/segments for streaming. This mode works exceptionally well, when you're transcoding lossless long-duration videos(with audio) for streaming and required no extra efforts or interruptions. But on the downside, the provided source cannot be changed or manipulated before sending onto FFmpeg Pipeline for processing. This mode can be easily activated by assigning suitable video path as input to [`-video_source`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter, during StreamGear initialization. ***Learn more about this mode [here ➶](../usage/#a-single-source-mode)***
-
-### B. Real-time Frames Mode
-
-When no valid input is received on [`-video_source`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter, StreamGear API activates this mode where it directly transcodes video-frames _(as opposed to a entire file)_, into a sequence of multiple smaller chunks/segments for streaming. In this mode, StreamGear supports real-time [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) frames, and process them over FFmpeg pipeline. But on the downside, audio has to added manually _(as separate source)_ for streams. ***Learn more about this mode [here ➶](../usage/#b-real-time-frames-mode)***
-
-
-
-
-## Watch Demo
-
-Watch StreamGear transcoded MPEG-DASH Stream:
-
-
-
-!!! info "This video assets _(Manifest and segments)_ are hosted on [GitHub Repository](https://github.com/abhiTronix/vidgear-docs-additionals) and served with [raw.githack.com](https://raw.githack.com)"
-
-!!! quote "Video Credits: [**"Tears of Steel"** - Project Mango Teaser](https://mango.blender.org/download/)"
-
-
-
-## Recommended Stream Players
-
-### GUI Players
-
-- [x] **[MPV Player](https://mpv.io/):** _(recommended)_ MPV is a free, open source, and cross-platform media player. It supports a wide variety of media file formats, audio and video codecs, and subtitle types.
-- [x] **[VLC Player](https://www.videolan.org/vlc/releases/3.0.0.html):** VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files as well as DVDs, Audio CDs, VCDs, and various streaming protocols.
-- [x] **[Parole](https://docs.xfce.org/apps/parole/start):** _(UNIX only)_ Parole is a modern simple media player based on the GStreamer framework for Unix and Unix-like operating systems.
-
-### Command-Line Players
-
-- [x] **[MP4Client](https://github.com/gpac/gpac/wiki/MP4Client-Intro):** [GPAC](https://gpac.wp.imt.fr/home/) provides a highly configurable multimedia player called MP4Client. GPAC itself is an open source multimedia framework developed for research and academic purposes, and used in many media production chains.
-- [x] **[ffplay](https://ffmpeg.org/ffplay.html):** FFplay is a very simple and portable media player using the FFmpeg libraries and the SDL library. It is mostly used as a testbed for the various FFmpeg APIs.
-
-### Online Players
-
-!!! tip "To run Online players locally, you'll need a HTTP server. For creating one yourself, See [this well-curated list ➶](https://gist.github.com/abhiTronix/7d2798bc9bc62e9e8f1e88fb601d7e7b)"
-
-- [x] **[Clapper](https://github.com/clappr/clappr):** Clappr is an extensible media player for the web.
-- [x] **[Shaka Player](https://github.com/google/shaka-player):** Shaka Player is an open-source JavaScript library for playing adaptive media in a browser.
-- [x] **[MediaElementPlayer](https://github.com/mediaelement/mediaelement):** MediaElementPlayer is a complete HTML/CSS audio/video player.
-
-
-
-## Importing
-
-You can import StreamGear API in your program as follows:
-
-```python
-from vidgear.gears import StreamGear
-```
-
-
-
-## Usage Examples
-
-
-
-
\ No newline at end of file
diff --git a/docs/gears/streamgear/params.md b/docs/gears/streamgear/params.md
index ff411907d..ad1ab9470 100644
--- a/docs/gears/streamgear/params.md
+++ b/docs/gears/streamgear/params.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -24,10 +24,12 @@ limitations under the License.
## **`output`**
-This parameter sets the valid filename/path for storing the StreamGear assets _(Manifest file(such as Media Presentation Description(MPD) in-case of DASH) & Transcoded sequence of segments)_.
+This parameter sets the valid filename/path for storing the StreamGear assets _(Manifest file (such as MPD in-case of DASH) or a Master Playlist (such as M3U8 in-case of Apple HLS) & Transcoded sequence of segments)_.
!!! warning "StreamGear API will throw `ValueError` if `output` provided is empty or invalid."
+!!! error "Make sure to provide _valid filename with valid file-extension_ for selected [`format`](#format) value _(such as `.mpd` in case of MPEG-DASH and `.m3u8` in case of APPLE-HLS)_, otherwise StreamGear will throw `AssertionError`."
+
!!! note "StreamGear generated sequence of multiple chunks/segments are also stored in the same directory."
!!! tip "You can easily delete all previous assets at `output` location, by using [`-clear_prev_assets`](#a-exclusive-parameters) attribute of [`stream_params`](#stream_params) dictionary parameter."
@@ -40,41 +42,77 @@ Its valid input can be one of the following:
* **Path to directory**: Valid path of the directory. In this case, StreamGear API will automatically assign a unique filename for Manifest file. This can be defined as follows:
- ```python
- streamer = StreamGear(output = '/home/foo/foo1') # Define streamer with manifest saving directory path
- ```
+ === "DASH"
+
+ ```python
+ streamer = StreamGear(output = "/home/foo/foo1") # Define streamer with manifest saving directory path
+ ```
+
+ === "HLS"
+
+ ```python
+ streamer = StreamGear(output = "/home/foo/foo1", format="hls") # Define streamer with playlist saving directory path
+ ```
* **Filename** _(with/without path)_: Valid filename(_with valid extension_) of the output Manifest file. In case filename is provided without path, then current working directory will be used.
- ```python
- streamer = StreamGear(output = 'output_foo.mpd') # Define streamer with manifest file name
- ```
+ === "DASH"
- !!! warning "Make sure to provide _valid filename with valid file-extension_ for selected [format](#format) value _(such as `output.mpd` in case of MPEG-DASH)_, otherwise StreamGear will throw `AssertionError`."
+ ```python
+ streamer = StreamGear(output = "output_foo.mpd") # Define streamer with manifest file name
+ ```
+ === "HLS"
+
+ ```python
+ streamer = StreamGear(output = "output_foo.m3u8", format="hls") # Define streamer with playlist file name
+ ```
* **URL**: Valid URL of a network stream with a protocol supported by installed FFmpeg _(verify with command `ffmpeg -protocols`)_ only. This is useful for directly storing assets to a network server. For example, you can use a `http` protocol URL as follows:
- ```python
- streamer = StreamGear(output = 'http://195.167.1.101/live/test.mpd') #Define streamer
- ```
+
+ === "DASH"
+
+ ```python
+ streamer = StreamGear(output = "http://195.167.1.101/live/test.mpd") #Define streamer
+ ```
+
+ === "HLS"
+
+ ```python
+ streamer = StreamGear(output = "http://195.167.1.101/live/test.m3u8", format="hls") #Define streamer
+ ```
-## **`formats`**
+## **`format`**
-This parameter select the adaptive HTTP streaming format. HTTP streaming works by breaking the overall stream into a sequence of small HTTP-based file downloads, each downloading one short chunk of an overall potentially unbounded transport stream. For now, the only supported format is: `'dash'` _(i.e [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/))_, but other adaptive streaming technologies such as Apple HLS, Microsoft Smooth Streaming, will be added soon.
+This parameter selects the adaptive HTTP streaming format. For now, the supported formats are: `dash` _(i.e [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/))_ and `hls` _(i.e [**Apple HLS**](https://developer.apple.com/documentation/http_live_streaming))_.
+
+!!! warning "Any invalid value passed to the `format` parameter will result in a `ValueError`!"
+
+!!! error "Make sure to provide _valid filename with valid file-extension_ in [`output`](#output) for selected `format` value _(such as `.mpd` in case of MPEG-DASH and `.m3u8` in case of APPLE-HLS)_, otherwise StreamGear will throw `AssertionError`."
+
**Data-Type:** String
-**Default Value:** Its default value is `'dash'`
+**Default Value:** Its default value is `dash`
**Usage:**
-```python
-StreamGear(output = 'output_foo.mpd', format="dash")
-```
+=== "DASH"
+
+ ```python
+ StreamGear(output = "output_foo.mpd", format="dash")
+ ```
+
+=== "HLS"
+
+ ```python
+ StreamGear(output = "output_foo.m3u8", format="hls")
+ ```
+
@@ -151,7 +189,7 @@ StreamGear API provides some exclusive internal parameters to easily generate St
**Usage:** You can easily define any number of streams using `-streams` attribute as follows:
- !!! tip "Usage example can be found [here ➶](../usage/#a2-usage-with-additional-streams)"
+ !!! tip "Usage example can be found [here ➶](../ssm/usage/#usage-with-additional-streams)"
```python
stream_params =
@@ -164,9 +202,9 @@ StreamGear API provides some exclusive internal parameters to easily generate St
-* **`-video_source`** _(string)_: This attribute takes valid Video path as input and activates [**Single-Source Mode**](../usage/#a-single-source-mode), for transcoding it into multiple smaller chunks/segments for streaming after successful validation. Its value be one of the following:
+* **`-video_source`** _(string)_: This attribute takes a valid video path as input and activates [**Single-Source Mode**](../ssm/overview), for transcoding it into multiple smaller chunks/segments for streaming after successful validation. Its value can be one of the following:
- !!! tip "Usage example can be found [here ➶](../usage/#a1-bare-minimum-usage)"
+ !!! tip "Usage example can be found [here ➶](../ssm/usage/#bare-minimum-usage)"
* **Video Filename**: Valid path to Video file as follows:
```python
@@ -183,17 +221,17 @@ StreamGear API provides some exclusive internal parameters to easily generate St
-* **`-audio`** _(dict)_: This attribute takes external custom audio path as audio-input for all StreamGear streams. Its value be one of the following:
+* **`-audio`** _(string/list)_: This attribute takes an external custom audio path _(as `string`)_ or an audio device name followed by a suitable demuxer _(as `list`)_ as the audio source input for all StreamGear streams. Its value can be one of the following:
    !!! failure "Make sure this audio source is compatible with the provided video source, otherwise you may encounter multiple errors, or even no output at all!"
- !!! tip "Usage example can be found [here ➶](../usage/#a3-usage-with-custom-audio)"
-
- * **Audio Filename**: Valid path to Audio file as follows:
+ * **Audio Filename** _(string)_: Valid path to Audio file as follows:
```python
stream_params = {"-audio": "/home/foo/foo1.aac"} # set input audio source: /home/foo/foo1.aac
```
- * **Audio URL**: Valid URL of a network audio stream as follows:
+ !!! tip "Usage example can be found [here ➶](../ssm/usage/#usage-with-custom-audio)"
+
+ * **Audio URL** _(string)_: Valid URL of a network audio stream as follows:
!!! danger "Make sure given Video URL has protocol that is supported by installed FFmpeg. _(verify with `ffmpeg -protocols` terminal command)_"
@@ -201,6 +239,15 @@ StreamGear API provides some exclusive internal parameters to easily generate St
stream_params = {"-audio": "https://exampleaudio.org/example-160.mp3"} # set input audio source: https://exampleaudio.org/example-160.mp3
```
+ * **Device name and Demuxer** _(list)_: Valid audio device name followed by suitable demuxer as follows:
+
+        ```python
+        stream_params = {"-audio": ["-f", "dshow", "-i", "audio=Microphone (USB2.0 Camera)"]} # set input audio device: "Microphone (USB2.0 Camera)" on Windows (via `dshow` demuxer)
+        ```
+        !!! tip "Usage example can be found [here ➶](../rtfm/usage/#usage-with-device-audio-input)"
+
+
+
* **`-livestream`** _(bool)_: ***(optional)*** specifies whether to enable **Livestream Support** _(chunks will contain information for new frames only)_ for the selected mode, or not. You can easily set it to `True` to enable this feature; the default value is `False`. It can be used as follows:
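Mirroring the other attribute examples in this section, a minimal sketch of its usage:

```python
# enable Livestream Support (chunks will contain information for new frames only)
stream_params = {"-livestream": True}
```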
@@ -215,7 +262,7 @@ StreamGear API provides some exclusive internal parameters to easily generate St
* **`-input_framerate`** _(float/int)_ : ***(optional)*** specifies the assumed input video source framerate, and only works in [Real-time Frames Mode](../usage/#b-real-time-frames-mode). It can be used as follows:
- !!! tip "Usage example can be found [here ➶](../usage/#b3-bare-minimum-usage-with-controlled-input-framerate)"
+ !!! tip "Usage example can be found [here ➶](../rtfm/usage/#bare-minimum-usage-with-controlled-input-framerate)"
```python
stream_params = {"-input_framerate": 60.0} # set input video source framerate to 60fps
@@ -265,7 +312,9 @@ StreamGear API provides some exclusive internal parameters to easily generate St
-* **`-clear_prev_assets`** _(bool)_: ***(optional)*** specify whether to force-delete any previous copies of StreamGear Assets _(i.e. Manifest files(.mpd) & streaming chunks(.m4s))_ present at path specified by [`output`](#output) parameter. You can easily set it to `True` to enable this feature, and default value is `False`. It can be used as follows:
+* **`-clear_prev_assets`** _(bool)_: ***(optional)*** specifies whether to force-delete any previous copies of StreamGear assets _(i.e. Manifest files (`.mpd`), streaming chunks (`.m4s`), etc.)_ present at the path specified by the [`output`](#output) parameter. You can easily set it to `True` to enable this feature; the default value is `False`. It can be used as follows:
+
+ !!! info "In Single-Source Mode, additional segments _(such as `.webm`, `.mp4` chunks)_ are also cleared automatically."
```python
stream_params = {"-clear_prev_assets": True} # will delete all previous assets
@@ -279,6 +328,10 @@ Almost all FFmpeg parameter can be passed as dictionary attributes in `stream_pa
!!! tip "Kindly check [H.264 docs ➶](https://trac.ffmpeg.org/wiki/Encode/H.264) and other [FFmpeg Docs ➶](https://ffmpeg.org/documentation.html) for more information on these parameters"
+
+!!! error "All FFmpeg parameters are case-sensitive. Remember to double-check every parameter if any error occurs."
+
+
!!! note "In addition to these parameters, almost any FFmpeg parameter _(supported by installed FFmpeg)_ is also supported. But make sure to read [**FFmpeg Docs**](https://ffmpeg.org/documentation.html) carefully first."
```python
@@ -291,6 +344,8 @@ stream_params = {"-vcodec":"libx264", "-crf": 0, "-preset": "fast", "-tune": "ze
All the encoders and decoders that are compiled with the FFmpeg in use are supported by the StreamGear API. You can easily check the compiled encoders by running the following command in your terminal:
+!!! info "Similarly, supported demuxers and filters depend upon the compiled FFmpeg in use."
+
```sh
# for checking encoder
ffmpeg -encoders # use `ffmpeg.exe -encoders` on windows
diff --git a/docs/gears/streamgear/rtfm/overview.md b/docs/gears/streamgear/rtfm/overview.md
new file mode 100644
index 000000000..0f8649233
--- /dev/null
+++ b/docs/gears/streamgear/rtfm/overview.md
@@ -0,0 +1,90 @@
+
+
+# StreamGear API: Real-time Frames Mode
+
+
+
+
+ Real-time Frames Mode generalized workflow
+
+
+
+## Overview
+
+When no valid input is received on the [`-video_source`](../../params/#a-exclusive-parameters) attribute of the [`stream_params`](../../params/#supported-parameters) dictionary parameter, StreamGear API activates this mode, where it directly transcodes real-time [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) video-frames _(as opposed to an entire video file)_ into a sequence of multiple smaller chunks/segments for adaptive streaming.
+
+This mode works exceptionally well when you want to flexibly manipulate or transform video-frames in real-time before sending them to the FFmpeg pipeline for processing. On the downside, StreamGear **DOES NOT** automatically map the video-source's audio to the generated streams in this mode. You need to manually assign a separate audio source through the [`-audio`](../../params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter.
+
+StreamGear supports both [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/) _(Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1)_ and [**Apple HLS**](https://developer.apple.com/documentation/http_live_streaming) _(HTTP Live Streaming)_ with this mode.
+
+For this mode, StreamGear API provides the exclusive [`stream()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) method for directly transcoding video-frames into streamable chunks.
+
+
+
+!!! new "New in v0.2.2"
+
+ Apple HLS support was added in `v0.2.2`.
+
+
+!!! alert "Real-time Frames Mode is NOT Live-Streaming."
+
+    Rather, you can easily enable live-streaming in Real-time Frames Mode by using StreamGear API's exclusive [`-livestream`](../../params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter. Check out its [usage example here](../usage/#bare-minimum-usage-with-live-streaming).
+
+
+!!! danger
+
+ * Using [`transcode_source()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.transcode_source) function instead of [`stream()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) in Real-time Frames Mode will instantly result in **`RuntimeError`**!
+
+ * **NEVER** assign anything to [`-video_source`](../../params/#a-exclusive-parameters) attribute of [`stream_params`](../../params/#supported-parameters) dictionary parameter, otherwise [Single-Source Mode](../#a-single-source-mode) may get activated, and as a result, using [`stream()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) function will throw **`RuntimeError`**!
+
+    * You **MUST** use the [`-input_framerate`](../../params/#a-exclusive-parameters) attribute to set the exact input framerate when using external audio in this mode, otherwise audio delay will occur in the output streams.
+
+    * Input framerate defaults to `25.0` fps if the [`-input_framerate`](../../params/#a-exclusive-parameters) attribute value is not defined.
+
+
+
+
+## Usage Examples
+
+
+
+
\ No newline at end of file
diff --git a/docs/gears/streamgear/rtfm/usage.md b/docs/gears/streamgear/rtfm/usage.md
new file mode 100644
index 000000000..6008a65a7
--- /dev/null
+++ b/docs/gears/streamgear/rtfm/usage.md
@@ -0,0 +1,1288 @@
+
+
+# StreamGear API Usage Examples: Real-time Frames Mode
+
+
+!!! alert "Real-time Frames Mode is NOT Live-Streaming."
+
+    Rather, you can easily enable live-streaming in Real-time Frames Mode by using StreamGear API's exclusive [`-livestream`](../../params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter. Check out the following [usage example](#bare-minimum-usage-with-live-streaming).
+
+!!! warning "Important Information"
+
+    * StreamGear **requires** FFmpeg executables for its core operations. Follow these dedicated [Platform specific Installation Instructions ➶](../../ffmpeg_install/) for its installation.
+
+ * StreamGear API will throw **RuntimeError**, if it fails to detect valid FFmpeg executables on your system.
+
+    * By default, when no additional streams are defined, ==StreamGear generates a primary stream of the same resolution and framerate[^1] as the input video _(at index `0`)_.==
+
+ * Always use `terminate()` function at the very end of the main code.
+
+
+
+
+
+## Bare-Minimum Usage
+
+Following is the bare-minimum code you need to get started with StreamGear API in Real-time Frames Mode:
+
+!!! note "We are using [CamGear](../../../camgear/overview/) in this Bare-Minimum example, but any [VideoCapture Gear](../../../#a-videocapture-gears) will work in a similar manner."
+
+
+=== "DASH"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import CamGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # open any valid video stream(for e.g `foo1.mp4` file)
+ stream = CamGear(source='foo1.mp4').start()
+
+ # describe a suitable manifest-file location/name
+ streamer = StreamGear(output="dash_out.mpd")
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+
+ # {do something with the frame here}
+
+
+ # send frame to streamer
+ streamer.stream(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+=== "HLS"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import CamGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # open any valid video stream(for e.g `foo1.mp4` file)
+ stream = CamGear(source='foo1.mp4').start()
+
+ # describe a suitable manifest-file location/name
+ streamer = StreamGear(output="hls_out.m3u8", format = "hls")
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+
+ # {do something with the frame here}
+
+
+ # send frame to streamer
+ streamer.stream(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+!!! success "After running this bare-minimum example, StreamGear will produce a Manifest/Playlist file _(`dash_out.mpd` or `hls_out.m3u8`)_ with streamable chunks that contain information about a Primary Stream of the same resolution and framerate[^1] as the input _(without any audio)_."
+
+
+
+
+## Bare-Minimum Usage with Live-Streaming
+
+You can easily activate ==Low-latency Livestreaming in Real-time Frames Mode==, where chunks will contain information for a few new frames only _(forgetting all previous ones)_, using the exclusive [`-livestream`](../../params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter as follows:
+
+!!! tip "Use the `-window_size` & `-extra_window_size` FFmpeg parameters for controlling the number of frames to be kept in chunks. The lower these values, the lower the latency."
+
+!!! alert "After every few chunks _(equal to the sum of the `-window_size` & `-extra_window_size` values)_, all chunks will be overwritten in Live-Streaming. Since newer chunks in the manifest/playlist contain NO information about older ones, the resultant DASH/HLS stream will play only the most recent frames."
+
+!!! note "In this mode, StreamGear **DOES NOT** automatically map video-source audio to the generated streams. You need to manually assign a separate audio source through the [`-audio`](../../params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter."
+
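For instance, the `-window_size` & `-extra_window_size` FFmpeg parameters mentioned in the tip above can be combined with `-livestream` as follows _(the specific values here are illustrative only, not recommendations)_:

```python
# illustrative low-latency configuration (values are examples only):
# `window_size` / `extra_window_size` are FFmpeg DASH-muxer options controlling how
# many segments stay listed in the manifest and are kept on disk before deletion
stream_params = {
    "-livestream": True,
    "-window_size": 5,
    "-extra_window_size": 5,
}
```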
+=== "DASH"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import CamGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # open any valid video stream(from web-camera attached at index `0`)
+ stream = CamGear(source=0).start()
+
+ # enable livestreaming and retrieve framerate from CamGear Stream and
+ # pass it as `-input_framerate` parameter for controlled framerate
+ stream_params = {"-input_framerate": stream.framerate, "-livestream": True}
+
+ # describe a suitable manifest-file location/name
+ streamer = StreamGear(output="dash_out.mpd", **stream_params)
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # send frame to streamer
+ streamer.stream(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+=== "HLS"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import CamGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # open any valid video stream(from web-camera attached at index `0`)
+ stream = CamGear(source=0).start()
+
+ # enable livestreaming and retrieve framerate from CamGear Stream and
+ # pass it as `-input_framerate` parameter for controlled framerate
+ stream_params = {"-input_framerate": stream.framerate, "-livestream": True}
+
+ # describe a suitable manifest-file location/name
+ streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params)
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # send frame to streamer
+ streamer.stream(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+
+
+
+## Bare-Minimum Usage with RGB Mode
+
+In Real-time Frames Mode, StreamGear API provides the [`rgb_mode`](../../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) boolean parameter with its `stream()` function, which, if enabled _(i.e. `rgb_mode=True`)_, specifies that incoming frames are in RGB format _(instead of the default BGR format)_. This is also known as ==RGB Mode==.
+
+The complete usage example is as follows:
+
+=== "DASH"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import CamGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # open any valid video stream(for e.g `foo1.mp4` file)
+ stream = CamGear(source='foo1.mp4').start()
+
+ # describe a suitable manifest-file location/name
+ streamer = StreamGear(output="dash_out.mpd")
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+
+ # {simulating RGB frame for this example}
+ frame_rgb = frame[:,:,::-1]
+
+
+ # send frame to streamer
+ streamer.stream(frame_rgb, rgb_mode = True) #activate RGB Mode
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+=== "HLS"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import CamGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # open any valid video stream(for e.g `foo1.mp4` file)
+ stream = CamGear(source='foo1.mp4').start()
+
+ # describe a suitable manifest-file location/name
+ streamer = StreamGear(output="hls_out.m3u8", format = "hls")
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+
+ # {simulating RGB frame for this example}
+ frame_rgb = frame[:,:,::-1]
+
+
+ # send frame to streamer
+ streamer.stream(frame_rgb, rgb_mode = True) #activate RGB Mode
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+
+
+
+## Bare-Minimum Usage with controlled Input-framerate
+
+In Real-time Frames Mode, StreamGear API provides the exclusive [`-input_framerate`](../../params/#a-exclusive-parameters) attribute for its `stream_params` dictionary parameter, which allows us to set the assumed constant framerate for incoming frames.
+
+In this example, we will retrieve the framerate from a webcam video-stream and set it as the value for the `-input_framerate` attribute in StreamGear:
+
+!!! danger "Remember, input framerate defaults to `25.0` fps if the [`-input_framerate`](../../params/#a-exclusive-parameters) attribute value is not defined in Real-time Frames mode."
+
+
+=== "DASH"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import CamGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # Open live video stream on webcam at first index(i.e. 0) device
+ stream = CamGear(source=0).start()
+
+ # retrieve framerate from CamGear Stream and pass it as `-input_framerate` value
+ stream_params = {"-input_framerate":stream.framerate}
+
+ # describe a suitable manifest-file location/name and assign params
+ streamer = StreamGear(output="dash_out.mpd", **stream_params)
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+
+ # {do something with the frame here}
+
+
+ # send frame to streamer
+ streamer.stream(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+=== "HLS"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import CamGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # Open live video stream on webcam at first index(i.e. 0) device
+ stream = CamGear(source=0).start()
+
+ # retrieve framerate from CamGear Stream and pass it as `-input_framerate` value
+ stream_params = {"-input_framerate":stream.framerate}
+
+ # describe a suitable manifest-file location/name and assign params
+ streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params)
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+
+ # {do something with the frame here}
+
+
+ # send frame to streamer
+ streamer.stream(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+
+
+## Bare-Minimum Usage with OpenCV
+
+You can easily use StreamGear API directly with any other Video Processing library _(e.g. [OpenCV](https://github.com/opencv/opencv) itself)_ in Real-time Frames Mode.
+
+The complete usage example is as follows:
+
+!!! tip "This is just a bare-minimum example with OpenCV, but any other Real-time Frames Mode feature/example will work in a similar manner."
+
+=== "DASH"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # Open suitable video stream, such as webcam on first index(i.e. 0)
+ stream = cv2.VideoCapture(0)
+
+ # describe a suitable manifest-file location/name
+ streamer = StreamGear(output="dash_out.mpd")
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ (grabbed, frame) = stream.read()
+
+ # check for frame if not grabbed
+ if not grabbed:
+ break
+
+ # {do something with the frame here}
+ # lets convert frame to gray for this example
+ gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
+
+
+ # send frame to streamer
+ streamer.stream(gray)
+
+ # Show output window
+ cv2.imshow("Output Gray Frame", gray)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.release()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+=== "HLS"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # Open suitable video stream, such as webcam on first index(i.e. 0)
+ stream = cv2.VideoCapture(0)
+
+ # describe a suitable manifest-file location/name
+ streamer = StreamGear(output="hls_out.m3u8", format = "hls")
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ (grabbed, frame) = stream.read()
+
+ # check for frame if not grabbed
+ if not grabbed:
+ break
+
+ # {do something with the frame here}
+ # lets convert frame to gray for this example
+ gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
+
+
+ # send frame to streamer
+ streamer.stream(gray)
+
+ # Show output window
+ cv2.imshow("Output Gray Frame", gray)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.release()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+
+
+
+## Usage with Additional Streams
+
+Similar to Single-Source Mode, you can easily generate any number of additional Secondary Streams of variable bitrates or spatial resolutions, using the exclusive [`-streams`](../../params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter. You just need to add each resolution and bitrate/framerate as a list of dictionaries to this attribute, and the rest is done automatically.
+
+!!! info "A more detailed information on `-streams` attribute can be found [here ➶](../../params/#a-exclusive-parameters)"
+
+The complete example is as follows:
+
+!!! danger "Important `-streams` attribute Information"
+    * On top of these additional streams, StreamGear, by default, generates a primary stream of the same resolution and framerate[^1] as the input, at index `0`.
+    * :warning: Make sure your System/Machine/Server/Network is able to handle these additional streams; discretion is advised!
+    * You **MUST** define the `-resolution` value for your stream, otherwise the stream will be discarded!
+    * You only need either of `-video_bitrate` or `-framerate` for defining a valid stream, since with a `-framerate` value defined, the video-bitrate is calculated automatically.
+    * If you define both `-video_bitrate` and `-framerate` values at the same time, StreamGear will discard the `-framerate` value automatically.
+
+!!! fail "Always use the `-streams` attribute to define additional streams safely; any duplicate or incorrect definition can break things!"
+
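To picture why a `-framerate` value alone suffices, here is a rough bits-per-pixel heuristic for deriving a bitrate from resolution and framerate _(purely illustrative; `estimate_bitrate` and its `0.1` bits-per-pixel factor are assumptions, NOT StreamGear's actual formula)_:

```python
# illustrative bits-per-pixel heuristic (NOT StreamGear's internal formula)
def estimate_bitrate(width: int, height: int, framerate: float, bpp: float = 0.1) -> int:
    """Estimate a video bitrate in kbps from resolution and framerate."""
    return int(width * height * framerate * bpp / 1000)

print(estimate_bitrate(1280, 720, 30.0))  # a 720p30 stream lands around 2764 kbps
```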
+
+=== "DASH"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import CamGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # Open suitable video stream, such as webcam on first index(i.e. 0)
+ stream = CamGear(source=0).start()
+
+ # define various streams
+ stream_params = {
+ "-streams": [
+            {"-resolution": "1280x720", "-framerate": 30.0},  # Stream1: 1280x720 at 30fps framerate
+            {"-resolution": "640x360", "-framerate": 60.0},  # Stream2: 640x360 at 60fps framerate
+            {"-resolution": "320x240", "-video_bitrate": "500k"},  # Stream3: 320x240 at 500kbps bitrate
+ ],
+ }
+
+ # describe a suitable manifest-file location/name and assign params
+    streamer = StreamGear(output="dash_out.mpd", **stream_params)
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+
+ # {do something with the frame here}
+
+
+ # send frame to streamer
+ streamer.stream(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+=== "HLS"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import CamGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # Open suitable video stream, such as webcam on first index(i.e. 0)
+ stream = CamGear(source=0).start()
+
+ # define various streams
+ stream_params = {
+ "-streams": [
+            {"-resolution": "1280x720", "-framerate": 30.0},  # Stream1: 1280x720 at 30fps framerate
+            {"-resolution": "640x360", "-framerate": 60.0},  # Stream2: 640x360 at 60fps framerate
+            {"-resolution": "320x240", "-video_bitrate": "500k"},  # Stream3: 320x240 at 500kbps bitrate
+ ],
+ }
+
+ # describe a suitable manifest-file location/name and assign params
+    streamer = StreamGear(output="hls_out.m3u8", format="hls", **stream_params)
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+
+ # {do something with the frame here}
+
+
+ # send frame to streamer
+ streamer.stream(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+
+
+## Usage with File Audio-Input
+
+In Real-time Frames Mode, if you want to add audio to your streams, you have to use the exclusive [`-audio`](../../params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter. You just need to input the path of your audio file to this attribute as a `string` value, and the API will automatically validate it as well as map it to all generated streams.
+
+The complete example is as follows:
+
+!!! failure "Make sure this `-audio` audio source is compatible with the provided video source, otherwise you may encounter multiple errors or no output at all."
+
+!!! warning "You **MUST** use [`-input_framerate`](../../params/#a-exclusive-parameters) attribute to set exact value of input framerate when using external audio in Real-time Frames mode, otherwise audio delay will occur in output streams."
+
+!!! tip "You can also assign a valid Audio URL as input, rather than filepath. More details can be found [here ➶](../../params/#a-exclusive-parameters)"
+
+
+=== "DASH"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import CamGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # open any valid video stream(for e.g `foo1.mp4` file)
+ stream = CamGear(source='foo1.mp4').start()
+
+ # add various streams, along with custom audio
+ stream_params = {
+ "-streams": [
+            {"-resolution": "1920x1080", "-video_bitrate": "4000k"},  # Stream1: 1920x1080 at 4000kbps bitrate
+            {"-resolution": "1280x720", "-framerate": 30.0},  # Stream2: 1280x720 at 30fps
+            {"-resolution": "640x360", "-framerate": 60.0},  # Stream3: 640x360 at 60fps
+ ],
+ "-input_framerate": stream.framerate, # controlled framerate for audio-video sync !!! don't forget this line !!!
+ "-audio": "/home/foo/foo1.aac" # assigns input audio-source: "/home/foo/foo1.aac"
+ }
+
+ # describe a suitable manifest-file location/name and assign params
+ streamer = StreamGear(output="dash_out.mpd", **stream_params)
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+
+ # {do something with the frame here}
+
+
+ # send frame to streamer
+ streamer.stream(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+=== "HLS"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import CamGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # open any valid video stream(for e.g `foo1.mp4` file)
+ stream = CamGear(source='foo1.mp4').start()
+
+ # add various streams, along with custom audio
+ stream_params = {
+ "-streams": [
+ {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate
+ {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps
+ {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps
+ ],
+ "-input_framerate": stream.framerate, # controlled framerate for audio-video sync !!! don't forget this line !!!
+ "-audio": "/home/foo/foo1.aac" # assigns input audio-source: "/home/foo/foo1.aac"
+ }
+
+ # describe a suitable manifest-file location/name and assign params
+ streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params)
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+
+ # {do something with the frame here}
+
+
+ # send frame to streamer
+ streamer.stream(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+
+
+## Usage with Device Audio-Input
+
+In Real-time Frames Mode, you can also use the exclusive [`-audio`](../../params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter for streaming live audio from an external device. You just need to format your audio device name, followed by a suitable demuxer, as a `list` and assign it to this attribute, and the API will automatically validate it as well as map it to all generated streams.
+
+The complete example is as follows:
+
+
+!!! alert "Example Assumptions"
+
+    * You're running a Windows machine with all necessary audio drivers and software installed.
+    * There's an audio device named `"Microphone (USB2.0 Camera)"` connected to your Windows machine.
+
+
+??? tip "Using devices with `-audio` attribute on different OS platforms"
+
+ === "On Windows"
+
+        Windows OS users can use [dshow](https://trac.ffmpeg.org/wiki/DirectShow) (DirectShow), the preferred option on Windows, to list audio input devices. You can refer to the following steps to identify and specify your sound card:
+
+        - [x] **[OPTIONAL] Enable sound card _(if disabled)_:** First enable your Stereo Mix by opening the "Sound" window, selecting the "Recording" tab, then right-clicking inside the window and selecting "Show Disabled Devices" to toggle the Stereo Mix device's visibility. **Follow this [post ➶](https://forums.tomshardware.com/threads/no-sound-through-stereo-mix-realtek-hd-audio.1716182/) for more details.**
+
+        - [x] **Identify Sound Card:** Then, you can locate your sound card using `dshow` as follows:
+
+ ```sh
+ c:\> ffmpeg -list_devices true -f dshow -i dummy
+ ffmpeg version N-45279-g6b86dd5... --enable-runtime-cpudetect
+ libavutil 51. 74.100 / 51. 74.100
+ libavcodec 54. 65.100 / 54. 65.100
+ libavformat 54. 31.100 / 54. 31.100
+ libavdevice 54. 3.100 / 54. 3.100
+ libavfilter 3. 19.102 / 3. 19.102
+ libswscale 2. 1.101 / 2. 1.101
+ libswresample 0. 16.100 / 0. 16.100
+ [dshow @ 03ACF580] DirectShow video devices
+ [dshow @ 03ACF580] "Integrated Camera"
+ [dshow @ 03ACF580] "USB2.0 Camera"
+ [dshow @ 03ACF580] DirectShow audio devices
+ [dshow @ 03ACF580] "Microphone (Realtek High Definition Audio)"
+ [dshow @ 03ACF580] "Microphone (USB2.0 Camera)"
+ dummy: Immediate exit requested
+ ```
+
+
+ - [x] **Specify Sound Card:** Then, you can specify your located soundcard in StreamGear as follows:
+
+ ```python
+            # assign appropriate input audio-source device and demuxer
+ stream_params = {"-audio": ["-f","dshow", "-i", "audio=Microphone (USB2.0 Camera)"]}
+ ```
+
+ !!! fail "If audio still doesn't work then [checkout this troubleshooting guide ➶](https://www.maketecheasier.com/fix-microphone-not-working-windows10/) or reach us out on [Gitter ➶](https://gitter.im/vidgear/community) Community channel"
+
+
+ === "On Linux"
+
+        Linux OS users can use [alsa](https://ffmpeg.org/ffmpeg-all.html#alsa) to list input devices for capturing live audio input, such as from a webcam. You can refer to the following steps to identify and specify your sound card:
+
+ - [x] **Identify Sound Card:** To get the list of all installed cards on your machine, you can type `arecord -l` or `arecord -L` _(longer output)_.
+
+ ```sh
+ arecord -l
+
+ **** List of CAPTURE Hardware Devices ****
+ card 0: ICH5 [Intel ICH5], device 0: Intel ICH [Intel ICH5]
+ Subdevices: 1/1
+ Subdevice #0: subdevice #0
+ card 0: ICH5 [Intel ICH5], device 1: Intel ICH - MIC ADC [Intel ICH5 - MIC ADC]
+ Subdevices: 1/1
+ Subdevice #0: subdevice #0
+ card 0: ICH5 [Intel ICH5], device 2: Intel ICH - MIC2 ADC [Intel ICH5 - MIC2 ADC]
+ Subdevices: 1/1
+ Subdevice #0: subdevice #0
+ card 0: ICH5 [Intel ICH5], device 3: Intel ICH - ADC2 [Intel ICH5 - ADC2]
+ Subdevices: 1/1
+ Subdevice #0: subdevice #0
+ card 1: U0x46d0x809 [USB Device 0x46d:0x809], device 0: USB Audio [USB Audio]
+ Subdevices: 1/1
+ Subdevice #0: subdevice #0
+ ```
+
+
+        - [x] **Specify Sound Card:** Then, you can specify your located soundcard in StreamGear as follows:
+
+            !!! info "The easiest thing to do is to reference the sound card directly, namely "card 0" _(Intel ICH5)_ and "card 1" _(the microphone on the USB webcam)_, as `hw:0` or `hw:1`"
+
+ ```python
+            # assign appropriate input audio-source device and demuxer
+ stream_params = {"-audio": ["-f","alsa", "-i", "hw:1"]}
+ ```
+
+ !!! fail "If audio still doesn't work then reach us out on [Gitter ➶](https://gitter.im/vidgear/community) Community channel"
+
+
+ === "On MacOS"
+
+        macOS users can use [avfoundation](https://ffmpeg.org/ffmpeg-devices.html#avfoundation) to list input devices for grabbing audio from integrated iSight cameras, as well as cameras connected via USB or FireWire. You can refer to the following steps to identify and specify your sound card on macOS/OSX machines:
+
+
+        - [x] **Identify Sound Card:** You can locate your sound card using `avfoundation` as follows:
+
+ ```sh
+            ffmpeg -f avfoundation -list_devices true -i ""
+ ffmpeg version N-45279-g6b86dd5... --enable-runtime-cpudetect
+ libavutil 51. 74.100 / 51. 74.100
+ libavcodec 54. 65.100 / 54. 65.100
+ libavformat 54. 31.100 / 54. 31.100
+ libavdevice 54. 3.100 / 54. 3.100
+ libavfilter 3. 19.102 / 3. 19.102
+ libswscale 2. 1.101 / 2. 1.101
+ libswresample 0. 16.100 / 0. 16.100
+ [AVFoundation input device @ 0x7f8e2540ef20] AVFoundation video devices:
+ [AVFoundation input device @ 0x7f8e2540ef20] [0] FaceTime HD camera (built-in)
+ [AVFoundation input device @ 0x7f8e2540ef20] [1] Capture screen 0
+ [AVFoundation input device @ 0x7f8e2540ef20] AVFoundation audio devices:
+ [AVFoundation input device @ 0x7f8e2540ef20] [0] Blackmagic Audio
+ [AVFoundation input device @ 0x7f8e2540ef20] [1] Built-in Microphone
+ ```
+
+
+ - [x] **Specify Sound Card:** Then, you can specify your located soundcard in StreamGear as follows:
+
+ ```python
+ # assign appropriate input audio-source device and demuxer
+ stream_params = {"-audio": ["-f","avfoundation", "-audio_device_index", "0"]}
+ ```
+
+ !!! fail "If audio still doesn't work then reach us out on [Gitter ➶](https://gitter.im/vidgear/community) Community channel"
+
+
+!!! danger "Make sure this `-audio` audio-source is compatible with the provided video-source, otherwise you will encounter multiple errors, or no output at all."
+
+!!! warning "You **MUST** use [`-input_framerate`](../../params/#a-exclusive-parameters) attribute to set exact value of input framerate when using external audio in Real-time Frames mode, otherwise audio delay will occur in output streams."
+
+!!! note "It is advised to use this example with live-streaming enabled _(`True`)_, by using StreamGear API's exclusive [`-livestream`](../../params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter."
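+
+The note above can be sketched as a `stream_params` fragment _(a hypothetical example, assuming the same Windows audio device as in the examples below)_, with low-latency livestreaming enabled via the exclusive `-livestream` attribute:
+
+```python
+# hypothetical stream_params fragment: device audio + low-latency livestreaming
+stream_params = {
+    "-streams": [{"-resolution": "640x360", "-framerate": 30.0}],  # one additional stream
+    "-input_framerate": 30.0,  # MUST match your source framerate for audio-video sync
+    "-livestream": True,  # enable low-latency livestreaming
+    "-audio": ["-f", "dshow", "-i", "audio=Microphone (USB2.0 Camera)"],  # device audio-source
+}
+```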
+
+
+=== "DASH"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import CamGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+    # open any valid video stream (e.g. a `foo1.mp4` file)
+ stream = CamGear(source="foo1.mp4").start()
+
+ # add various streams, along with custom audio
+ stream_params = {
+ "-streams": [
+ {
+ "-resolution": "1280x720",
+ "-video_bitrate": "4000k",
+ }, # Stream1: 1280x720 at 4000kbs bitrate
+ {"-resolution": "640x360", "-framerate": 30.0}, # Stream2: 640x360 at 30fps
+ ],
+ "-input_framerate": stream.framerate, # controlled framerate for audio-video sync !!! don't forget this line !!!
+ "-audio": [
+ "-f",
+ "dshow",
+ "-i",
+ "audio=Microphone (USB2.0 Camera)",
+ ], # assign appropriate input audio-source device and demuxer
+ }
+
+ # describe a suitable manifest-file location/name and assign params
+ streamer = StreamGear(output="dash_out.mpd", **stream_params)
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # send frame to streamer
+ streamer.stream(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+=== "HLS"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import CamGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+    # open any valid video stream (e.g. a `foo1.mp4` file)
+ stream = CamGear(source="foo1.mp4").start()
+
+ # add various streams, along with custom audio
+ stream_params = {
+ "-streams": [
+ {
+ "-resolution": "1280x720",
+ "-video_bitrate": "4000k",
+ }, # Stream1: 1280x720 at 4000kbs bitrate
+ {"-resolution": "640x360", "-framerate": 30.0}, # Stream2: 640x360 at 30fps
+ ],
+ "-input_framerate": stream.framerate, # controlled framerate for audio-video sync !!! don't forget this line !!!
+ "-audio": [
+ "-f",
+ "dshow",
+ "-i",
+ "audio=Microphone (USB2.0 Camera)",
+ ], # assign appropriate input audio-source device and demuxer
+ }
+
+ # describe a suitable manifest-file location/name and assign params
+    streamer = StreamGear(output="hls_out.m3u8", format="hls", **stream_params)
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # send frame to streamer
+ streamer.stream(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+
+
+## Usage with Hardware Video-Encoder
+
+
+In Real-time Frames Mode, you can also easily change the encoder as per your requirement, just by passing the `-vcodec` FFmpeg parameter as an attribute in the `stream_params` dictionary parameter. In addition, you can also specify additional properties/features/optimizations for your system's GPU in the same way.
+
+In this example, we will be using `h264_vaapi` as our hardware encoder, and also optionally specifying our device hardware's location _(i.e. `'-vaapi_device':'/dev/dri/renderD128'`)_ and other properties such as `'-vf':'format=nv12,hwupload'`, by formatting them as attributes of the `stream_params` dictionary parameter, as follows:
+
+!!! warning "Check VAAPI support"
+
+    **This example just conveys the idea of how to use FFmpeg's hardware encoders with the StreamGear API in Real-time Frames Mode, which MAY or MAY-NOT suit your system. Kindly use suitable parameters based on your supported system and FFmpeg configuration only.**
+
+    To use the `h264_vaapi` encoder, remember to check if it is available, and that your FFmpeg is compiled with VAAPI support. You can easily do this by executing the following one-liner command in your terminal, and observing whether the output contains something similar to the following:
+
+ ```sh
+ ffmpeg -hide_banner -encoders | grep vaapi
+
+ V..... h264_vaapi H.264/AVC (VAAPI) (codec h264)
+ V..... hevc_vaapi H.265/HEVC (VAAPI) (codec hevc)
+ V..... mjpeg_vaapi MJPEG (VAAPI) (codec mjpeg)
+ V..... mpeg2_vaapi MPEG-2 (VAAPI) (codec mpeg2video)
+ V..... vp8_vaapi VP8 (VAAPI) (codec vp8)
+ ```
+
+
+=== "DASH"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import VideoGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # Open suitable video stream, such as webcam on first index(i.e. 0)
+ stream = VideoGear(source=0).start()
+
+ # add various streams with custom Video Encoder and optimizations
+ stream_params = {
+ "-streams": [
+ {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate
+ {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps
+ {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps
+ ],
+ "-vcodec": "h264_vaapi", # define custom Video encoder
+ "-vaapi_device": "/dev/dri/renderD128", # define device location
+ "-vf": "format=nv12,hwupload", # define video pixformat
+ }
+
+ # describe a suitable manifest-file location/name and assign params
+ streamer = StreamGear(output="dash_out.mpd", **stream_params)
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+
+ # {do something with the frame here}
+
+
+ # send frame to streamer
+ streamer.stream(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+=== "HLS"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import VideoGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # Open suitable video stream, such as webcam on first index(i.e. 0)
+ stream = VideoGear(source=0).start()
+
+ # add various streams with custom Video Encoder and optimizations
+ stream_params = {
+ "-streams": [
+ {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate
+ {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps
+ {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps
+ ],
+ "-vcodec": "h264_vaapi", # define custom Video encoder
+ "-vaapi_device": "/dev/dri/renderD128", # define device location
+ "-vf": "format=nv12,hwupload", # define video pixformat
+ }
+
+ # describe a suitable manifest-file location/name and assign params
+ streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params)
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+
+ # {do something with the frame here}
+
+
+ # send frame to streamer
+ streamer.stream(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+
+
+[^1]:
+ :bulb: In Real-time Frames Mode, the Primary Stream's framerate defaults to [`-input_framerate`](../../params/#a-exclusive-parameters) attribute value, if defined, else it will be 25fps.
\ No newline at end of file
diff --git a/docs/gears/streamgear/ssm/overview.md b/docs/gears/streamgear/ssm/overview.md
new file mode 100644
index 000000000..b71ba4084
--- /dev/null
+++ b/docs/gears/streamgear/ssm/overview.md
@@ -0,0 +1,81 @@
+
+
+# StreamGear API: Single-Source Mode
+
+
+
+ Single-Source Mode generalized workflow
+
+
+
+## Overview
+
+In this mode, StreamGear transcodes an entire audio-video file _(as opposed to frame-by-frame)_ into a sequence of multiple smaller chunks/segments for adaptive streaming.
+
+This mode works exceptionally well when you're transcoding long-duration, lossless video _(with audio)_ files for streaming that requires no interruptions. But on the downside, the provided source cannot be flexibly manipulated or transformed before being sent onto the FFmpeg pipeline for processing.
+
+StreamGear supports both [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/) _(Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1)_ and [**Apple HLS**](https://developer.apple.com/documentation/http_live_streaming) _(HTTP Live Streaming)_ with this mode.
+
+For this mode, the StreamGear API provides the exclusive [`transcode_source()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.transcode_source) method to easily process audio-video files into streamable chunks.
+
+This mode can be easily activated by assigning suitable video path as input to [`-video_source`](../../params/#a-exclusive-parameters) attribute of [`stream_params`](../../params/#stream_params) dictionary parameter, during StreamGear initialization.
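+
+The activation described above can be sketched in a few lines _(a minimal illustration, assuming a local `foo.mp4` file exists; see the Usage Examples page for complete recipes)_:
+
+```python
+# minimal sketch: the presence of the "-video_source" attribute is what
+# activates Single-Source Mode during StreamGear initialization
+stream_params = {"-video_source": "foo.mp4"}
+# this dictionary is then unpacked into StreamGear, e.g.:
+#   streamer = StreamGear(output="dash_out.mpd", **stream_params)
+#   streamer.transcode_source()  # transcodes the whole file into chunks
+#   streamer.terminate()
+```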
+
+
+
+!!! new "New in v0.2.2"
+
+ Apple HLS support was added in `v0.2.2`.
+
+
+!!! warning
+
+ * Using [`stream()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) function instead of [`transcode_source()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.transcode_source) in Single-Source Mode will instantly result in **`RuntimeError`**!
+ * Any invalid value to the [`-video_source`](../../params/#a-exclusive-parameters) attribute will result in **`AssertionError`**!
+
+
+
+## Usage Examples
+
+
+
+
+
\ No newline at end of file
diff --git a/docs/gears/streamgear/ssm/usage.md b/docs/gears/streamgear/ssm/usage.md
new file mode 100644
index 000000000..1db663992
--- /dev/null
+++ b/docs/gears/streamgear/ssm/usage.md
@@ -0,0 +1,332 @@
+
+
+# StreamGear API Usage Examples: Single-Source Mode
+
+!!! warning "Important Information"
+
+    * StreamGear **requires** FFmpeg executables for its core operations. Follow these dedicated [Platform specific Installation Instructions ➶](../../../ffmpeg_install/) for its installation.
+
+    * StreamGear API will throw a **RuntimeError**, if it fails to detect valid FFmpeg executables on your system.
+
+    * By default, when no additional streams are defined, ==StreamGear generates a primary stream of the same resolution and framerate[^1] as the input video _(at index `0`)_.==
+
+    * Always use the `terminate()` function at the very end of the main code.
+
+
+
+
+## Bare-Minimum Usage
+
+Following is the bare-minimum code you need to get started with StreamGear API in Single-Source Mode:
+
+!!! note "If the input video-source _(i.e. `-video_source`)_ contains any audio stream/channel, then it automatically gets mapped to all generated streams without any extra effort."
+
+=== "DASH"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import StreamGear
+
+ # activate Single-Source Mode with valid video input
+ stream_params = {"-video_source": "foo.mp4"}
+ # describe a suitable manifest-file location/name and assign params
+ streamer = StreamGear(output="dash_out.mpd", **stream_params)
+    # transcode source
+ streamer.transcode_source()
+ # terminate
+ streamer.terminate()
+ ```
+
+=== "HLS"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import StreamGear
+
+ # activate Single-Source Mode with valid video input
+ stream_params = {"-video_source": "foo.mp4"}
+ # describe a suitable master playlist location/name and assign params
+ streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params)
+    # transcode source
+ streamer.transcode_source()
+ # terminate
+ streamer.terminate()
+ ```
+
+
+!!! success "After running this bare-minimum example, StreamGear will produce a manifest file _(`dash_out.mpd`)_ with streamable chunks, containing information about a Primary Stream of the same resolution and framerate as the input."
+
+
+
+## Bare-Minimum Usage with Live-Streaming
+
+You can easily activate ==Low-latency Livestreaming in Single-Source Mode==, where chunks will contain information for only a few of the newest frames _(forgetting all previous ones)_, using the exclusive [`-livestream`](../../params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter, as follows:
+
+!!! tip "Use the `-window_size` & `-extra_window_size` FFmpeg parameters for controlling the number of frames to be kept in chunks. The lower these values, the lower the latency."
+
+!!! alert "After every few chunks _(equal to the sum of the `-window_size` & `-extra_window_size` values)_, all chunks are overwritten while Live-Streaming. Thereby, newer chunks in the manifest/playlist will contain NO information about older ones, and the resultant DASH/HLS stream will play only the most recent frames."
+
+!!! note "If the input video-source _(i.e. `-video_source`)_ contains any audio stream/channel, then it automatically gets mapped to all generated streams without any extra effort."
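+
+The latency tip above can be sketched as a `stream_params` fragment _(a hypothetical example; `-window_size` and `-extra_window_size` are FFmpeg DASH muxer parameters, and suitable values depend on your setup)_:
+
+```python
+# hypothetical fragment: shrink the segment window for lower livestream latency
+stream_params = {
+    "-video_source": "foo.mp4",
+    "-livestream": True,      # enable low-latency livestreaming
+    "-window_size": 5,        # keep only 5 segments listed in the manifest
+    "-extra_window_size": 5,  # keep 5 more stale segments before deleting them
+}
+# then: StreamGear(output="dash_out.mpd", **stream_params).transcode_source()
+```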
+
+=== "DASH"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import StreamGear
+
+ # activate Single-Source Mode with valid video input and enable livestreaming
+ stream_params = {"-video_source": 0, "-livestream": True}
+ # describe a suitable manifest-file location/name and assign params
+ streamer = StreamGear(output="dash_out.mpd", **stream_params)
+    # transcode source
+ streamer.transcode_source()
+ # terminate
+ streamer.terminate()
+ ```
+
+=== "HLS"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import StreamGear
+
+ # activate Single-Source Mode with valid video input and enable livestreaming
+ stream_params = {"-video_source": 0, "-livestream": True}
+ # describe a suitable master playlist location/name and assign params
+ streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params)
+    # transcode source
+ streamer.transcode_source()
+ # terminate
+ streamer.terminate()
+ ```
+
+
+
+## Usage with Additional Streams
+
+In addition to the Primary Stream, you can easily generate any number of additional Secondary Streams of variable bitrates or spatial resolutions, using the exclusive [`-streams`](../../params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter. You just need to add each resolution and bitrate/framerate as a list of dictionaries to this attribute, and the rest is done automatically.
+
+!!! info "A more detailed information on `-streams` attribute can be found [here ➶](../../params/#a-exclusive-parameters)"
+
+The complete example is as follows:
+
+!!! note "If the input video-source contains any audio stream/channel, then it automatically gets assigned to all generated streams without any extra effort."
+
+!!! danger "Important `-streams` attribute Information"
+    * On top of these additional streams, StreamGear, by default, generates a primary stream of the same resolution and framerate as the input, at index `0`.
+    * :warning: Make sure your System/Machine/Server/Network is able to handle these additional streams, discretion is advised!
+    * You **MUST** define the `-resolution` value for each of your streams, otherwise the stream will be discarded!
+    * You only need either `-video_bitrate` or `-framerate` to define a valid stream, since with a `-framerate` value defined, the video-bitrate is calculated automatically.
+    * If you define both `-video_bitrate` and `-framerate` values at the same time, StreamGear will discard the `-framerate` value automatically.
+
+!!! fail "Always use the `-streams` attribute to define additional streams safely, as any duplicate or incorrect definition can break things!"
+
+
+=== "DASH"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import StreamGear
+
+ # activate Single-Source Mode and also define various streams
+ stream_params = {
+ "-video_source": "foo.mp4",
+ "-streams": [
+ {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate
+ {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps framerate
+ {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps framerate
+            {"-resolution": "320x240", "-video_bitrate": "500k"}, # Stream4: 320x240 at 500kbs bitrate
+ ],
+ }
+ # describe a suitable manifest-file location/name and assign params
+ streamer = StreamGear(output="dash_out.mpd", **stream_params)
+    # transcode source
+ streamer.transcode_source()
+ # terminate
+ streamer.terminate()
+ ```
+
+=== "HLS"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import StreamGear
+
+ # activate Single-Source Mode and also define various streams
+ stream_params = {
+ "-video_source": "foo.mp4",
+ "-streams": [
+ {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate
+ {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps framerate
+ {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps framerate
+            {"-resolution": "320x240", "-video_bitrate": "500k"}, # Stream4: 320x240 at 500kbs bitrate
+ ],
+ }
+ # describe a suitable master playlist location/name and assign params
+ streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params)
+    # transcode source
+ streamer.transcode_source()
+ # terminate
+ streamer.terminate()
+ ```
+
+
+
+## Usage with Custom Audio
+
+By default, if the input video-source _(i.e. `-video_source`)_ contains any audio, then it gets automatically mapped to all generated streams. But if you want to add custom audio, you can easily do so by using the exclusive [`-audio`](../../params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter. You just need to input the path of your audio file to this attribute as a `string`, and the API will automatically validate it, as well as map it to all generated streams.
+
+The complete example is as follows:
+
+!!! failure "Make sure this `-audio` audio-source is compatible with the provided video-source, otherwise you will encounter multiple errors, or no output at all."
+
+!!! tip "You can also assign a valid Audio URL as input, rather than filepath. More details can be found [here ➶](../../params/#a-exclusive-parameters)"
+
+
+=== "DASH"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import StreamGear
+
+ # activate Single-Source Mode and various streams, along with custom audio
+ stream_params = {
+ "-video_source": "foo.mp4",
+ "-streams": [
+ {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate
+ {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps
+ {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps
+ ],
+ "-audio": "/home/foo/foo1.aac" # assigns input audio-source: "/home/foo/foo1.aac"
+ }
+ # describe a suitable manifest-file location/name and assign params
+ streamer = StreamGear(output="dash_out.mpd", **stream_params)
+    # transcode source
+ streamer.transcode_source()
+ # terminate
+ streamer.terminate()
+ ```
+
+=== "HLS"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import StreamGear
+
+ # activate Single-Source Mode and various streams, along with custom audio
+ stream_params = {
+ "-video_source": "foo.mp4",
+ "-streams": [
+ {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate
+ {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps
+ {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps
+ ],
+ "-audio": "/home/foo/foo1.aac" # assigns input audio-source: "/home/foo/foo1.aac"
+ }
+ # describe a suitable master playlist location/name and assign params
+ streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params)
+    # transcode source
+ streamer.transcode_source()
+ # terminate
+ streamer.terminate()
+ ```
+
+
+
+
+
+## Usage with Variable FFmpeg Parameters
+
+For seamlessly generating these streaming assets, StreamGear provides a highly extensible and flexible wrapper around [**FFmpeg**](https://ffmpeg.org/) with access to almost all of its parameters. Thereby, you can pass almost any parameter available in FFmpeg itself as a dictionary attribute in the [`stream_params` dictionary parameter](../../params/#stream_params), and use it to manipulate transcoding as you like.
+
+For this example, let us use our own [H.265/HEVC](https://trac.ffmpeg.org/wiki/Encode/H.265) video and [AAC](https://trac.ffmpeg.org/wiki/Encode/AAC) audio encoders, set a custom audio bitrate, and apply various other optimizations:
+
+
+!!! tip "This example just conveys the idea of how to use FFmpeg's encoders/parameters with the StreamGear API. You can use any FFmpeg parameter in a similar manner."
+
+!!! danger "Kindly read [**FFmpeg Docs**](https://ffmpeg.org/documentation.html) carefully, before passing any FFmpeg values to `stream_params` parameter. Wrong values may result in undesired errors or no output at all."
+
+!!! fail "Always use `-streams` attribute to define additional streams safely, any duplicate or incorrect stream definition can break things!"
+
+=== "DASH"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import StreamGear
+
+ # activate Single-Source Mode and various other parameters
+ stream_params = {
+ "-video_source": "foo.mp4", # define Video-Source
+ "-vcodec": "libx265", # assigns H.265/HEVC video encoder
+ "-x265-params": "lossless=1", # enables Lossless encoding
+ "-crf": 25, # Constant Rate Factor: 25
+ "-bpp": "0.15", # Bits-Per-Pixel(BPP), an Internal StreamGear parameter to ensure good quality of high motion scenes
+ "-streams": [
+ {"-resolution": "1280x720", "-video_bitrate": "4000k"}, # Stream1: 1280x720 at 4000kbs bitrate
+ {"-resolution": "640x360", "-framerate": 60.0}, # Stream2: 640x360 at 60fps
+ ],
+ "-audio": "/home/foo/foo1.aac", # define input audio-source: "/home/foo/foo1.aac",
+        "-acodec": "libfdk_aac", # assigns Fraunhofer FDK AAC audio encoder
+ "-vbr": 4, # Variable Bit Rate: `4`
+ }
+
+ # describe a suitable manifest-file location/name and assign params
+ streamer = StreamGear(output="dash_out.mpd", logging=True, **stream_params)
+    # transcode source
+ streamer.transcode_source()
+ # terminate
+ streamer.terminate()
+ ```
+
+=== "HLS"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import StreamGear
+
+ # activate Single-Source Mode and various other parameters
+ stream_params = {
+ "-video_source": "foo.mp4", # define Video-Source
+ "-vcodec": "libx265", # assigns H.265/HEVC video encoder
+ "-x265-params": "lossless=1", # enables Lossless encoding
+ "-crf": 25, # Constant Rate Factor: 25
+ "-bpp": "0.15", # Bits-Per-Pixel(BPP), an Internal StreamGear parameter to ensure good quality of high motion scenes
+ "-streams": [
+ {"-resolution": "1280x720", "-video_bitrate": "4000k"}, # Stream1: 1280x720 at 4000kbs bitrate
+ {"-resolution": "640x360", "-framerate": 60.0}, # Stream2: 640x360 at 60fps
+ ],
+ "-audio": "/home/foo/foo1.aac", # define input audio-source: "/home/foo/foo1.aac",
+        "-acodec": "libfdk_aac", # assigns Fraunhofer FDK AAC audio encoder
+ "-vbr": 4, # Variable Bit Rate: `4`
+ }
+
+ # describe a suitable master playlist file location/name and assign params
+ streamer = StreamGear(output="hls_out.m3u8", format = "hls", logging=True, **stream_params)
+    # transcode source
+ streamer.transcode_source()
+ # terminate
+ streamer.terminate()
+ ```
+
+
+
+[^1]:
+    :bulb: In Single-Source Mode, the Primary Stream's framerate defaults to the framerate of the input video _(i.e. `-video_source`)_.
\ No newline at end of file
diff --git a/docs/gears/streamgear/usage.md b/docs/gears/streamgear/usage.md
deleted file mode 100644
index 81682b439..000000000
--- a/docs/gears/streamgear/usage.md
+++ /dev/null
@@ -1,766 +0,0 @@
-
-
-# StreamGear API Usage Examples:
-
-
-!!! warning "Important Information"
-
- * StreamGear **MUST** requires FFmpeg executables for its core operations. Follow these dedicated [Platform specific Installation Instructions ➶](../ffmpeg_install/) for its installation.
-
- * StreamGear API will throw **RuntimeError**, if it fails to detect valid FFmpeg executables on your system.
-
- * By default, when no additional streams are defined, ==StreamGear generates a primary stream of same resolution and framerate[^1] as the input video _(at the index `0`)_.==
-
- * Always use `terminate()` function at the very end of the main code.
-
-
-
-
-## A. Single-Source Mode
-
-
-
- Single-Source Mode generalized workflow
-
-
-In this mode, StreamGear transcodes entire video/audio file _(as opposed to frames by frame)_ into a sequence of multiple smaller chunks/segments for streaming. This mode works exceptionally well, when you're transcoding lossless long-duration videos(with audio) for streaming and required no extra efforts or interruptions. But on the downside, the provided source cannot be changed or manipulated before sending onto FFmpeg Pipeline for processing.
-
-This mode provide [`transcode_source()`](../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.transcode_source) function to process audio-video files into streamable chunks.
-
-This mode can be easily activated by assigning suitable video path as input to [`-video_source`](../params/#a-exclusive-parameters) attribute of [`stream_params`](../params/#stream_params) dictionary parameter, during StreamGear initialization.
-
-!!! warning
-
- * Using [`stream()`](../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) function instead of [`transcode_source()`](../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.transcode_source) in Single-Source Mode will instantly result in **`RuntimeError`**!
- * Any invalid value to the [`-video_source`](../params/#a-exclusive-parameters) attribute will result in **`AssertionError`**!
-
-
-
-### A.1 Bare-Minimum Usage
-
-Following is the bare-minimum code you need to get started with StreamGear API in Single-Source Mode:
-
-!!! note "If input video-source _(i.e. `-video_source`)_ contains any audio stream/channel, then it automatically gets mapped to all generated streams without any extra efforts."
-
-```python
-# import required libraries
-from vidgear.gears import StreamGear
-
-# activate Single-Source Mode with valid video input
-stream_params = {"-video_source": "foo.mp4"}
-# describe a suitable manifest-file location/name and assign params
-streamer = StreamGear(output="dash_out.mpd", **stream_params)
-# trancode source
-streamer.transcode_source()
-# terminate
-streamer.terminate()
-```
-
-!!! success "After running these bare-minimum commands, StreamGear will produce a Manifest file _(`dash.mpd`)_ with steamable chunks that contains information about a Primary Stream of same resolution and framerate as the input."
-
-
-
-### A.2 Bare-Minimum Usage with Live-Streaming
-
-If you want to **Livestream in Single-Source Mode** _(chunks will contain information for few new frames only, and forgets all previous ones)_, you can use exclusive [`-livestream`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter as follows:
-
-!!! tip "Use `-window_size` & `-extra_window_size` FFmpeg parameters for controlling number of frames to be kept in Chunks. Less these value, less will be latency."
-
-!!! warning "All Chunks will be overwritten in this mode after every few Chunks _(equal to the sum of `-window_size` & `-extra_window_size` values)_, Hence Newer Chunks and Manifest contains NO information of any older video-frames."
-
-!!! note "If input video-source _(i.e. `-video_source`)_ contains any audio stream/channel, then it automatically gets mapped to all generated streams without any extra efforts."
-
-```python
-# import required libraries
-from vidgear.gears import StreamGear
-
-# activate Single-Source Mode with valid video input and enable livestreaming
-stream_params = {"-video_source": 0, "-livestream": True}
-# describe a suitable manifest-file location/name and assign params
-streamer = StreamGear(output="dash_out.mpd", **stream_params)
-# trancode source
-streamer.transcode_source()
-# terminate
-streamer.terminate()
-```
-
-
-
-### A.3 Usage with Additional Streams
-
-In addition to Primary Stream, you can easily generate any number of additional Secondary Streams of variable bitrates or spatial resolutions, using exclusive [`-streams`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter. You just need to add each resolution and bitrate/framerate as list of dictionaries to this attribute, and rest is done automatically _(More detailed information can be found [here ➶](../params/#a-exclusive-parameters))_. The complete example is as follows:
-
-!!! note "If input video-source contains any audio stream/channel, then it automatically gets assigned to all generated streams without any extra efforts."
-
-!!! danger "Important `-streams` attribute Information"
- * On top of these additional streams, StreamGear by default, generates a primary stream of same resolution and framerate as the input, at the index `0`.
- * :warning: Make sure your System/Machine/Server/Network is able to handle these additional streams, discretion is advised!
- * You **MUST** need to define `-resolution` value for your stream, otherwise stream will be discarded!
- * You only need either of `-video_bitrate` or `-framerate` for defining a valid stream. Since with `-framerate` value defined, video-bitrate is calculated automatically.
- * If you define both `-video_bitrate` and `-framerate` values at the same time, StreamGear will discard the `-framerate` value automatically.
-
-!!! fail "Always use `-stream` attribute to define additional streams safely, any duplicate or incorrect definition can break things!"
-
-```python
-# import required libraries
-from vidgear.gears import StreamGear
-
-# activate Single-Source Mode and also define various streams
-stream_params = {
- "-video_source": "foo.mp4",
- "-streams": [
- {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate
- {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps framerate
- {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps framerate
- {"-resolution": "320x240", "-video_bitrate": "500k"}, # Stream3: 320x240 at 500kbs bitrate
- ],
-}
-# describe a suitable manifest-file location/name and assign params
-streamer = StreamGear(output="dash_out.mpd", **stream_params)
-# trancode source
-streamer.transcode_source()
-# terminate
-streamer.terminate()
-```
-
-
-
-### A.4 Usage with Custom Audio
-
-By default, if input video-source _(i.e. `-video_source`)_ contains any audio, then it gets automatically mapped to all generated streams. But, if you want to add any custom audio, you can easily do it by using exclusive [`-audio`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter. You just need to input the path of your audio file to this attribute as string, and StreamGear API will automatically validate and map it to all generated streams. The complete example is as follows:
-
-!!! failure "Make sure this `-audio` audio-source it compatible with provided video-source, otherwise you encounter multiple errors or no output at all."
-
-!!! tip "You can also assign a valid Audio URL as input, rather than filepath. More details can be found [here ➶](../params/#a-exclusive-parameters)"
-
-```python
-# import required libraries
-from vidgear.gears import StreamGear
-
-# activate Single-Source Mode and various streams, along with custom audio
-stream_params = {
- "-video_source": "foo.mp4",
- "-streams": [
- {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate
- {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps
- {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps
- ],
- "-audio": "/home/foo/foo1.aac" # assigns input audio-source: "/home/foo/foo1.aac"
-}
-# describe a suitable manifest-file location/name and assign params
-streamer = StreamGear(output="dash_out.mpd", **stream_params)
-# trancode source
-streamer.transcode_source()
-# terminate
-streamer.terminate()
-```
-
-
-
-
-### A.5 Usage with Variable FFmpeg Parameters
-
-For seamlessly generating these streaming assets, StreamGear provides a highly extensible and flexible wrapper around [**FFmpeg**](https://ffmpeg.org/), and access to almost all of its parameter. Hence, you can access almost any parameter available with FFmpeg itself as dictionary attributes in [`stream_params` dictionary parameter](../params/#stream_params), and use it to manipulate transcoding as you like.
-
-For this example, let us use our own [H.265/HEVC](https://trac.ffmpeg.org/wiki/Encode/H.265) video and [AAC](https://trac.ffmpeg.org/wiki/Encode/AAC) audio encoder, and set custom audio bitrate, and various other optimizations:
-
-
-!!! tip "This example is just conveying the idea on how to use FFmpeg's encoders/parameters with StreamGear API. You can use any FFmpeg parameter in the similar manner."
-
-!!! danger "Kindly read [**FFmpeg Docs**](https://ffmpeg.org/documentation.html) carefully, before passing any FFmpeg values to `stream_params` parameter. Wrong values may result in undesired errors or no output at all."
-
-!!! fail "Always use `-streams` attribute to define additional streams safely, any duplicate or incorrect stream definition can break things!"
-
-
-```python
-# import required libraries
-from vidgear.gears import StreamGear
-
-# activate Single-Source Mode and various other parameters
-stream_params = {
- "-video_source": "foo.mp4", # define Video-Source
- "-vcodec": "libx265", # assigns H.265/HEVC video encoder
- "-x265-params": "lossless=1", # enables Lossless encoding
- "-crf": 25, # Constant Rate Factor: 25
- "-bpp": "0.15", # Bits-Per-Pixel(BPP), an Internal StreamGear parameter to ensure good quality of high motion scenes
- "-streams": [
- {"-resolution": "1280x720", "-video_bitrate": "4000k"}, # Stream1: 1280x720 at 4000kbs bitrate
- {"-resolution": "640x360", "-framerate": 60.0}, # Stream2: 640x360 at 60fps
- ],
- "-audio": "/home/foo/foo1.aac", # define input audio-source: "/home/foo/foo1.aac",
- "-acodec": "libfdk_aac", # assign lossless AAC audio encoder
- "-vbr": 4, # Variable Bit Rate: `4`
-}
-
-# describe a suitable manifest-file location/name and assign params
-streamer = StreamGear(output="dash_out.mpd", logging=True, **stream_params)
-# trancode source
-streamer.transcode_source()
-# terminate
-streamer.terminate()
-```
-
-
-
-
-
-## B. Real-time Frames Mode
-
-
-
- Real-time Frames Mode generalized workflow
-
-
-When no valid input is received on [`-video_source`](../params/#a-exclusive-parameters) attribute of [`stream_params`](../params/#supported-parameters) dictionary parameter, StreamGear API activates this mode where it directly transcodes real-time [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) video-frames _(as opposed to a entire file)_ into a sequence of multiple smaller chunks/segments for streaming.
-
-In this mode, StreamGear **DOES NOT** automatically maps video-source audio to generated streams. You need to manually assign separate audio-source through [`-audio`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter.
-
-This mode provide [`stream()`](../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) function for directly trancoding video-frames into streamable chunks over the FFmpeg pipeline.
-
-
-!!! warning
-
- * Using [`transcode_source()`](../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.transcode_source) function instead of [`stream()`](../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) in Real-time Frames Mode will instantly result in **`RuntimeError`**!
-
- * **NEVER** assign anything to [`-video_source`](../params/#a-exclusive-parameters) attribute of [`stream_params`](../params/#supported-parameters) dictionary parameter, otherwise [Single-Source Mode](#a-single-source-mode) may get activated, and as a result, using [`stream()`](../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) function will throw **`RuntimeError`**!
-
- * You **MUST** use [`-input_framerate`](../params/#a-exclusive-parameters) attribute to set exact value of input framerate when using external audio in this mode, otherwise audio delay will occur in output streams.
-
- * Input framerate defaults to `25.0` fps if [`-input_framerate`](../params/#a-exclusive-parameters) attribute value not defined.
-
-
-
-
-
-### B.1 Bare-Minimum Usage
-
-Following is the bare-minimum code you need to get started with StreamGear API in Real-time Frames Mode:
-
-!!! note "We are using [CamGear](../../camgear/overview/) in this Bare-Minimum example, but any [VideoCapture Gear](../../#a-videocapture-gears) will work in the similar manner."
-
-```python
-# import required libraries
-from vidgear.gears import CamGear
-from vidgear.gears import StreamGear
-import cv2
-
-# open any valid video stream(for e.g `foo1.mp4` file)
-stream = CamGear(source='foo1.mp4').start()
-
-# describe a suitable manifest-file location/name
-streamer = StreamGear(output="dash_out.mpd")
-
-# loop over
-while True:
-
- # read frames from stream
- frame = stream.read()
-
- # check for frame if Nonetype
- if frame is None:
- break
-
-
- # {do something with the frame here}
-
-
- # send frame to streamer
- streamer.stream(frame)
-
- # Show output window
- cv2.imshow("Output Frame", frame)
-
- # check for 'q' key if pressed
- key = cv2.waitKey(1) & 0xFF
- if key == ord("q"):
- break
-
-# close output window
-cv2.destroyAllWindows()
-
-# safely close video stream
-stream.stop()
-
-# safely close streamer
-streamer.terminate()
-```
-
-!!! success "After running these bare-minimum commands, StreamGear will produce a Manifest file _(`dash.mpd`)_ with steamable chunks that contains information about a Primary Stream of same resolution and framerate[^1] as input _(without any audio)_."
-
-
-
-
-### B.2 Bare-Minimum Usage with Live-Streaming
-
-If you want to **Livestream in Real-time Frames Mode** _(chunks will contain information for few new frames only)_, which is excellent for building Low Latency solutions such as Live Camera Streaming, then you can use exclusive [`-livestream`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter as follows:
-
-!!! tip "Use `-window_size` & `-extra_window_size` FFmpeg parameters for controlling number of frames to be kept in Chunks. Less these value, less will be latency."
-
-!!! warning "All Chunks will be overwritten in this mode after every few Chunks _(equal to the sum of `-window_size` & `-extra_window_size` values)_, Hence Newer Chunks and Manifest contains NO information of any older video-frames."
-
-!!! note "In this mode, StreamGear **DOES NOT** automatically maps video-source audio to generated streams. You need to manually assign separate audio-source through [`-audio`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter."
-
-```python
-# import required libraries
-from vidgear.gears import CamGear
-from vidgear.gears import StreamGear
-import cv2
-
-# open any valid video stream(from web-camera attached at index `0`)
-stream = CamGear(source=0).start()
-
-# enable livestreaming and retrieve framerate from CamGear Stream and
-# pass it as `-input_framerate` parameter for controlled framerate
-stream_params = {"-input_framerate": stream.framerate, "-livestream": True}
-
-# describe a suitable manifest-file location/name
-streamer = StreamGear(output="dash_out.mpd", **stream_params)
-
-# loop over
-while True:
-
- # read frames from stream
- frame = stream.read()
-
- # check for frame if Nonetype
- if frame is None:
- break
-
- # {do something with the frame here}
-
- # send frame to streamer
- streamer.stream(frame)
-
- # Show output window
- cv2.imshow("Output Frame", frame)
-
- # check for 'q' key if pressed
- key = cv2.waitKey(1) & 0xFF
- if key == ord("q"):
- break
-
-# close output window
-cv2.destroyAllWindows()
-
-# safely close video stream
-stream.stop()
-
-# safely close streamer
-streamer.terminate()
-```
-
-
-
-### B.3 Bare-Minimum Usage with RGB Mode
-
-In Real-time Frames Mode, StreamGear API provide [`rgb_mode`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) boolean parameter with its `stream()` function, which if enabled _(i.e. `rgb_mode=True`)_, specifies that incoming frames are of RGB format _(instead of default BGR format)_, thereby also known as ==RGB Mode==. The complete usage example is as follows:
-
-```python
-# import required libraries
-from vidgear.gears import CamGear
-from vidgear.gears import StreamGear
-import cv2
-
-# open any valid video stream(for e.g `foo1.mp4` file)
-stream = CamGear(source='foo1.mp4').start()
-
-# describe a suitable manifest-file location/name
-streamer = StreamGear(output="dash_out.mpd")
-
-# loop over
-while True:
-
- # read frames from stream
- frame = stream.read()
-
- # check for frame if Nonetype
- if frame is None:
- break
-
-
- # {simulating RGB frame for this example}
- frame_rgb = frame[:,:,::-1]
-
-
- # send frame to streamer
- streamer.stream(frame_rgb, rgb_mode = True) #activate RGB Mode
-
- # Show output window
- cv2.imshow("Output Frame", frame)
-
- # check for 'q' key if pressed
- key = cv2.waitKey(1) & 0xFF
- if key == ord("q"):
- break
-
-# close output window
-cv2.destroyAllWindows()
-
-# safely close video stream
-stream.stop()
-
-# safely close streamer
-streamer.terminate()
-```
-
-
-
-### B.4 Bare-Minimum Usage with controlled Input-framerate
-
-In Real-time Frames Mode, StreamGear API provides exclusive [`-input_framerate`](../params/#a-exclusive-parameters) attribute for its `stream_params` dictionary parameter, that allow us to set the assumed constant framerate for incoming frames. In this example, we will retrieve framerate from webcam video-stream, and set it as value for `-input_framerate` attribute in StreamGear:
-
-!!! danger "Remember, Input framerate default to `25.0` fps if [`-input_framerate`](../params/#a-exclusive-parameters) attribute value not defined in Real-time Frames mode."
-
-```python
-# import required libraries
-from vidgear.gears import CamGear
-from vidgear.gears import StreamGear
-import cv2
-
-# Open live video stream on webcam at first index(i.e. 0) device
-stream = CamGear(source=0).start()
-
-# retrieve framerate from CamGear Stream and pass it as `-input_framerate` value
-stream_params = {"-input_framerate":stream.framerate}
-
-# describe a suitable manifest-file location/name and assign params
-streamer = StreamGear(output="dash_out.mpd", **stream_params)
-
-# loop over
-while True:
-
- # read frames from stream
- frame = stream.read()
-
- # check for frame if Nonetype
- if frame is None:
- break
-
-
- # {do something with the frame here}
-
-
- # send frame to streamer
- streamer.stream(frame)
-
- # Show output window
- cv2.imshow("Output Frame", frame)
-
- # check for 'q' key if pressed
- key = cv2.waitKey(1) & 0xFF
- if key == ord("q"):
- break
-
-# close output window
-cv2.destroyAllWindows()
-
-# safely close video stream
-stream.stop()
-
-# safely close streamer
-streamer.terminate()
-```
-
-
-
-### B.5 Bare-Minimum Usage with OpenCV
-
-You can easily use StreamGear API directly with any other Video Processing library(_For e.g. [OpenCV](https://github.com/opencv/opencv) itself_) in Real-time Frames Mode. The complete usage example is as follows:
-
-!!! tip "This just a bare-minimum example with OpenCV, but any other Real-time Frames Mode feature/example will work in the similar manner."
-
-```python
-# import required libraries
-from vidgear.gears import StreamGear
-import cv2
-
-# Open suitable video stream, such as webcam on first index(i.e. 0)
-stream = cv2.VideoCapture(0)
-
-# describe a suitable manifest-file location/name
-streamer = StreamGear(output="dash_out.mpd")
-
-# loop over
-while True:
-
- # read frames from stream
- (grabbed, frame) = stream.read()
-
- # check for frame if not grabbed
- if not grabbed:
- break
-
- # {do something with the frame here}
- # lets convert frame to gray for this example
- gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
-
-
- # send frame to streamer
- streamer.stream(gray)
-
- # Show output window
- cv2.imshow("Output Gray Frame", gray)
-
- # check for 'q' key if pressed
- key = cv2.waitKey(1) & 0xFF
- if key == ord("q"):
- break
-
-# close output window
-cv2.destroyAllWindows()
-
-# safely close video stream
-stream.release()
-
-# safely close streamer
-streamer.terminate()
-```
-
-
-
-### B.6 Usage with Additional Streams
-
-Similar to Single-Source Mode, you can easily generate any number of additional Secondary Streams of variable bitrates or spatial resolutions, using exclusive [`-streams`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter _(More detailed information can be found [here ➶](../params/#a-exclusive-parameters))_ in Real-time Frames Mode. The complete example is as follows:
-
-!!! danger "Important `-streams` attribute Information"
- * On top of these additional streams, StreamGear by default, generates a primary stream of same resolution and framerate[^1] as the input, at the index `0`.
- * :warning: Make sure your System/Machine/Server/Network is able to handle these additional streams, discretion is advised!
- * You **MUST** need to define `-resolution` value for your stream, otherwise stream will be discarded!
- * You only need either of `-video_bitrate` or `-framerate` for defining a valid stream. Since with `-framerate` value defined, video-bitrate is calculated automatically.
- * If you define both `-video_bitrate` and `-framerate` values at the same time, StreamGear will discard the `-framerate` value automatically.
-
-!!! fail "Always use `-stream` attribute to define additional streams safely, any duplicate or incorrect definition can break things!"
-
-```python
-# import required libraries
-from vidgear.gears import CamGear
-from vidgear.gears import StreamGear
-import cv2
-
-# Open suitable video stream, such as webcam on first index(i.e. 0)
-stream = CamGear(source=0).start()
-
-# define various streams
-stream_params = {
- "-streams": [
- {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps framerate
- {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps framerate
- {"-resolution": "320x240", "-video_bitrate": "500k"}, # Stream3: 320x240 at 500kbs bitrate
- ],
-}
-
-# describe a suitable manifest-file location/name and assign params
-streamer = StreamGear(output="dash_out.mpd")
-
-# loop over
-while True:
-
- # read frames from stream
- frame = stream.read()
-
- # check for frame if Nonetype
- if frame is None:
- break
-
-
- # {do something with the frame here}
-
-
- # send frame to streamer
- streamer.stream(frame)
-
- # Show output window
- cv2.imshow("Output Frame", frame)
-
- # check for 'q' key if pressed
- key = cv2.waitKey(1) & 0xFF
- if key == ord("q"):
- break
-
-# close output window
-cv2.destroyAllWindows()
-
-# safely close video stream
-stream.stop()
-
-# safely close streamer
-streamer.terminate()
-```
-
-
-
-### B.7 Usage with Audio-Input
-
-In Real-time Frames Mode, if you want to add audio to your streams, you've to use exclusive [`-audio`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter. You need to input the path of your audio to this attribute as string value, and StreamGear API will automatically validate and map it to all generated streams. The complete example is as follows:
-
-!!! failure "Make sure this `-audio` audio-source it compatible with provided video-source, otherwise you encounter multiple errors or no output at all."
-
-!!! warning "You **MUST** use [`-input_framerate`](../params/#a-exclusive-parameters) attribute to set exact value of input framerate when using external audio in Real-time Frames mode, otherwise audio delay will occur in output streams."
-
-!!! tip "You can also assign a valid Audio URL as input, rather than filepath. More details can be found [here ➶](../params/#a-exclusive-parameters)"
-
-```python
-# import required libraries
-from vidgear.gears import CamGear
-from vidgear.gears import StreamGear
-import cv2
-
-# open any valid video stream(for e.g `foo1.mp4` file)
-stream = CamGear(source='foo1.mp4').start()
-
-# add various streams, along with custom audio
-stream_params = {
- "-streams": [
- {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate
- {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps
- {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps
- ],
- "-input_framerate": stream.framerate, # controlled framerate for audio-video sync !!! don't forget this line !!!
- "-audio": "/home/foo/foo1.aac" # assigns input audio-source: "/home/foo/foo1.aac"
-}
-
-# describe a suitable manifest-file location/name and assign params
-streamer = StreamGear(output="dash_out.mpd", **stream_params)
-
-# loop over
-while True:
-
- # read frames from stream
- frame = stream.read()
-
- # check for frame if Nonetype
- if frame is None:
- break
-
-
- # {do something with the frame here}
-
-
- # send frame to streamer
- streamer.stream(frame)
-
- # Show output window
- cv2.imshow("Output Frame", frame)
-
- # check for 'q' key if pressed
- key = cv2.waitKey(1) & 0xFF
- if key == ord("q"):
- break
-
-# close output window
-cv2.destroyAllWindows()
-
-# safely close video stream
-stream.stop()
-
-# safely close streamer
-streamer.terminate()
-```
-
-
-
-### B.8 Usage with Hardware Video-Encoder
-
-
-In Real-time Frames Mode, you can also easily change encoder as per your requirement just by passing `-vcodec` FFmpeg parameter as an attribute in `stream_params` dictionary parameter. In addition to this, you can also specify the additional properties/features/optimizations for your system's GPU similarly.
-
-In this example, we will be using `h264_vaapi` as our hardware encoder and also optionally be specifying our device hardware's location (i.e. `'-vaapi_device':'/dev/dri/renderD128'`) and other features such as `'-vf':'format=nv12,hwupload'` like properties by formatting them as `option` dictionary parameter's attributes, as follows:
-
-!!! warning "Check VAAPI support"
-
- **This example is just conveying the idea on how to use FFmpeg's hardware encoders with WriteGear API in Compression mode, which MAY/MAY-NOT suit your system. Kindly use suitable parameters based your supported system and FFmpeg configurations only.**
-
- To use `h264_vaapi` encoder, remember to check if its available and your FFmpeg compiled with VAAPI support. You can easily do this by executing following one-liner command in your terminal, and observing if output contains something similar as follows:
-
- ```sh
- ffmpeg -hide_banner -encoders | grep vaapi
-
- V..... h264_vaapi H.264/AVC (VAAPI) (codec h264)
- V..... hevc_vaapi H.265/HEVC (VAAPI) (codec hevc)
- V..... mjpeg_vaapi MJPEG (VAAPI) (codec mjpeg)
- V..... mpeg2_vaapi MPEG-2 (VAAPI) (codec mpeg2video)
- V..... vp8_vaapi VP8 (VAAPI) (codec vp8)
- ```
-
-
-```python
-# import required libraries
-from vidgear.gears import VideoGear
-from vidgear.gears import StreamGear
-import cv2
-
-# Open suitable video stream, such as webcam on first index(i.e. 0)
-stream = VideoGear(source=0).start()
-
-# add various streams with custom Video Encoder and optimizations
-stream_params = {
- "-streams": [
- {"-resolution": "1920x1080", "-video_bitrate": "4000k"}, # Stream1: 1920x1080 at 4000kbs bitrate
- {"-resolution": "1280x720", "-framerate": 30.0}, # Stream2: 1280x720 at 30fps
- {"-resolution": "640x360", "-framerate": 60.0}, # Stream3: 640x360 at 60fps
- ],
- "-vcodec": "h264_vaapi", # define custom Video encoder
- "-vaapi_device": "/dev/dri/renderD128", # define device location
- "-vf": "format=nv12,hwupload", # define video pixformat
-}
-
-# describe a suitable manifest-file location/name and assign params
-streamer = StreamGear(output="dash_out.mpd", **stream_params)
-
-# loop over
-while True:
-
- # read frames from stream
- frame = stream.read()
-
- # check for frame if Nonetype
- if frame is None:
- break
-
-
- # {do something with the frame here}
-
-
- # send frame to streamer
- streamer.stream(frame)
-
- # Show output window
- cv2.imshow("Output Frame", frame)
-
- # check for 'q' key if pressed
- key = cv2.waitKey(1) & 0xFF
- if key == ord("q"):
- break
-
-# close output window
-cv2.destroyAllWindows()
-
-# safely close video stream
-stream.stop()
-
-# safely close streamer
-streamer.terminate()
-```
-
-
-
-[^1]:
- :bulb: In Real-time Frames Mode, the Primary Stream's framerate defaults to [`-input_framerate`](../params/#a-exclusive-parameters) attribute value, if defined, else it will be 25fps.
\ No newline at end of file
diff --git a/docs/gears/videogear/overview.md b/docs/gears/videogear/overview.md
index 75768e1e7..b13be384a 100644
--- a/docs/gears/videogear/overview.md
+++ b/docs/gears/videogear/overview.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -37,13 +37,10 @@ VideoGear is ideal when you need to switch to different video sources without ch
!!! tip "Helpful Tips"
- * If you're already familar with [OpenCV](https://github.com/opencv/opencv) library, then see [Switching from OpenCV ➶](../../switch_from_cv/#switching-videocapture-apis)
+ * If you're already familiar with the [OpenCV](https://github.com/opencv/opencv) library, then see [Switching from OpenCV ➶](../../../switch_from_cv/#switching-videocapture-apis)
* It is advised to enable logging(`logging = True`) on the first run for easily identifying any runtime errors.
-
-!!! warning "Make sure to [enable Raspberry Pi hardware-specific settings](https://picamera.readthedocs.io/en/release-1.13/quickstart.html) prior using PiGear API, otherwise nothing will work."
-
## Importing
diff --git a/docs/gears/videogear/params.md b/docs/gears/videogear/params.md
index 19cbc529e..2e70add7c 100644
--- a/docs/gears/videogear/params.md
+++ b/docs/gears/videogear/params.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/docs/gears/videogear/usage.md b/docs/gears/videogear/usage.md
index 2442829de..3f41d1ab8 100644
--- a/docs/gears/videogear/usage.md
+++ b/docs/gears/videogear/usage.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -26,6 +26,8 @@ limitations under the License.
## Bare-Minimum Usage with CamGear backend
+!!! abstract "VideoGear by default provides direct internal access to [CamGear API](../../camgear/overview/)."
+
Following is the bare-minimum code you need to access CamGear API with VideoGear:
```python
@@ -34,7 +36,7 @@ from vidgear.gears import VideoGear
import cv2
-# open any valid video stream(for e.g `myvideo.avi` file
+# open any valid video stream (e.g. the `myvideo.avi` file)
stream = VideoGear(source="myvideo.avi").start()
# loop over
@@ -69,8 +71,12 @@ stream.stop()
## Bare-Minimum Usage with PiGear backend
+!!! abstract "VideoGear contains a special [`enablePiCamera`](../params/#enablepicamera) flag that when `True` provides internal access to [PiGear API](../../pigear/overview/)."
+
Following is the bare-minimum code you need to access PiGear API with VideoGear:
+!!! warning "Make sure to [enable Raspberry Pi hardware-specific settings](https://picamera.readthedocs.io/en/release-1.13/quickstart.html) prior to using the PiGear backend, otherwise nothing will work."
+
```python
# import required libraries
from vidgear.gears import VideoGear
@@ -111,7 +117,9 @@ stream.stop()
## Using VideoGear with Video Stabilizer backend
-VideoGear API provides a special internal wrapper around VidGear's Exclusive [**Video Stabilizer**](../../stabilizer/overview/) class and provides easy way of activating stabilization for various video-streams _(real-time or not)_ with its [`stabilize`](../params/#stabilize) boolean parameter during initialization. The complete usage example is as follows:
+!!! abstract "VideoGear API provides a special internal wrapper around VidGear's exclusive [**Video Stabilizer**](../../stabilizer/overview/) class, providing an easy way of activating stabilization for various video-streams _(real-time or not)_ with its [`stabilize`](../params/#stabilize) boolean parameter during initialization."
+
+The usage example is as follows:
!!! tip "For a more detailed information on Video-Stabilizer Class, Read [here ➶](../../stabilizer/overview/)"
@@ -155,10 +163,15 @@ stream_stab.stop()
-VideoGear contains a special [`enablePiCamera`](../params/#enablepicamera) flag that provides internal access to both CamGear and PiGear APIs, and thereby only one of them can be accessed at a given instance. Therefore, the additional parameters of VideoGear API are also based on API _([PiGear API](../params/#parameters-with-pigear-backend) or [CamGear API](../params/#parameters-with-camgear-backend))_ being accessed. The complete usage example of VideoGear API with Variable PiCamera Properties is as follows:
+## Advanced VideoGear usage with PiGear Backend
+
+!!! abstract "VideoGear provides internal access to both CamGear and PiGear APIs, and thereby all additional parameters of [PiGear API](../params/#parameters-with-pigear-backend) or [CamGear API](../params/#parameters-with-camgear-backend) are also easily accessible within VideoGear API."
+
+The usage example of VideoGear API with Variable PiCamera Properties is as follows:
!!! info "This example is basically a VideoGear API implementation of this [PiGear usage example](../../pigear/usage/#using-pigear-with-variable-camera-properties). Thereby, any [CamGear](../../camgear/usage/) or [PiGear](../../pigear/usage/) usage examples can be implemented with VideoGear API in the similar manner."
+!!! warning "Make sure to [enable Raspberry Pi hardware-specific settings](https://picamera.readthedocs.io/en/release-1.13/quickstart.html) prior to using the PiGear backend, otherwise nothing will work."
```python
# import required libraries
@@ -212,11 +225,11 @@ stream.stop()
## Using VideoGear with Colorspace Manipulation
-VideoGear API also supports **Colorspace Manipulation** but not direct like other VideoCapture Gears.
+VideoGear API also supports **Colorspace Manipulation** but **NOT directly** like other VideoCapture Gears.
!!! danger "Important"
- * `color_space` global variable is **NOT Supported** in VideoGear API, calling it will result in `AttribueError`. More details can be found [here ➶](../../../bonus/colorspace_manipulation/#using-color_space-global-variable)
+ * `color_space` global variable is **NOT Supported** in VideoGear API, calling it will result in `AttributeError`. More details can be found [here ➶](../../../bonus/colorspace_manipulation/#source-colorspace-manipulation)
* Any incorrect or None-type value on [`colorspace`](../params/#colorspace) parameter will be skipped automatically.
@@ -261,4 +274,10 @@ cv2.destroyAllWindows()
stream.stop()
```
+
+
+## Bonus Examples
+
+!!! example "Check out more advanced VideoGear examples with unusual configurations [here ➶](../../../help/videogear_ex/)"
+
\ No newline at end of file
diff --git a/docs/gears/webgear/advanced.md b/docs/gears/webgear/advanced.md
index 28940c432..c0a735366 100644
--- a/docs/gears/webgear/advanced.md
+++ b/docs/gears/webgear/advanced.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -23,6 +23,50 @@ limitations under the License.
!!! note "This is a continuation of the [WebGear doc ➶](../overview/#webgear-api). Thereby, It's advised to first get familiarize with this API, and its [requirements](../usage/#requirements)."
+
+
+
+### Using WebGear with Variable Colorspace
+
+WebGear by default only supports "BGR" colorspace frames as input, but you can use the [`jpeg_compression_colorspace`](../params/#webgear-specific-attributes) string attribute through its options dictionary parameter to specify the colorspace of incoming frames.
+
+Let's implement a bare-minimum example using WebGear, where we will be sending [**GRAY**](https://en.wikipedia.org/wiki/Grayscale) frames to the client's browser:
+
+!!! new "New in v0.2.2"
+ This example was added in `v0.2.2`.
+
+!!! example "This example works in conjunction with [Source ColorSpace manipulation for VideoCapture Gears ➶](../../../../bonus/colorspace_manipulation/#source-colorspace-manipulation)"
+
+!!! info "Supported `jpeg_compression_colorspace` colorspace values are `RGB`, `BGR`, `RGBX`, `BGRX`, `XBGR`, `XRGB`, `GRAY`, `RGBA`, `BGRA`, `ABGR`, `ARGB`, `CMYK`. More information can be found [here ➶](https://gitlab.com/jfolz/simplejpeg)"
+
+```python
+# import required libraries
+import uvicorn
+from vidgear.gears.asyncio import WebGear
+
+# various performance tweaks and enable grayscale input
+options = {
+ "frame_size_reduction": 25,
+ "jpeg_compression_colorspace": "GRAY", # set grayscale
+ "jpeg_compression_quality": 90,
+ "jpeg_compression_fastdct": True,
+ "jpeg_compression_fastupsample": True,
+}
+
+# initialize WebGear app and change its colorspace to grayscale
+web = WebGear(
+ source="foo.mp4", colorspace="COLOR_BGR2GRAY", logging=True, **options
+)
+
+# run this app on Uvicorn server at address http://0.0.0.0:8000/
+uvicorn.run(web(), host="0.0.0.0", port=8000)
+
+# close app safely
+web.shutdown()
+```
+
+**And that's all! Now you can see the output at the [`http://localhost:8000/`](http://localhost:8000/) address on your local machine.**
+
## Using WebGear with a Custom Source(OpenCV)
@@ -30,7 +74,9 @@ limitations under the License.
!!! new "New in v0.2.1"
This example was added in `v0.2.1`.
-WebGear allows you to easily define your own custom Source that you want to use to manipulate your frames before sending them onto the browser.
+WebGear allows you to easily define your own custom Source that you want to use to transform your frames before sending them to the browser.
+
+!!! warning "JPEG Frame-Compression and all of its [performance enhancing attributes](../usage/#performance-enhancements) are disabled with a Custom Source!"
Let's implement a bare-minimum example with a Custom Source using WebGear API and OpenCV:
@@ -62,12 +108,12 @@ async def my_frame_producer():
# do something with your OpenCV frame here
# reducer frames size if you want more performance otherwise comment this line
- frame = await reducer(frame, percentage=30) # reduce frame by 30%
+ frame = await reducer(frame, percentage=30, interpolation=cv2.INTER_AREA) # reduce frame by 30%
# handle JPEG encoding
encodedImage = cv2.imencode(".jpg", frame)[1].tobytes()
# yield frame in byte format
- yield (b"--frame\r\nContent-Type:video/jpeg2000\r\n\r\n" + encodedImage + b"\r\n")
- await asyncio.sleep(0.00001)
+ yield (b"--frame\r\nContent-Type:image/jpeg\r\n\r\n" + encodedImage + b"\r\n")
+ await asyncio.sleep(0)
# close stream
stream.release()
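As an editorial aside to the corrected `Content-Type` above: the byte layout of one `multipart/x-mixed-replace` chunk yielded by the frame producer can be sketched in isolation. This is an illustrative sketch only — `mjpeg_chunk` is a hypothetical helper name, not part of this patch or of vidgear:

```python
def mjpeg_chunk(jpeg_bytes: bytes) -> bytes:
    # Boundary marker, correct image/jpeg MIME type, blank separator line,
    # JPEG payload, then a trailing CRLF — the framing yielded per frame.
    return b"--frame\r\nContent-Type:image/jpeg\r\n\r\n" + jpeg_bytes + b"\r\n"

# minimal stand-in for real JPEG data (SOI + EOI markers only)
chunk = mjpeg_chunk(b"\xff\xd8\xff\xd9")
```

Each browser-bound frame is one such chunk; the `--frame` boundary must match the boundary declared in the stream's response headers.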
@@ -101,9 +147,9 @@ from vidgear.gears.asyncio import WebGear
# various performance tweaks
options = {
"frame_size_reduction": 40,
- "frame_jpeg_quality": 80,
- "frame_jpeg_optimize": True,
- "frame_jpeg_progressive": False,
+ "jpeg_compression_quality": 80,
+ "jpeg_compression_fastdct": True,
+ "jpeg_compression_fastupsample": False,
}
# initialize WebGear app
@@ -172,9 +218,9 @@ async def hello_world(request):
# add various performance tweaks as usual
options = {
"frame_size_reduction": 40,
- "frame_jpeg_quality": 80,
- "frame_jpeg_optimize": True,
- "frame_jpeg_progressive": False,
+ "jpeg_compression_quality": 80,
+ "jpeg_compression_fastdct": True,
+ "jpeg_compression_fastupsample": False,
}
# initialize WebGear app with a valid source
@@ -195,56 +241,51 @@ web.shutdown()
-## Rules for Altering WebGear Files and Folders
-
-WebGear gives us complete freedom of altering data files generated in [**Auto-Generation Process**](../overview/#auto-generation-process), But you've to keep the following rules in mind:
+## Using WebGear with MiddleWares
-### Rules for Altering Data Files
-
-- [x] You allowed to alter/change code in all existing [default downloaded files](../overview/#auto-generation-process) at your convenience without any restrictions.
-- [x] You allowed to delete/rename all existing data files, except remember **NOT** to delete/rename three critical data-files (i.e `index.html`, `404.html` & `500.html`) present in `templates` folder inside the `webgear` directory at the [default location](../overview/#default-location), otherwise, it will trigger [Auto-generation process](../overview/#auto-generation-process), and it will overwrite the existing files with Server ones.
-- [x] You're allowed to add your own additional `.html`, `.css`, `.js`, etc. files in the respective folders at the [**default location**](../overview/#default-location) and [custom mounted Data folders](#using-webgear-with-custom-mounting-points).
-
-### Rules for Altering Data Folders
-
-- [x] You're allowed to add/mount any number of additional folder as shown in [this example above](#using-webgear-with-custom-mounting-points).
-- [x] You're allowed to delete/rename existing folders at the [**default location**](../overview/#default-location) except remember **NOT** to delete/rename `templates` folder in the `webgear` directory where critical data-files (i.e `index.html`, `404.html` & `500.html`) are located, otherwise, it will trigger [Auto-generation process](../overview/#auto-generation-process).
+WebGear natively supports Starlette's ASGI middleware classes for easily implementing behavior that is applied across your entire ASGI application.
-
+!!! new "New in v0.2.2"
+ This example was added in `v0.2.2`.
-## Bonus Usage Examples
+!!! info "All supported middlewares can be found [here ➶](https://www.starlette.io/middleware/)"
-Because of WebGear API's flexible internal wapper around [VideoGear](../../videogear/overview/), it can easily access any parameter of [CamGear](#camgear) and [PiGear](#pigear) videocapture APIs.
+For this example, let's use [`CORSMiddleware`](https://www.starlette.io/middleware/#corsmiddleware) for implementing appropriate [CORS headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) to outgoing responses in our application in order to allow cross-origin requests from browsers, as follows:
-!!! info "Following usage examples are just an idea of what can be done with WebGear API, you can try various [VideoGear](../../videogear/params/), [CamGear](../../camgear/params/) and [PiGear](../../pigear/params/) parameters directly in WebGear API in the similar manner."
+!!! danger "The parameters used by the CORSMiddleware implementation are restrictive by default, so you'll need to explicitly enable particular origins, methods, or headers in order for browsers to be permitted to use them in a cross-domain context."
-### Using WebGear with Pi Camera Module
-
-Here's a bare-minimum example of using WebGear API with the Raspberry Pi camera module while tweaking its various properties in just one-liner:
+!!! tip "Starlette provides several arguments for enabling origins, methods, or headers for CORSMiddleware API. More information can be found [here ➶](https://www.starlette.io/middleware/#corsmiddleware)"
```python
# import libs
-import uvicorn
+import uvicorn, asyncio
+from starlette.middleware import Middleware
+from starlette.middleware.cors import CORSMiddleware
from vidgear.gears.asyncio import WebGear
-# various webgear performance and Raspberry Pi camera tweaks
+# add various performance tweaks as usual
options = {
"frame_size_reduction": 40,
- "frame_jpeg_quality": 80,
- "frame_jpeg_optimize": True,
- "frame_jpeg_progressive": False,
- "hflip": True,
- "exposure_mode": "auto",
- "iso": 800,
- "exposure_compensation": 15,
- "awb_mode": "horizon",
- "sensor_mode": 0,
+ "jpeg_compression_quality": 80,
+ "jpeg_compression_fastdct": True,
+ "jpeg_compression_fastupsample": False,
}
-# initialize WebGear app
+# initialize WebGear app with a valid source
web = WebGear(
- enablePiCamera=True, resolution=(640, 480), framerate=60, logging=True, **options
-)
+ source="/home/foo/foo1.mp4", logging=True, **options
+) # define source i.e. `foo1.mp4` and enable `logging` for debugging
+
+# define and assign suitable cors middlewares
+web.middleware = [
+ Middleware(
+ CORSMiddleware,
+ allow_origins=["*"],
+ allow_credentials=True,
+ allow_methods=["*"],
+ allow_headers=["*"],
+ )
+]
# run this app on Uvicorn server at address http://localhost:8000/
uvicorn.run(web(), host="localhost", port=8000)
@@ -252,35 +293,29 @@ uvicorn.run(web(), host="localhost", port=8000)
# close app safely
web.shutdown()
```
+**And that's all! Now you can see the output at the [`http://localhost:8000`](http://localhost:8000) address.**
-### Using WebGear with real-time Video Stabilization enabled
-
-Here's an example of using WebGear API with real-time Video Stabilization enabled:
+## Rules for Altering WebGear Files and Folders
-```python
-# import libs
-import uvicorn
-from vidgear.gears.asyncio import WebGear
+WebGear gives us complete freedom of altering data files generated in the [**Auto-Generation Process**](../overview/#auto-generation-process), but you have to keep the following rules in mind:
-# various webgear performance tweaks
-options = {
- "frame_size_reduction": 40,
- "frame_jpeg_quality": 80,
- "frame_jpeg_optimize": True,
- "frame_jpeg_progressive": False,
-}
+### Rules for Altering Data Files
+
+- [x] You're allowed to alter/change code in all existing [default downloaded files](../overview/#auto-generation-process) at your convenience without any restrictions.
+- [x] You're allowed to delete/rename all existing data files, except remember **NOT** to delete/rename the three critical data-files (i.e. `index.html`, `404.html` & `500.html`) present in the `templates` folder inside the `webgear` directory at the [default location](../overview/#default-location), otherwise it will trigger the [Auto-generation process](../overview/#auto-generation-process), which will overwrite the existing files with Server ones.
+- [x] You're allowed to add your own additional `.html`, `.css`, `.js`, etc. files in the respective folders at the [**default location**](../overview/#default-location) and [custom mounted Data folders](#using-webgear-with-custom-mounting-points).
-# initialize WebGear app with a raw source and enable video stabilization(`stabilize=True`)
-web = WebGear(source="foo.mp4", stabilize=True, logging=True, **options)
+### Rules for Altering Data Folders
+
+- [x] You're allowed to add/mount any number of additional folders as shown in [this example above](#using-webgear-with-custom-mounting-points).
+- [x] You're allowed to delete/rename existing folders at the [**default location**](../overview/#default-location), except remember **NOT** to delete/rename the `templates` folder in the `webgear` directory where the critical data-files (i.e. `index.html`, `404.html` & `500.html`) are located, otherwise it will trigger the [Auto-generation process](../overview/#auto-generation-process).
-# run this app on Uvicorn server at address http://localhost:8000/
-uvicorn.run(web(), host="localhost", port=8000)
+
-# close app safely
-web.shutdown()
-```
+## Bonus Examples
+
+!!! example "Check out more advanced WebGear examples with unusual configurations [here ➶](../../../help/webgear_ex/)"
-
\ No newline at end of file
diff --git a/docs/gears/webgear/overview.md b/docs/gears/webgear/overview.md
index 67267453a..a08a07187 100644
--- a/docs/gears/webgear/overview.md
+++ b/docs/gears/webgear/overview.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/docs/gears/webgear/params.md b/docs/gears/webgear/params.md
index 95c0c586c..d82839c7d 100644
--- a/docs/gears/webgear/params.md
+++ b/docs/gears/webgear/params.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -75,7 +75,7 @@ This parameter can be used to pass user-defined parameter to WebGear API by form
WebGear(logging=True, **options)
```
-* **`frame_size_reduction`** _(int/float)_ : This attribute controls the size reduction _(in percentage)_ of the frame to be streamed on Server. The value defaults to `20`, and must be no higher than `90` _(fastest, max compression, Barely Visible frame-size)_ and no lower than `0` _(slowest, no compression, Original frame-size)_. Its recommended value is between `40-60`. Its usage is as follows:
+* **`frame_size_reduction`** _(int/float)_ : This attribute controls the size reduction _(in percentage)_ of the frame to be streamed on the Server, and it has the most significant effect on performance. The value defaults to `25`, and must be no higher than `90` _(fastest, max compression, barely visible frame-size)_ and no lower than `0` _(slowest, no compression, original frame-size)_. Its recommended value is between `40-60`. Its usage is as follows:
```python
# frame-size will be reduced by 50%
@@ -84,49 +84,67 @@ This parameter can be used to pass user-defined parameter to WebGear API by form
WebGear(logging=True, **options)
```
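To make the percentage semantics concrete, the arithmetic behind "reduction by N percent" can be sketched as scaling both frame dimensions. This is an illustrative sketch only, not WebGear's internal implementation:

```python
def reduced_dims(width: int, height: int, percentage: float):
    # Shrink both dimensions by the given reduction percentage:
    # 0 keeps the original size, 90 leaves 10% of each dimension.
    scale = 1 - percentage / 100
    return int(width * scale), int(height * scale)

# a 1280x720 frame reduced by 50% becomes 640x360
print(reduced_dims(1280, 720, 50))
```

This is why values in the `40-60` range are recommended: they shrink the encoded payload substantially while keeping the frame recognizable.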
-* **`enable_infinite_frames`** _(boolean)_ : Can be used to continue streaming _(instead of terminating immediately)_ with emulated blank frames with text "No Input", whenever the input source disconnects. Its default value is `False`. Its usage is as follows
+* **`jpeg_compression_quality`**: _(int/float)_ This attribute controls the JPEG quantization factor. Its value varies from `10` to `100` _(the higher the value, the better the quality, but the lower the performance)_. Its default value is `90`. Its usage is as follows:
- !!! new "New in v0.2.1"
- `enable_infinite_frames` attribute was added in `v0.2.1`.
+ !!! new "New in v0.2.2"
+ `jpeg_compression_quality` attribute was added in `v0.2.2`.
```python
- # emulate infinite frames
- options = {"enable_infinite_frames": True}
+ # activate jpeg encoding and set quality 95%
+ options = {"jpeg_compression_quality": 95}
# assign it
WebGear(logging=True, **options)
```
-* **Various Encoding Parameters:**
+* **`jpeg_compression_fastdct`**: _(bool)_ If this attribute is `True`, WebGear API uses the fastest DCT method, which speeds up decoding by 4-5% for a minor loss in quality. Its default value is `True`, and its usage is as follows:
- In WebGear, the input video frames are first encoded into [**Motion JPEG (M-JPEG or MJPEG**)](https://en.wikipedia.org/wiki/Motion_JPEG) video compression format in which each video frame or interlaced field of a digital video sequence is compressed separately as a JPEG image, before sending onto a server. Therefore, WebGear API provides various attributes to have full control over JPEG encoding performance and quality, which are as follows:
+ !!! new "New in v0.2.2"
+ `jpeg_compression_fastdct` attribute was added in `v0.2.2`.
+ ```python
+ # activate jpeg encoding and enable fast dct
+ options = {"jpeg_compression_fastdct": True}
+ # assign it
+ WebGear(logging=True, **options)
+ ```
- * **`frame_jpeg_quality`** _(integer)_ : It controls the JPEG encoder quality and value varies from `0` to `100` (the higher is the better quality but performance will be lower). Its default value is `95`. Its usage is as follows:
+* **`jpeg_compression_fastupsample`**: _(bool)_ If this attribute is `True`, WebGear API uses the fastest color upsampling method. Its default value is `False`, and its usage is as follows:
- ```python
- # JPEG will be encoded at 80% quality
- options = {"frame_jpeg_quality": 80}
- # assign it
- WebGear(logging=True, **options)
- ```
+ !!! new "New in v0.2.2"
+ `jpeg_compression_fastupsample` attribute was added in `v0.2.2`.
- * **`frame_jpeg_optimize`** _(boolean)_ : It enables various JPEG compression optimizations such as Chroma subsampling, Quantization table, etc. Its default value is `False`. Its usage is as follows:
+ ```python
+ # activate jpeg encoding and enable fast upsampling
+ options = {"jpeg_compression_fastupsample": True}
+ # assign it
+ WebGear(logging=True, **options)
+ ```
- ```python
- # JPEG optimizations are enabled
- options = {"frame_jpeg_optimize": True}
- # assign it
- WebGear(logging=True, **options)
- ```
+* **`jpeg_compression_colorspace`**: _(str)_ This attribute is used to specify the colorspace of incoming frames for compression. Its usage is as follows:
- * **`frame_jpeg_progressive`** _(boolean)_ : It enables **Progressive** JPEG encoding instead of the **Baseline**. Progressive Mode. Its default value is `False` means baseline mode is in-use. Its usage is as follows:
+ !!! info "Supported `jpeg_compression_colorspace` colorspace values are `RGB`, `BGR`, `RGBX`, `BGRX`, `XBGR`, `XRGB`, `GRAY`, `RGBA`, `BGRA`, `ABGR`, `ARGB`, `CMYK`. More information can be found [here ➶](https://gitlab.com/jfolz/simplejpeg)"
- ```python
- # Progressive JPEG encoding enabled
- options = {"frame_jpeg_progressive": True}
- # assign it
- WebGear(logging=True, **options)
- ```
+ !!! new "New in v0.2.2"
+ `jpeg_compression_colorspace` attribute was added in `v0.2.2`.
+
+ ```python
+ # specify that incoming frames are grayscale
+ options = {"jpeg_compression_colorspace": "GRAY"}
+ # assign it
+ WebGear(logging=True, **options)
+ ```
+
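Since an unsupported colorspace value would be rejected at runtime, it can help to pre-check values against the supported list above. This is a hedged sketch — `is_valid_colorspace` is a hypothetical helper for illustration, not a vidgear API:

```python
# Supported `jpeg_compression_colorspace` values, per the simplejpeg list above
SUPPORTED_COLORSPACES = {
    "RGB", "BGR", "RGBX", "BGRX", "XBGR", "XRGB",
    "GRAY", "RGBA", "BGRA", "ABGR", "ARGB", "CMYK",
}

def is_valid_colorspace(value) -> bool:
    # hypothetical pre-check helper; accepts any casing of a supported name
    return isinstance(value, str) and value.upper() in SUPPORTED_COLORSPACES
```

A check like this lets an application fail fast with a clear message instead of discovering a typo only after the server starts streaming.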
+* **`enable_infinite_frames`** _(boolean)_ : Can be used to continue streaming _(instead of terminating immediately)_ with emulated blank frames containing the text "No Input" whenever the input source disconnects. Its default value is `False`. Its usage is as follows:
+
+ !!! new "New in v0.2.1"
+ `enable_infinite_frames` attribute was added in `v0.2.1`.
+
+ ```python
+ # emulate infinite frames
+ options = {"enable_infinite_frames": True}
+ # assign it
+ WebGear(logging=True, **options)
+ ```
diff --git a/docs/gears/webgear/usage.md b/docs/gears/webgear/usage.md
index 2a24c69a3..3d6a66983 100644
--- a/docs/gears/webgear/usage.md
+++ b/docs/gears/webgear/usage.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -54,24 +54,27 @@ WebGear provides certain performance enhancing attributes for its [`options`](..
* **Various Encoding Parameters:**
- In WebGear API, the input video frames are first encoded into [**Motion JPEG (M-JPEG or MJPEG**)](https://en.wikipedia.org/wiki/Motion_JPEG) compression format, in which each video frame or interlaced field of a digital video sequence is compressed separately as a JPEG image, before sending onto a server. Therefore, WebGear API provides various attributes to have full control over JPEG encoding performance and quality, which are as follows:
+ In WebGear API, the input video frames are first encoded into the [**Motion JPEG (M-JPEG or MJPEG)**](https://en.wikipedia.org/wiki/Motion_JPEG) compression format, in which each video frame or interlaced field of a digital video sequence is compressed separately as a JPEG image using the [`simplejpeg`](https://gitlab.com/jfolz/simplejpeg) library, before being sent to a server. Therefore, WebGear API provides various attributes to have full control over JPEG encoding performance and quality, which are as follows:
- * **`frame_jpeg_quality`**: _(int)_ It controls the JPEG encoder quality. Its value varies from `0` to `100` (the higher is the better quality but performance will be lower). Its default value is `95`. Its usage is as follows:
+ * **`jpeg_compression_quality`**: _(int/float)_ This attribute controls the JPEG quantization factor. Its value varies from `10` to `100` _(the higher the value, the better the quality, but the lower the performance)_. Its default value is `90`. Its usage is as follows:
```python
- options={"frame_jpeg_quality": 80} #JPEG will be encoded at 80% quality.
+ # activate jpeg encoding and set quality 95%
+ options = {"jpeg_compression_quality": 95}
```
- * **`frame_jpeg_optimize`**: _(bool)_ It enables various JPEG compression optimizations such as Chroma sub-sampling, Quantization table, etc. These optimizations based on JPEG libs which are used while compiling OpenCV binaries, and recent versions of OpenCV uses [**TurboJPEG library**](https://libjpeg-turbo.org/), which is highly recommended for performance. Its default value is `False`. Its usage is as follows:
-
+ * **`jpeg_compression_fastdct`**: _(bool)_ If this attribute is `True`, WebGear API uses the fastest DCT method, which speeds up decoding by 4-5% for a minor loss in quality. Its default value is `True`, and its usage is as follows:
+
```python
- options={"frame_jpeg_optimize": True} #JPEG optimizations are enabled.
+ # activate jpeg encoding and enable fast dct
+ options = {"jpeg_compression_fastdct": True}
```
- * **`frame_jpeg_progressive`**: _(bool)_ It enables **Progressive** JPEG encoding instead of the **Baseline**. Progressive Mode, displays an image in such a way that it shows a blurry/low-quality photo in its entirety, and then becomes clearer as the image downloads, whereas in Baseline Mode, an image created using the JPEG compression algorithm that will start to display the image as the data is made available, line by line. Progressive Mode, can drastically improve the performance in WebGear but at the expense of additional CPU load, thereby suitable for powerful systems only. Its default value is `False` meaning baseline mode is in-use. Its usage is as follows:
-
+ * **`jpeg_compression_fastupsample`**: _(bool)_ If this attribute is `True`, WebGear API uses the fastest color upsampling method. Its default value is `False`, and its usage is as follows:
+
```python
- options={"frame_jpeg_progressive": True} #Progressive JPEG encoding enabled.
+ # activate jpeg encoding and enable fast upsampling
+ options = {"jpeg_compression_fastupsample": True}
```
@@ -85,7 +88,7 @@ Let's implement our Bare-Minimum usage example with these [**Performance Enhanci
You can access and run WebGear VideoStreamer Server programmatically in your python script in just a few lines of code, as follows:
-!!! tip "For accessing WebGear on different Client Devices on the network, use `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine. More information can be found [here ➶](./../../../help/webgear_faqs/#is-it-possible-to-stream-on-a-different-device-on-the-network-with-webgear)"
+!!! tip "For accessing WebGear on different Client Devices on the network, use `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine. More information can be found [here ➶](../../../help/webgear_faqs/#is-it-possible-to-stream-on-a-different-device-on-the-network-with-webgear)"
```python
@@ -96,9 +99,9 @@ from vidgear.gears.asyncio import WebGear
# various performance tweaks
options = {
"frame_size_reduction": 40,
- "frame_jpeg_quality": 80,
- "frame_jpeg_optimize": True,
- "frame_jpeg_progressive": False,
+ "jpeg_compression_quality": 80,
+ "jpeg_compression_fastdct": True,
+ "jpeg_compression_fastupsample": False,
}
# initialize WebGear app
@@ -111,7 +114,7 @@ uvicorn.run(web(), host="localhost", port=8000)
web.shutdown()
```
-which can be accessed on any browser on the network at http://localhost:8000/.
+which can be accessed on any browser on your machine at http://localhost:8000/.
### Running from Terminal
@@ -123,7 +126,7 @@ You can also access and run WebGear Server directly from the terminal commandlin
!!! warning "If you're using `--options/-op` flag, then kindly wrap your dictionary value in single `''` quotes."
```sh
-python3 -m vidgear.gears.asyncio --source test.avi --logging True --options '{"frame_size_reduction": 50, "frame_jpeg_quality": 80, "frame_jpeg_optimize": True, "frame_jpeg_progressive": False}'
+python3 -m vidgear.gears.asyncio --source test.avi --logging True --options '{"frame_size_reduction": 50, "jpeg_compression_quality": 80, "jpeg_compression_fastdct": True, "jpeg_compression_fastupsample": False}'
```
which can also be accessed on any browser on the network at http://localhost:8000/.
@@ -133,8 +136,6 @@ which can also be accessed on any browser on the network at http://localhost:800
You can run `#!py3 python3 -m vidgear.gears.asyncio -h` help command to see all the advanced settings, as follows:
- !!! warning "If you're using `--options/-op` flag, then kindly wrap your dictionary value in single `''` quotes."
-
```sh
usage: python -m vidgear.gears.asyncio [-h] [-m MODE] [-s SOURCE] [-ep ENABLEPICAMERA] [-S STABILIZE]
[-cn CAMERA_NUM] [-yt stream_mode] [-b BACKEND] [-cs COLORSPACE]
diff --git a/docs/gears/webgear_rtc/advanced.md b/docs/gears/webgear_rtc/advanced.md
index 2a1fc2346..2726cdc64 100644
--- a/docs/gears/webgear_rtc/advanced.md
+++ b/docs/gears/webgear_rtc/advanced.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -34,7 +34,7 @@ Let's implement a bare-minimum example using WebGear_RTC as Real-time Broadcaste
!!! info "[`enable_infinite_frames`](../params/#webgear_rtc-specific-attributes) is enforced by default with this(`enable_live_broadcast`) attribute."
-!!! tip "For accessing WebGear_RTC on different Client Devices on the network, use `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine. More information can be found [here ➶](./../../help/webgear_rtc_faqs/#is-it-possible-to-stream-on-a-different-device-on-the-network-with-webgear_rtc)"
+!!! tip "For accessing WebGear_RTC on different Client Devices on the network, use `"0.0.0.0"` as the host value instead of `"localhost"` on the Host Machine. More information can be found [here ➶](../../../help/webgear_rtc_faqs/#is-it-possible-to-stream-on-a-different-device-on-the-network-with-webgear_rtc)"
```python
# import required libraries
@@ -50,21 +50,21 @@ options = {
# initialize WebGear_RTC app
web = WebGear_RTC(source="foo.mp4", logging=True, **options)
-# run this app on Uvicorn server at address http://localhost:8000/
-uvicorn.run(web(), host="localhost", port=8000)
+# run this app on Uvicorn server at address http://0.0.0.0:8000/
+uvicorn.run(web(), host="0.0.0.0", port=8000)
# close app safely
web.shutdown()
```
-**And that's all, Now you can see output at [`http://localhost:8000/`](http://localhost:8000/) address.**
+**And that's all, Now you can see output at [`http://localhost:8000/`](http://localhost:8000/) address on your local machine.**
## Using WebGear_RTC with a Custom Source(OpenCV)
-WebGear_RTC allows you to easily define your own Custom Media Server with a custom source that you want to use to manipulate your frames before sending them onto the browser.
+WebGear_RTC allows you to easily define your own Custom Media Server with a custom source that you want to use to transform your frames before sending them onto the browser.
Let's implement a bare-minimum example with a Custom Source using WebGear_RTC API and OpenCV:
@@ -77,6 +77,7 @@ Let's implement a bare-minimum example with a Custom Source using WebGear_RTC AP
import uvicorn, asyncio, cv2
from av import VideoFrame
from aiortc import VideoStreamTrack
+from aiortc.mediastreams import MediaStreamError
from vidgear.gears.asyncio import WebGear_RTC
from vidgear.gears.asyncio.helper import reducer
@@ -112,7 +113,7 @@ class Custom_RTCServer(VideoStreamTrack):
# if NoneType
if not grabbed:
- return None
+ raise MediaStreamError
# reducer frames size if you want more performance otherwise comment this line
frame = await reducer(frame, percentage=30) # reduce frame by 30%
@@ -145,7 +146,6 @@ uvicorn.run(web(), host="localhost", port=8000)
# close app safely
web.shutdown()
-
```
**And that's all, Now you can see output at [`http://localhost:8000/`](http://localhost:8000/) address.**
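The key contract in a custom track's `recv()` is that it keeps supplying frames and signals end-of-stream by raising `MediaStreamError` rather than returning a value. Here is a minimal aiortc-free sketch of that pattern (the class and helper names below are illustrative, not aiortc's API):

```python
import asyncio


class MediaStreamError(Exception):
    """Raised when a track has ended (mirrors aiortc's end-of-stream signal)."""


class ToyTrack:
    """Illustrative stand-in for a VideoStreamTrack-like frame source."""

    def __init__(self, frames):
        self._frames = iter(frames)

    async def recv(self):
        # hand out the next frame, or signal end-of-stream
        try:
            return next(self._frames)
        except StopIteration:
            raise MediaStreamError("track ended")


async def consume(track):
    # gather frames until the track signals it has ended
    received = []
    while True:
        try:
            received.append(await track.recv())
        except MediaStreamError:
            break  # stream finished cleanly
    return received


frames = asyncio.run(consume(ToyTrack(["f0", "f1", "f2"])))
print(frames)  # ['f0', 'f1', 'f2']
```

Raising (instead of returning) lets the consumer's `await track.recv()` unwind through normal exception handling, which is why the custom-source example above raises `MediaStreamError` when no frame is grabbed.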
@@ -255,53 +255,48 @@ web.shutdown()
-## Rules for Altering WebGear_RTC Files and Folders
-
-WebGear_RTC gives us complete freedom of altering data files generated in [**Auto-Generation Process**](../overview/#auto-generation-process), But you've to keep the following rules in mind:
-
-### Rules for Altering Data Files
-
-- [x] You allowed to alter/change code in all existing [default downloaded files](../overview/#auto-generation-process) at your convenience without any restrictions.
-- [x] You allowed to delete/rename all existing data files, except remember **NOT** to delete/rename three critical data-files (i.e `index.html`, `404.html` & `500.html`) present in `templates` folder inside the `webgear_rtc` directory at the [default location](../overview/#default-location), otherwise, it will trigger [Auto-generation process](../overview/#auto-generation-process), and it will overwrite the existing files with Server ones.
-- [x] You're allowed to add your own additional `.html`, `.css`, `.js`, etc. files in the respective folders at the [**default location**](../overview/#default-location) and [custom mounted Data folders](#using-webgear_rtc-with-custom-mounting-points).
+## Using WebGear_RTC with MiddleWares
-### Rules for Altering Data Folders
-
-- [x] You're allowed to add/mount any number of additional folder as shown in [this example above](#using-webgear_rtc-with-custom-mounting-points).
-- [x] You're allowed to delete/rename existing folders at the [**default location**](../overview/#default-location) except remember **NOT** to delete/rename `templates` folder in the `webgear_rtc` directory where critical data-files (i.e `index.html`, `404.html` & `500.html`) are located, otherwise, it will trigger [Auto-generation process](../overview/#auto-generation-process).
+WebGear_RTC also natively supports ASGI middleware classes with Starlette, making it easy to implement behavior that is applied across your entire ASGI application.
-
+!!! new "New in v0.2.2"
+ This example was added in `v0.2.2`.
-## Bonus Usage Examples
+!!! info "All supported middlewares can be found [here ➶](https://www.starlette.io/middleware/)"
-Because of WebGear_RTC API's flexible internal wapper around [VideoGear](../../videogear/overview/), it can easily access any parameter of [CamGear](#camgear) and [PiGear](#pigear) videocapture APIs.
+For this example, let's use [`CORSMiddleware`](https://www.starlette.io/middleware/#corsmiddleware) for implementing appropriate [CORS headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) to outgoing responses in our application in order to allow cross-origin requests from browsers, as follows:
-!!! info "Following usage examples are just an idea of what can be done with WebGear_RTC API, you can try various [VideoGear](../../videogear/params/), [CamGear](../../camgear/params/) and [PiGear](../../pigear/params/) parameters directly in WebGear_RTC API in the similar manner."
+!!! danger "The parameters used by the CORSMiddleware implementation are restrictive by default, so you'll need to explicitly enable particular origins, methods, or headers in order for browsers to be permitted to use them in a Cross-Domain context."
-### Using WebGear_RTC with Pi Camera Module
-
-Here's a bare-minimum example of using WebGear_RTC API with the Raspberry Pi camera module while tweaking its various properties in just one-liner:
+!!! tip "Starlette provides several arguments for enabling origins, methods, or headers for CORSMiddleware API. More information can be found [here ➶](https://www.starlette.io/middleware/#corsmiddleware)"
```python
# import libs
-import uvicorn
+import uvicorn, asyncio
+from starlette.middleware import Middleware
+from starlette.middleware.cors import CORSMiddleware
from vidgear.gears.asyncio import WebGear_RTC
-# various webgear_rtc performance and Raspberry Pi camera tweaks
+# add various performance tweaks as usual
options = {
"frame_size_reduction": 25,
- "hflip": True,
- "exposure_mode": "auto",
- "iso": 800,
- "exposure_compensation": 15,
- "awb_mode": "horizon",
- "sensor_mode": 0,
}
-# initialize WebGear_RTC app
+# initialize WebGear_RTC app with a valid source
web = WebGear_RTC(
- enablePiCamera=True, resolution=(640, 480), framerate=60, logging=True, **options
-)
+ source="/home/foo/foo1.mp4", logging=True, **options
+)  # define a valid video source and enable `logging` for debugging
+
+# define and assign suitable cors middlewares
+web.middleware = [
+ Middleware(
+ CORSMiddleware,
+ allow_origins=["*"],
+ allow_credentials=True,
+ allow_methods=["*"],
+ allow_headers=["*"],
+ )
+]
# run this app on Uvicorn server at address http://localhost:8000/
uvicorn.run(web(), host="localhost", port=8000)
@@ -310,31 +305,29 @@ uvicorn.run(web(), host="localhost", port=8000)
web.shutdown()
```
+**And that's all, Now you can see output at [`http://localhost:8000`](http://localhost:8000) address.**
+
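Conceptually, the middleware compares each request's `Origin` header against the configured `allow_origins` list, with `"*"` matching everything. As a simplified illustration of that check (not Starlette's actual implementation):

```python
def origin_allowed(origin: str, allow_origins: list) -> bool:
    # "*" permits any origin; otherwise require an exact match
    return "*" in allow_origins or origin in allow_origins


print(origin_allowed("https://example.com", ["*"]))                   # True
print(origin_allowed("https://evil.test", ["https://example.com"]))   # False
```

With `allow_origins=["*"]` as in the example above, every browser origin is permitted, which is convenient for testing but worth tightening for production deployments.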
-### Using WebGear_RTC with real-time Video Stabilization enabled
-
-Here's an example of using WebGear_RTC API with real-time Video Stabilization enabled:
+## Rules for Altering WebGear_RTC Files and Folders
-```python
-# import libs
-import uvicorn
-from vidgear.gears.asyncio import WebGear_RTC
+WebGear_RTC gives us complete freedom of altering data files generated in the [**Auto-Generation Process**](../overview/#auto-generation-process), but you have to keep the following rules in mind:
-# various webgear_rtc performance tweaks
-options = {
- "frame_size_reduction": 25,
-}
+### Rules for Altering Data Files
+
+- [x] You're allowed to alter/change code in all existing [default downloaded files](../overview/#auto-generation-process) at your convenience without any restrictions.
+- [x] You're allowed to delete/rename all existing data files, except remember **NOT** to delete/rename the three critical data files (i.e. `index.html`, `404.html` & `500.html`) present in the `templates` folder inside the `webgear_rtc` directory at the [default location](../overview/#default-location), otherwise it will trigger the [Auto-generation process](../overview/#auto-generation-process), which will overwrite the existing files with Server ones.
+- [x] You're allowed to add your own additional `.html`, `.css`, `.js`, etc. files in the respective folders at the [**default location**](../overview/#default-location) and [custom mounted Data folders](#using-webgear_rtc-with-custom-mounting-points).
-# initialize WebGear_RTC app with a raw source and enable video stabilization(`stabilize=True`)
-web = WebGear_RTC(source="foo.mp4", stabilize=True, logging=True, **options)
+### Rules for Altering Data Folders
+
+- [x] You're allowed to add/mount any number of additional folders as shown in [this example above](#using-webgear_rtc-with-custom-mounting-points).
+- [x] You're allowed to delete/rename existing folders at the [**default location**](../overview/#default-location), except remember **NOT** to delete/rename the `templates` folder in the `webgear_rtc` directory where the critical data files (i.e. `index.html`, `404.html` & `500.html`) are located, otherwise it will trigger the [Auto-generation process](../overview/#auto-generation-process).
-# run this app on Uvicorn server at address http://localhost:8000/
-uvicorn.run(web(), host="localhost", port=8000)
+
-# close app safely
-web.shutdown()
-```
+## Bonus Examples
-
-
\ No newline at end of file
+!!! example "Checkout more advanced WebGear_RTC examples with unusual configuration [here ➶](../../../help/webgear_rtc_ex/)"
+
+
\ No newline at end of file
diff --git a/docs/gears/webgear_rtc/overview.md b/docs/gears/webgear_rtc/overview.md
index a1541b8e9..d7087f9aa 100644
--- a/docs/gears/webgear_rtc/overview.md
+++ b/docs/gears/webgear_rtc/overview.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -34,7 +34,7 @@ limitations under the License.
WebGear_RTC is implemented with the help of [**aiortc**](https://aiortc.readthedocs.io/en/latest/) library which is built on top of asynchronous I/O framework for Web Real-Time Communication (WebRTC) and Object Real-Time Communication (ORTC) and supports many features like SDP generation/parsing, Interactive Connectivity Establishment with half-trickle and mDNS support, DTLS key and certificate generation, DTLS handshake, etc.
-WebGear_RTC can handle [multiple consumers](../../webgear_rtc/advanced/#using-webgear_rtc-as-real-time-broadcaster) seamlessly and provides native support for ICE _(Interactive Connectivity Establishment)_ protocol, STUN _(Session Traversal Utilities for NAT)_, and TURN _(Traversal Using Relays around NAT)_ servers that help us to easily establish direct media connection with the remote peers for uninterrupted data flow. It also allows us to define our custom Server as a source to manipulate frames easily before sending them across the network(see this [doc](../../webgear_rtc/advanced/#using-webgear_rtc-with-a-custom-sourceopencv) example).
+WebGear_RTC can handle [multiple consumers](../../webgear_rtc/advanced/#using-webgear_rtc-as-real-time-broadcaster) seamlessly and provides native support for ICE _(Interactive Connectivity Establishment)_ protocol, STUN _(Session Traversal Utilities for NAT)_, and TURN _(Traversal Using Relays around NAT)_ servers that help us to easily establish direct media connection with the remote peers for uninterrupted data flow. It also allows us to define our custom Server as a source to transform frames easily before sending them across the network(see this [doc](../../webgear_rtc/advanced/#using-webgear_rtc-with-a-custom-sourceopencv) example).
WebGear_RTC API works in conjunction with [**Starlette**](https://www.starlette.io/) ASGI application and can also flexibly interact with Starlette's ecosystem of shared middleware, mountable applications, [Response classes](https://www.starlette.io/responses/), [Routing tables](https://www.starlette.io/routing/), [Static Files](https://www.starlette.io/staticfiles/), [Templating engine(with Jinja2)](https://www.starlette.io/templates/), etc.
diff --git a/docs/gears/webgear_rtc/params.md b/docs/gears/webgear_rtc/params.md
index 81ce84e03..ba244f17b 100644
--- a/docs/gears/webgear_rtc/params.md
+++ b/docs/gears/webgear_rtc/params.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -75,7 +75,7 @@ This parameter can be used to pass user-defined parameter to WebGear_RTC API by
WebGear_RTC(logging=True, **options)
```
-* **`frame_size_reduction`** _(int/float)_ : This attribute controls the size reduction _(in percentage)_ of the frame to be streamed on Server. The value defaults to `20`, and must be no higher than `90` _(fastest, max compression, Barely Visible frame-size)_ and no lower than `0` _(slowest, no compression, Original frame-size)_. Its recommended value is between `40-60`. Its usage is as follows:
+* **`frame_size_reduction`** _(int/float)_ : This attribute controls the size reduction _(in percentage)_ of the frame to be streamed on Server and it has the most significant effect on performance. The value defaults to `20`, and must be no higher than `90` _(fastest, max compression, Barely Visible frame-size)_ and no lower than `0` _(slowest, no compression, Original frame-size)_. Its recommended value is between `40-60`. Its usage is as follows:
```python
# frame-size will be reduced by 50%
@@ -84,6 +84,36 @@ This parameter can be used to pass user-defined parameter to WebGear_RTC API by
WebGear_RTC(logging=True, **options)
```
+* **`jpeg_compression_quality`** _(int/float)_ : This attribute controls the JPEG quantization factor. Its value varies from `10` to `100` _(the higher the value, the better the quality, but the lower the performance)_. Its default value is `90`. Its usage is as follows:
+
+ ```python
+ # activate jpeg encoding and set quality 95%
+ options = {"jpeg_compression": True, "jpeg_compression_quality": 95}
+ ```
+
+* **`jpeg_compression_fastdct`** _(bool)_ : If this attribute is `True`, WebGear API uses the fastest DCT method, which speeds up decoding by 4-5% for a minor loss in quality. Its default value is also `True`, and its usage is as follows:
+
+ ```python
+ # activate jpeg encoding and enable fast dct
+ options = {"jpeg_compression": True, "jpeg_compression_fastdct": True}
+ ```
+
+* **`jpeg_compression_fastupsample`** _(bool)_ : If this attribute is `True`, WebGear API uses the fastest color upsampling method. Its default value is `False`, and its usage is as follows:
+
+ ```python
+ # activate jpeg encoding and enable fast upsampling
+ options = {"jpeg_compression": True, "jpeg_compression_fastupsample": True}
+ ```
+
+* **`jpeg_compression_colorspace`** _(str)_ : This internal attribute is used to specify the incoming frames' colorspace with compression. Its usage is as follows:
+
+ !!! info "Supported colorspace values are `RGB`, `BGR`, `RGBX`, `BGRX`, `XBGR`, `XRGB`, `GRAY`, `RGBA`, `BGRA`, `ABGR`, `ARGB`, `CMYK`. More information can be found [here ➶](https://gitlab.com/jfolz/simplejpeg)"
+
+ ```python
+ # Specify incoming frames are `grayscale`
+ options = {"jpeg_compression_colorspace": "GRAY"}
+ ```
+
* **`enable_live_broadcast`** _(boolean)_ : WebGear_RTC by default only supports one-to-one peer connection with a single consumer/client, Hence this boolean attribute can be used to enable live broadcast to multiple peer consumers/clients at same time. Its default value is `False`. Its usage is as follows:
!!! note "`enable_infinite_frames` is enforced by default when this attribute is enabled(`True`)."
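Putting the documented attributes together, here is a minimal illustrative `options` dictionary, plus the arithmetic a percentage frame-size reduction implies (this sketch is not vidgear's internal code):

```python
# illustrative options dictionary using attributes documented above
options = {
    "frame_size_reduction": 25,      # shrink streamed frames by 25%
    "enable_live_broadcast": True,   # allow multiple peer consumers at once
}


def reduced_dims(width, height, reduction_pct):
    # scale both sides by (100 - reduction) percent, keeping aspect ratio
    scale = (100 - reduction_pct) / 100.0
    return int(width * scale), int(height * scale)


print(reduced_dims(1280, 720, options["frame_size_reduction"]))  # (960, 540)
```

This also makes the recommended `40-60` range concrete: at `50`, a 1280x720 frame is streamed at 640x360, a quarter of the original pixel count.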
diff --git a/docs/gears/webgear_rtc/usage.md b/docs/gears/webgear_rtc/usage.md
index 6b5a322ce..4ed7f3046 100644
--- a/docs/gears/webgear_rtc/usage.md
+++ b/docs/gears/webgear_rtc/usage.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -31,6 +31,30 @@ WebGear_RTC API is the part of `asyncio` package of VidGear, thereby you need to
pip install vidgear[asyncio]
```
+### Aiortc
+
+Required only if you're using the [WebGear_RTC API](../../gears/webgear_rtc/overview/). You can easily install it via pip:
+
+??? error "Microsoft Visual C++ 14.0 is required."
+
+ Installing `aiortc` on Windows requires the Microsoft Build Tools for Visual C++ libraries to be installed. You can easily fix this error by installing any **ONE** of the following:
+
+ !!! info "While the error asks for VC++ 14.0, newer versions of the Visual C++ libraries work as well."
+
+ - Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=16).
+ - Alternative link to Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019).
+ - Offline installer: [vs_buildtools.exe](https://aka.ms/vs/16/release/vs_buildtools.exe)
+
+ Afterwards, select **Workloads → Desktop development with C++**, then under Individual Components, select only:
+
+ - [x] Windows 10 SDK
+ - [x] C++ x64/x86 build tools
+
+ Finally, proceed installing `aiortc` via pip.
+
+```sh
+pip install aiortc
+```
### ASGI Server
@@ -48,7 +72,7 @@ Let's implement a Bare-Minimum usage example:
You can access and run WebGear_RTC VideoStreamer Server programmatically in your python script in just a few lines of code, as follows:
-!!! tip "For accessing WebGear_RTC on different Client Devices on the network, use `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine. More information can be found [here ➶](./../../../help/webgear_rtc_faqs/#is-it-possible-to-stream-on-a-different-device-on-the-network-with-webgear_rtc)"
+!!! tip "For accessing WebGear_RTC on different Client Devices on the network, use `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine. More information can be found [here ➶](../../../help/webgear_rtc_faqs/#is-it-possible-to-stream-on-a-different-device-on-the-network-with-webgear_rtc)"
!!! info "We are using `frame_size_reduction` attribute for frame size reduction _(in percentage)_ to be streamed with its [`options`](../params/#options) dictionary parameter to cope with performance-throttling in this example."
@@ -72,7 +96,7 @@ uvicorn.run(web(), host="localhost", port=8000)
web.shutdown()
```
-which can be accessed on any browser on the network at http://localhost:8000/.
+which can be accessed on any browser on your machine at http://localhost:8000/.
### Running from Terminal
@@ -94,8 +118,6 @@ which can also be accessed on any browser on the network at http://localhost:800
You can run `#!py3 python3 -m vidgear.gears.asyncio -h` help command to see all the advanced settings, as follows:
- !!! warning "If you're using `--options/-op` flag, then kindly wrap your dictionary value in single `''` quotes."
-
```sh
usage: python -m vidgear.gears.asyncio [-h] [-m MODE] [-s SOURCE] [-ep ENABLEPICAMERA] [-S STABILIZE]
[-cn CAMERA_NUM] [-yt stream_mode] [-b BACKEND] [-cs COLORSPACE]
diff --git a/docs/gears/writegear/compression/advanced/cciw.md b/docs/gears/writegear/compression/advanced/cciw.md
index d23d616e6..bdd9a008b 100644
--- a/docs/gears/writegear/compression/advanced/cciw.md
+++ b/docs/gears/writegear/compression/advanced/cciw.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -75,7 +75,7 @@ execute_ffmpeg_cmd(ffmpeg_command)
## Usage Examples
-!!! tip "Following usage examples is just an idea of what can be done with this powerful function. So just Tinker with various FFmpeg parameters/commands yourself and see it working. Also, if you're unable to run any terminal FFmpeg command, then [report an issue](../../../../../contribution/issue/)."
+!!! abstract "The following usage examples are just an idea of what can be done with this powerful function. So just tinker with various FFmpeg parameters/commands yourself and see them working. Also, if you're unable to run any terminal FFmpeg command, then [report an issue](../../../../../contribution/issue/)."
### Using WriteGear to separate Audio from Video
@@ -119,7 +119,7 @@ In this example, we will merge audio with video:
!!! tip "You can also directly add external audio input to video-frames in WriteGear. For more information, See [this FAQ example ➶](../../../../../help/writegear_faqs/#how-add-external-audio-file-input-to-video-frames)"
-!!! warning "Example Assumptions"
+!!! alert "Example Assumptions"
* You already have a separate video(i.e `'input-video.mp4'`) and audio(i.e `'input-audio.aac'`) files.
diff --git a/docs/gears/writegear/compression/advanced/ffmpeg_install.md b/docs/gears/writegear/compression/advanced/ffmpeg_install.md
index 94dc3b021..7b0aa9795 100644
--- a/docs/gears/writegear/compression/advanced/ffmpeg_install.md
+++ b/docs/gears/writegear/compression/advanced/ffmpeg_install.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -21,7 +21,7 @@ limitations under the License.
# FFmpeg Installation Instructions
-
+
WriteGear requires FFmpeg executables for its Compression capabilities in Compression Mode. You can follow the machine-specific instructions below for its installation:
@@ -69,7 +69,7 @@ The WriteGear API supports _Auto-Installation_ and _Manual Configuration_ method
!!! quote "This is a recommended approach on Windows Machines"
-If WriteGear API not receives any input from the user on [**`custom_ffmpeg`**](../../params/#custom_ffmpeg) parameter, then on Windows system WriteGear API **auto-generates** the required FFmpeg Static Binaries, according to your system specifications, into the temporary directory _(for e.g. `C:\Temp`)_ of your machine.
+If the WriteGear API does not receive any input from the user on the [**`custom_ffmpeg`**](../../params/#custom_ffmpeg) parameter, then on Windows systems the WriteGear API **auto-generates** the required FFmpeg Static Binaries from a dedicated [**GitHub Server**](https://github.com/abhiTronix/FFmpeg-Builds) into the temporary directory _(e.g. `C:\Temp`)_ of your machine.
!!! warning "Important Information"
@@ -86,7 +86,7 @@ If WriteGear API not receives any input from the user on [**`custom_ffmpeg`**](.
* **Download:** You can also manually download the latest Windows Static Binaries(*based on your machine arch(x86/x64)*) from the link below:
- *Windows Static Binaries:* http://ffmpeg.zeranoe.com/builds/
+ *Windows Static Binaries:* https://ffmpeg.org/download.html#build-windows
* **Assignment:** Then, you can easily assign the custom path to the folder containing FFmpeg executables(`for e.g 'C:/foo/Downloads/ffmpeg/bin'`) or path of `ffmpeg.exe` executable itself to the [**`custom_ffmpeg`**](../../params/#custom_ffmpeg) parameter in the WriteGear API.
diff --git a/docs/gears/writegear/compression/overview.md b/docs/gears/writegear/compression/overview.md
index 800d726a6..b7972d359 100644
--- a/docs/gears/writegear/compression/overview.md
+++ b/docs/gears/writegear/compression/overview.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/docs/gears/writegear/compression/params.md b/docs/gears/writegear/compression/params.md
index b0f3b91d4..76e6d3ed1 100644
--- a/docs/gears/writegear/compression/params.md
+++ b/docs/gears/writegear/compression/params.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -118,6 +118,8 @@ This parameter allows us to exploit almost all FFmpeg supported parameters effor
!!! warning "While providing an additional av-source with the `-i` FFmpeg parameter in `output_params`, make sure it doesn't interfere with WriteGear's frame pipeline, otherwise it will break things!"
+ !!! error "All FFmpeg parameters are case-sensitive. Remember to double-check every parameter if any error occurs."
+
!!! tip "Kindly check [H.264 docs ➶](https://trac.ffmpeg.org/wiki/Encode/H.264) and other [FFmpeg Docs ➶](https://ffmpeg.org/documentation.html) for more information on these parameters"
```python
@@ -173,6 +175,8 @@ This parameter allows us to exploit almost all FFmpeg supported parameters effor
All the encoders that are compiled with FFmpeg in use, are supported by WriteGear API. You can easily check the compiled encoders by running following command in your terminal:
+!!! info "Similarly, supported demuxers and filters depend upon the compiled FFmpeg in use."
+
```sh
ffmpeg -encoders # use `ffmpeg.exe -encoders` on windows
```
diff --git a/docs/gears/writegear/compression/usage.md b/docs/gears/writegear/compression/usage.md
index 7527f9bea..bc3b0c0ea 100644
--- a/docs/gears/writegear/compression/usage.md
+++ b/docs/gears/writegear/compression/usage.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -92,7 +92,9 @@ writer.close()
## Using Compression Mode in RGB Mode
-In Compression Mode, WriteGear API contains [`rgb_mode`](../../../../bonus/reference/writegear/#vidgear.gears.writegear.WriteGear.write) boolean parameter for RGB Mode, which when enabled _(i.e. `rgb_mode=True`)_, specifies that incoming frames are of RGB format _(instead of default BGR format)_. This mode makes WriteGear directly compatible with libraries that only supports RGB format. The complete usage example is as follows:
+In Compression Mode, WriteGear API contains the [`rgb_mode`](../../../../bonus/reference/writegear/#vidgear.gears.writegear.WriteGear.write) boolean parameter for RGB Mode, which when enabled _(i.e. `rgb_mode=True`)_, specifies that incoming frames are of RGB format _(instead of the default BGR format)_. This mode makes WriteGear directly compatible with libraries that only support RGB format.
+
+The complete usage example is as follows:
```python
# import required libraries
@@ -215,15 +217,15 @@ writer.close()
## Using Compression Mode for Streaming URLs
-In Compression Mode, WriteGear can make complex job look easy with FFmpeg. It also allows any URLs _(as output)_ for network streaming with its [`output_filename`](../params/#output_filename) parameter.
+In Compression Mode, WriteGear also allows URL strings _(as output)_ for network streaming with its [`output_filename`](../params/#output_filename) parameter.
-_In this example, let's stream Live Camera Feed directly to Twitch!_
+In this example, we will stream live camera feed directly to Twitch:
-!!! info "YouTube-Live Streaming example code also available in [WriteGear FAQs ➶](../../../../help/writegear_faqs/#is-youtube-live-streaming-possibe-with-writegear)"
+!!! info "YouTube-Live Streaming example code also available in [WriteGear FAQs ➶](../../../../help/writegear_ex/#using-writegears-compression-mode-for-youtube-live-streaming)"
!!! warning "This example assume you already have a [**Twitch Account**](https://www.twitch.tv/) for publishing video."
-!!! danger "Make sure to change [_Twitch Stream Key_](https://www.youtube.com/watch?v=xwOtOfPMIIk) with yours in following code before running!"
+!!! alert "Make sure to replace the [_Twitch Stream Key_](https://www.youtube.com/watch?v=xwOtOfPMIIk) with yours in the following code before running!"
```python
# import required libraries
@@ -292,16 +294,16 @@ writer.close()
## Using Compression Mode with Hardware encoders
-By default, WriteGear API uses *libx264 encoder* for encoding its output files in Compression Mode. But you can easily change encoder to your suitable [supported encoder](../params/#supported-encoders) by passing `-vcodec` FFmpeg parameter as an attribute in its [*output_param*](../params/#output_params) dictionary parameter. In addition to this, you can also specify the additional properties/features of your system's GPU easily.
+By default, WriteGear API uses the `libx264` encoder for encoding output files in Compression Mode. But you can easily change the encoder to any suitable [supported encoder](../params/#supported-encoders) by passing the `-vcodec` FFmpeg parameter as an attribute in its [*output_params*](../params/#output_params) dictionary parameter. In addition to this, you can also easily specify the additional properties/features of your system's GPU.
??? warning "User Discretion Advised"
- This example is just conveying the idea on how to use FFmpeg's hardware encoders with WriteGear API in Compression mode, which **MAY/MAY NOT** suit your system. Kindly use suitable parameters based your supported system and FFmpeg configurations only.
+ This example is just conveying the idea of how to use FFmpeg's hardware encoders with WriteGear API in Compression mode, which **MAY/MAY NOT** suit your system. Kindly use suitable parameters based on your system's hardware settings only.
-In this example, we will be using `h264_vaapi` as our hardware encoder and also optionally be specifying our device hardware's location (i.e. `'-vaapi_device':'/dev/dri/renderD128'`) and other features such as `'-vf':'format=nv12,hwupload'` like properties by formatting them as `option` dictionary parameter's attributes, as follows:
+In this example, we will be using `h264_vaapi` as our hardware encoder and optionally specifying our device hardware's location _(i.e. `'-vaapi_device':'/dev/dri/renderD128'`)_ and other features such as `'-vf':'format=nv12,hwupload'`:
-!!! danger "Check VAAPI support"
+??? alert "Remember to check VAAPI support"
To use the `h264_vaapi` encoder, remember to check that it's available and that your FFmpeg is compiled with VAAPI support. You can easily do this by executing the following one-liner command in your terminal, and observing if the output contains something similar to the following:
@@ -427,26 +429,156 @@ writer.close()
## Using Compression Mode with Live Audio Input
-In Compression Mode, WriteGear API allows us to exploit almost all FFmpeg supported parameters that you can think of, in its Compression Mode. Hence, processing, encoding, and combining audio with video is pretty much straightforward.
+In Compression Mode, WriteGear API allows us to exploit almost all FFmpeg-supported parameters that you can think of. Hence, combining audio with live video frames is pretty easy.
-!!! warning "Example Assumptions"
+In this example code, we will be merging audio from an audio device _(e.g. a webcam's built-in mic)_ with live frames incoming from the video source _(e.g. an external webcam)_, and saving the output as a compressed video file, all in real time:
- * You're running are Linux machine.
- * You already have appropriate audio & video drivers and softwares installed on your machine.
+!!! alert "Example Assumptions"
-!!! danger "Locate your Sound Card"
+ * You're running a Linux machine.
+ * You already have the appropriate audio drivers and software installed on your machine.
- Remember to locate your Sound Card before running this example:
- * Note down the Sound Card value using `arecord -L` command on the your Linux terminal.
- * It may be similar to this `plughw:CARD=CAMERA,DEV=0`
+??? tip "Identifying and Specifying sound card on different OS platforms"
+
+ === "On Windows"
-??? tips
+ Windows OS users can use [dshow](https://trac.ffmpeg.org/wiki/DirectShow) (DirectShow) to list audio input devices, which is the preferred option on Windows. You can refer to the following steps to identify and specify your sound card:
- The useful audio input options for ALSA input are `-ar` (_audio sample rate_) and `-ac` (_audio channels_). Specifying audio sampling rate/frequency will force the audio card to record the audio at that specified rate. Usually the default value is `"44100"` (Hz) but `"48000"`(Hz) works, so chose wisely. Specifying audio channels will force the audio card to record the audio as mono, stereo or even 2.1, and 5.1(_if supported by your audio card_). Usually the default value is `"1"` (mono) for Mic input and `"2"` (stereo) for Line-In input. Kindly go through [FFmpeg docs](https://ffmpeg.org/ffmpeg.html) for more of such options.
+ - [x] **[OPTIONAL] Enable sound card (if disabled):** First enable your Stereo Mix by opening the "Sound" window, selecting the "Recording" tab, then right-clicking on the window and selecting "Show Disabled Devices" to toggle the Stereo Mix device's visibility. **Follow this [post ➶](https://forums.tomshardware.com/threads/no-sound-through-stereo-mix-realtek-hd-audio.1716182/) for more details.**
+ - [x] **Identify Sound Card:** You can then locate your sound card using `dshow` as follows:
+
+ ```sh
+ c:\> ffmpeg -list_devices true -f dshow -i dummy
+ ffmpeg version N-45279-g6b86dd5... --enable-runtime-cpudetect
+ libavutil 51. 74.100 / 51. 74.100
+ libavcodec 54. 65.100 / 54. 65.100
+ libavformat 54. 31.100 / 54. 31.100
+ libavdevice 54. 3.100 / 54. 3.100
+ libavfilter 3. 19.102 / 3. 19.102
+ libswscale 2. 1.101 / 2. 1.101
+ libswresample 0. 16.100 / 0. 16.100
+ [dshow @ 03ACF580] DirectShow video devices
+ [dshow @ 03ACF580] "Integrated Camera"
+ [dshow @ 03ACF580] "USB2.0 Camera"
+ [dshow @ 03ACF580] DirectShow audio devices
+ [dshow @ 03ACF580] "Microphone (Realtek High Definition Audio)"
+ [dshow @ 03ACF580] "Microphone (USB2.0 Camera)"
+ dummy: Immediate exit requested
+ ```
+
+
+ - [x] **Specify Sound Card:** Then, you can specify your located sound card in WriteGear as follows:
+
+ ```python
+ # assign appropriate input audio-source
+ output_params = {
+ "-i":"audio=Microphone (USB2.0 Camera)",
+ "-thread_queue_size": "512",
+ "-f": "dshow",
+ "-ac": "2",
+ "-acodec": "aac",
+ "-ar": "44100",
+ }
+ ```
+
+ !!! fail "If audio still doesn't work, then [check out this troubleshooting guide ➶](https://www.maketecheasier.com/fix-microphone-not-working-windows10/) or reach out to us on the [Gitter ➶](https://gitter.im/vidgear/community) community channel."
+
+
+ === "On Linux"
+
+ Linux OS users can use [alsa](https://ffmpeg.org/ffmpeg-all.html#alsa) to list input devices for capturing live audio input, such as from a webcam. You can refer to the following steps to identify and specify your sound card:
+
+ - [x] **Identify Sound Card:** To get the list of all installed cards on your machine, you can type `arecord -l` or `arecord -L` _(longer output)_.
+
+ ```sh
+ arecord -l
+
+ **** List of CAPTURE Hardware Devices ****
+ card 0: ICH5 [Intel ICH5], device 0: Intel ICH [Intel ICH5]
+ Subdevices: 1/1
+ Subdevice #0: subdevice #0
+ card 0: ICH5 [Intel ICH5], device 1: Intel ICH - MIC ADC [Intel ICH5 - MIC ADC]
+ Subdevices: 1/1
+ Subdevice #0: subdevice #0
+ card 0: ICH5 [Intel ICH5], device 2: Intel ICH - MIC2 ADC [Intel ICH5 - MIC2 ADC]
+ Subdevices: 1/1
+ Subdevice #0: subdevice #0
+ card 0: ICH5 [Intel ICH5], device 3: Intel ICH - ADC2 [Intel ICH5 - ADC2]
+ Subdevices: 1/1
+ Subdevice #0: subdevice #0
+ card 1: U0x46d0x809 [USB Device 0x46d:0x809], device 0: USB Audio [USB Audio]
+ Subdevices: 1/1
+ Subdevice #0: subdevice #0
+ ```
+
+
+ - [x] **Specify Sound Card:** Then, you can specify your located soundcard in WriteGear as follows:
+
+ !!! info "The easiest thing to do is to reference the sound card directly, namely "card 0" (Intel ICH5) and "card 1" (the microphone on the USB webcam), as `hw:0` or `hw:1`"
+
+ ```python
+ # assign appropriate input audio-source
+ output_params = {
+ "-i": "hw:1",
+ "-thread_queue_size": "512",
+ "-f": "alsa",
+ "-ac": "2",
+ "-acodec": "aac",
+ "-ar": "44100",
+ }
+ ```
+
+ !!! fail "If audio still doesn't work, then reach out to us on the [Gitter ➶](https://gitter.im/vidgear/community) community channel."
+
+
+ === "On MacOS"
+
+ macOS users can use [avfoundation](https://ffmpeg.org/ffmpeg-devices.html#avfoundation) to list input devices for grabbing audio from integrated iSight cameras, as well as cameras connected via USB or FireWire. You can refer to the following steps to identify and specify your sound card on macOS/OSX machines:
+
+
+ - [x] **Identify Sound Card:** You can locate your sound card using `avfoundation` as follows:
+
+ ```sh
+ ffmpeg -f avfoundation -list_devices true -i ""
+ ffmpeg version N-45279-g6b86dd5... --enable-runtime-cpudetect
+ libavutil 51. 74.100 / 51. 74.100
+ libavcodec 54. 65.100 / 54. 65.100
+ libavformat 54. 31.100 / 54. 31.100
+ libavdevice 54. 3.100 / 54. 3.100
+ libavfilter 3. 19.102 / 3. 19.102
+ libswscale 2. 1.101 / 2. 1.101
+ libswresample 0. 16.100 / 0. 16.100
+ [AVFoundation input device @ 0x7f8e2540ef20] AVFoundation video devices:
+ [AVFoundation input device @ 0x7f8e2540ef20] [0] FaceTime HD camera (built-in)
+ [AVFoundation input device @ 0x7f8e2540ef20] [1] Capture screen 0
+ [AVFoundation input device @ 0x7f8e2540ef20] AVFoundation audio devices:
+ [AVFoundation input device @ 0x7f8e2540ef20] [0] Blackmagic Audio
+ [AVFoundation input device @ 0x7f8e2540ef20] [1] Built-in Microphone
+ ```
+
+
+ - [x] **Specify Sound Card:** Then, you can specify your located sound card in WriteGear as follows:
+
+ ```python
+ # assign appropriate input audio-source
+ output_params = {
+ "-audio_device_index": "0",
+ "-thread_queue_size": "512",
+ "-f": "avfoundation",
+ "-ac": "2",
+ "-acodec": "aac",
+ "-ar": "44100",
+ }
+ ```
+
+ !!! fail "If audio still doesn't work, then reach out to us on the [Gitter ➶](https://gitter.im/vidgear/community) community channel."
+
+
+!!! danger "Make sure this `-i` audio-source is compatible with the provided video-source, otherwise you may encounter multiple errors, or no output at all."
-In this example code, we will merge the audio from a Audio Source _(for e.g. Webcam inbuilt mic)_ to the frames of a Video Source _(for e.g external webcam)_, and save this data as a compressed video file, all in real time:
+!!! warning "You **MUST** use the [`-input_framerate`](../../params/#a-exclusive-parameters) attribute to set the exact value of the input framerate when using external audio in Real-time Frames mode, otherwise audio delay will occur in the output streams."
```python
# import required libraries
diff --git a/docs/gears/writegear/introduction.md b/docs/gears/writegear/introduction.md
index dbe1d5a13..0149e6376 100644
--- a/docs/gears/writegear/introduction.md
+++ b/docs/gears/writegear/introduction.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -29,7 +29,9 @@ limitations under the License.
> *WriteGear handles various powerful Video-Writer Tools that provide us the freedom to do almost anything imaginable with multimedia data.*
-WriteGear API provides a complete, flexible, and robust wrapper around [**FFmpeg**](https://ffmpeg.org/), a leading multimedia framework. WriteGear can process real-time frames into a lossless compressed video-file with any suitable specification _(such as`bitrate, codec, framerate, resolution, subtitles, etc.`)_. It is powerful enough to perform complex tasks such as [Live-Streaming](../compression/usage/#using-compression-mode-for-streaming-urls) _(such as for Twitch)_ and [Multiplexing Video-Audio](../compression/usage/#using-compression-mode-with-live-audio-input) with real-time frames in way fewer lines of code.
+WriteGear API provides a complete, flexible, and robust wrapper around [**FFmpeg**](https://ffmpeg.org/), a leading multimedia framework. WriteGear can process real-time frames into a lossless compressed video-file with any suitable specifications _(such as `bitrate, codec, framerate, resolution, subtitles, etc.`)_.
+
+WriteGear also supports streaming with traditional protocols such as RTMP and RTSP/RTP. It is powerful enough to perform complex tasks such as [Live-Streaming](../compression/usage/#using-compression-mode-for-streaming-urls) _(such as for Twitch, YouTube, etc.)_ and [Multiplexing Video-Audio](../compression/usage/#using-compression-mode-with-live-audio-input) with real-time frames in just a few lines of code.
Best of all, WriteGear grants users the complete freedom to play with any FFmpeg parameter with its exclusive ==Custom Commands function== _(see this [doc](../compression/advanced/cciw/))_ without relying on any third-party API.
@@ -43,7 +45,7 @@ WriteGear primarily operates in following modes:
* [**Compression Mode**](../compression/overview/): In this mode, WriteGear utilizes powerful **FFmpeg** inbuilt encoders to encode lossless multimedia files. This mode provides us the ability to exploit almost any parameter available within FFmpeg, effortlessly and flexibly, and while doing that it robustly handles all errors/warnings quietly.
-* [**Non-Compression Mode**](../non_compression/overview/): In this mode, WriteGear utilizes basic **OpenCV's inbuilt VideoWriter API** tools. This mode also supports all parameters manipulation available within VideoWriter API, but it lacks the ability to manipulate encoding parameters and other important features like video compression, audio encoding, etc.
+* [**Non-Compression Mode**](../non_compression/overview/): In this mode, WriteGear utilizes OpenCV's basic inbuilt **VideoWriter API** tools. This mode also supports all parameter transformations available within OpenCV's VideoWriter API, but it lacks the ability to manipulate encoding parameters and other important features like video compression, audio encoding, etc.
@@ -73,4 +75,10 @@ from vidgear.gears import WriteGear
See here 🚀
-
\ No newline at end of file
+
+
+## Bonus Examples
+
+!!! example "Check out more advanced WriteGear examples with unusual configurations [here ➶](../../../help/writegear_ex/)"
+
+
\ No newline at end of file
diff --git a/docs/gears/writegear/non_compression/overview.md b/docs/gears/writegear/non_compression/overview.md
index 7048da12f..553631d3e 100644
--- a/docs/gears/writegear/non_compression/overview.md
+++ b/docs/gears/writegear/non_compression/overview.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/docs/gears/writegear/non_compression/params.md b/docs/gears/writegear/non_compression/params.md
index f5c21a877..c8f38a17e 100644
--- a/docs/gears/writegear/non_compression/params.md
+++ b/docs/gears/writegear/non_compression/params.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/docs/gears/writegear/non_compression/usage.md b/docs/gears/writegear/non_compression/usage.md
index f1f1e54d3..f26b7fdcf 100644
--- a/docs/gears/writegear/non_compression/usage.md
+++ b/docs/gears/writegear/non_compression/usage.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/docs/help.md b/docs/help.md
index 08426b2bd..c43535723 100644
--- a/docs/help.md
+++ b/docs/help.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -73,9 +73,11 @@ Let others know how you are using VidGear and why you like it!
-## Help Author
+## Helping Author
-Donations help keep VidGear's Development alive and motivate me _(author)_. Giving a little means a lot, even the smallest contribution can make a huge difference. You can financially support through ko-fi 🤝:
+> Donations help keep VidGear's development alive and motivate me _(as author)_. :heart:
+
+It is (like all open source software) a labour of love and something I am doing in my own free time. If you would like to say thanks, please feel free to make a donation through ko-fi:
diff --git a/docs/help/camgear_ex.md b/docs/help/camgear_ex.md
new file mode 100644
index 000000000..5a0522d3a
--- /dev/null
+++ b/docs/help/camgear_ex.md
@@ -0,0 +1,243 @@
+
+
+# CamGear Examples
+
+
+
+## Synchronizing Two Sources in CamGear
+
+In this example, both streams and their corresponding frames will be processed synchronously, i.e. with no delay:
+
+!!! danger "Using the same source with more than one instance of CamGear can lead to [Global Interpreter Lock (GIL)](https://wiki.python.org/moin/GlobalInterpreterLock) contention that degrades performance even when it is not a bottleneck."
+
+```python
+# import required libraries
+from vidgear.gears import CamGear
+import cv2
+import time
+
+# define and start the stream on the first source (e.g. the device at index 0)
+stream1 = CamGear(source=0, logging=True).start()
+
+# define and start the stream on the second source (e.g. the device at index 1)
+stream2 = CamGear(source=1, logging=True).start()
+
+# infinite loop
+while True:
+
+ # read frames from stream1
+ frameA = stream1.read()
+
+ # read frames from stream2
+ frameB = stream2.read()
+
+ # check if any of two frame is None
+ if frameA is None or frameB is None:
+ #if True break the infinite loop
+ break
+
+ # do something with both frameA and frameB here
+ # show output windows of stream1 and stream2 separately
+ cv2.imshow("Output Frame1", frameA)
+ cv2.imshow("Output Frame2", frameB)
+
+ key = cv2.waitKey(1) & 0xFF
+ # check for 'q' key-press
+ if key == ord("q"):
+ #if 'q' key-pressed break out
+ break
+
+ if key == ord("w"):
+ #if 'w' key-pressed save both frameA and frameB at same time
+ cv2.imwrite("Image-1.jpg", frameA)
+ cv2.imwrite("Image-2.jpg", frameB)
+ #break #uncomment this line to break out after taking images
+
+# close output windows
+cv2.destroyAllWindows()
+
+# safely close both video streams
+stream1.stop()
+stream2.stop()
+```
+
+
+
+## Using variable Youtube-DL parameters in CamGear
+
+CamGear provides exclusive attributes `STREAM_RESOLUTION` _(for specifying stream resolution)_ & `STREAM_PARAMS` _(for specifying the underlying API's (e.g. `youtube-dl`) parameters)_ with its [`options`](../../gears/camgear/params/#options) dictionary parameter.
+
+The complete usage example is as follows:
+
+!!! tip "More information on `STREAM_RESOLUTION` & `STREAM_PARAMS` attributes can be found [here ➶](../../gears/camgear/advanced/source_params/#exclusive-camgear-parameters)"
+
+```python
+# import required libraries
+from vidgear.gears import CamGear
+import cv2
+
+# specify attributes
+options = {"STREAM_RESOLUTION": "720p", "STREAM_PARAMS": {"nocheckcertificate": True}}
+
+# Add YouTube Video URL as input source (for e.g https://youtu.be/bvetuLwJIkA)
+# and enable Stream Mode (`stream_mode = True`)
+stream = CamGear(
+ source="https://youtu.be/bvetuLwJIkA", stream_mode=True, logging=True, **options
+).start()
+
+# loop over
+while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # Show output window
+ cv2.imshow("Output", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+# close output window
+cv2.destroyAllWindows()
+
+# safely close video stream
+stream.stop()
+```
+
+
+
+
+
+## Using CamGear for capturing RSTP/RTMP URLs
+
+You can open any network stream _(such as RTSP/RTMP)_ just by providing its URL directly to CamGear's [`source`](../../gears/camgear/params/#source) parameter.
+
+Here's a high-level wrapper code around CamGear API to enable auto-reconnection during capturing:
+
+!!! new "New in v0.2.2"
+ This example was added in `v0.2.2`.
+
+??? tip "Enforcing UDP stream"
+
+ You can easily enforce UDP for RTSP streams in place of the default TCP, by putting the following lines of code at the top of your existing code:
+
+ ```python
+ # import required libraries
+ import os
+
+ # enforce UDP
+ os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;udp"
+ ```
+
+ Finally, use [`backend`](../../gears/camgear/params/#backend) parameter value as `backend=cv2.CAP_FFMPEG` in CamGear.
+
+
+```python
+from vidgear.gears import CamGear
+import cv2
+import datetime
+import time
+
+
+class Reconnecting_CamGear:
+ def __init__(self, cam_address, reset_attempts=50, reset_delay=5):
+ self.cam_address = cam_address
+ self.reset_attempts = reset_attempts
+ self.reset_delay = reset_delay
+ self.source = CamGear(source=self.cam_address).start()
+ self.running = True
+ # holds the last valid frame (also returned while re-connecting)
+ self.frame = None
+
+ def read(self):
+ if self.source is None:
+ return None
+ if self.running and self.reset_attempts > 0:
+ frame = self.source.read()
+ if frame is None:
+ self.source.stop()
+ self.reset_attempts -= 1
+ print(
+ "Re-connection Attempt-{} occured at time:{}".format(
+ str(self.reset_attempts),
+ datetime.datetime.now().strftime("%m-%d-%Y %I:%M:%S%p"),
+ )
+ )
+ time.sleep(self.reset_delay)
+ self.source = CamGear(source=self.cam_address).start()
+ # return previous frame
+ return self.frame
+ else:
+ self.frame = frame
+ return frame
+ else:
+ return None
+
+ def stop(self):
+ self.running = False
+ self.reset_attempts = 0
+ self.frame = None
+ if self.source is not None:
+ self.source.stop()
+
+
+if __name__ == "__main__":
+ # open any valid video stream
+ stream = Reconnecting_CamGear(
+ cam_address="rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov",
+ reset_attempts=20,
+ reset_delay=5,
+ )
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if None-type
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # Show output window
+ cv2.imshow("Output", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+```
+
+
\ No newline at end of file
diff --git a/docs/help/camgear_faqs.md b/docs/help/camgear_faqs.md
index cd62bd3a5..4aea0d91b 100644
--- a/docs/help/camgear_faqs.md
+++ b/docs/help/camgear_faqs.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -49,67 +49,30 @@ limitations under the License.
## How to compile OpenCV with GStreamer support?
-**Answer:** For compiling OpenCV with GSstreamer(`>=v1.0.0`) support, checkout this [tutorial](https://web.archive.org/web/20201225140454/https://medium.com/@galaktyk01/how-to-build-opencv-with-gstreamer-b11668fa09c) for Linux and Windows OSes, and **for MacOS do as follows:**
+**Answer:** For compiling OpenCV with GStreamer (`>=v1.0.0`) support:
-**Step-1:** First Brew install GStreamer:
+=== "On Linux OSes"
-```sh
-brew update
-brew install gstreamer gst-plugins-base gst-plugins-good gst-plugins-bad gst-plugins-ugly gst-libav
-```
+ - [x] **Follow [this tutorial ➶](https://medium.com/@galaktyk01/how-to-build-opencv-with-gstreamer-b11668fa09c)**
-**Step-2:** Then, Follow [this tutorial ➶](https://www.learnopencv.com/install-opencv-4-on-macos/)
+=== "On Windows OSes"
+ - [x] **Follow [this tutorial ➶](https://medium.com/@galaktyk01/how-to-build-opencv-with-gstreamer-b11668fa09c)**
-
-
-
-## How to change quality and parameters of YouTube Streams with CamGear?
-
-CamGear provides exclusive attributes `STREAM_RESOLUTION` _(for specifying stream resolution)_ & `STREAM_PARAMS` _(for specifying underlying API(e.g. `youtube-dl`) parameters)_ with its [`options`](../../gears/camgear/params/#options) dictionary parameter. The complete usage example is as follows:
-
-!!! tip "More information on `STREAM_RESOLUTION` & `STREAM_PARAMS` attributes can be found [here ➶](../../gears/camgear/advanced/source_params/#exclusive-camgear-parameters)"
-
-```python
-# import required libraries
-from vidgear.gears import CamGear
-import cv2
-
-# specify attributes
-options = {"STREAM_RESOLUTION": "720p", "STREAM_PARAMS": {"nocheckcertificate": True}}
-
-# Add YouTube Video URL as input source (for e.g https://youtu.be/bvetuLwJIkA)
-# and enable Stream Mode (`stream_mode = True`)
-stream = CamGear(
- source="https://youtu.be/bvetuLwJIkA", stream_mode=True, logging=True, **options
-).start()
-
-# loop over
-while True:
-
- # read frames from stream
- frame = stream.read()
-
- # check for frame if Nonetype
- if frame is None:
- break
+=== "On MAC OSes"
+
+ - [x] **Follow [this tutorial ➶](https://www.learnopencv.com/install-opencv-4-on-macos/) but make sure to brew install GStreamer as follows:**
- # {do something with the frame here}
+ ```sh
+ brew install gstreamer gst-plugins-base gst-plugins-good gst-plugins-bad gst-plugins-ugly gst-libav
+ ```
- # Show output window
- cv2.imshow("Output", frame)
+
- # check for 'q' key if pressed
- key = cv2.waitKey(1) & 0xFF
- if key == ord("q"):
- break
-# close output window
-cv2.destroyAllWindows()
+## How to change quality and parameters of YouTube Streams with CamGear?
-# safely close video stream
-stream.stop()
-```
+**Answer:** CamGear provides exclusive attributes `STREAM_RESOLUTION` _(for specifying stream resolution)_ & `STREAM_PARAMS` _(for specifying the underlying API's (e.g. `youtube-dl`) parameters)_ with its [`options`](../../gears/camgear/params/#options) dictionary parameter. See [this bonus example ➶](../camgear_ex/#using-variable-youtube-dl-parameters-in-camgear).
@@ -117,57 +80,7 @@ stream.stop()
## How to open RSTP network streams with CamGear?
-You can open any local network stream _(such as RTSP)_ just by providing its URL directly to CamGear's [`source`](../../gears/camgear/params/#source) parameter. The complete usage example is as follows:
-
-??? tip "Enforcing UDP stream"
-
- You can easily enforce UDP for RSTP streams inplace of default TCP, by putting following lines of code on the top of your existing code:
-
- ```python
- # import required libraries
- import os
-
- # enforce UDP
- os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;udp"
- ```
-
- Finally, use [`backend`](../../gears/camgear/params/#backend) parameter value as `backend="CAP_FFMPEG"` in CamGear.
-
-
-```python
-# import required libraries
-from vidgear.gears import CamGear
-import cv2
-
-# open valid network video-stream
-stream = CamGear(source="rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov").start()
-
-# loop over
-while True:
-
- # read frames from stream
- frame = stream.read()
-
- # check for frame if Nonetype
- if frame is None:
- break
-
- # {do something with the frame here}
-
- # Show output window
- cv2.imshow("Output", frame)
-
- # check for 'q' key if pressed
- key = cv2.waitKey(1) & 0xFF
- if key == ord("q"):
- break
-
-# close output window
-cv2.destroyAllWindows()
-
-# safely close video stream
-stream.stop()
-```
+**Answer:** You can open any local network stream _(such as RTSP)_ just by providing its URL directly to CamGear's [`source`](../../gears/camgear/params/#source) parameter. See [this bonus example ➶](../camgear_ex/#using-camgear-for-capturing-rstprtmp-urls).
@@ -185,7 +98,7 @@ stream.stop()
## How to synchronize between two cameras?
-**Answer:** See [this issue comment ➶](https://github.com/abhiTronix/vidgear/issues/1#issuecomment-473943037).
+**Answer:** See [this bonus example ➶](../camgear_ex/#synchronizing-two-sources-in-camgear).
diff --git a/docs/help/general_faqs.md b/docs/help/general_faqs.md
index 6f6d475b5..dbf4dd344 100644
--- a/docs/help/general_faqs.md
+++ b/docs/help/general_faqs.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -24,23 +24,25 @@ limitations under the License.
-## "I'm new to Python Programming or its usage in Computer Vision", How to use vidgear in my projects?
+## "I'm new to Python Programming or its usage in OpenCV Library", How to use vidgear in my projects?
-**Answer:** It's recommended to first go through the following dedicated tutorials/websites thoroughly, and learn how OpenCV-Python works _(with examples)_:
+**Answer:** Before using vidgear, it's recommended to first go through the following dedicated blog sites and learn how OpenCV-Python syntax works _(with examples)_:
-- [**PyImageSearch.com** ➶](https://www.pyimagesearch.com/) is the best resource for learning OpenCV and its Python implementation. Adrian Rosebrock provides many practical OpenCV techniques with tutorials, code examples, blogs, and books at PyImageSearch.com. I also learned a lot about computer vision methods and various useful techniques. Highly recommended!
+- [**PyImageSearch.com** ➶](https://www.pyimagesearch.com/) is the best resource for learning OpenCV and its Python implementation. Adrian Rosebrock provides many practical OpenCV techniques with tutorials, code examples, blogs, and books at PyImageSearch.com. Highly recommended!
- [**learnopencv.com** ➶](https://www.learnopencv.com) Maintained by OpenCV CEO Satya Mallick. This blog is for programmers, hackers, engineers, scientists, students, and self-starters interested in Computer Vision and Machine Learning.
-- There's also the official [**OpenCV Tutorials** ➶](https://docs.opencv.org/master/d6/d00/tutorial_py_root.html), provided by the OpenCV folks themselves.
+- There's also the official [**OpenCV Tutorials** ➶](https://docs.opencv.org/master/d6/d00/tutorial_py_root.html) curated by the OpenCV developers.
-Finally, once done, see [Switching from OpenCV ➶](../../switch_from_cv/) and go through our [Gears ➶](../../gears/#gears-what-are-these) to learn how VidGear works. If you run into any trouble or have any questions, then see [getting help ➶](../get_help)
+Once done, visit [Switching from OpenCV ➶](../../switch_from_cv/) to easily replace OpenCV APIs with suitable [Gears ➶](../../gears/#gears-what-are-these) in your project. All the best! :smiley:
+
+!!! tip "If you run into any trouble or have any questions, then refer to our [**Help**](../get_help) section."
## "VidGear is using Multi-threading, but Python is notorious for its poor performance in multithreading?"
-**Answer:** See [Threaded-Queue-Mode ➶](../../bonus/TQM/)
+**Answer:** Refer to vidgear's [Threaded-Queue-Mode ➶](../../bonus/TQM/)
diff --git a/docs/help/get_help.md b/docs/help/get_help.md
index 4390d148d..b01b34706 100644
--- a/docs/help/get_help.md
+++ b/docs/help/get_help.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -37,7 +37,7 @@ There are several ways to get help with VidGear:
> Got a question related to VidGear Working?
-Checkout our Frequently Asked Questions, a curated list of all the questions with adequate answer that we commonly receive, for quickly troubleshooting your problems:
+Check out the Frequently Asked Questions, a curated list of the questions we commonly receive along with adequate answers, for quickly troubleshooting your problems:
- [General FAQs ➶](general_faqs.md)
- [CamGear FAQs ➶](camgear_faqs.md)
@@ -56,6 +56,26 @@ Checkout our Frequently Asked Questions, a curated list of all the questions wit
+## Bonus Examples
+
+> How do we do this with that API?
+
+Check out the Bonus Examples, a curated list of advanced examples with unusual configurations that aren't available in VidGear APIs' usage examples:
+
+- [CamGear Examples ➶](camgear_ex.md)
+- [PiGear Examples ➶](pigear_ex.md)
+- [ScreenGear Examples ➶](screengear_ex.md)
+- [StreamGear Examples ➶](streamgear_ex.md)
+- [WriteGear Examples ➶](writegear_ex.md)
+- [NetGear Examples ➶](netgear_ex.md)
+- [WebGear Examples ➶](webgear_ex.md)
+- [WebGear_RTC Examples ➶](webgear_rtc_ex.md)
+- [VideoGear Examples ➶](videogear_ex.md)
+- [Stabilizer Class Examples ➶](stabilizer_ex.md)
+- [NetGear_Async Examples ➶](netgear_async_ex.md)
+
+
+
## Join our Gitter Community channel
> Have you come up with some new idea 💡 or are you looking for the fastest way to troubleshoot your problems?
@@ -73,7 +93,7 @@ There you can ask quick questions, swiftly troubleshoot your problems, help othe
- [x] [Got a question or problem?](../../contribution/#got-a-question-or-problem)
- [x] [Found a typo?](../../contribution/#found-a-typo)
- [x] [Found a bug?](../../contribution/#found-a-bug)
-- [x] [Missing a feature/improvement?](../../contribution/#request-for-a-featureimprovementt)
+- [x] [Missing a feature/improvement?](../../contribution/#request-for-a-featureimprovement)
diff --git a/docs/help/netgear_async_ex.md b/docs/help/netgear_async_ex.md
new file mode 100644
index 000000000..a46c17c7e
--- /dev/null
+++ b/docs/help/netgear_async_ex.md
@@ -0,0 +1,169 @@
+
+
+# NetGear_Async Examples
+
+
+
+## Using NetGear_Async with WebGear
+
+The complete usage example is as follows:
+
+!!! new "New in v0.2.2"
+ This example was added in `v0.2.2`.
+
+### Client + WebGear Server
+
+Open a terminal on the Client system _(where you want to display the input frames received from the Server, and set up the WebGear server)_ and execute the following Python code:
+
+!!! danger "After running this code, make sure to open the browser immediately, otherwise NetGear_Async will soon exit with a `TimeoutError`. You can also try setting the [`timeout`](../../gears/netgear_async/params/#timeout) parameter to a higher value to extend this timeout."
+
+!!! warning "Make sure you use different `port` value for NetGear_Async and WebGear API."
+
+!!! alert "High CPU utilization may occur on Client's end. User discretion is advised."
+
+!!! note "Note down the IP-address of this system _(required at Server's end)_ by executing the `hostname -I` command, and replace it in the following code."
+
+```python
+# import libraries
+from vidgear.gears.asyncio import NetGear_Async
+from vidgear.gears.asyncio import WebGear
+from vidgear.gears.asyncio.helper import reducer
+import uvicorn, asyncio, cv2
+
+# Define NetGear_Async Client in receive mode with default parameters, and launch it
+client = NetGear_Async(
+ receive_mode=True,
+ pattern=1,
+ logging=True,
+).launch()
+
+# create your own custom frame producer
+async def my_frame_producer():
+
+ # loop over Client's Asynchronous Frame Generator
+ async for frame in client.recv_generator():
+
+ # {do something with received frames here}
+
+        # reduce frame size for better performance; otherwise comment out this line
+ frame = await reducer(
+ frame, percentage=30, interpolation=cv2.INTER_AREA
+ ) # reduce frame by 30%
+
+ # handle JPEG encoding
+ encodedImage = cv2.imencode(".jpg", frame)[1].tobytes()
+ # yield frame in byte format
+ yield (b"--frame\r\nContent-Type:image/jpeg\r\n\r\n" + encodedImage + b"\r\n")
+ await asyncio.sleep(0)
+
+
+if __name__ == "__main__":
+ # Set event loop to client's
+ asyncio.set_event_loop(client.loop)
+
+ # initialize WebGear app without any source
+ web = WebGear(logging=True)
+
+ # add your custom frame producer to config with adequate IP address
+ web.config["generator"] = my_frame_producer
+
+ # run this app on Uvicorn server at address http://localhost:8000/
+ uvicorn.run(web(), host="localhost", port=8000)
+
+ # safely close client
+ client.close()
+
+ # close app safely
+ web.shutdown()
+```
+
+!!! success "On successfully running this code, the output stream will be displayed at address http://localhost:8000/ in your Client's Browser."
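The `yield` in `my_frame_producer` above frames each JPEG with a multipart boundary. As a stdlib-only sketch of that byte layout (the `make_chunk` helper and placeholder payload below are illustrative, not part of WebGear's API):

```python
# Illustrative sketch: how a single multipart MJPEG chunk, as yielded
# by the frame producer above, is laid out byte-by-byte.

def make_chunk(jpeg_bytes: bytes) -> bytes:
    """Frame one JPEG payload for a multipart/x-mixed-replace response."""
    return (
        b"--frame\r\n"
        b"Content-Type:image/jpeg\r\n\r\n" + jpeg_bytes + b"\r\n"
    )

# placeholder bytes standing in for the cv2.imencode() result
fake_jpeg = b"\xff\xd8...fake...\xff\xd9"
chunk = make_chunk(fake_jpeg)
```

The browser keeps the connection open and replaces the displayed image each time a new `--frame` part arrives.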
+
+### Server
+
+Now, open a terminal on the Server System _(another machine with a webcam connected to it at index 0)_, and execute the following python code:
+
+!!! note "Replace the IP address in the following code with Client's IP address you noted earlier."
+
+```python
+# import library
+from vidgear.gears.asyncio import NetGear_Async
+import cv2, asyncio
+
+# initialize Server without any source
+server = NetGear_Async(
+ source=None,
+ address="192.168.x.xxx",
+ port="5454",
+ protocol="tcp",
+ pattern=1,
+ logging=True,
+)
+
+# Create an async frame generator as a custom source
+async def my_frame_generator():
+
+ # !!! define your own video source here !!!
+ # Open any video stream such as live webcam
+ # video stream on first index(i.e. 0) device
+ stream = cv2.VideoCapture(0)
+
+ # loop over stream until its terminated
+ while True:
+
+ # read frames
+ (grabbed, frame) = stream.read()
+
+ # check if frame empty
+ if not grabbed:
+ break
+
+ # do something with the frame to be sent here
+
+ # yield frame
+ yield frame
+ # sleep for sometime
+ await asyncio.sleep(0)
+
+ # close stream
+ stream.release()
+
+
+if __name__ == "__main__":
+ # set event loop
+ asyncio.set_event_loop(server.loop)
+ # Add your custom source generator to Server configuration
+ server.config["generator"] = my_frame_generator()
+ # Launch the Server
+ server.launch()
+ try:
+ # run your main function task until it is complete
+ server.loop.run_until_complete(server.task)
+ except (KeyboardInterrupt, SystemExit):
+ # wait for interrupts
+ pass
+ finally:
+ # finally close the server
+ server.close()
+```
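Stripped of the vidgear and OpenCV specifics, the custom-source pattern above is just an async generator driven by an event loop. A minimal stdlib-only sketch (the names here are illustrative):

```python
import asyncio

# hypothetical stand-in for a video source: yields numbered "frames"
async def frame_generator(count):
    for i in range(count):
        yield f"frame-{i}"
        # hand control back to the event loop, like the example above
        await asyncio.sleep(0)

async def consume(gen):
    # collect everything the generator produces
    return [frame async for frame in gen]

frames = asyncio.run(consume(frame_generator(3)))
print(frames)  # → ['frame-0', 'frame-1', 'frame-2']
```

The `await asyncio.sleep(0)` after each `yield` is what keeps a tight producer loop from starving other coroutines running on the same loop.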
+
+
diff --git a/docs/help/netgear_async_faqs.md b/docs/help/netgear_async_faqs.md
index f68ecf407..289ca8d29 100644
--- a/docs/help/netgear_async_faqs.md
+++ b/docs/help/netgear_async_faqs.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/docs/help/netgear_ex.md b/docs/help/netgear_ex.md
new file mode 100644
index 000000000..ef43baaa8
--- /dev/null
+++ b/docs/help/netgear_ex.md
@@ -0,0 +1,368 @@
+
+
+# NetGear Examples
+
+
+
+## Using NetGear with WebGear
+
+The complete usage example is as follows:
+
+!!! new "New in v0.2.2"
+ This example was added in `v0.2.2`.
+
+### Client + WebGear Server
+
+Open a terminal on the Client System _(where you want to set up the WebGear server and display the input frames received from the Server)_, and execute the following python code:
+
+!!! danger "After running this code, make sure to open the browser immediately, otherwise NetGear will soon exit with a `RuntimeError`. You can also try setting [`max_retries`](../../gears/netgear/params/#options) and [`request_timeout`](../../gears/netgear/params/#options) attributes to higher values to avoid this."
+
+!!! warning "Make sure you use different `port` value for NetGear and WebGear API."
+
+!!! alert "High CPU utilization may occur on Client's end. User discretion is advised."
+
+!!! note "Note down the IP-address of this system _(required at Server's end)_ by executing the `hostname -I` command, and replace it in the following code."
+
+```python
+# import necessary libs
+import uvicorn, asyncio, cv2
+from vidgear.gears import NetGear
+from vidgear.gears.asyncio import WebGear
+from vidgear.gears.asyncio.helper import reducer
+
+# initialize WebGear app without any source
+web = WebGear(logging=True)
+
+
+# activate jpeg encoding and specify other related parameters
+options = {
+ "jpeg_compression": True,
+ "jpeg_compression_quality": 90,
+ "jpeg_compression_fastdct": True,
+ "jpeg_compression_fastupsample": True,
+}
+
+# create your own custom frame producer
+async def my_frame_producer():
+    # Define NetGear Client at given IP address and define parameters
+    # !!! change following IP address '192.168.x.xxx' with yours !!!
+ client = NetGear(
+ receive_mode=True,
+ address="192.168.x.xxx",
+ port="5454",
+ protocol="tcp",
+ pattern=1,
+ logging=True,
+ **options,
+ )
+
+ # loop over frames
+ while True:
+ # receive frames from network
+        frame = client.recv()
+
+ # if NoneType
+ if frame is None:
+            break
+
+ # do something with your OpenCV frame here
+
+        # reduce frame size for better performance; otherwise comment out this line
+ frame = await reducer(
+ frame, percentage=30, interpolation=cv2.INTER_AREA
+ ) # reduce frame by 30%
+
+ # handle JPEG encoding
+ encodedImage = cv2.imencode(".jpg", frame)[1].tobytes()
+ # yield frame in byte format
+ yield (b"--frame\r\nContent-Type:image/jpeg\r\n\r\n" + encodedImage + b"\r\n")
+ await asyncio.sleep(0)
+ # close stream
+ client.close()
+
+
+# add your custom frame producer to config with adequate IP address
+web.config["generator"] = my_frame_producer
+
+# run this app on Uvicorn server at address http://localhost:8000/
+uvicorn.run(web(), host="localhost", port=8000)
+
+# close app safely
+web.shutdown()
+```
+
+!!! success "On successfully running this code, the output stream will be displayed at address http://localhost:8000/ in your Client's Browser."
+
+
+### Server
+
+Now, open a terminal on the Server System _(another machine with a webcam connected to it at index 0)_, and execute the following python code:
+
+!!! note "Replace the IP address in the following code with Client's IP address you noted earlier."
+
+```python
+# import required libraries
+from vidgear.gears import VideoGear
+from vidgear.gears import NetGear
+import cv2
+
+# activate jpeg encoding and specify other related parameters
+options = {
+ "jpeg_compression": True,
+ "jpeg_compression_quality": 90,
+ "jpeg_compression_fastdct": True,
+ "jpeg_compression_fastupsample": True,
+}
+
+# Open live video stream on webcam at first index(i.e. 0) device
+stream = VideoGear(source=0).start()
+
+# Define NetGear server at given IP address and define parameters
+# !!! change following IP address '192.168.x.xxx' with client's IP address !!!
+server = NetGear(
+ address="192.168.x.xxx",
+ port="5454",
+ protocol="tcp",
+ pattern=1,
+ logging=True,
+ **options
+)
+
+# loop over until KeyBoard Interrupted
+while True:
+
+ try:
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if None-type
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # send frame to server
+ server.send(frame)
+
+ except KeyboardInterrupt:
+ break
+
+# safely close video stream
+stream.stop()
+
+# safely close server
+server.close()
+```
+
+
+
+## Using NetGear with WebGear_RTC
+
+The complete usage example is as follows:
+
+!!! new "New in v0.2.2"
+ This example was added in `v0.2.2`.
+
+### Client + WebGear_RTC Server
+
+Open a terminal on the Client System _(where you want to set up the WebGear_RTC server and display the input frames received from the Server)_, and execute the following python code:
+
+!!! danger "After running this code, make sure to open the browser immediately, otherwise NetGear will soon exit with a `RuntimeError`. You can also try setting [`max_retries`](../../gears/netgear/params/#options) and [`request_timeout`](../../gears/netgear/params/#options) attributes to higher values to avoid this."
+
+!!! warning "Make sure you use different `port` value for NetGear and WebGear_RTC API."
+
+!!! alert "High CPU utilization may occur on Client's end. User discretion is advised."
+
+!!! note "Note down the IP-address of this system _(required at Server's end)_ by executing the `hostname -I` command, and replace it in the following code."
+
+```python
+# import required libraries
+import uvicorn, asyncio, cv2
+from av import VideoFrame
+from aiortc import VideoStreamTrack
+from aiortc.mediastreams import MediaStreamError
+from vidgear.gears import NetGear
+from vidgear.gears.asyncio import WebGear_RTC
+from vidgear.gears.asyncio.helper import reducer
+
+# initialize WebGear_RTC app without any source
+web = WebGear_RTC(logging=True)
+
+# activate jpeg encoding and specify other related parameters
+options = {
+ "jpeg_compression": True,
+ "jpeg_compression_quality": 90,
+ "jpeg_compression_fastdct": True,
+ "jpeg_compression_fastupsample": True,
+}
+
+
+# create your own Bare-Minimum Custom Media Server
+class Custom_RTCServer(VideoStreamTrack):
+ """
+ Custom Media Server using OpenCV, an inherit-class
+ to aiortc's VideoStreamTrack.
+ """
+
+ def __init__(
+ self,
+ address=None,
+ port="5454",
+ protocol="tcp",
+ pattern=1,
+ logging=True,
+ options={},
+ ):
+ # don't forget this line!
+ super().__init__()
+
+ # initialize global params
+ # Define NetGear Client at given IP address and define parameters
+        self.client = NetGear(
+            receive_mode=True,
+            address=address,
+            port=port,
+            protocol=protocol,
+            pattern=pattern,
+            logging=logging,
+            **options
+        )
+
+ async def recv(self):
+ """
+ A coroutine function that yields `av.frame.Frame`.
+ """
+ # don't forget this function!!!
+
+ # get next timestamp
+ pts, time_base = await self.next_timestamp()
+
+ # receive frames from network
+ frame = self.client.recv()
+
+ # if NoneType
+ if frame is None:
+ raise MediaStreamError
+
+        # reduce frame size for better performance; otherwise comment out this line
+ frame = await reducer(frame, percentage=30) # reduce frame by 30%
+
+        # construct `av.frame.Frame` from `numpy.nd.array`
+ av_frame = VideoFrame.from_ndarray(frame, format="bgr24")
+ av_frame.pts = pts
+ av_frame.time_base = time_base
+
+ # return `av.frame.Frame`
+ return av_frame
+
+ def terminate(self):
+ """
+ Gracefully terminates VideoGear stream
+ """
+ # don't forget this function!!!
+
+ # terminate
+ if not (self.client is None):
+ self.client.close()
+ self.client = None
+
+
+# assign your custom media server to config with adequate IP address
+# !!! change following IP address '192.168.x.xxx' with yours !!!
+web.config["server"] = Custom_RTCServer(
+ address="192.168.x.xxx",
+ port="5454",
+ protocol="tcp",
+ pattern=1,
+ logging=True,
+    options=options
+)
+
+# run this app on Uvicorn server at address http://localhost:8000/
+uvicorn.run(web(), host="localhost", port=8000)
+
+# close app safely
+web.shutdown()
+```
+
+!!! success "On successfully running this code, the output stream will be displayed at address http://localhost:8000/ in your Client's Browser."
+
+### Server
+
+Now, open a terminal on the Server System _(another machine with a webcam connected to it at index 0)_, and execute the following python code:
+
+!!! note "Replace the IP address in the following code with Client's IP address you noted earlier."
+
+```python
+# import required libraries
+from vidgear.gears import VideoGear
+from vidgear.gears import NetGear
+import cv2
+
+# activate jpeg encoding and specify other related parameters
+options = {
+ "jpeg_compression": True,
+ "jpeg_compression_quality": 90,
+ "jpeg_compression_fastdct": True,
+ "jpeg_compression_fastupsample": True,
+}
+
+# Open live video stream on webcam at first index(i.e. 0) device
+stream = VideoGear(source=0).start()
+
+# Define NetGear server at given IP address and define parameters
+# !!! change following IP address '192.168.x.xxx' with client's IP address !!!
+server = NetGear(
+ address="192.168.x.xxx",
+ port="5454",
+ protocol="tcp",
+ pattern=1,
+ logging=True,
+ **options
+)
+
+# loop over until KeyBoard Interrupted
+while True:
+
+ try:
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # send frame to server
+ server.send(frame)
+
+ except KeyboardInterrupt:
+ break
+
+# safely close video stream
+stream.stop()
+
+# safely close server
+server.close()
+```
+
+
\ No newline at end of file
diff --git a/docs/help/netgear_faqs.md b/docs/help/netgear_faqs.md
index 4766cdb17..f8aab71fe 100644
--- a/docs/help/netgear_faqs.md
+++ b/docs/help/netgear_faqs.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -39,12 +39,13 @@ limitations under the License.
Here's the compatibility chart for NetGear's [Exclusive Modes](../../gears/netgear/overview/#exclusive-modes):
-| Exclusive Modes | Multi-Servers | Multi-Clients | Secure | Bidirectional |
-| :-------------: | :-----------: | :-----------: | :----: | :-----------: |
-| **Multi-Servers** | - | No _(throws error)_ | Yes | No _(disables it)_ |
-| **Multi-Clients** | No _(throws error)_ | - | Yes | No _(disables it)_ |
-| **Secure** | Yes | Yes | - | Yes |
-| **Bidirectional** | No _(disabled)_ | No _(disabled)_ | Yes | - |
+| Exclusive Modes | Multi-Servers | Multi-Clients | Secure | Bidirectional | SSH Tunneling |
+| :-------------: | :-----------: | :-----------: | :----: | :-----------: | :-----------: |
+| **Multi-Servers** | - | No _(throws error)_ | Yes | No _(disables it)_ | No _(throws error)_ |
+| **Multi-Clients** | No _(throws error)_ | - | Yes | No _(disables it)_ | No _(throws error)_ |
+| **Secure** | Yes | Yes | - | Yes | Yes |
+| **Bidirectional** | No _(disabled)_ | No _(disabled)_ | Yes | - | Yes |
+| **SSH Tunneling** | No _(throws error)_ | No _(throws error)_ | Yes | Yes | - |
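
For illustration, the chart above can be captured as a small lookup table; the sketch below is purely illustrative (the `combine()` helper and string outcomes are assumptions, not part of the NetGear API):

```python
# outcome of enabling two Exclusive Modes together, per the chart above
COMPAT = {
    ("Multi-Servers", "Multi-Clients"): "error",
    ("Multi-Servers", "Secure"): "ok",
    ("Multi-Servers", "Bidirectional"): "disabled",
    ("Multi-Servers", "SSH Tunneling"): "error",
    ("Multi-Clients", "Secure"): "ok",
    ("Multi-Clients", "Bidirectional"): "disabled",
    ("Multi-Clients", "SSH Tunneling"): "error",
    ("Secure", "Bidirectional"): "ok",
    ("Secure", "SSH Tunneling"): "ok",
    ("Bidirectional", "SSH Tunneling"): "ok",
}

def combine(mode_a, mode_b):
    """Look up the outcome regardless of argument order."""
    return COMPAT.get((mode_a, mode_b)) or COMPAT[(mode_b, mode_a)]
```

For example, `combine("Secure", "Multi-Servers")` returns `"ok"`, while `combine("SSH Tunneling", "Multi-Clients")` returns `"error"`.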
@@ -93,6 +94,13 @@ Here's the compatibility chart for NetGear's [Exclusive Modes](../../gears/netge
+
+## How to access NetGear API outside network or remotely?
+
+**Answer:** See its [SSH Tunneling Mode doc ➶](../../gears/netgear/advanced/ssh_tunnel/).
+
+
+
## Are there any side-effect of sending data with frames?
**Answer:** Yes, it may lead to additional **LATENCY** depending upon the size/amount of the data being transferred. User discretion is advised.
diff --git a/docs/help/pigear_ex.md b/docs/help/pigear_ex.md
new file mode 100644
index 000000000..03d86f63e
--- /dev/null
+++ b/docs/help/pigear_ex.md
@@ -0,0 +1,75 @@
+
+
+# PiGear Examples
+
+
+
+## Setting variable `picamera` parameters for Camera Module at runtime
+
+You can use PiGear's `stream` global parameter to feed any [`picamera`](https://picamera.readthedocs.io/en/release-1.10/api_camera.html) parameter at runtime.
+
+In this example, we will set the Camera Module's initial `brightness` value to `80`, and change it to `50` when the **`z` key** is pressed at runtime:
+
+```python
+# import required libraries
+from vidgear.gears import PiGear
+import cv2
+
+# initial parameters
+options = {"brightness": 80} # set brightness to 80
+
+# open pi video stream with default parameters
+stream = PiGear(logging=True, **options).start()
+
+# loop over
+while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+
+ # {do something with the frame here}
+
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+ # check for 'z' key if pressed
+ if key == ord("z"):
+ # change brightness to 50
+ stream.stream.brightness = 50
+
+# close output window
+cv2.destroyAllWindows()
+
+# safely close video stream
+stream.stop()
+```
+
+
\ No newline at end of file
diff --git a/docs/help/pigear_faqs.md b/docs/help/pigear_faqs.md
index 176e7a6d0..3c24814da 100644
--- a/docs/help/pigear_faqs.md
+++ b/docs/help/pigear_faqs.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -67,53 +67,6 @@ limitations under the License.
## How to change `picamera` settings for Camera Module at runtime?
-**Answer:** You can use `stream` global parameter in PiGear to feed any `picamera` setting at runtime. See following sample usage example:
-
-!!! info ""
- In this example we will set initial Camera Module's `brightness` value `80`, and will change it `50` when **`z` key** is pressed at runtime.
-
-```python
-# import required libraries
-from vidgear.gears import PiGear
-import cv2
-
-# initial parameters
-options = {"brightness": 80} # set brightness to 80
-
-# open pi video stream with default parameters
-stream = PiGear(logging=True, **options).start()
-
-# loop over
-while True:
-
- # read frames from stream
- frame = stream.read()
-
- # check for frame if Nonetype
- if frame is None:
- break
-
-
- # {do something with the frame here}
-
-
- # Show output window
- cv2.imshow("Output Frame", frame)
-
- # check for 'q' key if pressed
- key = cv2.waitKey(1) & 0xFF
- if key == ord("q"):
- break
- # check for 'z' key if pressed
- if key == ord("z"):
- # change brightness to 50
- stream.stream.brightness = 50
-
-# close output window
-cv2.destroyAllWindows()
-
-# safely close video stream
-stream.stop()
-```
+**Answer:** You can use `stream` global parameter in PiGear to feed any `picamera` setting at runtime. See [this bonus example ➶](../pigear_ex/#setting-variable-picamera-parameters-for-camera-module-at-runtime)
\ No newline at end of file
diff --git a/docs/help/screengear_ex.md b/docs/help/screengear_ex.md
new file mode 100644
index 000000000..80463ee11
--- /dev/null
+++ b/docs/help/screengear_ex.md
@@ -0,0 +1,149 @@
+
+
+# ScreenGear Examples
+
+
+
+## Using ScreenGear with NetGear and WriteGear
+
+The complete usage example is as follows:
+
+!!! new "New in v0.2.2"
+ This example was added in `v0.2.2`.
+
+### Client + WriteGear
+
+Open a terminal on Client System _(where you want to save the input frames received from the Server)_ and execute the following python code:
+
+!!! info "Note down the IP-address of this system _(required at Server's end)_ by executing the command `hostname -I`, and replace it in the following code."
+
+!!! tip "You can terminate client anytime by pressing ++ctrl+"C"++ on your keyboard!"
+
+```python
+# import required libraries
+from vidgear.gears import NetGear
+from vidgear.gears import WriteGear
+import cv2
+
+# define various tweak flags
+options = {"flag": 0, "copy": False, "track": False}
+
+# Define Netgear Client at given IP address and define parameters
+# !!! change following IP address '192.168.x.xxx' with yours !!!
+client = NetGear(
+ address="192.168.x.xxx",
+ port="5454",
+ protocol="tcp",
+ pattern=1,
+ receive_mode=True,
+ logging=True,
+ **options
+)
+
+# Define writer with default parameters and suitable output filename for e.g. `Output.mp4`
+writer = WriteGear(output_filename="Output.mp4")
+
+# loop over
+while True:
+
+ # receive frames from network
+ frame = client.recv()
+
+ # check for received frame if Nonetype
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # write frame to writer
+ writer.write(frame)
+
+# close output window
+cv2.destroyAllWindows()
+
+# safely close client
+client.close()
+
+# safely close writer
+writer.close()
+```
+
+### Server + ScreenGear
+
+Now, open a terminal on the Server System _(another machine with a monitor/display attached to it)_, and execute the following python code:
+
+!!! info "Replace the IP address in the following code with Client's IP address you noted earlier."
+
+!!! tip "You can terminate stream on both side anytime by pressing ++ctrl+"C"++ on your keyboard!"
+
+```python
+# import required libraries
+from vidgear.gears import ScreenGear
+from vidgear.gears import NetGear
+
+# define dimensions of screen w.r.t to given monitor to be captured
+options = {"top": 40, "left": 0, "width": 100, "height": 100}
+
+# open stream with defined parameters
+stream = ScreenGear(logging=True, **options).start()
+
+# define various netgear tweak flags
+options = {"flag": 0, "copy": False, "track": False}
+
+# Define Netgear server at given IP address and define parameters
+# !!! change following IP address '192.168.x.xxx' with client's IP address !!!
+server = NetGear(
+ address="192.168.x.xxx",
+ port="5454",
+ protocol="tcp",
+ pattern=1,
+ logging=True,
+ **options
+)
+
+# loop over until KeyBoard Interrupted
+while True:
+
+ try:
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # send frame to server
+ server.send(frame)
+
+ except KeyboardInterrupt:
+ break
+
+# safely close video stream
+stream.stop()
+
+# safely close server
+server.close()
+```
+
+
+
diff --git a/docs/help/screengear_faqs.md b/docs/help/screengear_faqs.md
index 63118e583..7fb76d802 100644
--- a/docs/help/screengear_faqs.md
+++ b/docs/help/screengear_faqs.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/docs/help/stabilizer_ex.md b/docs/help/stabilizer_ex.md
new file mode 100644
index 000000000..8b8636265
--- /dev/null
+++ b/docs/help/stabilizer_ex.md
@@ -0,0 +1,236 @@
+
+
+# Stabilizer Class Examples
+
+
+
+## Saving Stabilizer Class output with Live Audio Input
+
+In this example code, we will merge the audio from an audio device _(e.g. a webcam's inbuilt mic input)_ with stabilized frames incoming from the Stabilizer Class _(which is also using the same webcam's video input through OpenCV)_, and save the final output as a compressed video file, all in real time:
+
+!!! new "New in v0.2.2"
+ This example was added in `v0.2.2`.
+
+!!! alert "Example Assumptions"
+
+    * You're running a Linux machine.
+ * You already have appropriate audio driver and software installed on your machine.
+
+
+??? tip "Identifying and Specifying sound card on different OS platforms"
+
+ === "On Windows"
+
+        Windows OS users can use [dshow](https://trac.ffmpeg.org/wiki/DirectShow) (DirectShow) to list audio input devices, which is the preferred option on Windows. You can refer to the following steps to identify and specify your sound card:
+
+ - [x] **[OPTIONAL] Enable sound card(if disabled):** First enable your Stereo Mix by opening the "Sound" window and select the "Recording" tab, then right click on the window and select "Show Disabled Devices" to toggle the Stereo Mix device visibility. **Follow this [post ➶](https://forums.tomshardware.com/threads/no-sound-through-stereo-mix-realtek-hd-audio.1716182/) for more details.**
+
+        - [x] **Identify Sound Card:** You can locate your sound card using `dshow` as follows:
+
+ ```sh
+ c:\> ffmpeg -list_devices true -f dshow -i dummy
+ ffmpeg version N-45279-g6b86dd5... --enable-runtime-cpudetect
+ libavutil 51. 74.100 / 51. 74.100
+ libavcodec 54. 65.100 / 54. 65.100
+ libavformat 54. 31.100 / 54. 31.100
+ libavdevice 54. 3.100 / 54. 3.100
+ libavfilter 3. 19.102 / 3. 19.102
+ libswscale 2. 1.101 / 2. 1.101
+ libswresample 0. 16.100 / 0. 16.100
+ [dshow @ 03ACF580] DirectShow video devices
+ [dshow @ 03ACF580] "Integrated Camera"
+ [dshow @ 03ACF580] "USB2.0 Camera"
+ [dshow @ 03ACF580] DirectShow audio devices
+ [dshow @ 03ACF580] "Microphone (Realtek High Definition Audio)"
+ [dshow @ 03ACF580] "Microphone (USB2.0 Camera)"
+ dummy: Immediate exit requested
+ ```
+
+
+        - [x] **Specify Sound Card:** Then, you can specify your located sound card in WriteGear as follows:
+
+ ```python
+ # assign appropriate input audio-source
+ output_params = {
+ "-i":"audio=Microphone (USB2.0 Camera)",
+ "-thread_queue_size": "512",
+ "-f": "dshow",
+ "-ac": "2",
+ "-acodec": "aac",
+ "-ar": "44100",
+ }
+ ```
+
+ !!! fail "If audio still doesn't work then [checkout this troubleshooting guide ➶](https://www.maketecheasier.com/fix-microphone-not-working-windows10/) or reach us out on [Gitter ➶](https://gitter.im/vidgear/community) Community channel"
+
+
+ === "On Linux"
+
+        Linux OS users can use [alsa](https://ffmpeg.org/ffmpeg-all.html#alsa) to capture live audio input, such as from a webcam. You can refer to the following steps to identify and specify your sound card:
+
+ - [x] **Identify Sound Card:** To get the list of all installed cards on your machine, you can type `arecord -l` or `arecord -L` _(longer output)_.
+
+ ```sh
+ arecord -l
+
+ **** List of CAPTURE Hardware Devices ****
+ card 0: ICH5 [Intel ICH5], device 0: Intel ICH [Intel ICH5]
+ Subdevices: 1/1
+ Subdevice #0: subdevice #0
+ card 0: ICH5 [Intel ICH5], device 1: Intel ICH - MIC ADC [Intel ICH5 - MIC ADC]
+ Subdevices: 1/1
+ Subdevice #0: subdevice #0
+ card 0: ICH5 [Intel ICH5], device 2: Intel ICH - MIC2 ADC [Intel ICH5 - MIC2 ADC]
+ Subdevices: 1/1
+ Subdevice #0: subdevice #0
+ card 0: ICH5 [Intel ICH5], device 3: Intel ICH - ADC2 [Intel ICH5 - ADC2]
+ Subdevices: 1/1
+ Subdevice #0: subdevice #0
+ card 1: U0x46d0x809 [USB Device 0x46d:0x809], device 0: USB Audio [USB Audio]
+ Subdevices: 1/1
+ Subdevice #0: subdevice #0
+ ```
+
+
+ - [x] **Specify Sound Card:** Then, you can specify your located soundcard in WriteGear as follows:
+
+ !!! info "The easiest thing to do is to reference sound card directly, namely "card 0" (Intel ICH5) and "card 1" (Microphone on the USB web cam), as `hw:0` or `hw:1`"
+
+ ```python
+ # assign appropriate input audio-source
+ output_params = {
+ "-i": "hw:1",
+ "-thread_queue_size": "512",
+ "-f": "alsa",
+ "-ac": "2",
+ "-acodec": "aac",
+ "-ar": "44100",
+ }
+ ```
+
+ !!! fail "If audio still doesn't work then reach us out on [Gitter ➶](https://gitter.im/vidgear/community) Community channel"
+
+
+ === "On MacOS"
+
+        macOS users can use [avfoundation](https://ffmpeg.org/ffmpeg-devices.html#avfoundation) to list input devices and grab audio from integrated iSight cameras as well as cameras connected via USB or FireWire. You can refer to the following steps to identify and specify your sound card on macOS machines:
+
+
+        - [x] **Identify Sound Card:** You can locate your sound card using `avfoundation` as follows:
+
+ ```sh
+        ffmpeg -f avfoundation -list_devices true -i ""
+ ffmpeg version N-45279-g6b86dd5... --enable-runtime-cpudetect
+ libavutil 51. 74.100 / 51. 74.100
+ libavcodec 54. 65.100 / 54. 65.100
+ libavformat 54. 31.100 / 54. 31.100
+ libavdevice 54. 3.100 / 54. 3.100
+ libavfilter 3. 19.102 / 3. 19.102
+ libswscale 2. 1.101 / 2. 1.101
+ libswresample 0. 16.100 / 0. 16.100
+ [AVFoundation input device @ 0x7f8e2540ef20] AVFoundation video devices:
+ [AVFoundation input device @ 0x7f8e2540ef20] [0] FaceTime HD camera (built-in)
+ [AVFoundation input device @ 0x7f8e2540ef20] [1] Capture screen 0
+ [AVFoundation input device @ 0x7f8e2540ef20] AVFoundation audio devices:
+ [AVFoundation input device @ 0x7f8e2540ef20] [0] Blackmagic Audio
+ [AVFoundation input device @ 0x7f8e2540ef20] [1] Built-in Microphone
+ ```
+
+
+        - [x] **Specify Sound Card:** Then, you can specify your located sound card in WriteGear as follows:
+
+ ```python
+ # assign appropriate input audio-source
+ output_params = {
+ "-audio_device_index": "0",
+ "-thread_queue_size": "512",
+ "-f": "avfoundation",
+ "-ac": "2",
+ "-acodec": "aac",
+ "-ar": "44100",
+ }
+ ```
+
+ !!! fail "If audio still doesn't work then reach us out on [Gitter ➶](https://gitter.im/vidgear/community) Community channel"
+
+
+!!! danger "Make sure this `-i` audio-source is compatible with the provided video-source, otherwise you may encounter multiple errors or no output at all."
+
+!!! warning "You **MUST** use [`-input_framerate`](../../gears/writegear/compression/params/#a-exclusive-parameters) attribute to set exact value of input framerate when using external audio in Real-time Frames mode, otherwise audio delay will occur in output streams."
+
+```python
+# import required libraries
+from vidgear.gears import WriteGear
+from vidgear.gears.stabilizer import Stabilizer
+import cv2
+
+# Open suitable video stream, such as webcam on first index(i.e. 0)
+stream = cv2.VideoCapture(0)
+
+# initiate stabilizer object with defined parameters
+stab = Stabilizer(smoothing_radius=30, crop_n_zoom=True, border_size=5, logging=True)
+
+# change with your webcam soundcard, plus add additional required FFmpeg parameters for your writer
+output_params = {
+ "-thread_queue_size": "512",
+ "-f": "alsa",
+ "-ac": "1",
+ "-ar": "48000",
+ "-i": "plughw:CARD=CAMERA,DEV=0",
+}
+
+# Define writer with defined parameters and suitable output filename for e.g. `Output.mp4`
+writer = WriteGear(output_filename="Output.mp4", logging=True, **output_params)
+
+# loop over
+while True:
+
+ # read frames from stream
+ (grabbed, frame) = stream.read()
+
+ # check for frame if not grabbed
+ if not grabbed:
+ break
+
+ # send current frame to stabilizer for processing
+ stabilized_frame = stab.stabilize(frame)
+
+    # wait for stabilizer, which may still be initializing
+ if stabilized_frame is None:
+ continue
+
+ # {do something with the stabilized frame here}
+
+ # write stabilized frame to writer
+ writer.write(stabilized_frame)
+
+
+# clear stabilizer resources
+stab.clean()
+
+# safely close video stream
+stream.release()
+
+# safely close writer
+writer.close()
+```
+
+
\ No newline at end of file
diff --git a/docs/help/stabilizer_faqs.md b/docs/help/stabilizer_faqs.md
index 5a33629c1..7406c3795 100644
--- a/docs/help/stabilizer_faqs.md
+++ b/docs/help/stabilizer_faqs.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -30,7 +30,7 @@ limitations under the License.
## How much latency you would typically expect with Stabilizer Class?
-**Answer:** The stabilizer will be Slower for High-Quality videos-frames. Try reducing frames size _(Use [`reducer()`](../../bonus/reference/helper/#reducer) method)_ before feeding them for reducing latency. Also, see [`smoothing_radius`](../../gears/stabilizer/params/#smoothing_radius) parameter of Stabilizer class that handles the quality of stabilization at the expense of latency and sudden panning. The larger its value, the less will be panning, more will be latency, and vice-versa.
+**Answer:** The stabilizer will be slower for high-quality video-frames. Try reducing frame size _(use the [`reducer()`](../../bonus/reference/helper/#vidgear.gears.helper.reducer--reducer) method)_ before feeding frames, to reduce latency. Also, see the [`smoothing_radius`](../../gears/stabilizer/params/#smoothing_radius) parameter of the Stabilizer class, which trades latency and sudden panning for stabilization quality. The larger its value, the less panning and the more latency, and vice-versa.
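As a rough back-of-the-envelope sketch of why reducing frame size helps (the frame dimensions and percentage below are assumptions for illustration; the actual `reducer()` method resizes frames with OpenCV interpolation):

```python
# sketch: how much pixel data a 40% reduction removes before stabilization
# (assumes a 1920x1080 frame; values are illustrative only)
width, height, percentage = 1920, 1080, 40
ratio = (100 - percentage) / 100

# reduced frame dimensions after applying the percentage
new_width, new_height = int(width * ratio), int(height * ratio)

pixels_before = width * height
pixels_after = new_width * new_height
# fewer pixels per frame means less work for the stabilizer per frame
```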
diff --git a/docs/help/streamgear_ex.md b/docs/help/streamgear_ex.md
new file mode 100644
index 000000000..d8a83db14
--- /dev/null
+++ b/docs/help/streamgear_ex.md
@@ -0,0 +1,161 @@
+
+
+# StreamGear Examples
+
+
+
+## StreamGear Live-Streaming Usage with PiGear
+
+In this example, we will be Live-Streaming video-frames from Raspberry Pi _(with Camera Module connected)_ using PiGear API and StreamGear API's Real-time Frames Mode:
+
+!!! new "New in v0.2.2"
+ This example was added in `v0.2.2`.
+
+!!! tip "Use `-window_size` & `-extra_window_size` FFmpeg parameters for controlling the number of frames to be kept in chunks. The lower these values, the lower the latency will be."
+
+!!! alert "After every few chunks _(equal to the sum of `-window_size` & `-extra_window_size` values)_, all chunks will be overwritten in Live-Streaming. Since newer chunks in the manifest/playlist contain NO information about older ones, the resultant DASH/HLS stream will play only the most recent frames."
+
+!!! note "In this mode, StreamGear **DOES NOT** automatically map video-source audio to generated streams. You need to manually assign a separate audio-source through the [`-audio`](../../gears/streamgear/params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter."
+
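To get a rough feel for the alert above, here's a quick back-of-the-envelope sketch (the window sizes and chunk duration are assumed values for illustration, not defaults):

```python
# sketch: approximate playback window retained during Live-Streaming
# (assumes -window_size=5, -extra_window_size=5 and ~2s chunks; all illustrative)
window_size = 5
extra_window_size = 5
seconds_per_chunk = 2

# total chunks kept before older ones start getting overwritten
chunks_retained = window_size + extra_window_size
# rough duration of recent video that stays playable, in seconds
playback_window = chunks_retained * seconds_per_chunk
```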
+=== "DASH"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import PiGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # add various Picamera tweak parameters to dictionary
+ options = {
+ "hflip": True,
+ "exposure_mode": "auto",
+ "iso": 800,
+ "exposure_compensation": 15,
+ "awb_mode": "horizon",
+ "sensor_mode": 0,
+ }
+
+ # open pi video stream with defined parameters
+ stream = PiGear(resolution=(640, 480), framerate=60, logging=True, **options).start()
+
+    # enable livestreaming and retrieve framerate from PiGear Stream and
+ # pass it as `-input_framerate` parameter for controlled framerate
+ stream_params = {"-input_framerate": stream.framerate, "-livestream": True}
+
+ # describe a suitable manifest-file location/name
+ streamer = StreamGear(output="dash_out.mpd", **stream_params)
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # send frame to streamer
+ streamer.stream(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+=== "HLS"
+
+ ```python
+ # import required libraries
+ from vidgear.gears import PiGear
+ from vidgear.gears import StreamGear
+ import cv2
+
+ # add various Picamera tweak parameters to dictionary
+ options = {
+ "hflip": True,
+ "exposure_mode": "auto",
+ "iso": 800,
+ "exposure_compensation": 15,
+ "awb_mode": "horizon",
+ "sensor_mode": 0,
+ }
+
+ # open pi video stream with defined parameters
+ stream = PiGear(resolution=(640, 480), framerate=60, logging=True, **options).start()
+
+ # enable livestreaming and retrieve framerate from CamGear Stream and
+ # pass it as `-input_framerate` parameter for controlled framerate
+ stream_params = {"-input_framerate": stream.framerate, "-livestream": True}
+
+ # describe a suitable manifest-file location/name
+    streamer = StreamGear(output="hls_out.m3u8", format="hls", **stream_params)
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # send frame to streamer
+ streamer.stream(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+
+ # safely close streamer
+ streamer.terminate()
+ ```
+
+
+
\ No newline at end of file
diff --git a/docs/help/streamgear_faqs.md b/docs/help/streamgear_faqs.md
index c49435926..955b15f76 100644
--- a/docs/help/streamgear_faqs.md
+++ b/docs/help/streamgear_faqs.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -24,13 +24,13 @@ limitations under the License.
## What is StreamGear API and what does it do?
-**Answer:** StreamGear automates transcoding workflow for generating _Ultra-Low Latency, High-Quality, Dynamic & Adaptive Streaming Formats (such as MPEG-DASH)_ in just few lines of python code. _For more info. see [StreamGear doc ➶](../../gears/streamgear/overview/)_
+**Answer:** StreamGear automates the transcoding workflow for generating _Ultra-Low Latency, High-Quality, Dynamic & Adaptive Streaming Formats (such as MPEG-DASH)_ in just a few lines of Python code. _For more info. see [StreamGear doc ➶](../../gears/streamgear/introduction/)_
## How to get started with StreamGear API?
-**Answer:** See [StreamGear doc ➶](../../gears/streamgear/overview/). Still in doubt, then ask us on [Gitter ➶](https://gitter.im/vidgear/community) Community channel.
+**Answer:** See [StreamGear doc ➶](../../gears/streamgear/introduction/). Still in doubt? Then ask us on the [Gitter ➶](https://gitter.im/vidgear/community) Community channel.
@@ -42,7 +42,7 @@ limitations under the License.
## How to play Streaming Assets created with StreamGear API?
-**Answer:** You can easily feed Manifest file(`.mpd`) to DASH Supported Players Input but sure encoded chunks are present along with it. See this list of [recommended players ➶](../../gears/streamgear/overview/#recommended-stream-players)
+**Answer:** You can easily feed the Manifest file (`.mpd`) to any DASH-supported player's input, but make sure the encoded chunks are present along with it. See this list of [recommended players ➶](../../gears/streamgear/introduction/#recommended-stream-players)
@@ -60,24 +60,34 @@ limitations under the License.
## How to create additional streams in StreamGear API?
-**Answer:** [See this example ➶](../../gears/streamgear/usage/#a2-usage-with-additional-streams)
+**Answer:** [See this example ➶](../../gears/streamgear/ssm/usage/#usage-with-additional-streams)
-## How to use StreamGear API with real-time frames?
-**Answer:** See [Real-time Frames Mode ➶](../../gears/streamgear/usage/#b-real-time-frames-mode)
+## How to use StreamGear API with OpenCV?
+
+**Answer:** [See this example ➶](../../gears/streamgear/rtfm/usage/#bare-minimum-usage-with-opencv)
-## How to use StreamGear API with OpenCV?
+## How to use StreamGear API with real-time frames?
-**Answer:** [See this example ➶](../../gears/streamgear/usage/#b4-bare-minimum-usage-with-opencv)
+**Answer:** See [Real-time Frames Mode ➶](../../gears/streamgear/rtfm/overview)
+## Is Real-time Frames Mode only used for Live-Streaming?
+
+**Answer:** Real-time Frames Mode and Live-Streaming are completely different terms, and are not directly related.
+
+- **Real-time Frames Mode** is one of the [primary modes](../../gears/streamgear/introduction/#mode-of-operations) for directly transcoding real-time [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) video-frames _(as opposed to an entire file)_ into a sequence of multiple smaller chunks/segments for streaming.
+
+- **Live-Streaming** is a feature of StreamGear's primary modes that activates a behaviour where chunks contain information for only a few of the newest frames, forgetting all previous ones, for low-latency streaming. It can be activated for any primary mode using the exclusive [`-livestream`](../../gears/streamgear/params/#a-exclusive-parameters) attribute of the `stream_params` dictionary parameter.
+
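As a minimal sketch, activating Live-Streaming is just one extra attribute in the `stream_params` dictionary (the 30 FPS value here is an assumption; the StreamGear call is shown as a comment since it requires FFmpeg installed):

```python
# enable Live-Streaming through the exclusive `-livestream` attribute
# (the 30 FPS value is an assumption; match `-input_framerate` to your source)
stream_params = {"-input_framerate": 30, "-livestream": True}

# the dictionary is then unpacked into StreamGear as usual, e.g.:
# from vidgear.gears import StreamGear
# streamer = StreamGear(output="dash_out.mpd", **stream_params)
```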
+
+## How to use Hardware/GPU encoder for StreamGear transcoding?
-**Answer:** [See this example ➶](../../gears/streamgear/usage/#b7-usage-with-hardware-video-encoder)
+**Answer:** [See this example ➶](../../gears/streamgear/rtfm/usage/#usage-with-hardware-video-encoder)
\ No newline at end of file
diff --git a/docs/help/videogear_ex.md b/docs/help/videogear_ex.md
new file mode 100644
index 000000000..de8a92053
--- /dev/null
+++ b/docs/help/videogear_ex.md
@@ -0,0 +1,220 @@
+
+
+# VideoGear Examples
+
+
+
+## Using VideoGear with ROS(Robot Operating System)
+
+We will be using [`cv_bridge`](http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython) to convert OpenCV frames to ROS image messages and vice-versa.
+
+In this example, we'll create a node that converts OpenCV frames into ROS image messages, and then publishes them over ROS.
+
+!!! new "New in v0.2.2"
+ This example was added in `v0.2.2`.
+
+!!! note "This example is vidgear implementation of this [wiki example](http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython)."
+
+```python
+# import roslib
+import roslib
+
+roslib.load_manifest("my_package")
+
+# import other required libraries
+import sys
+import rospy
+import cv2
+from std_msgs.msg import String
+from sensor_msgs.msg import Image
+from cv_bridge import CvBridge, CvBridgeError
+from vidgear.gears import VideoGear
+
+# custom publisher class
+class image_publisher:
+ def __init__(self, source=0, logging=False):
+ # create CV bridge
+ self.bridge = CvBridge()
+ # define publisher topic
+        self.image_pub = rospy.Publisher("image_topic_pub", Image, queue_size=10)
+ # open stream with given parameters
+ self.stream_stab = VideoGear(source=source, logging=logging).start()
+        # define subscriber topic
+        rospy.Subscriber("image_topic_sub", Image, self.callback)
+
+ def callback(self, data):
+
+ # {do something with received ROS node data here}
+
+        # read frames from stream
+        frame = self.stream_stab.read()
+        # check for frame if None-type
+        if not (frame is None):
+
+ # {do something with the frame here}
+
+ # publish our frame
+ try:
+ self.image_pub.publish(self.bridge.cv2_to_imgmsg(frame, "bgr8"))
+ except CvBridgeError as e:
+ # catch any errors
+ print(e)
+
+ def close(self):
+ # stop stream
+ self.stream_stab.stop()
+
+
+def main(args):
+ # !!! define your own video source here !!!
+ # Open any video stream such as live webcam
+ # video stream on first index(i.e. 0) device
+
+ # define publisher
+ ic = image_publisher(source=0, logging=True)
+ # initiate ROS node on publisher
+ rospy.init_node("image_publisher", anonymous=True)
+ try:
+ # run node
+ rospy.spin()
+ except KeyboardInterrupt:
+ print("Shutting down")
+ finally:
+ # close publisher
+ ic.close()
+
+
+if __name__ == "__main__":
+ main(sys.argv)
+```
+
+
+
+## Using VideoGear for capturing RTSP/RTMP URLs
+
+Here's a high-level wrapper code around VideoGear API to enable auto-reconnection during capturing, plus stabilization can be enabled _(with `stabilize=True`)_ to stabilize the captured frames on-the-go:
+
+!!! new "New in v0.2.2"
+ This example was added in `v0.2.2`.
+
+??? tip "Enforcing UDP stream"
+
+    You can easily enforce UDP for RTSP streams in place of default TCP, by putting following lines of code on the top of your existing code:
+
+ ```python
+ # import required libraries
+ import os
+
+ # enforce UDP
+ os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;udp"
+ ```
+
+ Finally, use [`backend`](../../gears/videogear/params/#backend) parameter value as `backend=cv2.CAP_FFMPEG` in VideoGear.
+
+
+```python
+from vidgear.gears import VideoGear
+import cv2
+import datetime
+import time
+
+
+class Reconnecting_VideoGear:
+ def __init__(self, cam_address, stabilize=False, reset_attempts=50, reset_delay=5):
+ self.cam_address = cam_address
+ self.stabilize = stabilize
+ self.reset_attempts = reset_attempts
+ self.reset_delay = reset_delay
+ self.source = VideoGear(
+ source=self.cam_address, stabilize=self.stabilize
+ ).start()
+        self.frame = None
+        self.running = True
+
+ def read(self):
+ if self.source is None:
+ return None
+ if self.running and self.reset_attempts > 0:
+ frame = self.source.read()
+ if frame is None:
+ self.source.stop()
+ self.reset_attempts -= 1
+ print(
+ "Re-connection Attempt-{} occured at time:{}".format(
+ str(self.reset_attempts),
+ datetime.datetime.now().strftime("%m-%d-%Y %I:%M:%S%p"),
+ )
+ )
+ time.sleep(self.reset_delay)
+ self.source = VideoGear(
+ source=self.cam_address, stabilize=self.stabilize
+ ).start()
+ # return previous frame
+ return self.frame
+ else:
+ self.frame = frame
+ return frame
+ else:
+ return None
+
+ def stop(self):
+ self.running = False
+ self.reset_attempts = 0
+ self.frame = None
+ if not self.source is None:
+ self.source.stop()
+
+
+if __name__ == "__main__":
+ # open any valid video stream
+ stream = Reconnecting_VideoGear(
+ cam_address="rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov",
+ reset_attempts=20,
+ reset_delay=5,
+ )
+
+ # loop over
+ while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if None-type
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # Show output window
+ cv2.imshow("Output", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+ # close output window
+ cv2.destroyAllWindows()
+
+ # safely close video stream
+ stream.stop()
+```
+
+
\ No newline at end of file
diff --git a/docs/help/videogear_faqs.md b/docs/help/videogear_faqs.md
index 50be6a58f..67cdb89cb 100644
--- a/docs/help/videogear_faqs.md
+++ b/docs/help/videogear_faqs.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/docs/help/webgear_ex.md b/docs/help/webgear_ex.md
new file mode 100644
index 000000000..05b1dc628
--- /dev/null
+++ b/docs/help/webgear_ex.md
@@ -0,0 +1,233 @@
+
+
+# WebGear Examples
+
+
+
+## Using WebGear with RaspberryPi Camera Module
+
+Because of WebGear API's flexible internal wrapper around VideoGear, it can easily access any parameter of CamGear and PiGear videocapture APIs.
+
+!!! info "The following usage examples are just an idea of what can be done with WebGear API. You can try various [VideoGear](../../gears/videogear/params/), [CamGear](../../gears/camgear/params/) and [PiGear](../../gears/pigear/params/) parameters directly in WebGear API in a similar manner."
+
+Here's a bare-minimum example of using WebGear API with the Raspberry Pi camera module, while tweaking its various properties in just one line:
+
+```python
+# import libs
+import uvicorn
+from vidgear.gears.asyncio import WebGear
+
+# various webgear performance and Raspberry Pi camera tweaks
+options = {
+ "frame_size_reduction": 40,
+ "jpeg_compression_quality": 80,
+ "jpeg_compression_fastdct": True,
+ "jpeg_compression_fastupsample": False,
+ "hflip": True,
+ "exposure_mode": "auto",
+ "iso": 800,
+ "exposure_compensation": 15,
+ "awb_mode": "horizon",
+ "sensor_mode": 0,
+}
+
+# initialize WebGear app
+web = WebGear(
+ enablePiCamera=True, resolution=(640, 480), framerate=60, logging=True, **options
+)
+
+# run this app on Uvicorn server at address http://localhost:8000/
+uvicorn.run(web(), host="localhost", port=8000)
+
+# close app safely
+web.shutdown()
+```
+
+
+
+## Using WebGear with real-time Video Stabilization enabled
+
+Here's an example of using WebGear API with real-time Video Stabilization enabled:
+
+```python
+# import libs
+import uvicorn
+from vidgear.gears.asyncio import WebGear
+
+# various webgear performance tweaks
+options = {
+ "frame_size_reduction": 40,
+ "jpeg_compression_quality": 80,
+ "jpeg_compression_fastdct": True,
+ "jpeg_compression_fastupsample": False,
+}
+
+# initialize WebGear app with a raw source and enable video stabilization(`stabilize=True`)
+web = WebGear(source="foo.mp4", stabilize=True, logging=True, **options)
+
+# run this app on Uvicorn server at address http://localhost:8000/
+uvicorn.run(web(), host="localhost", port=8000)
+
+# close app safely
+web.shutdown()
+```
+
+
+
+
+## Display Two Sources Simultaneously in WebGear
+
+In this example, we'll be displaying two video feeds side-by-side simultaneously in the browser using WebGear API, by defining two separate frame generators:
+
+!!! new "New in v0.2.2"
+ This example was added in `v0.2.2`.
+
+**Step-1 (Trigger Auto-Generation Process):** Firstly, run this bare-minimum code to trigger the [**Auto-generation**](../../gears/webgear/#auto-generation-process) process. This will create a `.vidgear` directory at the current location _(the directory where you'll run this code)_:
+
+```python
+# import required libraries
+import uvicorn
+from vidgear.gears.asyncio import WebGear
+
+# provide current directory to save data files
+options = {"custom_data_location": "./"}
+
+# initialize WebGear app
+web = WebGear(source=0, logging=True, **options)
+
+# close app safely
+web.shutdown()
+```
+
+**Step-2 (Replace HTML file):** Now, go inside the `.vidgear` :arrow_right: `webgear` :arrow_right: `templates` directory at the current location on your machine, and replace the content of the `index.html` file with the following:
+
+```html
+{% extends "base.html" %}
+{% block content %}
+<h1>WebGear Video Feed</h1>
+<img src="/video" alt="Feed 1" />
+<img src="/video2" alt="Feed 2" />
+{% endblock %}
+```
+
+**Step-3 (Build your own Frame Producers):** Now, create a Python script with your own OpenCV frame producers, as follows:
+
+```python
+# import necessary libs
+import uvicorn, asyncio, cv2
+from vidgear.gears.asyncio import WebGear
+from vidgear.gears.asyncio.helper import reducer
+from starlette.responses import StreamingResponse
+from starlette.routing import Route
+
+# provide current directory to load data files
+options = {"custom_data_location": "./"}
+
+# initialize WebGear app without any source
+web = WebGear(logging=True, **options)
+
+# create your own custom frame producer
+async def my_frame_producer1():
+
+ # !!! define your first video source here !!!
+ # Open any video stream such as "foo1.mp4"
+ stream = cv2.VideoCapture("foo1.mp4")
+ # loop over frames
+ while True:
+ # read frame from provided source
+ (grabbed, frame) = stream.read()
+ # break if NoneType
+ if not grabbed:
+ break
+
+ # do something with your OpenCV frame here
+
+        # reduce frame size if you want more performance, otherwise comment this line
+ frame = await reducer(frame, percentage=30) # reduce frame by 30%
+ # handle JPEG encoding
+ encodedImage = cv2.imencode(".jpg", frame)[1].tobytes()
+ # yield frame in byte format
+        yield (b"--frame\r\nContent-Type:image/jpeg\r\n\r\n" + encodedImage + b"\r\n")
+ await asyncio.sleep(0.00001)
+ # close stream
+ stream.release()
+
+
+# create your own custom frame producer
+async def my_frame_producer2():
+
+ # !!! define your second video source here !!!
+ # Open any video stream such as "foo2.mp4"
+ stream = cv2.VideoCapture("foo2.mp4")
+ # loop over frames
+ while True:
+ # read frame from provided source
+ (grabbed, frame) = stream.read()
+ # break if NoneType
+ if not grabbed:
+ break
+
+ # do something with your OpenCV frame here
+
+        # reduce frame size if you want more performance, otherwise comment this line
+ frame = await reducer(frame, percentage=30) # reduce frame by 30%
+ # handle JPEG encoding
+ encodedImage = cv2.imencode(".jpg", frame)[1].tobytes()
+ # yield frame in byte format
+        yield (b"--frame\r\nContent-Type:image/jpeg\r\n\r\n" + encodedImage + b"\r\n")
+ await asyncio.sleep(0.00001)
+ # close stream
+ stream.release()
+
+
+async def custom_video_response(scope):
+ """
+    Return an async video streaming response for the `my_frame_producer2` generator
+ """
+ assert scope["type"] in ["http", "https"]
+ await asyncio.sleep(0.00001)
+ return StreamingResponse(
+ my_frame_producer2(),
+ media_type="multipart/x-mixed-replace; boundary=frame",
+ )
+
+
+# add your custom frame producer to config
+web.config["generator"] = my_frame_producer1
+
+# append new route i.e. new custom route with custom response
+web.routes.append(Route("/video2", endpoint=custom_video_response))
+
+# run this app on Uvicorn server at address http://localhost:8000/
+uvicorn.run(web(), host="localhost", port=8000)
+
+# close app safely
+web.shutdown()
+```
+
+!!! success "On successfully running this code, the output stream will be displayed at address http://localhost:8000/ in Browser."
+
+
+
\ No newline at end of file
diff --git a/docs/help/webgear_faqs.md b/docs/help/webgear_faqs.md
index 6616fcf61..e39194337 100644
--- a/docs/help/webgear_faqs.md
+++ b/docs/help/webgear_faqs.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -48,7 +48,7 @@ limitations under the License.
## Is it possible to stream on a different device on the network with WebGear?
-!!! note "If you set `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine, then you must still use http://localhost:8000/ to access stream on your host machine browser."
+!!! alert "If you set `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine, then you must still use http://localhost:8000/ to access stream on that same host machine browser."
For accessing WebGear on different Client Devices on the network, use `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine. Then type the IP-address of source machine followed by the defined `port` value in your desired Client Device's browser (for e.g. http://192.27.0.101:8000) to access the stream.
@@ -72,6 +72,12 @@ For accessing WebGear on different Client Devices on the network, use `"0.0.0.0"
+## How to add CORS headers to WebGear?
+
+**Answer:** See [this usage example ➶](../../gears/webgear/advanced/#using-webgear-with-middlewares).
+
+
+
## Can I change the default location?
**Answer:** Yes, you can use WebGear's [`custom_data_location`](../../gears/webgear/params/#webgear-specific-attributes) attribute of `option` parameter in WebGear API, to change [default location](../../gears/webgear/overview/#default-location) to somewhere else.
diff --git a/docs/help/webgear_rtc_ex.md b/docs/help/webgear_rtc_ex.md
new file mode 100644
index 000000000..894599957
--- /dev/null
+++ b/docs/help/webgear_rtc_ex.md
@@ -0,0 +1,213 @@
+
+
+# WebGear_RTC Examples
+
+
+
+## Using WebGear_RTC with RaspberryPi Camera Module
+
+Because of WebGear_RTC API's flexible internal wrapper around VideoGear, it can easily access any parameter of CamGear and PiGear videocapture APIs.
+
+!!! info "The following usage examples are just an idea of what can be done with WebGear_RTC API. You can try various [VideoGear](../../gears/videogear/params/), [CamGear](../../gears/camgear/params/) and [PiGear](../../gears/pigear/params/) parameters directly in WebGear_RTC API in a similar manner."
+
+Here's a bare-minimum example of using WebGear_RTC API with the Raspberry Pi camera module, while tweaking its various properties in just one line:
+
+```python
+# import libs
+import uvicorn
+from vidgear.gears.asyncio import WebGear_RTC
+
+# various webgear_rtc performance and Raspberry Pi camera tweaks
+options = {
+ "frame_size_reduction": 25,
+ "hflip": True,
+ "exposure_mode": "auto",
+ "iso": 800,
+ "exposure_compensation": 15,
+ "awb_mode": "horizon",
+ "sensor_mode": 0,
+}
+
+# initialize WebGear_RTC app
+web = WebGear_RTC(
+ enablePiCamera=True, resolution=(640, 480), framerate=60, logging=True, **options
+)
+
+# run this app on Uvicorn server at address http://localhost:8000/
+uvicorn.run(web(), host="localhost", port=8000)
+
+# close app safely
+web.shutdown()
+```
+
+
+
+## Using WebGear_RTC with real-time Video Stabilization enabled
+
+Here's an example of using WebGear_RTC API with real-time Video Stabilization enabled:
+
+```python
+# import libs
+import uvicorn
+from vidgear.gears.asyncio import WebGear_RTC
+
+# various webgear_rtc performance tweaks
+options = {
+ "frame_size_reduction": 25,
+}
+
+# initialize WebGear_RTC app with a raw source and enable video stabilization(`stabilize=True`)
+web = WebGear_RTC(source="foo.mp4", stabilize=True, logging=True, **options)
+
+# run this app on Uvicorn server at address http://localhost:8000/
+uvicorn.run(web(), host="localhost", port=8000)
+
+# close app safely
+web.shutdown()
+```
+
+
+
+## Display Two Sources Simultaneously in WebGear_RTC
+
+In this example, we'll be displaying two video feeds side-by-side simultaneously in the browser using WebGear_RTC API, by simply concatenating frames in real-time:
+
+!!! new "New in v0.2.2"
+ This example was added in `v0.2.2`.
+
+```python
+# import necessary libs
+import uvicorn, asyncio, cv2
+import numpy as np
+from av import VideoFrame
+from aiortc import VideoStreamTrack
+from vidgear.gears.asyncio import WebGear_RTC
+from vidgear.gears.asyncio.helper import reducer
+
+# initialize WebGear_RTC app without any source
+web = WebGear_RTC(logging=True)
+
+# frame concatenator
+def get_conc_frame(frame1, frame2):
+ h1, w1 = frame1.shape[:2]
+ h2, w2 = frame2.shape[:2]
+
+ # create empty matrix
+ vis = np.zeros((max(h1, h2), w1 + w2, 3), np.uint8)
+
+ # combine 2 frames
+ vis[:h1, :w1, :3] = frame1
+ vis[:h2, w1 : w1 + w2, :3] = frame2
+
+ return vis
+
+
+# create your own Bare-Minimum Custom Media Server
+class Custom_RTCServer(VideoStreamTrack):
+ """
+    Custom Media Server using OpenCV, a subclass
+    of aiortc's VideoStreamTrack.
+ """
+
+ def __init__(self, source1=None, source2=None):
+
+ # don't forget this line!
+ super().__init__()
+
+        # check if both sources are provided
+        if source1 is None or source2 is None:
+            raise ValueError("Provide both sources")
+
+ # initialize global params
+ # define both source here
+ self.stream1 = cv2.VideoCapture(source1)
+ self.stream2 = cv2.VideoCapture(source2)
+
+ async def recv(self):
+ """
+ A coroutine function that yields `av.frame.Frame`.
+ """
+ # don't forget this function!!!
+
+ # get next timestamp
+ pts, time_base = await self.next_timestamp()
+
+ # read video frame
+ (grabbed1, frame1) = self.stream1.read()
+ (grabbed2, frame2) = self.stream2.read()
+
+        # terminate if NoneType
+        if not grabbed1 or not grabbed2:
+            return None
+
+        # concatenate both frames side-by-side
+        frame = get_conc_frame(frame1, frame2)
+
+        # reduce frame size if you want more performance, otherwise comment this line
+ # frame = await reducer(frame, percentage=30) # reduce frame by 30%
+
+        # construct `av.frame.Frame` from `numpy.ndarray`
+ av_frame = VideoFrame.from_ndarray(frame, format="bgr24")
+ av_frame.pts = pts
+ av_frame.time_base = time_base
+
+ # return `av.frame.Frame`
+ return av_frame
+
+ def terminate(self):
+ """
+ Gracefully terminates VideoGear stream
+ """
+ # don't forget this function!!!
+
+ # terminate
+ if not (self.stream1 is None):
+ self.stream1.release()
+ self.stream1 = None
+
+ if not (self.stream2 is None):
+ self.stream2.release()
+ self.stream2 = None
+
+
+# assign your custom media server to config with both adequate sources (for e.g. foo1.mp4 and foo2.mp4)
+web.config["server"] = Custom_RTCServer(
+ source1="dance_videos/foo1.mp4", source2="dance_videos/foo2.mp4"
+)
+
+# run this app on Uvicorn server at address http://localhost:8000/
+uvicorn.run(web(), host="localhost", port=8000)
+
+# close app safely
+web.shutdown()
+```
+
+!!! success "On successfully running this code, the output stream will be displayed at address http://localhost:8000/ in Browser."
+
+
+
\ No newline at end of file
diff --git a/docs/help/webgear_rtc_faqs.md b/docs/help/webgear_rtc_faqs.md
index fd4212254..ac468f546 100644
--- a/docs/help/webgear_rtc_faqs.md
+++ b/docs/help/webgear_rtc_faqs.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -84,6 +84,13 @@ For accessing WebGear_RTC on different Client Devices on the network, use `"0.0.
+## How to add CORS headers to WebGear_RTC?
+
+**Answer:** See [this usage example ➶](../../gears/webgear_rtc/advanced/#using-webgear_rtc-with-middlewares).
+
+
+
+
## Can I change the default location?
**Answer:** Yes, you can use WebGear_RTC's [`custom_data_location`](../../gears/webgear_rtc/params/#webgear_rtc-specific-attributes) attribute of `option` parameter in WebGear_RTC API, to change [default location](../../gears/webgear_rtc/overview/#default-location) to somewhere else.
diff --git a/docs/help/writegear_ex.md b/docs/help/writegear_ex.md
new file mode 100644
index 000000000..c505a55cb
--- /dev/null
+++ b/docs/help/writegear_ex.md
@@ -0,0 +1,306 @@
+
+
+
+# WriteGear Examples
+
+
+
+## Using WriteGear's Compression Mode for YouTube-Live Streaming
+
+!!! new "New in v0.2.1"
+ This example was added in `v0.2.1`.
+
+!!! alert "This example assumes you already have a [**YouTube Account with Live-Streaming enabled**](https://support.google.com/youtube/answer/2474026#enable) for publishing video."
+
+!!! danger "Make sure to replace the [_YouTube-Live Stream Key_](https://support.google.com/youtube/answer/2907883#zippy=%2Cstart-live-streaming-now) with your own in the following code before running!"
+
+```python
+# import required libraries
+from vidgear.gears import CamGear
+from vidgear.gears import WriteGear
+import cv2
+
+# define video source
+VIDEO_SOURCE = "/home/foo/foo.mp4"
+
+# Open stream
+stream = CamGear(source=VIDEO_SOURCE, logging=True).start()
+
+# define required FFmpeg optimizing parameters for your writer
+# [NOTE]: Added VIDEO_SOURCE as audio-source, since YouTube rejects audioless streams!
+output_params = {
+ "-i": VIDEO_SOURCE,
+ "-acodec": "aac",
+ "-ar": 44100,
+ "-b:a": 712000,
+ "-vcodec": "libx264",
+ "-preset": "medium",
+ "-b:v": "4500k",
+ "-bufsize": "512k",
+ "-pix_fmt": "yuv420p",
+ "-f": "flv",
+}
+
+# [WARNING] Change your YouTube-Live Stream Key here:
+YOUTUBE_STREAM_KEY = "xxxx-xxxx-xxxx-xxxx-xxxx"
+
+# Define writer with defined parameters and
+writer = WriteGear(
+ output_filename="rtmp://a.rtmp.youtube.com/live2/{}".format(YOUTUBE_STREAM_KEY),
+ logging=True,
+ **output_params
+)
+
+# loop over
+while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # write frame to writer
+ writer.write(frame)
+
+# safely close video stream
+stream.stop()
+
+# safely close writer
+writer.close()
+```
+
+
+
+
+## Using WriteGear's Compression Mode creating MP4 segments from a video stream
+
+!!! new "New in v0.2.1"
+ This example was added in `v0.2.1`.
+
+```python
+# import required libraries
+from vidgear.gears import VideoGear
+from vidgear.gears import WriteGear
+import cv2
+
+# Open any video source `foo.mp4`
+stream = VideoGear(
+ source="foo.mp4", logging=True
+).start()
+
+# define required FFmpeg optimizing parameters for your writer
+output_params = {
+ "-c:v": "libx264",
+ "-crf": 22,
+ "-map": 0,
+ "-segment_time": 9,
+ "-g": 9,
+ "-sc_threshold": 0,
+ "-force_key_frames": "expr:gte(t,n_forced*9)",
+ "-clones": ["-f", "segment"],
+}
+
+# Define writer with defined parameters
+writer = WriteGear(output_filename="output%03d.mp4", logging=True, **output_params)
+
+# loop over
+while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # write frame to writer
+ writer.write(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+# close output window
+cv2.destroyAllWindows()
+
+# safely close video stream
+stream.stop()
+
+# safely close writer
+writer.close()
+```
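The `output%03d.mp4` filename passed to WriteGear above uses FFmpeg's printf-style segment numbering. As a quick plain-Python illustration of how that pattern expands into the segment filenames FFmpeg produces:

```python
# FFmpeg expands the printf-style pattern with a zero-padded segment index
template = "output%03d.mp4"

# first three segment filenames the pattern yields
names = [template % index for index in range(3)]
print(names)  # → ['output000.mp4', 'output001.mp4', 'output002.mp4']
```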
+
+
+
+
+## Using WriteGear's Compression Mode to add external audio file input to video frames
+
+!!! new "New in v0.2.1"
+ This example was added in `v0.2.1`.
+
+!!! failure "Make sure this `-i` audio-source is compatible with the provided video-source, otherwise you may encounter multiple errors or no output at all."
+
+```python
+# import required libraries
+from vidgear.gears import CamGear
+from vidgear.gears import WriteGear
+import cv2
+
+# open any valid video stream(for e.g `foo_video.mp4` file)
+stream = CamGear(source="foo_video.mp4").start()
+
+# add various parameters, along with custom audio
+stream_params = {
+ "-input_framerate": stream.framerate, # controlled framerate for audio-video sync !!! don't forget this line !!!
+ "-i": "foo_audio.aac", # assigns input audio-source: "foo_audio.aac"
+}
+
+# Define writer with defined parameters
+writer = WriteGear(output_filename="Output.mp4", logging=True, **stream_params)
+
+# loop over
+while True:
+
+ # read frames from stream
+ frame = stream.read()
+
+ # check for frame if Nonetype
+ if frame is None:
+ break
+
+ # {do something with the frame here}
+
+ # write frame to writer
+ writer.write(frame)
+
+ # Show output window
+ cv2.imshow("Output Frame", frame)
+
+ # check for 'q' key if pressed
+ key = cv2.waitKey(1) & 0xFF
+ if key == ord("q"):
+ break
+
+# close output window
+cv2.destroyAllWindows()
+
+# safely close video stream
+stream.stop()
+
+# safely close writer
+writer.close()
+```
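The `-input_framerate` parameter above pins the writer's framerate to the source's, which is what keeps audio and video in sync. A back-of-the-envelope sketch of why a mismatch matters (the framerate values below are assumed, purely illustrative numbers):

```python
# assumed source framerate vs. a mismatched writer framerate (illustrative values)
source_fps, writer_fps = 30.0, 29.0

# one minute of source footage yields this many video frames
frames = 60 * source_fps

# played back at the wrong framerate, video drifts away from the audio track
drift_seconds = frames / writer_fps - 60
print(round(drift_seconds, 2))  # ≈ 2.07 seconds of drift per minute
```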
+
+
+
+
+## Using WriteGear with ROS(Robot Operating System)
+
+We will be using [`cv_bridge`](http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython) to convert OpenCV frames to ROS image messages and vice-versa.
+
+In this example, we'll create a node that listens to a ROS image message topic, converts the received image messages into OpenCV frames, draws a circle on them, and then writes these frames to a losslessly compressed video file in real-time.
+
+!!! new "New in v0.2.2"
+ This example was added in `v0.2.2`.
+
+!!! note "This example is a vidgear implementation of this [wiki example](http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython)."
+
+```python
+# import roslib
+import roslib
+
+roslib.load_manifest("my_package")
+
+# import other required libraries
+import sys
+import rospy
+import cv2
+from std_msgs.msg import String
+from sensor_msgs.msg import Image
+from cv_bridge import CvBridge, CvBridgeError
+from vidgear.gears import WriteGear
+
+# custom subscriber class
+class image_subscriber:
+    def __init__(self, output_filename="Output.mp4"):
+        # create CV bridge
+        self.bridge = CvBridge()
+        # define subscriber topic
+        self.image_sub = rospy.Subscriber("image_topic_sub", Image, self.callback)
+ # Define writer with default parameters
+ self.writer = WriteGear(output_filename=output_filename)
+
+    def callback(self, data):
+        # convert received data to frame
+        try:
+            cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
+        except CvBridgeError as e:
+            print(e)
+            return
+
+        # check if frame is valid
+        if cv_image is not None:
+
+            # {do something with the frame here}
+
+            # add circle
+            (rows, cols, channels) = cv_image.shape
+            if cols > 60 and rows > 60:
+                cv2.circle(cv_image, (50, 50), 10, 255)
+
+            # write frame to writer
+            self.writer.write(cv_image)
+
+ def close(self):
+ # safely close video stream
+ self.writer.close()
+
+
+def main(args):
+    # define subscriber with suitable output filename
+    # such as `Output.mp4` for saving output
+    ic = image_subscriber(output_filename="Output.mp4")
+    # initiate ROS node for subscriber
+    rospy.init_node("image_subscriber", anonymous=True)
+    try:
+        # run node
+        rospy.spin()
+    except KeyboardInterrupt:
+        print("Shutting down")
+    finally:
+        # close subscriber
+        ic.close()
+
+
+if __name__ == "__main__":
+ main(sys.argv)
+```
+
+
\ No newline at end of file
diff --git a/docs/help/writegear_faqs.md b/docs/help/writegear_faqs.md
index 48824a720..bb2764b2c 100644
--- a/docs/help/writegear_faqs.md
+++ b/docs/help/writegear_faqs.md
@@ -3,7 +3,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -39,10 +39,8 @@ limitations under the License.
**Answer:** WriteGear will exit with `ValueError` if you feed frames of different dimensions or channels.
-
-
## How to install and configure FFmpeg correctly for WriteGear on my machine?
**Answer:** Follow these [Installation Instructions ➶](../../gears/writegear/compression/advanced/ffmpeg_install/) for its installation.
@@ -109,205 +107,21 @@ limitations under the License.
## Is YouTube-Live Streaming possibe with WriteGear?
-**Answer:** Yes, See example below:
-
-!!! new "New in v0.2.1"
- This example was added in `v0.2.1`.
-
-!!! warning "This example assume you already have a [**YouTube Account with Live-Streaming enabled**](https://support.google.com/youtube/answer/2474026#enable) for publishing video."
-
-!!! danger "Make sure to change [_YouTube-Live Stream Key_](https://support.google.com/youtube/answer/2907883#zippy=%2Cstart-live-streaming-now) with yours in following code before running!"
-
-```python
-# import required libraries
-from vidgear.gears import CamGear
-from vidgear.gears import WriteGear
-import cv2
-
-# define video source
-VIDEO_SOURCE = "/home/foo/foo.mp4"
-
-# Open stream
-stream = CamGear(source=VIDEO_SOURCE, logging=True).start()
-
-# define required FFmpeg optimizing parameters for your writer
-# [NOTE]: Added VIDEO_SOURCE as audio-source, since YouTube rejects audioless streams!
-output_params = {
- "-i": VIDEO_SOURCE,
- "-acodec": "aac",
- "-ar": 44100,
- "-b:a": 712000,
- "-vcodec": "libx264",
- "-preset": "medium",
- "-b:v": "4500k",
- "-bufsize": "512k",
- "-pix_fmt": "yuv420p",
- "-f": "flv",
-}
-
-# [WARNING] Change your YouTube-Live Stream Key here:
-YOUTUBE_STREAM_KEY = "xxxx-xxxx-xxxx-xxxx-xxxx"
-
-# Define writer with defined parameters and
-writer = WriteGear(
- output_filename="rtmp://a.rtmp.youtube.com/live2/{}".format(YOUTUBE_STREAM_KEY),
- logging=True,
- **output_params
-)
-
-# loop over
-while True:
-
- # read frames from stream
- frame = stream.read()
-
- # check for frame if Nonetype
- if frame is None:
- break
-
- # {do something with the frame here}
-
- # write frame to writer
- writer.write(frame)
-
-# safely close video stream
-stream.stop()
-
-# safely close writer
-writer.close()
-```
+**Answer:** Yes, See [this bonus example ➶](../writegear_ex/#using-writegears-compression-mode-for-youtube-live-streaming).
## How to create MP4 segments from a video stream with WriteGear?
-**Answer:** See example below:
-
-!!! new "New in v0.2.1"
- This example was added in `v0.2.1`.
-
-```python
-# import required libraries
-from vidgear.gears import VideoGear
-from vidgear.gears import WriteGear
-import cv2
-
-# Open any video source `foo.mp4`
-stream = VideoGear(
- source="foo.mp4", logging=True
-).start()
-
-# define required FFmpeg optimizing parameters for your writer
-output_params = {
- "-c:v": "libx264",
- "-crf": 22,
- "-map": 0,
- "-segment_time": 9,
- "-g": 9,
- "-sc_threshold": 0,
- "-force_key_frames": "expr:gte(t,n_forced*9)",
- "-clones": ["-f", "segment"],
-}
-
-# Define writer with defined parameters
-writer = WriteGear(output_filename="output%03d.mp4", logging=True, **output_params)
-
-# loop over
-while True:
-
- # read frames from stream
- frame = stream.read()
-
- # check for frame if Nonetype
- if frame is None:
- break
-
- # {do something with the frame here}
-
- # write frame to writer
- writer.write(frame)
-
- # Show output window
- cv2.imshow("Output Frame", frame)
-
- # check for 'q' key if pressed
- key = cv2.waitKey(1) & 0xFF
- if key == ord("q"):
- break
-
-# close output window
-cv2.destroyAllWindows()
-
-# safely close video stream
-stream.stop()
-
-# safely close writer
-writer.close()
-```
+**Answer:** See [this bonus example ➶](../writegear_ex/#using-writegears-compression-mode-creating-mp4-segments-from-a-video-stream).
## How add external audio file input to video frames?
-**Answer:** See example below:
-
-!!! new "New in v0.2.1"
- This example was added in `v0.2.1`.
-
-!!! failure "Make sure this `-i` audio-source it compatible with provided video-source, otherwise you encounter multiple errors or no output at all."
-
-```python
-# import required libraries
-from vidgear.gears import CamGear
-from vidgear.gears import WriteGear
-import cv2
-
-# open any valid video stream(for e.g `foo_video.mp4` file)
-stream = CamGear(source="foo_video.mp4").start()
-
-# add various parameters, along with custom audio
-stream_params = {
- "-input_framerate": stream.framerate, # controlled framerate for audio-video sync !!! don't forget this line !!!
- "-i": "foo_audio.aac", # assigns input audio-source: "foo_audio.aac"
-}
-
-# Define writer with defined parameters
-writer = WriteGear(output_filename="Output.mp4", logging=True, **stream_params)
-
-# loop over
-while True:
-
- # read frames from stream
- frame = stream.read()
-
- # check for frame if Nonetype
- if frame is None:
- break
-
- # {do something with the frame here}
-
- # write frame to writer
- writer.write(frame)
-
- # Show output window
- cv2.imshow("Output Frame", frame)
-
- # check for 'q' key if pressed
- key = cv2.waitKey(1) & 0xFF
- if key == ord("q"):
- break
-
-# close output window
-cv2.destroyAllWindows()
-
-# safely close video stream
-stream.stop()
-
-# safely close writer
-writer.close()
-```
+**Answer:** See [this bonus example ➶](../writegear_ex/#using-writegears-compression-mode-to-add-external-audio-file-input-to-video-frames).
diff --git a/docs/index.md b/docs/index.md
index e1a24f30f..d26ec494f 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -28,9 +28,9 @@ limitations under the License.
-> VidGear is a High-Performance **Video-Processing** Framework for building complex real-time media applications in python :fire:
+> VidGear is a cross-platform High-Performance **Video-Processing** Framework for building complex real-time media applications in python :fire:
-VidGear provides an easy-to-use, highly extensible, **Multi-Threaded + Asyncio Framework** on top of many state-of-the-art specialized libraries like *[OpenCV][opencv], [FFmpeg][ffmpeg], [ZeroMQ][zmq], [picamera][picamera], [starlette][starlette], [streamlink][streamlink], [pafy][pafy], [pyscreenshot][pyscreenshot], [aiortc][aiortc] and [python-mss][mss]* at its backend, and enable us to flexibly exploit their internal parameters and methods, while silently delivering robust error-handling and real-time performance ⚡️.
+VidGear provides an easy-to-use, highly extensible, **[Multi-Threaded](bonus/TQM/#threaded-queue-mode) + [Asyncio](https://docs.python.org/3/library/asyncio.html) API Framework** on top of many state-of-the-art specialized libraries like *[OpenCV][opencv], [FFmpeg][ffmpeg], [ZeroMQ][zmq], [picamera][picamera], [starlette][starlette], [streamlink][streamlink], [pafy][pafy], [pyscreenshot][pyscreenshot], [aiortc][aiortc] and [python-mss][mss]* at its backend, and enable us to flexibly exploit their internal parameters and methods, while silently delivering robust error-handling and real-time performance ⚡️.
> _"Write Less and Accomplish More"_ — VidGear's Motto
@@ -40,13 +40,17 @@ VidGear focuses on simplicity, and thereby lets programmers and software develop
## Getting Started
-- [x] If this is your first time using VidGear, head straight to the [Installation ➶](installation.md) to install VidGear.
+!!! tip "In case you run into any problems, consult the [Help](help/get_help) section."
-- [x] Once you have VidGear installed, **Checkout its Function-Specific [Gears ➶](gears.md)**
+- [x] If this is your first time using VidGear, head straight to the [**Installation**](installation.md) to install VidGear.
+
+- [x] Once you have VidGear installed, Checkout its **[Function-Specific Gears](gears.md)**.
+
+- [x] Also, if you're already familiar with the [**OpenCV**][opencv] library, then see **[Switching from OpenCV Library](switch_from_cv.md)**.
+
+!!! alert "If you're just getting started with OpenCV-Python programming, then refer this [FAQ ➶](help/general_faqs/#im-new-to-python-programming-or-its-usage-in-opencv-library-how-to-use-vidgear-in-my-projects)"
-- [x] Also, if you're already familar with [OpenCV][opencv] library, then see [Switching from OpenCV Library ➶](switch_from_cv.md)
-- [x] Or, if you're just getting started with OpenCV with Python, then see [here ➶](../help/general_faqs/#im-new-to-python-programming-or-its-usage-in-computer-vision-how-to-use-vidgear-in-my-projects)
@@ -63,7 +67,7 @@ These Gears can be classified as follows:
* [CamGear](gears/camgear/overview/): Multi-Threaded API targeting various IP-USB-Cameras/Network-Streams/Streaming-Sites-URLs.
* [PiGear](gears/pigear/overview/): Multi-Threaded API targeting various Raspberry-Pi Camera Modules.
* [ScreenGear](gears/screengear/overview/): Multi-Threaded API targeting ultra-fast Screencasting.
-* [VideoGear](gears/videogear/overview/): Common Video-Capture API with internal [Video Stabilizer](gears/stabilizer/overview/) wrapper.
+* [VideoGear](gears/videogear/overview/): Common Video-Capture API with internal [_Video Stabilizer_](gears/stabilizer/overview/) wrapper.
#### VideoWriter Gears
@@ -71,7 +75,7 @@ These Gears can be classified as follows:
#### Streaming Gears
-* [StreamGear](gears/streamgear/overview/): Handles Transcoding of High-Quality, Dynamic & Adaptive Streaming Formats.
+* [StreamGear](gears/streamgear/introduction/): Handles Transcoding of High-Quality, Dynamic & Adaptive Streaming Formats.
* **Asynchronous I/O Streaming Gear:**
@@ -92,29 +96,29 @@ These Gears can be classified as follows:
> Contributions are welcome, and greatly appreciated!
-Please see our [Contribution Guidelines ➶](contribution.md) for more details.
+Please see our [**Contribution Guidelines**](contribution.md) for more details.
## Community Channel
-If you've come up with some new idea, or looking for the fastest way troubleshoot your problems. Please checkout our [Gitter community channel ➶][gitter]
+If you've come up with a new idea, or are looking for the fastest way to troubleshoot your problems, please check out our [**Gitter community channel ➶**][gitter]
## Become a Stargazer
-You can be a [Stargazer :star2:][stargazer] by starring us on Github, it helps us a lot and you're making it easier for others to find & trust this library. Thanks!
+You can be a [**Stargazer :star2:**][stargazer] by starring us on Github, it helps us a lot and you're making it easier for others to find & trust this library. Thanks!
-## Support Us
+## Donations
-> VidGear relies on your support :heart:
+> VidGear is free and open source and will always remain so. :heart:
-Donations help keep VidGear's Open Source Development alive. No amount is too little, even the smallest contributions can make a huge difference.
+It is (like all open source software) a labour of love and something I am doing with my own free time. If you would like to say thanks, please feel free to make a donation:
-
+
@@ -122,13 +126,22 @@ Donations help keep VidGear's Open Source Development alive. No amount is too li
Here is a Bibtex entry you can use to cite this project in a publication:
+[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4718616.svg)](https://doi.org/10.5281/zenodo.4718616)
```BibTeX
-@misc{vidgear,
- author = {Abhishek Thakur},
- title = {vidgear},
- howpublished = {\url{https://github.com/abhiTronix/vidgear}},
- year = {2019-2021}
+@software{vidgear,
+ author = {Abhishek Thakur and
+ Christian Clauss and
+ Christian Hollinger and
+ Benjamin Lowe and
+ Mickaël Schoentgen and
+ Renaud Bouckenooghe},
+ title = {abhiTronix/vidgear: VidGear v0.2.2},
+ year = 2021
+ publisher = {Zenodo},
+ version = {vidgear-0.2.2},
+ doi = {10.5281/zenodo.4718616},
+ url = {https://doi.org/10.5281/zenodo.4718616}
}
```
diff --git a/docs/installation.md b/docs/installation.md
index 256ea7c3c..b77f53e44 100644
--- a/docs/installation.md
+++ b/docs/installation.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/docs/installation/pip_install.md b/docs/installation/pip_install.md
index 2155bfcd5..dfb8c2a3e 100644
--- a/docs/installation/pip_install.md
+++ b/docs/installation/pip_install.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -21,85 +21,154 @@ limitations under the License.
# Install using pip
-> _Best option for quickly getting stable VidGear installed._
+> _Best option for easily getting stable VidGear installed._
## Prerequisites
-When installing VidGear with pip, you need to check manually if following dependencies are installed:
+When installing VidGear with [pip](https://pip.pypa.io/en/stable/installing/), you need to check manually if the following dependencies are installed:
-### OpenCV
-Must require OpenCV(3.0+) python binaries installed for all core functions. You easily install it directly via [pip](https://pip.pypa.io/en/stable/installing/):
+???+ alert "Upgrade your `pip`"
-??? tip "OpenCV installation from source"
+    It is strongly advised to upgrade to the latest `pip` before installing vidgear to avoid any undesired installation error(s). There are two mechanisms to upgrade `pip`:
- You can also follow online tutorials for building & installing OpenCV on [Windows](https://www.learnopencv.com/install-opencv3-on-windows/), [Linux](https://www.pyimagesearch.com/2018/05/28/ubuntu-18-04-how-to-install-opencv/) and [Raspberry Pi](https://www.pyimagesearch.com/2018/09/26/install-opencv-4-on-your-raspberry-pi/) machines manually from its source.
+ 1. **`ensurepip`:** Python comes with an [`ensurepip`](https://docs.python.org/3/library/ensurepip.html#module-ensurepip) module[^1], which can easily upgrade/install `pip` in any Python environment.
-```sh
- pip install -U opencv-python
-```
+ === "Linux/MacOS"
-### FFmpeg
+ ```sh
+ python -m ensurepip --upgrade
+
+ ```
-Must require for the video compression and encoding compatibilities within [StreamGear](#streamgear) and [**Compression Mode**](../../gears/writegear/compression/overview/) in [WriteGear](#writegear) API.
+ === "Windows"
-!!! tip "FFmpeg Installation"
+ ```sh
+ py -m ensurepip --upgrade
+
+ ```
+    2. **`pip`:** You can also use the existing `pip` to upgrade itself:
- Follow this dedicated [**FFmpeg Installation doc**](../../gears/writegear/compression/advanced/ffmpeg_install/) for its installation.
+ ??? info "Install `pip` if not present"
-### Picamera
+ * Download the script, from https://bootstrap.pypa.io/get-pip.py.
+ * Open a terminal/command prompt, `cd` to the folder containing the `get-pip.py` file and run:
-Must Required if you're using Raspberry Pi Camera Modules with its [PiGear](../../gears/pigear/overview/) API. You can easily install it via pip:
+ === "Linux/MacOS"
+ ```sh
+ python get-pip.py
+
+ ```
-!!! warning "Make sure to [**enable Raspberry Pi hardware-specific settings**](https://picamera.readthedocs.io/en/release-1.13/quickstart.html) prior to using this library, otherwise it won't work."
+ === "Windows"
-```sh
- pip install picamera
-```
+ ```sh
+ py get-pip.py
+
+ ```
+ More details about this script can be found in [pypa/get-pip’s README](https://github.com/pypa/get-pip).
-### Aiortc
-Must Required only if you're using the [WebGear_RTC API](../../gears/webgear_rtc/overview/). You can easily install it via pip:
+ === "Linux/MacOS"
-??? error "Microsoft Visual C++ 14.0 is required."
-
- Installing `aiortc` on windows requires Microsoft Build Tools for Visual C++ libraries installed. You can easily fix this error by installing any **ONE** of these choices:
+ ```sh
+ python -m pip install pip --upgrade
+
+ ```
- !!! info "While the error is calling for VC++ 14.0 - but newer versions of Visual C++ libraries works as well."
+ === "Windows"
- - Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=16).
- - Alternative link to Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019).
- - Offline installer: [vs_buildtools.exe](https://aka.ms/vs/16/release/vs_buildtools.exe)
+ ```sh
+ py -m pip install pip --upgrade
+
+ ```
- Afterwards, Select: Workloads → Desktop development with C++, then for Individual Components, select only:
+### Core Prerequisites
- - [x] Windows 10 SDK
- - [x] C++ x64/x86 build tools
+* #### OpenCV
- Finally, proceed installing `aiortc` via pip.
+    OpenCV (3.0+) Python binaries are required for all core functions. You can easily install them directly via [pip](https://pypi.org/project/opencv-python/):
-```sh
- pip install aiortc
-```
+ ??? tip "OpenCV installation from source"
-### Uvloop
+ You can also follow online tutorials for building & installing OpenCV on [Windows](https://www.learnopencv.com/install-opencv3-on-windows/), [Linux](https://www.pyimagesearch.com/2018/05/28/ubuntu-18-04-how-to-install-opencv/), [MacOS](https://www.pyimagesearch.com/2018/08/17/install-opencv-4-on-macos/) and [Raspberry Pi](https://www.pyimagesearch.com/2018/09/26/install-opencv-4-on-your-raspberry-pi/) machines manually from its source.
-Must required only if you're using the [NetGear_Async](../../gears/netgear_async/overview/) API on UNIX machines for maximum performance. You can easily install it via pip:
+        :warning: Make sure not to install both the *pip* and *source* versions together. Otherwise, the installation will fail!
-!!! error "uvloop is **[NOT yet supported on Windows Machines](https://github.com/MagicStack/uvloop/issues/14).**"
-!!! warning "Python-3.6 legacies support [**dropped in version `>=1.15.0`**](https://github.com/MagicStack/uvloop/releases/tag/v0.15.0). Kindly install previous `0.14.0` version instead."
+ ??? info "Other OpenCV binaries"
-```sh
- pip install uvloop
-```
+        OpenCV maintainers also provide additional binaries via pip that contain both main and contrib/extra modules, namely [`opencv-contrib-python`](https://pypi.org/project/opencv-contrib-python/), as well as binaries for server (headless) environments like [`opencv-python-headless`](https://pypi.org/project/opencv-python-headless/) and [`opencv-contrib-python-headless`](https://pypi.org/project/opencv-contrib-python-headless/). You can also install ==any one of them== in a similar manner. More information can be found [here](https://github.com/opencv/opencv-python#installation-and-usage).
+
+
+ ```sh
+ pip install opencv-python
+ ```
+
+
+### API Specific Prerequisites
+
+* #### FFmpeg
+
+    Required only for video compression and encoding compatibility within the [**StreamGear**](../../gears/streamgear/overview/) API and [**WriteGear API's Compression Mode**](../../gears/writegear/compression/overview/).
+
+ !!! tip "FFmpeg Installation"
+
+ Follow this dedicated [**FFmpeg Installation doc**](../../gears/writegear/compression/advanced/ffmpeg_install/) for its installation.
+
+* #### Picamera
+
+ Required only if you're using Raspberry Pi Camera Modules with its [**PiGear**](../../gears/pigear/overview/) API. You can easily install it via pip:
+
+
+ !!! warning "Make sure to [**enable Raspberry Pi hardware-specific settings**](https://picamera.readthedocs.io/en/release-1.13/quickstart.html) prior to using this library, otherwise it won't work."
+
+ ```sh
+ pip install picamera
+ ```
+
+* #### Aiortc
+
+ Required only if you're using the [**WebGear_RTC**](../../gears/webgear_rtc/overview/) API. You can easily install it via pip:
+
+ ??? error "Microsoft Visual C++ 14.0 is required."
+
+        Installing `aiortc` on Windows may sometimes require Microsoft Build Tools for Visual C++ libraries to be installed. You can easily fix this error by installing any **ONE** of these choices:
+
+        !!! info "While the error calls for VC++ 14.0, newer versions of Visual C++ libraries work as well."
+
+ - Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=16).
+ - Alternative link to Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019).
+ - Offline installer: [vs_buildtools.exe](https://aka.ms/vs/16/release/vs_buildtools.exe)
+
+        Afterwards, select: Workloads → Desktop development with C++, then under Individual Components, select only:
+
+ - [x] Windows 10 SDK
+ - [x] C++ x64/x86 build tools
+
+ Finally, proceed installing `aiortc` via pip.
+
+ ```sh
+ pip install aiortc
+ ```
+
+* #### Uvloop
+
+ Required only if you're using the [**NetGear_Async**](../../gears/netgear_async/overview/) API on UNIX machines for maximum performance. You can easily install it via pip:
+
+ !!! error "uvloop is **[NOT yet supported on Windows Machines](https://github.com/MagicStack/uvloop/issues/14).**"
+    !!! warning "Python 3.6 legacy support was [**dropped in version `>=0.15.0`**](https://github.com/MagicStack/uvloop/releases/tag/v0.15.0). Kindly install the previous `0.14.0` version instead."
+
+ ```sh
+ pip install uvloop
+ ```
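Since uvloop is optional and unavailable on Windows, application code typically enables it only where importable and falls back to standard asyncio elsewhere — a hedged sketch of that pattern:

```python
import asyncio

# enable uvloop's faster event loop policy only where it is available
try:
    import uvloop  # not supported on Windows machines

    asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
except ImportError:
    pass  # fall back to the standard asyncio event loop

async def main():
    # {your async logic, e.g. a NetGear_Async pipeline, would run here}
    return "ok"

result = asyncio.run(main())
print(result)  # → ok
```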
## Installation
-Installation is as simple as:
+**Installation is as simple as:**
??? warning "Windows Installation"
@@ -108,45 +177,101 @@ Installation is as simple as:
A quick solution may be to preface every Python command with `python -m` like this:
```sh
- python -m pip install vidgear
+ python -m pip install vidgear
+
+ # or with asyncio support
+ python -m pip install vidgear[asyncio]
+ ```
+
+    And, if you don't have privileges to the directory where you're installing the package, then use the `--user` flag, which makes pip install packages in your home directory instead:
+
+ ``` sh
+ python -m pip install --user vidgear
- # or with asyncio support
- python -m pip install vidgear[asyncio]
+ # or with asyncio support
+ python -m pip install --user vidgear[asyncio]
```
- If you don't have the privileges to the directory you're installing package. Then use `--user` flag, that makes pip install packages in your home directory instead:
+    Or, if you're using `py` as an alias for the installed python, then:
``` sh
- python -m pip install --user vidgear
+ py -m pip install --user vidgear
- # or with asyncio support
- python -m pip install --user vidgear[asyncio]
+ # or with asyncio support
+ py -m pip install --user vidgear[asyncio]
```
+??? experiment "Installing vidgear with only selective dependencies"
+
+    Starting with version `v0.2.2`, you can now run any VidGear API by installing only the specific dependencies required by the API in use (except for some Core dependencies).
+
+ This is useful when you want to manually review, select and install minimal API-specific dependencies on bare-minimum vidgear from scratch on your system:
+
+ - To install bare-minimum vidgear without any dependencies, use [`--no-deps`](https://pip.pypa.io/en/stable/cli/pip_install/#cmdoption-no-deps) pip flag as follows:
+
+ ```sh
+ # Install stable release without any dependencies
+ pip install --no-deps --upgrade vidgear
+ ```
+
+ - Then, you must install all **Core dependencies**:
+
+        ```sh
+        # Install core dependencies
+        pip install cython numpy requests tqdm colorlog
+
+        # Install opencv(only if not installed previously)
+        pip install opencv-python
+        ```
+
+ - Finally, manually install your **API-specific dependencies** as required by your API(in use):
+
+
+ | APIs | Dependencies |
+ |:---:|:---|
+ | CamGear | `pafy`, `youtube-dl`, `streamlink` |
+ | PiGear | `picamera` |
+ | VideoGear | - |
+ | ScreenGear | `mss`, `pyscreenshot`, `Pillow` |
+ | WriteGear | **FFmpeg:** See [this doc ➶](https://abhitronix.github.io/vidgear/v0.2.2-dev/gears/writegear/compression/advanced/ffmpeg_install/#ffmpeg-installation-instructions) |
+ | StreamGear | **FFmpeg:** See [this doc ➶](https://abhitronix.github.io/vidgear/v0.2.2-dev/gears/streamgear/ffmpeg_install/#ffmpeg-installation-instructions) |
+ | NetGear | `pyzmq`, `simplejpeg` |
+ | WebGear | `starlette`, `jinja2`, `uvicorn`, `simplejpeg` |
+ | WebGear_RTC | `aiortc`, `starlette`, `jinja2`, `uvicorn` |
+ | NetGear_Async | `pyzmq`, `msgpack`, `msgpack_numpy`, `uvloop` |
+
+        ```sh
+        # Just copy-&-paste the dependencies for your API from the above table
+        pip install <API-specific dependencies>
+        ```
+
+
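To check which of the table's API-specific dependencies are already importable before installing, a small stdlib-only sketch can help. Note that the module-to-API mapping below is an assumption for illustration — pip package names and import names differ (e.g. `pyzmq` imports as `zmq`, `Pillow` as `PIL`), so verify against the table:

```python
import importlib.util

# assumed mapping from a few APIs to their *import* names (verify against the table;
# pip names differ from import names, e.g. `pyzmq` -> `zmq`, `Pillow` -> `PIL`)
API_IMPORTS = {
    "NetGear": ["zmq", "simplejpeg"],
    "ScreenGear": ["mss", "pyscreenshot", "PIL"],
    "PiGear": ["picamera"],
}

def missing_deps(api):
    """Return the listed dependencies of `api` that are not importable yet."""
    return [mod for mod in API_IMPORTS.get(api, []) if importlib.util.find_spec(mod) is None]

# e.g. print(missing_deps("NetGear")) to see what still needs `pip install`
```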
```sh
- # Install stable release
- pip install vidgear
+# Install latest stable release
+pip install -U vidgear
- # Or Install stable release with Asyncio support
- pip install vidgear[asyncio]
+# Or Install latest stable release with Asyncio support
+pip install -U vidgear[asyncio]
```
**And if you prefer to install VidGear directly from the repository:**
```sh
- pip install git+git://github.com/abhiTronix/vidgear@master#egg=vidgear
+pip install git+git://github.com/abhiTronix/vidgear@master#egg=vidgear
- # or with asyncio support
- pip install git+git://github.com/abhiTronix/vidgear@master#egg=vidgear[asyncio]
+# or with asyncio support
+pip install git+git://github.com/abhiTronix/vidgear@master#egg=vidgear[asyncio]
```
**Or you can also download its wheel (`.whl`) package from our repository's [releases](https://github.com/abhiTronix/vidgear/releases) section, which can then be installed as follows:**
```sh
- pip install vidgear-0.2.1-py3-none-any.whl
+pip install vidgear-0.2.2-py3-none-any.whl
- # or with asyncio support
- pip install vidgear-0.2.1-py3-none-any.whl[asyncio]
+# or with asyncio support
+pip install vidgear-0.2.2-py3-none-any.whl[asyncio]
```
+
+[^1]: :warning: The `ensurepip` module is missing/disabled on Ubuntu. Use the second method instead.
\ No newline at end of file
diff --git a/docs/installation/source_install.md b/docs/installation/source_install.md
index 0ad39d706..9f1e2cee3 100644
--- a/docs/installation/source_install.md
+++ b/docs/installation/source_install.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -26,54 +26,114 @@ limitations under the License.
## Prerequisites
-When installing VidGear from source, FFmpeg and Aiortc is the only dependency you need to install manually:
+When installing VidGear from source, FFmpeg and Aiortc are the only two API-specific dependencies you need to install manually:
!!! question "What about rest of the dependencies?"
- Any other python dependencies will be automatically installed based on your OS specifications.
+ Any other Python dependencies _(Core/API-specific)_ will be automatically installed based on your OS specifications.
+
-### FFmpeg
+???+ alert "Upgrade your `pip`"
-Must require for the video compression and encoding compatibilities within [StreamGear](#streamgear) and [**Compression Mode**](../../gears/writegear/compression/overview/) in [WriteGear](#writegear) API.
+ It is strongly advised to upgrade to the latest `pip` before installing vidgear to avoid any undesired installation error(s). There are two ways to upgrade `pip`:
-!!! tip "FFmpeg Installation"
+ 1. **`ensurepip`:** Python comes with an [`ensurepip`](https://docs.python.org/3/library/ensurepip.html#module-ensurepip) module[^1], which can easily upgrade/install `pip` in any Python environment.
- Follow this dedicated [**FFmpeg Installation doc**](../../gears/writegear/compression/advanced/ffmpeg_install/) for its installation.
+ === "Linux/MacOS"
+ ```sh
+ python -m ensurepip --upgrade
+
+ ```
-### Aiortc
+ === "Windows"
-Must Required only if you're using the [WebGear_RTC API](../../gears/webgear_rtc/overview/). You can easily install it via pip:
+ ```sh
+ py -m ensurepip --upgrade
+
+ ```
+ 2. **`pip`:** You can also use the existing `pip` to upgrade itself:
-??? error "Microsoft Visual C++ 14.0 is required."
-
- Installing `aiortc` on windows requires Microsoft Build Tools for Visual C++ libraries installed. You can easily fix this error by installing any **ONE** of these choices:
+ ??? info "Install `pip` if not present"
- !!! info "While the error is calling for VC++ 14.0 - but newer versions of Visual C++ libraries works as well."
+ * Download the script from https://bootstrap.pypa.io/get-pip.py.
+ * Open a terminal/command prompt, `cd` to the folder containing the `get-pip.py` file and run:
- - Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=16).
- - Alternative link to Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019).
- - Offline installer: [vs_buildtools.exe](https://aka.ms/vs/16/release/vs_buildtools.exe)
+ === "Linux/MacOS"
- Afterwards, Select: Workloads → Desktop development with C++, then for Individual Components, select only:
+ ```sh
+ python get-pip.py
+
+ ```
- - [x] Windows 10 SDK
- - [x] C++ x64/x86 build tools
+ === "Windows"
- Finally, proceed installing `aiortc` via pip.
+ ```sh
+ py get-pip.py
+
+ ```
+ More details about this script can be found in [pypa/get-pip’s README](https://github.com/pypa/get-pip).
-```sh
- pip install aiortc
-```
+
+ === "Linux/MacOS"
+
+ ```sh
+ python -m pip install pip --upgrade
+
+ ```
+
+ === "Windows"
+
+ ```sh
+ py -m pip install pip --upgrade
+
+ ```
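
After upgrading, it's worth confirming which interpreter and `pip` version are actually active, since multiple Python installs are common (especially on Windows). A minimal check from Python itself, assuming `pip` is installed in the environment:

```python
import sys
import importlib.metadata as md

# Show the interpreter path and the pip version it sees, so you
# know which environment the upgrade actually applied to.
print("interpreter:", sys.executable)
print("pip version:", md.version("pip"))
```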
+
+### API Specific Prerequisites
+
+* #### FFmpeg
+
+ Required only for the video compression and encoding compatibility within the [**StreamGear API**](../../gears/streamgear/overview/) and [**WriteGear API's Compression Mode**](../../gears/writegear/compression/overview/).
+
+ !!! tip "FFmpeg Installation"
+
+ Follow this dedicated [**FFmpeg Installation doc**](../../gears/writegear/compression/advanced/ffmpeg_install/) for its installation.
+* #### Aiortc
+
+ Required only if you're using the [**WebGear_RTC**](../../gears/webgear_rtc/overview/) API. You can easily install it via pip:
+
+ ??? error "Microsoft Visual C++ 14.0 is required."
+
+ Installing `aiortc` on Windows may sometimes require Microsoft Build Tools for Visual C++ libraries to be installed. You can easily fix this error by installing any **ONE** of these choices:
+
+ !!! info "While the error calls for VC++ 14.0, newer versions of the Visual C++ libraries work as well."
+
+ - Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=16).
+ - Alternative link to Microsoft [Build Tools for Visual Studio](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019).
+ - Offline installer: [vs_buildtools.exe](https://aka.ms/vs/16/release/vs_buildtools.exe)
+
+ Afterwards, select: Workloads → Desktop development with C++, then under Individual Components, select only:
+
+ - [x] Windows 10 SDK
+ - [x] C++ x64/x86 build tools
+
+ Finally, proceed with installing `aiortc` via pip.
+
+ ```sh
+ pip install aiortc
+ ```
+
## Installation
-If you want to just install and try out the checkout the latest beta [`testing`](https://github.com/abhiTronix/vidgear/tree/testing) branch , you can do so with the following command. This can be useful if you want to provide feedback for a new feature or want to confirm if a bug you have encountered is fixed in the `testing` branch.
+**If you just want to install and try out the latest beta [`testing`](https://github.com/abhiTronix/vidgear/tree/testing) branch, you can do so with the following command:**
-!!! warning "DO NOT clone or install `development` branch, as it is not tested with CI environments and is possibly very unstable or unusable."
+!!! info "This can be useful if you want to provide feedback for a new feature or want to confirm if a bug you have encountered is fixed in the `testing` branch."
+
+!!! warning "DO NOT clone or install the `development` branch unless advised, as it is not tested in CI environments and is possibly very unstable or unusable."
??? tip "Windows Installation"
@@ -81,7 +141,7 @@ If you want to just install and try out the checkout the latest beta [`testing`]
* Use following commands to clone and install VidGear:
- ```sh
+ ```sh
# clone the repository and get inside
git clone https://github.com/abhiTronix/vidgear.git && cd vidgear
@@ -93,7 +153,73 @@ If you want to just install and try out the checkout the latest beta [`testing`]
# OR install with asyncio support
python -m pip install .[asyncio]
- ```
+ ```
+
+ * If you're using `py` as an alias for the installed Python, then:
+
+ ``` sh
+ # clone the repository and get inside
+ git clone https://github.com/abhiTronix/vidgear.git && cd vidgear
+
+ # checkout the latest testing branch
+ git checkout testing
+
+ # install normally
+ py -m pip install .
+
+ # OR install with asyncio support
+ py -m pip install .[asyncio]
+ ```
+
+??? experiment "Installing vidgear with only selective dependencies"
+
+ Starting with version `v0.2.2`, you can now run any VidGear API by installing only the specific dependencies required by the API in use (except for some Core dependencies).
+
+ This is useful when you want to manually review, select, and install minimal API-specific dependencies on a bare-minimum vidgear install from scratch:
+
+ - To install bare-minimum vidgear without any dependencies, use [`--no-deps`](https://pip.pypa.io/en/stable/cli/pip_install/#cmdoption-no-deps) pip flag as follows:
+
+ ```sh
+ # clone the repository and get inside
+ git clone https://github.com/abhiTronix/vidgear.git && cd vidgear
+
+ # checkout the latest testing branch
+ git checkout testing
+
+ # Install without any dependencies
+ pip install --no-deps .
+ ```
+
+ - Then, you must install all **Core dependencies**:
+
+ ```sh
+ # Install core dependencies
+ pip install cython numpy requests tqdm colorlog
+
+ # Install OpenCV (only if not installed previously)
+ pip install opencv-python
+ ```
+
+ - Finally, manually install the **API-specific dependencies** required by the API(s) you use:
+
+
+ | APIs | Dependencies |
+ |:---:|:---|
+ | CamGear | `pafy`, `youtube-dl`, `streamlink` |
+ | PiGear | `picamera` |
+ | VideoGear | - |
+ | ScreenGear | `mss`, `pyscreenshot`, `Pillow` |
+ | WriteGear | **FFmpeg:** See [this doc ➶](https://abhitronix.github.io/vidgear/v0.2.2-dev/gears/writegear/compression/advanced/ffmpeg_install/#ffmpeg-installation-instructions) |
+ | StreamGear | **FFmpeg:** See [this doc ➶](https://abhitronix.github.io/vidgear/v0.2.2-dev/gears/streamgear/ffmpeg_install/#ffmpeg-installation-instructions) |
+ | NetGear | `pyzmq`, `simplejpeg` |
+ | WebGear | `starlette`, `jinja2`, `uvicorn`, `simplejpeg` |
+ | WebGear_RTC | `aiortc`, `starlette`, `jinja2`, `uvicorn` |
+ | NetGear_Async | `pyzmq`, `msgpack`, `msgpack_numpy`, `uvloop` |
+
+ ```sh
+ # Copy-&-paste the dependency names from the above table
+ pip install <dependency-names-from-table>
+ ```
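
After a selective source install like the above, you can verify what actually landed in the environment with a small standard-library helper (a sketch; the distribution names checked are illustrative):

```python
import importlib.metadata as md

def installed_version(dist):
    """Return the installed version of distribution `dist`, or None if absent."""
    try:
        return md.version(dist)
    except md.PackageNotFoundError:
        return None

# Check vidgear itself plus a couple of core dependencies.
for dist in ("vidgear", "numpy", "requests"):
    print(f"{dist}: {installed_version(dist) or 'not installed'}")
```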
```sh
# clone the repository and get inside
@@ -119,3 +245,6 @@ pip install git+git://github.com/abhiTronix/vidgear@testing#egg=vidgear[asyncio]
```
+
+
+[^1]: The `ensurepip` module was added to the Python standard library in Python 3.4.
diff --git a/docs/license.md b/docs/license.md
index 6e972a8b2..b65ef570a 100644
--- a/docs/license.md
+++ b/docs/license.md
@@ -2,7 +2,7 @@
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
-Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -24,7 +24,7 @@ This library is released under the **[Apache 2.0 License](https://github.com/abh
## Copyright Notice
- Copyright (c) 2019-2020 Abhishek Thakur(@abhiTronix)
+ Copyright (c) 2019 Abhishek Thakur(@abhiTronix)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/docs/overrides/404.html b/docs/overrides/404.html
index be90f793f..ab2502822 100644
--- a/docs/overrides/404.html
+++ b/docs/overrides/404.html
@@ -1,8 +1,62 @@
+{% extends "main.html" %} {% block content %}
+
404
+
+
UH OH! You're lost.
+
The page you are looking for does not exist. How you got here is a mystery. But you can click the button below to go back to the homepage.