Merge branch 'develop'

skydoves committed Sep 25, 2023
2 parents 91dd83f + 17ea968 commit f779467

Showing 34 changed files with 1,000 additions and 314 deletions.

30 changes: 15 additions & 15 deletions README.md
@@ -123,29 +123,29 @@ Video roadmap and changelog are available [here](https://github.com/GetStream/pro
### 0.4.0 milestone

- [X] Screensharing from mobile
- [X] Picture of the video stream at the highest resolution + docs on how to add a button for this (Daniel)
- [X] Audio & Video filters support (Daniel)
- [ ] Complete Livestreaming APIs and Tutorials for hosting & watching
- [ ] Default livestream player UI + docs (Jaewoong/Daniel)
- [ ] Implement Chat overlay for Dogfooding (Jaewoong)
- [ ] Add Dogfooding instructions + direct to Google Play (Jaewoong)
- [ ] Reaction dialog API for Compose (Jaewoong)
- [ ] Android SDK development.md cleanup (Daniel)
- [ ] Upgrade to more recent versions of webrtc (Kanat)
- [ ] Review foreground service vs backend for audio rooms etc. (Daniel)
- [ ] Support participant.custom field which was previously ignored (ParticipantState line 216) (Daniel)
- [ ] Call Analytics stateflow (Thierry)
- [ ] Logging is too verbose (rtc is very noisy); clean it up to focus on the essentials at info level and higher (Daniel)

### 0.5.0 milestone

- [ ] Test coverage
- [ ] Dynascale 2.0 (depending on backend support)
- [ ] Testing on more devices
- [ ] Enable SFU switching
- [ ] Camera controls
- [ ] Tap to focus
- [ ] H264 workaround on Samsung 23 (see https://github.com/livekit/client-sdk-android/blob/main/livekit-android-sdk/src/main/java/io/livekit/android/webrtc/SimulcastVideoEncoderFactoryWrapper.kt#L34 and https://github.com/react-native-webrtc/react-native-webrtc/issues/983#issuecomment-975624906)


### Dynascale 2.0

65 changes: 63 additions & 2 deletions docusaurus/docs/Android/06-advanced/05-apply-video-filters.mdx

@@ -11,7 +11,7 @@ How does this work? You can inject a filter through `Call.videoFilter`, you will

## Adding a Video Filter

Create a `BitmapVideoFilter` or `RawVideoFilter` instance in your project. Here is how the abstract classes are defined:

```kotlin
abstract class BitmapVideoFilter : VideoFilter() {
    // … (remaining definitions collapsed in the diff view)
}
```

@@ -131,4 +131,65 @@ The result:

## Audio Filters

The StreamVideo SDK also supports custom audio processing of the local audio track. This opens up possibilities for custom echo filtering, voice changing, or other audio effects.

If you want custom audio processing, provide your own implementation of the `AudioFilter` interface and assign it to `Call.audioFilter`.

The `AudioFilter` is defined like this:

```kotlin
interface AudioFilter {

    /**
     * Invoked after an audio sample is recorded. Can be used to manipulate
     * the ByteBuffer before it's fed into WebRTC. Currently the audio in the
     * ByteBuffer is always 16-bit PCM and the buffer sample size is ~10ms.
     *
     * @param audioFormat format in android.media.AudioFormat
     */
    fun filter(audioFormat: Int, channelCount: Int, sampleRate: Int, sampleData: ByteBuffer)
}
```

In the following example, we will build a simple audio filter that gives the user's voice a robotic touch.

```kotlin
// We assume that you already have a call instance (the call is started).
// Create a simple filter (pitch modification) and assign it to the call.

call.audioFilter = object : AudioFilter {

    override fun filter(audioFormat: Int, channelCount: Int, sampleRate: Int, sampleData: ByteBuffer) {
        // You can modify the pitch factor to achieve a slightly different effect
        val pitchShiftFactor = 0.8f
        val inputBuffer = sampleData.duplicate()
        inputBuffer.order(ByteOrder.LITTLE_ENDIAN) // Set byte order for correct handling of PCM data

        val numSamples = inputBuffer.remaining() / 2 // Assuming 16-bit PCM audio

        val outputBuffer = ByteBuffer.allocate(inputBuffer.capacity())
        outputBuffer.order(ByteOrder.LITTLE_ENDIAN)

        for (channel in 0 until channelCount) {
            val channelBuffer = ShortArray(numSamples)
            inputBuffer.asShortBuffer().get(channelBuffer)

            for (i in 0 until numSamples) {
                // Resample by reading from a scaled index, which shifts the pitch
                val originalIndex = (i * pitchShiftFactor).toInt()

                if (originalIndex in 0 until numSamples) {
                    outputBuffer.putShort(channelBuffer[originalIndex])
                } else {
                    // Fill with silence if the index is out of bounds
                    outputBuffer.putShort(0)
                }
            }
        }

        // Write the processed samples back into the buffer that is fed to WebRTC
        outputBuffer.flip()
        sampleData.clear()
        sampleData.put(outputBuffer)
        sampleData.flip()
    }
}
```
This is a simple algorithm that just shifts sample indexes. For a more sophisticated effect, you can use a dedicated voice-processing library. The important part is that you write the filtered values back into the `sampleData` buffer before the method returns.
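
To isolate that write-back pattern, here is a second, minimal sketch of an `AudioFilter` that simply halves the volume. It assumes nothing beyond the `AudioFilter` signature shown above and the documented 16-bit PCM format; the attenuation factor is an arbitrary choice.

```kotlin
call.audioFilter = object : AudioFilter {
    override fun filter(audioFormat: Int, channelCount: Int, sampleRate: Int, sampleData: ByteBuffer) {
        // View the bytes as 16-bit samples. The view shares the backing data,
        // so the writes below modify sampleData in place.
        val samples = sampleData.duplicate().order(ByteOrder.LITTLE_ENDIAN).asShortBuffer()
        for (i in 0 until samples.limit()) {
            // Halve each sample's amplitude (roughly -6 dB)
            samples.put(i, (samples.get(i) / 2).toShort())
        }
    }
}
```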
34 changes: 34 additions & 0 deletions docusaurus/docs/Android/06-advanced/06-screenshots.mdx

@@ -0,0 +1,34 @@
---
title: Screenshots
description: How to take screenshots of VideoFrames
---

## Screenshots

You can take a picture of a `VideoTrack` at the highest possible resolution by using `Call.takeScreenshot(videoTrack): Bitmap`. This can be useful, for example, when you want to capture a screenshare at full resolution.

You first need to get the right `VideoTrack` to take the screenshot from; the selection depends on your use case. For example, let's take the first participant in the call's participant list:

```kotlin
val participant = call.state.participants.value[0]
val participantVideoTrack = participant.videoTrack.value
if (participantVideoTrack != null) {
val bitmap = call.takeScreenshot(participantVideoTrack)
// display, save or share the bitmap
}
```

:::note
A `VideoTrack` can be null when the current participant video is not visible on the screen. Video is only streamed from participants that are currently visible.
:::

Or, for example, if you specifically want to take a screenshot of a screenshare session:

```kotlin
val screenshareSession = call.state.screenSharingSession.value
val screenShareTrack = screenshareSession?.participant?.screenSharingTrack?.value
if (screenShareTrack != null) {
val bitmap = call.takeScreenshot(screenShareTrack)
// display, save or share the bitmap
}
```
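
Once you have the bitmap, persisting it is plain Android. As a minimal sketch (the helper name and file location below are illustrative, not part of the SDK), you could write it out as a PNG:

```kotlin
import android.content.Context
import android.graphics.Bitmap
import java.io.File

// Illustrative helper, not part of the SDK: saves the screenshot as a PNG
// in the app's private files directory.
fun saveScreenshot(context: Context, bitmap: Bitmap): File {
    val file = File(context.filesDir, "screenshot_${System.currentTimeMillis()}.png")
    file.outputStream().use { stream ->
        // PNG ignores the quality parameter, but compress() requires one
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream)
    }
    return file
}
```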