VideoEncoder using inputSurfaceView to send to RTMPClient #1702
Hello, I did a demo implementation of the library working with DeepAR here:
It was working great, but now I'm facing an issue with the aspect ratio during streaming: it isn't displayed correctly. Specifically, after DeepAR detects the face, the resulting stream appears to switch to landscape orientation.
Can you share a screenshot?
(screenshot: mobile side)
Ok, I see. You only have problems in the preview (mobile side). Do you have a full code example with your modifications?
```java
private boolean initializeDeepAR(String licenseKey, CameraResolutionPreset resolutionPreset) {
    try {
        int width = resolutionPreset.getWidth();
        int height = resolutionPreset.getHeight();

        deepAR = new DeepAR(activity);
        deepAR.setLicenseKey(licenseKey);
        deepAR.initialize(activity, this);
        deepAR.changeLiveMode(true);

        TextureRegistry.SurfaceTextureEntry entry = flutterPlugin.getTextureRegistry().createSurfaceTexture();
        tempSurfaceTexture = entry.surfaceTexture();
        tempSurfaceTexture.setDefaultBufferSize(width, height);
        surface = new Surface(tempSurfaceTexture);
        textureId = entry.id();

        var source = new DeepARSource(activity, licenseKey, deepAR, this);
        MicrophoneSource microphoneSource = new MicrophoneSource();
        rtmpStream = new RtmpStream(activity, this, source, microphoneSource);
        rtmpStream.getGlInterface().setAutoHandleOrientation(false);
        rtmpStream.getGlInterface().setIsPortrait(false); // NOTE: false forces landscape, not portrait

        boolean prepared = rtmpStream.prepareVideo(
            width,
            height,
            1200 * 1000, // video bitrate in bps
            30           // fps
        ) && rtmpStream.prepareAudio(
            44100,       // sample rate
            true,        // stereo
            128 * 1000   // audio bitrate in bps
        );

        if (prepared) {
            Log.d("RTMP", "Stream prepared successfully");
            rtmpStream.startPreview(surface, width, height);
            rtmpStream.getGlInterface().setPreviewResolution(width, height);
            // Orientation check (note: this re-enables auto orientation after disabling it above)
            rtmpStream.getGlInterface().setAutoHandleOrientation(true);
            boolean isAutoOrientation = rtmpStream.getGlInterface().getAutoHandleOrientation();
            Log.d("OrientationCheck", "Auto Orientation: " + isAutoOrientation);
            rtmpStream.startStream("rtmp://ingest.global-contribute.live-video.net/app/live_176582760_BKfgmOZd8rlcKONDfGZK9MxXCsV6cD");
            Log.d("RTMP", "Streaming started successfully");
            return true;
        }
        return false;
    } catch (Exception e) {
        Log.e(TAG, "Error: " + e.getMessage(), e);
        return false;
    }
}
```
This is the initialization method.
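One thing that stands out in this method is the conflicting orientation setup: auto orientation is first disabled and `setIsPortrait(false)` is set (which, going by the flag's name, forces landscape rather than portrait), then auto orientation is re-enabled after the preview starts. A minimal sketch of a consistent portrait configuration, reusing only the `GlInterface` calls already shown above (whether `setIsPortrait(true)` is the right value is my reading of the flag name and worth verifying against the library docs):

```java
// Hypothetical consistent setup: configure orientation once, BEFORE
// prepareVideo/startPreview, and do not flip setAutoHandleOrientation later.
rtmpStream.getGlInterface().setAutoHandleOrientation(false);
rtmpStream.getGlInterface().setIsPortrait(true); // assumption: true = portrait

boolean prepared = rtmpStream.prepareVideo(width, height, 1200 * 1000, 30)
    && rtmpStream.prepareAudio(44100, true, 128 * 1000);
if (prepared) {
    rtmpStream.startPreview(surface, width, height);
    rtmpStream.getGlInterface().setPreviewResolution(width, height);
}
```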
And here is the DeepAR source file:
```kotlin
package com.example.myapp

import ai.deepar.ar.AREventListener
import ai.deepar.ar.DeepAR
import android.content.Context
import android.graphics.SurfaceTexture
import android.view.Surface
import com.pedro.encoder.input.sources.video.VideoSource

/**
 * Created by pedro on 17/10/24.
 */
class DeepARSource(
    context: Context,
    licenseKey: String,
    deepARInstance: DeepAR,
    listener: AREventListener
) : VideoSource() {

    private var running = false
    val deepAR = deepARInstance

    override fun create(width: Int, height: Int, fps: Int, rotation: Int): Boolean {
        return true
    }

    override fun isRunning(): Boolean = running

    override fun release() {
        deepAR.release()
    }

    override fun start(surfaceTexture: SurfaceTexture) {
        // Note: height and width are passed in swapped order here, which may
        // itself rotate the rendered frames relative to the prepared resolution.
        deepAR.setRenderSurface(Surface(surfaceTexture), height, width)
        deepAR.startCapture()
        running = true
    }

    override fun stop() {
        deepAR.stopCapture()
        running = false
    }
}
```
Hello,
I have been trying to send a video feed directly from DeepAR, but DeepAR allows only one mode at a time: render-to-surface (renderToSurfaceView) or offScreenRendering, which gives me processed frames that don't work very well.
My primary goal is to feed the surface directly, as that will probably work best for my case, but the setInputSurface method that comes with the VideoEncoder doesn't work in my case. I am working with version 2.2.6 of RootEncoder. Using a frame-by-frame approach, I managed to somewhat convert the frames and send them through the VideoEncoder and the GetVideoData interface, but the performance is abysmal and the frames get distorted because of DeepAR's unusual way of processing frames. Is there any way we could directly use the Surface that DeepAR is rendering its frames on?
The current approach I experimented with is something like:

```java
// renderer.renderImage(frame);
ByteBuffer buffer = frame.getPlanes()[0].getBuffer();
int[] byteArray = getArrayfromBytes(buffer);
if (byteArray.length > 0) {
    ByteBuffer yuvBuffer = YUVBufferExtractor.convertImageToYUV(frame);
    // byte[] yuvByteArr = YUVUtil.ARGBtoYUV420SemiPlanar(byteArray, frame.getWidth() + 48, frame.getHeight());
    byte[] yuvByteArr = yuvBuffer.array();
    Frame yuvFrame = new Frame(yuvByteArr, 0, frame.getWidth() * frame.getHeight());
    renderer.renderImageThroughFrame(yuvFrame, frame.getWidth(), frame.getHeight());
    videoEncoder.inputYUVData(yuvFrame);
}
```
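On the question of using DeepAR's render surface directly: independent of RootEncoder's internals, the standard Android pattern is to hand DeepAR the hardware encoder's input Surface, so frames never round-trip through CPU-side buffers like the snippet above. A minimal sketch using plain MediaCodec (this is the generic Android API, not RootEncoder's VideoEncoder; the method name and parameters here are illustrative assumptions, and draining the encoder's output buffers is omitted):

```java
import java.io.IOException;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

// Sketch: create an H.264 encoder whose input is a Surface, so a GPU
// renderer (e.g. DeepAR) can draw frames straight into the codec.
static Surface prepareEncoderSurface(int width, int height) throws IOException {
    MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 1200 * 1000);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 2);

    MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    // createInputSurface() must be called between configure() and start().
    Surface encoderInput = encoder.createInputSurface();
    encoder.start();
    return encoderInput;
}

// Usage sketch: DeepAR then renders every processed frame directly into
// the encoder, with no ByteBuffer/YUV copies on the CPU.
// deepAR.setRenderSurface(prepareEncoderSurface(width, height), width, height);
// deepAR.startCapture();
```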