I wrote an iOS app that takes two video sources: a moving character on a green screen, and any other video. The app uses the GPUImage framework to run a chroma key shader through OpenGL ES 2 and merges the sources frame by frame (so the bottom frame shows through wherever the pixels are green), writing the result to a new video file. It runs faster than real time.
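For context, the keying step itself is only a small fragment shader. Below is a simplified sketch of that kind of shader, embedded as a Java string the way Android GL code usually carries GLSL; the uniform names follow GPUImage's two-input filter convention, and the smoothstep thresholds are illustrative, not GPUImage's exact values.

```java
// Simplified chroma key blend fragment shader, as a Java string constant.
// Uniform names follow GPUImage's two-input convention; the 0.3/0.6
// smoothstep band is an illustrative threshold, not a tuned value.
static final String CHROMA_KEY_FRAGMENT_SHADER =
        "varying highp vec2 textureCoordinate;\n"
      + "uniform sampler2D inputImageTexture;\n"   // green-screen frame
      + "uniform sampler2D inputImageTexture2;\n"  // background frame
      + "void main() {\n"
      + "    lowp vec4 fg = texture2D(inputImageTexture, textureCoordinate);\n"
      + "    lowp vec4 bg = texture2D(inputImageTexture2, textureCoordinate);\n"
      + "    // distance from pure green; near zero means key the pixel out\n"
      + "    lowp float d = distance(fg.rgb, vec3(0.0, 1.0, 0.0));\n"
      + "    lowp float alpha = smoothstep(0.3, 0.6, d);\n"
      + "    gl_FragColor = mix(bg, fg, alpha);\n"
      + "}\n";
```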
I have now started porting the app to Android and thought it would be fairly easy. After doing some research, I think I was wrong. There is an Android port of GPUImage, but it does not handle video at this time. So I did more research and came up with a very basic plan.
I was wondering if you think this approach is possible:
1. Convert the video files so that the resolution and other properties match, using ffmpeg via the JavaCV wrappers.
2. Using ffmpeg, step through each video frame by frame (reading frames with MediaMetadataRetriever is very slow) and convert each frame to some RGB format, then use a shader to apply the chroma key effect so the two frames are merged.
3. Use ffmpeg to write the result to a new output file (a rough sketch of this whole pipeline follows the list).
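Here is a minimal sketch of that pipeline, assuming JavaCV's FFmpegFrameGrabber and FFmpegFrameRecorder work on Android. Those are real JavaCV classes, but the cpuChromaKey helper and its green thresholds are my own illustrative stand-in for the shader pass; a GPU implementation would upload both frames as textures and run the shader above instead.

```java
import java.nio.ByteBuffer;
import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.FFmpegFrameRecorder;
import org.bytedeco.javacv.Frame;

public class ChromaKeyMerger {

    public static void merge(String greenScreenPath, String backgroundPath,
                             String outputPath) throws Exception {
        FFmpegFrameGrabber fg = new FFmpegFrameGrabber(greenScreenPath);
        FFmpegFrameGrabber bg = new FFmpegFrameGrabber(backgroundPath);
        fg.start();
        bg.start();

        FFmpegFrameRecorder out = new FFmpegFrameRecorder(
                outputPath, fg.getImageWidth(), fg.getImageHeight());
        out.setFormat("mp4");
        out.setFrameRate(fg.getFrameRate());
        out.start();

        Frame a, b;
        while ((a = fg.grabImage()) != null && (b = bg.grabImage()) != null) {
            out.record(cpuChromaKey(a, b)); // a GPU shader would replace this
        }
        out.stop();
        bg.stop();
        fg.stop();
    }

    // Crude CPU chroma key: wherever the foreground pixel is "green enough",
    // copy the background pixel over it. Assumes both frames are the same
    // size and packed 3-byte BGR, which is grabImage()'s default format.
    private static Frame cpuChromaKey(Frame fg, Frame bg) {
        ByteBuffer f = (ByteBuffer) fg.image[0];
        ByteBuffer k = (ByteBuffer) bg.image[0];
        int n = Math.min(f.capacity(), k.capacity());
        for (int i = 0; i + 2 < n; i += 3) {
            int blue = f.get(i) & 0xFF;
            int green = f.get(i + 1) & 0xFF;
            int red = f.get(i + 2) & 0xFF;
            if (green > 120 && red < 100 && blue < 100) { // illustrative thresholds
                f.put(i, k.get(i));
                f.put(i + 1, k.get(i + 1));
                f.put(i + 2, k.get(i + 2));
            }
        }
        return fg;
    }
}
```

The CPU blend is only there to make the sketch self-contained; it is exactly the part I would want on the GPU.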
It seems slow, but if it looks feasible I'll give it a go. However, I'm not sure how to ensure the properties of the two videos (resolution, bitrate, etc.) match. One video will be fixed at 1280x720, while the second source will come from the device's camera, so it will be variable. Also, I think using ffmpeg through the NDK means a whole world of pain that I was hoping to avoid.
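On the matching problem, one option (a sketch under the same JavaCV assumption; the file path is just a placeholder) is to let the grabber rescale the camera clip to the fixed 1280x720 during decode, since FFmpegFrameGrabber applies swscale when a target size is set:

```java
// Normalize the variable camera clip to the fixed 1280x720 source.
// FFmpegFrameGrabber rescales on decode when a target size is set.
FFmpegFrameGrabber camera = new FFmpegFrameGrabber("/sdcard/camera_clip.mp4");
camera.setImageWidth(1280);
camera.setImageHeight(720);
camera.start();
// grabImage() now returns 1280x720 frames whatever the recorded size was;
// read camera.getFrameRate() and pass it to the recorder so timing matches.
```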
My head hurts just thinking about it. Any advice would be greatly appreciated.