Hi, I have an OV9281 camera and a RPi4, and for my programs I want to capture directly to an MMAL buffer, as I have been advised that this lets me do image processing without the high cost of copying from GPU memory to CPU memory.
Does the Arducam driver support this?
I haven’t started trying yet; I’m just trying to figure out whether it’s possible. Something along these lines:
“Instead of copying the buffer from the GPU and doing a colour space / pixel format conversion the GL_OES_EGL_image_external is used. This allows an EGL image to be created from GPU buffer handle (MMAL opaque buffer handle). The EGL image may then be used to create a texture (glEGLImageTargetTexture2DOES) and drawn by either OpenGL ES 1.0 or 2.0 contexts.”
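For reference, the zero-copy path that quote describes is what the RaspiTex example in the Raspberry Pi userland tree does. Below is a rough, untested C sketch of just that step, under the assumption of a Broadcom/RPi EGL stack: `eglCreateImageKHR`, `glEGLImageTargetTexture2DOES` and `EGL_IMAGE_BRCM_MULTIMEDIA` are the real extension entry points; the function name, arguments and surrounding setup (EGL display/context, the texture, the MMAL opaque buffer) are illustrative and error handling is omitted:

```c
#define EGL_EGLEXT_PROTOTYPES
#define GL_GLEXT_PROTOTYPES
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <EGL/eglext_brcm.h>     /* EGL_IMAGE_BRCM_MULTIMEDIA (Broadcom-specific) */
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <interface/mmal/mmal.h> /* MMAL_BUFFER_HEADER_T */

/* Hypothetical helper: bind one camera frame to an external texture
 * without copying it out of GPU memory. 'buf' must come from an MMAL
 * output port configured for MMAL_ENCODING_OPAQUE. */
static void update_texture(EGLDisplay display, GLuint tex,
                           MMAL_BUFFER_HEADER_T *buf, EGLImageKHR *image)
{
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);

    /* Release the EGL image wrapping the previous frame, if any. */
    if (*image != EGL_NO_IMAGE_KHR)
        eglDestroyImageKHR(display, *image);

    /* Wrap the opaque GPU-side buffer handle in an EGL image... */
    *image = eglCreateImageKHR(display, EGL_NO_CONTEXT,
                               EGL_IMAGE_BRCM_MULTIMEDIA,
                               (EGLClientBuffer)buf->data, NULL);

    /* ...and attach it to the external texture; no CPU-side copy occurs. */
    glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, *image);
}
```

The texture is then sampled in a shader via `samplerExternalOES` (requiring `#extension GL_OES_EGL_image_external : require` in the fragment shader).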
I want to use OpenGL to generate an image pyramid using FBO render-to-texture, and also run Gaussian blurs on the GPU over that image pyramid, plus maybe other shader-based manipulations.
Then pass this on to my computer vision program (SLAM).
Sounds great. We have not done any relevant experiments yet, but I am very interested in the Gaussian blur you mentioned, and I am willing to cooperate with you. Maybe this could also help correct the lens shading of the camera lens.
Awesome! I’ll start seeing if I can get a test bed up and running with the main pieces from that example, and then I can let you know here. This is an evening/hobby project for me and I’m not an amazing programmer, so it’s probably going to be slow going. Anyway, I’ll post back here when I’m a bit more set up!
Of course. My email is [email protected]. Regarding the official userland SDK: it only supports the OV5647, IMX219, and IMX477, so when you use the OV9281 sensor it can’t be detected. Our SDK supports the OV9281 but doesn’t have an OpenGL interface. Why not verify the principle with an IMX219 first? If it proves feasible with the official SDK, we could consider adding this interface to our reference code. So I’d advise you to test with the IMX219 on the official SDK.