
Using SuperResolution with a Lepton 3.5 PureThermal 2

asked 2019-09-19 12:38:45 -0600

Rhapsodus

updated 2019-09-19 13:40:13 -0600

Greetings all, I have been having difficulty with the OpenCV superres namespace, mostly with setInput receiving frames from a UVC video camera. The OpenCV samples include a small app that maps a video FrameSource to the SuperResolution class through setInput; calling nextFrame then yields a cv::Mat which is shown with imshow. I have no issues with that part. My problem is that the FrameSource itself is not talking to the hardware correctly. As a sanity check I used Cheese and can see through the camera just fine. The included pictures may make more sense.

As seen by Cheese: https://ibb.co/ByB9wgH
As seen by OpenCV: https://ibb.co/mzcZ4Q0
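For reference, the OpenCV sample's pattern boils down to roughly the following. This is a minimal sketch, not my exact code; camera index 0 and the non-CUDA BTVL1 variant are assumptions here:

    #include <opencv2/superres.hpp>
    #include <opencv2/highgui.hpp>

    int main()
    {
        using namespace cv::superres;

        // Assumption: the camera shows up as device 0; the sample can also
        // read files via createFrameSource_Video().
        cv::Ptr<FrameSource> source = createFrameSource_Camera(0);

        cv::Ptr<SuperResolution> superRes = createSuperResolution_BTVL1();
        superRes->setScale(2);
        superRes->setInput(source);

        cv::Mat result;
        for (;;)
        {
            superRes->nextFrame(result);
            if (result.empty())
                break;
            cv::imshow("super resolution", result);
            if (cv::waitKey(1) == 27)
                break;
        }
        return 0;
    }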

OpenCV has no issue with the Lepton 2.5, so is there a way to specify the pixel format in the parameter to createFrameSource_Camera or createFrameSource_Video? If not, can I use my own frame for processing by super resolution? That is, can I pass my own cv::Mat or pixel pointer to setInput, for example one coming from a ROS image_transport topic? It would be useful to run SuperResolution on a networked stream, such as feeding ThermalImageCallback(const sensor_msgs::ImageConstPtr& msg) into SuperResolution. The formats the camera reports are:

    v4l2-ctl --list-formats
    ioctl: VIDIOC_ENUM_FMT
    Index       : 0
    Type        : Video Capture
    Pixel Format: 'UYVY'
    Name        : UYVY 4:2:2

    Index       : 1
    Type        : Video Capture
    Pixel Format: 'Y16 '
    Name        : 16-bit Greyscale

    Index       : 2
    Type        : Video Capture
    Pixel Format: 'GREY'
    Name        : 8-bit Greyscale

    Index       : 3
    Type        : Video Capture
    Pixel Format: 'RGBP'
    Name        : 16-bit RGB 5-6-5

    Index       : 4
    Type        : Video Capture
    Pixel Format: 'BGR3'
    Name        : 24-bit BGR 8-8-8

Has anyone else had luck getting a FLIR Lepton working with OpenCV and fed into super resolution?
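In the meantime, one thing that can be tried is opening the device with cv::VideoCapture directly and requesting one of the formats above via the FOURCC property, since createFrameSource_Camera only exposes a device index. A minimal sketch follows; device index 0 and the V4L2 backend are assumptions, and whether the driver honours the request is not guaranteed:

    #include <opencv2/opencv.hpp>

    int main()
    {
        // Assumption: the PureThermal board is /dev/video0.
        cv::VideoCapture cap(0, cv::CAP_V4L2);

        // Request the UYVY format from the list above; check the return
        // value, since the driver may refuse the request.
        cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('U', 'Y', 'V', 'Y'));
        cap.set(cv::CAP_PROP_CONVERT_RGB, true);  // let OpenCV hand back BGR

        // For the raw 'Y16 ' format one would instead try (driver-dependent):
        //   cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('Y', '1', '6', ' '));
        //   cap.set(cv::CAP_PROP_CONVERT_RGB, false);

        cv::Mat frame;
        while (cap.read(frame) && !frame.empty())
        {
            cv::imshow("lepton", frame);
            if (cv::waitKey(1) == 27)
                break;
        }
        return 0;
    }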


1 answer


answered 2019-10-10 11:45:49 -0600

Rhapsodus

I managed to get this working. The way to use your own buffer from an image, network feed, or any other source is to subclass cv::superres::FrameSource. The base class requires you to override nextFrame (and reset); nextFrame is what the super resolution class calls on the frame source to get whatever back buffer it needs. I also added my own setFrame method so I can hand the source a new frame whenever one arrives.

Since I am running super resolution with CUDA, my data must be uploaded to the GPU. When the library calls nextFrame, I copy this GPU data into the destination buffer it passes in, and super resolution takes over from there.

#include <opencv2/core/cuda.hpp>
#include <opencv2/superres.hpp>

///
/// \brief The ThermalFrameSource class
///
/// Feeds frames from any external source (image, network feed, ...) into
/// cv::superres by implementing the FrameSource interface.
class ThermalFrameSource : public cv::superres::FrameSource
{
public:
    ThermalFrameSource() : cv::superres::FrameSource()
    {
    }

    ///
    /// \brief ThermalFrameSource::setFrame
    /// \param frameToUse
    ///
    /// Note frameToUse is on the CPU; upload it to the GPU.
    virtual void setFrame(cv::Mat &frameToUse)
    {
        GpuFrame.upload(frameToUse);
    }

    ///
    /// \brief ThermalFrameSource::nextFrame
    /// \param frame
    ///
    /// Note this works on the GPU!
    virtual void nextFrame(cv::OutputArray frame) override
    {
        // This is the line that matters here. In the CPU version you can copy
        // any chunk of memory into the OutputArray frame, even from static
        // buffers.
        GpuFrame.copyTo(frame);
    }

    virtual void reset() override
    {
    }

private:
    cv::cuda::GpuMat GpuFrame;
};

It is used like this:

// Creation. The variables are declared elsewhere as:
//   cv::Ptr<cv::superres::SuperResolution>     SuperResolution;
//   cv::Ptr<cv::superres::DenseOpticalFlowExt> OpticalFlow;
//   cv::Ptr<ThermalFrameSource>                FrameSource;
SuperResolution = cv::superres::createSuperResolution_BTVL1_CUDA();
OpticalFlow     = cv::superres::createOptFlow_Farneback_CUDA();
SuperResolution->setOpticalFlow(OpticalFlow);
SuperResolution->setScale(4);
SuperResolution->setIterations(4);
SuperResolution->setTemporalAreaRadius(3);
FrameSource = cv::makePtr<ThermalFrameSource>();  // cv::Ptr rather than a raw new, so setInput() accepts it
SuperResolution->setInput(FrameSource);

// Usage. ThermalImage wraps a byte[120 * 160 * 3] array and is filled in
// by other parts of the code. Regardless of how it is filled, cv::Mat can
// wrap the raw memory pointer and hand it to the custom FrameSource.
cv::Mat SuperResThermal;

cv::Mat lowResThermal(120, 160, CV_8UC3, ThermalImage->RawPixels());
FrameSource->setFrame(lowResThermal);
SuperResolution->nextFrame(SuperResThermal);
uint8_t *resultingPixels = SuperResThermal.data;

To fill my buffer (ThermalImage) from the camera, I use the libuvc API. I hope this helps others looking to use super resolution with a custom buffer.
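For completeness, here is a rough sketch of what that libuvc frame callback can look like. The UYVY format, the 160x120 size, and the lowResThermal cv::Mat it writes into are assumptions for illustration, not my exact code:

    #include <libuvc/libuvc.h>
    #include <opencv2/core.hpp>

    // Called by libuvc for every frame. `ptr` is the user pointer given to
    // uvc_start_streaming(); here it is assumed to be the cv::Mat that later
    // gets handed to ThermalFrameSource::setFrame().
    void frameCallback(uvc_frame_t *frame, void *ptr)
    {
        cv::Mat *lowResThermal = static_cast<cv::Mat *>(ptr);

        // Convert whatever the camera delivers (UYVY here) to 24-bit BGR.
        uvc_frame_t *bgr = uvc_allocate_frame(frame->width * frame->height * 3);
        if (!bgr)
            return;

        if (uvc_any2bgr(frame, bgr) == UVC_SUCCESS)
        {
            // Copy out, because libuvc reuses its buffers after the callback returns.
            cv::Mat(bgr->height, bgr->width, CV_8UC3, bgr->data).copyTo(*lowResThermal);
        }
        uvc_free_frame(bgr);
    }

    // Streaming is started with something along the lines of:
    //   uvc_stream_ctrl_t ctrl;
    //   uvc_get_stream_ctrl_format_size(devh, &ctrl, UVC_FRAME_FORMAT_UYVY, 160, 120, 9);
    //   uvc_start_streaming(devh, &ctrl, frameCallback, &lowResThermal, 0);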

