
JeyP4's profile - activity

2020-07-19 10:59:13 -0600 edited question Why does cv::imencode give a similar output buffer size for bgr8 and mono8 images?

Why does cv::imencode give a similar output buffer size for bgr8 and mono8 images? Hello, I notice 4.1 MB/s @ 50 fps, 1280x720 …

2020-07-19 08:08:43 -0600 received badge  Student (source)
2020-07-19 06:52:38 -0600 commented question Why does cv::imencode give a similar output buffer size for bgr8 and mono8 images?

Thanks @berak, I followed your suggestion and explicitly present a simple program to reproduce my findings. (Appended my question.)

2020-07-18 19:19:31 -0600 asked a question Why does cv::imencode give a similar output buffer size for bgr8 and mono8 images?

Why does cv::imencode give a similar output buffer size for bgr8 and mono8 images? Hello, I notice 4.1 MB/s @ 50 fps, 1280x720 …

2020-03-17 05:16:22 -0600 commented question How to bound zebra crossing stripes?

@LBerger Thank you for spotting a good paper for me. Let me go through it and see if I can come up with a robust solution.

2020-03-14 04:52:21 -0600 edited question How to bound zebra crossing stripes?

How to bound zebra crossing stripes? Hello, I understand that this is a more direct question. Input: … Output: … I know bas…

2020-03-13 15:36:35 -0600 asked a question How to bound zebra crossing stripes?

How to bound zebra crossing stripes? Hello, I understand that this is a more direct question. Input: … Output: … I know bas…

2020-01-25 04:19:23 -0600 marked best answer [SOLVED] How to project points from an undistorted image to the distorted image?

I undistorted the fisheye lens image with the help of cv::fisheye::calibrate and found the coefficients below.

K = 
array([[541.11407173,   0.        , 659.87320043],
       [  0.        , 541.28079025, 318.68920531],
       [  0.        ,   0.        ,   1.        ]])
D =
array([[-3.91414244e-02],
       [-4.60198728e-03],
       [-3.02912651e-04],
       [ 2.83586453e-05]])

new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(K, D, (1280, 720), np.eye(3), balance=1, new_size=(3400, 1912), fov_scale=1)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), new_K, (3400, 1912), cv2.CV_16SC2)
undistorted_img = cv2.remap(distorted_img, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)

How can I find x and y?

2020-01-24 20:54:59 -0600 received badge  Self-Learner (source)
2020-01-23 04:23:06 -0600 answered a question [SOLVED] How to project points from an undistorted image to the distorted image?

objp = np.array([[[(1595-new_K[0, 2])/new_K[0, 0], (922-new_K[1, 2])/new_K[1, 1], 0.]]])
rvec = np.array([[[0., 0., 0.]] …

2020-01-20 08:47:26 -0600 asked a question [SOLVED] How to project points from an undistorted image to the distorted image?

How to project points from an undistorted image to the distorted image? I undistorted the fisheye lens image with the help of cv::fishe…

2020-01-02 10:32:52 -0600 edited question How to combine Color conversion (BGRA->BGR) and Remap operations?

How to combine Color conversion (BGRA->BGR) and Remap operations? I want to convert a distorted BGRx image to undistorted …

2020-01-01 15:27:55 -0600 asked a question How to combine Color conversion (BGRA->BGR) and Remap operations?

How to combine Color conversion (BGRA->BGR) and Remap? OpenCV, I want to convert a distorted BGRx image to undistorted …

2019-10-31 10:16:19 -0600 received badge  Self-Learner (source)
2019-10-30 11:54:58 -0600 marked best answer How to undistort a fisheye image into the max bounding rectangle?

I have calibrated the camera with cv2.fisheye.calibrate using images of size 720x1280. Python script:

K = 
array([[541.11407173,   0.        , 659.87320043],
       [  0.        , 541.28079025, 318.68920531],
       [  0.        ,   0.        ,   1.        ]])
D =
array([[-3.91414244e-02],
       [-4.60198728e-03],
       [-3.02912651e-04],
       [ 2.83586453e-05]])

And performed undistortion with:

new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(K, D, (1280, 720), np.eye(3), balance=1)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), new_K, (1280, 720), cv2.CV_16SC2)
undistorted_img = cv2.remap(img, map1, map2, interpolation=cv2.INTER_CUBIC, borderMode=cv2.BORDER_CONSTANT)

How can I achieve the maximum obtainable rectangular image?

If I directly crop this image, the quality deteriorates. :(

I guess I have to play with the balance, new_size, and fov_scale parameters of estimateNewCameraMatrixForUndistortRectify(), and also some parameters of initUndistortRectifyMap().

2019-10-30 11:54:17 -0600 answered a question How to undistort a fisheye image into the max bounding rectangle?

new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(K, D, (1280, 720), np.eye(3), balance=1, new_size=(3400, …

2019-10-28 14:27:47 -0600 edited question How to undistort a fisheye image into the max bounding rectangle?

How to undistort a fisheye image into the max bounding rectangle? I have calibrated the camera with cv2.fisheye.calibrate with i…

2019-10-28 11:24:48 -0600 asked a question How to undistort a fisheye image into the max bounding rectangle?

How to undistort a fisheye image into the max bounding rectangle? I have calibrated the camera with cv2.fisheye.calibrate with i…

2019-09-15 14:11:06 -0600 edited question Is YUV2BGR_NV12 conversion necessary to imshow a YUV image?

Is YUV2BGR_NV12 conversion necessary to imshow a YUV image? Hello, I have NV12 (YUV 4:2:0) image data. I am able to con…

2019-09-15 12:05:48 -0600 asked a question Is YUV2BGR_NV12 conversion necessary to imshow a YUV image?

Is YUV2BGR_NV12 conversion necessary to imshow a YUV image? Hello, I have NV12 (YUV 4:2:0) image data. I am able to …

2019-07-16 16:36:33 -0600 edited question How to grow bright pixels in a grey region?

How to grow bright pixels in a grey region? How can I grow bright pixels in a grey region? Input: … Output: … Note: bright…

2019-07-16 16:36:02 -0600 asked a question How to grow bright pixels in a grey region?

How to grow bright pixels in a grey region? How can I grow bright pixels in a grey region? Input: … Output: … Note: bright p…
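One standard technique that fits this description is morphological reconstruction (geodesic dilation): repeatedly dilate the bright seed pixels, but clip the result to the grey region at each step until nothing changes. A pure-NumPy sketch, assuming both inputs are boolean masks:

```python
import numpy as np

def dilate3x3(a):
    """3x3 maximum filter built from shifted copies (stand-in for cv2.dilate)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return np.max([p[i:i + h, j:j + w] for i in range(3) for j in range(3)],
                  axis=0)

def grow_bright(bright, grey, max_iter=10_000):
    """Grow the bright seeds inside grey: geodesic dilation until stable."""
    marker = bright & grey
    for _ in range(max_iter):
        grown = dilate3x3(marker.astype(np.uint8)).astype(bool) & grey
        if np.array_equal(grown, marker):
            break
        marker = grown
    return marker

# Toy example: a horizontal grey strip with one bright seed at its left end
grey = np.zeros((5, 5), dtype=bool); grey[2, :] = True
bright = np.zeros((5, 5), dtype=bool); bright[2, 0] = True
result = grow_bright(bright, grey)   # fills the whole strip, nothing outside
```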

2019-04-09 11:59:47 -0600 marked best answer How to update cv::namedWindow in a multi-threaded environment?

If one callback receives an image and performs some image processing, how can the output image be shown in a multi-threaded environment? By multi-threading I mean that a particular callback (depthCallback) can be invoked by more than one thread.

And one more query: is using waitKey(1) optimal for a real-time application?

#include <ros/ros.h>
#include <sensor_msgs/CompressedImage.h>
#include <opencv2/opencv.hpp>

class storedData {
  public:
    cv::Mat im, depth, outIm;
    void imCallback(const sensor_msgs::CompressedImageConstPtr& msgIm) {
      im = cv::imdecode(cv::Mat(msgIm->data),3);
    }

    void depthCallback(const sensor_msgs::CompressedImageConstPtr& msgDepth)
    {        
      depth = cv::imdecode(cv::Mat(msgDepth->data),0);

      //    Performs something using both images(im & depth), Result : outIm    //

      cv::imshow("view", outIm);
      cv::waitKey(1);
    }
};
int main(int argc, char **argv)
{
  ros::init(argc, argv, "PredictiveDisplay");
  ros::NodeHandle nh;
  storedData obj;
  cv::namedWindow("view", 0);
  cv::startWindowThread();

  ros::Subscriber subIm = nh.subscribe("/image", 2, &storedData::imCallback, &obj);
  ros::Subscriber subDepth = nh.subscribe("/depth", 2, &storedData::depthCallback, &obj);

  ros::AsyncSpinner spinner(2);
  spinner.start();
  ros::waitForShutdown();

  cv::destroyWindow("view");
}
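OpenCV's HighGUI is not thread-safe, so a common fix is to let the callbacks only produce images and keep every imshow/waitKey call on a single thread; waitKey(1) is then generally fine for real-time use, since it only has to pump GUI events between frames. A Python sketch of that hand-off pattern (the frame contents and names are stand-ins, not the original ROS code):

```python
import queue
import threading

frames = queue.Queue(maxsize=2)   # bounded hand-off, like the subscriber queues

def depth_callback(image_id, thread_id):
    """May run on any spinner thread: do the processing, then hand the
    result to the single GUI thread instead of calling imshow here."""
    out = ("frame", image_id, thread_id)
    try:
        frames.put_nowait(out)    # drop frames rather than block a callback
    except queue.Full:
        pass

# Simulate two spinner threads invoking the callback concurrently
workers = [threading.Thread(target=depth_callback, args=(i, i)) for i in range(2)]
for t in workers:
    t.start()
for t in workers:
    t.join()

# The one "GUI thread": the only place that would touch imshow/waitKey
shown = []
while not frames.empty():
    frame = frames.get()          # here: cv2.imshow("view", frame); cv2.waitKey(1)
    shown.append(frame)
```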
2019-04-09 09:55:45 -0600 edited question How to initialize an array of cv::Mat with rows, cols and value?

How to initialize an array of cv::Mat with rows, cols and value? How to initialize an array with 10 Mats? cv::Mat im[10](…

2019-04-09 09:54:59 -0600 asked a question How to initialize an array of cv::Mat with rows, cols and value?

How to initialize an array of cv::Mat with rows, cols and value? How to initialize an array with 10 Mats? cv::Mat im[10](…
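The excerpt is cut off at `cv::Mat im[10](`, but the underlying pitfall also exists in the Python bindings: replicating one image gives ten references to the same pixel buffer (just as fill-constructing a `std::vector<cv::Mat>` from one Mat copies only headers), while building each image separately allocates ten independent buffers. A NumPy sketch of the distinction:

```python
import numpy as np

rows, cols, value = 4, 6, 7

# Shares ONE buffer ten times -- writing to any "element" changes all of them
shared = [np.full((rows, cols), value, dtype=np.uint8)] * 10
shared[0][0, 0] = 0

# Allocates ten independent images of the requested rows, cols and value
ims = [np.full((rows, cols), value, dtype=np.uint8) for _ in range(10)]
ims[0][0, 0] = 0   # only the first image changes
```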

2019-04-03 09:21:10 -0600 asked a question How to efficiently represent blobs in a binary image with ellipses?

How to efficiently represent blobs in a binary image with ellipses? I want to represent blobs with oriented ellipses. I hav…

2019-03-30 04:51:46 -0600 commented question How to update cv::namedWindow in a multi-threaded environment?

With single-threaded ros::spin(), the above code runs well and updates the image normally. With multi-threaded ros::AsyncSpinner s…