FooBar's profile - activity

2020-11-28 05:42:58 -0600 received badge  Nice Question (source)
2020-11-06 13:37:17 -0600 received badge  Good Answer (source)
2020-06-19 10:23:23 -0600 received badge  Popular Question (source)
2019-05-18 15:48:31 -0600 received badge  Notable Question (source)
2018-11-29 06:22:55 -0600 received badge  Notable Question (source)
2018-08-07 20:15:17 -0600 received badge  Popular Question (source)
2018-03-01 04:39:40 -0600 received badge  Popular Question (source)
2017-12-01 13:57:31 -0600 commented question SolvePnP can't get correct result

"P != eye(3)" The projection matrix is 3x4, so why do you compare it to a 3x3 matrix? "and the number is larger than 4."

2017-12-01 13:55:58 -0600 commented question Best calibration checkerboard config (size and square length) ?

Just one small comment on patterns: please do not print out paper and somehow fix it to a cardboard. Invest $20 and let i

2017-11-13 01:38:29 -0600 edited answer How can I test if camera calibration was optimal?

"But how can I know that camera calibtation was optimal? " That's a tricky question. A good rmse (well below a pixel fo

2017-11-13 01:36:33 -0600 commented answer Marker pose calculation with solvepnp

(8,8,0): I've never seen an 8m calibration target :)

2017-11-13 01:34:59 -0600 answered a question How can I test if camera calibration was optimal?

"But how can I know that camera calibtation was optimal? " That's a tricky question. A good rmse (well below a pixel fo

2017-11-12 09:49:29 -0600 asked a question TriangulatePoints with known object?

TriangulatePoints with known object? Hey! I have a calibrated stereo setup (using the IR and RGB camera from an older Pr

2017-06-25 06:11:43 -0600 commented answer Calculate x distance to line from edge at middle of y?

This representation is numerically unstable and cannot handle vertical lines.

2017-06-01 06:37:11 -0600 edited question How to detect 3d textue vs printed document?

I am trying to distinguish a textured artwork from its photocopied or printed versions using image processing. The texture itself is very thin (roughly 100 microns), so the shape-and-shadow technique did not work well. Is there any other method through which I can achieve this?

2017-05-22 10:56:02 -0600 edited question How to capture from mako camera with vimba?

I have a Mako camera and want to capture real-time video from it. Should I capture directly with Vimba, or should I use OpenCV? If I use OpenCV, how do I connect Vimba with it? Thanks

2017-05-22 09:10:36 -0600 commented answer How to verify the accuracy of solvePnP return values?

This is your free choice. "projectPoints" IS a tool to simulate the scene.

2017-05-19 09:14:04 -0600 answered a question How to verify the accuracy of solvePnP return values?

You can generate a set of 3d points and a known transformation. Then use projectPoints with these values and some intrinsic camera matrix. This will create a set of 3d points, a transformation and 2d points that you can use as a test case.

so roughly

projectPoints(3d_points, R, t, camMatrix) -> 2d_points

and

solvePnP(3d_points, 2d_points, camMatrix) -> (R, t)
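
A runnable sketch of this round trip in Python (intrinsics, pose, and point count are arbitrary example values, not from the original answer):

    import numpy as np
    import cv2

    # synthetic intrinsics and a known ground-truth pose (made-up values)
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    rvec_gt = np.array([0.1, -0.2, 0.05])
    tvec_gt = np.array([0.3, -0.1, 2.0])
    obj_pts = np.random.uniform(-1.0, 1.0, (10, 3))  # random 3d test points

    # project the 3d points into the image with the known pose ...
    img_pts, _ = cv2.projectPoints(obj_pts, rvec_gt, tvec_gt, K, None)

    # ... then recover the pose; rvec/tvec should closely match the ground truth
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
    print(rvec.ravel(), tvec.ravel())
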
2017-05-02 13:01:46 -0600 edited question what is the solution to this error

    Traceback (most recent call last):
      File "C:\dfsdf.py", line 13, in <module>
        cv2.imshow("input", img)
    error: ......\opencv-2.4.13.2\modules\highgui\src\window.cpp:269: error: (-215) size.width>0 && size.height>0 in function cv::imshow
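
For context, this assertion fires when imshow receives an empty image, which usually means imread failed to load the file. A minimal guard (the path here is hypothetical):

    import cv2

    img = cv2.imread(r"C:\some_image.jpg")  # hypothetical path
    if img is None:
        raise IOError("image could not be loaded; check the file path")
    cv2.imshow("input", img)
    cv2.waitKey(0)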

2017-05-02 10:29:09 -0600 answered a question Determinate speed of car from video

This is not possible. Visual odometry is only accurate up to scale, so you will need additional information, such as an object of known size. This fact becomes more intuitive if you think about mounting a camera on a toy car driving through a toy city: the images will look the same as in the real world.
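
A toy illustration of why a known size fixes the scale (all numbers are made up, and the motion is assumed parallel to the image plane):

    # assume we track an object of known real-world width across two frames
    real_width_m = 1.8        # e.g. a typical car width (assumption)
    pixel_width = 120.0       # measured width of the object in the image
    meters_per_pixel = real_width_m / pixel_width

    dx_pixels = 40.0          # displacement of the object between frames
    dt_seconds = 1.0 / 30.0   # 30 fps video

    speed_mps = dx_pixels * meters_per_pixel / dt_seconds
    print(speed_mps * 3.6, "km/h")  # -> 64.8 km/h for these numbers
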

2017-03-25 17:45:01 -0600 received badge  Nice Answer (source)
2017-01-29 15:33:22 -0600 received badge  Nice Answer (source)
2016-12-24 02:49:22 -0600 commented question How to find each black bar's begin and end row numbers ?

This is not a "do my homework"-forum. What did you try before asking here?

2016-12-15 03:07:46 -0600 edited question which python version should i install on my system and how.

I have installed OpenCV 2.4.9 and Visual Studio 2010 (x86); my system is Windows 8 64-bit. Now I want to configure OpenCV with Python. I have tried these libraries and software:

matplotlib-1.3.0.win32-py2.7

numpy-1.7.1-win32-superpack-python2.7

python-2.7.12 shell (for Python x64) results are:

    Python 2.7.12 (v2.7.12:d33e0cf91556, Jun 27 2016, 15:19:22) [MSC v.1500 32 bit (Intel)] on win32
    Type "copyright", "credits" or "license()" for more information.
    >>> import numpy
    >>> import matplotlib
    >>> import cv2
    Traceback (most recent call last):
      File "<pyshell#2>", line 1, in <module>
        import cv2
    ImportError: DLL load failed: %1 is not a valid Win32 application.

python-2.7.12 shell (for Python x86) results are:

    Python 2.7.12 (v2.7.12:d33e0cf91556, Jun 27 2016, 15:19:22) [MSC v.1500 32 bit (Intel)] on win32
    Type "copyright", "credits" or "license()" for more information.
    >>> import numpy
    >>> import matplotlib
    >>> import cv2
    Traceback (most recent call last):
      File "<pyshell#2>", line 1, in <module>
        import cv2
    ImportError: DLL load failed: %1 is not a valid Win32 application.

2016-11-12 21:29:59 -0600 edited question How to find a segment location after applying PyrMeanShiftFiltering.

I am using the following code to differentiate between a sky-blue region (the required region) in my image C:\fakepath\input image.jpg and the background. I have applied the PyrMeanShiftFiltering function to segment the image. After running the code, the result C:\fakepath\result.jpg shows the sky-blue section successfully segmented, but now I want to know the location (mid-point) of this sky-blue region. Can anyone help me find the coordinates of this blue region in the image after the segmentation?

#include "cv.h"
#include "highgui.h"
#include "math.h"
#include <iostream.h>


int main(int argc, char** argv)
{
IplImage* output_image;

    IplImage* image = cvLoadImage("31.jpg",CV_LOAD_IMAGE_COLOR);
  CvMemStorage* storage = cvCreateMemStorage(0);
        cvNamedWindow( "origional image", 1 );
        cvShowImage( "origional image", image);
        cvWaitKey(0);

    IplImage *filtered = cvCreateImage(cvGetSize(image),image->depth,image->nChannels);

        cvCopy(image,filtered,NULL);

  int level = 3;
  int spatial_radius = 40;
  int color_radius  = 40;

  filtered->width &= -(1<<level);
  filtered->height &= -(1<<level);

  cvPyrMeanShiftFiltering(filtered, filtered,spatial_radius,color_radius,level);
  cvNamedWindow( "fourth", 1 );
        cvShowImage( "fourth", filtered);
        cvWaitKey(0);

   return 0;
}
2016-11-02 12:44:56 -0600 commented question How to detect rotation angle 0 , -90 ,+90 or 180

If you have additional requirements (e.g. cards without barcode) it would be helpful to integrate this information into the question. How do you expect someone to guess this?

2016-11-02 12:13:54 -0600 commented question How to detect rotation angle 0 , -90 ,+90 or 180

Hey! These are possible approaches; have you also thought about template matching without features (cv::matchTemplate)? I also thought about detecting (or reading) the barcode, e.g. using zbar. OpenCV also has some OCR capabilities: OCRTesseract. A sketch of the matchTemplate idea follows.
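
A minimal Python sketch of the matchTemplate approach for a four-orientation check (file names are placeholders, and the template is assumed to fit inside the image in every orientation):

    import numpy as np
    import cv2

    img = cv2.imread("card.jpg", cv2.IMREAD_GRAYSCALE)            # hypothetical input
    tmpl = cv2.imread("upright_logo.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical template

    best_score, best_angle = -1.0, 0
    for k in range(4):  # 0, 90, 180, 270 degrees
        rotated = np.ascontiguousarray(np.rot90(img, k))
        # skip orientations where the template no longer fits
        if rotated.shape[0] < tmpl.shape[0] or rotated.shape[1] < tmpl.shape[1]:
            continue
        score = cv2.matchTemplate(rotated, tmpl, cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_score, best_angle = score, 90 * k

    print("best rotation:", best_angle, "degrees (score %.2f)" % best_score)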

2016-11-01 14:42:02 -0600 commented question How to detect rotation angle 0 , -90 ,+90 or 180

It took me 5 seconds to come up with around four ideas and you really have none? Where is your own work?

2016-10-30 09:55:08 -0600 edited question Detecting an object and Defining its position in an image?

Hi forum,

How can I detect an object and define its position in an image using OpenCV in C++?

Do you think a cascade of Haar-like classifiers is the correct approach? If not, what can I do?

Thanks for your help!! Minh.

2016-10-30 04:32:14 -0600 commented question Commercial use of Hough Transforms

You will most probably use another feature like ORB or BRISK, which have similar performance.

2016-10-28 01:43:49 -0600 commented question Commercial use of Hough Transforms

The patent is 50 years old... Only the algorithms in the nonfree modules are patented; the rest of OpenCV can be used commercially.

2016-09-13 09:40:55 -0600 received badge  Nice Answer (source)
2016-09-09 13:35:35 -0600 answered a question Change pixel x,y location with mathematical function

Have a look at cv::remap
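
A small Python sketch of this (the sine-wave displacement is just an arbitrary example function):

    import numpy as np
    import cv2

    img = cv2.imread("input.jpg")  # any test image
    h, w = img.shape[:2]

    # remap reads: dst(y, x) = src(map_y(y, x), map_x(y, x)),
    # so we fill in one source coordinate per destination pixel
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x = xs + 10.0 * np.sin(ys / 20.0)  # horizontal sine-wave shift per row
    map_y = ys

    warped = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)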

2016-09-06 13:54:57 -0600 edited question What does the ValueError : could not broadcast from input array(1) to input array(2) mean?
import numpy as np
import cv2
from matplotlib import pyplot as plt
while(1):
    img=cv2.imread('D:\IMG_0590_1.jpg')    
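    # NOTE: both slices on the next line are empty (start > stop: 278 > 104
    # and 330 > 158), so `ball` has shape (0, 0, 3) and cannot be broadcast
    # into the 174x174 region in the assignment that follows -- hence the ValueError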
    ball = img[278:104, 330:158]
    img[181:355, 100:274] = ball

    cv2.imshow('img',img)

    if cv2.waitKey(10) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()

This is my code but it is giving the error mentioned in the question. What is the problem?

2016-09-06 13:17:14 -0600 commented question Bring an Object that appear far in an Image closer to your view !

Could you post example images of what you want to achieve?

2016-09-02 00:39:30 -0600 commented answer Efficiently passing Mat object in different classes

"whereas saving the image to disk involves multiple RAM and disk accesses." You could write the file to a RamDisk which would already speed up the current implementation by just changing the directory.

2016-09-01 03:19:34 -0600 received badge  Nice Answer (source)
2016-08-31 10:10:12 -0600 answered a question Can a paper printed chessboard affect camera calibration?

I prefer to create my calibration targets from aluminum composites: https://us.whitewall.com/photo-lab/al... They are planar enough, can be rather large and are extremely cheap.

2016-08-30 13:11:06 -0600 commented question Problem with drawmatches()

Please add the code here so that the question can still be useful after the link expires. "falling on image "13380-sony.jpg"." How do you expect someone to know what is happening?

2016-08-29 15:52:43 -0600 commented question Stereo camera calibration gives bad rms error

Please don't post text as an image. (Why not print it, scan it and then post it?) And what is the meaning of the matrices?

2016-08-29 12:51:32 -0600 commented question Stereo camera calibration gives bad rms error

Do you first get the intrinsic calibration for the two cameras?

2016-08-29 03:07:23 -0600 commented answer Speed up for remap with convertMaps

'cv::initUndistortRectifyMap': that should be the inverse function. It creates a map that applies the distortion model to each pixel (the pixel at (x, y) in my dst image is seen at map(x, y) = distort(x, y) in the src image). Thanks for also running an evaluation. Have you also measured (or estimated) the variance across runs? I had some iterations where remap was around 2 or 3 times slower than in the mean case.

2016-08-27 14:37:21 -0600 asked a question Speed up for remap with convertMaps

Hello!

I'm currently working on distorting an image with remap to simulate a camera. This is how I create the maps:

Mat float_distortion_map_x(height_, width_, CV_32FC1);
Mat float_distortion_map_y(height_, width_, CV_32FC1);

Point2f undistorted;
for (int x = 0; x < width_; ++x)
    for (int y = 0; y < height_; ++y)
    {
        undistort_point(Point2f(x, y), undistorted); // wrapper for cv::undistortPoints
        float_distortion_map_x.at<float>(y, x) = undistorted.x;
        float_distortion_map_y.at<float>(y, x) = undistorted.y;
    }

#if 1
    ROS_INFO("NO Conversion");
    float_distortion_map_x.copyTo(distortion_map_1_);
    float_distortion_map_y.copyTo(distortion_map_2_);
#else
    ROS_INFO("CONVERTING");
    distortion_map_1_ = Mat(height_, width_, CV_16SC2);
    distortion_map_2_ = Mat(height_, width_, CV_16UC1);
    convertMaps(float_distortion_map_x, float_distortion_map_y, distortion_map_1_, distortion_map_2_, CV_16SC2, false);
#endif

The results look great: if I run cv::undistort on the result, I get my original image back, so that part should be OK. However, the remap documentation claims a speed-up of around 2x for the converted maps, which I cannot reproduce. I called remap with different interpolation types; the run times for a VGA image were (mean over 100 conversions):

> LANCZOS4: 33ms,  
> Linear: 0.5ms, 
> Cubic: 1.3ms

But the times were the same for the original maps and the converted ones; I could not see any difference. Has anyone else tested this claim and found something similar?
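
For reference, the same comparison can be scripted in Python (map contents and sizes here are placeholders, not the distortion model from the code above):

    import time
    import numpy as np
    import cv2

    h, w = 480, 640  # VGA
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x, map_y = xs + 5.0, ys  # trivial placeholder maps
    fixed1, fixed2 = cv2.convertMaps(map_x, map_y, cv2.CV_16SC2)

    img = np.random.randint(0, 256, (h, w, 3), np.uint8)
    for name, m1, m2 in (("float maps", map_x, map_y),
                         ("fixed-point maps", fixed1, fixed2)):
        t0 = time.perf_counter()
        for _ in range(100):
            cv2.remap(img, m1, m2, cv2.INTER_LINEAR)
        print(name, (time.perf_counter() - t0) / 100.0, "s per remap")
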

2016-08-22 02:51:38 -0600 answered a question identifier 'imshow' and 'waitkey' is undefined

imshow and waitKey are in the highgui module: highgui, so you have to include that header (opencv2/highgui/highgui.hpp) and link against the highgui library.

2016-08-21 07:41:42 -0600 edited question for c in cnts: NameError: name 'cnts' is not defined
# USAGE
# python motion_detector.py
# python motion_detector.py --video videos/example_01.mp4

# import the necessary packages
import argparse
import datetime
import imutils
import time
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
args = vars(ap.parse_args())

# if the video argument is None, then we are reading from webcam
if args.get("video", None) is None:
    camera = cv2.VideoCapture(0)
    time.sleep(0.25)

# otherwise, we are reading from a video file
else:
    camera = cv2.VideoCapture(args["video"])

# initialize the first frame in the video stream
firstFrame = None

# loop over the frames of the video
while True:
    # grab the current frame and initialize the occupied/unoccupied
    # text
    (grabbed, frame) = camera.read()
    text = "Unoccupied"

    # if the frame could not be grabbed, then we have reached the end
    # of the video
    if not grabbed:
        break

    # resize the frame, convert it to grayscale, and blur it
    frame = imutils.resize(frame, width=500)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    # if the first frame is None, initialize it
    if firstFrame is None:
        firstFrame = gray
        continue

    # compute the absolute difference between the current frame and
    # first frame
    frameDelta = cv2.absdiff(firstFrame, gray)
    thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]

    # dilate the thresholded image to fill in holes, then find contours
    # on thresholded image
    thresh = cv2.dilate(thresh, None, iterations=2)
    _, contours, _ = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
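    # NOTE: the result is stored in `contours`, but the loop below iterates
    # over `cnts`, which is never defined -- that mismatch is the NameError
    # from the question title (rename one of the two to fix it)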

    # loop over the contours
    for c in cnts:
        # if the contour is too small, ignore it
        if cv2.contourArea(c) < args["min_area"]:
            continue

        # compute the bounding box for the contour, draw it on the frame,
        # and update the text
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        text = "Occupied"

    # draw the text and timestamp on the frame
    cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
                (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

    # show the frame and record if the user presses a key
    cv2.imshow("Security Feed", frame)
    cv2.imshow("Thresh", thresh)
    cv2.imshow("Frame Delta", frameDelta)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key is pressed, break from the loop
    if key == ord("q"):
        break

# cleanup the camera and close any open windows
camera.release()
cv2.destroyAllWindows()
2016-08-20 05:59:11 -0600 answered a question How to remove green color i.e. to set it to 0 in an image?

You could try to use split and merge: Split/Merge

Split your image, call setTo(0) on your green channel, and merge the channels again.
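
A minimal sketch of that in Python (the file name is just a placeholder; in the Python bindings, g[:] = 0 plays the role of setTo(0)):

    import cv2

    img = cv2.imread("input.jpg")   # BGR image
    b, g, r = cv2.split(img)        # split into single-channel images
    g[:] = 0                        # zero out the green channel
    result = cv2.merge((b, g, r))   # reassemble without green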

2016-08-19 10:51:55 -0600 edited answer Color to a particular OpenCV location