
Prototype's profile - activity

2020-04-24 10:37:25 -0600 received badge  Popular Question (source)
2020-03-10 03:03:02 -0600 received badge  Notable Question (source)
2019-10-18 10:03:00 -0600 received badge  Popular Question (source)
2018-04-27 09:20:45 -0600 received badge  Popular Question (source)
2017-07-12 09:53:53 -0600 commented answer In k means clustering, how do I reconstruct just a part of the image?

@LBerger, I was able to achieve k-means in Python. How do I do the rest?

2017-07-12 09:35:13 -0600 commented answer In k means clustering, how do I reconstruct just a part of the image?

@LBerger How do I achieve the same in Python? Please help me out.

2017-03-03 19:00:38 -0600 asked a question opencv python k-means separate all the colours into different images

The following is the OpenCV Python code I use to get the clustered image. After clustering, how do I separate each colour into a different image, i.e. one cluster in one image and the others in separate images? If there is a way to put all the colours in one image one after the other, like in the form of a bar graph, that would be great too. A rough sketch of the kind of split I am after is below the code. Please help me as soon as possible.

import numpy as np
import cv2

cap = cv2.VideoCapture(1)
cap.set(3, 160)   # frame width
cap.set(4, 120)   # frame height

# Grab a few frames so the camera can settle, then keep the last one
for _ in range(5):
    cap.read()
ret, frame = cap.read()

cv2.imshow("original", frame)

# Flatten the image into an N x 3 array of pixels
Z = frame.reshape((-1, 3))

# convert to np.float32
Z = np.float32(Z)

# define criteria, number of clusters (K) and apply kmeans()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 3
ret, label, center = cv2.kmeans(Z, K, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)

# Now convert back into uint8, and rebuild the clustered image
center = np.uint8(center)
res = center[label.flatten()]
res2 = res.reshape(frame.shape)

cv2.imshow("clustered", res2)
cv2.waitKey(0)
cv2.destroyAllWindows()

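For clarity, this is a rough, untested sketch of the per-cluster split I am after (it continues from the code above and reuses frame, label and K; the window names are just placeholders):

# Sketch only: show each cluster of the clustered frame as its own image.
labels_2d = label.reshape(frame.shape[:2])          # one cluster index per pixel
for k in range(K):
    mask = np.uint8(labels_2d == k) * 255           # 255 where the pixel belongs to cluster k
    cluster_img = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow("cluster %d" % k, cluster_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
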
2017-03-03 12:30:57 -0600 asked a question k-means clustering: convert code to Python

I have the following code for k-means clustering, but it is in C++ and I need it in Python. I was able to convert just the k-means clustering part into Python. Can someone please convert the part where I access the labels and regenerate the image with just the colors into Python? Thanks a lot in advance.

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

using namespace std;
using namespace cv;

int main()
{
    Mat img = imread("14780105241945453.png", IMREAD_COLOR);
    cout << "Pixels " << img.rows * img.cols << "\n";

    // Convert to Lab, then to float, and flatten the pixels into a vector for kmeans
    Mat src, srcF;
    cvtColor(img, src, CV_BGR2Lab);
    src.convertTo(srcF, CV_32FC3);
    cout << "Pixels " << srcF.rows * srcF.cols << "\n";
    vector<Vec3f> plan;
    plan.assign((Vec3f*)srcF.datastart, (Vec3f*)srcF.dataend);
    cout << "Pixels " << plan.size() << "\n";

    int clusterCount = 3;
    Mat labels;
    Mat centers;
    kmeans(plan, clusterCount, labels, TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 10, 1.0), 3, KMEANS_PP_CENTERS, centers);

    Mat mask;
    namedWindow("Original", WINDOW_NORMAL);
    imshow("Original", img);

    // For each cluster: build a mask from the labels, copy the matching pixels out
    // and keep track of which cluster has the most pixels
    int maxCluster = 0, ind = -1;
    for (int i = 0; i < clusterCount; i++)
    {
        cv::Mat cloud = (labels == i);
        namedWindow(format("Cluster %d", i), WINDOW_NORMAL);
        Mat result = Mat::zeros(img.rows, img.cols, CV_8UC3);

        if (cloud.isContinuous())
            mask = cloud.reshape(0, img.rows);
        else
            cout << "error";

        int m = countNonZero(mask);
        if (m > maxCluster)
        {
            maxCluster = m;
            ind = i;
        }
        img.copyTo(result, mask);
        imshow(format("Cluster %d", i), result);
        imwrite(format("Cluster%d.png", i), result);
    }
    cout << "Cluster max is " << ind << " with " << maxCluster << " pixels";
    waitKey();
    return 0;
}

Please help me; thanks a lot in advance.
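In case it helps, this is my very rough guess at the Python equivalent of the label and mask part (untested sketch; I am not sure the cv2.kmeans call is exactly right, and the filename is the one from the C++ code):

import numpy as np
import cv2

img = cv2.imread("14780105241945453.png", cv2.IMREAD_COLOR)
lab = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)
Z = np.float32(lab.reshape(-1, 3))

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 3
ret, labels, centers = cv2.kmeans(Z, K, None, criteria, 3, cv2.KMEANS_PP_CENTERS)

labels_2d = labels.reshape(img.shape[:2])
maxCluster, ind = 0, -1
for i in range(K):
    mask = np.uint8(labels_2d == i) * 255           # same role as (labels == i) + reshape in the C++ code
    result = cv2.bitwise_and(img, img, mask=mask)   # same role as img.copyTo(result, mask)
    cv2.imwrite("Cluster%d.png" % i, result)
    m = cv2.countNonZero(mask)
    if m > maxCluster:
        maxCluster, ind = m, i
print("Cluster max is %d with %d pixels" % (ind, maxCluster))
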

2017-01-16 02:08:15 -0600 answered a question Does the resolution of an image affect the distortion coefficients

@Tetragramm does the camera matrix vary for images captured in auto mode from different heights?

2017-01-16 02:06:31 -0600 asked a question Camera Matrix

Are fx, fy, cx and cy constant for a camera, or do they differ from picture to picture?

2016-12-14 06:15:15 -0600 commented question Does the resolution of an image affect the distortion coefficients

@Tetragramm and @berak, can you please help me with this?

2016-12-14 06:14:28 -0600 asked a question Does the resolution of an image affect the distortion coefficients

What parameters do the distortion coefficients depend on? If I take one image at 2 MP and another at 12 MP with the same camera, will the distortion coefficients change?

2016-12-14 02:02:14 -0600 asked a question Live video capture using Nikon D3300 camera id

I am trying to use video capture, mainly for camera calibration with OpenCV. The program asks for a camera ID; 0 works for my webcam. How do I use the Nikon D3300 for calibration, and which camera ID should I enter for it? The program is the sample found in the official documentation. Camera IDs 1, 2 and 3 are not working.
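For reference, this is roughly how I am probing which device IDs OpenCV can open (illustrative sketch only):

import cv2

# Try the first few device indices and report which ones actually open.
for cam_id in range(4):
    cap = cv2.VideoCapture(cam_id)
    print("camera id %d: %s" % (cam_id, "opened" if cap.isOpened() else "not available"))
    cap.release()
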

2016-12-13 02:07:08 -0600 received badge  Critic (source)
2016-12-13 02:06:53 -0600 received badge  Supporter (source)
2016-12-13 01:53:09 -0600 commented answer Camera calibration does not complete for my set of images.

@Tetragramm The image works when I enter 4x4 and not 7x7. But when I crop the image a little more, it fails.

2016-12-13 00:55:44 -0600 commented answer Camera calibration does not complete for my set of images.

@Tetragramm I did that. Now the problem is that the code keeps running and after a few minutes it either terminates or shows the following error: (-215) src.size == dst.size && src.channels() == dst.channels() in function cvConvertScale. I gave width 7 and height 7. Please help me.

2016-12-12 11:22:40 -0600 commented answer How do I find the distortion coefficients of a camera from a certain height?
2016-12-12 11:21:20 -0600 asked a question Camera calibration does not complete for my set of images.

I am trying to calibrate my camera using the sample code found at samples/cpp/tutorial_code/calib3d/camera_calibration/ (the official documentation). It runs for the image provided with it, but when I run it on my own images it shows no output. I have attached the image and the data I entered below.

For this particular image, the program runs when width and height are 4 and 4, but it distorts the image even more. Clearly it should work for 7x7, yet the code crashes for 7x7.

Is there anything I am missing?

Can you please help me solve this problem? (A small corner-check sketch I am using while debugging is after the settings file below.)

The data I entered in the settings (VID) file is as follows:

<?xml version="1.0"?>
<opencv_storage>
<Settings>
  <!-- Number of inner corners per a item row and column. (square, circle) -->
  <BoardSize_Width>7</BoardSize_Width>
  <BoardSize_Height>7</BoardSize_Height>

  <!-- The size of a square in some user defined metric system (pixel, millimeter)-->
  <Square_Size>50</Square_Size>

  <!-- The type of input used for camera calibration. One of: CHESSBOARD CIRCLES_GRID ASYMMETRIC_CIRCLES_GRID -->
  <Calibrate_Pattern>"CHESSBOARD"</Calibrate_Pattern>

  <!-- The input to use for calibration. 
        To use an input camera -> give the ID of the camera, like "1"
        To use an input video  -> give the path of the input video, like "/tmp/x.avi"
        To use an image list   -> give the path to the XML or YAML file containing the list of the images, like "/tmp/circles_list.xml"
        -->
  <Input>"/home/manohar/VID.xml"</Input>
  <!--  If true (non-zero) we flip the input images around the horizontal axis.-->
  <Input_FlipAroundHorizontalAxis>0</Input_FlipAroundHorizontalAxis>

  <!-- Time delay between frames in case of camera. -->
  <Input_Delay>100</Input_Delay>    

  <!-- How many frames to use, for calibration. -->
  <Calibrate_NrOfFrameToUse>10</Calibrate_NrOfFrameToUse>
  <!-- Consider only fy as a free parameter, the ratio fx/fy stays the same as in the input cameraMatrix. 
       Use or not setting. 0 - False Non-Zero - True-->
  <Calibrate_FixAspectRatio> 1 </Calibrate_FixAspectRatio>
  <!-- If true (non-zero) tangential distortion coefficients  are set to zeros and stay zero.-->
  <Calibrate_AssumeZeroTangentialDistortion>1</Calibrate_AssumeZeroTangentialDistortion>
  <!-- If true (non-zero) the principal point is not changed during the global optimization.-->
  <Calibrate_FixPrincipalPointAtTheCenter> 1 </Calibrate_FixPrincipalPointAtTheCenter>

  <!-- The name of the output log file. -->
  <Write_outputFileName>"out_camera_data.xml"</Write_outputFileName>
  <!-- If true (non-zero) we write to the output file the feature points.-->
  <Write_DetectedFeaturePoints>1</Write_DetectedFeaturePoints>
  <!-- If true (non-zero) we write to the output file the extrinsic camera parameters.-->
  <Write_extrinsicParameters>1</Write_extrinsicParameters>
  <!-- If true (non-zero) we show after calibration the undistorted images.-->
  <Show_UndistortedImage>1</Show_UndistortedImage>
  <!-- If true (non-zero) will be used fisheye camera model.-->
  <Calibrate_UseFisheyeModel>0</Calibrate_UseFisheyeModel>
  <!-- If true (non-zero) distortion coefficient k1 will be equals to zero.-->
  <Fix_K1>0</Fix_K1>
  <!-- If true (non-zero) distortion coefficient k2 will be equals to zero.-->
  <Fix_K2>0</Fix_K2>
  <!-- If true (non-zero) distortion coefficient k3 will be equals to zero.-->
  <Fix_K3>0</Fix_K3>
  <!-- If true (non-zero) distortion coefficient k4 will be equals to zero.-->
  <Fix_K4>1</Fix_K4>
  <!-- If true (non-zero) distortion coefficient k5 will be equals to zero.-->
  <Fix_K5>1</Fix_K5>
</Settings>
</opencv_storage>
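
As mentioned above, I am also double-checking whether OpenCV can actually find 7x7 inner corners in my image with a small script (rough sketch; the filename is a placeholder). My understanding is that BoardSize counts inner corners, so a 7x7 setting needs a board of 8x8 squares.

import cv2

img = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)   # placeholder filename
for size in [(7, 7), (4, 4)]:
    found, corners = cv2.findChessboardCorners(img, size)
    print("pattern %dx%d found: %s" % (size[0], size[1], found))
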
2016-12-02 11:14:28 -0600 commented question I get an error during the camera calibration

@mshabunin Thanks a lot for the link. I am new to coding and there are too many files; I don't know which one to use. If it's not too much to ask, could you please give me instructions on how to use it?

2016-12-01 03:27:10 -0600 asked a question I get an error during the camera calibration

This is the error I get when I run the sample calibration code from the official documentation link samples/cpp/tutorial_code/calib3d/camera_calibration/

what(): /home/manohar/opencv-3.1.0/modules/core/src/convert.cpp:5475: error: (-215) src.size == dst.size && src.channels() == dst.channels() in function cvConvertScale

I entered the correct number of squares and the other required details. Please help me with this; thanks in advance.

2016-11-28 11:00:46 -0600 commented answer How do I find the distortion coefficients of a camera from a certain height?

@Tetragramm Thanks a lot. I will look into it.

2016-11-28 10:48:15 -0600 commented answer How do I find the distortion coefficients of a camera from a certain height?

@Tetragramm Thanks a lot. Are you sure that the distortion coefficients are independent of height? I was told they are not, and my assignment was to find a way to figure out the distortion coefficients of our camera from that height.

2016-11-28 09:57:48 -0600 asked a question How do I find the distortion coefficients of a camera from a certain height?

I need to find the distortion coefficients of a camera that takes aerial shots from around 180 meters above the ground. How do I do that? Is there a relation between the distortion coefficients obtained at a lower height and those obtained at a higher altitude? Please help me out here.

2016-11-26 03:09:32 -0600 received badge  Enthusiast
2016-11-25 04:53:33 -0600 asked a question How do I threshold a range of pixels of an image using Python?

I need to convert the image to binary and then set a range of pixel values to a particular color. How do I go about this? Code if possible, please. Thanks in advance.
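To make the question concrete, this is roughly what I am attempting (sketch only; the filename and the threshold values are placeholders):

import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)    # placeholder filename

# Binarize: pixels above 127 become 255, everything else becomes 0.
ret, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# Recolor one intensity range of the original image in a color copy of the binary result.
color_img = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
color_img[(img >= 100) & (img <= 200)] = (0, 0, 255)   # paint that range red
cv2.imwrite("result.png", color_img)
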

2016-11-15 08:24:34 -0600 commented answer Opencv color ranges for hsv

@berak, I am terribly sorry to disturb you again, but the code you gave me has no output. I am new to OpenCV and I really want to learn it.

2016-11-15 08:20:26 -0600 commented answer Opencv color ranges for hsv

@berak In my image I have both black and another color. How do I access just the color of the pixels that are not black? I do not know the coordinates of the pixel.

2016-11-14 05:27:34 -0600 commented answer Opencv color ranges for hsv

@berak thanks. But how do I find the maximum and minimum of a range? For example, the maximum and minimum range for violet?

2016-11-14 05:03:09 -0600 asked a question Opencv color ranges for hsv

What are the HSV ranges for the colors black, blue, red, green, orange, grey, yellow, purple, brown and white? I need to check whether my HSV image is in one of these ranges and print the color. How do I find the ranges of these colors in HSV? If you have them, please post them; thanks in advance.
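To show what I mean by checking a range, this is roughly what I am doing for a single color (the bounds below are placeholders, not the real ranges I am asking about):

import numpy as np
import cv2

img = cv2.imread("input.png")                          # placeholder filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Placeholder bounds (roughly blue); in OpenCV, hue runs 0..179 and S, V run 0..255.
lower = np.array([100, 100, 100], dtype=np.uint8)
upper = np.array([130, 255, 255], dtype=np.uint8)
mask = cv2.inRange(hsv, lower, upper)

if cv2.countNonZero(mask) > 0:
    print("some pixels fall inside this range")
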

2016-11-05 12:56:56 -0600 asked a question color detection of an image in opencv using c++

I have an image with a black background and one other color. How do I print the color of the image in the console? I basically want to convert the image to HSV, set boundary values, and if the image is within the given range it should return a value to another variable; if that is true, an if statement should print the color of the image in the console. How do I go about it? I am new to OpenCV and, if possible, please help me with the code.

2016-11-03 09:18:19 -0600 asked a question I have an image which has black and another color. How do I check if it is within range and print the color?

I basically want to convert the image to HSV and then set boundaries. If it is within the given boundary it should print the color. For example, if it is within the HSV region of red that I define, it should return a boolean value like 1 or 0 to another variable, and using an if statement I should be able to check and print it. How do I go about it? If possible, please help me with the code. Thanks in advance.

Sample image: (attached image)

2016-11-02 11:35:49 -0600 received badge  Editor (source)
2016-11-02 11:34:46 -0600 commented answer In k means clustering, how do I reconstruct just a part of the image?

Thanks a lot. It's great!

2016-11-01 10:02:44 -0600 commented question In k means clustering, how do I reconstruct just a part of the image?

@berak, how do I assign red to one cluster idx and white to another? Labels? If so, please let me know how; I am new to OpenCV. In the end I basically need two images: one with the red circle, where the white R appears black, and another with just the white R, where everything else appears black.

2016-11-01 09:38:18 -0600 asked a question In k means clustering, how do I reconstruct just a part of the image?

I performed k-means clustering with the code below. How do I access the largest cluster? (A rough sketch of what I mean is after the code.)

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include<iostream>
using namespace cv;

int main( int argc, char** argv )
{
Mat newbgr;
Mat src;
Mat image = imread( "/home/manohar/Downloads/images/R-3.PNG", 1 );
imshow( "Original", image );

 cvtColor(image, src, CV_BGR2Lab);
 Mat samples(src.rows * src.cols, 3, CV_32F);
for( int y = 0; y < src.rows; y++ )
for( int x = 0; x < src.cols; x++ )
  for( int z = 0; z < 3; z++)
    samples.at<float>(y + x*src.rows, z) = src.at<Vec3b>(y,x)[z];


  int clusterCount = 3;
 Mat labels;
 int attempts = 10;
Mat centers;
kmeans(samples, clusterCount, labels, TermCriteria(CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 10000, 0.0001), attempts, KMEANS_PP_CENTERS, centers );




Mat new_image( src.size(), src.type() );
for( int y = 0; y < src.rows; y++ )
for( int x = 0; x < src.cols; x++ )
{
   int cluster_idx = labels.at<int>(y + x*src.rows,0);
  new_image.at<Vec3b>(y,x)[0] = centers.at<float>(cluster_idx, 0);
  new_image.at<Vec3b>(y,x)[1] = centers.at<float>(cluster_idx, 1);
  new_image.at<Vec3b>(y,x)[2] = centers.at<float>(cluster_idx, 2);
}


imshow( "K means", new_image );

waitKey( 0 );
 return 0;
}
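
To make clearer what I mean by the largest cluster, here is the idea expressed as a small Python/numpy sketch (illustrative only; labels would be the per-pixel cluster indices that kmeans fills in):

import numpy as np

def largest_cluster(labels, cluster_count):
    # Count how many pixels fall into each cluster and return the biggest one.
    counts = np.bincount(labels.flatten(), minlength=cluster_count)
    idx = int(np.argmax(counts))
    return idx, int(counts[idx])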