2019-07-12 09:35:07 -0600 | commented question | hierarchical clustering The clustering, when it works, ends up returning only one cluster. My intuition is that this might be due to the choice |
2019-07-12 08:25:56 -0600 | commented question | hierarchical clustering Finally got it working (cv::Mat1f centers(initiNCenters, 3)): I replaced 3 with the descriptor's dimension. It was 3 bec |
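A minimal sketch of the corrected setup described in the comment above, assuming the descriptors sit one per row in a CV_32F matrix; the names descriptors, initNCenters and clusterDescriptors are placeholders, not the original code:

    #include <opencv2/core.hpp>
    #include <opencv2/flann.hpp>

    // Cluster feature descriptors with FLANN's hierarchical k-means.
    // centers needs one column per descriptor dimension (not the 3 used by the
    // point-cloud examples) and one row per requested cluster.
    int clusterDescriptors(const cv::Mat1f& descriptors, int initNCenters)
    {
        cv::Mat1f centers(initNCenters, descriptors.cols);
        cvflann::KMeansIndexParams params(32, 11, cvflann::FLANN_CENTERS_KMEANSPP);
        // Returns the number of clusters actually computed, which may be smaller
        // than initNCenters (it is rounded down to the form (branching-1)*k + 1).
        return cv::flann::hierarchicalClustering< cv::flann::L2<float> >(descriptors, centers, params);
    }

If only one cluster comes back, it is worth checking that the descriptors really are CV_32F and that initNCenters is large enough to survive that rounding.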
2019-07-12 08:12:50 -0600 | commented question | hierarchical clustering I just tried using Mat(w,h, CV_32F) and this did not fix the issue... I'll keep searching. Thanks again. |
2019-07-12 05:02:56 -0600 | commented question | hierarchical clustering I think I know where the issue is: the sample dimension is too big. I have reduced the dimension from 48 to 5 and it run |
2019-07-12 04:13:13 -0600 | commented question | hierarchical clustering I think it could be the types (my descriptor vector is of type double and the sample required for the hierarchical clust |
2019-07-12 04:08:56 -0600 | commented question | hierarchical clustering Thanks for your answer. I will try to update my OpenCV version. Meanwhile, below is how I fill my samples: for(unsigned |
2019-07-12 04:07:05 -0600 | commented question | hierarchical clustering Thanks for your answer. I will try to update my OpenCV version. Meanwhile, below is how I fill my samples: // fill in s |
2019-07-12 03:29:33 -0600 | asked a question | hierarchical clustering I am trying to cluster features using the flann hierarchicalClustering function. The features' d |
2017-06-04 21:00:56 -0600 | commented answer | distance to object from stereo pair Hi LBerger, I have applied your solution but I get inconsistent results. Regarding disp2: Mat disp2(disparity.rows,disparity.cols,disparity.type,Scalar(10000)); since the StereoBM/SGBM disparity image is of type CV_16S and stores fixed-point values scaled by 16, the value 10000 lies outside the valid disparity range. I obtain different z values (both positive and negative) from xyz depending on the value of ndisparities. Why? Also, I don't know why I always obtain -inf and +inf for the minimum and maximum values of x and y respectively. The reprojectImageTo3D results are really weird (for instance, I get the same z for two different stereo pairs)... Thanks for your help. |
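For reference, a short sketch of the conversion that normally precedes reprojectImageTo3D, assuming disparity is the raw CV_16S output of StereoBM/StereoSGBM and Q comes from stereoRectify; with handleMissingValues=true, OpenCV assigns invalid pixels a Z of 10000, which is the sentinel the Scalar(10000) in the answer above refers to:

    #include <opencv2/core.hpp>
    #include <opencv2/calib3d.hpp>

    // Convert the fixed-point CV_16S disparity (scaled by 16) to float disparities,
    // then reproject to 3D. xyz is CV_32FC3: one (X, Y, Z) triple per pixel of the
    // left rectified image.
    cv::Mat disparityToXYZ(const cv::Mat& disparity, const cv::Mat& Q)
    {
        cv::Mat dispF;
        disparity.convertTo(dispF, CV_32F, 1.0 / 16.0);   // undo the x16 scaling
        cv::Mat xyz;
        cv::reprojectImageTo3D(dispF, xyz, Q, true);      // missing pixels -> Z = 10000
        return xyz;
    }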
2017-06-04 11:43:05 -0600 | commented answer | distance to object from stereo pair Thank you. |
2017-06-03 13:09:56 -0600 | commented answer | distance to object from stereo pair Hi LBerger, thanks a lot. I'll try that. My only concern is the mask: I guess I'll have to compute it from the left image? The disparity doesn't seem to fit either image of the stereo pair... Thanks! |
2017-06-02 16:24:54 -0600 | commented answer | distance to object from stereo pair Thanks for your reply. I have implemented the stereo_match example with my data (camera parameters and stereo pair). I managed to get a 3D point cloud (I didn't check whether the points are consistent, though, since I do not have viz). My question is: how do I measure the distance to a particular object in the scene? In the color images this object is easy to recognize since it has a distinct color; however, I don't know how to find it in the 3D point cloud or in the disparity image so that I can measure a distance to it. Is there a way to map one of the views onto the cloud in order to isolate the 3D points belonging to this object? Should I filter the object at the very beginning in the stereo pair and reconstruct the 3D model of this object alone? (I would lose most of the ... (more) |
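One possible way to do what is asked above, sketched under the assumption that the object has a distinct colour in the left rectified image and that xyz is the CV_32FC3 output of reprojectImageTo3D computed on the matching disparity map (the HSV bounds lo/hi and the function name are placeholders):

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>
    #include <algorithm>
    #include <vector>

    // Segment the distinctly coloured object in the left rectified view, then use the
    // mask to collect its 3D points and return the median Z as the distance estimate.
    double medianDepthOfObject(const cv::Mat& leftRectified, const cv::Mat& xyz,
                               const cv::Scalar& lo, const cv::Scalar& hi)
    {
        cv::Mat hsv, mask;
        cv::cvtColor(leftRectified, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, lo, hi, mask);                  // 255 where the object colour matches

        std::vector<float> depths;
        for (int y = 0; y < mask.rows; ++y)
            for (int x = 0; x < mask.cols; ++x)
                if (mask.at<uchar>(y, x)) {
                    float z = xyz.at<cv::Vec3f>(y, x)[2];
                    if (z > 0.f && z < 10000.f)          // skip invalid / sentinel points
                        depths.push_back(z);
                }

        if (depths.empty()) return -1.0;
        std::nth_element(depths.begin(), depths.begin() + depths.size() / 2, depths.end());
        return depths[depths.size() / 2];                // median Z of the object's points
    }

The mask has to be computed on the same rectified left view the disparity was computed from, so that it lines up pixel for pixel with xyz.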
2017-05-30 17:50:18 -0600 | asked a question | distance to object from stereo pair Dear all, I have a pair of stereo images provided by a stereo sensor (the ZED camera). I do not have access to the camera (no camera parameters were provided). I am asked to measure the distance to a distinct object in the images using the stereo pair. I have never performed 3D reconstruction from stereo pairs; however, I do not think we can infer actual distances to an object from a stereo pair without the parameters of the cameras. I guess I can only provide a depth map. My questions are: 1. Is it possible to obtain the distance to an object in the scene from a stereo pair without having the camera parameters? Thank you for your help. |
2015-11-18 05:09:43 -0600 | received badge | ● Enthusiast |
2015-11-05 14:44:24 -0600 | commented question | ellipse approximation of blob using contours moments : confusing orientation angle @L.Berger: please check the poster's ID of the answer you are mentioning; you can see that it is not me. Anyway, I came back to comment on my original question: after reading more carefully drVit's document that you posted, I managed to get consistent values of the orientation angle by taking into account the signs of the numerator and denominator in the atan expression. Thanks. |
2015-11-02 11:44:52 -0600 | commented question | ellipse approximation of blob using contours moments : confusing orientation angle Another thing is confusing me: the coordinate system used by OpenCV on Linux for the contours, moments and drawing functions. |
2015-11-02 11:41:16 -0600 | commented question | ellipse approximation of blob using contours moments : confusing orientation angle Hi L.Berger. I had already read it before posting my question; actually it gives the same formula. Thank you anyway. |
2015-11-02 11:10:15 -0600 | received badge | ● Editor (source) |
2015-11-02 11:07:20 -0600 | asked a question | ellipse approximation of blob using contours moments : confusing orientation angle Dear all, I want to draw the ellipse approximating an isolated blob (the largest contour found with findContours). Using the formulas of this paper: http://goo.gl/yvcUO5 for the major and minor axes, I obtain consistent axis lengths. However, using the formula from the same paper (and which I find almost everywhere) to compute the orientation angle, I obtain odd results. This is the formula: theta = 0.5 * atan(2*mu11 / (mu20 - mu02)); As long as the blob (which represents a human silhouette) is not close to horizontal, the formula returns a consistent value of the orientation angle, but as soon as the blob becomes almost horizontal the sign of the orientation angle flips suddenly. I know the reason for this behavior. If we refer to the formula above: when the blob is not horizontal, mu20 is smaller than mu02. This remains true while the blob tilts counter-clockwise, until it reaches an orientation of about 45 degrees. Beyond that point the pixel distribution of the blob becomes horizontal rather than vertical and mu20 becomes larger than mu02, which flips the sign of the angle. I don't know if this formula is correct. Thanks a lot for your help. |
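The sign flip described above goes away if the two-argument arctangent is used, since atan2 keeps the signs of the numerator and denominator separate; this is essentially the quadrant handling mentioned in the comments further up. A minimal sketch:

    #include <opencv2/imgproc.hpp>
    #include <cmath>
    #include <vector>

    // Orientation of a blob from the central moments of its contour.
    // Using atan2 instead of atan avoids the sign flip when mu20 - mu02 changes sign.
    double blobOrientation(const std::vector<cv::Point>& contour)
    {
        cv::Moments m = cv::moments(contour);
        return 0.5 * std::atan2(2.0 * m.mu11, m.mu20 - m.mu02);  // angle in (-pi/2, pi/2]
    }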
2015-11-02 10:32:27 -0600 | commented answer | human detection from still camera Ok. Thanks. |
2015-10-28 15:40:52 -0600 | commented question | human detection from still camera Ok. I came to the same conclusion. Thanks a lot for your help. |
2015-10-27 10:06:25 -0600 | commented question | human detection from still camera Thanks Steven. What I actually want is to treat the detected contours as blobs so that I can process the pixels of the whole area inside each contour. Let's say, for instance, that I want to track a person based on the colors of his/her clothes; I will need the pixels that are contained inside the contour. Do you have an idea of how to do that? |
2015-10-20 06:13:36 -0600 | commented question | human detection from still camera Thanks Steven for your answer. This raises the question: when should we use SimpleBlobDetector rather than findContours? |
2015-10-20 05:01:41 -0600 | received badge | ● Student (source) |
2015-10-20 00:32:06 -0600 | asked a question | human detection from still camera Dear all, I need to detect human silhouettes in a sequence provided by a video-surveillance camera. The first step of my algorithm is motion detection based on background subtraction. I am done with this, and I obtain a binary mask containing the object of interest plus some noise. I want to detect the object of interest (the human silhouette) and track it (the final objective is to detect falls). To this end, I need to detect the blobs in the binary mask, find the largest one by area, and access its pixels (in order to perform further analysis on its texture). Should I use OpenCV's contour detection functions or SimpleBlobDetector? I have tried the latter, but it seems that it only returns keypoints (the centers of the blobs) and I can't get access to the blob pixels. Please, can you give me your opinion on this and on the SimpleBlobDetector class? Thanks for your answers. |
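A sketch of the findContours route asked about above: build a filled mask of the largest blob so that every pixel inside the silhouette can be accessed (for instance to sample clothing colour in the corresponding colour frame). The function name is a placeholder:

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Return a mask whose non-zero pixels are exactly the pixels of the largest blob
    // in the binary foreground mask produced by background subtraction.
    cv::Mat largestBlobMask(const cv::Mat& binaryMask)
    {
        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(binaryMask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        cv::Mat blobMask = cv::Mat::zeros(binaryMask.size(), CV_8UC1);
        if (contours.empty())
            return blobMask;

        // pick the contour with the largest area
        int best = 0;
        double bestArea = 0.0;
        for (int i = 0; i < (int)contours.size(); ++i) {
            double a = cv::contourArea(contours[i]);
            if (a > bestArea) { bestArea = a; best = i; }
        }

        // thickness = -1 fills the contour, so interior pixels are included in the mask
        cv::drawContours(blobMask, contours, best, cv::Scalar(255), -1);
        return blobMask;
    }

The mask can then be passed to Mat::copyTo or mean()/meanStdDev() to analyse only the pixels belonging to the silhouette.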
2014-08-11 10:14:08 -0600 | commented question | standalone independant executable using code::blocks with Windows8 Thank you. I haven't managed to solve the issue so far; if I find a fix I will post it here. |
2014-08-09 14:08:28 -0600 | commented question | standalone independant executable using code::blocks with Windows8 ... and: C:\opencv\build\x86\mingW\lib\libopencv_core249.a(persistence.cpp.obj):persistence.cpp:(.text$_ZL6icvEofP13CvFileStorage+0x40): undefined reference to `gzeof' I am not sure, but I wonder whether it is due to the order in which the libs were added to the linker... In the current configuration I have entered them in the default order CMake built them. Please, in which order should the OpenCV '.a' (static lib) files be added to the linker? Finally, in the OpenCV tutorials I have read, people simply copy all the files of the 'lib' directory into the linker settings for both the Release and Debug modes. Is it normal to use the exact same list of libs for Debug and Release? I don't think so. L. |
2014-08-09 14:07:39 -0600 | commented question | standalone independant executable using code::blocks with Windows8 Hi Steven, I have added -static-libgcc -static-libstdc++ to the linker options/settings. Since my last post I have tried a number of settings and combinations of solutions based on my research on Google. It still does not work, but at least I have a better idea of the issue. The latest thing I did was, in Release mode, to remove all the '.dll.a' files from the linker settings, leaving only the '.a' files. I did this to force C::B to use only the static libraries (to make sure C::B performs static linking and that the '.exe' therefore will not ask for DLLs). This revealed the errors below: C:\opencv\build\x86\mingW\lib\libopencv_core249.a(persistence.cpp.obj):persistence.cpp:(.text$_ZL12icvCloseFileP13CvFileStorage+0x4e): undefined reference to `gzclose' |
2014-08-08 06:00:10 -0600 | commented question | standalone independant executable using code::blocks with Windows8 It was already there (I had added it before). |
2014-08-08 05:42:02 -0600 | commented question | standalone independant executable using code::blocks with Windows8 @steven : I am working on it... |
2014-08-08 05:01:29 -0600 | commented question | standalone independant executable using code::blocks with Windows8 @steven: Yes, I did add the libs to the linker options, otherwise it would never compile. As I said, the project compiles just fine... |
2014-08-08 04:59:13 -0600 | commented question | standalone independant executable using code::blocks with Windows8 I remember that some time ago I managed to generate an independent executable that uses OpenCV, but that was with Visual C++. You are right to ask whether the Code::Blocks project is correctly configured. But as I said in my first post, I have looked through Code::Blocks' GUI for all possible configuration options, and it seems there are no further settings related to my issue (AFAIK, no option to select static or dynamic linking). It seems to depend on the nature of the libs (DLLs or static libs) added to the "link libraries" list of the "linker settings" in the project build options of Code::Blocks, but I am not sure yet. |
2014-08-08 04:36:42 -0600 | commented question | standalone independant executable using code::blocks with Windows8 Thank you for your answers; the missing DLL is libopencv_core249.dll. Regarding CMake: the first time I built OpenCV I used the default CMake settings, for which the BUILD_SHARED_LIBS flag was ON; when the issue arose I rebuilt it with BUILD_SHARED_LIBS OFF. I noticed that this did not change anything in the contents of the "lib" folder (which contains 87 ".dll.a" and ".a" files). I insist on the fact that I want to generate an independent executable (I am learning how to generate a standalone application that can be used by a third-party user who does not have OpenCV, Code::Blocks or any other programming tools). Thank you. |
2014-08-08 03:48:48 -0600 | asked a question | standalone independant executable using code::blocks with Windows8 Hello, I have developed a C++ console application project with Code::Blocks on Windows using OpenCV 2.4.9. I built OpenCV using CMake (with the BUILD_SHARED_LIBS flag OFF). Regarding Code::Blocks, I downloaded and installed the IDE alone and then installed the MinGW/MSYS compiler. The application runs normally in Debug and in Release mode. However, when I run the released "exe" it reports an error saying that a DLL is missing. Based on this, I came to the conclusion that the compiler was performing dynamic linking. I have looked everywhere in the Code::Blocks GUI for an option that enables static linking, but I did not succeed. I have also looked for help online (googled it) for 2 days, but the solutions I found did not work. Please, does anyone know how to generate a standalone, independent executable (one that will work on another PC, i.e. static linking) for a project that uses OpenCV in Code::Blocks? Thank you for your help! L. |