2019-06-24 04:27:02 -0600 | received badge | ● Famous Question (source) |
2018-03-22 10:16:01 -0600 | received badge | ● Notable Question (source) |
2017-09-21 10:47:38 -0600 | received badge | ● Popular Question (source) |
2016-01-20 08:37:28 -0600 | received badge | ● Nice Answer (source) |
2016-01-15 07:21:03 -0600 | commented answer | Can't create lib files with CMake Your linker is called with an unknown option or, most commonly, with one in the wrong syntax. Usually |
2016-01-15 06:51:23 -0600 | commented answer | Can't create lib files with CMake What do you mean by "same error"? Which one? |
2016-01-15 03:01:22 -0600 | commented answer | Can't create lib files with CMake |
2016-01-14 16:30:11 -0600 | commented answer | opencv mat object thread safety It is difficult to evaluate such things from a single code fragment without a "Minimal, Complete, and Verifiable example". If your program runs with one thread and doesn't run with more, you have interference between them. Taking a copy is one way to prevent this. By the way, if you plan to transfer over TCP, why worry about the time cost of a local copy? |
2016-01-14 16:28:19 -0600 | commented answer | opencv mat object thread safety What do you mean by "essentially read only"? If it is not completely read-only, you have changes, and these might interfere with your threads. That is why I pointed to the typical problem of pointers or references to vector elements being invalidated if reallocation occurs. Read-only should be enforced by |
2016-01-14 10:29:45 -0600 | answered a question | opencv mat object thread safety If |
2016-01-14 10:06:54 -0600 | answered a question | Can't create lib files with CMake The answer is in the question: "Can't create lib files with CMake". You can never make libraries with Usually libraries are installed in You can also use As an example, if we want to build in a subdirectory Compiling takes a lot of time and can be done in parallel, using several cores of your CPU, by |
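The out-of-source build and parallel compile the answer describes can be sketched as follows (the exact commands were stripped from this log, so the directory name and flags here are the conventional ones, not necessarily the answer's):

```shell
# Out-of-source build: keep all generated files out of the source tree.
mkdir -p build            # conventional build-directory name
cd build
cmake ..                  # configure; reads CMakeLists.txt from the parent dir
make -j"$(nproc)"         # compile in parallel on all CPU cores
```

Deleting the `build` directory then gives a completely clean slate without touching the sources.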
2016-01-12 12:08:44 -0600 | answered a question | color detect There are good examples for face detection, and I am not sure if Canny on colour will make sense. But if you want to try: For a good face detection example, look e.g. here. |
2016-01-11 10:18:08 -0600 | received badge | ● Nice Answer (source) |
2016-01-11 09:37:01 -0600 | received badge | ● Teacher (source) |
2016-01-11 07:47:46 -0600 | answered a question | Displaying multiple images You should (not tested) get the same with: But you will not get the same result as your original code, e.g. But take care to be consistent. Lookup tables also offer many ways to introduce faults. |
2016-01-09 06:38:37 -0600 | received badge | ● Student (source) |
2016-01-09 06:30:45 -0600 | asked a question | opencv 3.1 still using FLANN version 1.6.10 from 16 May 2011 ? Experimenting with FLANN as included in OpenCV 3.1 (December 2015), I found the old version 1.6.10 (May 2011). Is there a special reason not to use the latest, 1.8.4 (January 2013)? Looking at the changelog, the authors provided a lot of fixes and enhancements. Is there a reason to keep this old version? Looking at the changelog of OpenCV, FLANN 1.6 was included in 2.3.1 (August 2011) and improved by Pierre-Emmanuel Viel in 3.0 alpha (August 2014). Are those improvements so extensive that it is better not to upgrade? |
2016-01-06 05:15:35 -0600 | received badge | ● Enthusiast |
2016-01-05 11:53:48 -0600 | received badge | ● Supporter (source) |
2016-01-02 12:46:40 -0600 | received badge | ● Editor (source) |
2016-01-02 04:23:06 -0600 | commented answer | Skipping all but the latest frame in VideoCapture Yes thanks. Looks like plenty of effort just to clean a read buffer. But it would be a workaround. |
2016-01-01 14:13:53 -0600 | asked a question | Skipping all but the latest frame in VideoCapture As an answer to the question "Skipping frames in VideoCapture", Will Stewart presented In a loop, the frames of a camera are processed using a very time-consuming function. In the meantime the camera is providing further frames. Thus if the loop is doing the next Some sort of non-blocking Doing multi-threading (like harsha proposed) would surely do the job. But it is like taking a sledgehammer to crack a nut; especially on an embedded system this won't feel good. Processing as many frames of a camera as possible, but each just in time, should be possible on a system designed "with a strong focus on real-time applications." Any further ideas? |