2020-10-21 08:46:04 -0600 | received badge | ● Nice Question (source) |
2018-09-27 14:24:22 -0600 | answered a question | I lose some text while making scanner-like effect dilated_img = cv2.dilate(img, np.ones((7, 7), np.uint8)) bg_img = cv2.medianBlur(dilated_img, 21) |
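The answer above uses dilation followed by a large median blur to estimate the page background. A NumPy-only sketch of the same idea, with naive sliding-window stand-ins for cv2.dilate and cv2.medianBlur (the 3x3 window size and the final inversion step are illustrative choices here, not the original answer's parameters):

```python
import numpy as np

def max_filter(img, k):
    """Naive grayscale dilation: each pixel becomes the max of its k x k window."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = p[y:y + k, x:x + k].max()
    return out

def median_filter(img, k):
    """Naive median blur over a k x k window."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(p[y:y + k, x:x + k])
    return out

# Dark "ink" (low values) on a bright page (high values).
img = np.full((9, 9), 200, dtype=np.uint8)
img[4, 4] = 30  # one ink pixel

bg = median_filter(max_filter(img, 3), 3)  # dilation erases ink; median smooths
# White where the image matches the background estimate, dark where ink differs.
diff = 255 - np.abs(img.astype(int) - bg.astype(int)).astype(np.uint8)
```

Because the dilation window covers the ink pixel's bright neighbours, the background estimate is flat, and the difference image isolates the ink.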
2018-07-29 02:50:21 -0600 | received badge | ● Nice Answer (source) |
2018-07-28 23:07:40 -0600 | commented question | What is Point2f and Point3f? That is an object-oriented way of defining structs in C++. Using an array is slightly more cumbersome, because, in C++, |
2018-07-28 19:37:59 -0600 | answered a question | What is Point2f and Point3f? In OpenCV, coordinates can be 2-dimensional, 3-dimensional, or 4-dimensional. The numbers "2", "3", and "4" refer to th |
2018-07-28 19:31:09 -0600 | answered a question | Run OpenCV to MFC (1) Check whether you are using GDI or GDI+. (2) In case you are using GDI, here is the documentation for the function y |
2018-06-15 05:35:26 -0600 | commented question | Run OpenCV to MFC How did you "clean up m_Mat"? What is the code you used? That code probably didn't work correctly. If it is too hard to |
2018-06-07 15:02:48 -0600 | edited answer | Why huge performance gap between opencv c++ native and java code on Android? Your Java code: Imgproc.GaussianBlur(rgba_clone, rgba_clone, new Size(3, 3), 9); Your native C++ code: cv::GaussianB |
2018-06-07 15:02:01 -0600 | edited answer | Why huge performance gap between opencv c++ native and java code on Android? Your Java code: Imgproc.GaussianBlur(rgba_clone, rgba_clone, new Size(3, 3), 9); Your native C++ code: cv::GaussianB |
2018-06-07 15:00:12 -0600 | answered a question | Why huge performance gap between opencv c++ native and java code on Android? Your Java code: Imgproc.GaussianBlur(rgba_clone, rgba_clone, new Size(3, 3), 9); Your native C++ code: cv::GaussianB |
2018-05-11 08:12:35 -0600 | answered a question | Extracting area of interest from image How about converting to HSV colorspace (call cv::cvtColor with cv::COLOR_BGR2HSV), then threshold the Saturation channel |
2018-05-10 22:07:57 -0600 | commented question | How do I perform a normal convolution? but this wonderful forum website won't let me post it. Can you describe more clearly what is preventing you from p |
2018-05-10 22:04:27 -0600 | commented question | [Question/Java/Help] how to get openCV to run in a jar? Congrats for finding out the root cause. Yes, compiled C++ code (*.dll and *.so files) is platform-specific and archite |
2018-05-10 21:59:34 -0600 | commented question | How to correct this barrel distortion The image looks like it is already good. What accuracy do you need (max tolerance for absolute pixel distance error)? Al |
2018-05-10 06:50:41 -0600 | answered a question | How do I delete the shadow in this image? Try running MSER on the input image, then plot the results (the blobs) and use them to help diagnose the problem. http |
2018-05-10 03:13:44 -0600 | commented question | How to detect and crop rectangle and apply transformation from an image? Please explain, step by step, whether each step produces the correct output. To understand whether the lines are detecte |
2018-05-10 03:10:00 -0600 | commented question | How to detect and crop rectangle and apply transformation from an image? Please do not abuse the at-sign notification on the OpenCV Answers forum. Nobody, not even a moderator, is obliged to give a |
2018-05-10 03:03:14 -0600 | answered a question | This OpenCV build doesn't support current CPU/HW configuration This can only be solved by: (Recommended) Find a different docker image, in which the CPU architecture requirement is |
2018-04-30 03:11:11 -0600 | received badge | ● Self-Learner (source) |
2018-04-29 17:14:07 -0600 | commented question | CMake error on Win10, VS2017 possibly FP16 related @LBerger: Thanks, I found that my issue is purely due to setting CPU_BASELINE to AVX2. When I set it to SSE4_2 there was |
2018-04-29 02:23:17 -0600 | asked a question | CMake error on Win10, VS2017 possibly FP16 related CMake error on Win10, VS2017 possibly FP16 related Full details: https://gist.github.com/kinchungwong/2a85d3ac5a5c619607 |
2018-04-18 06:43:15 -0600 | commented question | ConnectedComponents-like function for grayscale image Each method has shortcomings; for example, some are sensitive to random pixel noise (fluctuations), some are sensitive t |
2018-04-18 06:41:37 -0600 | commented question | ConnectedComponents-like function for grayscale image Consider posting a sample picture, or a sample picture close enough to the type of pictures you need to work with. Words |
2018-04-16 23:30:03 -0600 | commented question | ConnectedComponents-like function for grayscale image Does it work well? If your solution works well, then it is okay to use it. If you think it is worth sharing your source |
2018-04-09 00:30:04 -0600 | answered a question | Center of rectangles and centroid The typical way of calculating the center / centroid of a rectangle is cv::Rect2d rect = ...; cv::Point2d center = rec |
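The snippet in the answer above is cut off; the arithmetic it presumably performs is simple enough to sketch in Python (the helper name rect_center is made up for this sketch):

```python
# Center of an axis-aligned rectangle from its top-left corner (x, y) and
# size (w, h) -- the same arithmetic as rect.tl() + 0.5 * rect.size().
def rect_center(x, y, w, h):
    return (x + w / 2.0, y + h / 2.0)

print(rect_center(10, 20, 4, 8))  # (12.0, 24.0)
```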
2018-04-04 16:42:03 -0600 | commented question | Running openCv on vxworks Is this a personal (hobby) or commercial (paid) project? Do you have support account or access to the forum at VxWorks? |
2018-03-29 06:21:50 -0600 | edited answer | Return Mat_<float> data not uchar For most image file types, cv::imread returns an image whose channel precision is 8-bit unsigned (CV_8UC1, CV_8UC3, CV_8 |
2018-03-29 05:52:11 -0600 | answered a question | Return Mat_<float> data not uchar For most image file types, cv::imread returns an image whose channel precision is 8-bit unsigned (CV_8UC1, CV_8UC3, CV_8 |
2018-03-29 05:48:35 -0600 | answered a question | How to create a Scalar with variable number of channels? cv::Scalar can store a maximum of 4 channels, each containing a double-precision floating point value. https://github.c |
2018-03-29 05:39:19 -0600 | edited answer | Deleting opencv github repo after successful building from source You will need to backup the following: Any source code changes you made. (If your changes are already pushed to anothe |
2018-03-29 05:38:14 -0600 | answered a question | Deleting opencv github repo after successful building from source You will need to backup the following: Any source code changes you made. (If your changes are already pushed to anothe |
2018-03-29 04:33:32 -0600 | answered a question | How warpAffine works? Yes. The amount of work performed by warpAffine is proportional to the destination matrix size (specified by dsize). It |
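The "work proportional to dsize" point can be made concrete: an affine warp iterates over destination pixels and inverse-maps each one into the source. A rough nearest-neighbour sketch with NumPy (not OpenCV's actual implementation, which also interpolates and is far more optimized):

```python
import numpy as np

def warp_affine_nn(src, M, dsize):
    """Nearest-neighbour affine warp. The loop runs over DESTINATION pixels,
    so the amount of work is proportional to dsize, not to src's size.
    M is the 2x3 forward matrix mapping source coords to destination coords."""
    w, h = dsize
    A = np.vstack([M, [0.0, 0.0, 1.0]])  # promote to 3x3 so it can be inverted
    Ainv = np.linalg.inv(A)
    dst = np.zeros((h, w), dtype=src.dtype)
    for y in range(h):
        for x in range(w):
            sx, sy, _ = Ainv @ np.array([x, y, 1.0])  # inverse-map to source
            sx, sy = int(round(sx)), int(round(sy))
            if 0 <= sy < src.shape[0] and 0 <= sx < src.shape[1]:
                dst[y, x] = src[sy, sx]
    return dst

src = np.arange(16, dtype=np.uint8).reshape(4, 4)
M = np.array([[1.0, 0.0, 1.0],   # pure translation: shift right by 1 pixel
              [0.0, 1.0, 0.0]])
dst = warp_affine_nn(src, M, (4, 4))
```

Destination pixels that inverse-map outside the source stay at the fill value (zero here), which matches the intuition that a larger dsize costs more regardless of source size.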
2016-12-08 05:18:49 -0600 | received badge | ● Popular Question (source) |
2016-12-08 05:18:49 -0600 | received badge | ● Notable Question (source) |
2016-05-24 13:18:28 -0600 | received badge | ● Enlightened (source) |
2016-05-24 13:18:28 -0600 | received badge | ● Good Answer (source) |
2015-10-24 11:48:54 -0600 | received badge | ● Nice Answer (source) |
2015-04-13 20:24:16 -0600 | received badge | ● Enthusiast |
2015-04-05 10:34:46 -0600 | commented question | Detect type of document in a real-time Only some general advice. (1) To get good accuracy, you must combine a lot of different methods. Therefore, don't fall into the trap of keeping only the single best-performing method and throwing away the rest. (2) To combine different methods, you will need to look into some statistics and machine learning to construct a (mathematical) function that combines their results. (3) Regarding performance, your choices are: (a) send the image to a server for processing, (b) do feature extraction on the mobile device and send the "features / signatures" to a server for database lookup or feature matching, (c) do everything on the mobile device. Since nobody else has access to your source code, you are the only person who can conduct performance tests for each approach. |
2015-04-04 03:53:00 -0600 | answered a question | ConnectedComponnents and zero level in binary image Because the OpenCV connected components algorithm is designed for binary input, it will not find the holes which you have labeled 0(b) and 0(c). My suggestion is to perform Canny edge detection (or any edge detection, since your image is simple enough), followed by bitwise negation, and finally connected components labeling with connectivity = 4. Here are some explanations of the details given above. The bitwise negation causes the "Canny edge pixels" to become background pixels. In other words, they are delimiters - they represent pixels that do not connect, thus segmenting distinctly colored regions into pieces. All remaining pixels become foreground pixels. The connectivity needs to be 4 because the pixel chains formed by Canny are 8-connected. That is, sometimes the edge pixels go in diagonal directions. If the connected components algorithm used 8-connectivity, it would leak through the boundary formed by the edge pixel chains. Using 4-connectivity in the connected components algorithm avoids this problem. |
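The 4- vs 8-connectivity point in the answer above can be checked on a toy image: a diagonal chain of edge (background) pixels separates two regions under 4-connectivity but is leaked through under 8-connectivity. A minimal BFS component counter (illustrative only, not OpenCV's actual labeling algorithm):

```python
import numpy as np
from collections import deque

def count_components(img, connectivity):
    """Count connected components of foreground (nonzero) pixels via BFS."""
    if connectivity == 4:
        nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:  # 8-connectivity
        nbrs = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)]
    seen = np.zeros(img.shape, dtype=bool)
    count = 0
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if img[y, x] and not seen[y, x]:
                count += 1
                q = deque([(y, x)])
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in nbrs:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count

# Negated edge map: 0 = edge pixel (delimiter), 1 = region pixel.
# The edge chain runs diagonally, so it is only 8-connected itself.
img = np.ones((4, 4), dtype=np.uint8)
for i in range(4):
    img[i, i] = 0

n4 = count_components(img, 4)  # diagonal chain separates the two triangles
n8 = count_components(img, 8)  # labeling leaks across the diagonal
```

With connectivity 4 the two triangles are distinct components; with connectivity 8 they merge into one, exactly the leak described above.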
2015-04-04 03:53:00 -0600 | asked a question | Is there plan to extend medianBlur method to kernel sizes which are rectangular-shaped, i.e. not just square-shaped? The current API for the median filter is cv::medianBlur(InputArray src, OutputArray dst, int ksize), which forces the filter size to be square-shaped. However, in document image processing, it is commonly necessary to perform median filtering with an elongated rectangle (with the longer dimension aligned with the text direction), so the current API function won't be useful. Is there any plan to extend the medianBlur API to accept rectangular kernel sizes? Also, the algorithm which forms the basis of that implementation is capable of performing ordinal filtering as well. (Ordinal filtering means selecting the K-th sorted value from each sliding window. Median filtering is a special case of ordinal filtering, with K being the median index in the area of the sliding window.) Is there any plan to make that feature available in the API as well? |
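For illustration, ordinal filtering as described in the question above (select the K-th sorted value in each sliding window, rectangular kernels allowed) can be sketched naively in Python; the median is the special case where K is the middle index. Edge padding is an assumption here, not something the question specifies:

```python
import numpy as np

def ordinal_filter(img, ksize, k):
    """K-th smallest value (0-based) in each kh x kw sliding window.
    A rectangular (non-square) kernel is allowed; edge padding assumed."""
    kh, kw = ksize
    ph, pw = kh // 2, kw // 2
    p = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = p[y:y + kh, x:x + kw].ravel()
            out[y, x] = np.sort(window)[k]
    return out

img = np.array([[1, 9, 2],
                [8, 5, 7],
                [3, 6, 4]], dtype=np.uint8)

# 1 x 3 rectangular median: K = 1 is the middle of 3 values per window.
med = ordinal_filter(img, (1, 3), 1)
```

A real implementation would use sliding histograms rather than sorting every window, but the input/output contract is the same.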
2015-04-04 03:53:00 -0600 | commented question | median and mean filtering. Another property satisfied by the rectangular mean filter is that it is separable - it can be represented as a single-column mean filtering followed by single-row mean filtering, or vice versa. These operators are also commutative. Therefore, it is true that you can do a lot of things in many different ways - but to understand that, you need some good knowledge of convolution. |
2015-04-04 03:52:59 -0600 | commented question | median and mean filtering. @KansaiRobot Mean filtering is defined in terms of convolution followed by normalization. Convolution satisfies the commutative property, therefore it can commute with some of the mathematical operations you mentioned. However, to construct some useful algorithms, you need college-level mathematical understanding of the field called "signal processing". Otherwise it will be too hard or impossible to understand or explain. |
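The separability claim in the two comments above can be verified numerically. This NumPy sketch (a naive shifted-slice implementation; replicate padding is an assumption) checks that a row-wise mean pass followed by a column-wise pass gives the same result as the opposite order:

```python
import numpy as np

def mean_rows(img, k):
    """Horizontal (single-row) mean filter of width k, edge padding."""
    pad = k // 2
    p = np.pad(img, ((0, 0), (pad, pad)), mode="edge").astype(float)
    # Average k horizontally shifted copies of the image.
    return np.stack([p[:, i:i + img.shape[1]] for i in range(k)]).mean(axis=0)

def mean_cols(img, k):
    """Vertical (single-column) mean filter of height k, edge padding."""
    return mean_rows(img.T, k).T

img = np.arange(25, dtype=float).reshape(5, 5)

a = mean_cols(mean_rows(img, 3), 3)  # rows first, then columns
b = mean_rows(mean_cols(img, 3), 3)  # columns first, then rows
assert np.allclose(a, b)             # the two 1-D passes commute
```

Interior pixels of either result equal the full 3x3 box-filter mean, which is the separability property the comments describe.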
2015-04-04 03:52:58 -0600 | commented question | Does OpenCV support the use of vector reciprocal on ARM NEON? @PedroBatista If your question is related to OpenCV you can post a question on this site. If your question is about source code sharing, unfortunately all of my work is done for my employer (due to the "work for hire" contract), therefore I cannot share any source code unless that sharing is explicitly permitted by and deemed beneficial to my employer. |
2015-03-27 02:53:18 -0600 | received badge | ● Nice Answer (source) |
2015-02-13 12:32:09 -0600 | received badge | ● Nice Answer (source) |
2015-02-11 12:12:03 -0600 | received badge | ● Good Answer (source) |
2014-12-02 01:24:04 -0600 | answered a question | blending two color images Because Red and Green (and also Blue, if you have a third input) are separate color channels, it is not necessary to apply the convex combination constraint. The convex combination constraint is the rule that says the coefficients for input A and input B must sum to one. If this constraint is removed, then you can draw any smooth curve connecting the two corner points. In fact, the image result "c" is formed from a blending parameter of |
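Dropping the convex combination constraint, as the answer above suggests, can be illustrated with a small NumPy sketch (the coefficient values 0.5 and 0.25 are made up for illustration; they deliberately do not sum to one):

```python
import numpy as np

def blend_channels(red_src, green_src, a, b):
    """Place a-scaled input A in the red channel and b-scaled input B in the
    green channel. Because the channels are separate, a and b are independent
    coefficients -- there is no convex combination constraint a + b == 1."""
    h, w = red_src.shape
    out = np.zeros((h, w, 3), dtype=float)
    out[..., 0] = a * red_src
    out[..., 1] = b * green_src
    return out

A = np.full((2, 2), 100.0)  # input going to the red channel
B = np.full((2, 2), 100.0)  # input going to the green channel
img = blend_channels(A, B, 0.5, 0.25)  # 0.5 + 0.25 != 1 is perfectly fine
```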
2014-11-15 23:26:47 -0600 | answered a question | How to fill a polygon with image OpenCV (textured polygon) Suppose you already have the code that draws the filled polygon. Change the code that calls the polygon-drawing function so that it draws a white polygon into a separate, black-initialized single-channel "mask image" instead of the original image. The next step is to create a "texture image" of the same size as the bounding rectangle of the polygon you're planning to draw. Then, use alpha blending to composite the texture image onto the original image, using the mask image as the opacity value. New Output = (original input) * (1.0 - opacity) + (texture) * (opacity) Read more about alpha blending at Wikipedia, or in the OpenCV tutorials. There are many tips and tricks needed to apply alpha blending - too many to be explained here. If you encounter a problem, try searching around, and ask a question if you're stuck. One thing to remember is that the opacity value needs to be normalized to a maximum value of 1.0. If you are not sure, here are two "golden rules" you can use to verify the correctness of your implementation: |
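The compositing formula in the answer above, with the mask normalized so opacity peaks at 1.0, might look like this in NumPy. The two assertions at the end reflect one plausible reading of the "golden rules" for verification (the original list was cut off): the output should equal the original where opacity is 0, and equal the texture where opacity is 1:

```python
import numpy as np

def composite(original, texture, mask):
    """Alpha-blend texture over original, using the 8-bit mask as opacity.
    The mask is normalized so that opacity has a maximum value of 1.0."""
    alpha = mask.astype(float) / 255.0
    return original * (1.0 - alpha) + texture * alpha

original = np.full((2, 2), 10.0)   # stand-in for the original image
texture = np.full((2, 2), 200.0)   # stand-in for the texture image
mask = np.array([[0, 255],
                 [128, 0]], dtype=np.uint8)  # polygon mask used as opacity

out = composite(original, texture, mask)
assert out[0, 0] == 10.0    # opacity 0: output equals the original
assert out[0, 1] == 200.0   # opacity 1: output equals the texture
```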
2014-11-12 23:57:02 -0600 | commented question | How to check if pixel is near white color |