Hi, since the circled areas you want to catch have specific spatial features, and if these features look similar across the whole dataset you use, then a feature detection + description process can be applied, using a classical feature detector/descriptor like SIFT.
A simple way to do this: considering a dataset of image samples where these artifacts are visible, apply the SIFT detector (check http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_feature_detectors.html#featuredetector-create ).
First, check whether the detector provides some keypoints on these features (and do not care about the other detected keypoints, they will be distinguished later on). Use the drawKeypoints function for a visual check ( http://docs.opencv.org/modules/features2d/doc/drawing_function_of_keypoints_and_matches.html#drawkeypoints ). You can test various keypoint detectors and choose the one that detects your features most reliably (whatever the other detected keypoints are). A minimal sketch of this step is shown below.
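Here is a minimal sketch of the detection + visual check step, assuming OpenCV 2.4 built with the nonfree module (required for SIFT); the image filename is a placeholder:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/nonfree.hpp>  // SIFT/SURF live here in 2.4

#include <vector>

int main()
{
    cv::initModule_nonfree();  // register SIFT/SURF with the factory

    cv::Mat image = cv::imread("sample.png", CV_LOAD_IMAGE_GRAYSCALE);

    // Create the detector by name; try "SIFT", "ORB", "FAST", ... and keep
    // the one that fires most reliably on your circled artifacts.
    cv::Ptr<cv::FeatureDetector> detector = cv::FeatureDetector::create("SIFT");

    std::vector<cv::KeyPoint> keypoints;
    detector->detect(image, keypoints);

    // Visual check: spurious keypoints elsewhere in the image are fine,
    // what matters is that the target areas get covered.
    cv::Mat output;
    cv::drawKeypoints(image, keypoints, output, cv::Scalar::all(-1),
                      cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
    cv::imshow("keypoints", output);
    cv::waitKey(0);
    return 0;
}
```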
Second step: describe the features in order to distinguish them. For that, use a feature descriptor; choose the most appropriate one by testing with the flexible DescriptorExtractor::create method ( http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_descriptor_extractors.html#descriptorextractor-create ). Use the DescriptorExtractor::compute method to describe each previously detected keypoint.
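Continuing the sketch above (same OpenCV 2.4 assumptions, reusing the `image` and `keypoints` variables from the detection step):

```cpp
cv::Ptr<cv::DescriptorExtractor> extractor =
    cv::DescriptorExtractor::create("SIFT");

// One row per keypoint (128 float columns for SIFT). Note that compute()
// drops the keypoints it cannot describe, so `keypoints` and the rows of
// `descriptors` stay in sync.
cv::Mat descriptors;
extractor->compute(image, keypoints, descriptors);
```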
Then, manually label the features that the detector finds on your targets and store their descriptors: this will let you distinguish your targets from the other detected keypoints.
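One way to sketch this labelling step: here the "manual label" is a hand-picked rectangle around one circled artifact (the coordinates are placeholders), and only the descriptors of keypoints inside it are kept and saved:

```cpp
// Hypothetical hand-drawn region around one circled artifact.
cv::Rect_<float> targetRoi(120.f, 80.f, 60.f, 60.f);

cv::Mat targetDescriptors;
for (size_t i = 0; i < keypoints.size(); ++i)
    if (targetRoi.contains(keypoints[i].pt))
        targetDescriptors.push_back(descriptors.row(i));

// Persist the labelled target descriptors for the matching stage.
cv::FileStorage fs("targets.yml", cv::FileStorage::WRITE);
fs << "target_descriptors" << targetDescriptors;
fs.release();
```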
Finally, considering a new image dataset, apply the same steps 1 and 2 (detection + description) and match your stored hand-labelled target features against the keypoints detected and described in each new image. You can use a descriptor matcher available from the flexible DescriptorMatcher::create method ( http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html#descriptormatcher-create ).
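A matching sketch, under the same assumptions: "BruteForce" does an L2 search, which suits SIFT, and the distance threshold is a placeholder to tune on your own data:

```cpp
// Load the stored target descriptors.
cv::Mat targetDescriptors;
cv::FileStorage fs("targets.yml", cv::FileStorage::READ);
fs["target_descriptors"] >> targetDescriptors;
fs.release();

// Fill this by running detect + compute on the new image (steps 1 and 2).
cv::Mat newDescriptors;

cv::Ptr<cv::DescriptorMatcher> matcher =
    cv::DescriptorMatcher::create("BruteForce");

std::vector<cv::DMatch> matches;
matcher->match(targetDescriptors, newDescriptors, matches);

// Keep only matches that are close enough in descriptor space; each hit's
// trainIdx points at the matching keypoint in the new image.
std::vector<cv::DMatch> hits;
for (size_t i = 0; i < matches.size(); ++i)
    if (matches[i].distance < 200.f)  // hypothetical threshold
        hits.push_back(matches[i]);
```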
This is a simple kind of bag-of-words spatial feature matching that can be really efficient... IF your features are reproducible from one image to the other!
For more advanced feature matching, check out research papers on "bag of words" to push things further!
Happy coding and experimenting!