Ask Your Question

VHarris's profile - activity

2016-02-03 14:28:42 -0600 received badge  Student (source)
2015-05-07 12:05:50 -0600 commented question Download the Source Files for QT Framework

Wow, I'm gettin' no lovin' here! Anybody willing to point me to the link with the Qt Framework source files, with steps to install?

2015-05-03 13:32:27 -0600 asked a question Download the Source Files for QT Framework

I am trying to install OpenCV on Windows 7 SP1 x64 (AMD Radeon R9 GPU, Visual Studio 2013). I need to install the Qt Framework from source, but the instructions on the OpenCV page are outdated. Here is the link:

http://docs.opencv.org/doc/tutorials/...

Scroll down to the Qt Framework section; the Qt Downloads link there is broken.

I've gone to www.qt.io, but the download options all point to the installers (which the OpenCV page says not to download). For instance, the Qt page describing how to build from source, http://doc.qt.io/qt-5/windows-buildin..., links me to the Qt installers.

What are the current links for the Qt source files, and what are the steps to install the Qt Framework for OpenCV?

2015-04-21 11:17:00 -0600 commented answer De-duplicate images of faces

I have access to Amazon AWS but, if possible, would rather do the calculations on my PC and save the cost of using the Amazon servers and processors. BTW, I used the handshake problem formula in my calculations: n(n-1)/2 = 150,000 * (150,000 - 1) / 2 = 11,249,925,000, roughly 11.25 billion comparisons.
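The pairwise count quoted above follows directly from the handshake formula; a quick sanity check in plain Python (the function name is just for illustration):

```python
def pair_count(n: int) -> int:
    """Handshake problem: number of unordered pairs among n items,
    i.e. how many one-to-one comparisons a full dedup pass needs."""
    return n * (n - 1) // 2

n = 150_000
print(pair_count(n))  # 11249925000 -- roughly 11.25 billion comparisons
```

This is why an all-pairs comparison at this scale is worth avoiding if the images can first be reduced to short hashes, which are far cheaper to compare than raw pixels.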

2015-04-21 11:11:33 -0600 commented answer De-duplicate images of faces

So, my son is working on another project and is unable to tackle this one at the moment. Are you, by chance, available to put together a GUI for using your code? Of course, I'd pay you for your work. I need to analyze images, store the needed data, find matches, display each match one by one, and let the user select one file to keep and one to move to another folder. If you'd be willing to help, I can message you, list the specs here, or start another thread. I am working on putting together a 1TB+1TB SSD RAID 0 array (which is enough storage for the images and will serve up and save the data fairly quickly). I have Windows 7 / Intel i7 3.4 GHz / 16 GB RAM / 64-bit with an AMD Radeon R9 Gaming card. 150,000+ images, all containing faces. The handshake problem calculation ... (more)

2015-02-04 14:47:42 -0600 received badge  Enthusiast
2015-02-02 10:25:04 -0600 commented answer De-duplicate images of faces

berak, could you give me an explanation of what the code you provided above does? Does it use the output from Flandmark (or dlib)? Or does it depend on the result from eye_cascade? I'd like to be able to describe to my son what each step in the process does, from beginning to end.

2015-02-02 10:20:32 -0600 commented answer De-duplicate images of faces

Sometimes a visual inspection gives clues as to which one is the flipped version. Otherwise I just keep the one with the higher dimensions or larger file size.

2015-02-02 10:02:03 -0600 commented answer De-duplicate images of faces

Hi guys, a 'flipped' image is one that is a mirror image of another. That is, the image is loaded into, say, GIMP, then that image is flipped side-to-side and stored in the collection. So, for example, after flipping Angelina's image above, we would have two images of Angelina, one with her looking off to her right and one with her looking off to her left. There are a significant number of flipped pairs in the image collection. Would using cv::flip be a good way to find these mirror images?
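Yes, flipping before hashing is the usual trick: hash each candidate both as-is and mirrored, and a flipped pair shows up as a near-zero distance on one of the two. A minimal sketch using numpy only (np.fliplr is equivalent to cv2.flip(img, 1) for a horizontal flip; the average hash here is a crude stand-in for a real perceptual hash like pHash, and the function names and the 5-bit threshold are just illustrative):

```python
import numpy as np

def average_hash(gray: np.ndarray, size: int = 8) -> np.ndarray:
    """Block-average a grayscale image down to size x size, then threshold
    at the mean -- a simple perceptual hash (stand-in for pHash)."""
    h, w = gray.shape
    bh, bw = h // size, w // size
    small = gray[:bh * size, :bw * size].reshape(size, bh, size, bw).mean(axis=(1, 3))
    return small > small.mean()

def is_mirror_pair(a: np.ndarray, b: np.ndarray, max_bits: int = 5) -> bool:
    """Hash b both as-is and flipped left-right (what cv::flip with
    flipCode=1 does) and match if either hash is close to a's hash."""
    ha = average_hash(a)
    direct = np.count_nonzero(ha != average_hash(b))
    mirrored = np.count_nonzero(ha != average_hash(np.fliplr(b)))
    return min(direct, mirrored) <= max_bits
```

With OpenCV you would load each file via cv2.imread(path, cv2.IMREAD_GRAYSCALE) and could substitute cv2.flip(img, 1) for np.fliplr. The payoff is that hashing each image twice doubles the hashing work but avoids flipping and re-comparing raw pixels for every pair.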

2015-02-01 22:34:31 -0600 commented answer De-duplicate images of faces

So have I got this right for a rough draft of one possible solution? 1) Use opencv_traincascade to train face_cascade. 2) Use face_cascade to find the face within each image. 3) Use Flandmark (or dlib) to find landmark points within the face. 4) Use FaceNormalizer to extract the face from the image(?), then rotate and resize the extracted face. 5) Use imwrite() to save the face file to disk. 6) Use ? to flip the image in the horizontal. 7) Use imwrite() to save the now-flipped face file to disk. 8) Repeat until all images have been processed. 9) Use pHash to find similar faces among the files.
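The save-both-orientations steps of a draft like the one above can be sketched as a small helper (hypothetical names; `writer` stands in for cv2.imwrite, and np.fliplr matches cv2.flip(img, 1), which would fill the "Use ?" gap):

```python
import numpy as np

def save_both_orientations(face: np.ndarray, stem: str, writer) -> list:
    """Write the normalized face and its left-right mirror to disk, so a
    later pHash pass can match flipped duplicates in either orientation.
    `writer(path, img)` is a stand-in for cv2.imwrite."""
    paths = []
    for suffix, img in ((".png", face), ("_flipped.png", np.fliplr(face))):
        path = stem + suffix
        writer(path, img)
        paths.append(path)
    return paths
```

Saving the mirror alongside the original trades disk space for a much simpler comparison stage: the hasher never needs to know flips exist.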

2015-02-01 21:05:20 -0600 commented answer De-duplicate images of faces

berak: Is the code you provided called FaceNormalizer? Can you help me understand where to find the values for the variables in your FaceNormalizer?

2015-02-01 13:57:34 -0600 commented question De-duplicate images of faces

Just want to make sure I've got this right so far: 1) use face_cascade to find the face; 2) use eye_cascade to find the eyes within the face; 3) use the center points of the eye detections to align the face; 4) use pHash to compare the images. How is the alignment typically done? And once the faces are aligned, how are the faces extracted from the image for use by pHash?
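The alignment in step 3 is usually done by rotating the image about the midpoint between the two eye centers so the eye line becomes horizontal. A sketch of the math using numpy (the matrix built here has the same form as cv2.getRotationMatrix2D(center, angle, 1.0), which you would then pass to cv2.warpAffine; the function name is illustrative):

```python
import math
import numpy as np

def eye_alignment_matrix(left_eye, right_eye):
    """Return (angle_degrees, 2x3 affine matrix) rotating the image about
    the midpoint between the eyes so the eye line becomes horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.degrees(math.atan2(dy, dx))   # tilt of the eye line
    cx = (left_eye[0] + right_eye[0]) / 2.0    # rotation center: midpoint
    cy = (left_eye[1] + right_eye[1]) / 2.0
    a = math.radians(angle)
    cos_a, sin_a = math.cos(a), math.sin(a)
    # Same layout cv2.getRotationMatrix2D((cx, cy), angle, 1.0) produces
    m = np.array([
        [cos_a,  sin_a, (1 - cos_a) * cx - sin_a * cy],
        [-sin_a, cos_a, sin_a * cx + (1 - cos_a) * cy],
    ])
    return angle, m
```

After warping, the face crop is typically taken as a fixed rectangle around the (now-horizontal) eyes and resized to a standard size before hashing, so every face file the hasher sees has the same geometry.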

2015-01-31 21:03:23 -0600 commented question De-duplicate images of faces

I then rotated the image 45 degrees (a typical rotation in the project I'm working on) and got a report that the images are not similar: pHash determined your images are not similar with PCC = 0.308770. Threshold set to 0.85.

So I believe that, as currently configured, phash will not work for my project, unless ...

... face crops can be detected and extracted from the original images I'm working with, those extracted crops rotated so the eyes are horizontal, then resized, then compared with each other using pHash.

Does OpenCV have tools that can detect and then extract faces from images, so that those extracted images can then be compared?

2015-01-31 21:02:24 -0600 commented question De-duplicate images of faces

- DCT: pHash determined your images are not similar with hamming distance = 30.000000. Threshold set to 26.00.
- Marr/Mexican: pHash determined your images are not similar with normalized hamming distance = 0.468750. Threshold set to 0.40.

Then I cropped the image lightly, maybe 10% of the total area, and got a report that the images are similar. Here is the result I got on just one test:

- pHash determined your images are similar with PCC = 0.993786. Threshold set to 0.85.

2015-01-31 21:01:08 -0600 commented question De-duplicate images of faces

I used the pHash demo on two pairs of .jpg files. Each pair contained the original image and a cropped image. The first pair was heavily cropped, and I didn't expect good results. The second pair was a 'typical' crop, cropping off both sides of the image and leaving only the face in the middle. I got no matches either time. Here are the results from the 'typical' pair:

Select 2 JPEG or BMP images to compare and click Submit.

- Images may be saved for statistical analysis and to improve the pHash algorithms. Images will never be redistributed.
- Algorithms: RADISH (radial hash); DCT hash; Marr/Mexican hat wavelet
- RADISH: pHash determined your images are not similar with PCC = 0.366513. Threshold set to 0.85.

2015-01-31 09:13:24 -0600 received badge  Editor (source)
2015-01-31 01:35:10 -0600 asked a question De-duplicate images of faces

I have a group of images of faces (.jpg and .png). Some of the images are taken from the same source but have been cropped, rotated, flipped, resized, compressed, color-modified, brightened, darkened, etc., so they are not exact duplicates. The freeware I currently use will not find these 'transformed' near-duplicates.

Since the images are 'almost,' but not quite, identical, I hope facial recognition software can help identify the duplicates. I also need a system that, once each face has been compared with each of the others, can tell me which files match, so that I can compare the files and decide which one of each pair to keep and which to discard.

I'm not a programmer, but my son can do the programming. I am trying to find possible tools he can use to implement a solution. (He says tools that can be integrated with Linux would be the easiest but that he can work with most anything.)

I've read the facerec_tutorial but don't really understand it. Can the OpenCV facial recognition software find the matches and then output some sort of data that can be used for human comparison of the files? If not, what tools would he need in order to take the output from the recognition software and make it usable, so that a human could examine the images?

Ah, being a newbie, I can't yet answer my own question, so, in answer to berak's reply below: there are about 150,000 images that need to be deduplicated.
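The human-review output asked about above could be as simple as a CSV of matched file pairs with a similarity score, which a spreadsheet or a small review GUI can consume (field names and scores here are hypothetical; stdlib only):

```python
import csv
import io

def write_match_report(matches, out):
    """Write (file_a, file_b, distance) tuples as CSV rows, best matches
    first, so a person can open each pair side by side and decide which
    file to keep and which to discard."""
    w = csv.writer(out)
    w.writerow(["file_a", "file_b", "hamming_distance"])
    for a, b, dist in sorted(matches, key=lambda m: m[2]):
        w.writerow([a, b, dist])

buf = io.StringIO()
write_match_report([("img_0007.jpg", "img_0042.png", 3),
                    ("img_0001.jpg", "img_0099.jpg", 1)], buf)
print(buf.getvalue())
```

Sorting by distance puts the most confident duplicates at the top, so the reviewer sees the easy decisions first and can stop once the matches become doubtful.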