
zerowords's profile - activity

2012-12-12 14:41:34 -0600 received badge  Editor (source)
2012-12-12 14:38:08 -0600 asked a question Supplying connection weights to CvANN_MLP::predict

Outside of OpenCV, I have trained a neural network with the following structure.

  1. 169 input nodes taking binary inputs in {-1, 1}
  2. 15 hidden nodes with bias
  3. 13 output nodes producing binary outputs in {0, 1}
  4. sigmoid (logistic) activation functions

I have all of the weight and bias matrices.

How do I supply this information to CvANN_MLP?

I have been reading the docs at the following links, but I don't see any way to supply the weights I have already trained. I also don't understand C++, so the input types Mat& and CvMat* are a mystery to me; I hope to develop this app in Objective-C.

CvANN_MLP::CvANN_MLP, CvANN_MLP::create, CvANN_MLP::predict
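
In case it helps anyone answer, here is my rough understanding of what the C++ calls might look like for my 169-15-13 network, based only on my reading of the OpenCV 2.4 docs. This is just a sketch: the commented-out load() call and the file name mlp_weights.xml are my assumptions, since I have not found any public way to set the weights directly.

#include <opencv2/core/core.hpp>
#include <opencv2/ml/ml.hpp>

int main()
{
    // Layer sizes: 169 inputs, 15 hidden, 13 outputs.
    int sizes[] = { 169, 15, 13 };
    cv::Mat layerSizes(1, 3, CV_32SC1, sizes);

    CvANN_MLP mlp;
    // Note: OpenCV's MLP offers SIGMOID_SYM (a symmetric sigmoid),
    // not the plain logistic function I trained with.
    mlp.create(layerSizes, CvANN_MLP::SIGMOID_SYM, 1, 1);

    // There seems to be no setter for the weights; loading a previously
    // saved model file may be the only route (file name is hypothetical).
    // mlp.load("mlp_weights.xml");

    // One sample: a single row of 169 float inputs, here all set to -1.
    cv::Mat input(1, 169, CV_32FC1, cv::Scalar(-1));
    cv::Mat output(1, 13, CV_32FC1);
    mlp.predict(input, output);   // fills 'output' with the 13 responses

    return 0;
}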

2012-12-02 10:01:32 -0600 commented question Why am I getting error "'train' is ambiguous" when trying to train this MLP?

I have so many questions, but to start: this question says, "This was all adapted from the included sample letter recognizer." Where can I find that included sample letter recognizer?

2012-11-29 07:53:40 -0600 asked a question Where do you put the iOS framework file

I have been following this question/answer.

I have downloaded the official framework file and have also built my own copy. Currently I am using the newly built copy as described below, but I still have the official framework file and don't know where to place it (please see my directory structure and Xcode messages below).

The following describes my system's current state. I installed according to these instructions, except that I used python2.7 instead of python in step 2 when building the framework.

All seemed to go well except the final line below.

** BUILD SUCCEEDED **

lipo: can't open input file: ../build/iPhoneOS-armv7s/lib/Release/libopencv_world.a (No such file or directory)

I then attempted to run the Hello World demo at this link.

It produced the following error message. Note that I am using Xcode on Lion (10.7), not 10.6 as shown in the build output below.

Ld /Users/brian/Library/Developer/Xcode/DerivedData/bridgeduplicate-dtbofkdttzklalbrftzuowcusavn/Build/Products/Debug-iphonesimulator/bridgeduplicate.app/bridgeduplicate normal i386
cd /Users/brian/develop/bridgeduplicate
setenv MACOSX_DEPLOYMENT_TARGET 10.6
setenv PATH "/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Applications/Xcode.app/Contents/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -arch i386 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator5.1.sdk -L/Users/brian/Library/Developer/Xcode/DerivedData/bridgeduplicate-dtbofkdttzklalbrftzuowcusavn/Build/Products/Debug-iphonesimulator -F/Users/brian/Library/Developer/Xcode/DerivedData/bridgeduplicate-dtbofkdttzklalbrftzuowcusavn/Build/Products/Debug-iphonesimulator -F/Users/brian/develop/bridgeduplicate/../opencv-library/ios -filelist /Users/brian/Library/Developer/Xcode/DerivedData/bridgeduplicate-dtbofkdttzklalbrftzuowcusavn/Build/Intermediates/bridgeduplicate.build/Debug-iphonesimulator/bridgeduplicate.build/Objects-normal/i386/bridgeduplicate.LinkFileList -mmacosx-version-min=10.6 -Xlinker -objc_abi_version -Xlinker 2 -fobjc-arc -Xlinker -no_implicit_dylibs -D__IPHONE_OS_VERSION_MIN_REQUIRED=50100 -framework opencv2 -framework UIKit -framework Foundation -framework CoreGraphics -o /Users/brian/Library/Developer/Xcode/DerivedData/bridgeduplicate-dtbofkdttzklalbrftzuowcusavn/Build/Products/Debug-iphonesimulator/bridgeduplicate.app/bridgeduplicate

ld: framework not found opencv2
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Are the following two paths consistent?

The first path is copied from Xcode.

FRAMEWORK_SEARCH_PATHS = $(inherited) "$(SRCROOT)/../opencv-library/ios"

In the Xcode UI the first one appears as the following, but it copies to the clipboard as shown above. My app name is "bridgeduplicate".

/Users/brian/develop/bridgeduplicate/../opencv-library/ios

The second path is copied from the terminal console.

server:opencv-library brian$ pwd
/Users/brian/develop/opencv-library
server:opencv-library brian$ ls -l ios
total 0
drwxr-xr-x 5 brian staff 170 Nov 27 12:13 build
drwxr-xr-x 6 brian staff 204 Nov 27 12:17 opencv2.framework
server:opencv-library brian$

Most important is my question, "Where do you put the iOS framework file?"

Also, can someone tell me how to do the following, please? "(use -v to see invocation)"

And is there a way for me to check the value of SRCROOT used in FRAMEWORK_SEARCH_PATHS?

2012-11-15 08:13:10 -0600 commented answer recognizing which card from 52

Can I assume your fourth step of counting occurrences is an attempt to count the spots on the card? If so, I was thinking it would be better to look at the upper-left corner of the card, at the 2, 3, ..., 9, 10, J, Q, K, A symbol, to determine the card's value. Is that different from what you have suggested, and if so, can you comment on the revisions required and the efficiency of that approach? I am not familiar with SURF and FlannBasedMatcher. Are they internal to OpenCV or provided by other sources?
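
For what it's worth, here is a rough sketch of the corner lookup I had in mind (untested; the corner size and the template file rank_7.png are made up):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Score how well the card's upper-left corner matches one rank template.
double scoreCorner(const cv::Mat& cardGray)
{
    // Region assumed to contain the rank symbol (coordinates made up).
    cv::Mat corner = cardGray(cv::Rect(0, 0, 60, 90));

    // Hypothetical grayscale template of a single rank symbol, e.g. "7".
    cv::Mat templ = cv::imread("rank_7.png", CV_LOAD_IMAGE_GRAYSCALE);

    cv::Mat result;
    cv::matchTemplate(corner, templ, result, CV_TM_CCOEFF_NORMED);

    double minVal, maxVal;
    cv::minMaxLoc(result, &minVal, &maxVal);
    return maxVal;   // closer to 1.0 means a better match
}

Repeating this for each of the 13 rank templates (and the 4 suit templates) and taking the best score would identify the card.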

2012-11-15 08:09:03 -0600 received badge  Scholar (source)
2012-11-15 08:08:52 -0600 received badge  Supporter (source)
2012-11-11 10:17:37 -0600 asked a question recognizing which card from 52

If you have ever seen the bridge column in the daily newspaper and wanted to deal it out, you were likely put off by the need to arrange a shuffled deck in the proper order. To reduce that step, I want to hold each card, one at a time, in front of a digital camera connected to a PC, at a known distance and orientation, and have the PC tell me where the card goes in the newspaper's deal, after I have been prompted to type in the newspaper's hands in the order North's spades, North's hearts, and so on.

My search for a way to accomplish this has suggested OpenCV for sure, and possibly SURF. Can anyone confirm my suspicions and suggest more details, please?

The OpenCV questions I have read that come close to this one seem to be about finding objects somewhere in an image, whereas I want to match against one of 52 standard images.

My problem is a lot like the scene in TV shows like CSI where they match a perpetrator's face against a huge database of photos to identify the criminal, except that here the database holds just 52 cards. I don't know whether OpenCV will work for this matching task, and if it will, which features of OpenCV should I apply?
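
From my reading so far, the usual OpenCV pattern for this seems to be feature descriptors plus a matcher, so I imagine something like the sketch below, where the photographed card is compared against each of the 52 reference images and the reference with the most good matches wins. This is only my guess at the approach; the file names and the distance threshold are made up, and I gather SURF lives in the nonfree module of OpenCV 2.4.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SURF lives here in OpenCV 2.4
#include <cstdio>
#include <vector>

int main()
{
    cv::SurfFeatureDetector detector(400);
    cv::SurfDescriptorExtractor extractor;
    cv::FlannBasedMatcher matcher;

    // Descriptors of the photographed card (file name made up).
    cv::Mat query = cv::imread("captured_card.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    std::vector<cv::KeyPoint> qKeys;
    cv::Mat qDesc;
    detector.detect(query, qKeys);
    extractor.compute(query, qKeys, qDesc);

    int bestCard = -1;
    size_t bestGood = 0;

    // Compare against each of the 52 reference card images (names made up).
    for (int i = 0; i < 52; ++i)
    {
        char name[32];
        std::sprintf(name, "card%02d.jpg", i);
        cv::Mat ref = cv::imread(name, CV_LOAD_IMAGE_GRAYSCALE);

        std::vector<cv::KeyPoint> rKeys;
        cv::Mat rDesc;
        detector.detect(ref, rKeys);
        extractor.compute(ref, rKeys, rDesc);

        std::vector<cv::DMatch> matches;
        matcher.match(qDesc, rDesc, matches);

        // Count "good" matches with a simple (made-up) distance threshold.
        size_t good = 0;
        for (size_t m = 0; m < matches.size(); ++m)
            if (matches[m].distance < 0.25f)
                ++good;

        if (good > bestGood) { bestGood = good; bestCard = i; }
    }

    std::printf("best guess: reference card %d\n", bestCard);
    return 0;
}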