Rough outline for grocery shopping Android app

Given a library of 500 or 1000 pictures of items from a grocery store shelf, and the requirement to detect any of those items using an Android device's camera, what would be some logical ways to construct this kind of detection app?

Although new pictures would be added every so often, let's presume that the set of pictures in the library is fixed. Could the data resulting from training then reside entirely on the Android device, without depending on support from a web site?

Neither the library pictures nor the device user's aim will be perfect, but we can presume both are upright within a few degrees and positioned generally in front of the object. So a rectangular object should appear roughly rectangular and roughly upright both in the library and as the user positions the camera. Each library image includes only the product packaging (nothing surrounding it), and the product packaging will fill the majority of the user's image. In other words, the requirement is for the user to frame a single product (although there will be artifacts at the edges).
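To make the question concrete, one common approach under these constraints is feature matching: extract binary descriptors (e.g. ORB, which OpenCV provides on Android) from each library image and from the camera frame, then match them with a nearest-neighbour search plus Lowe's ratio test. The sketch below uses synthetic descriptors as stand-ins; in a real app they would come from a feature detector, and the function names here are made up for illustration:

```python
# Sketch of nearest-neighbour matching with Lowe's ratio test over binary
# descriptors (e.g. 256-bit / 32-byte ORB descriptors). Synthetic descriptors
# stand in for detector output.

def hamming(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length binary descriptors."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def ratio_test_matches(query, library, ratio=0.75):
    """Keep a query descriptor only if its best library match is clearly
    better than the second best (Lowe's ratio test). Returns a list of
    (query_index, library_index, distance) tuples."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((hamming(q, d), li) for li, d in enumerate(library))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1], dists[0][0]))
    return matches

# Toy usage: the query descriptor differs from library item 0 by one bit
# per byte, but is far from library item 1, so the ratio test accepts it.
lib = [bytes([0x00] * 32), bytes([0xFF] * 32)]
qry = [bytes([0x01] * 32)]
print(ratio_test_matches(qry, lib))  # -> [(0, 0, 32)]
```

The upright-within-a-few-degrees assumption in the question helps here: rotation-sensitive descriptors and a simple geometric consistency check (e.g. a homography fit on the matched points) become much more reliable when the product is known to be roughly frontal and upright.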

Are there any open source projects that are similar to what is described in this question? Are there any open source projects (Android or not) that load training data from a set of images into blobs in a database, then compare those blobs to the "current" image?
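On the "blobs in a database" part: Android ships SQLite, so one plausible on-device layout is a table mapping each library image to its serialized descriptor set stored as a BLOB. A minimal sketch (table and function names are made up for illustration):

```python
import sqlite3

# Minimal sketch of persisting per-image descriptor sets as SQLite BLOBs.
# On Android the same schema would work through the platform SQLite APIs.

def create_db(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS features (
                      image_id    TEXT PRIMARY KEY,
                      descriptors BLOB NOT NULL)""")
    return db

def store(db, image_id, descriptors: bytes):
    """Insert or overwrite the serialized descriptors for one library image."""
    db.execute("INSERT OR REPLACE INTO features VALUES (?, ?)",
               (image_id, descriptors))
    db.commit()

def load_all(db):
    """Load every image's descriptors for matching against the current frame."""
    return {image_id: blob
            for image_id, blob in
            db.execute("SELECT image_id, descriptors FROM features")}
```

Since the library is presumed fixed, the database could be built offline on a desktop and shipped inside the APK, with the occasional new picture delivered as a small database update rather than requiring a live web service.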

Would a typical Android device have enough processing power to check video frames, or would the user need to "take a picture" and then let the app churn to get a result?