I'm about to build an image search application for a large dataset. Many of the images will be very "bland" - i.e. with little variation in colour, or one colour with a subtle texture - but these sit alongside more complex images.
My initial thought was to dive in with SIFT/SURF/ORB or other feature descriptors, then simply do a nearest-neighbour / distance calculation on the descriptor vectors.
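To make that concrete, this is roughly what I had in mind - a minimal sketch using OpenCV's ORB, where the parameter values (e.g. `nfeatures=500`) and the brute-force matcher are just placeholders for illustration, not a settled design:

```python
import cv2

def orb_descriptors(path):
    """Load an image and compute ORB keypoints + binary descriptors."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=500)  # placeholder feature budget
    keypoints, descriptors = orb.detectAndCompute(img, None)
    return keypoints, descriptors

def match_score(des_query, des_candidate):
    """Crude similarity: number of cross-checked Hamming matches."""
    if des_query is None or des_candidate is None:
        return 0  # one of the images produced no descriptors at all
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return len(matcher.match(des_query, des_candidate))
```

(Obviously for a large dataset the pairwise brute-force matching would be replaced by some proper nearest-neighbour index over the descriptors.)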
However, I'm not sure how well this will work for images which are very plain and have "no features".
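This is the failure mode that worries me - a quick synthetic check suggests a flat, single-colour image gives the detector nothing to latch onto at all:

```python
import numpy as np
import cv2

# A synthetic 256x256 mid-grey patch standing in for a "bland" image.
flat = np.full((256, 256), 128, dtype=np.uint8)
orb = cv2.ORB_create()
keypoints, descriptors = orb.detectAndCompute(flat, None)
print(len(keypoints), descriptors)  # expect zero keypoints and no descriptors
```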
Would be very grateful for any thoughts!