facenet math
Hello, I hope this is not too off-topic, but I want to try anyway.
Preconditions I have:
- a database filled with 128-dimensional float vectors for different persons
- each vector is a unit vector (length 1)
So I am able to calculate the distance for a given vector against the database (Apache Solr) successfully. My naive approach calculates all distances for a request and sorts by distance, which works correctly.
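For illustration, a minimal sketch of what this naive approach boils down to (placeholder data, not my actual Solr code); since all vectors are unit length, the squared distance is just 2 - 2 * dot product:

```python
import numpy as np

# placeholder database: one 128-dim unit vector per person
# (in my setup these come out of facenet and are stored in Solr)
embeddings = np.random.randn(1000, 128)
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def naive_search(query, embeddings, top_k=5):
    """Compute the distance to every stored vector and sort -- the 'full scan'."""
    # for unit vectors: ||q - e||^2 = 2 - 2 * (q . e)
    dists = np.sqrt(np.maximum(0.0, 2.0 - 2.0 * embeddings @ query))
    order = np.argsort(dists)[:top_k]
    return order, dists[order]

query = np.random.randn(128)
query /= np.linalg.norm(query)
print(naive_search(query, embeddings))
```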
Now my question: is it possible to limit the records the db needs to scan for a given max distance, without calculating the distance for all records? I want to use fixed fields (no db functions) to make this prediction.
So far I tried: mean, variance, and using anchor points, but I see no correlation with the calculated distance. Sorry, I am still learning; math is always a bit difficult for me (I try to improve).
Thank you + Greetings, Holger
Some (stupid) thoughts :
Can I somehow put points (determined by the vector) that are close to each other (within a certain distance) into a cluster/cloud? And then, before I make a query, determine which cluster I should search in to get the real distance?
http://infolab.stanford.edu/~ullman/m... is a good explanation of clustering in Euclidean space.
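A rough sketch of how that cluster idea could look (hypothetical, using scikit-learn's KMeans; the number of clusters is just a guess and would need tuning):

```python
import numpy as np
from sklearn.cluster import KMeans

# embeddings as above: unit vectors, one row per person (placeholder data)
embeddings = np.random.randn(1000, 128)
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# offline: partition the database into clusters
kmeans = KMeans(n_clusters=32, random_state=0).fit(embeddings)
cluster_ids = kmeans.labels_  # one fixed "cluster id" per stored vector

def clustered_search(query, top_k=5):
    # step 1: cheap -- compare against 32 cluster centres, not 1000 vectors
    nearest_cluster = np.argmin(np.linalg.norm(kmeans.cluster_centers_ - query, axis=1))
    # step 2: compute exact distances only inside that cluster
    member_idx = np.where(cluster_ids == nearest_cluster)[0]
    dists = np.linalg.norm(embeddings[member_idx] - query, axis=1)
    order = np.argsort(dists)[:top_k]
    return member_idx[order], dists[order]
```

Note that this makes the search approximate: the true nearest neighbour could sit just across a cluster boundary, so the number of clusters (and maybe searching the few nearest clusters instead of only one) would need some experimentation.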
just curious, how would that look in code? would it be possible to put a snippet on a gist or such? (again, it's not important, only my own personal curiosity)
"db functions" would mean something like this, right ? http://www.solrtutorial.com/custom-so...
are you even storing the facenet features there ? how ?
my naive idea here would be:
when adding a new entity to the db, calculate the distances to all others and store them in a table like [id_a, id_b, dist], one row per distance. easy to index, and easy to retrieve with queries (idk their syntax) like "where id=217 and dist=min" or such.
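roughly like this, as a sqlite sketch (table and column names made up, the real thing would of course live in solr):

```python
import sqlite3
import numpy as np

con = sqlite3.connect("faces.db")
con.execute("CREATE TABLE IF NOT EXISTS pair_dist (id_a INTEGER, id_b INTEGER, dist REAL)")
con.execute("CREATE INDEX IF NOT EXISTS idx_a ON pair_dist (id_a, dist)")

def add_entity(new_id, new_vec, existing):
    """existing: dict of {id: vector}. store one row per pairwise distance."""
    rows = [(new_id, other_id, float(np.linalg.norm(new_vec - vec)))
            for other_id, vec in existing.items()]
    con.executemany("INSERT INTO pair_dist VALUES (?, ?, ?)", rows)
    con.commit()

# "where id=217 and dist=min" then becomes something like:
# SELECT id_b, dist FROM pair_dist WHERE id_a = 217 ORDER BY dist LIMIT 1
```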
Hi berak, I can summarize a bit of what I did right now:
This performs correctly and not badly, but Solr is forced to do a "full table scan" to get the min distance. I want to prevent this. My aim is to give the query some hints about which elements are unlikely to match the query (a "filter query").
Ok, let me try to simulate your suggestion, berak: I insert a vector, and before that I calculate the distances to all existing vectors. For each calculated distance, I create a new record storing the distance and which vectors were compared.
This works great if you only compare entries from the db with each other. You can tell which person from the db is most similar to the other persons in the db with a min query, nearly instantly. Great!
But what happens if you get a vector from a random image and want to tell whether that person is in the db within a certain min distance? In that case, I guess you need to compare that vector with every other vector again. That's a full table scan again, which is what I want to prevent.
Btw: I am not working for a company ^^ Maybe I want to found one ...
yea, right. that idea was a bit short-sighted ;(
So my top guess for this right now is:
- Before insertion: assign the new vector to the nearest cluster and store the cluster id as a plain, fixed field on the record.
- When doing a query: first determine the nearest cluster locally, then calculate the real distances only for the records in that cluster (via a filter query).
In my head this makes total sense right now, but I need to verify. Thank goodness I have plenty of data to test on (thank you, Wikipedia). A rough sketch of the query side is below.
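This is how the query side could look against Solr's plain HTTP API, assuming each document got a cluster_id field at insertion time and the cluster centres are kept locally (URL, core and field names are made up):

```python
import numpy as np
import requests

SOLR_URL = "http://localhost:8983/solr/faces/select"  # hypothetical core/URL

# cluster centres computed offline (e.g. with k-means), kept locally in memory
centers = np.load("cluster_centers.npy")

def candidate_docs(query_vec, rows=100):
    # cheap local step: pick the nearest cluster centre
    cluster_id = int(np.argmin(np.linalg.norm(centers - query_vec, axis=1)))
    # filter query on a plain stored field, so Solr never scans the other clusters
    params = {"q": "*:*", "fq": "cluster_id:%d" % cluster_id, "rows": rows, "wt": "json"}
    resp = requests.get(SOLR_URL, params=params)
    return resp.json()["response"]["docs"]

# the exact facenet distance is then computed only for these few candidates,
# not for the whole database
```

Again, this is only approximate: a real match sitting near a cluster boundary could land in a neighbouring cluster, so querying the two or three nearest clusters instead of just one might be safer.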
And thank you again for your interest - that alone somehow helps.
An alternative would be to keep the data in memory and distribute it over several nodes to parallelize the search. But wait, that's exactly what Apache Solr in cluster mode (minus the in-memory part) is doing :-)
I don't want to reinvent the wheel. While writing this I got some ideas on how to improve my situation (distributing the data over several nodes). Thank you!
any progress ? (just curious)