lgaravaglia's profile - activity

2015-10-09 17:19:34 -0600 received badge  Enthusiast
2015-10-08 22:08:11 -0600 received badge  Editor
2015-10-08 19:44:11 -0600 asked a question How to package an image for RTP

I'm trying to understand how to split an image into parts so that I can place it into RTP packets. I'm working with some old code that already has a custom RTP implementation, and I would like to reuse as much of it as possible. Unfortunately, the person who originally wrote the code cut some corners and never put actual image data into any of the packets; the payload fields just contain junk data. To reuse this code I need to split an image into 1024-byte chunks so that each chunk fits in the payload field. Could anyone point me to any commands or tutorials that could help me learn how to do this?
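
A minimal sketch of just the splitting step, assuming OpenCV's Python bindings (cv2) and a hypothetical input file frame.png; the RTP header handling itself would stay in the existing custom implementation:

    import cv2

    CHUNK_SIZE = 1024  # payload size the old code expects

    img = cv2.imread("frame.png")            # hypothetical input image
    if img is None:
        raise RuntimeError("could not read frame.png")

    ok, encoded = cv2.imencode(".jpg", img)  # compress to a flat byte buffer
    if not ok:
        raise RuntimeError("encoding failed")

    data = encoded.tobytes()
    # Slice the byte stream into 1024-byte pieces, one per packet payload.
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    print(len(chunks), "payloads of up to", CHUNK_SIZE, "bytes")

The receiver would concatenate the payloads in sequence-number order and rebuild the image with cv2.imdecode.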

2015-07-12 13:01:04 -0600 commented question Face Recognition with Large Training Set

No, it's no surprise that it would take days to finish with PCA. It's just the method I'm most familiar with at this point, and I wanted to see what would actually happen if I tried it that way. I haven't tried the LBPH method yet; I'm less familiar with it and assumed it would behave the same way as PCA when training on a data set that large. I'll take a more detailed look at it to see whether it will work for my case.

Partitioning the data is another approach I've put some thought into; unfortunately, my lab is running low on resources right now, so that may not be an option I can pursue further.
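
For reference, the LBPH recognizer in the opencv-contrib face module supports incremental updates, which could pair well with partitioning the data. A minimal sketch, assuming cv2.face is available and using a hypothetical load_batch() helper and num_batches count (neither is from the original code):

    import cv2
    import numpy as np

    recognizer = cv2.face.LBPHFaceRecognizer_create()
    num_batches = 10                              # assumed number of partitions

    # Hypothetical loader: returns a list of grayscale face images and an int
    # label for each, for one partition of the 75,000-image set.
    images, labels = load_batch(0)
    recognizer.train(images, np.array(labels))

    # Fold the remaining partitions in incrementally so the whole set never
    # has to sit in memory at once.
    for batch_id in range(1, num_batches):
        images, labels = load_batch(batch_id)
        recognizer.update(images, np.array(labels))

Note that update() is only implemented for the LBPH model; the Eigenfaces and Fisherfaces recognizers raise an error if you try to update them incrementally.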

2015-07-11 23:34:14 -0600 asked a question Face Recognition with Large Training Set

I'm working on a face recognition program based on the OpenCV facerec demo. The training database I would like to use contains 75,000 images; however, trying to run the code with this setup causes my computer to lock up completely.

Is there any way that the OpenCV implementation of face recognition can handle a large training set?

If not, is there another free API for face recognition that can handle a training set this large?
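
Part of the lock-up may simply be memory: the Eigenfaces/Fisherfaces training step stacks every sample into one large matrix of 64-bit floats. A rough back-of-the-envelope check, assuming (hypothetically, since the question doesn't give image sizes) that the 75,000 images are resized to 100x100 grayscale:

    num_images = 75000
    pixels_per_image = 100 * 100      # assumed size; not stated in the question
    bytes_per_value = 8               # samples are converted to 64-bit floats

    data_matrix_bytes = num_images * pixels_per_image * bytes_per_value
    print(data_matrix_bytes / 2**30, "GiB for the stacked sample matrix alone")
    # Roughly 5.6 GiB before the eigen-decomposition even starts, not counting
    # the working copies it makes -- enough to exhaust RAM on many machines.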