You can use a client-server architecture for your system. The client side runs on the Raspberry Pi and does three things:

1) Detect and track faces in the video stream;

2) When a new face is detected, send its image to the recognition server;

3) Wait for the server's reply and act on it on the Raspberry Pi.

The server side, in turn, runs on a PC with sufficient performance and simply waits for recognition tasks from the clients. Some time ago I developed all parts of a very similar solution with OpenCV and Qt.