Video & Image Analysis
The HD-RNN framework is well suited to real-time analysis of video streams and still images, and deploys effectively at both ends of the performance spectrum:
Unlike existing solutions, no application-specific, rule-based programming is required. The technology uses a self-learning mechanism to find patterns and objects in 2D, 3D, and N-dimensional data. Fusion of data originating from different sensory modalities (e.g. visual, audio) is intrinsically supported.
A single GPU card running the HD-RNN server suite handles more than 500 distinct object recognitions per second; a server with 4 GPU cards can therefore easily support well over 1,000 recognitions per second, or several live video feeds.
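As a back-of-envelope illustration of these throughput figures, the sketch below scales the stated 500 recognitions/s per card across multiple GPUs and converts the result into a number of live video feeds. The linear-scaling assumption and the 30 fps frame rate are illustrative choices, not measured results or part of the HD-RNN product itself.

```python
# Illustrative capacity estimate; the per-GPU figure comes from the text above,
# and linear scaling across cards is an assumption for this sketch.
RECOGNITIONS_PER_GPU_PER_SEC = 500

def aggregate_throughput(num_gpus: int) -> int:
    """Recognitions per second for a server with num_gpus cards,
    assuming throughput scales linearly with card count."""
    return num_gpus * RECOGNITIONS_PER_GPU_PER_SEC

def supported_video_feeds(num_gpus: int, fps: int = 30) -> int:
    """Number of live feeds supported if every frame triggers one recognition."""
    return aggregate_throughput(num_gpus) // fps

print(aggregate_throughput(4))   # 2000 recognitions/s for a 4-GPU server
print(supported_video_feeds(4))  # 66 feeds at 30 fps
```

Under these assumptions a 4-card server comfortably exceeds the 1,000 recognitions/s cited above, which is why several concurrent live feeds are feasible.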
Applications to which HD-RNN has been successfully applied include: