A key element of intelligent computing is the management and recognition of sensor data. Although applications are being studied for a range of sensor modalities, the most complicated, and perhaps most pervasive, is the analysis and understanding of moving-image video data.
Interpreting streams of data requires capturing multiple, hierarchical levels of information from the raw pixels of a video. Processing these data typically occurs in a pipeline. Near the end of the pipeline, the various features come together to form objects. A visual object is defined by its features and their relative (translation- and scale-invariant) positions with respect to each other.
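The early pipeline stages described above can be sketched in miniature. The following is a toy illustration, not the actual pipeline under study: a crude edge filter over raw pixels, followed by a stage that keeps strong responses as discrete, positioned features ready for later relational grouping. The image, threshold, and function names are all illustrative assumptions.

```python
# A minimal two-stage feature-extraction sketch over a tiny grayscale
# image (list of pixel rows). Everything here is illustrative.

image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

def horizontal_edges(img):
    """First stage: respond to horizontal intensity changes
    (a crude edge filter applied directly to raw pixels)."""
    return [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
            for row in img]

def feature_points(responses, threshold=5):
    """Later stage: keep only strong responses as discrete features,
    recorded with their (row, col) positions so that a subsequent
    stage can reason about their relative arrangement."""
    return [(y, x) for y, row in enumerate(responses)
            for x, v in enumerate(row) if v > threshold]

edges = horizontal_edges(image)       # per-pixel edge responses
features = feature_points(edges)      # → [(0, 1), (1, 1), (2, 1)]
```

The positions attached to each feature are what make the final, object-forming stage possible: objects are defined by relative positions, so position must survive each stage of the pipeline.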
Finding objects by mapping features is a complex and compute-intensive task. The objects in the image are represented as graphs, with object features as the nodes and the edges between nodes encoding the features' relationships to each other. The recognition task is to find an inexact subgraph isomorphism between known objects and the observed features. This is a probabilistic task: since there is rarely an exact match, what is sought is the best match in a Bayesian, most-likely sense.
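A toy version of this inexact matching can make the idea concrete. In the sketch below, a model object is a graph whose nodes are feature types and whose edges carry expected relative offsets; candidate assignments of model nodes to observed features are scored by a Gaussian log-likelihood on the edge offsets, and the highest-scoring assignment wins. The brute-force search, the Gaussian noise model, and all names are assumptions made for illustration, not the algorithms under study.

```python
import math
from itertools import permutations

# Toy object model: feature-type nodes, plus edges carrying the
# expected translation-invariant offset between feature pairs.
model_features = ["eye", "eye", "nose"]
model_edges = {(0, 1): (2.0, 0.0),   # eye -> eye: expected offset
               (0, 2): (1.0, 1.5)}   # eye -> nose: expected offset

# Observed features: (type, position) pairs extracted from an image.
observed = [("eye", (1.0, 1.0)),
            ("eye", (3.1, 1.0)),
            ("nose", (2.0, 2.4))]

def match_score(assignment, sigma=0.5):
    """Log-likelihood of one model-to-observation assignment under a
    Gaussian model of edge-offset noise (an inexact match score)."""
    score = 0.0
    for (i, j), (ex, ey) in model_edges.items():
        _, (xi, yi) = observed[assignment[i]]
        _, (xj, yj) = observed[assignment[j]]
        dx, dy = xj - xi, yj - yi
        score += -((dx - ex) ** 2 + (dy - ey) ** 2) / (2 * sigma ** 2)
    return score

def best_match():
    """Brute-force search over type-consistent assignments of model
    nodes to observed features; return the most likely one."""
    best, best_s = None, -math.inf
    for perm in permutations(range(len(observed)), len(model_features)):
        if all(observed[p][0] == t for p, t in zip(perm, model_features)):
            s = match_score(dict(enumerate(perm)))
            if s > best_s:
                best, best_s = perm, s
    return best, best_s

match, score = best_match()   # match == (0, 1, 2): near-perfect fit
```

Even in this toy, the probabilistic character of the task is visible: the observed offsets never equal the model's exactly, and the score simply prefers the assignment whose deviations are most likely under the noise model. Real formulations replace the exhaustive search with tractable inference.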
The focus of my research is to explore neurally inspired algorithms, loosely based on cortical structures, as new approaches to capturing the graphical structure of features in a still or moving image, and then finding the most likely matching subgraph using Bayesian techniques. The approach is to study the use of hierarchical, modular, sparsely distributed structures loosely based on the HTM (Hierarchical Temporal Memory) family of algorithms.