A model for multimodal humanlike perception based on modular hierarchical symbolic information processing, knowledge integration, and learning

BIONETICS (2007)

Abstract
Automatic surveillance systems and autonomous robots are technical systems that would profit from humanlike perception for effective, efficient, and flexible operation. This article introduces a model for humanlike perception based on hierarchical modular fusion of multi-sensory data, symbolic information processing, integration of knowledge and memory, and learning. The model is inspired by findings from neuroscience. Information from diverse sensors is transformed into symbolic representations and processed in parallel in a modular, hierarchical fashion. Higher-level symbolic information is gained by combining lower-level symbols, and feedback from higher levels to lower levels is possible. Relations between symbols can be learned from examples, and stored knowledge influences the activation of symbols. The model and its underlying concepts are explained by means of a concrete example taken from building automation.
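To make the fusion idea concrete, the following is a minimal sketch, not the authors' implementation: hypothetical sensor readings ("motion", "sound") are turned into low-level symbols, and a higher-level symbol ("person_present") is formed by combining them, with a prior standing in for stored knowledge that biases the activation. All names and thresholds are illustrative assumptions.

```python
# Minimal sketch of hierarchical symbolic fusion (illustrative only).
from dataclasses import dataclass

@dataclass
class Symbol:
    name: str
    activation: float = 0.0  # degree of belief that the symbol currently holds

def sensor_to_symbol(name: str, reading: float, threshold: float) -> Symbol:
    """Transform a raw sensor reading into a low-level symbolic representation."""
    return Symbol(name, activation=1.0 if reading >= threshold else 0.0)

def combine(name: str, parts: list[Symbol], prior: float = 1.0) -> Symbol:
    """Fuse lower-level symbols into a higher-level one; `prior` stands in
    for stored knowledge that influences the resulting activation."""
    act = prior * min(s.activation for s in parts) if parts else 0.0
    return Symbol(name, activation=act)

# Building-automation style example with hypothetical readings/thresholds:
motion = sensor_to_symbol("motion", reading=0.8, threshold=0.5)
sound = sensor_to_symbol("sound", reading=0.6, threshold=0.4)
presence = combine("person_present", [motion, sound], prior=0.9)
print(presence)  # Symbol(name='person_present', activation=0.9)
```

In the paper's model the modules would additionally run in parallel, feed activations back to lower levels, and learn the symbol relations from examples; this sketch shows only the bottom-up combination step.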
Keywords
knowledge representation, learning (artificial intelligence), sensor fusion, surveillance, automatic surveillance system, autonomous robots, building automation, feedback, hierarchical modular fusion, knowledge integration, learning, modular hierarchical symbolic information processing, multimodal humanlike perception, multisensory data fusion, neuroscience, parallel processing, symbolic representation, bionics, humanlike perception, knowledge-based systems, multisensory integration, symbolic information processing