An Integrated Approach to Human-Robot Interaction Based on Top-Down and Bottom-Up Attention Using Visual and Verbal Affective Intent in Agents
Current computational models of visual attention concentrate on relevant information and overlook the surrounding
context. However, studies in visual cognition show that humans use context to facilitate object detection
in natural scenes by directing their attention or eyes to diagnostic regions. Top-down attention is a knowledge-driven approach
in which an idea or decision is controlled or directed from the highest level of internally driven attention. Using visual and
verbal affective intent, the goals of a responsive and social agent can be achieved. Developing a social agent that perceives
by merging the visual system with verbal and facial patterns, including attention level and stress level,
can lead to the required result. The essential stages can be described as sensing a stimulus using the auditory and visual
systems, then forming a perception (processing of the stimulus) and making a decision using the top-down approach to interact with human
beings. In interaction with human beings, background knowledge and previous experience influence perception. This
influence is a key factor in achieving the goal. The top-down component uses accumulated statistical knowledge of the visual
features of the desired search target and of background clutter to optimally tune the bottom-up maps. Testing on artificial
and natural scenes shows that the model's predictions are consistent with a large body of available literature on the human
psychophysics of visual search. These results suggest that our model may provide a good approximation of how humans
combine bottom-up and top-down cues to optimize the speed and efficiency of target detection.
Keywords— Verbal and facial patterns, Perception, Statistical knowledge, Target detection.
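The top-down tuning described above can be sketched in a minimal NumPy example. This is an illustrative toy, not the paper's implementation: the feature maps are simple per-channel contrast maps (a hypothetical stand-in for multi-scale center-surround features), and the top-down weights follow a signal-to-noise idea, weighting each bottom-up map by the ratio of its mean response to the learned target versus background clutter. All function names and the feature statistics are assumptions for illustration.

```python
import numpy as np

def bottom_up_maps(image):
    """Toy bottom-up feature maps: one local-contrast map per color channel.
    (Stand-in for the multi-scale center-surround maps of full saliency models.)"""
    maps = []
    for c in range(image.shape[-1]):
        chan = image[..., c]
        # local contrast approximated as absolute deviation from the global mean
        maps.append(np.abs(chan - chan.mean()))
    return maps

def top_down_weights(target_feats, clutter_feats):
    """Weight each feature by a signal-to-noise ratio: mean response to the
    learned target divided by mean response to background clutter."""
    eps = 1e-6  # avoid division by zero
    return np.array([t / (c + eps) for t, c in zip(target_feats, clutter_feats)])

def saliency(image, weights):
    """Combine the bottom-up maps into one saliency map, tuned top-down."""
    maps = bottom_up_maps(image)
    return sum(w * m for w, m in zip(weights, maps))

# Toy scene: uniform green clutter with a single red target pixel.
scene = np.zeros((8, 8, 3))
scene[..., 1] = 0.5      # green clutter everywhere
scene[2, 5, 0] = 1.0     # red target at row 2, column 5
scene[2, 5, 1] = 0.0

# Hypothetical learned statistics: target is strong in the red channel,
# clutter is strong in the green channel.
target_feats = [1.0, 0.1, 0.1]
clutter_feats = [0.1, 1.0, 0.1]

w = top_down_weights(target_feats, clutter_feats)
s = saliency(scene, w)
y, x = np.unravel_index(np.argmax(s), s.shape)
print(y, x)  # attention lands on the target location (2, 5)
```

With the red channel up-weighted and the green channel suppressed, the saliency peak falls on the target rather than on the pervasive clutter, illustrating how top-down statistics can make an otherwise inefficient search efficient.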