
Object recognition combining vision and touch

Overview of attention for article published in Robotics and Biomimetics, April 2017

About this Attention Score

  • Good Attention Score compared to outputs of the same age (66th percentile)

Mentioned by

8 X users
1 Facebook page

Citations

14 Dimensions

Readers on

40 Mendeley
Title
Object recognition combining vision and touch
Published in
Robotics and Biomimetics, April 2017
DOI 10.1186/s40638-017-0058-2
Authors

Tadeo Corradi, Peter Hall, Pejman Iravani

Abstract

This paper explores ways of combining vision and touch for the purpose of object recognition. In particular, it focuses on scenarios when there are few tactile training samples (as these are usually costly to obtain) and when vision is artificially impaired. Whilst machine vision is a widely studied field, and machine touch has received some attention recently, the fusion of both modalities remains a relatively unexplored area. It has been suggested that, in the human brain, there exist shared multi-sensorial representations of objects. This provides robustness when one or more senses are absent or unreliable. Modern robotics systems can benefit from multi-sensorial input, in particular in contexts where one or more of the sensors perform poorly. In this paper, a recently proposed tactile recognition model was extended by integrating a simple vision system in three different ways: vector concatenation (vision feature vector and tactile feature vector), object label posterior averaging and object label posterior product. A comparison is drawn in terms of overall recognition accuracy and in terms of how quickly learning occurs (number of training samples). The conclusions reached are: (1) the most accurate system is "posterior product", (2) multi-modal recognition has higher accuracy than either modality alone when all visual and tactile training data are pooled together, and (3) in the case of visual impairment, multi-modal recognition "learns faster", i.e. requires fewer training samples to achieve the same accuracy as either modality alone.
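The three fusion schemes named in the abstract can be stated compactly. The NumPy sketch below is illustrative only, not the authors' code: the function names, the four-class setup, and the posterior values are assumptions made for the example. It shows how posterior averaging and posterior product combine per-modality class posteriors, with feature-vector concatenation included for completeness.

```python
import numpy as np

def fuse_average(p_vision, p_touch):
    # Object label posterior averaging: arithmetic mean of the per-class
    # posteriors from the two modalities, renormalised to sum to 1.
    p = 0.5 * (p_vision + p_touch)
    return p / p.sum()

def fuse_product(p_vision, p_touch, eps=1e-12):
    # Object label posterior product: element-wise product, renormalised.
    # eps (an assumption here) stops a zero from one modality vetoing a class.
    p = p_vision * p_touch + eps
    return p / p.sum()

def fuse_concat(vision_features, touch_features):
    # Vector concatenation: stack the two feature vectors so a single
    # classifier can be trained on the joint representation.
    return np.concatenate([vision_features, touch_features])

# Hypothetical per-class posteriors over four object classes, one per modality.
p_vision = np.array([0.60, 0.20, 0.15, 0.05])
p_touch = np.array([0.30, 0.40, 0.20, 0.10])

print(fuse_average(p_vision, p_touch))  # [0.45  0.3   0.175 0.075]
print(fuse_product(p_vision, p_touch))  # ~[0.61 0.27 0.10 0.02]
```

Note how the product rule sharpens the distribution toward classes on which both modalities agree, which is consistent with the abstract's finding that "posterior product" was the most accurate of the three schemes.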

X Demographics

The data shown below were collected from the profiles of the 8 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 40 Mendeley readers of this research output.

Geographical breakdown

Country | Count | As %
Ireland | 1 | 3%
Unknown | 39 | 98%

Demographic breakdown

Readers by professional status | Count | As %
Student > Ph.D. Student | 12 | 30%
Researcher | 4 | 10%
Student > Bachelor | 4 | 10%
Professor | 3 | 8%
Student > Master | 3 | 8%
Other | 5 | 13%
Unknown | 9 | 23%
Readers by discipline | Count | As %
Engineering | 13 | 33%
Computer Science | 6 | 15%
Arts and Humanities | 2 | 5%
Psychology | 2 | 5%
Business, Management and Accounting | 1 | 3%
Other | 6 | 15%
Unknown | 10 | 25%
Attention Score in Context

This research output has an Altmetric Attention Score of 5. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 09 June 2017.
Scope | Rank
All research outputs | #7,181,747 of 25,085,910
Outputs from Robotics and Biomimetics | #5 of 38
Outputs of similar age | #105,833 of 316,024
Outputs of similar age from Robotics and Biomimetics | #1 of 1
Altmetric has tracked 25,085,910 research outputs across all sources so far. This one has received more attention than most of these and is in the 71st percentile.
So far Altmetric has tracked 38 research outputs from this source. They receive a mean Attention Score of 1.8. This one scored the same as or higher than 33 of them.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 316,024 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 66% of its contemporaries.
We're also able to compare this research output to 1 other from the same source published within six weeks on either side of this one. This one has scored higher than all of them.