
Machine Vision for Assistive Technologies

The last decade has witnessed the significant impact of Computer Vision and Robotics on real-world products. Traditional Computer Vision problems such as tracking, 3D reconstruction, detection, recognition, odometry, and navigation are now solved with significantly higher accuracy using Machine Learning (Farinella et al., 2020). However, most of these results have focused on constrained application scenarios that do not involve the integration of feedback from the user (Leo et al., 2019). Since these applications do not consider the user’s intentions and goals, they tend to be of limited use when it is necessary to assist humans.

With the pervasive successes of Computer Vision and Robotics and the advent of Industry 4.0, it has become paramount to design systems that can truly assist humans and augment their abilities to tackle both physical and intellectual tasks. We broadly refer to such systems as “assistive technologies” (Leo et al., 2017). Examples of these technologies include approaches that help visually impaired people navigate and perceive the world, wearable devices that use artificial intelligence and mixed or augmented reality to improve perception and bring computation directly to the user, and systems designed to aid industrial processes and improve the safety of workers (Leo and Farinella, 2018). These technologies need to follow an operational paradigm in which the user is central and can both influence and be influenced by the system. Although some examples of this approach exist (Fosch-Villaronga et al., 2021), implementing applications according to this “human-in-the-loop” scenario still requires considerable effort to reach an adequate level of reliability, and it introduces challenging ancillary issues related to usability, privacy, and acceptability.

The main aim of this Research Topic was to gather contributions from the diverse fields of engineering and computer science on technologies involving Computer Vision and Robotics for the real-time, continuous assistance and support of humans as they perform tasks.

At the end of a double-blind review process that involved distinguished researchers from industry and academia, four papers were accepted.

The first paper (sorted by acceptance date) is titled “Communicating Photograph Content Through Tactile Images to People With Visual Impairments” (Pakenaite et al.). It introduces an approach to make visual content accessible via touch. State-of-the-art algorithms automatically process an input photograph into a collage of icons depicting the most important semantic aspects of the scene. The collage is then printed onto swell paper, thereby allowing people with visual impairments to access photographs and better enjoy books, tourist brochures, and similar materials.
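
As a rough illustration of this idea (not the authors’ actual pipeline), the sketch below detects the main objects in a photograph with an off-the-shelf detector and pastes matching icons onto a blank canvas for swell-paper printing. The detector choice, the icon directory, and the layout rule are all assumptions.

```python
# Hypothetical sketch of a photo-to-tactile-collage pipeline; NOT the
# authors' implementation. Assumes one icon image exists per object class.
import torch
from PIL import Image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]  # COCO class names

def photo_to_collage(photo_path, icon_dir, score_thresh=0.8, canvas_size=(600, 400)):
    img = Image.open(photo_path).convert("RGB")
    with torch.no_grad():
        detections = model([weights.transforms()(img)])[0]
    # White greyscale canvas: dark icon strokes swell when heat is applied.
    canvas = Image.new("L", canvas_size, color=255)
    sx, sy = canvas_size[0] / img.width, canvas_size[1] / img.height
    for box, label, score in zip(
        detections["boxes"], detections["labels"], detections["scores"]
    ):
        if score < score_thresh:
            continue  # keep only the most salient semantic content
        name = categories[int(label)]
        icon = Image.open(f"{icon_dir}/{name}.png").convert("L")  # assumed icon set
        # Place each icon roughly where the object appears in the photo.
        canvas.paste(icon.resize((64, 64)), (int(box[0] * sx), int(box[1] * sy)))
    return canvas
```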

The paper “Deep-Learning-Based Cerebral Artery Semantic Segmentation in Neurosurgical Operating Microscope Vision Using Indocyanine Green Fluorescence Videoangiography” (Kim et al.) demonstrates the feasibility of segmenting cerebral arteries in the operating-field view using deep learning, as well as the effectiveness of generating blood-vessel ground truth automatically from ICG fluorescence videoangiography.
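
A minimal sketch of this training setup, assuming binary vessel masks obtained by thresholding the ICG fluorescence signal; the network and hyperparameters below are generic placeholders, not Kim et al.’s model.

```python
# Minimal sketch: fine-tune a generic semantic segmentation network for
# binary vessel masks; layer choices are illustrative, not the paper's.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, num_classes=1)  # vessel vs. background
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(frames, vessel_masks):
    """frames: (B, 3, H, W) microscope images in [0, 1];
    vessel_masks: (B, 1, H, W) binary masks assumed to be derived
    automatically from the ICG fluorescence frames."""
    logits = model(frames)["out"]
    loss = criterion(logits, vessel_masks.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```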

The paper “Environment Classification for Robotic Leg Prostheses and Exoskeletons Using Deep Convolutional Neural Networks” (Laschowski et al.) deals with robotic leg prostheses and exoskeletons that can provide powered locomotor assistance to older adults and/or persons with physical disabilities. Inspired by the human vision-locomotor control system, the authors developed an environment classification system powered by computer vision and deep learning to predict the oncoming walking environment prior to physical interaction, thereby allowing more accurate and robust high-level control decisions.
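
A minimal sketch of such an environment classifier follows; the MobileNetV2 backbone and the class list are assumptions for illustration, not the network reported in the paper.

```python
# Hypothetical terrain classifier for a wearable camera; the labels and
# backbone are illustrative assumptions, not the published architecture.
import torch.nn as nn
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights

CLASSES = ["level-ground", "incline-stairs", "decline-stairs", "ramp"]

model = mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT)
# Replace the ImageNet head with a small terrain-classification head.
model.classifier[1] = nn.Linear(model.last_channel, len(CLASSES))
# At run time, the predicted class would feed the prosthesis/exoskeleton
# high-level controller, switching locomotion mode before the limb makes
# physical contact with the new terrain.
```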

The last paper, “Recognition and Classification of Ship Images Based on SMS-PCNN Model” (Wang et al.), lies in the field of ship image recognition and classification. To extract ship features at different scales, the authors propose SMS-PCNN, a multi-scale parallel CNN with three characteristics: (1) image features of different sizes are extracted by parallel convolutional branches with different receptive fields; (2) the number of channels is adjusted twice to extract features and eliminate redundant information; and (3) residual connections are used to extend the network depth and mitigate vanishing gradients.
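
The sketch below illustrates these three ideas combined in a single block; the kernel sizes and channel counts are placeholders, not the published SMS-PCNN configuration.

```python
# Illustrative block combining the three characteristics described above;
# NOT the exact SMS-PCNN layer configuration.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # (1) Parallel convolutional branches with different receptive fields.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in (1, 3, 5)
        ])
        # (2) Adjust the channel count twice: compress the concatenated
        # features, then restore them, squeezing out redundant information.
        self.reduce = nn.Conv2d(3 * channels, channels // 2, 1)
        self.restore = nn.Conv2d(channels // 2, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        multi = torch.cat([branch(x) for branch in self.branches], dim=1)
        out = self.restore(self.act(self.reduce(multi)))
        # (3) The residual connection lets gradients bypass the block,
        # easing the training of deeper stacks.
        return self.act(out + x)

block = MultiScaleBlock(64)
y = block(torch.randn(1, 64, 128, 128))  # shape preserved: (1, 64, 128, 128)
```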
