Animal-Computer Interaction (ACI) and other technologies
As the robot industry grows, research into biomimetic robots continues to increase. Robot dogs are among the most researched types. Unlike robotic arms and vehicles, robotic dogs emphasize interaction with people, and their applications are therefore more focused on daily life. Machine guide dogs are one application that makes good use of this feature. This paper describes the use of the robot dog DOGZILLA S1 for route patrol and obstacle recognition. Based on this, the robot dog provides feedback to people, and the work can serve as preliminary research toward developing it into a complete guide dog.
This article explores how visually impaired people (VIP) navigate around (a) stationary people and (b) moving people when guided by Boston Dynamics’ robotic “dog” and its human operator. By focusing on the micro-spatial dimensions of human mobility while being guided by a mobile robot, the paper argues that the VIP+robodog+operator emerges in situ as a socio-material assemblage in which agency, perception, and trust get distributed, and that this distribution enables the accomplishment of navigation. The article is based on ethnomethodology and multimodal conversation analysis (EMCA) and a video ethnographic methodology. It contributes to studies in perception, agency, human–robot interaction, space and culture, and distributed co-operative action in socio-material settings.
These studies are part of a project aiming to reveal relevant aspects of human–dog interactions, which could serve as a model for designing successful human–robot interactions. Presently there are no successfully commercialized assistance robots; however, assistance dogs work efficiently as partners for persons with disabilities. In Study 1, we analyzed the cooperation of 32 assistance dog–owner dyads performing a carrying task. We revealed typical behavior sequences as well as differences depending on the dyads’ experience and on whether the owner was a wheelchair user. In Study 2, we investigated dogs’ responses to unforeseen difficulties during a retrieving task in two contexts. Dogs displayed specific communicative and displacement behaviors, and a strong commitment to executing the insoluble task. Questionnaire data from Study 3 confirmed that these behaviors could successfully attenuate owners’ disappointment. Although owners anticipated the technical competence of future assistance robots to be moderate to high, they could not imagine robots as emotional companions, which negatively affected their acceptance ratings of future robotic assistants. We propose that assistance dogs’ cooperative behaviors and problem-solving strategies should inspire the development of the relevant functions and social behaviors of assistance robots with limited manual and verbal skills.
Navigation robots have the potential to overcome some of the limitations of traditional navigation aids for blind people, especially in unfamiliar environments. In this paper, we present the design of CaBot (Carry-on roBot), an autonomous suitcase-shaped navigation robot that is able to guide blind users to a destination while avoiding obstacles on their path. We conducted a user study in which ten blind users evaluated specific functionalities of CaBot, such as a vibro-tactile handle to convey directional feedback; experimented to find their comfortable walking speed; and performed navigation tasks to provide feedback about their overall experience. We found that CaBot’s performance greatly exceeded users’ expectations, and participants often compared it to navigating with a guide dog or sighted guide. Users’ high confidence, sense of safety, and trust in CaBot point to autonomous navigation robots as a promising solution for increasing the mobility and independence of blind people, in particular in unfamiliar environments.
The guide dog robot (GDR) is a low-speed companion robot that serves visually impaired people and guides blind people to walk steadily; it carries a variety of intelligent technologies and needs the ability to guide with optimal energy consumption in specific scenarios. This paper proposes an innovative technique for virtual-real collaborative path planning and navigation of the GDR in specific indoor scenarios, and designs an experimental method for virtual-real collaborative path planning in such scenarios. The energy consumption integral equation is used to solve for the energy consumption of the GDR under virtual-real synergy, and the difference in energy consumption is compared for three navigation directions: horizontal, vertical, and oblique obstacle avoidance. The results show that the optimized GDR saves 6.91% in rectilinear movement and 10.60% in curved movement. The efficiency of planning and navigating the GDR in specific domestic scenarios is verified through the virtual-real cooperative approach. Realizing energy-optimal path planning helps explore key ideas in the path planning and navigation of mobile robots in specific indoor scenarios.
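The abstract does not reproduce the energy consumption integral equation itself; a generic form consistent with the description, where P(t) is the instantaneous power drawn by the robot's actuators along the planned trajectory and T is the traversal time, would be

\[
E = \int_{0}^{T} P(t)\,dt ,
\]

so that comparing horizontal, vertical, and oblique obstacle-avoidance paths amounts to comparing this integral over the corresponding trajectories.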
Dog guides are favored by blind and low-vision (BLV) individuals for their ability to enhance independence and confidence by reducing safety concerns and increasing navigation efficiency compared to traditional mobility aids. However, only a relatively small proportion of BLV individuals work with dog guides due to their limited availability and associated maintenance responsibilities. There is considerable recent interest in addressing this challenge by developing legged guide dog robots. This study was designed to determine critical aspects of the handler–guide dog interaction and to better understand handler needs to inform guide dog robot development. We conducted semi-structured interviews and observation sessions with 23 dog guide handlers and 5 trainers. Thematic analysis revealed critical limitations in guide dog work, desired personalization in handler–guide dog interaction, and important perspectives on future guide dog robots. Grounded in these findings, we discuss pivotal design insights for guide dog robots aimed at adoption within the BLV community.
Working dogs have improved the lives of thousands of people throughout history. However, communication between human and canine partners is currently limited. The main goal of the FIDO project is to research fundamental aspects of wearable technologies to support communication between working dogs and their handlers. In this study, the FIDO team investigated on-body interfaces for dogs in the form of wearable technology integrated into assistance dog vests. We created five different sensors that dogs could activate based on natural dog behaviors such as biting, tugging, and nose touches. We then tested the sensors on-body with eight dogs previously trained for a variety of occupations and compared their effectiveness in several dimensions. We were able to demonstrate that it is possible to create wearable sensors that dogs can reliably activate on command, and to determine cognitive and physical factors that affect dogs’ success with body-worn interaction technology.
Acquiring a traditional service dog for visually impaired persons is expensive, with long waitlist times, complicated training programs, and the responsibility of caring for a living creature. A robotic service dog alternative could be a significantly more accessible and practical option for many members of the visually impaired community. Another benefit of a robotic service dog is that it can be imbued with the ability to converse with its companion in human language. This allows human users to gather more information about their environment and give specific commands to the service dog. The focus of this project is to create a voice module that takes commands and gives verbal replies by integrating hardware and software components into a single module. This module will ultimately be installed on a Unitree Go1 robotic dog using a Raspberry Pi, microphone, speaker, and several Python libraries, such as PyAudio and SpeechRecognition for voice recognition and pyttsx3 for speech synthesis. These libraries rely on Hidden Markov Models, voice activity detectors, and preinstalled TTS engines. A working prototype will be demonstrated with functional speech and audio input capabilities in English to report on the environment and to receive voice commands. Testing and validation results will also include metrics such as speaking range, rate of speaking, and volume of speech synthesis. The outcomes of our research contribute to the advancement of assistive technologies for visually impaired individuals, ultimately improving their quality of life and independence.
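As a point of reference for this kind of voice module, the following is a minimal sketch of a command/response loop using the SpeechRecognition and pyttsx3 libraries named in the abstract. The wiring is hypothetical: the phrases, replies, and the Google recognizer backend below are illustrative stand-ins, not the authors' actual implementation on the Raspberry Pi.

```python
# Minimal sketch: listen for one spoken command, then reply through TTS.
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
tts = pyttsx3.init()
tts.setProperty("rate", 150)   # speaking rate in words per minute

def listen_for_command() -> str:
    """Capture one utterance from the microphone and return it as lowercase text."""
    with sr.Microphone() as source:            # requires PyAudio
        recognizer.adjust_for_ambient_noise(source, duration=0.5)
        audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:               # speech was unintelligible
        return ""

def speak(text: str) -> None:
    """Synthesize a spoken reply through the attached speaker."""
    tts.say(text)
    tts.runAndWait()

if __name__ == "__main__":
    command = listen_for_command()
    if "describe" in command:
        speak("Hypothetical reply: there is a chair two meters ahead on your left.")
    elif command:
        speak(f"Command received: {command}")
    else:
        speak("Sorry, I did not catch that.")
```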
Periodic monitoring of the training of prospective guide dogs for the blind was evaluated to determine if the monitoring is useful in gauging the potential suitability of guide dogs. We selected 8 dogs as test dogs on the basis of their medical check and pretraining evaluation. Beginning with day 1 of training, we monitored their progress every 2 weeks for 12 weeks. The evaluation was designed to assess task performance, stress, excitement, and concentration on the task. We set the test course in a residential district, but in an area that was not used for daily training. In some variables, such as tail position, duration of distraction, and effect of the training break, there were some differences between a dog that successfully completed guide training and dogs that did not. The number of stress reactions was significantly different between successful and unsuccessful dogs. Only 1 dog out of the 8 observed became a guide dog; however, the present study suggests that it is possible to detect some traits in the early stages of training that determine whether or not a dog successfully becomes a guide dog.
This paper proposes a smart guide dog harness, a visual assistance system based on AI edge computing that uses technologies such as image recognition, positioning, and navigation. In operation, the guide dog wears the auxiliary harness, which classifies different objects, obstacles, and environmental features. For example, it can identify pedestrians, vehicles, and traffic signs ahead and communicate relevant information to the user in real time through the voice system of a wearable headset. This enables the visually impaired to perceive the surrounding environment more comprehensively, avoid collisions with obstacles, and navigate more accurately, providing emotional support to visually impaired people and improving their willingness to integrate into society.
Guide dog robots with advanced sensing abilities could be a big boon to vision-impaired people as some of them may choose technological solutions over real-life guide dogs. In this study, we propose a method that combines a robotic guide dog sensing system with the YOLO-GUIDE framework to enable real-time indoor object detection and classification with localization. The performance was assessed using ten indoor objects. The qualitative test outcomes showed the effectiveness of the proposed method, while quantitative evaluation results with 0.76 Precision, 0.67 Recall, and a 0.71 F1-score indicate high performance. The YOLO-GUIDE proved its superiority by outperforming other relevant models.
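For reference, the reported F1-score is the harmonic mean of the stated precision and recall:

\[
F_1 = \frac{2PR}{P+R} = \frac{2 \times 0.76 \times 0.67}{0.76 + 0.67} \approx 0.71 .
\]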
This paper explores the intersection of assistance dog welfare and intelligent systems through a technological intervention in the form of an emergency canine alert system. We make the case that assistance dog welfare can be affected by the welfare of their human handlers, and examine the need for a canine alert system that enables the dog to take control of a potentially distressing situation, thus improving assistance dog welfare. We focus on one specific subset of assistance dogs, the Diabetes Alert Dog, trained to warn their diabetic handlers of dangerously low or high blood sugar levels.
This study introduces “Guide Dog AR,” an AR device inspired by the handle of a guide dog’s harness. The device and its associated content aim to give visually impaired individuals the experience of walking with a guide dog, serving as both a practical tool and a foundation for entertainment. The device offers an augmented walking experience through tactile and auditory sensations and was assessed on two factors, effectiveness and immersiveness. Effectiveness was evaluated on a simulated virtual path, with the success rate determined by how well users navigated the path using the device’s feedback. Immersion was evaluated based on how deeply users were engrossed in the augmented content provided by the device; in-depth interviews were conducted with visually impaired participants who experienced the virtual path, comparing their experiences to actual walks with guide dogs and discussing similarities, differences, and overall immersion.
Guide dogs are excellent life assistants for the visually impaired. Taking a user-experience perspective and starting from the fundamental needs of the visually impaired, this article designs an electronic guide dog that can identify environmental information, automatically avoid obstacles, communicate through voice, make emergency calls, and provide positioning and communication functions. The design uses an STM32 MCU, an HC-SR04 ultrasonic module, an RFID radio frequency identification module, a voice module, a GPS positioning module, a GSM communication module, and a motor drive module. It realizes voice-command compliance, automatic obstacle avoidance, walking guidance, positioning, and communication for the visually impaired. This greatly facilitates the daily lives of the visually impaired and has significant practical value.
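To illustrate the control flow such a design implies, the following is a minimal Python sketch of a distance-threshold obstacle-avoidance and voice-feedback loop. The actual device described runs firmware on an STM32 MCU, so the sensor and actuator calls below are simulated stand-ins, and the 50 cm threshold is an assumed value rather than one from the paper.

```python
# Simulated sketch of the per-cycle guidance logic: read distance, avoid or go forward.
import random
import time

OBSTACLE_THRESHOLD_CM = 50   # assumed threshold; not specified in the paper

def read_ultrasonic_cm() -> float:
    """Stand-in for an HC-SR04 distance measurement (simulated here)."""
    return random.uniform(10, 200)

def drive(left: float, right: float) -> None:
    """Stand-in for the motor drive module (wheel speeds in arbitrary units)."""
    print(f"drive(left={left:.1f}, right={right:.1f})")

def speak(message: str) -> None:
    """Stand-in for the voice module."""
    print(f"voice: {message}")

def guidance_step() -> None:
    distance = read_ultrasonic_cm()
    if distance < OBSTACLE_THRESHOLD_CM:
        speak("Obstacle ahead, turning.")
        drive(0.4, -0.4)   # pivot away from the obstacle
    else:
        drive(0.5, 0.5)    # continue forward at walking pace

if __name__ == "__main__":
    for _ in range(5):
        guidance_step()
        time.sleep(0.2)
```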
A guide dog robot system for the visually impaired often needs to process many kinds of information, such as images, voice, and other sensor data. Information processing methods based on deep neural networks can achieve better results; however, they require expensive computing and communication resources to meet real-time requirements. Fog computing has emerged as a promising solution for applications that are data-intensive and delay-sensitive. We propose a fog computing framework named PEN (Phone + Embedded board + Neural compute stick) for the guide dog robot system. The robot’s functions in PEN are wrapped as services and deployed on the appropriate devices, and services are combined into an application in a visual programming language environment. A neural compute stick accelerates image processing at low power consumption. A simulation environment and a prototype are built on the framework. The simulated guide dog system is developed for operation in a miniature environment, including a small robot dog, a small wheelchair, model cars, traffic lights, and traffic blockages. The prototype is a full-sized portable guide system that can be used by a visually impaired person in a real environment. Simulation and experiments show that the framework can meet the functional and performance requirements for implementing guide systems for the visually impaired.
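As a rough illustration of what "wrapping a function as a service" can look like in such a fog setup, the sketch below exposes a placeholder vision function over HTTP using Flask. This is an assumption for illustration only: the paper does not name Flask or HTTP, and the endpoint, port, and dummy classifier are hypothetical.

```python
# Minimal sketch of one robot function wrapped as a network service,
# in the spirit of PEN's service-based decomposition (illustrative only).
from flask import Flask, jsonify, request

app = Flask(__name__)

def classify_image(image_bytes: bytes) -> str:
    """Placeholder for a neural-network classifier that the real system would
    run on the embedded board with a neural compute stick."""
    return "traffic_light_red" if image_bytes else "unknown"

@app.route("/services/vision/classify", methods=["POST"])
def classify():
    # Other devices in the fog (e.g., the phone) would POST an image here and
    # compose this service with voice and navigation services into an application.
    label = classify_image(request.data)
    return jsonify({"label": label})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```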

