SmileFace - A Smile Classifier Applied to Wheelchair Control

Abstract

Assistive robotics solutions help people regain the mobility and autonomy they have lost in their daily lives. JoyFace is a human-machine interface based on head postures and facial expressions to control a robotized wheelchair. This work presents a comparison between two machine-learning-based smile classifiers to be integrated into the JoyFace system.

Introduction

According to the World Health Organization (2011), more than one billion people in the world live with some form of disability, about 200 million of whom have considerable functional difficulties. In Brazil, over 45 million people have some disability, 13 million of whom suffer from severe motor disabilities (IBGE, 2010).

Recent works propose different alternative controls for wheelchairs. Chauhan et al. (2016) developed a wheelchair controlled by voice commands, although this solution can be affected by environmental sounds. Kim et al. (2013) implemented a control based on commands sent through the tongue. Rohmer, Pinheiro, Raizer, Olivi and Cardozo (2015) proposed a control for assistive robotic vehicles using small movements of the face or limbs captured by electromyography (EMG) and signals generated by brain activity captured by electroencephalography (EEG).

JoyFace is a system based on computer vision developed to control a wheelchair through facial expressions. Video 1 shows a preview of the system in a simulation. This human-machine interface considers the displacement of the user's face relative to a reference region. The face is detected by a regular webcam and its position is tracked; each position is associated with a movement command for the wheelchair.

In this article, we present a simulation of a Human Machine Interface (HMI) to control a robotized wheelchair using head displacement and facial expressions. The early results lead us to conclude that the system needs several modifications to be considered a safe and reliable solution for people paralyzed from the neck down.

Video 1 - See the preview simulation here


In [18]:
from IPython.display import YouTubeVideo

# Embed the preview of the simulated system (Video 1).
YouTubeVideo("uzecwOaiKik")


Out[18]:

Method

The system consists of a robotized wheelchair simulated in the V-REP software and a Human Machine Interface (HMI) called JoyFace. Figure 1 illustrates the overview of the real robotized wheelchair. In his Master's thesis, Júnior (2016) documented the architecture, models, control and applications of this wheelchair; the author analyzed many commercial and academically developed wheelchairs and, based on that research, proposed a robotic wheelchair architecture that can be controlled by a wide range of assistive interfaces.

An Arduino Mega 2560 is responsible for connecting the sensors and providing the embedded control that actuates the independently driven rear wheels, whereas the front caster wheels roll freely. This mobile robot has two emergency stop buttons (one close to each armrest), one encoder in each motor to measure wheel displacement, a laser range finder (LRF) to measure distances to obstacles (obstacle detection and mapping), limit switches to stop the wheelchair safely in case of collision, infrared sensors pointed at the ground to detect abrupt irregularities, an Inertial Measurement Unit (IMU) to detect and correct motion, and other components.
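Because the rear wheels are driven independently, the embedded control can steer the chair by commanding different speeds to each wheel. Below is a minimal differential-drive sketch of this idea; the function name, gains and wheel-base value are illustrative assumptions, not taken from the actual firmware.

```python
def differential_drive(linear, angular, wheel_base=0.6):
    """Convert a linear velocity (m/s) and an angular velocity (rad/s)
    into individual speeds for the left and right rear wheels.
    wheel_base is the assumed distance between the rear wheels (m)."""
    left = linear - angular * wheel_base / 2.0
    right = linear + angular * wheel_base / 2.0
    return left, right

# Example: move forward at 0.5 m/s while turning slightly to the left.
print(differential_drive(0.5, 0.3))
```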

A Raspberry Pi Model B+ implements the communication between high-level applications and the low-level layer responsible for control and sensing. The software embedded in this intermediate layer is a RESTful application that uses the HTTP protocol. In this way, any programming language that can handle HTTP requests can communicate with the robotized wheelchair.
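As a hedged example of such a client, the Python snippet below posts a motion command to the intermediate layer. The address, endpoint and payload fields are illustrative assumptions, not the actual routes exposed by the wheelchair's RESTful application.

```python
import requests

# Hypothetical endpoint; the real routes are defined by the software
# embedded in the Raspberry Pi layer.
WHEELCHAIR_URL = "http://192.168.0.10:8080/api/motion"

def send_command(linear, angular):
    """Post a motion command (illustrative fields) to the wheelchair."""
    response = requests.post(WHEELCHAIR_URL,
                             json={"linear": linear, "angular": angular},
                             timeout=1.0)
    response.raise_for_status()
    return response.json()
```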

JoyFace

The JoyFace HMI considers the displacement of the user's face relative to a reference region. The face is detected by a regular webcam and its position is tracked; each position is associated with a movement command for the wheelchair.

JoyFace was implemented in Python and uses face detection based on the Viola-Jones classifiers incorporated into the OpenCV library (Viola and Jones, 2001). These classifiers use Haar cascade features that are applied to images in real time (Papageorgiou et al., 1998). After the user's face is detected, the last 40 frames are observed; from them the average face position is calculated and a reference region is demarcated. This reference region remains static while JoyFace is in use and is displayed as a white rectangle.
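A minimal sketch of how the reference region could be demarcated from the average of the last 40 face detections is shown below; it uses OpenCV's stock frontal-face Haar cascade, and the margin around the mean centroid is an assumption rather than the value used in JoyFace.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def reference_region(capture, n_frames=40, margin=30):
    """Average the face position over the first n_frames and return
    a static (x, y, w, h) reference rectangle around the mean centroid."""
    centroids = []
    while len(centroids) < n_frames:
        ok, frame = capture.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces):
            x, y, w, h = faces[0]
            centroids.append((x + w // 2, y + h // 2))
    cx, cy = np.mean(centroids, axis=0).astype(int)
    return (cx - margin, cy - margin, 2 * margin, 2 * margin)

# Example: region = reference_region(cv2.VideoCapture(0))
```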

The centroid of the face detection square is calculated in real time and highlighted with a green circle in the displayed image. In this way the user can send commands by displacing the nose, which coincides with the calculated centroid. Figure 2 shows how the JoyFace HMI works. If the user positions the nose above the reference region, the wheelchair begins to move forward; if the nose is positioned to the right or to the left, the wheelchair turns to the corresponding side; and if the nose is positioned below the reference region, the wheelchair stops.
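A hedged sketch of this position-to-command mapping follows; the command strings and the behavior when the nose stays inside the reference region are illustrative assumptions.

```python
def face_to_command(centroid, region):
    """Map the nose/centroid position relative to the reference
    rectangle to a wheelchair command (illustrative strings)."""
    cx, cy = centroid
    x, y, w, h = region
    if cy < y:
        return "forward"   # nose above the reference region
    if cy > y + h:
        return "stop"      # nose below the reference region
    if cx < x:
        return "left"
    if cx > x + w:
        return "right"
    return "hold"          # inside the region: keep the current state (assumption)
```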

Workflow

  1. Capture real-time data from the user's webcam.

  2. Identify the user's face using an OpenCV classifier.

  3. Compute the centroid of the face and set a point on the nose.

  4. Identify the user's smile using an OpenCV classifier.

  5. Create a threshold around the reference rectangle.

  6. Associate face movements and facial expressions with wheelchair commands.

  7. Integrate with the V-REP simulation.


EXPLAIN EACH FUNCTION
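As a minimal, self-contained sketch of steps 1-4 of the workflow above, the snippet below uses OpenCV's stock Haar cascades (haarcascade_frontalface_default.xml and haarcascade_smile.xml); the glue code and parameter values are illustrative assumptions, not the actual JoyFace implementation.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

capture = cv2.VideoCapture(0)          # step 1: real-time webcam feed
ok, frame = capture.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)        # step 2
    for (x, y, w, h) in faces:
        nose = (x + w // 2, y + h // 2)                        # step 3: centroid
        roi = gray[y:y + h, x:x + w]
        smiles = smile_cascade.detectMultiScale(roi, 1.7, 20)  # step 4
        print("nose at", nose, "- smiling" if len(smiles) else "- not smiling")
capture.release()
```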

Results

Show that the OpenCV classifier was effective at detecting smiles and, consequently, at controlling the wheelchair.

Which parameters were used.

Which parameters can be changed.
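As an illustration of the kind of parameters involved (the values below are assumptions, not necessarily the ones evaluated in this work), the OpenCV cascade classifier exposes scaleFactor, minNeighbors and minSize through detectMultiScale:

```python
import cv2
import numpy as np

smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

# Dummy grayscale face region just to make the call runnable;
# in JoyFace this would be the face rectangle cropped from the webcam frame.
gray_face_roi = np.zeros((200, 200), dtype=np.uint8)

# Tunable parameters (values here are illustrative assumptions):
# - scaleFactor: how much the image is shrunk at each level of the search pyramid;
# - minNeighbors: how many overlapping detections are required to accept a smile;
# - minSize: smallest smile window considered, in pixels.
smiles = smile_cascade.detectMultiScale(
    gray_face_roi, scaleFactor=1.7, minNeighbors=20, minSize=(25, 25))
print(len(smiles), "smile(s) detected")
```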

Conclusion

The classifier used was effective for wheelchair control, but it can be improved.

References


Chauhan, R., Jain, Y., Agarwal, H. and Patil, A. (2016). Study of implementation of voice controlled wheelchair, Advanced Computing and Communication Systems (ICACCS), 2016 3rd International Conference on, Vol. 1, IEEE, pp. 1–4.


Júnior, A. (2016). Robotização de uma cadeira de rodas motorizada: arquitetura, modelos, controle e aplicações. Master’s thesis, School of Electrical and Computer Engineering, FEEC, UNICAMP.


Kim, J., Park, H., Bruce, J., Sutton, E., Rowles, D., Pucci, D., Holbrook, J., Minocha, J., Nardone, B., West, D. et al. (2013). The tongue enables computer and wheelchair control for people with spinal cord injury, Science translational medicine 5(213): 213ra166–213ra166.


Rohmer, E., Pinheiro, P., Raizer, K., Olivi, L. and Cardozo, E. (2015). A novel platform supporting multiple control strategies for assistive robots, Robot and Human Interactive Communication (RO-MAN), 2015 24th IEEE International Symposium on, IEEE, pp. 763–769.


World Health Organization (2011). World report on disability, World Health Organization.


Viola, P. and Jones, M. (2001). Robust real time object detection, International Journal of Computer Vision 4(34–47).


Papageorgiou, C. P., Oren, M. and Poggio, T. (1998). A general framework for object detection, Computer vision, 1998. Sixth international conference on, IEEE, pp. 555–562.


IBGE (2010). Cartilha do censo 2010: Pessoas com deficiência, Brasília: Secretaria de Direitos Humanos da Presidência da República (SDH)/Secretaria Nacional de Promoção dos Direitos da Pessoa com Deficiência (SNPD).