
Tele-Haptics - Introduction, Haptics Applications, Haptic Rendering, Stable Haptic Interaction, Architecture of Haptics Applications, Networked Haptics


Xiaojun Shen and Shervin Shirmohammadi
University of Ottawa, Canada

Definition: Tele-haptics refers to the transmission of computer-generated tactile sensations over a network.

Introduction

Multimedia and information technology are reaching the limits of what can be done with sight and sound alone. The next critical step is to bring the sense of "touch" over network connections, which is known as tele-haptics. Haptics, a term derived from the Greek verb "haptesthai" meaning "to touch", introduces the sense of touch and force into human-computer interaction. Haptics enables the human operator to manipulate objects in the environment in a natural and effective way, enhances the sensation of "presence", and provides information, such as the stiffness and texture of objects, that cannot be conveyed completely by visual or audio feedback alone. The technology has already been explored in contexts as diverse as modeling and animation, geophysical analysis, dentistry training, virtual museums, assembly planning, mine design, surgical simulation, design evaluation, control of scientific instruments, and robotic simulation. But its true potential in these areas has not yet been achieved, and its application to all aspects of dexterous training, for example, remains largely untapped. Haptic devices typically consist of microcontrollers that take input from sensors and drive effectors to create human sensations as outputs. Sensors range from pressure, temperature, and kinesthetic sensing devices to biofeedback equipment. Haptic effectors, evoking precise perceivable sensations, range from small motors, fans, heating elements, and vibrators to micro-voltage electrodes that gently stimulate areas of the skin (creating subtle, localized "tingling" sensations). Some haptic devices are shown in Figure 1.

Compared with visual and audio displays, haptic displays are more difficult to build, since the haptic system not only senses the physical world but also affects it; by contrast, merely looking at or listening to an object does not change it. Within the world of haptic devices, tactile devices convey temperature, pressure, grasping, and the fine surface texture of an object, while force (kinesthetic) devices convey its general shape, coarse texture, and position. Tactile devices are considerably more difficult to build than kinesthetic ones. Owing to these technical difficulties, there is currently no single haptic device that combines the two sensations, even though they are inseparable in real life.

Tele-haptics can be defined as the use of haptics in a network context; it is the science of transmitting computer generated tactile sensations over networks, between physically distant users. Applications for haptics are broad, and tele-haptics in no way constrains that potential. For every single-user desktop application of haptics technology, a tele-haptics equivalent introduces, at the very least, a richer collaborative experience. A desktop modeling package can extend to collaborative modeling or design review; a dexterous task trainer can extend to remote teaching and assessment; a surgical simulator can be coupled directly to a surgical robot.

Much of the academic research and development on tele-haptics in recent years has attempted to ease the construction of virtual environments that provide increased immersion through multi-sensory feedback. Environments that support collaborative touch are termed Collaborative Haptic Audio Visual Environments (C-HAVE) (Figure 2), where participants may have different kinds of haptic devices, such as SensAble's PHANToM, the MPB Freedom6S Hand Controller, FCS Robotics' HapticMASTER, or Immersion's CyberGrasp; or they may just be passive observers. Some of the participants in the virtual world may only provide virtual objects as a service to the remaining users. Adding haptics to a conventional Collaborative Virtual Environment (CVE) creates additional demand for frequent position sampling, collision detection/response, and fast updates. It is also reasonable to assume that a CVE may contain a heterogeneous assortment of haptic devices with which users interact.

User scenarios for C-HAVE evolve from haptic VE applications. Collaboration may involve independent or dependent manipulation of virtual objects, or tele-mentoring. Independent manipulation allows multiple participants to haptically interact with separate objects, each "owned" by a single participant. Other participants may feel the owner's manipulation of an object, much as a virtual audience watches virtual actors in a conventional CVE, but the owner does not receive haptic feedback from those other participants. This is termed unilateral tele-haptics. Dependent manipulation introduces bilateral tele-haptics, whereby multiple participants haptically interact with identical or coupled objects. Here, each participant feels the others' manipulation of the environment, but that sensation is perceived indirectly through the environmental modification. Tele-mentoring allows direct coupling of haptic devices over a network; it is an educational technique that involves real-time guidance of a less experienced trainee through a procedure in which he or she has limited experience. A typical example is the education of surgeons, whereby a trainee can feel an expert's guiding hand. Independent manipulation of virtual environments involves augmentation of a client station and may be viewed as a simple integration of conventional CVEs with conventional hapto-visual VEs. Both application families are typically implemented on non-real-time or soft real-time operating systems. By contrast, dependent manipulation and tele-mentoring impose more stringent requirements and, like tele-robotics, demand hard real-time guarantees. Since tele-haptics is a networked extension of haptics, let us start with a more detailed look at haptics and its applications.

Haptics Applications

Haptics is the study of the simulation of the touch modality and the related sensory feedback. Haptics applications are often termed "hapto-visual" or "hapto-visual-acoustic" applications, in addition to the more generic "multi-sensory" applications.

Generally, there are two kinds of sensory information associated with human touch: kinesthetic and tactile. Kinesthesia is the human perception of the relative movements of body parts, determined by the positions and velocities of the limb links; the touched object can therefore be reconstructed in the mind from the positions of the corresponding links. Tactile sensation is the sense of touch that comes from sensitive nerve receptors on the surface of the skin, conveying information such as pressure and temperature. Haptic perception of touch involves both kinds of sensation.

Haptic Rendering

Haptic rendering is the process of converting haptic-interface spatial information (position/orientation) into net force and torque feedback (some haptic devices generate only force feedback). The two major tasks in haptic rendering are collision detection and collision response (Figure 3). Collision detection determines whether the end point of the generic probe has collided with objects in the scene, while collision response computes the forces reflected to the user in reaction to a detected collision.
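As a concrete illustration, one rendering step for a single-point probe against a rigid sphere can be sketched as follows. This is a minimal penalty-based sketch; the sphere geometry, stiffness value, and function name are illustrative choices, not taken from the article.

```python
import math

def render_force(probe_pos, sphere_center=(0.0, 0.0, 0.0),
                 sphere_radius=0.05, stiffness=800.0):
    """One haptic-rendering step for a point probe against a rigid
    sphere: collision detection (is the probe inside the sphere?)
    followed by collision response (a penalty force proportional to
    penetration depth, directed along the outward surface normal)."""
    d = [p - c for p, c in zip(probe_pos, sphere_center)]
    dist = math.sqrt(sum(di * di for di in d))
    penetration = sphere_radius - dist
    if penetration <= 0.0 or dist == 0.0:
        return (0.0, 0.0, 0.0)               # no collision: no force
    normal = tuple(di / dist for di in d)    # outward surface normal
    return tuple(stiffness * penetration * n for n in normal)
```

In a real system this computation runs inside the high-rate haptic loop, once per sample of the device's position.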

Stable Haptic Interaction

A haptic device links a human operator with a virtual environment, allowing the operator to feel objects in the virtual scene through the sense of touch; physical energy flows to and from the operator. Since the physical haptic device generates force feedback, instability of the system can damage the hardware or physically hurt the operator. For example, the HapticMASTER from FCS Robotics can easily generate a few hundred newtons of force in less than one second, which is enough to hurt a person seriously. Instability also destroys the illusion of a real object immediately and degrades transparency in the haptic simulation. A transparent haptic interface should be able to emulate any environment, from free space to infinitely stiff obstacles.

Many researchers have studied stability in haptic simulation in different ways. A haptic simulation has at least three components: the human operator, a haptic interface, and a computer model of the virtual environment. Figure 4 shows their relationship using the two-port network model; a star superscript indicates a discrete variable. Haptic simulation can typically be classified into two categories: impedance control and admittance control. Impedance control means "measure position and display force", while admittance control means "measure force and display motion or position". All three components affect the stability of the system.
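The two control styles can be sketched in a toy one-dimensional form as follows. The wall stiffness and virtual mass-damper parameters are illustrative assumptions, not values from the article.

```python
class AdmittanceController:
    """Admittance control: "measure force and display motion or position".
    The measured user force drives a simulated virtual mass-damper, and
    the resulting position is commanded to the device."""

    def __init__(self, mass=2.0, damping=10.0, dt=1e-3):
        self.m, self.b, self.dt = mass, damping, dt
        self.x = self.v = 0.0

    def step(self, f_measured):
        a = (f_measured - self.b * self.v) / self.m  # virtual dynamics
        self.v += a * self.dt
        self.x += self.v * self.dt
        return self.x                 # position command to the device


def impedance_step(x_measured, wall_x=0.0, K=500.0):
    """Impedance control: "measure position and display force".
    Returns a penalty force when the measured device position
    penetrates a virtual wall located at wall_x."""
    penetration = x_measured - wall_x
    return -K * penetration if penetration > 0.0 else 0.0
```

Impedance control suits low-inertia, back-drivable devices (e.g., the PHANToM), while admittance control suits stiff, high-force devices (e.g., the HapticMASTER).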

From the perspective of control theory, Minsky et al. derived an expression for guaranteed stable haptic interaction using impedance control theory and then refined it with experimental results. Impedance here describes a generalized relationship between the force and motion of the haptic device. They noted a critical tradeoff between sampling rate, virtual wall stiffness, and device viscosity, and analyzed the role of the human operator in stability. In their experiment, they simulated a virtual wall with stiffness K and viscosity B. The derived condition for guaranteed stability is given as follows:
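The equation itself is missing from this copy of the article. A stability bound of the standard form found in the sampled-data haptics literature, consistent with the discussion that follows (the device viscosity must dominate the effect of the sampled stiffness and the wall's own viscosity), is:

```latex
b > \frac{KT}{2} + B
```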

where T is the sampling time and b is the device viscosity. The results show that the rate at which the haptic spatial information is sampled and the force is output to the haptic device is the most crucial factor determining the highest stiffness that can be achieved while maintaining a stable system. The device viscosity also plays a role: the higher the device viscosity, the higher the stiffness that can be achieved. It is therefore possible to stabilize the system by adding extra viscosity or by reducing the stiffness of the simulated hard surface; but if the viscosity is too high, the user will feel resistance and sluggishness even in free space.
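This tradeoff can be demonstrated numerically. The sketch below uses illustrative parameter values and a bilateral virtual spring (force active on both sides, with B = 0) coupled to a mass-damper device model; the spring force is sampled and held constant over each period T, as in a real haptic loop. A stiffness well below roughly 2b/T settles, while one well above it diverges.

```python
def simulate_sampled_spring(K, b=2.0, m=0.1, T=1e-3, t_end=0.5, substeps=100):
    """Simulate a 1-DOF device (mass m, viscosity b) coupled to a
    virtual spring of stiffness K whose force is sampled every T
    seconds and held constant between samples (zero-order hold).
    Returns (final position, True if the motion stayed bounded)."""
    dt = T / substeps
    x, v = 0.005, 0.0                   # start 5 mm from equilibrium
    for _ in range(int(t_end / T)):
        F = -K * x                      # force computed once per sample
        for _ in range(substeps):       # device dynamics between samples
            v += dt * (F - b * v) / m   # semi-implicit Euler step
            x += dt * v
            if abs(x) > 1.0:            # runaway oscillation
                return x, False
    return x, True

# With b = 2 N*s/m and T = 1 ms, the bound K < 2*b/T gives ~4000 N/m:
x_lo, ok_lo = simulate_sampled_spring(K=1000)    # well under the bound
x_hi, ok_hi = simulate_sampled_spring(K=40000)   # well over the bound
```

Raising b or lowering K in this sketch reproduces the article's observation: extra viscosity or reduced stiffness restores stability.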

Architecture of Haptics Applications

Multiple tasks, such as haptic sensing/actuation and visual updates, must be accomplished in a synchronized manner in haptic applications. It has become commonplace to separate tasks into computational threads or processes to accommodate different update rates, distribute the computational load, and optimize computation. Conventionally, multithreading and multiprocessing software architectures (Figure 5(a)) are applied to develop effective multimodal VEs and make optimal use of CPU capabilities. However, an important but little-discussed consequence of these conventional architectures is that they make the operating system an inherent component of the application, with the operating system's scheduling algorithms limiting the application's quality of service. The application may request a theoretical rate of force display, but it is the scheduler that determines the actual rate. The scheduler is itself a complex algorithm, particularly when considered in terms of its interactions with the other services provided by the operating system.
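The conventional split can be sketched as two threads sharing state: a fast haptic loop and a slower graphics loop. This is an illustrative skeleton (names and rates are assumed); note that the loops only *request* their periods, and the OS scheduler decides what is actually delivered, which is exactly the quality-of-service limitation described above.

```python
import threading
import time

def run_loops(duration=0.2):
    """Run a ~1 kHz haptic loop and a ~60 Hz graphics loop in
    separate threads sharing a state dictionary, then report how
    many iterations of each the OS scheduler actually granted."""
    state = {"haptic_ticks": 0, "graphic_ticks": 0}
    lock = threading.Lock()
    stop = threading.Event()

    def haptic_loop(period=0.001):        # ~1 kHz requested
        while not stop.is_set():
            with lock:
                state["haptic_ticks"] += 1   # stands in for force update
            time.sleep(period)

    def graphics_loop(period=1 / 60):     # ~60 Hz requested
        while not stop.is_set():
            with lock:
                state["graphic_ticks"] += 1  # stands in for frame redraw
            time.sleep(period)

    threads = [threading.Thread(target=haptic_loop),
               threading.Thread(target=graphics_loop)]
    for t in threads:
        t.start()
    time.sleep(duration)
    stop.set()
    for t in threads:
        t.join()
    return state
```

On a general-purpose OS, the achieved haptic rate will typically fall short of the requested 1 kHz, illustrating why the hard real-time architectures described next exist.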

A multi-machine solution for haptic applications has also been proposed, as shown in Figure 5(b). The multi-machine architecture comprises three parts: the haptic device, a Haptic Real Time Controller (HRTC), and a Virtual Environment (VE) graphics station. The HRTC communicates with its VE station through a local Ethernet connection and relies on a hard real-time operating system (e.g., QNX Neutrino, VxWorks, or Windows CE) to guarantee the stability of the control loop. The separation of haptic and graphic rendering makes this architecture easier to add to existing applications. Unlike conventional multithreading or multiprocessing approaches to haptics, this multi-machine solution uses a hard real-time operating system for haptic control while running the application and graphics on a mainstream OS such as Windows 2000 or Windows XP.

Tele-Haptics – Networked Haptics

The concept of networking haptics is referred to as "tele-haptics" and arises when haptic devices at remote locations are connected over a communications network. Sometimes referred to as e-touch, tele-haptics is based on the bilateral transmission of spatial information such that either end of the communication can both sense and impart forces. One of the stumbling blocks to enabling real-time control of haptic devices over a network is network lag: delay, jitter, and packet loss. Haptics can actually make time-delay issues worse by causing instability; in other words, the mechanical devices used to impart the sense of touch could vibrate and become dangerous. For example, with Internet time delays as small as 100 ms, on-line simulated-training participants can find themselves in essentially two different scenarios: where transcontinental latency exists, the two participants' computers fall out of step because of the network time delays. If this were used for mission-critical simulations for pilots, for example, such effects would render the training process useless.

There are, however, solutions that mitigate network lag. These solutions, typically implemented above the transport layer (UDP), consist of communication techniques that quickly resynchronize both ends in the presence of packet loss, or that reduce jitter through short-term buffering. Another type of solution, which approaches the problem from a human-computer-interaction rather than a networking point of view, uses "decorators": visual cues that inform the user of the presence of network lag, allowing him or her to adjust intuitively to the situation. Another long-standing concept is "dead reckoning". It has proven somewhat useful in the past, but it is very susceptible to noise and becomes ineffective as the time delay grows. Given that tele-haptics is very sensitive to time delay, more advanced techniques are needed for tele-haptic applications to operate in real time, such as the Handshake method of time-delay compensation. This method uses advanced intelligent predictive modeling for real-time human-in-the-loop interaction; it effectively overcomes network instability and provides superior tracking performance compared with dead-reckoning techniques, allowing a user to interact with his or her environment in real time.
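First-order dead reckoning is simple enough to state in a few lines. The sketch below extrapolates the last reported remote position across the network delay, and measures the resulting error against an assumed 1 Hz sinusoidal remote motion (the motion model and function names are illustrative, not from the article).

```python
import math

def dead_reckon(last_pos, last_vel, delay):
    """First-order dead reckoning: extrapolate the last reported
    remote position forward across the network delay."""
    return last_pos + last_vel * delay

def prediction_error(delay, t=0.3):
    """Prediction error against an assumed 1 Hz sinusoidal remote
    motion, evaluated at time t (purely illustrative)."""
    pos = math.sin(2 * math.pi * t)                 # last reported position
    vel = 2 * math.pi * math.cos(2 * math.pi * t)   # last reported velocity
    true_future = math.sin(2 * math.pi * (t + delay))
    return abs(dead_reckon(pos, vel, delay) - true_future)
```

The error grows with delay, and in practice is further amplified by noisy velocity estimates, which is why dead reckoning degrades as latency increases.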

Generic Architecture for C-HAVE

Tele-haptic platforms rely on hard real-time operating systems to guarantee the stability of the control loop and to minimize delays in the network stack. As described below, the Haptic Real Time Controller compensates for network latency, so increased or unreliable latency in the host operating system's network stack will either increase the complexity of the latency-compensation algorithms or decrease the effective separation achievable between tele-haptic collaborators.

A node in a C-HAVE environment with a haptic device comprises three parts: the haptic device, the HRTC, and the Virtual Environment (VE) graphics station, as illustrated in Figure 6. The HRTC controls the haptic device through a device driver. A local HRTC is linked with a remote HRTC through a haptic channel (1 in Figure 6) and communicates with its VE station through a local channel (2 in Figure 6). The VE graphics stations are interconnected over CVE middleware such as HLA/RTI (3 in Figure 6).

Tele-Haptics Platform – Haptic Real Time Controller

The Haptic Real Time Controller (HRTC), shown in Figure 7, provides a modular and flexible platform in a C-HAVE system that allows a haptic or application device at one node of the network to be controlled in real time by a haptic device or application hardware at another node; this is referred to as client/server mode. The platform is not limited to a client-server configuration, however, and is modular enough to support multiple clients or other network topologies.

To enable this technology, the clocks on both nodes must be closely synchronized, the sampling periods must be precise, and the nodes must be able to perform complex control computations so that the desired performance can be achieved at the client end. An important component of the HRTC is its use of hardware/software solutions to achieve this synchronization and timing precision. To enable real-time control and data exchange, the HRTC software runs on a modular and robust real-time operating system.
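For illustration, the basic round-trip offset estimate used by NTP-style protocols is sketched below. This is one common approach to clock synchronization, not necessarily the HRTC's actual mechanism, and it assumes the network delay is symmetric.

```python
def estimate_offset(t_client_send, t_server, t_client_recv):
    """NTP-style clock-offset estimate from one request/reply
    exchange. The server timestamps the request at t_server on its
    own clock; the other two timestamps are on the client's clock.
    Assumes the one-way delays are equal in both directions."""
    rtt = t_client_recv - t_client_send
    return t_server - (t_client_send + rtt / 2.0)

# Example: server clock 5 s ahead, one-way delay 0.5 s each way.
offset = estimate_offset(100.0, 105.5, 101.0)   # estimated offset: 5.0 s
```

Averaging such estimates over many exchanges, and discarding those with large round-trip times, tightens the synchronization further.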

Time-varying network delay degrades the performance of many network applications and can cause instability in applications involving bilateral control of haptic devices. The HRTC includes sophisticated algorithms to compensate for these time-varying delays; in essence, if the compensation were perfect, the delays would be transparent to the system. The HRTC allows the compensation to vary from full delay compensation, which gives higher performance but may introduce more noise and overshoot, through partial compensation, to no compensation, which results in low performance and instability. The time-delay compensation in the HRTC does not require a mathematical model of the environment and is robust to the uncertainties introduced by the human in the loop.

