In noisy environments, underwater, or in space, silent speech recognition is an effective input method; one day it may be used by pilots, firefighters, special police, and troops on special missions. Researchers have also tried to use silent speech recognition systems to control electric wheelchairs. For people with speech impairments, silent speech recognition combined with speech synthesis can help them communicate with the outside world. Once the technology matures, people may no longer have to type on a keyboard when chatting online.
NASA's Ames Research Center is developing a silent speech recognition system. The researchers say that when a person reads silently or whispers, corresponding biological signals are generated regardless of the actual lip and facial movements. Their recognition system uses button-sized sensors fixed on either side of the chin and on the Adam's apple, which capture the instructions the brain sends to the vocal organs and "read" these signals. The system will eventually be integrated into astronauts' extravehicular spacesuits, allowing astronauts to send silent instructions to instruments or robots. Chuck Jorgensen, the project's chief scientist, said that silent speech recognition technology will reach commercial applications within a few years.

The basic working principle of eye tracking is image processing: a special camera locked onto the eyes continuously records changes in the line of sight, tracks gaze frequency and gaze duration, and analyzes the person being tracked on the basis of this information.
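To make the gaze analysis above concrete, the following Python sketch shows a minimal dispersion-threshold method for turning raw gaze samples into fixations, from which gaze frequency and duration can be counted. The sample format and the threshold values are illustrative assumptions, not details of any particular eye tracker.

```python
# Minimal sketch of dispersion-threshold fixation detection.
# Assumed input: gaze samples as (timestamp_s, x_px, y_px) tuples from
# a hypothetical eye tracker; the thresholds below are illustrative only.

def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """Group consecutive gaze samples into fixations.

    A window of samples counts as a fixation when its spatial spread
    (width + height of the bounding box) stays under max_dispersion
    pixels for at least min_duration seconds.
    """
    fixations = []
    i = 0
    while i < len(samples):
        j = i
        # Grow the window while the samples stay tightly clustered.
        while j < len(samples):
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        duration = samples[j - 1][0] - samples[i][0]
        if duration >= min_duration:
            # Record the fixation centre and how long the gaze rested there.
            fixations.append({
                "x": sum(s[1] for s in samples[i:j]) / (j - i),
                "y": sum(s[2] for s in samples[i:j]) / (j - i),
                "duration": duration,
            })
            i = j
        else:
            i += 1
    return fixations
```

Gaze frequency for a screen region is then simply the number of fixations whose centres fall inside it, and total dwell time is the sum of their durations.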
More and more portals and advertisers have begun to pursue eye tracking technology. From the tracking results they can learn about users' browsing habits and lay out web pages more sensibly, especially the placement of advertisements, so as to achieve a better delivery effect. The remote eye tracker developed by the German company Eye Square can be placed in front of a computer screen or embedded in it; with the help of infrared technology and pattern recognition software, it records shifts in the user's gaze. Eye trackers have been used in testing advertisements, websites, product catalogs, and magazine usability, as well as in simulation research.
Because eye tracking can replace keyboard input and mouse movement, scientists have developed special computers for the disabled: users can select emails or commands simply by fixing their gaze on specific areas of the screen. Future wearable computers may also use eye tracking to make input operations more convenient.
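As a rough illustration of how gaze can stand in for the mouse, the sketch below maps fixations (such as those produced by the earlier snippet) onto named regions of the screen and fires a command once the gaze has dwelt on one region long enough. The region layout, dwell threshold, and command names are all hypothetical.

```python
# Hypothetical dwell-based selection: the screen is divided into named
# rectangles, and a command fires when the accumulated fixation time inside
# one rectangle exceeds a dwell threshold (all values are illustrative).

REGIONS = {
    "open_mail": (0, 0, 400, 300),       # left, top, right, bottom in pixels
    "reply":     (400, 0, 800, 300),
    "scroll":    (0, 300, 800, 600),
}

def select_by_dwell(fixations, dwell_threshold=0.8):
    """Return the first region whose accumulated gaze time passes the threshold."""
    dwell = {name: 0.0 for name in REGIONS}
    for fix in fixations:
        for name, (left, top, right, bottom) in REGIONS.items():
            if left <= fix["x"] < right and top <= fix["y"] < bottom:
                dwell[name] += fix["duration"]
                if dwell[name] >= dwell_threshold:
                    return name          # e.g. trigger the "open_mail" action
    return None
```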
Through electrical stimulation that reproduces touch, the blind can "see" the world around them. The British Ministry of Defence has introduced an advanced instrument called BrainPort, which helps blind people obtain environmental information through their tongues.
BrainPort consists of a pair of glasses fitted with a camera, a "lollipop"-shaped plastic sensor connected by a thin wire, and a controller the size of a mobile phone. The controller converts the black-and-white image into electronic pulses and sends them to the sensor held in the blind user's mouth. Electrodes on the sensor deliver the pulse signals to nerves on the surface of the tongue, and the stimulation is relayed to the brain, which converts it into a low-resolution image, so that the blind user can clearly "see" the outlines and shapes of objects. Craig Lundberg, a British blind soldier who was among the first to try the device, can now walk independently and read without outside assistance, and he has also become a member of England's national blind football team.
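The conversion step described above, from camera image to a coarse pattern of tongue stimulation, can be sketched schematically as follows. This is only an illustration under assumed parameters, a 20x20 electrode grid and a linear mapping from brightness to pulse intensity; it is not the actual BrainPort signal encoding, which is not described in that level of detail here.

```python
import numpy as np

# Schematic of the camera-to-tongue conversion: a grayscale frame is
# downsampled to a coarse electrode grid and each cell's brightness is
# mapped to a pulse drive level. Grid size and scaling are assumptions.

GRID = 20  # assumed 20x20 electrode array on the "lollipop" sensor

def frame_to_pulses(frame: np.ndarray, max_intensity: int = 255) -> np.ndarray:
    """Downsample a grayscale frame (H x W, values 0-255) to GRID x GRID pulse levels."""
    h, w = frame.shape
    bh, bw = h // GRID, w // GRID
    # Average brightness of each block becomes that electrode's drive level.
    blocks = frame[:bh * GRID, :bw * GRID].reshape(GRID, bh, GRID, bw)
    pulses = blocks.mean(axis=(1, 3)) / 255.0 * max_intensity
    return pulses.astype(np.uint8)

# Example: a synthetic 200x200 frame with a bright square in the middle.
frame = np.zeros((200, 200), dtype=np.uint8)
frame[80:120, 80:120] = 255
print(frame_to_pulses(frame).max())  # central electrodes receive the strongest pulses
```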
In theory, fingertips or other parts of the body could achieve tactile reproduction just as the tongue does, and as the technology develops, the clarity of the images perceived by the brain will improve greatly. In the future, pulse signals derived from imagery outside the visible spectrum could be used to stimulate the brain into forming images, opening up novel possibilities such as scuba diving equipment for water with extremely low visibility.

For decades, contact lenses have been used as a tool for correcting vision. Scientists now hope to integrate circuits into the lenses to create more powerful "super contact lenses" that could give wearers super vision capable of magnifying distant objects, display holographic and other stereoscopic images, and even replace computer screens, letting people enjoy wireless Internet access at any time.
Scientists in the Department of Electrical Engineering at the University of Washington used self-assembly techniques to build microcircuits from nanoscale metal powder components on a polymer lens, successfully combining electronic circuits with a contact lens. Babak Parviz, the project lead, said that the bionic contact lens uses augmented reality technology to superimpose virtual images on the real scenes people see, which could completely change the way people interact with one another and with their surroundings. Once the final design succeeds, it could magnify distant objects, let game players enter a virtual "game world" as if they were actually there, and allow users to surf the Internet wirelessly on a "virtual screen" that only they can see.
Because this kind of contact lens stays in contact with the body's fluids, it could also serve as a non-invasive health monitor, for example tracking the blood glucose levels of diabetic patients. Parviz predicts that similar monitoring devices may appear within 5 to 10 years.

Human-computer interaction technology refers to technology that realizes an effective dialogue between people and computers through computer input and output devices.
A brain-machine interface, also known as a "brain-computer interface", is a direct connection pathway between the brain of a human or animal (or a culture of brain cells) and an external device. Even without speech or movement, the brain's thoughts and intentions can be conveyed to the outside world through this pathway.
Brain-machine interfaces are divided into non-invasive and invasive types. In a non-invasive interface, brain waves are read by external means; for example, electrodes placed on the scalp can pick up brain-wave activity. In earlier EEG recording, the electrodes had to be carefully fixed with conductive gel to obtain accurate readings, but with improvements in the technology, useful signals can now be picked up even when the electrodes are not positioned so precisely. Other non-invasive interfaces include magnetoencephalography and functional magnetic resonance imaging.
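As a toy illustration of what "reading brain waves from the scalp" involves computationally, the sketch below band-pass filters a single EEG channel and measures its power in the alpha band (8-12 Hz), a common first step in non-invasive interfaces. The sampling rate, band edges, and use of SciPy are assumptions made for the example, not details of any system mentioned above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Toy example: estimate alpha-band (8-12 Hz) power in one EEG channel.
# Sampling rate and band edges are illustrative assumptions.

FS = 256  # samples per second

def band_power(eeg: np.ndarray, low: float = 8.0, high: float = 12.0) -> float:
    """Band-pass filter the signal and return its mean power in the band."""
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)
    return float(np.mean(filtered ** 2))

# Synthetic one-second channel: a 10 Hz rhythm plus noise.
t = np.arange(FS) / FS
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(FS)
print(band_power(eeg))  # larger when the 10 Hz rhythm is present
```

Features like this, taken from many channels and frequency bands, are what a classifier would turn into commands in the applications described next.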
To help people with speech and movement disorders, researchers in the United States, Spain, and Japan have recently developed mind-controlled wheelchairs. These devices use external sensors to pick up the neural signals sent by the patient's brain and pass the encoded signals to a computer, which analyzes them to synthesize speech or drive a menu-like control interface that "translates" the patient's needs, so that the wheelchair can act on those needs and give patients far more freedom to do what they intend.
Adam Wilson, a doctoral student in biomedical engineering at the University of Wisconsin-Madison, put on a brain-reading helmet of his own design and thought of a sentence: "Scan it with brain waves and send it to Twitter." The sentence then appeared in his Twitter feed. Owing to technical limitations, the device can only input about 10 letters per minute, but it shows considerable promise. Patients with locked-in syndrome (conscious and able to understand language, but often mistaken for being in a coma because they cannot move) and quadriplegia may one day rely on the brain alone to "write" words and steer wheelchairs, recovering some lost function.
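Brain-typing demonstrations like this are typically built on a letter "speller" paradigm. The sketch below shows one common variant under simplifying assumptions: letters sit in a grid, rows and columns are flashed in turn, and the row and column whose flashes evoke the strongest averaged brain response identify the intended letter. The grid, the scores, and the scoring itself are invented for the illustration and are not the actual setup Wilson used.

```python
import numpy as np

# Simplified row/column speller: letters sit in a 6x6 grid, and a
# (hypothetical) score per row and column stands in for the brain's
# averaged response to its flashes. All details are assumed.

GRID = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                 list("STUVWX"), list("YZ1234"), list("56789_")])

def pick_letter(row_scores: np.ndarray, col_scores: np.ndarray) -> str:
    """Choose the letter at the intersection of the strongest row and column."""
    return GRID[int(np.argmax(row_scores)), int(np.argmax(col_scores))]

# Example: responses strongest for row 1 and column 2, which points at "I".
row_scores = np.array([0.1, 0.9, 0.2, 0.1, 0.3, 0.2])
col_scores = np.array([0.2, 0.1, 0.8, 0.3, 0.1, 0.2])
print(pick_letter(row_scores, col_scores))  # -> "I"
```

Because each letter requires many flashes to build up a reliable response, throughputs on the order of 10 letters per minute are unsurprising.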
The electrodes of an invasive brain-machine interface are connected directly to the brain. So far, the use of invasive interfaces in humans has been limited to repairing the nervous system: appropriate stimulation can help an injured brain recover some functions, as in retinal implants that restore light perception or motor-neuron prostheses that restore or assist movement. Scientists have also implanted chips in the brains of completely paralyzed patients, who then successfully used their brain waves to control a computer and draw simple patterns.
The University of Pittsburgh has made a major breakthrough in prosthetic limbs controlled directly by the brain. Researchers implanted a hair-thin microchip in the motor cortex of two monkeys and linked it wirelessly to a mechanical prosthesis shaped like an adult arm. The pulse signals from nerve cells sensed by the chip are received and analyzed by a computer and finally converted into motion of the robotic arm. Tests showed the system to be effective: by thought alone, the monkeys could make the arm grasp, turn, and retrieve objects, moving it freely to feed themselves.
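The decoding step, turning recorded spike activity into arm motion, is often approximated by a linear mapping from the firing rates of many neurons to hand velocity. The sketch below fits and applies such a mapping with ordinary least squares on synthetic data; it is a generic illustration of the idea, not the decoder actually used in the Pittsburgh experiments.

```python
import numpy as np

# Generic linear-decoder sketch: firing rates of N neurons -> 2-D hand velocity.
# All data here are synthetic; real experiments use more elaborate decoders.

rng = np.random.default_rng(0)
n_samples, n_neurons = 500, 30

true_weights = rng.normal(size=(n_neurons, 2))           # unknown "tuning" of each cell
rates = rng.poisson(5.0, size=(n_samples, n_neurons))    # spike counts per time bin
velocity = rates @ true_weights + rng.normal(0, 1.0, size=(n_samples, 2))

# Fit the decoder on the first 400 bins (least squares), test on the rest.
W, *_ = np.linalg.lstsq(rates[:400], velocity[:400], rcond=None)
predicted = rates[400:] @ W

# Correlation between decoded and actual velocity on held-out bins.
corr = np.corrcoef(predicted[:, 0], velocity[400:, 0])[0, 1]
print(f"held-out correlation (x velocity): {corr:.2f}")
```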
Beyond the medical field, there are many striking potential applications of brain-machine interfaces. For example, a home automation system could adjust the room temperature to the people in the room; when they fall asleep, the bedroom lights would dim or switch off; and if someone suffered a stroke or other sudden illness, the system would immediately call nursing staff for help.
So far, most brain-machine interfaces work in the "input" direction: people control external machines or equipment with their thoughts. Going the other way, having the human brain receive external signals that form sensations, language, or even thoughts, still faces technical challenges.
However, some applications in nervous-system repair, such as cochlear implants and artificial vision systems, may open up a new path: one day scientists may be able to make the brain produce sounds, images, and even thoughts by connecting to our sensory organs. At the same time, as the mechanical devices connected to the human nervous system become more sophisticated, more widespread, and increasingly capable of remote wireless control, security experts worry about "hackers invading the brain".

The origins of human-computer interaction can be traced back to Britain in 1764. With the growth of the commodity economy, more and more goods needed to be sold to overseas markets, and people tried every means to improve production techniques so as to raise efficiency and output. It was then that the weaver James Hargreaves invented the spinning jenny, a hand-powered spinning machine that revolutionized the traditional mode of industrial production. The new machine could spin more than one cotton thread at a time; its spinning capacity was eight times that of the old spinning wheel, production efficiency rose rapidly, and large-scale weaving factories were established. The invention not only marked the beginning of the first Industrial Revolution but can also be seen as the origin of human-computer interaction, showing that people first began to pay attention to and think about the relationship between humans and machines in the field of industrial production.
In 1808, the Italian Pellegrino Turri invented the world's first mechanical typewriter, but it was William Burt of Michigan who really made typewriter history and obtained a patent: his "typographer", built in 1828, made the modern keyboard possible. The American journalist C. Sholes invented the QWERTY keyboard in 1868. As information technology developed, information processing grew ever more demanding, and the keyboard alone could no longer meet people's needs. In 1964 the mouse, invented by the American Doug Engelbart, let people feel the charm of free interaction for the first time: with a mouse, users could click anywhere on the screen at will, which markedly improved the experience and the efficiency of data handling. But satisfaction is always short-lived, and ever-escalating "desire" breeds higher demands. Out of this came a device that made the graphical user interface more intuitive and easier to use: the world's first touch sensor, invented by the American Sam Hurst in 1971, which brought human-computer interaction into the new era of the touch screen.
Soon, interaction that relied on one-way mechanical input could no longer meet human needs. Voice interaction, represented by Apple's Siri, became a new direction of demand and a research hotspot thanks to its lower effort and learning cost. Following Siri, Google Now builds on Google search by recording the keywords users search for and, by reading them intelligently, offering relevant voice services. This upgrades the machine from "passively" answering users' questions to "actively" reminding users of their needs, a service-oriented mode of interaction in which the machine serves the human. Both Siri and Google Now give machines a measure of action based on "independent thinking", opening a new era of two-way human-computer interaction. Somatosensory interaction, represented by the use of Kinect technology in games, has further expanded the scope of human-computer interaction.