What is DSP technology?

Digital Signal Processing (DSP) refers to the application of digital signal processing theory on a special-purpose integrated circuit chip: a target program runs on the chip to carry out a particular kind of signal processing.

Digital Signal Processing (DSP) is one of the most powerful technologies that will shape science and engineering in the 21st century. It has revolutionized a wide range of fields: communications, medical imaging, radar and sonar, high-fidelity music reproduction, and crude oil exploration, to name only a few. In each of these areas, DSP technology has been developed to considerable depth, with its own algorithms, mathematics, and specialized techniques. This combination of breadth and depth makes it impossible for anyone to become proficient in all of the DSP techniques that have been developed. A DSP education therefore consists of two tasks: learning general concepts that apply to the field as a whole, and learning the specialized techniques of your particular area of interest. This chapter begins our journey into the world of Digital Signal Processing by describing the dramatic effect that DSP has had in several different areas. The revolution has begun.

The Roots of DSP

Digital signal processing is distinguished from other areas of computer science by the unique type of data it uses: signals. In most cases, these signals originate as sensory data from the real world: seismic vibrations, visual images, sound waves, and so on. After such signals have been converted to digital form, DSP is the mathematics, the algorithms, and the techniques used to process them. This encompasses a wide variety of goals, such as enhancing visual images, recognizing and generating speech, and compressing data for storage and transmission. Suppose we attach an analog-to-digital converter to a computer and use it to capture a chunk of real-world data. DSP answers the question: what next?

The origins of DSP lie in the 1960s and 1970s, when digital computers first became available. Computers were expensive in this era, and DSP was limited to a few critical applications. Pioneering efforts were made in four key areas: radar and sonar, where national security was at risk; crude oil exploration, where large amounts of money could be made; space exploration, where the data are irreplaceable; and medical imaging, where lives could be saved. The personal computer revolution of the 1980s and 1990s caused a sudden surge of new DSP applications. The motivation was no longer military or governmental; DSP was suddenly driven by the commercial marketplace. Anyone who thought they could make money in the rapidly expanding field became a DSP vendor, and DSP reached the public in such products as cellular telephones, compact disc players, and electronic voice mail. Figure 1-1 lists some of these applications.

This technological revolution occurred from the top down. In the early 1980s, DSP was taught in electrical engineering as a graduate-level course. A decade later, DSP had become a standard part of the undergraduate curriculum. Today, DSP is a basic skill needed by scientists and engineers in many fields. As an analogy, DSP can be compared to the previous technological revolution: electronics. While circuit design remains the domain of electrical engineering, nearly every scientist and engineer has some background in it. Without it, they would be lost in the world of technology. DSP has the same future.

Figure 1-1

DSPs have revolutionized many areas of science and engineering. Some of the diverse applications are listed here.

The recent history is even more intriguing, and it has a great impact on your ability to learn and use DSP. Suppose you have a DSP problem and turn to a textbook or other publication for a solution. What you typically find is page after page of equations, obscure mathematical symbols, and unfamiliar terminology. It's a nightmare! Much of the DSP literature is baffling, even to those experienced in the field. This is not a fault of the literature; it is simply aimed at a very specialized audience. Researchers pushing the technology forward need this kind of detailed mathematics to understand the theoretical implications of their work.

The underlying assumption of this book is that most practical DSP techniques can be learned and used without the traditional barriers of complex math and theory. The Scientist and Engineer's Guide to Digital Signal Processing is written for those who want to use DSP as a tool, not as a new career.

The remainder of this chapter lists a number of areas where DSP has revolutionized the field. As you look at each application, notice that DSP is very interdisciplinary, relying on technical work in many neighboring fields. As Figure 1-2 suggests, the boundaries between DSP and other technical disciplines are not clear or well-defined, but rather fuzzy and overlapping. If you want to specialize in DSP, you'll need to read up on some of the related fields as well.

Figure 1-2

Digital signal processing has fuzzy and overlapping boundaries in many areas of science, engineering, and math.

Telecommunications

Telecommunications is concerned with transmitting information from one location to another. This includes many kinds of information: telephone conversations, television signals, computer files, and other types of data. To transmit information, you need a channel between the two locations. This may be a pair of wires, a radio broadcast signal, an optical fiber, and so on. Telecommunications companies are paid for transmitting their customers' information, but they must pay to establish and maintain the channel. The financial bottom line is simple: the more information they can push through a single channel, the more money they make. DSP has revolutionized the telecommunications industry in many areas: signaling-tone generation and detection, frequency band shifting, filtering to remove power line hum, and so on. Three specific examples from the telephone network will be discussed here: multiplexing, compression, and echo control.

Multiplexing

There are about a billion telephones in the world. At the press of a few buttons, switching networks allow any one of them to be connected to any other in only a few seconds. The immensity of this task is mind-boggling. Until the 1960s, a connection between two telephones required passing analog voice signals through mechanical switches and amplifiers; one connection required one pair of wires. In contrast, DSP converts audio signals into a stream of serial digital data. Since bits can easily be interleaved and later separated, many telephone conversations can be transmitted on a single channel. For example, the telephone standard known as the T-carrier system transmits 24 voice signals simultaneously. Each voice signal is sampled 8000 times per second using 8-bit companded (logarithmically compressed) analog-to-digital conversion. This results in each voice signal being represented as 64,000 bits/sec, and all 24 channels being contained within 1.544 megabits/sec. This signal can be transmitted about 6,000 feet over ordinary 22-gauge copper telephone wire, a typical interconnection distance. The financial advantage of digital transmission is enormous: wires and analog switches are expensive; digital logic gates are cheap.
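The arithmetic and the interleaving idea above can be sketched in a few lines. This is a toy illustration of T-carrier-style time-division multiplexing, not an implementation of the actual T1 framing; the function names and the single framing bit per frame are simplifying assumptions.

```python
# Parameters stated in the text: 24 channels, 8000 samples/sec, 8 bits/sample.
CHANNELS = 24
SAMPLES_PER_SEC = 8000
BITS_PER_SAMPLE = 8

# Each voice channel: 8000 * 8 = 64,000 bits/sec.
bits_per_channel = SAMPLES_PER_SEC * BITS_PER_SAMPLE
# One frame holds one sample per channel plus one framing bit (193 bits);
# 193 bits * 8000 frames/sec = 1,544,000 bits/sec, the 1.544 Mbit/s rate.
total_rate = (CHANNELS * BITS_PER_SAMPLE + 1) * SAMPLES_PER_SEC

def interleave(channels):
    """Merge per-channel sample lists into one stream, round-robin."""
    return [sample for frame in zip(*channels) for sample in frame]

def deinterleave(stream, n_channels):
    """Recover the per-channel sample lists from the interleaved stream."""
    return [stream[i::n_channels] for i in range(n_channels)]

print(bits_per_channel, total_rate)  # 64000 1544000
```

Because the samples are simply taken in turn, deinterleaving with the same channel count recovers every conversation exactly, which is why "bits can easily be interleaved and later separated."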

Compression

When a voice signal is digitized at 8000 samples/sec, most of the digital information is redundant. That is, the information carried by any one sample is largely duplicated by its neighboring samples. Dozens of DSP algorithms have been developed to convert digitized voice signals into data streams that require fewer bits/sec. These are called data compression algorithms; matching uncompression algorithms restore the signal to its original form. These algorithms vary in how much compression they achieve and in the resulting sound quality. In general, the data rate can be reduced from 64 kilobits/sec to 32 kilobits/sec with no loss of sound quality. When compressed to 8 kilobits/sec, the sound is noticeably degraded but still usable for long-distance telephone networks. The highest achievable compression is about 2 kilobits/sec, which produces highly distorted sound that is still usable for some applications, such as military and undersea communications.
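The redundancy argument can be made concrete with a toy delta coder. This is not any actual telephone codec, only an illustration of the principle: because neighboring samples are similar, the differences between them are small numbers that need fewer bits than the samples themselves.

```python
def delta_encode(samples):
    """Replace each sample with its difference from the previous sample."""
    encoded, prev = [], 0
    for s in samples:
        encoded.append(s - prev)
        prev = s
    return encoded

def delta_decode(deltas):
    """Invert delta_encode, recovering the original samples exactly."""
    samples, prev = [], 0
    for d in deltas:
        prev += d
        samples.append(prev)
    return samples

# A slowly varying stand-in for a voice waveform: the deltas are far
# smaller than the samples, so they would fit in fewer bits.
signal = [int(1000 * i / (i + 50)) for i in range(100)]
deltas = delta_encode(signal)
assert delta_decode(deltas) == signal
```

Real voice compressors (ADPCM and the like) build on exactly this observation, predicting each sample from its neighbors and transmitting only the small prediction error.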

Echo control

Echo is a serious problem in long-distance telephone connections. When you speak into a telephone, a signal representing your voice travels to the receiver at the other end, and part of that signal returns as an echo. If the connection is within a few hundred miles, the echo arrives after only a few milliseconds. The human ear is accustomed to echoes with such small delays, and the connection sounds quite normal. As the distance grows, the echo becomes increasingly noticeable and irritating. For intercontinental connections, the delay can be several hundred milliseconds, which is especially objectionable. Digital signal processing attacks this problem by measuring the returned signal and generating an appropriate antisignal to cancel the offending echo. The same technique allows speakerphone users to listen and speak at the same time without fighting audio feedback (squealing). It can also be used to reduce environmental noise by digitally generating antinoise to cancel it out.
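A minimal sketch of how such an echo canceller can work. The chapter names the goal, not an algorithm, so treat the details as assumptions: here an LMS adaptive filter learns the echo path from the outgoing signal and subtracts its own echo estimate from the return signal.

```python
import random

def lms_echo_cancel(far_end, returned, n_taps=4, mu=0.05):
    """far_end: outgoing speech samples; returned: signal containing the echo.
    Returns the residual after the adaptive echo estimate is subtracted."""
    weights = [0.0] * n_taps
    history = [0.0] * n_taps
    residual = []
    for x, d in zip(far_end, returned):
        history = [x] + history[:-1]                 # newest sample first
        estimate = sum(w * h for w, h in zip(weights, history))
        error = d - estimate                         # what the listener hears
        # Nudge each weight toward reducing the error (the LMS update).
        weights = [w + mu * error * h for w, h in zip(weights, history)]
        residual.append(error)
    return residual

rng = random.Random(1)
speech = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
# A made-up echo path: half-strength copy delayed by one sample.
echo = [0.0] + [0.5 * s for s in speech[:-1]]
residual = lms_echo_cancel(speech, echo)
```

After the filter converges, the residual energy falls far below the original echo energy, which is the "antisignal" cancellation described above.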

Audio Processing

The two principal human senses are vision and hearing. Correspondingly, much of DSP is devoted to image and audio processing. People listen to both music and speech, and DSP has revolutionized both areas.

Music

The path from a musician's microphone to the speakers of a high-fidelity listener is remarkably long. Digital data representation is important because it prevents the degradation generally associated with analog storage and processing. This is very familiar to anyone who has compared the musical quality of cassette tapes with compact discs. In a typical scenario, a piece of music is recorded in a sound studio on multiple channels or tracks. In some cases, this even involves recording individual instruments and singers separately. This is done to give the sound engineer greater flexibility in creating the final product. The complex process of combining the individual tracks into a final product is called mixing down, and DSP can provide several important functions during mix down, including filtering, signal addition and subtraction, signal editing, and so on.

[Translator's note: mixing down means combining X channels of audio into Y channels, where X is greater than Y. For example, a DVD may carry 5.1-channel sound, but headphones have only two channels, so the audio must be mixed down to 2 channels. Thanks to Jedi for the explanation.]

One of the most interesting DSP applications in music preparation is artificial reverberation. If the individual channels are simply added together, the resulting piece sounds frail and diluted, much as if the musicians were playing outdoors. This is because listeners are greatly influenced by the echo, or reverberation, content of the music, which is usually minimized in the studio. DSP allows artificial echoes and reverberation to be added during mix down to simulate various ideal listening environments. Echoes with delays of a few hundred milliseconds give the impression of a cathedral-like location; adding echoes with delays of 10-20 milliseconds gives the impression of a more modestly sized listening room.
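The simplest form of this effect is a single delayed, attenuated copy of the signal mixed back in. The sketch below assumes an 8000 samples/sec rate for illustration; real reverberators combine many such delays with feedback.

```python
SAMPLE_RATE = 8000  # samples per second (assumed for illustration)

def add_echo(signal, delay_ms, gain):
    """Mix in an attenuated copy of the signal delayed by delay_ms."""
    delay = int(SAMPLE_RATE * delay_ms / 1000)
    out = list(signal)
    for i in range(delay, len(signal)):
        out[i] += gain * signal[i - delay]
    return out

# 10-20 ms of delay suggests a modest room; a few hundred ms, a cathedral.
dry = [1.0] + [0.0] * 200          # a single click, for clarity
room = add_echo(dry, delay_ms=15, gain=0.4)
print(room[120])  # 0.4 -- the echo appears 15 ms (120 samples) later
```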

Speech generation

Speech generation and recognition are used to communicate between humans and machines: rather than using your hands and eyes, you use your mouth and ears. This is very convenient when your hands and eyes should be doing something else, such as driving a car, performing surgery, or (unfortunately) firing a weapon at the enemy. Two approaches are used for computer-generated speech: digital recording and vocal tract simulation. In digital recording, a human voice is digitized and stored, usually in compressed form. During playback, the stored data are uncompressed and converted back into an analog signal. An entire hour of recorded speech requires only about three megabytes of storage, well within the capabilities of even small computer systems. This is the most common method of digital speech generation used today.

Vocal tract simulation is more complicated; it attempts to mimic the physical mechanisms by which humans create speech. The human vocal tract is an acoustic cavity with resonant frequencies determined by the size and shape of its chambers. Voiced sounds originate as near-periodic pulses of air produced by the vibrating vocal cords, while fricative sounds originate as noise from air turbulence at narrow constrictions, such as the teeth and lips. Vocal tract simulation operates by generating digital signals that mimic these two excitations. The characteristics of the resonant chamber are simulated by passing the excitation signal through a digital filter with similar resonances. This approach was used in one of the very early DSP success stories, Speak & Spell, a popular electronic learning aid for children.
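The source-filter idea just described can be sketched directly: an excitation (periodic pulses for voiced sounds, noise for fricatives) is passed through a digital filter whose resonance stands in for the vocal tract. The numeric values here (8000 samples/sec, a single 500 Hz formant, an 80-sample pitch period) are illustrative assumptions, not parameters from the text.

```python
import math
import random

def voiced_excitation(n, period=80):
    """Near-periodic pulse train, like air pulses from the vocal cords."""
    return [1.0 if i % period == 0 else 0.0 for i in range(n)]

def fricative_excitation(n, seed=0):
    """Noise, like air turbulence at a narrow constriction."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

def resonator(x, freq_hz, fs=8000, r=0.97):
    """Two-pole filter with a single resonance at freq_hz (pole radius r < 1)."""
    w = 2 * math.pi * freq_hz / fs
    a1, a2 = 2 * r * math.cos(w), -r * r
    y1 = y2 = 0.0
    out = []
    for xi in x:
        y = xi + a1 * y1 + a2 * y2   # recursive (IIR) resonant filter
        out.append(y)
        y1, y2 = y, y1
    return out

# A crude "vowel": pulse-train excitation shaped by one formant resonance.
vowel = resonator(voiced_excitation(1000), freq_hz=500)
```

A real synthesizer cascades several such resonators (one per formant) and varies their frequencies over time; this single-formant version only shows the structure.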

Speech recognition

Automatic human speech recognition is more difficult than producing speech. Speech recognition is a classic example of something that the human brain does well, but digital computers do poorly. Digital computers can store and remember very large amounts of information, perform mathematical calculations at very high speeds, and do repetitive tasks without becoming bored or inefficient. Unfortunately, today's computers perform very poorly when faced with raw sensory data. It is easy to teach a computer to send you a monthly bill. Teaching the same computer to understand your voice is a big job.

Digital signal processing generally approaches speech recognition in two steps: feature extraction followed by feature matching.

Each word in the incoming audio signal is first isolated and then analyzed to identify the type of excitation and the resonance frequencies. These parameters are then compared with examples of previously spoken words to find the closest match. Such systems are often limited to a few hundred words, can accept only speech with distinct pauses between words, and must be retrained for each individual speaker. While this is adequate for many commercial applications, these limitations are humbling compared with the abilities of human hearing. There is a great deal of work left to be done in this area, and enormous financial rewards await those who turn it into successful commercial products.
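The matching step can be illustrated with a toy nearest-template classifier. The feature vectors and word templates below are made-up assumptions standing in for the excitation and resonance parameters the text describes.

```python
def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def recognize(features, templates):
    """Return the template word whose stored features are closest.
    templates: dict mapping word -> example feature vector."""
    return min(templates, key=lambda word: distance(features, templates[word]))

# Hypothetical stored examples from a training session.
templates = {"yes": [0.9, 0.1, 0.3], "no": [0.2, 0.8, 0.5]}
print(recognize([0.85, 0.2, 0.25], templates))  # yes
```

The limitations mentioned above follow directly from this structure: the vocabulary is whatever set of templates was stored, and the templates are tied to the speaker who recorded them.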

Echo Location

A common way to obtain information about a remote object is to bounce a wave off of it. For example, radar operates by transmitting pulses of radio waves and listening for the echoes returning from aircraft. In sonar, sound waves are transmitted through water to detect submarines and other submerged objects. Geophysicists probe the earth by setting off explosions and listening for the echoes from deeply buried layers of rock. While these applications share a common thread, each has its own specific problems and needs. Digital signal processing has revolutionized all three fields.

Radar

Radar is an acronym for RAdio Detection And Ranging. In the simplest radar system, a radio transmitter produces a pulse of radio-frequency energy a few microseconds long. This pulse is fed into a highly directional antenna, from which the radio wave propagates away at the speed of light. Aircraft in the path of the wave reflect a small portion of the energy back toward a receiving antenna located near the transmission site. The distance to the object is calculated from the elapsed time between the transmitted pulse and the received echo. The direction to the object is known from where the directional antenna was pointing when the echo was received.
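The range calculation just described is simple round-trip arithmetic: the pulse travels out and back at the speed of light, so the distance is half of speed times elapsed time.

```python
C = 299_792_458.0  # speed of light, meters per second

def radar_range_m(elapsed_s):
    """Distance to the target given the pulse's round-trip time."""
    return C * elapsed_s / 2

# An echo arriving 1 millisecond after transmission puts the aircraft
# about 150 kilometers away.
print(round(radar_range_m(1e-3) / 1000))  # 150
```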

The operating range of a radar system is determined by two parameters: how much energy is in the initial pulse, and the noise level of the radio receiver. Unfortunately, putting more energy into a pulse usually requires making the pulse longer. In turn, a longer pulse degrades the accuracy of the elapsed-time measurement. This creates a conflict between two important parameters: the ability to detect objects at long range, and the ability to accurately determine the distance to an object.

DSP has revolutionized radar in three areas, all of which relate to this basic problem. First, DSP can compress a pulse after it is received, providing better distance determination without reducing the operating range. Second, DSP can filter the received signal to reduce noise. This increases the range without degrading the distance determination. Third, DSP enables the rapid selection and generation of different pulse shapes and lengths. Among other things, this allows the pulse to be optimized for a particular detection problem. Now for the impressive part: much of this is done at a sampling rate comparable to the radio frequency used, as high as several hundred megahertz! When it comes to radar, DSP is as much about high-speed hardware design as it is about algorithms.
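Pulse compression can be sketched as correlation of the received signal against the known transmitted waveform (a matched filter). The 13-bit Barker code is a classic coded waveform; its use here, and the signal layout, are illustrative assumptions, not details from the text.

```python
BARKER13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def matched_filter(received, template):
    """Correlate the received signal against the known pulse shape."""
    n, m = len(received), len(template)
    return [sum(received[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

# A weak 13-sample echo starting at offset 20 in an otherwise empty return.
rx = [0.0] * 50
for j, b in enumerate(BARKER13):
    rx[20 + j] = 0.3 * b

peak = matched_filter(rx, BARKER13)
print(peak.index(max(peak)))  # 20 -- the long pulse compresses to a sharp peak
```

This is how a long, energetic pulse can still yield a precise arrival time: the correlation concentrates the pulse's energy into one narrow peak.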

Sonar

Sonar is an acronym for SOund NAvigation and Ranging. It is divided into two categories, active and passive. In active sonar, sound pulses between 2 kHz and 40 kHz are transmitted into the water, and the resulting echoes are detected and analyzed. Uses of active sonar include the detection and localization of undersea objects, navigation, communication, and mapping of the sea floor. A typical maximum operating range is 10 to 100 kilometers. In contrast, passive sonar simply listens for underwater sounds, including natural turbulence, marine life, and mechanical sounds from submarines and surface vessels. Since passive sonar emits no energy, it is ideal for covert operations: you want to detect the other guy without him detecting you. The most important application of passive sonar is in military surveillance systems that detect and track submarines. Passive sonar typically uses lower frequencies than active sonar, because they propagate through water with less absorption. Detection ranges can be thousands of kilometers.

DSP has revolutionized sonar in many of the same areas as radar: pulse generation, pulse compression, and filtering of the detected signals. In one view, sonar is simpler than radar because it involves lower frequencies. In another view, sonar is more difficult than radar because the environment is far less uniform and stable. Sonar systems usually employ extensive arrays of transmitting and receiving elements, rather than just a single channel. By properly controlling and mixing the signals of these many elements, a sonar system can steer the emitted pulse toward a desired location and determine the direction from which echoes are received. To handle these many channels, a sonar system requires DSP computing power on the same scale as radar.
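The "controlling and mixing" of many element signals can be sketched as delay-and-sum beamforming: each element's signal is delayed so that a wavefront from the desired direction lines up, then the elements are summed so that direction adds coherently. The array geometry and delays below are illustrative assumptions.

```python
def delay_and_sum(element_signals, steer_delays):
    """element_signals: one sample list per array element.
    steer_delays: per-element delay (in samples) applied before summing."""
    n_out = min(len(s) - d for s, d in zip(element_signals, steer_delays))
    n_el = len(element_signals)
    return [sum(s[d + i] for s, d in zip(element_signals, steer_delays)) / n_el
            for i in range(n_out)]

# A wavefront sweeps across three elements, arriving one sample apart.
elements = [
    [0, 0, 1, 0, 0, 0],   # pulse arrives at t = 2
    [0, 0, 0, 1, 0, 0],   # t = 3
    [0, 0, 0, 0, 1, 0],   # t = 4
]
beam = delay_and_sum(elements, steer_delays=[0, 1, 2])
print(beam)  # peak of 1.0 at index 2: the aligned echoes add coherently
```

Steering delays that match a different arrival pattern would leave this pulse smeared and attenuated, which is how the array distinguishes directions.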

Reflection seismology

In the early 1920s, geophysicists discovered that the structure of the Earth's crust could be probed with sound. Prospectors would set off an explosion and record the echoes from boundary layers more than ten kilometers beneath the surface. These echo seismograms were interpreted by eye to map the subsurface structure. The reflection seismic method quickly became the primary technique for locating oil and mineral deposits, and it remains so today.

In the ideal case, a sound pulse sent into the ground produces a single echo for each boundary layer the pulse passes through. Unfortunately, the situation is usually not this simple. Each echo returning to the surface must pass through all the boundary layers above where it originated. This can cause the echo to bounce between layers, giving rise to echoes of echoes being detected at the surface. These secondary echoes can make the detected signal very complicated and difficult to interpret. Since the 1960s, digital signal processing has been widely used to isolate the primary echoes from the secondary echoes in reflection seismograms. How did the early geophysicists manage without DSP? The answer is simple: they looked in easy places, where multiple reflections were minimized. DSP allows crude oil to be found in difficult locations, such as under the ocean.

Image Processing

Images are signals with special characteristics. First, they are a measure of a spatial parameter (distance), whereas most signals are a measure of a temporal parameter (time). Second, they contain a great deal of information. For example, more than 10 megabytes may be needed to store one second of television video. This is more than a thousand times greater than a voice signal of comparable length. Third, the final judge of quality is often a subjective human evaluation, rather than an objective criterion. These special characteristics have made image processing a distinct subgroup within DSP.

Medical

In 1895, Wilhelm Conrad Röntgen discovered that X-rays could pass through substantial amounts of matter. Medicine was revolutionized by the ability to look inside the living human body. Medical X-ray systems spread throughout the world in only a few years. Despite its obvious success, medical X-ray imaging was limited by four problems until the advent of DSP and related technologies in the 1970s. First, overlapping structures in the body can hide behind one another. For example, portions of the heart may not be visible behind the ribs. Second, it is not always possible to distinguish between similar tissues. For example, it may be possible to separate bone from soft tissue, but not to distinguish a tumor from the liver. Third, X-ray images show anatomy, the body's structure, not physiology, the body's operation. The X-ray image of a living person looks just like the X-ray image of a dead one! Fourth, X-ray exposure can cause cancer, so it must be used sparingly and only with proper justification.

The problem of overlapping structures was solved in 1971 with the introduction of the first computed tomography scanner (formerly called computed axial tomography, or CAT scanner). Computed tomography (CT) is a classic example of digital signal processing. X-rays from many directions are passed through the section of the patient's body being examined. Instead of simply forming an image from the detected X-rays, the signals are converted into digital data and stored in a computer. The information is then used to calculate images that appear as slices through the body. These images show much greater detail than conventional techniques, allowing significantly better diagnosis and treatment. The impact of CT was nearly as large as the original introduction of X-ray imaging itself. Within only a few years, every major hospital in the world had access to a CT scanner. In 1979, two of the principal contributors to CT, Godfrey N. Hounsfield and Allan M. Cormack, shared the Nobel Prize in Medicine.
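The reconstruction idea can be hinted at with a drastically simplified sketch. Real CT measures X-ray attenuation from many angles and uses filtered back projection; here only two "directions" (row sums and column sums of a tiny grid) are back-projected, purely to show how measurements through the body can be turned into an image of its interior.

```python
def back_project(row_sums, col_sums):
    """Spread each measured sum back across the pixels its ray passed through."""
    return [[r + c for c in col_sums] for r in row_sums]

# A single dense spot in a 4x4 "body section".
image = [[0] * 4 for _ in range(4)]
image[1][2] = 5

# The two sets of "X-ray measurements": one per row, one per column.
row_sums = [sum(row) for row in image]
col_sums = [sum(row[c] for row in image) for c in range(4)]

estimate = back_project(row_sums, col_sums)
# The brightest pixel of the estimate lands where the dense spot was.
```

With only two directions the estimate is blurry (every pixel sharing a row or column with the spot is brightened), which is exactly why real scanners use many directions and a filtering step.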

The last three X-ray problems have been solved by using penetrating energy other than X-rays, such as radio and sound waves, with DSP playing a key role in all of these technologies. For example, Magnetic Resonance Imaging (MRI) uses magnetic fields in conjunction with radio waves to probe the interior of the human body. Properly adjusting the strength and frequency of the fields causes the nuclei of atoms in a localized region of the body to resonate between quantum energy states. This resonance results in the emission of secondary radio waves, which are detected by an antenna placed close to the body. The strength and other characteristics of this detected signal provide information about the localized region in resonance. Adjusting the magnetic field allows the resonance region to be scanned throughout the body, mapping the internal structure. This information is usually presented as images, just as in computed tomography. In addition to providing excellent discrimination between different types of soft tissue, MRI can provide information about physiology, such as blood flow through arteries. MRI relies entirely on digital signal processing techniques and could not be implemented without them.

Space

Sometimes you just have to make the best of a bad picture. This is frequently the case with images taken from unmanned satellites and space probes. No one is going to send a repairman to Mars just to tweak the knobs on a camera! DSP can improve the quality of images taken under extremely unfavorable conditions in several ways: brightness and contrast adjustment, edge detection, noise reduction, focus adjustment, motion blur reduction, and so on. Images with spatial distortion, such as those encountered when a flat image is taken of a spherical planet, can also be warped into a correct representation. Many individual images can be combined into a single database, allowing the information to be displayed in unique ways. For example, a sequence of images can be played as a movie, simulating an aerial flight over the surface of a distant planet.

Commercial Imaging Products

The large amount of information contained in images is a problem for systems sold in mass quantities to the general public. Commercial systems must be cheap, and this does not mesh well with large memories and high data-transfer rates. One answer to this dilemma is image compression. Just as with voice signals, images contain a tremendous amount of redundant information and can be run through algorithms that reduce the number of bits needed to represent them. Television and other moving pictures are especially suitable for compression, since most of the image remains unchanged from frame to frame. Commercial imaging products that take advantage of this technology include videophones, computer programs that display moving pictures, and digital television.
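The frame-to-frame redundancy argument can be shown with a toy frame differencer (a simplification of what real video codecs do): instead of storing every pixel of every frame, store only the pixels that changed since the previous frame.

```python
def frame_diff(prev, cur):
    """Return (index, new_value) pairs for the pixels that changed."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, cur)) if p != c]

def apply_diff(prev, diff):
    """Rebuild the current frame from the previous frame plus the diff."""
    cur = list(prev)
    for i, v in diff:
        cur[i] = v
    return cur

# Two 100-pixel "frames" differing in a single pixel.
frame1 = [0] * 100
frame2 = [0] * 100
frame2[40] = 255

diff = frame_diff(frame1, frame2)
print(len(diff))  # 1 -- one stored value instead of a full 100-pixel frame
```

Real codecs add motion estimation and lossy transforms on top of this, but the payoff is the same: when little moves between frames, little needs to be stored or transmitted.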