In the “Wintel” business ecosystem, literally thousands of companies work in tandem to provide ever-increasing value and utility for the users of their products. It all starts with the x86 processor from Intel. A new-generation processor supports more capable operating systems. The new-generation OSs support higher-function applications, and so on. OS/2 Warp and Windows NT wouldn’t be possible on ‘286-generation hardware and can barely run on ‘386s. Similarly, advances in computer-telephony system-resource architectures are keyed to advances in the price/performance ratios of digital signal processors (DSPs). That makes the choice of a DSP, or DSP family, one of the biggest decisions CT system-resource board designers face. The DSP’s architecture and performance strongly influence the board’s architecture. It’s also a strategic decision of the first order, influencing the company’s development productivity and competitiveness for years to come.

What Are DSPs, and Why Are They So Important to Computer-Telephony?

What makes the open-architecture DSP so important to the value-adding telecommunications industry is that it serves as a universal “information transducer” — a device which transforms the analog signal found on today’s public switched network to a computer-usable representation, and performs the reverse process for information to be transmitted. That’s not to say a DSP is an analog-to-digital converter. It’s not. Rather, the DSP takes the output of a codec (what we call the specialized converters used in telephony) and derives the data stream’s inherent information content, such as the encoded ones and zeros of a data modem. It’s useful in telephony applications to think of DSPs as software-defined hardware because, in effect, they simulate electrical circuits in software, and do it in real time. For example, not too long ago modems and DTMF detectors were implemented with resistors, capacitors, inductors, and operational amplifiers. That’s why most DSP programmers are electrical engineers. These EEs are really designing circuits.
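To make the “circuits in software” idea concrete, consider tone detection. The DTMF detector that was once a network of analog filters reduces, in software, to a few arithmetic operations per PCM sample. Below is a minimal Python sketch of the Goertzel algorithm, the standard building block of software tone detectors (the function name and the 8-kHz sample rate are illustrative, not taken from any particular product):

```python
import math

def goertzel_power(samples, target_hz, sample_rate=8000.0):
    """Return the energy of one frequency in a block of PCM samples --
    the core operation of a software tone (DTMF) detector."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)      # nearest frequency bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:                           # one multiply-accumulate per sample
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# A pure 770 Hz tone (a DTMF row frequency) sampled at 8 kHz scores far
# higher at 770 Hz than at 1209 Hz (a DTMF column frequency)
tone = [math.sin(2.0 * math.pi * 770.0 * i / 8000.0) for i in range(205)]
```

Run one `goertzel_power` call per DTMF frequency on each block of samples and the digit falls out of whichever row and column frequencies carry the energy.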

Roots

The DSP’s roots go back two decades to the supercomputer and the array processor (or vector processor). Because of their cost, these machines were limited to very high-value problems, such as weather forecasting and oil exploration. But in the early ’80s the architecture of these machines was scaled to the capabilities of early microprocessors, and the first commercial DSP integrated circuits came to market, following the path blazed a decade earlier by general-purpose, or scalar, microprocessors such as the Intel 4004 and 8008. Among the earliest DSPs was the Texas Instruments TMS320C10. In 1984 that chip found its way into the first DSP-based PC add-in voice board, Watson, from Natural MicroSystems. But even after the mid-’80s, DSP technology still wasn’t widely understood, and most voice-processing boards continued to be based on dedicated-function ICs, such as touch-tone generation/detection and speech compression/expansion chips. Finally, in the late ’80s the first multi-line DSP-based voice-processing boards were offered to the value-adding telecom industry by Rhetorex and Natural MicroSystems. And today, all new designs of multi-line voice-processing boards are DSP-based.

Why Are They Special?

DSPs are just specialized high-speed microprocessors, optimized for applications that require repetitive arithmetic calculations performed on arrays of data. In telecommunications those arrays of data are the PCM media streams. The analog signal of the telephone line is digitized (with a codec) to create the PCM media stream, allowing the DSP to do its work. The DSP is critical to speech processing, video compression/expansion, and modem implementations. Most media-processing algorithms require the data–the media stream–to be multiplied by different constants to implement filters. For example, the current PCM sample will be multiplied by one number, the result of the previous sample’s operation will be multiplied by another, the result from two samples ago by yet another, and so on. DSPs are optimized for exactly that type of operation: perform a memory fetch, multiply by a constant, accumulate the result, and make a decision based on it–all in a single instruction cycle. The newest DSPs can perform 6-8 such operations, plus others not listed, simultaneously, resulting in incredibly high MIPS ratings.
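The multiply-accumulate pattern just described is, in essence, a finite impulse response (FIR) filter. Here is a minimal Python sketch of the arithmetic; on a real DSP each multiply-accumulate in the inner loop is a single-cycle hardware operation:

```python
def fir_filter(samples, taps):
    """Filter a PCM stream: each output is a sum of recent samples, each
    multiplied by a constant coefficient -- the multiply-accumulate a DSP
    performs in one instruction cycle per tap."""
    history = [0.0] * len(taps)                 # the most recent samples
    out = []
    for x in samples:
        history = [x] + history[:-1]            # shift the newest sample in
        acc = 0.0
        for sample, coeff in zip(history, taps):
            acc += sample * coeff               # multiply-accumulate
        out.append(acc)
    return out
```

Feeding in a unit impulse returns the tap values themselves, which is why these coefficients are called the filter’s impulse response.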

Not only are all new “voice boards” DSP-based, with their functions defined by DSP software; so are fax and data modems, text-to-speech, and speech-recognition products. Today, for the most part, these “system-resource” functions are implemented on dedicated-function, closed-architecture, multi-stream boards, interconnected by industry-standard PCM highways (MVIP and SCbus). But over the next few years the dramatically increasing performance of DSPs will force these functions onto open-architecture, integrated-media multi-line boards where one DSP is used to provide all media processing.

Voice Boards and DSP-Resource Boards

What we call a “voice-processing” board, such as those produced by Dialogic and Natural MicroSystems (NMS), is really a DSP-resource board. It only becomes a voice board when the voice-processing DSP code is downloaded to the DSP from the host PC at system initialization. But if the DSPs on the “voice board” can be programmed to process speech, why can’t they be programmed to implement fax and data modems? Well, they can, and they are. After all, the modem in your PC is implemented on a DSP. With a powerful enough DSP equipped with enough memory, an open-architecture DSP board can just as easily support modems as it can speech processing, including speech recognition and text-to-speech. Today, DSP-resource boards are just beginning to support multiple media. Evidence of this is MultiFax, Commetrex’ independently developed fax software add-in for the NMS “voice boards”. Brooktrout and Linkon partnered with their DSP vendors to provide support for multiple media. And BICOM, CCS, and Pika are busy adding new media-processing capabilities to their boards.

Not Just for Telephony

But DSPs aren’t just for telephony; they are used just about everywhere there is a real-time interface between the analog and digital worlds. They’re in your car, on your boat, and in every factory. And just as DSPs are the basis of multiple-media, multi-line telephony boards, they are the silicon foundation of multimedia PC applications. A prime example is IBM’s Mwave technology and its Windsurfer open-DSP PC add-in board, with over 40 independent software vendors producing DSP-based applications. Microsoft is providing support for DSP-based functionality in Windows 95 and NT. And Intel has added DSP-like processing capability to the Pentium processor. That’s what MMX (Multi-Media Extensions) is all about.

So Where’s It Going?

The first DSP-based voice-processing board supported a single port of voice. And, in an indication of things to come, it also provided a V.22bis (2400 bps) data modem on the same chip. In 1988 we had the first DSP-based multi-line voice boards, providing improved performance (especially in call-progress analysis) and flexibility over earlier multi-line voice-processing boards. Of course, the DSP vendors didn’t stand still. Just as in any other sector of the semiconductor industry, developers of DSPs have provided stunning improvements in price/performance. Falling prices and increasing performance have, in the mid-’90s, given rise to the “high-density” multi-line board. Today, high-density boards simultaneously process four or more PCM streams per DSP, yielding densities of up to 60 ports of voice and fax on one board.

Further improvements in DSP price/performance will usher in the high-density “integrated-media” board, which will dominate the last third of the decade. In 1998 the industry will see 60 voice streams, 48 faxes, and 10-15 high-speed data modems–all on one DSP. Integrated-media boards can offer higher densities, reliability, and performance, while lowering cost and development time. However, a few problems have to be solved before this happens on a wide scale.

One problem has to do with board-level resource management. The DSP and RAM resources required to implement typical voice-processing functions differ widely from those needed to implement a V.34 data modem or speech recognition. It’s possible to optimize a board for voice and eliminate any capability to process other, more resource-demanding, media. So if one DSP is to handle any mix of, say, 60 voice streams and 15 high-speed data streams, the board must include flexible resource management of MIPS, RAM, and media streams.
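The bookkeeping such a resource manager must do can be sketched in a few lines of Python. The per-channel costs below are illustrative placeholders, not measured figures for any real DSP or algorithm:

```python
# Illustrative per-channel costs (MIPS, KB of RAM) -- placeholders only;
# real figures depend on the DSP and the algorithm implementations.
CHANNEL_COSTS = {"voice": (2, 8), "fax": (5, 24), "v34_modem": (20, 64)}

class DspBudget:
    """Grant a new media channel only if both MIPS and RAM remain."""

    def __init__(self, mips, ram_kb):
        self.mips = mips
        self.ram_kb = ram_kb

    def open_channel(self, media):
        need_mips, need_ram = CHANNEL_COSTS[media]
        if need_mips > self.mips or need_ram > self.ram_kb:
            return False                 # out of MIPS or RAM -- refuse
        self.mips -= need_mips
        self.ram_kb -= need_ram
        return True
```

Under these made-up costs, a 120-MIPS DSP with 512 KB of RAM that opens one V.34 modem has MIPS left for exactly 50 voice channels; whichever resource runs out first caps the channel count, which is why the manager must track both.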

Another problem has to do with industry efficiency. Since no “voice board” vendor has the resources or competencies to internally develop all media-processing technologies, we have the possibility of all of them developing their own proprietary closed-architecture boards, and then going to the various media-processing technology vendors to port their code to the board vendor’s platform. So the technology vendors will have their developers work on porting (technology shuffling) rather than developing newer and better technology. That’s not as efficient as our industry should be. It’s certainly a lot less efficient than the PC industry.

An elegant solution for the computer telephony industry would be to define a board-level environment that would allow the technology vendors (yes, including voice technology) to do it just once. They would then offer their technology as a board-level “application” which would run on any board which provided the industry-standard environment. Just as host-level application developers access system resources through APIs, the technology vendors would do the same, only at the board level. These APIs would be for the media stream, the external system interface (host), DSPs, and so on. And just as good host-level APIs hide the operating system (for portability) from the application developer, good board-level APIs would hide the board’s operating system from the developer.
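In outline, such a board-level API might look like the following Python sketch. The class and method names are hypothetical, invented here only to illustrate a vendor “application” that never sees the board’s operating system:

```python
from abc import ABC, abstractmethod

class MediaStreamAPI(ABC):
    """Hypothetical board-level API a technology vendor codes against;
    the board's own operating system is hidden behind it."""

    @abstractmethod
    def read_frame(self, channel):
        """Return the latest PCM frame on a channel."""

    @abstractmethod
    def write_frame(self, channel, pcm):
        """Queue a PCM frame for output on a channel."""

class LoopbackBoard(MediaStreamAPI):
    """A stand-in 'board' for testing: frames written come straight back."""

    def __init__(self):
        self._frames = {}

    def read_frame(self, channel):
        return self._frames.get(channel, [])

    def write_frame(self, channel, pcm):
        self._frames[channel] = pcm

def gain_technology(board, channel, gain):
    """A vendor 'application': scales a frame's samples. Because it calls
    only the API, it runs unchanged on any conforming board."""
    board.write_frame(channel, [gain * s for s in board.read_frame(channel)])
```

Port the `LoopbackBoard` role to real hardware and every technology “application” written against the API comes along for free, which is exactly the do-it-once economics the industry needs.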

Maybe we will see shrink-wrapped media-processing technology in the not-too-distant future.