Multichannel Voice Coding System
MOTOROLA
Semiconductor Products Sector Engineering Bulletin
AN2113/D
Rev. 0, 3/2001
© Motorola, Inc. 2001
Multichannel Voice Coding System on
the RTXC Operating System
By Duberly Mazuelos, Felicia Benavidez, Iantha Scheiwe
DSP applications are moving away from assembly language and
home-grown scheduling kernels to systems developed using
high-level languages and running on off-the-shelf Real-Time
Operating Systems (RTOSs). Assembly programming requires
intimate knowledge of the device architecture and prohibits easy
portability to a new architecture if cost or availability changes. C
programming is becoming more commonplace in the DSP market
because of pressures for a fast time-to-market, low cost, and
reusability. Also, C compiler technology is finally maturing to a
point where the inherent benefits of a DSP architecture can be
realized in the C language.
Engineers designing and programming complex systems
containing DSPs have long relied on their own scheduler to
determine when tasks should be handled in an application. These
schedulers are often developed in-house and are
application-specific. As the complexity of the systems increases,
the complexity of the scheduler also increases, and the task of
designing and implementing these schedulers becomes a
significant portion of the system development time. However,
RTOSs are available to ease the task of system integration and
provide the scheduling tasks necessary to meet stringent
application requirements.
Various telecommunications standards dictate specific voice
coders for each telecommunications application. Because these
voice coders are standard building-blocks in a system, third
parties have come forward to develop highly optimized assembly
language implementations of voice coders for customer use.
This application note describes a multichannel voice coding
system developed in C and executing on an RTOS. The voice
coding software from a third party is integrated with other tasks
under the RTXC RTOS. Topics covered include the voice coding
application, features of the RTXC RTOS, and a methodology for
integrating this type of system. This knowledge can assist you in
developing future systems using an RTOS.
Contents
1 Project Purpose .......................... 2
2 Voice Coding .............................. 2
2.1 Encoding/Decoding .........................4
2.2 Third-Party Voice Coding Software 6
3 Multichannel Applications ........ 6
4 C Compilers ...............................7
5 Real-time Operating Systems ... 9
6 System Overview .....................10
6.1 Target Hardware............................10
6.2 Application Software......................12
7 Software Description ...............13
7.1 Reentrant Requirement................13
7.2 Data Input/Output........................13
7.2.1 Audio Codec Initialization.............14
7.2.2 Synchronous Interface...................15
7.3 Voice Coders .................................16
7.3.1 Double Buffering...........................16
7.3.2 Wrappers........................................18
7.4 User Interface ................................18
7.5 RTXC.............................................20
7.5.1 RTXCbug.......................................21
7.6 Integration......................................22
8 Conclusions ..............................23
9 References ................................24
1 Project Purpose
In this project, we explore the issues that arise when third-party software is integrated with an RTOS and
other application software in a C language development environment. This project has better enabled us to
support Motorola customers by providing in-house experience with the following:
• Voice coders/decoders
• Multichannel application
• C and assembly coding and integration
• RTOSs
The system described in this application note executes multiple channels of the IS-96-A voice coder on a
Motorola DSP56307EVM. Using a variety of software tools and code, we experienced the intricacies that
arise from such an approach. We developed and debugged the application software using the Tasking 2.2r2
DSP563xx Software Development Toolset. C code snippets shown in this document adhere to guidelines
set forth by the 2.2r2 version of the Tasking tool set. Hand-coded assembly code interfaces to the IS-96-A¹
voice coding library supplied by Signals and Software Limited (SASL). Embedded Power Corporation's
RTXC is the underlying RTOS. The result of this project is a demo that is shown at Motorola booths at
conferences and other technical events.
2 Voice Coding
Voice coders, often called vocoders, are technically a subset of the entire range of voice coding
technology. The word “vocoder” stands for voice coder/decoder. Voice coders in infrastructure systems
compress the voice data for transmission. Voice coders have evolved from low compression capabilities to
high compression capabilities while retaining quality sound output. With improvements in voice coding
technology, we can now compress voice data to 6.3 kbps or even better with toll quality results. Toll
quality refers to the quality of sound that humans hear with analog speech. Ideally, a person cannot tell the
difference between analog speech and digitized speech coming over the air interface. Through
improvements in compression technology, this is becoming a reality even with high compression ratios.
Voice coding algorithms are evaluated in terms of four metrics:
• Subjective quality. Voice coders attempt to achieve toll quality.
• Bit rate. The level of compression achieved.
• Complexity of the algorithm. Can this algorithm be implemented using today's technology?
• End-to-end delay. If the algorithm adds too much delay to the signal, it becomes unusable in a
real-time voice application.
Speech is separated into three categories: voiced, unvoiced, and mixed. Voiced speech is quasi-periodic in
the time domain and harmonically structured in the frequency domain. Unvoiced speech is random-like.
These categories are studied to determine the most effective method of compression for each. The
characteristics of speech are created through the interaction of the voice box (speech source) and the vocal
tract that includes the mouth. This interaction creates the spectral envelope. The spectral envelope is
1. TIA/EIA/IS-96-A, "Speech Services Option Standard for Wideband Spread Spectrum Digital Cellular System."
especially important in recreating voiced speech. Peaks of the spectral envelope are called formants, which
represent the resonant modes of the vocal tract. The research into speech coding and development of voice
coders for telecommunication standards focuses heavily on understanding formants.
Figure 1. Vocal Tract Structure
As Figure 1 shows, many features in the vocal tract interact to form the spectral envelope and create the
formants. All of these interactions are studied to determine improved methods of compression for voice
coding. Figure 2 shows a more abstract view of the vocal tract. Notice how the space for the sound is more
closed at the windpipe and lips and opens up in the vocal tract passage. Also, due to the shape of the
passage, two sounds may route very differently in their travel to the lips. Differences in the travel affect the
nature of the sound. All of these aspects are studied and considered in the design of voice coders.
Figure 2. Abstract View of Vocal Tract
[Figure 1 labels the vocal tract structures: nose, palate, teeth, lips, tongue, larynx, vocal chords, epiglottis, and air from the lungs. Figure 2 labels the abstract view: nasal cavity, windpipe, vocal tract, and lips.]
2.1 Encoding/Decoding
Each of the many different voice coding algorithms combines knowledge of the human vocal system with
an understanding of the quality of speech required by the application and the processing power available to
process the given voice coder in a system. The list of algorithm types includes the following:
• Pulse Code Modulation (PCM)
• Linear Predictive Coding (LPC)
• Code Excited Linear Prediction (CELP)
• Regular Pulse Excitation (RPE)
• Vector Sum Excited Linear Prediction (VSELP)
As Figure 3 shows, the voice coders in the "vocoder" grouping achieve the lowest bit rate but also tend to
have lower subjective quality than the other coders. Perceptual and waveform coders have high subjective
quality, but the application pays a price in bandwidth since the bit rate is not as low. Hybrid coders may
achieve lower bit rate and higher subjective quality, making them desirable coders for wireless systems.
Figure 3. Comparison of Voice Coders
The voice coder used in the project discussed here is IS-96-A, a type of hybrid coder using the CELP
algorithm. IS-96-A is used in the Code Division Multiple Access (CDMA) wireless standard; it was
standardized in 1992 and uses a codebook to assist in the voice compression.² The codebook used in these
types of voice coders is a table of excitation parameters that define how the vocal tract is stimulated. The
encoder determines the appropriate excitation parameter for a given speech sample and usually transmits
2. Motorola provides a detailed discussion of CDMA on the following web site: http://www.motorola.com/NSS/Technology/cdma.html.
[Figure 3 plots subjective quality (on a scale of 1 to 5) against bit rate (2 to 64 kb/s) for vocoders, hybrid coders, waveform coders, and perceptual coders.]
both a codebook index value indicating which excitation parameter is appropriate and a codebook gain
indicating the strength of the excitation. The tables for IS-96-A require 6622 words of data space when
implemented on the DSP56300 family.
IS-96-A is a variable-rate coder, so it senses when speech activity lessens and transmits less information
during that time. This lowers the bandwidth requirements during inactive speech times and improves the
efficiency of the system in general. Therefore, theoretically a given base station can handle more channels
based on the statistical knowledge that not every user is at maximum speech data capacity at a given time.
Also, depending on the transmission method, the variable rate cuts down the noise for other users. The
maximum rate for IS-96-A is 8 kbps (full rate), and the reduced rates are 4 kbps (1/2 rate), 2 kbps (1/4 rate),
and 0.8 kbps (1/8 rate). The encoder sends information to the decoder when it adjusts the rate. It jumps only
one rate per given sample time, so the encoder does not send a full-rate sample, sense complete silence, and
then drop to the minimum rate on the next sample. Instead, it cycles through each rate as appropriate for a
given sample time and does not adjust by more than one rate at a time. This behavior helps to maintain the
quality of the speech.
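As a rough illustration of this one-step-per-frame rate adjustment, the sketch below clamps the desired rate to within one step of the previous rate. The rate codes (1 = 1/8 rate through 4 = full rate) follow the x0/y0 convention mentioned later in Section 7.3.2; the function and variable names are hypothetical and are not part of the SASL library.

/* Hypothetical illustration of IS-96-A stepping by at most one rate per frame.
 * Rate codes: 1 = 1/8 rate (0.8 kbps), 2 = 1/4 rate (2 kbps),
 *             3 = 1/2 rate (4 kbps),   4 = full rate (8 kbps).
 */
static int limit_rate_change(int previous_rate, int desired_rate)
{
    if (desired_rate > previous_rate + 1)
        return previous_rate + 1;   /* step up by at most one rate */
    if (desired_rate < previous_rate - 1)
        return previous_rate - 1;   /* step down by at most one rate */
    return desired_rate;            /* within one step: use it directly */
}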
The encoder analyzes the input speech and transmits a set group of speech parameters to the decoder.
These parameters include coefficients related to the formants that determine the resonant frequencies of the
vocal tract at a given time. The encoder also transmits the codebook gain and index, pitch information, and
parity check bits. The encoder determines these parameters during the process shown in Figure 4. In
IS-96-A, the encoder implements a search procedure to recreate the input speech by comparing it to the
output of the synthesizer in the encoder. For each received input sequence, the encoder attempts to
synthesize the speech, comparing its output with the input speech and calculating a weighted error value.
Once this error is sufficiently minimized, the parameters that create the “best” synthesized speech are
transmitted to the decoder.
Figure 4. IS-96 Encode Block Diagram
The decoder receives the parameters transmitted by the encoder and reproduces the speech so that the
person listening on the receive end can understand the person speaking. The decoding process does not
require the analysis of the speech that the encoder must complete, so decoding requires less processing
power for a system than encoding. Figure 5 shows the steps required for decoding in IS-96-A. The first
filter, called the long-term filter, reconstructs the long-term pitch periodicities of the speech in the
excitation signal. The second, called the short-term filter, models the spectral shape of the speech.³
3. http://www.msdmag.com/frameindex.htm?98/9806art4.htm.
[Figure 4 block diagram: the codebook output, scaled by the code gain, drives the long-term filter B(z) and short-term filter A(z) to produce synthesized speech s1(n); the difference from the input speech s(n) passes through the weighting filter W(z) and an energy calculator to form the weighted error.]
Figure 5. IS-96 Decode Block Diagram
2.2 Third-Party Voice Coding Software
Since voice coders are an integral part of systems implementing various telecommunications standards, it
is essential that they be programmed efficiently. Efficient programming requires extensive use of assembly
language programming to improve voice coding execution time. The shorter the voice coder execution
time, the more voice coding channels a given system can support and the lower the system cost per voice
channel. Programming voice coders requires intimate knowledge of both the DSP hardware and the voice
coder algorithm itself. In order to develop efficient and timely voice coder solutions, third party companies
focus completely on implementing telecommunication standard software on various DSP architectures.
Purchasing these software modules from third parties decreases the investment an Original Equipment
Manufacturer (OEM) must make in software engineers and technology, and it improves time-to-market.
Thus, third parties are used almost exclusively to generate telecommunications standard software. One
such company that develops software for the Motorola DSP56300 architecture is Signals and Software
Limited (SASL), who developed the IS-96-A voice coder discussed here.
3 Multichannel Applications
The wireless infrastructure market requires powerful DSP architectures for processing multiple channels of
information, whether it be voice information or data streams for wireless internet applications. While a
subscriber device must process only a single user’s information, this information is then transmitted to a
base station that processes the information of many users. Therefore, a single DSP device must have the
ability to process multiple channels of information. Moreover, a “farm” of DSPs is often combined in a
system to increase channel processing capacity as standards and user applications evolve to include more
features for processing.
To determine how many channels a DSP can process, the system designer must look at data I/O
capabilities, processing time for each task required by the standards involved (such as voice coding,
interleaving, and so on), and task management. Task management is handled deterministically or via an
RTOS that decides task processing based on pre-defined priority levels. While some system software
remains in assembly code to operate at maximum efficiency on the DSP architecture (probably standard
library code such as the voice coders provided by third parties), the use of high-level languages and an
RTOS enable the system designer to modify software quickly and thus keep up with evolving standards
while managing the increased number of tasks required to implement the system design.
[Figure 5 block diagram: the codebook output, scaled by the code gain, drives the long-term filter (long-term parameters) and then the short-term filter (short-term parameters) to produce the synthetic speech.]
4 C Compilers
Embedded applications are written primarily in C, which is becoming mandatory due to time-to-market
constraints, reusability, and portability demands. In the quest for C programmability of DSPs, the key
concern is to optimize the DSP code. Every clock cycle is precious because of the real-time environments
in which DSPs operate. For example, in a wireless link the DSPs handle filtering, signal encoding, channel
encoding, chip rate processing, and the inverse of all these steps within a mandatory time delay that is not
noticeable to those involved in the conversation. Today’s C compilers are designed to tackle DSP
programmability constraints.
DSP C compilers must harness the power of the DSP architecture. Primary DSP features that must be
accessible to the programmer include MAC instructions, hardware DO loops, modulo addressing,
efficient memory access, and parallel operation of computing units. Ideally, the compiler flexibly uses all
the DSP assembly instructions to maintain highly efficient code. DSP instructions are designed for
efficiency so a C compiler must strive to retain assembly efficiency while adding its own positive
attributes, such as programmability and portability.
One difficulty of DSP C compiler development is the variability of DSP instruction sets themselves. For
instance, each product line of DSPs has a unique instruction set, and even within a family of DSPs added
device functionality may need to be considered. For example, in the Motorola DSP56300 family, each
device has a different memory map due to differently sized memory. The DSP56301 has 24 KB of
memory; the DSP56311 has 512 KB. The compiler must be aware of the memory constraints when
building an application. Another difference among the DSP56300 family is the peripherals. Some devices
contain the EFCOP, which yields a substantial increase in processing power since it has its own MAC
(multiply-accumulate) unit and can handle its computational tasks independently from the core ALU. The
differences in peripherals make it necessary to have new compiler header files for each DSP within a
family. For this application note, we use the Tasking C compiler, which has a full line of support for the
DSP56300 family—in addition to the standard tool set consisting of the macro assembler, linker/locator,
libraries, CrossView Pro Debugger, and Embedded Development Environment (EDE). Collectively, these
tools are called the DSP56xxx software development tools. Examining some features of the Tasking tools
can give you an idea of what today's C compilers must provide to be usable in real-time DSP applications.
To achieve code efficiency, the compilers apply optimization techniques. Compiler extensions allow you
to program your specific applications, such as filters, in C without significant overhead. Use of the
fractional data type and memory source qualifiers helps to optimize loops and exploits the parallel
execution capability of the DSP. Bit field operations are optimized by use of the EXTRACT and
EXTRACTU operations. The compiler supports the 16- and 24-bit modes of the DSP56300. Furthermore,
the Tasking tool suite uses a calling convention that guarantees better use of registers and therefore less
function call overhead.
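As a rough illustration of these extensions, the sketch below computes a fractional dot product. The exact keyword spellings (_fract for the fractional type, _X and _Y as memory qualifiers) and the mapping of the loop onto a hardware DO loop with MAC instructions are assumptions based on the conventions used elsewhere in this note (the CHANNEL_CONFIGURATION structure in Section 6.2 uses a combined _fract_Y spelling), not guaranteed compiler behavior.

/* Sketch of a fractional dot product using DSP-style C extensions.
 * The _fract type and the _X/_Y memory qualifiers are assumed here;
 * a suitable compiler can fetch a[] and b[] in parallel from X and Y
 * memory and map the loop to a hardware DO loop with MAC instructions. */
_fract dot_product(_fract _X *a, _fract _Y *b, int n)
{
    _fract sum = 0;
    int i;

    for (i = 0; i < n; i++)
        sum += a[i] * b[i];     /* fractional multiply-accumulate */

    return sum;
}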
To further aid in optimizing DSP code, C compiler tool suites must offer other features. Tasking supports
in-line assembly functions that translate directly, without overhead, to specific DSP capabilities. Tasking
also allows for adjustable code generation with #pragmas. A pragma is a set of instructions that can be
written to control the individual compiler optimizations, to allocate character arrays, and to handle the
cache. Circular buffer-type modifiers give efficient memory access. Tasking also provides floating-point
libraries and memory models to support the different memory sizes across the DSP56xxx line.
Within the Tasking EDE, makefiles can be created directly from the GUI window. This is convenient for
code development.
In addition to the compiler, powerful easy-to-use debuggers are required for DSP C code development.
Figure 6 shows some of the useful features of Tasking's source window, which displays the program as C,
assembly, or mixed. Other useful debugger windows are the tool/status window and the register, trace,
stack, and memory windows. In the tool/status window, you can load, run, step, and stop code as well as
set up debugger parameters and access help files from one convenient place. In the register window, you
can display, edit, and group all registers. By grouping registers you can display a set of registers as you
need them. Highlighted registers indicate what has changed since the previous execution step. The trace
window displays the contents of the OnCE/JTAG port trace buffer of the DSP56xxx and automatically
updates each time execution halts. The stack window displays the state of the current stack frame,
including function parameters. The memory window enables you to monitor and edit the current value of
memory locations. Multiple memory windows for different ranges can be opened for convenience.
Figure 6. Tasking Debugger Code Window
[Figure 6 callouts: assembly, C, and mixed code views are available during debug (mixed view shown, with C code bolded and assembly code displayed immediately beneath the associated C code); breakpoints, watches, and watch variables can be set in the code for debug; function and variable searches are provided; menu options give access to all DSP registers, including memory, core, and peripherals; functions can be stepped into and over.]
5 Real-time Operating Systems
As the use of higher-level languages becomes predominant in the DSP market, the use of RTOSs in these
applications also increases. Though some programmers assert that an RTOS is not necessary, an RTOS
eases software integration and system design. A natural relation exists between C and an RTOS. The same
features that make the C language attractive for "large-scale" DSP applications also apply to the use of a
multitasking RTOS. A mature RTOS from an established third-party vendor makes optimal use of system
resources while providing a deterministic system behavior without the resource investment necessary to
develop and support this software in-house.
The majority of off-the-shelf RTOSs that meet the real-time requirements of today’s applications use a
multitasking, prioritizable, pre-emptive model. Multitasking is a technique that allows multiple chores to
share the resources on a DSP. The application is partitioned into smaller tasks that are scheduled to execute
by the RTOS kernel. In a pre-emptive system, each task is assigned a priority based on its relative
importance. When an event occurs, such as an interrupt or a signal from another task, the kernel decides,
based on task priorities, which task can use the DSP resources. A task with a higher priority can "bump," or
preempt, a lower-priority task.
Communication applications, whether they apply to the disparate subscriber (client) or infrastructure
(server) markets, lend themselves to a multitasking environment since these applications can be readily
partitioned into multiple, prioritizable, preemptable tasks such as I/O, data parsing, voice encoding and
decoding, protocol handling, interrupts, host communication, control, and test and debug.
The RTOS for the multichannel voice coder application described here is the Real-Time eXecutive in C
(RTXC) by Embedded Power Corporation.⁴ RTXC is a multitasking, prioritizable, preemptive operating
system designed for responsiveness and predictability. Figure 7 shows an overview of the RTXC
application. At design time the application is decomposed into tasks that use an Application Programming
Interface (API) to make calls to the services provided by RTXC, as shown in Figure 7.
Figure 7. RTXC Services and Task Interface
4. RTXC is the recommended RTOS for the DSP56300 family. See http://www1.motorola-dsp.com/tools-info/rtos-dev.html for details.
[Figure 7 diagram: application tasks (Task 1 through Task n), interrupt service routines, and device driver tasks access the RTXC services (inter-task communication, event management, task management, resource management, timer management, and memory management) through RTXC service calls (KS_...) and RTXC ISR calls (KS_ISR...).]
By selecting the tasks properly, the developer can isolate the “application” tasks (labeled Task 1 through
Task n) from the “device-specific” tasks (shown as interrupt service routines and device driver tasks),
which makes it easier to reuse or migrate the application tasks across processors or applications. The
RTXC API provides access to the following services and objects:
• Task management: starting and stopping tasks, and so on.
• Event management: semaphores.
• Inter-task communication: mailboxes, messages, and queues.
• Resource management: sharing system resources.
• Time management: timers.
• Memory management: memory partitions.
Though there is no defined way to design an application with an RTOS, there are general guidelines. Start
by breaking the application into tasks by drawing data flow diagrams and determining the input,
processing function, and output for each task. Define which events and conditions require a task to
perform its function. Initially assign priorities based on how often each task must perform its function:
the highest priority goes to the most time-critical ta sk.
In its distribution form, RTXC is not executable. RTXC is furnished as a set of C and assembly language
source files. You must first compile RTXC source code and then link it with the object files of the
application programs and other system configuration files. Developers should treat RTXC as any other
software library. It is not necessary to know how RTXC functions internally. You need only know what
functionality RTXC provides and what RTXC kernel services achieve the desired result. Knowledge of
which inputs produce which outputs is all that is needed. Additionally, not all RTXC objects are needed in
an application. In fact, an effective application can be implemented using tasks and semaphores alone.
Therefore, RTXC allows scaling of its software to include only the modules and API calls that the
application developer wishes to use, thus improving memory and execution efficiency.
6 System Overview
The system discussed here executes multiple channels of the IS-96-A voice coder on a Motorola
DSP56307EVM. This section outlines the target hardware and application software developed for this
project.
6.1 Target Hardware
The target hardware is a DSP56307EVM with a connection to a daughter board that contains four Crystal
CS4215 stereo audio codecs. All software described in this application note runs on the DSP56307EVM.
This board can be purchased for development purposes. The EVM has an on-board DSP56307 and audio
codec. You can the refore pr ogram the DSP to process voice coming fr om an audio source through the A/D
converter and then send the processed data back out through the D/A converter to hear through
headphones or speakers. The EVM also has on-board SRAM and Flash memory.
The DSP56307 is a powerful device that can process multiple channels of voice data. The audio codec on
the DSP56307EVM allows processing for two channels of data. To demonstrate the processing
capabilities of the DSP56307, more than one audio codec is required. We developed the multichannel
board primarily to demonstrate the execution of multiple voice coding channels on a DSP56307EVM. The
DSP communicates with the multichannel board via the Enhanced Synchronous Serial Interface 0 (ESSI0)
port on the EVM. Each codec on the board executes A/D or D/A operations for two channels of audio data
(stereo audio requires two channels). Four audio codecs are provided for A/D and D/A conversion of the
audio inputs to be processed by the DSP. Therefore, up to eight audio channels can be sent through the
codec board to the DSP device. Figure 8 shows the primary multichannel board components.
Figure 8. Multichannel Board
Table 1 shows the signal connection table between the EVM and the multichannel board. Immediately
after power-up, the DSP is the master of the connection and uses its GPIO lines to signal the codec that it is
sending control information. Once the master codec (Codec 1) receives the control words, the connection is
reset and the master codec provides the clock for the subsequent data transmission between the EVM and
the multichannel board.
Table 1. Signal Connection
EVM ESSI0 Signals      Multichannel Board Codec Signals
SCK1                   SCLK
GPIO                   RESET
STD1                   SDIN
SRD1                   SDOUT
GPIO                   D/C
SC11                   FSYNC
GND                    GND
[Figure 8 shows the multichannel board layout: four CS4215 codecs (Codec 1 through Codec 4, at U10-U13), 74HCT244/74VHC244 buffers and a 74HCT04 inverter, the I/O connector to the ESSI, power regulators for 5 V and 3.3 V fed from a 9 V DC input, and stereo left/right input and stereo output jacks (J2-J13) for each codec.]
6.2 Application Software
The application software manipulates four independent audio data streams or channels: ch0, ch1, ch2
and ch3. Each audio channel is processed on the basis of one of four user-selected modes:
• Off. If the channel is Off, the received audio input sample is simply ignored.
• Pass. In Pass mode, each received input is immediately transmitted without any further
processing.
• Delay. In Delay mode, the data received is used to fill an input 'frame' buffer that is then copied to
an output 'frame' buffer for transmission. A certain amount of delay is introduced into the audio
stream that is proportional to the size of the frame buffers being used.
• IS96a. In IS96a mode, the operation is similar to Delay mode except that, rather than being copied
from the input frame buffer directly to the output frame buffer, the data is encoded (compressed) and
immediately decoded (decompressed) by the IS-96-A voice coder.
To help manage this system, a C data structure is defined that contains the information necessary to
manage an individual audio channel, as follows:
struct CHANNEL_CONFIGURATION
{
    int      _Y id;              // channel identification
    int      _Y mode;            // processing mode
    _fract_Y *rx_fbuff_base;     // receive frame buffer
    int      _Y rx_fbuff_ptr;    // receive frame buffer index
    _fract_Y *in_fbuff_base;     // input frame buffer
    _fract_Y *out_fbuff_base;    // output frame buffer
    _fract_Y *tx_fbuff_base;     // transmit frame buffer
    int      _Y tx_fbuff_ptr;    // transmit frame buffer index
    int      _Y rx_fbuff_full;   // receive flag
    int      _Y tx_fbuff_empty;  // transmit flag
} ch0, ch1, ch2, ch3;
The channel identification parameter, id, indicates which channel (ch0, ch1, ch2, or ch3) is defined by
the C structure parameters and points to a time slot in the ESSI frame (see Section 7.2.2) that contains the
audio data associated with the channel. The channel processing mode, mode, defines which mode is
currently used to process the channel (Off, Pass, Delay, or IS96a). The remaining elements in the
CHANNEL_CONFIGURATION C structure implement a double buffer mechanism that manages the frame
buffers in the Delay and IS96a modes. The IS-96-A voice coder is obtained from SASL.
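For illustration, the sketch below shows how one channel might be configured at startup. The buffer arrays, the mode codes, and the init_channel() helper are hypothetical; only the structure fields, the four-buffer arrangement, and the 160-sample frame size come from this note.

/* Hypothetical setup of one audio channel. Each channel needs its own set of
 * four frame buffers; only ch0's buffers are shown here. */
#define FRAME_SIZE 160                  /* speech samples per frame */

enum { MODE_OFF, MODE_PASS, MODE_DELAY, MODE_IS96A };   /* illustrative codes */

static _fract_Y P1[FRAME_SIZE], P2[FRAME_SIZE];   /* receive/input buffer pair */
static _fract_Y P3[FRAME_SIZE], P4[FRAME_SIZE];   /* output/transmit buffer pair */

static void init_channel(struct CHANNEL_CONFIGURATION *ch, int id, int mode)
{
    ch->id             = id;        /* ESSI time slot / channel number */
    ch->mode           = mode;      /* Off, Pass, Delay, or IS96a */
    ch->rx_fbuff_base  = P1;        /* receive fills P1 while ... */
    ch->in_fbuff_base  = P2;        /* ... processing reads P2 */
    ch->out_fbuff_base = P3;        /* processing writes P3 while ... */
    ch->tx_fbuff_base  = P4;        /* ... transmit drains P4 */
    ch->rx_fbuff_ptr   = 0;
    ch->tx_fbuff_ptr   = 0;
    ch->rx_fbuff_full  = 0;
    ch->tx_fbuff_empty = 1;         /* nothing to transmit yet */
}

/* Example: configure channel 0 to start in the Off mode. */
/* init_channel(&ch0, 0, MODE_OFF); */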
A channel is a stream of audio data that is accessed using the DSP56307 Enhanced Synchronous Serial
Interface (ESSI). The ESSI first initializes the hardware daughter board described in Section 6.1, Target
Hardware, on page 10 and then receives and transmits the audio data sampled by the
multichannel board. The system connects to a host computer to allow a user to enter commands and
change a channel’s processing mode via a command-line interface (CLI) that provides status information
and allows configuration of the four audio channels. The Serial Communication Interface (SCI) on the
DSP56307 communicates with the host computer via an RS232 serial communication (COM) port.
Finally, the RTXC RTOS, version 3.2d, provides multi-tasking support, MIPS usage information, and
snapshot views of the RTXC state and objects (the latter two supplied via the CLI). Use of a multitasking
operating system makes it easier to integrate the various software modules (tasks, ISRs, and drivers) and
allows reuse of the code.
7 Software Description
This section addresses the following topics:
• Data Input and Output (I/O)
• Voice coders
• User interface
• RTOS
First, we created a stand-alone system for data I/O using the ESSI and multichannel vocoder board, another
system that uses the SASL voice coders (using the analog-to-digital (A/D) codec on the DSP56307EVM),
and a third system that implements the user interface. We developed these systems without use of an RTOS
and then incorporated RTXC. Use of an RTOS made it much easier to integrate the final system. When we
used RTXC, each stand-alone system consisted mainly of one or more interrupt service routines (ISRs) and
one or more tasks.
The integration work mostly involved creating an additional task to “dispatch” the appropriate task based
on the operating mode chosen by the user and selecting the appropriate task priorities. Also, the RTXC
mechanisms for task synchronization and communication were incorporated as needed.
7.1 Reentrant Requirement
At this point, the concepts of re-entrancy and the requirements of multichannel systems must be
understood. A re-entrant computer program or routine is written so that multiple users/tasks can share the
same copy of the code in memory. Re-entrant code is commonly required in RTOSs and in application
programs shared in multi-user multi-tasking systems. A programmer writes a re-entrant program by
making sure that no instructions modify the contents of variable values in other functions within the
program. Each time the program/processor is entered for a user/task, a data area is obtained in which to
keep all the variable values for that instance of the user/task. When the process is interrupted to give
another user/task a turn to use the program/processor, information about the data area associated with that
user is saved. When the interrupted user/task of the program recovers control of the program/processor,
context information in the saved data area is recovered and the module is reentered and processing
continues. If a routine follows these rules, it is re-entrant:
• All local data is allocated on the stack.
• The routine does not use any global variables.
• The routine can be interrupted at any time without affecting the execution of the routine.
• The routine calls only other re-entrant routines. It does not call non-re-entrant routines (for
example, standard I/O, malloc, free, and so on).
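The following sketch contrasts a non-re-entrant routine with a re-entrant version; the function names are hypothetical, and the example simply applies the rules listed above.

/* Non-re-entrant: the static accumulator is shared by every caller, so a task
 * that interrupts another task inside this routine corrupts its state. */
static int running_total;                     /* static state breaks the rules */

int add_sample_shared(int sample)
{
    running_total += sample;
    return running_total;
}

/* Re-entrant: all state is on the stack or in caller-owned storage, so every
 * task or channel can safely share the same copy of the code. */
int add_sample_reentrant(int *callers_total, int sample)
{
    int new_total = *callers_total + sample;  /* local data on the stack */
    *callers_total = new_total;               /* state owned by the caller */
    return new_total;
}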
7.2 Data Input/Output
In DSP applications, input/output (I/O) handling is critical to maintaining real-time data processing.
Therefore, DSPs such as those in the DSP56300 family provide multiple I/O-handling peripherals and
extremely fast interrupt servicing. For example, the DSP56307 has two ESSIs, the Serial Communications
Interface (SCI), and the 8-bit host interface (HI08) for efficient data I/O implementation. These features
provide optimal I/O solutions for various applications.
The ESSI provides a full duplex serial port for serial communications with a variety of serial devices,
including industry-standard analog-to-digital codecs, other DSPs, microprocessors, and peripherals. The
SCI provides a full duplex port for serial communications with other DSPs, microprocessors, or
peripherals such as modems. The HI08 is a byte wide, full duplex, double-buffered parallel port that can
connect directly to the data bus of a host processor. For our application, we use the ESSI for obtaining
audio I/O from the multichannel codec board described in Section 6.1, Target Hardware, on page 10 and
the SCI for communicating with the host computer using the command line interface. The HI08 is not
required.
In the DSP56300 family, the I/O peripherals can trigger interrupts to conveniently handle processing. The
DSP56300 processors have two types of interrupts:
• Fast interrupts are two instructions long and require no overhead for jumping to the interrupt or
returning to the normal program flow. Furthermore, the fast interrupt instructions always complete
without interruption. Fast interrupts are an excellent method for moving data from the I/O
peripherals.
• Long interrupts are not limited to a certain number of instructions and can be interrupted by higher
priority interrupts. However, long interrupts need to save and restore the program counter and the
status register and to update the stack pointer to return to the normal program flow. Long interrupts
are used when more intricate data processing is required.
The ESSI software for the audio I/O handling focuses on properly initializing the hardware and efficiently
using the ISRs and data structures. In our system, a double buffering technique implements efficient data
transfers from the codecs to the DSP56307 core for processing of the data I/O in the Delay and IS96a
modes. In the double buffering technique, two buffers and two pointers are used instead of just a single
buffer for data input or data output. A single buffer that receives data requires the input data stream to
"wait" while the current data in the buffer is processed. When two buffers (and two pointers) are used, one
buffer can be filled while the other is processed. When the receiving buffer is full and the other is
processed, the pointers are swapped so that the processing can occur with the new data and the incoming
data stream can fill the alternate buffer. Swapping pointers removes the need to copy the data from one
buffer to the other. The pointer is then passed to the data input task or the processing task. A similar
situation occurs for output data (the resulting data from the processing) and data transmitted via the ESSI.
7.2.1 Audio Codec Initialization
Before communications can begin between the audio codec board and the DSP56307, the codec must be
initialized. There are two main steps to initialization:
1. Send control data.
2. Once the control information is sent, place the codec into data mode. Transmission may then begin.
General purpose I/O (GPIO) pins initialize the audio codec in control mode. One line sets the mode of the
audio codec: control or data. The second line is tied to the codec reset line. These are the only control lines
required to initialize the CS4215 codec. When the codec comes out of reset in control mode, it waits to
receive control information for operation. The DSP56307 controls the interface and transmits the control
words to the codec. Once the codec receives all of its control words, the codec is placed into data mode.
Because the codecs on the multichannel board are daisy-chained together, one codec on the multichannel
board is initialized as the interface master. When the codec is placed in data mode, the DSP56307 is no
longer master of transmission between the devices. The master codec generates the clock for the ESSI.
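A minimal sketch of this two-step initialization is shown below. The GPIO macros, the control-word table, and the essi_send_control_word() helper are hypothetical placeholders for the board-specific code; only the overall sequence (reset into control mode, send the control words, switch to data mode) follows the description above.

/* Hypothetical outline of the CS4215 initialization sequence described above. */
#define DC_LINE     0              /* GPIO line selecting control or data mode */
#define RESET_LINE  1              /* GPIO line tied to the codec reset pin */

extern void GPIO_SET(int line);                     /* placeholder GPIO helpers */
extern void GPIO_CLR(int line);
extern void essi_send_control_word(unsigned int w); /* placeholder ESSI helper */

static const unsigned int codec_control_words[] = {
    0   /* placeholder: actual CS4215 control settings go here */
};

static void init_audio_codec(void)
{
    int i;
    int n = sizeof codec_control_words / sizeof codec_control_words[0];

    GPIO_CLR(DC_LINE);             /* 1. select control mode ...               */
    GPIO_CLR(RESET_LINE);          /*    hold the codec in reset ...           */
    GPIO_SET(RESET_LINE);          /*    ... then release it                   */

    for (i = 0; i < n; i++)        /*    send the control words                */
        essi_send_control_word(codec_control_words[i]);

    GPIO_SET(DC_LINE);             /* 2. switch to data mode; the master codec */
                                   /*    now supplies the clock                */
}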
7.2.2 Synchronous Interface
ESSI0 transfers the data to and from the codecs. The ESSI0 interface has three main modes for handling
synchronous data transfers: normal, on-demand, and network. We use the network mode, which allows
multiple time slots of data to be transferred to the DSP within a given time frame. This mode is extremely
useful when an application requires the transfer of multiple independent channels of data through the same
interface. Network mode allows the ESSI to handle up to 32 time slots. This application uses two codecs,
which require a total of eight time slots per receive or transmit frame: four for audio data and four for
control data (see Figure 9).
Figure 9. Picture of ESSI Network Mode Usage in This Application
The first four slots contain data from/to the first codec (two for data for left and right channels and two for
control information for left and right channels). The last four slots contain the data from/to the second
codec. Each audio channel is sampled at 8 kHz. In total, the ESSI0 handles 128 bits every 125 µs (16
bits per time slot for eight time slots at the 8 kHz frame rate). The data received through ESSI0 is placed into a
16-element receive buffer, and the data transmitted is taken from a separate 16-element buffer.
A private (non-RTXC) ESSI receive ISR loads data/control information for each time-slot into the receive
buffer. Figure 10 shows the mechanism to receive data using the ESSI. The ESSI receive ISR simply reads
the audio data from the ESSI receive register and writes it into the 16-element receive buffer called
RX_buff.
Figure 10. ESSI Data Receive Mechanism
Similarly, Figure 11 shows the mechanism used to transmit data using the ESSI. The ESSI private transmit
ISR simply writes the audio data from the 16-element transmit buffer called TX_buff to the ESSI transmit
register.
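The two ISRs amount to little more than a copy between the ESSI data registers and the 16-element buffers, as sketched below. ESSI_RX, ESSI_TX, and the slot counters are illustrative placeholders; only the RX_buff and TX_buff names and the eight-slot frame come from this note.

/* Illustrative skeletons of the ESSI receive and transmit ISRs. */
#define ESSI_FRAME_SLOTS 8              /* eight 16-bit time slots per frame */

extern volatile int ESSI_RX;            /* ESSI receive data register (placeholder) */
extern volatile int ESSI_TX;            /* ESSI transmit data register (placeholder) */

int RX_buff[2 * ESSI_FRAME_SLOTS];      /* 16-element receive buffer */
int TX_buff[2 * ESSI_FRAME_SLOTS];      /* 16-element transmit buffer */

static int rx_slot, tx_slot;

void essi_rx_isr(void)                  /* 1. ESSI Rx event */
{
    RX_buff[rx_slot] = ESSI_RX;         /* 2. read audio data, 3. write to buffer */
    rx_slot = (rx_slot + 1) % (2 * ESSI_FRAME_SLOTS);
}

void essi_tx_isr(void)                  /* 1. ESSI Tx event */
{
    ESSI_TX = TX_buff[tx_slot];         /* 2. read from buffer, 3. write data out */
    tx_slot = (tx_slot + 1) % (2 * ESSI_FRAME_SLOTS);
}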
[Figure 9 shows the ESSI frame: a frame sync followed by eight time slots carrying audio data (Dn) and control data (Cn) for channels 1 through 4. Figure 10 shows the receive mechanism: an ESSI Rx event triggers the ESSI Rx ISR, which reads the audio data input and writes it to the receive buffer RX_buff.]
Figure 11. ESSI Data Transmit Mechanism
7.3 Voice Coders
This section discusses two key features of the voice coder: the double buffering mechanism for rapidly
moving data and the wrappers for integrating the voice coder software into a system.
7.3.1 Double Buffering
Our demonstration code contains a double-buffer mechanism, also known as "ping-pong buffers." Two
separate buffer pairs are defined for each channel. Pointers and flags for the various buffers are maintained
in the CHANNEL_CONFIGURATION C structure described in Section 6.2, Application Software, on page
12. Each channel uses four frame buffers so that one buffer can receive data while another buffer is
processed. The third frame buffer stores the processed output, and the fourth contains the data being
transmitted. When the receive frame buffer is full, the base pointers to the receive (*rx_fbuff_base)
and input (*in_fbuff_base) frame buffers are swapped. Similarly, when the transmit frame buffer is
empty, the base pointers to the output (*out_fbuff_base) and transmit (*tx_fbuff_base) frame
buffers are swapped. The assumption is that the processing for each channel data stream completes by the
time the buffers need to be swapped.
Associated with each double-buffer structure is a flag that indicates whether the receive buffer is full
(rx_fbuff_full) and another to indicate whether the transmit buffer is empty (tx_fbuff_empty).
The rx_fbuff_ptr pointer in the data structure is an index to indicate the location in the receive buffer
to be loaded with the next data input. The tx_fbuff_ptr pointer in the data structure is an index to
indicate the location in the transmit buffer of the data to be transmitted next. When the frame buffer base
pointers are swapped (that is, when the receive buffer is full and the transmit buffer is empty), the
rx_fbuff_ptr and tx_fbuff_ptr index pointers are initialized to zero. Figure 12
shows the double buffer mechanism.
[Figure 11 diagram: an ESSI Tx event triggers the ESSI Tx ISR, which reads the data from the transmit buffer TX_buff and writes the audio data output to the ESSI.]
Figure 12. Double Buffer Diagram
[Figure 12 shows four frame buffers, P1 through P4. The receive and input frame buffers both use the P1 and P2 buffers, but in a mutually exclusive way: *rx_fbuff_base switches to P2 at the same time *in_fbuff_base switches to P1. Likewise, the output and transmit frame buffers both use P3 and P4: *out_fbuff_base switches to P4 at the same time *tx_fbuff_base switches to P3. Processing reads from *in_fbuff_base and writes to *out_fbuff_base.]
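A sketch of the pointer swap is shown below. It uses the CHANNEL_CONFIGURATION fields defined in Section 6.2; the function names are illustrative rather than the actual application code.

/* Illustrative swap functions for the double-buffer mechanism. */
static void swap_receive_buffers(struct CHANNEL_CONFIGURATION *ch)
{
    _fract_Y *tmp = ch->rx_fbuff_base;        /* P1 <-> P2 */

    ch->rx_fbuff_base = ch->in_fbuff_base;    /* receive now fills the old input buffer */
    ch->in_fbuff_base = tmp;                  /* processing reads the buffer just filled */
    ch->rx_fbuff_ptr  = 0;                    /* restart filling at the top of the frame */
    ch->rx_fbuff_full = 0;                    /* clear the "frame ready" flag */
}

static void swap_transmit_buffers(struct CHANNEL_CONFIGURATION *ch)
{
    _fract_Y *tmp = ch->out_fbuff_base;       /* P3 <-> P4 */

    ch->out_fbuff_base = ch->tx_fbuff_base;   /* processing refills the drained buffer */
    ch->tx_fbuff_base  = tmp;                 /* transmit drains the buffer just produced */
    ch->tx_fbuff_ptr   = 0;                   /* restart transmission at the top */
    ch->tx_fbuff_empty = 0;                   /* new data is pending for transmit */
}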
7.3.2 Wrappers
Voice coding software purchased from a third-party developer includes the software to implement a given
telecommunications standard on a specific DSP device. This software does not handle data input/output or
other system issues for a given application. Also, each software module is generally developed to handle a
single voice channel. Since most infrastructure applications handle multiple channels of voice processing,
the voice coder software must be invoked multiple times, once for each channel of voice processing. To
address these issues, wrappers are used to integrate the voice coder software into a system. The wrapper
handles several tasks:
1. Sets up any initialization for data storage and pointers required by the voice coder. Generally, voice
coder software is accompanied by documentation describing the information it requires to operate
properly (see Section 2.2). The wrapper sets up this information as needed before the voice coder is
invoked.
2. Handles data input and output to the vocoder modules.
3. Calls the voice coding routine when everything is ready to process.
SASL provides an Interface Control Document (ICD) with the voice coding libraries. This document
describes which parameters must be passed to the voice coder, the registers to contain the parameters, and the
form the parameters may need to take before the voice coder gets them. For IS-96-A, the parameter
passing is straightforward. Before the encoder initialization routine is called, the pointer to encode data in
X memory is placed into r3, and the pointer to encode data in Y memory is placed into r4. The encoder
takes a block of 160 input samples and converts it into the IS-96-A packets for transmission. r1 points to
the input buffer of 160 samples; r2 points to the output buffer. The r3 and r4 registers remain pointing to
the static memory required by the encoder. Also, x0 holds the minimum rate allowed by the voice coder,
and y0 holds the maximum rate allowed. Generally, the maximum rate is 8 kbps (y0 = 4) and the minimum
rate is 0.8 kbps (x0 = 1). The encoder is called using a jump to subroutine instruction to a routine specified
in the ICD.
For decoding, the parameter passing is similar. The r3 and r4 registers now point to the static memory for
the decoder. r1 points to the input buffer and r2 points to the decoder output buffer. The decoder has a
postfilter option, which is enabled by placing the value 1 in the y1 register.
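The sketch below outlines the shape of such a wrapper in C. The extern routine names are hypothetical stand-ins for the small assembly stubs that load r1-r4, x0, and y0 as the ICD requires and then jump to the SASL entry points; the state array sizes are placeholders, and none of these names is part of the SASL library itself.

/* Hypothetical wrapper around the SASL IS-96-A encoder. */
#define MIN_RATE 1                       /* 1/8 rate (0.8 kbps), x0 */
#define MAX_RATE 4                       /* full rate (8 kbps), y0 */

extern void is96a_encoder_init_stub(void *state_x, void *state_y);
extern void is96a_encode_stub(const _fract_Y *speech_in, _fract_Y *packet_out,
                              void *state_x, void *state_y,
                              int min_rate, int max_rate);

static int encoder_state_x[512];         /* sizes are placeholders; the ICD */
static int encoder_state_y[512];         /* specifies the real requirements */

void is96a_channel_init(void)
{
    /* 1. Set up the static data storage and pointers the encoder requires. */
    is96a_encoder_init_stub(encoder_state_x, encoder_state_y);
}

void is96a_encode_frame(struct CHANNEL_CONFIGURATION *ch, _fract_Y *packet_out)
{
    /* 2. Hand the 160-sample input frame and an output packet buffer over, and
       3. call the voice coding routine with the allowed rate range. */
    is96a_encode_stub(ch->in_fbuff_base, packet_out,
                      encoder_state_x, encoder_state_y,
                      MIN_RATE, MAX_RATE);
}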
7.4 User Interface
As noted earlier, each audio channel is processed in one of four user-selected modes: Off, Pass, Delay, and
IS96a. If the channel is Off, the output does not change. In Pass mode, each input is simply sent to the
ESSI output buffer for transmission. In Delay mode, the data is first buffered into a frame of 160 inputs;
when this input frame is filled it is simply copied to the 160-element output frame buffer for later
transmission. Finally, in IS96a mode the operation is similar to Delay mode except that, rather than being
copied from the input frame buffer to the output frame buffer, the data is encoded (compressed) and
immediately decoded (decompressed) by the IS-96-A voice coder.
The user interfaces with the target DSP56307EVM using a command-line interface (CLI) from a dumb
terminal (HyperTerminal) on a host computer (see Figure 13). The SCI peripheral on the DSP56307
connects to the COM port of the host computer via an RS-232 serial cable. Thus, you configure the
individual channels by typing characters that are decoded by the application's CLI task.
Figure 13. Command Line Interface
Figure 14 shows the mechanism for receiving user commands over the SCI peripheral. The SCI peripheral
signals an event when it receives a character, causing the SCI receive interrupt routine to execute. The SCI
receive ISR reads the character from the SCI receive register, writes it to a buffer, and signals a semaphore
that enables the SCI input driver task sci_idrv. When the SCI input driver task executes, it can read the
character from the buffer and enqueue the character in the RTXC input queue.
Figure 14. SCI Receiver
[Figure 14 diagram: an SCI Rx event triggers the SCI Rx ISR, which reads the SCI receive register, writes the character to a buffer, and signals a semaphore; the sci_idrv task then reads the character from the buffer and enqueues it in the RTXC input queue.]
Using the mechanism shown in Figure 14, the hardware-specific ISR is isolated from the
application-specific sci_idrv task. This makes it easier to reuse or migrate the application C code across
processors or applications. A similar mechanism transmits characters from the DSP56307 SCI to the host
as shown in Figure 15. As with the receive path, the SCI peripheral signals an event when a character is
ready to be transmitted, causing the SCI transmit ISR to execute. The SCI transmit ISR signals a
semaphore that enables the SCI output driver task sci_odrv. When the SCI output task executes, it
reads the output character from the RTXC output queue, which is then written to the SCI transmitter.
Figure 15. SCI Transmitter
[Figure 15 diagram: an SCI Tx event triggers the SCI Tx ISR, which signals a semaphore; the sci_odrv task then dequeues a character from the RTXC output queue and writes it to the SCI transmit register.]
7.5 RTXC
RTXC is a multitasking, prioritizable, preemptive operating system. RTXC is treated as a software library,
so the application tasks and ISRs access the RTXC services by making calls to the RTXC API. Knowledge
of which inputs produce which outputs is all that is needed. The general form of the RTXC API call is
KS_name ([arg1],[arg2], ... , [argN])
The character string KS_ identifies name as an RTXC kernel service. This prefix prevents name from
being mistakenly identified by the linker with some similarly named function in the run-time library of the
compiler. For example, the API call:
KS_wait(SSI_RISR)
blocks a task until a specified event occurs. The event is associated with the semaphore SSI_RISR,
which is signaled by the ESSI receive ISR. The current task thus synchronizes its execution with the
reception of a frame of data containing four audio samples, one for each of the four channels being
processed. If the ESSI ISR has already signaled the semaphore, no wait occurs and the task is not blocked.
Instead, the task resumes.
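As a rough sketch, a task synchronized to the ESSI in this way might look as follows. Only KS_wait() and the SSI_RISR semaphore name come from this application; the task body is illustrative, and the declarations of KS_wait() and SSI_RISR come from the RTXC and SYSgen-generated headers, which are not named here.

/* Illustrative task body synchronized to the ESSI receive ISR via RTXC. */
extern void process_received_frame(void);   /* hypothetical per-frame processing */

void essi_sync_task(void)
{
    for (;;)
    {
        /* Block until the ESSI receive ISR signals that a frame containing one
           sample for each of the four channels has been received. */
        KS_wait(SSI_RISR);

        process_received_frame();
    }
}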
Once the application tasks are written, the remaining C source code files necessary to correctly build the
application are automatically generated by SYSgen, a utility tool provided with RTXC for entering and
editing the system configuration for the application. SYSgen accepts the definition of the configuration
data through a series of interactive dialogs relative to each type of control or data structure (see Figure 16).
SYSgen uses this information to generate header files that are then included in any application code
module that uses an RTXC data element and is later compiled into the application.
Figure 16. SYSgen Semaphore and Task Definition Windows
7.5.1 RTXCbug
RTXCbug is the RTXC system-level debugging tool. It provides snapshots of RTXC internal data
structures and performs some limited task control. RTXCbug operates as a task and is usually set up as the
highest-priority task. Whenever RTXCbug runs, it freezes the rest of the system, thereby permitting
coherent snapshots of RTXC components. RTXCbug is not a replacement for other debugging tools but
assists you in tuning the performance of the RTXC environment or checking out problems within it.
RTXCbug uses the input and output ports of a user-defined console device. In the application discussed
here, the host computer interfaces with RTXC using the SCI input and output drivers. Commands are given
to RTXCbug via a HyperTerminal window on the host computer and transmitted to the DSP56307EVM via
an RS-232 interface and a UART connection. The RTXCbug output is also displayed on the host
computer’s hyperterminal window. RTXCbug is entered using two mechanisms:
1. The user enters an exclamation mark (!) on the console input.
2. A task calls a special function within RTXCbug.
Once the system enters RTXCbug, type K in the main menu to invoke the command menu.
7.6 Integration
Once the individual tasks are defined and tested, the integration simply involves properly selecting task
priorities and RTXC objects to synchronize these tasks and let them communicate. We integrated the ESSI data I/O
software with the double buffer and voice coder software using an input dispatch task called i_dispatch
and an output dispatch task called o_dispatch, as shown in Figure 17.
The i_dispatch task is signaled via the SSI_RISR semaphore from the ESSI last slot ISR, indicating
that all the data for a given ESSI frame has been received. This task then reads the
CHANNEL_CONFIGURATION C structure to determine the operating mode for a given channel, and
based on this information, it determines whether to ignore the input (Off mode), copy the received data
from the receive buffer RX_buff to the transmit buffer TX_buff (Pass mode), or call the double buffer
mechanism and process the input data accordingly. If the receive frame buffer is full, as indicated by the
corresponding flag in the CHANNEL_CONFIGURATION C structure, an RTXC message is sent to either
the Delay mode task (passbuff) or the IS96a mode task (is96a) indicating which channel to
process.
Figure 17. Input Dispatcher Task
The input dispatcher task performs the steps described for each of the four channels supported. The use of
a CHANNEL_CONFIGURATION C structure for each channel allows the current state of the channel to be
maintained independently. Thus each channel's data stream is processed independently. The same Delay
and IS96a tasks are used for each channel: the messages sent to these tasks contain a pointer to the
CHANNEL_CONFIGURATION C structure to use, and if multiple messages are sent, RTXC queues
them. Similarly, the output dispatcher task o_dispatch is signaled by the ESSI transmit last slot ISR.
However, this task simply writes output data to the transmit buffer (TX_buff) if the Delay or IS96a
mode is used for a channel. Also, the task calls the function that implements the double buffer mechanism
for the output and transmit frame buffers.
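For illustration, the per-channel logic of the input dispatcher might look like the sketch below. The mode codes, copy_slot_to_tx(), and store_in_double_buffer() are hypothetical; only the structure fields, the buffer names, and the passbuff/is96a task names come from this note.

/* Illustrative per-channel dispatch, using the CHANNEL_CONFIGURATION
 * structure from Section 6.2 and the mode codes from the earlier sketch. */
enum { MODE_OFF, MODE_PASS, MODE_DELAY, MODE_IS96A };

extern void copy_slot_to_tx(int slot);                               /* RX_buff -> TX_buff */
extern void store_in_double_buffer(struct CHANNEL_CONFIGURATION *ch);

static void dispatch_channel(struct CHANNEL_CONFIGURATION *ch)
{
    switch (ch->mode)
    {
    case MODE_OFF:
        break;                         /* ignore the received sample */

    case MODE_PASS:
        copy_slot_to_tx(ch->id);       /* copy the RX_buff slot to TX_buff */
        break;

    case MODE_DELAY:
    case MODE_IS96A:
        store_in_double_buffer(ch);    /* fill the receive frame buffer */
        if (ch->rx_fbuff_full)
        {
            /* Send an RTXC message naming this channel to the passbuff task
               (Delay mode) or the is96a task (IS96a mode). */
        }
        break;
    }
}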
[Figure 17 diagram: the ESSI last slot event triggers the ESSI Rx last slot ISR, which signals the ESSI Rx semaphore to the i_dispatch task; i_dispatch reads the data input from RX_buff, reads the channel mode from the Channel_Configuration structure, writes output to TX_buff or to the double buffers, and signals the passbuff or is96a processing task.]
The integration of the command line interface task cli_drv with the SCI character I/O mechanisms
described in Section 7.4 is shown in Figure 18. The CLI driver task cli_drv is signaled if a valid
character command is detected in the SCI input driver task (sci_idrv). The cli_drv task then
accesses the CHANNEL_CONFIGURATION C structure to display or change a channel's state information.
Messages are sent to the user from the CLI task by enqueueing characters in the RTXC output queue.
Additionally, a task that echoes characters (sci_echo) typed by the user is signaled when a character is
placed in the RTXC input queue.
Figure 18. Command Line Interface Task
[Figure 18 diagram: the sci_idrv task signals a semaphore to the cli_drv task when a valid character command arrives through the input queue; cli_drv accesses channel information in the Channel_Configuration structure and sends messages to the user by enqueueing characters in the output queue read by the sci_odrv task; the sci_echo task echoes characters placed in the input queue.]
We used the Tasking make utility (mk563.exe) to build the application. Integration at this level involved
proper compilation and linking of the RTXC library, the IS-96-A voice coder library, the application
object files, and the other system configuration files.
8 Conclusions
As communications systems continue to increase in complexity, equipment manufacturers rely more and
more upon third parties to develop standard software for a chosen DSP architecture, high-level languages
to easily port code between systems, and RTOSs to integrate various system tasks to guarantee task
handling. Remember that using an RTOS has the following trade-offs:
• It adds overhead, but this can be kept to a minimum by choosing a well-designed RTOS such as
RTXC and designing a system efficiently.
• It is not difficult to use. A different way of looking at things is necessary, but the main step in
converting existing code to use an RTOS is to divide the program into tasks that usually flow with
the function organization. A robust RTOS is absolutely necessary.
• It can be used in real-time DSP applications.
• It is a part of the trend towards software portability.
Motorola is committed to continued support of RTOS usage and other innovations in DSP technology that
can be used to decrease the development time required of equipment manufacturers.
9 References
1. DSP56300 Family Manual (DSP56300FM/D)
2. Real-Time Kernel User’s Manual: RTXC. Embedded System Products. Version 3.2. 1986 – 1985.
3. (MA039-002-00-00) C Cross-Compiler User’s Guide. DSP56xxx v2.3. TASKING, Inc., 1998.
4. (MA039-049-00-00) Crossview Pro Debugger User’s Guide. TASKING, Inc., 1999.
5. Standard Speech Coding Software Training Course given by Signals and Software Ltd., July 1998.