PIC Microcontrollers for Digital Filter Implementation

CHAPTER-2

Many devices can be used to implement digital filter hardware. Gone are the days when discrete components were used to build the multiply, accumulate and delay units. With the advent of VLSI technology, implementation of digital filters has become much easier: a single IC can provide all the required hardware support.



Digital filters can be implemented on different platforms: general-purpose processors (e.g. the 8086 and its successors), microcontrollers, DSP controllers, or specially designed digital signal processors such as the TMS320CXX series.




Microcontrollers can be used for low-frequency applications where the finest response and high speed are not required. DSP controllers can be utilized to raise the speed to a medium level, obtain a good filter response, and perform floating-point arithmetic operations. Specially designed DSP processors are optimized for very high operating speed and, since they are meant exclusively for DSP applications, provide software instructions that implement many DSP operations directly. As far as implementation of DSP theory is concerned, DSP processors are therefore well suited for all applications, but their cost is high.




ABOUT THE PIC MICROCONTROLLER




A microcontroller is a general-purpose device, but one meant to read data, perform limited calculations on that data, and control its environment based on these calculations. The prime use of a microcontroller is to control operations throughout the lifetime of the system.



The microcontroller uses a much more limited set of single- and double-byte instructions that are used to move code from internal memory to the ALU.



HARVARD ARCHITECTURE AND PIPELINING



The PIC16F877 family of microcontrollers uses what is called a Harvard architecture to achieve an exceptionally fast execution speed for a given clock rate. As shown in the figure, instructions are fetched from program memory over buses that are distinct from the buses used for accessing data memory, I/O ports, etc. Every instruction is coded as a single 14-bit word and fetched over a 14-bit-wide bus. Consequently, as instructions are fetched from successive program memory locations, a new instruction is fetched every cycle.






The CPU executes each instruction during the cycle following its fetch, pipelining instruction fetches and instruction execution to achieve the execution of one instruction every cycle. It can be seen that while each instruction requires two cycles (a fetch cycle followed by an execute cycle), the overlapping of the execute cycle of one instruction with the fetch cycle of the next instruction leads to the execution of a new instruction every cycle.
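The cycle count behind this claim can be illustrated with a small timing sketch. This is a hypothetical throughput model, not PIC firmware: with fetch and execute overlapped, N instructions finish in N + 1 cycles instead of 2N.

```python
def cycles_unpipelined(n_instructions: int) -> int:
    # Each instruction needs a fetch cycle plus an execute cycle,
    # performed strictly one after the other.
    return 2 * n_instructions

def cycles_pipelined(n_instructions: int) -> int:
    # The execute cycle of instruction k overlaps the fetch cycle of
    # instruction k+1, so after the first fetch cycle a new instruction
    # completes every cycle: 1 startup cycle + n execute cycles.
    return 1 + n_instructions if n_instructions > 0 else 0

print(cycles_unpipelined(10))  # 20 cycles without overlap
print(cycles_pipelined(10))    # 11 cycles with the two-stage pipeline
```

For a long instruction stream the pipelined cycle count approaches one cycle per instruction, which is the figure quoted above.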



FEATURES

The key core features of the PIC16F87XA microcontroller include:

HIGH PERFORMANCE RISC CPU

  • Only 35 single-word instructions to learn

  • All single-cycle instructions except for program branches, which are two-cycle

  • Operating speed: DC-20 MHz clock input, DC-200 ns instruction cycle

  • Up to 8K × 14 words of FLASH program memory, up to 368 × 8 bytes of data memory (RAM), and up to 256 × 8 bytes of EEPROM data memory

  • Pinout compatible to other 28-pin or 40/44-pin PIC16CXXX and PIC16FXXX microcontrollers



PERIPHERAL FEATURES



  • Timer0: 8-bit timer/counter with 8-bit prescaler

  • Timer1: 16-bit timer/counter with prescaler; can be incremented during SLEEP via an external crystal/clock

  • Timer2: 8-bit timer/counter with 8-bit period register, prescaler and postscaler

  • Two Capture, Compare, PWM modules:

- Capture is 16-bit, max. resolution is 12.5 ns

- Compare is 16-bit, max. resolution is 200 ns

- PWM max. resolution is 10-bit

  • Synchronous Serial Port (SSP) with SPI (Master mode) and I2C (Master/Slave)

  • Universal Synchronous Asynchronous Receiver Transmitter (USART/SCI) with 9-bit address detection

  • Parallel Slave Port (PSP), 8 bits wide, with external RD, WR and CS controls (40/44-pin only)

  • Brown-out detection circuitry for Brown-out Reset (BOR)



ANALOG FEATURES



  • 10-bit, up to 8-channel Analog-to-Digital Converter (A/D)

  • Brown-out Reset (BOR)

  • Analog Comparator module with:

- Two analog comparators

- Programmable on-chip voltage reference (VREF) module

- Programmable input multiplexing from device inputs and internal voltage reference

- Externally accessible comparator outputs



SPECIAL MICROCONTROLLER FEATURES



  • 100,000 erase/write cycle Enhanced FLASH program memory (typical)

  • 1,000,000 erase/write cycle Data EEPROM memory (typical)

  • Data EEPROM retention > 40 years

  • Self-reprogrammable under software control

  • In-Circuit Serial Programming (ICSP) via two pins

  • Single supply 5V In-Circuit Serial Programming

  • Watchdog Timer (WDT) with its own on-chip RC oscillator for reliable operation

  • Programmable code protection

  • Power saving SLEEP mode

  • Selectable oscillator options

  • In-Circuit Debug (ICD) via two pins





CMOS TECHNOLOGY

  • Low power, high speed FLASH / EEPROM technology

  • Fully static design

  • Wide operating voltage range (2.0V to 5.5V)

  • Commercial and Industrial temperature ranges

  • Low power consumption















Digital Filter Implementation Using MATLAB

MATLAB SOFTWARE



ABOUT MATLAB



The name MATLAB stands for "matrix laboratory". MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include:

  • Math and computation

  • Algorithm development

  • Modeling, simulation, and prototyping

  • Data analysis, exploration, and visualization

  • Scientific and engineering graphics

  • Application development, including graphical user interface building



MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar non-interactive language such as C or Fortran.



MATLAB features a family of application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.



THE MATLAB SYSTEM:



The MATLAB system consists of five main parts:



Development Environment: This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, and browsers for viewing help, the workspace, files, and the search path.



The MATLAB Mathematical Function Library: This is a vast collection of computational algorithms ranging from elementary functions like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.



The MATLAB Language: This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both "programming in the small" to rapidly create quick and dirty throwaway programs, and "programming in the large" to create complete large and complex application programs.



Handle Graphics: This is the MATLAB graphics system. It includes high-level commands for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level commands that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces on your MATLAB applications.



The MATLAB Application Program Interface (API): This is a library that allows you to write C and Fortran programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and for reading and writing MAT-files.



MATLAB WORKSPACE:

The MATLAB workspace consists of the set of variables (named arrays) built up during a MATLAB session and stored in memory. You add variables to the workspace by using functions, running M-files, and loading saved workspaces. For example, if you type



t = 0:pi/4:2*pi;

y = sin(t);



The workspace includes two variables, y and t, each having nine values.



ABOUT SIMULINK:



Simulink is a software package for modeling, simulating, and analyzing dynamical systems. It supports linear and nonlinear systems, modeled in continuous time, sampled time, or a hybrid of the two. Systems can also be multirate, i.e., have different parts that are sampled or updated at different rates.



For modeling, Simulink provides a Graphical User Interface (GUI) for building models as block diagrams, using click-and-drag mouse operations. With this interface, you can draw the models just as you would with pencil and paper. This is a far cry from previous simulation packages that required you to formulate differential equations and difference equations in a language or program. Simulink includes a comprehensive block library of sinks, sources, linear and nonlinear components, and connectors. You can also customize and create your own blocks.



Models are hierarchical, so you can build models using both top-down and bottom-up approaches. You can view the system at a high level, then double-click on blocks to go down through the levels to see increasing levels of model detail. This approach provides insight into how a model is organized and how its parts interact. After you define a model, you can simulate it, using a choice of integration methods, either from the Simulink menus or by entering commands in MATLAB's command window. The menus are particularly convenient for interactive work, while the command-line approach is very useful for running a batch of simulations (for example, if you are doing Monte Carlo simulations or want to sweep a parameter across a range of values). Using scopes and other display blocks, you can see the simulation results while the simulation is running. In addition, you can change parameters and immediately see what happens. The simulation results can be put in the MATLAB workspace for post processing and visualization.



Model analysis tools include linearization and trimming tools, which can be accessed from the MATLAB command line, plus the many tools in MATLAB and its application toolboxes. And because MATLAB and Simulink are integrated, you can simulate, analyze, and revise your models in either environment at any point



DESIGN AND STUDY OF FILTERS

DESIGN STEPS FOLLOWED



MATLAB offers a variety of toolboxes with which we can easily design the required digital filter, observe its phase and magnitude characteristics, construct a realization structure for the designed filter, and analyze the working of the filter.



To design a filter and analyze it, we followed the steps mentioned below:



  1. Using the “Filter Design & Analysis Tool” window we designed our required filter. To open this window, type “fdatool” in the Command Window and press “Enter”. The obtained filter coefficients (i.e. numerator and denominator coefficients) are noted.

  2. Using the “Filter Realization Wizard” window we constructed the filter structure by inputting the filter coefficients and selecting the appropriate form. To open this window, type “dspfwiz” in the Command Window and press the ‘Enter’ key. The window can also be opened from the ‘Launch Pad’: select DSP Blockset in the ‘Launch Pad’ and click on the ‘+’ mark next to it; from the drop-down list that opens, double-click on ‘Filter Realization Wizard’. The constructed structure appears as a subsystem model; a double click on the subsystem block opens the filter structure in a Simulink window, where it can be simulated.

  3. From the Simulink block library, a function generator, an oscilloscope and a MUX are connected to the filter structure. Using the Simulink debugger the structure is simulated and the results are observed on the oscilloscope.



IIR FILTER DESIGN ISSUES



We restrict our discussion to the design and analysis of IIR filters only, even though FIR filters provide linear phase throughout the frequency range. In the application of our project, phase-angle variation is of no importance; if phase-angle variation were also important, the discussion would automatically orient towards FIR filters.

Before going directly into the analysis of IIR filters, let us glance over Shannon’s sampling theorem:

“A signal containing a maximum frequency of f1 Hz may be completely represented by regularly spaced samples, provided the sampling rate fs is at least 2f1 samples per second.”

i.e. fs ≥ 2f1, where 2f1 is the Nyquist sampling rate.

If the signal is sampled at less than the 2f1 rate, aliasing occurs: the signal is then represented with a distortion that depends on the degree of aliasing. To avoid such distortion, an antialiasing filter is used: a low-pass filter with cutoff frequency at f1 (i.e. fs/2).

For the above reason, designing any digital filter at the higher-frequency side becomes difficult owing to the higher sampling rate and its generation. Hence we restrict ourselves to the lower-frequency side. These issues are elaborated further in the respective filter design studies to come.
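The aliasing described above is easy to demonstrate numerically. The sketch below uses Python with NumPy purely as an illustration (the report's own experiments use MATLAB): a 70 kHz sinusoid sampled at fs = 100 kHz, below its Nyquist rate of 140 kHz, produces exactly the samples of a 30 kHz alias.

```python
import numpy as np

fs = 100e3                      # sampling rate: 100 kHz
n = np.arange(64)               # sample indices
t = n / fs                      # sample instants

# A 70 kHz sine sampled below its Nyquist rate (2 * 70 kHz = 140 kHz)...
undersampled = np.sin(2 * np.pi * 70e3 * t)
# ...yields exactly the samples of its 30 kHz alias (fs - 70 kHz), with
# inverted sign, because sin(2*pi*0.7*n) = -sin(2*pi*0.3*n) for integer n.
alias = -np.sin(2 * np.pi * 30e3 * t)

print(np.allclose(undersampled, alias))  # True
```

Once the samples are taken, no processing can distinguish the two frequencies, which is why the antialiasing filter must act before sampling.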



IIR Low pass And High pass Filter Design Issues



The ideal response of low pass and high pass filters is as shown in Figure 1



Practically, such a sharp roll-off is not achievable. Using MATLAB we can easily design these filters and simulate them; even at higher frequencies (like 40 kHz) Simulink works satisfactorily, and the filter nicely exhibits its passband and stopband action.



In most applications, lowpass/highpass filters are used at lower frequencies with increased order, so that a sharp roll-off is achieved.



Figure 1



IIR Bandpass Filter Design And Its Implementation



Ideally, a bandpass filter should have a perfect passband, as shown in Figure 2.



But practically such a sharp cutoff at the passband edges is not possible. Therefore we aim for a response with as sharp a roll-off as possible, so that “channel selection” is performed with minimal noise. Chebyshev and elliptic filters provide very good responses in this regard. Hence we select the Chebyshev type of filter for our study and analysis, as it provides satisfactory bandpass characteristics.





Figure 2





Chebyshev Type-II Filter Design



Filter parameters are as shown below:

Order = 2

Sampling frequency Fs = 100 kHz

Fstop1 = 39 kHz

Fstop2 = 41 kHz

Astop = 60 dB



Input the above parameters in the respective places of the “Filter Design & Analysis Tool”. After completing this, click on the ‘Design Filter’ button; this creates the required filter. To observe a response (step, frequency or impulse), click on the respective toolbar icon. The toolbar also has icons to observe the pole-zero plot, group delay, filter coefficients, etc. The obtained filter coefficients are as follows:



Numerator (b)            Denominator (a)

 0.000062910740701       1.000000000000000

 0.000000000000000       1.621131129004091

-0.000062910740701       0.999874178518599



For the above parameters, the magnitude response can be observed by clicking on the corresponding toolbar icon. The corresponding frequency response is shown below in Figure 3.





Figure 3
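This design can also be reproduced programmatically. The sketch below uses Python with SciPy rather than the chapter's MATLAB tools (an illustrative assumption; it assumes SciPy 1.2+ for the `fs` argument, and that `cheby2` takes stopband edges and attenuation with the same conventions as the MATLAB tool). It locates the passband peak, which, as discussed later in this chapter, falls near but not exactly on 40 kHz.

```python
import numpy as np
from scipy import signal

fs = 100e3                                  # sampling frequency, Hz
# N = 1 with two band edges gives an order-2 (2N) bandpass; 60 dB is the
# stopband attenuation, 39 kHz and 41 kHz the stopband edge frequencies.
b, a = signal.cheby2(1, 60, [39e3, 41e3], btype='bandpass', fs=fs)

# Evaluate the magnitude response on a fine frequency grid.
w, h = signal.freqz(b, a, worN=200001, fs=fs)
mag = np.abs(h)

peak_hz = w[np.argmax(mag)]
print(round(peak_hz))                  # passband peak lies close to 40 kHz
print(mag[np.searchsorted(w, 20e3)])   # >= 60 dB down (<= 1e-3) at 20 kHz
```

The 20 kHz point sits deep in the stopband, so its gain respects the 60 dB attenuation specification, while the passband itself is extremely narrow for this low order.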



For the same filter parameters as above, a MATLAB program is written in an M-file to create the Chebyshev type-II bandpass filter.



The program is as follows:



File name: cheby.m

clc;

clear all;

close all;



As=60; %stopband attenuation in dB

wp1=3.8e+004; %lower stopband edge frequency (Hz)

wp2=4.2e+004; %upper stopband edge frequency (Hz)

n=1; %order (the bandpass filter has order 2n)

fs=100000; %sampling frequency (Hz)

wn=[wp1 wp2]/(fs/2); %band edges normalized by the Nyquist frequency



[b,a] = cheby2(n,As,wn); %compute filter coefficients

[h,f]=freqz(b,a,fs/2); %compute frequency response (f in rad/sample)



mag=20*log10(abs(h)); %get magnitude response

subplot (2,1,1);

plot(f*(fs/2)/pi, mag);grid; %plot response with grid lines



ang=angle(h); %get phase response

subplot(2,1,2);

plot(f*(fs/2)/pi,ang);grid; %plot response with grid lines



Run this program either by pressing the F5 key or by selecting ‘Run’ from the Debug menu. The obtained magnitude and phase responses are shown below in Figure 4.





Figure 4



For both of the above filter designs, the corresponding Direct Form II filter realization structure is shown in Figure 5. It can be created using the “Filter Realization Wizard” by inputting the filter coefficients.



From the frequency response curves (Figure 4) of the above filter we can analyze the filter.



Thus the Chebyshev type-II filter designed with n = 1 (i.e. order 2n = 2) and a 2 kHz passband (±1 kHz around 40 kHz) has a very sharp (pulse-like) passband that is not exactly centered at 40 kHz. Owing to this shift of the passband towards the left or right in the frequency domain, the center frequency may get attenuated, or noise signals may creep through the filter, resulting in a distorted output. Another possibility is that the frequency of the input signal itself varies from its central value, in which case the filter may pass unwanted signals and attenuate the original signal, creating a noisy output. This may be due to inconsistency of the function generator; we cannot say that even a crystal oscillator produces an exact 40 kHz signal, as it may vary a little (±100 Hz) with temperature, pressure, applied voltage, etc.



Figure 5

Therefore we need a filter with a wide enough passband that even if the center frequency drifts from its nominal value, the signal is reproduced faithfully at the output while all other unwanted signal frequencies are completely attenuated.



We can try to meet these requirements by changing filter parameters such as the order, passband edge frequencies or stopband attenuation. But it is found that with the order kept at n = 1, increasing the passband range and decreasing the stopband attenuation do not yield a satisfactory result; instead the passband itself just gets attenuated.



Therefore the only option left to improve the filter performance is to increase the filter order, and this is what is done in most practical cases. We adjust the passband frequencies so that a passband of around 500 to 800 Hz with minimum attenuation is obtained.



Using the “Filter Design & Analysis Tool” we can do this, and by noting down the filter coefficients we can realize the filter structure.



CHEBYSHEV TYPE-II FILTER DESIGN WITH INCREASED ORDER



Filter parameters: n = 2 (filter order 2n = 4)

Fs = 100000 Hz

Fstop1 = 38000 Hz

Fstop2 = 41800 Hz

Astop = 40 dB

Let the file name given to this filter be ‘filter40.fda’



Figure 6

The obtained magnitude response is shown in Figure 6, and the corresponding filter coefficients are as follows:



Numerator (b)                   Denominator (a)

b(0) = 0.010045097978992        a(0) = 1.000000000000000

b(1) = 0.031677960303578        a(1) = 3.205603560877597

b(2) = 0.044659670697279        a(2) = 4.521600439016961

b(3) = 0.031677960303578        a(3) = 3.129988499837994

b(4) = 0.010045097978992        a(4) = 0.953386226509250





EFFECT OF TRUNCATION OF COEFFICIENT ON FILTER RESPONSE



Digital signal processing algorithms are realized either with special-purpose digital hardware or as programs for a general-purpose digital computer. In both cases the numbers and coefficients are stored in finite-length registers. Therefore, coefficients and numbers must be quantised, by truncation or rounding, before they can be stored.



The following errors arise due to quantisation of numbers:

  1. Input quantisation error.

  2. Product quantisation error.

  3. Coefficient quantisation error.



  1. The conversion of a continuous-time input signal into digital values produces an error known as input quantisation error. It arises because the input signal is represented by a fixed number of digits in the A/D conversion process.

  2. Product quantisation errors arise at the output of a multiplier. Multiplication of a b-bit number with a b-bit coefficient results in a product having 2b bits. Since a b-bit register is used, the multiplier output must be rounded or truncated to b bits, which produces an error.

  3. The filter coefficients are computed to infinite precision in theory. When they are quantised, the frequency response of the resulting filter may differ from the desired response, and sometimes the filter may fail to meet the desired specifications. If the poles of the desired filter are close to the unit circle, those of the filter with quantised coefficients may lie just outside the unit circle, leading to instability.
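Point 3 can be checked numerically for the order-2 bandpass designed earlier in this chapter. In the sketch below (Python with NumPy, illustrative only), the exact denominator keeps its poles just inside the unit circle, while rounding the coefficients to one decimal place pushes the pole radius out to 1, i.e. to the edge of instability.

```python
import numpy as np

# Denominator of the order-2 Chebyshev type-II bandpass from this chapter.
a_exact = [1.0, 1.621131129004091, 0.999874178518599]
a_quant = [round(c, 1) for c in a_exact]   # coefficients kept to 1 decimal

def pole_radius(a):
    # Largest pole magnitude; the filter is stable only if this is < 1.
    return max(abs(np.roots(a)))

print(pole_radius(a_exact))  # just inside the unit circle: stable
print(pole_radius(a_quant))  # on the unit circle: marginally stable
```

Because the exact poles sit so close to the unit circle, even a small coefficient perturbation is enough to destroy the stability margin.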



The other errors arising from quantisation are round-off noise and limit-cycle oscillations.



It can be understood from the above points that the quantisation error due to the A/D conversion process is difficult to reduce below a certain limit in any processor, however sophisticated. Using a higher sampling rate we can minimize this error.



Using processors with longer word-length registers can minimize product quantisation errors: as the operating registers become longer, the error becomes smaller. If floating-point arithmetic is supported by the processor, this error can be eliminated to a very large extent.

Now let us analyze the filter named filter40.fda for different truncated coefficient values.



In this file, the coefficients obtained are 64-bit values (i.e. about 16 significant decimal digits excluding the decimal point). Notice that these are the coefficients obtained by designing the filter in the ‘Filter Design & Analysis Tool’.



To find out the effect of truncating the coefficient data, we use the M-file program ‘cheby.m’, feeding the filter coefficients directly to the ‘freqz’ command as shown below:



clc;

clear all;

close all;

b=[0.010045097978992 0.031677960303578 0.044659670697279 0.031677960303578 0.010045097978992]; %numerator coefficients of filter40.fda

a=[1.000000000000000 3.205603560877597 4.521600439016961 3.129988499837994 0.953386226509250]; %denominator coefficients of filter40.fda

fs=100000; %sampling frequency (Hz)

[h,f]=freqz(b,a,fs/2);



If you omit the semicolons at the end of the lines where the numerator and denominator coefficients are written and run the program, you can see that in the Command Window these coefficients are displayed rounded to the fourth decimal place; the software itself does this truncation of the displayed data automatically. The frequency response obtained with data truncated to four decimal places is the same as that obtained with the non-truncated data, with hardly noticeable differences.



Rounding the numerator and denominator coefficients to the third decimal place gives the response of Figure 7 below:



Numerator (b)    Denominator (a)

0.010            1.000

0.032            3.206

0.045            4.522

0.032            3.130

0.010            0.953



Figure 7





Rounding the numerator and denominator coefficients to the second decimal place causes heavy variation in the passband attenuation, as shown in Figure 8.





Numerator (b)    Denominator (a)

0.01             1.00

0.03             3.21

0.04             4.52

0.03             3.13

0.01             0.95



Figure 8





Rounding the filter coefficients to the first decimal place changes the frequency characteristics of the filter abruptly.



From the above discussion it follows that truncating the filter coefficients to the fourth decimal place still yields acceptable frequency characteristics, and the filter can hence be implemented in hardware with that precision.
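The trend described above can be reproduced numerically. The sketch below (Python with NumPy/SciPy, illustrative only) evaluates the frequency response of the order-4 filter with full-precision coefficients and with coefficients rounded to 3 and 2 decimal places, and shows the response deviation growing as the precision drops.

```python
import numpy as np
from scipy import signal

# Order-4 Chebyshev type-II bandpass coefficients from filter40.fda.
b = [0.010045097978992, 0.031677960303578, 0.044659670697279,
     0.031677960303578, 0.010045097978992]
a = [1.000000000000000, 3.205603560877597, 4.521600439016961,
     3.129988499837994, 0.953386226509250]

fs = 100e3
w, h_full = signal.freqz(b, a, worN=4096, fs=fs)

def deviation(decimals):
    # Response of the filter with coefficients rounded to `decimals`
    # places, compared against the full-precision response.
    br = np.round(b, decimals)
    ar = np.round(a, decimals)
    _, h = signal.freqz(br, ar, worN=4096, fs=fs)
    return np.max(np.abs(h - h_full))

print(deviation(3) < deviation(2))  # coarser rounding deviates more: True
```

This mirrors the observations above: three decimal places barely disturb the response, two decimal places distort the passband, and one decimal place changes the characteristics abruptly.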



The filter structure for the above filter is as shown below in Figure 9.



Figure 9

SIMULATING THE FILTER STRUCTURE



To simulate the above filter structure, it has to be modified a little bit.



First open the “Simulink Library Browser”. Select and drag the ‘Sine Wave’ block from the Sources library to the window where the structure was created. Similarly, a ‘Scope’ is dragged from the ‘Sinks’ library and a ‘Mux’ from the ‘Signals & Systems’ library. Substitute the ‘input’ with the ‘Sine Wave’ block, and the output with the ‘Mux’ and ‘Scope’ blocks. Double-clicking on each block opens its properties; for example, double-click on the ‘Sine Wave’ block, and a window opens in which we can set the frequency of the signal, the signal voltage, the sampling time, etc. The ‘Mux’ and ‘Scope’ can be configured in the same way.



After connecting all these blocks, click on the ‘Start’ icon on the toolbar to begin the simulation. There are also options to pause and stop the simulation, and the simulation status can be seen in the ‘Simulink Debugger’ window. Double-click on the ‘Scope’ block: this opens the oscilloscope, where we can observe the input and output waveforms.



The study and analysis of the simulation of the above filter structure, as well as many other filter structures (Chebyshev type-I and type-II, Butterworth lowpass/highpass), were carried out successfully.

One thing that is particularly noticeable is that simulation of a filter structure at higher frequencies takes more time than simulation at lower frequencies. The effect of truncation of the coefficients can also be observed on the ‘Scope’, and it agrees with the study made in the earlier sections.




Digital Filters

BRIEF THEORY OF DIGITAL FILTERS

The input and output signals of a digital filter are related through the convolution summation, which is defined as



y_k = Σ_{i=0}^{∞} g_i x_{k−i}

where y_k is the filter output sequence and g_i is the filter impulse response sequence.

Applying the standard z-transform, X(z) = Σ_{k=0}^{∞} x_k z^{−k}, to the above equation, we obtain



Y(z) = Σ_{k=0}^{∞} [g_0 x_k + g_1 x_{k−1} + g_2 x_{k−2} + …] z^{−k}

     = g_0 Σ_{k=0}^{∞} x_k z^{−k} + g_1 Σ_{k=0}^{∞} x_{k−1} z^{−k} + g_2 Σ_{k=0}^{∞} x_{k−2} z^{−k} + …

     = g_0 X(z) + g_1 X(z) z^{−1} + g_2 X(z) z^{−2} + …

     = [g_0 + g_1 z^{−1} + g_2 z^{−2} + …] X(z)



Therefore, Y(z) = G(z) X(z), where G(z) = g_0 + g_1 z^{−1} + g_2 z^{−2} + …



Or G(z) = Y(z)/X(z) is the transfer function of the digital filter.





A digital filter transfer function is sometimes derived by z-transforming the transfer function of a known analogue filter G(s), that is, G(z) = Z[G(s)].



In general,

G(z) = (a_0 + a_1 z^{−1} + a_2 z^{−2} + … + a_p z^{−p}) / (1 + b_1 z^{−1} + b_2 z^{−2} + … + b_q z^{−q}) = Y(z)/X(z)



where a_i (0 ≤ i ≤ p) and b_j (1 ≤ j ≤ q) are the digital filter coefficients.



It follows that

X(z)[a_0 + a_1 z^{−1} + a_2 z^{−2} + … + a_p z^{−p}] = Y(z)[1 + b_1 z^{−1} + b_2 z^{−2} + … + b_q z^{−q}]

i.e.

a_0 X(z) + a_1 X(z) z^{−1} + a_2 X(z) z^{−2} + … + a_p X(z) z^{−p} = Y(z) + b_1 Y(z) z^{−1} + b_2 Y(z) z^{−2} + … + b_q Y(z) z^{−q}



But z^{−k} corresponds to a delay equal to k sampling periods; consequently the above equation may be written in linear difference equation form:



a_0 x_k + a_1 x_{k−1} + a_2 x_{k−2} + … + a_p x_{k−p} = y_k + b_1 y_{k−1} + b_2 y_{k−2} + … + b_q y_{k−q}



Therefore,

y_k = a_0 x_k + a_1 x_{k−1} + a_2 x_{k−2} + … + a_p x_{k−p} − b_1 y_{k−1} − b_2 y_{k−2} − … − b_q y_{k−q}



This equation is recursive, whereby the present output sample value y_k is computed using a scaled version of the present input sample x_k and scaled versions of previous input and output samples. This form corresponds to an Infinite Impulse Response (IIR) digital filter.
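The recursive difference equation above translates directly into code. The following sketch (Python with NumPy/SciPy, illustrative only) computes y_k by the recursion and checks it against scipy.signal.lfilter, using the order-2 bandpass coefficients from earlier in this report.

```python
import numpy as np
from scipy import signal

def iir_difference_equation(b, a, x):
    # y_k = b0*x_k + b1*x_{k-1} + ... - a1*y_{k-1} - a2*y_{k-2} - ...
    # (a is assumed normalized so that a[0] = 1).
    y = np.zeros(len(x))
    for k in range(len(x)):
        acc = sum(b[i] * x[k - i] for i in range(len(b)) if k - i >= 0)
        acc -= sum(a[j] * y[k - j] for j in range(1, len(a)) if k - j >= 0)
        y[k] = acc
    return y

# Order-2 Chebyshev type-II bandpass coefficients from this chapter.
b = [0.000062910740701, 0.0, -0.000062910740701]
a = [1.0, 1.621131129004091, 0.999874178518599]

x = np.random.default_rng(0).standard_normal(256)
print(np.allclose(iir_difference_equation(b, a, x),
                  signal.lfilter(b, a, x)))  # True
```

The hand-written loop and the library routine implement the same equation; library implementations merely reorganize the computation for speed and numerical behavior.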



The poles and zeros of G(z) may be determined by factoring the numerator and denominator polynomials of the transfer function to yield:





G(z) = f (z − z_1)(z − z_2)…(z − z_p) / [(z − p_1)(z − p_2)…(z − p_q)]



The multiplying factor f is a real constant, and z_i (1 ≤ i ≤ p) and p_j (1 ≤ j ≤ q) are the zeros and poles respectively. The poles and zeros are either real or exist as complex conjugate pairs.



In general, as the sampling period T → 0, the poles of G(z) migrate towards the (1 + j0) point in the z-plane, thereby making G(z) approach a marginally stable condition.

The issue of stability is eliminated when the b_j coefficients are zero-valued (G(z) then has no poles other than at the origin), corresponding to



G(z) = a_0 + a_1 z^{−1} + a_2 z^{−2} + … + a_p z^{−p} = Y(z)/X(z)





And it follows that



y_k = a_0 x_k + a_1 x_{k−1} + a_2 x_{k−2} + … + a_p x_{k−p}



This equation is non-recursive: the present output sample value is computed using a scaled version of the present input sample and scaled versions of previous input samples. This form corresponds to a Finite Impulse Response (FIR) digital filter, also commonly known as a transversal filter.
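The non-recursive equation is just a finite convolution, so an FIR filter can be applied with a plain convolution routine. A short sketch (Python with NumPy/SciPy, illustrative; the 5-tap averager is a made-up example, not a filter from this report):

```python
import numpy as np
from scipy import signal

a_taps = np.ones(5) / 5          # simple 5-tap moving-average FIR kernel
x = np.random.default_rng(1).standard_normal(100)

# y_k = a0*x_k + a1*x_{k-1} + ... + a4*x_{k-4}: a finite convolution.
y_conv = np.convolve(x, a_taps)[:len(x)]
# The same result from the general difference-equation machinery with
# denominator [1] (no recursive part).
y_lfilter = signal.lfilter(a_taps, [1.0], x)

print(np.allclose(y_conv, y_lfilter))  # True
```

With no recursive part, the output depends on only the last p + 1 inputs, which is exactly why the impulse response is finite.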



TYPES OF DIGITAL FILTERS

The most straightforward way to implement a digital filter is by convolving the input signal with the digital filter's impulse response. All possible linear filters can be made in this manner. When the impulse response is used in this way, filter designers give it a special name: the filter kernel.

There is also another way to make digital filters, called recursion. When a filter is implemented by convolution, each sample in the output is calculated by weighting the samples in the input and adding them together. Recursive filters are an extension of this, using previously calculated values from the output besides points from the input. Instead of using a filter kernel, recursive filters are defined by a set of recursion coefficients. The important point is that all linear filters have an impulse response, even if you don't use it to implement the filter. To find the impulse response of a recursive filter, simply feed in an impulse and see what comes out. The impulse responses of recursive filters are composed of sinusoids that decay exponentially in amplitude; in principle, this makes their impulse responses infinitely long. However, the amplitude eventually drops below the round-off noise of the system, and the remaining samples can be ignored. Because of this characteristic, recursive filters are also called Infinite Impulse Response or IIR filters. In comparison, filters carried out by convolution are called Finite Impulse Response or FIR filters.
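"Feed in an impulse and see what comes out" is easy to try. The sketch below (Python with SciPy, illustrative; the single-pole filter y_k = x_k + 0.5 y_{k−1} is a made-up example) shows the exponentially decaying, in-principle infinite impulse response of a recursive filter.

```python
import numpy as np
from scipy import signal

impulse = np.zeros(20)
impulse[0] = 1.0

# Recursive filter y_k = x_k + 0.5*y_{k-1}: one recursion coefficient.
h = signal.lfilter([1.0], [1.0, -0.5], impulse)

# The impulse response decays exponentially: h_k = 0.5**k, never exactly
# reaching zero, hence "infinite impulse response".
print(np.allclose(h, 0.5 ** np.arange(20)))  # True
```

After a few dozen samples the response is far below any realistic round-off noise floor and can be ignored, exactly as the paragraph above describes.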

Filter Classification

Figure (1) summarizes how digital filters are classified by their use and by their implementation. The use of a digital filter can be broken into three categories: time domain, frequency domain, and custom. As previously described, time domain filters are used when the information is encoded in the shape of the signal's waveform. Time domain filtering is used for such actions as smoothing, DC removal, waveform shaping, etc. In contrast, frequency domain filters are used when the information is contained in the amplitude, frequency, and phase of the component sinusoids. The goal of these filters is to separate one band of frequencies from another. Custom filters are used when a special action is required of the filter, something more elaborate than the four basic responses (high-pass, low-pass, band-pass and band-reject). For instance, custom filters can be used for deconvolution, a way of counteracting an unwanted convolution.

Filter classifications. Filters can be divided by their use and how they are implemented.

The tree structure of the different filters is as shown below:





BRIEF CHARACTERISTICS OF EACH FILTER



IIR FILTERS



i) BUTTERWORTH FILTER: The magnitude response of the Butterworth filter decreases monotonically as the frequency increases from 0 to ∞. The transition band is wider in the Butterworth filter than in Chebyshev filters. The poles of the filter lie on a circle.



ii) CHEBYSHEV FILTER: The magnitude response of the Chebyshev filter exhibits ripple either in the pass band or in the stop band, according to its type. Type-I Chebyshev filters are all-pole filters that exhibit equiripple behavior in the pass band and monotonic characteristics in the stop band. The type-II Chebyshev filter, on the other hand, contains both poles and zeros and exhibits monotonic behavior in the pass band and equiripple behavior in the stop band.



For the same specification, the number of poles in a Butterworth filter is greater than in a Chebyshev filter, i.e. the order of the Chebyshev filter is less than that of the Butterworth. This is a great advantage because fewer discrete components are needed to construct the filter.
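This order saving can be checked numerically with the standard order formulas for the two approximations. The specification used below (1 dB pass-band attenuation, 40 dB stop-band attenuation, stop-band edge at twice the pass-band edge) is an illustrative example, not taken from the text.

```python
import math

Ap, As = 1.0, 40.0   # pass-band / stop-band attenuation in dB
ratio = 2.0          # ws / wp, the stop-band-to-pass-band edge ratio

# Common discrimination factor used by both formulas.
d = math.sqrt((10 ** (0.1 * As) - 1) / (10 ** (0.1 * Ap) - 1))

# Butterworth:  N >= log10(d) / log10(ws/wp)
N_butter = math.ceil(math.log10(d) / math.log10(ratio))

# Chebyshev (type I):  N >= acosh(d) / acosh(ws/wp)
N_cheb = math.ceil(math.acosh(d) / math.acosh(ratio))

print(N_butter, N_cheb)   # -> 8 5: the Chebyshev filter needs a lower order
```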



FIR FILTERS



WINDOWS



i) RECTANGULAR WINDOW: The frequency response obtained with this window differs from the desired response in several ways. It does not follow quick transitions in the desired response; the frequency response changes slowly. The side lobes of the window response give rise to ripples in both the pass band and the stop band.



ii) TRIANGULAR (BARTLETT) WINDOW: This window produces a smooth magnitude response in both the pass band and the stop band. However, the transition region is wider and the stop-band attenuation is smaller.



iii) HANNING WINDOW: The main lobe of the Hanning window is twice the width of the rectangular window's, which doubles the transition region of the filter. A filter designed with this window has smaller ripples in both the pass band and the stop band. At higher frequencies the stop-band attenuation is much greater.



iv) HAMMING WINDOW: Because the Hamming window generates less oscillation in the side lobes than the Hanning window for the same main-lobe width, the Hamming window is generally preferred.



v) BLACKMAN WINDOW: The additional cosine term (compared with the Hamming and Hanning windows) reduces the side lobes but widens the main lobe; it is an improvement over the Hamming window in side-lobe attenuation, at the cost of a wider transition band.



vi) KAISER WINDOW: When the parameter α is varied, both the transition width and the peak ripple in the side lobes change. For α = 0 both the numerator and denominator of the window coefficients are 1, and the Kaiser window becomes the rectangular window; for α = 5.4414 the Kaiser window sequence resembles the Hamming window; and for α = 8.885 the Kaiser window approaches the Blackman window.



From the above characteristics it can be noted that the triangular window has a transition width twice that of the rectangular window. However, the stop-band attenuation of the triangular window is smaller, so it is not popular for FIR filter design. The Hanning and Hamming windows have the same transition width, but the Hamming window is the most widely used because it generates less ringing in the side lobes. The Blackman window reduces the side-lobe level at the cost of an increase in transition width. The Kaiser window is superior to the other windows because, for a given specification, its transition width is always smallest. By varying the parameter α, the desired side-lobe level and main-lobe peak can be achieved, and varying the length N varies the main-lobe width. That is why the Kaiser window is the favorite of many digital filter designers.
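As a sketch, the window sequences discussed above (except the Kaiser window, whose coefficients involve a Bessel function) can be generated in a few lines of pure Python; the length N = 21 is an arbitrary illustrative choice.

```python
import math

# Symmetric window sequences of length N, n = 0..N-1.

def rectangular(N):
    return [1.0] * N

def hanning(N):
    return [0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

def hamming(N):
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

def blackman(N):
    # The extra cos(4*pi*n/(N-1)) term is what lowers the side lobes further.
    return [0.42 - 0.5 * math.cos(2 * math.pi * n / (N - 1))
                 + 0.08 * math.cos(4 * math.pi * n / (N - 1)) for n in range(N)]

N = 21
for name, w in [("hanning", hanning(N)), ("hamming", hamming(N)),
                ("blackman", blackman(N))]:
    # All three taper toward the ends and peak at the centre sample.
    print(name, round(w[0], 2), round(w[N // 2], 2))
```

Note that the Hamming window does not taper all the way to zero at its ends (w[0] = 0.08), which is part of why its side-lobe behavior differs from the Hanning window's.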




Clock Definitions

Rising and Falling Edge of the Clock

For a +ve edge-triggered design, the +ve (or rising) edge is called the ‘leading edge’ whereas the –ve (or falling) edge is called the ‘trailing edge’.

For a –ve edge-triggered design, the –ve (or falling) edge is called the ‘leading edge’ whereas the +ve (or rising) edge is called the ‘trailing edge’.

(Figure: basic clock)
Minimum pulse width of the clock can be checked in PrimeTime by using commands given below:

set_min_pulse_width -high 2.5 [all_clocks]

set_min_pulse_width -low 2.0 [all_clocks]

These checks are generally carried out for post layout timing analysis. Once these commands are set, PrimeTime checks for high and low pulse widths and reports any violations.

Capture Clock Edge

The clock edge at which data is captured is known as the capture edge.



Launch Clock Edge

This is the clock edge at which data is launched by the previous flip-flop; that data is then captured at the next flip-flop on the capture edge.

(Figure: launch clock and capture clock)

Skew

Skew is the difference in the arrival time of the clock signal at the clock pins of different flops in the clock network.

Two types of skews are defined: Local skew and Global skew.

Local skew

Local skew is the difference in the arrival of the clock signal at the clock pins of related flops (flops that exchange data with each other).

Global skew

Global skew is the difference in the arrival of the clock signal at the clock pins of unrelated flops. It is also defined as the difference between the shortest and the longest clock path delays reaching any two sequential elements.

(Figure: local and global skew)

Skew can be positive or negative. When data and clock are routed in the same direction the skew is positive; when they are routed in opposite directions the skew is negative.

Positive Skew

If the capture clock arrives later than the launch clock, it is called +ve skew.

Clock and data both travel in the same direction.

+ve skew can lead to hold violations.

+ve skew improves setup time.

(Figure: positive and negative skew)


Negative Skew

If the capture clock arrives earlier than the launch clock, it is called –ve skew. Clock and data travel in opposite directions. –ve skew can lead to setup violations. –ve skew improves hold time. (The effects of skew on setup and hold will be discussed in detail in forthcoming articles.)
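The effect of skew on the two checks can be sketched numerically with the usual slack expressions; all delay values below (in ns) are illustrative assumptions, not from the text.

```python
# How skew shifts setup and hold slack at a launch/capture flop pair.

def setup_slack(T, t_cq, t_comb, t_setup, skew):
    # The capture edge arrives 'skew' later, so +ve skew relaxes setup.
    return (T + skew) - (t_cq + t_comb + t_setup)

def hold_slack(t_cq, t_comb, t_hold, skew):
    # The capture edge arrives 'skew' later, so +ve skew tightens hold.
    return (t_cq + t_comb) - (t_hold + skew)

T = 10.0            # clock period
t_cq = 1.0          # clock-to-Q delay of the launch flop
t_comb_max = 8.5    # longest data path (used for the setup check)
t_comb_min = 0.2    # shortest data path (used for the hold check)
t_setup, t_hold = 1.0, 0.5

no_skew_setup  = setup_slack(T, t_cq, t_comb_max, t_setup, skew=0.0)  # -0.5: violated
pos_skew_setup = setup_slack(T, t_cq, t_comb_max, t_setup, skew=1.0)  # +0.5: met
no_skew_hold   = hold_slack(t_cq, t_comb_min, t_hold, skew=0.0)       # ~+0.7: met
pos_skew_hold  = hold_slack(t_cq, t_comb_min, t_hold, skew=1.0)       # ~-0.3: violated

print(no_skew_setup, pos_skew_setup, no_skew_hold, pos_skew_hold)
```

With +1 ns of skew the setup violation disappears while a hold violation appears on the short path, which is exactly the trade-off described above.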

Uncertainty

Clock uncertainty is the time difference between the arrivals of clock signals at registers in one clock domain or between domains.

Pre-layout and Post-layout Uncertainty

Pre-CTS uncertainty accounts for estimated clock skew, clock jitter, and a margin. After CTS, skew is calculated from the actual propagated clock, so the uncertainty can be reduced to jitter plus a margin.

(Figure: timing diagram depicting skew, latency, jitter)


Clock latency

Latency is the sum of the clock source delay and the clock network delay.

Clock source delay is the time taken to propagate from the ideal waveform origin point to the clock definition point. Clock network latency is the delay from the clock definition point to the register clock pin.

Pre CTS Latency and Post CTS Latency

Latency is the summation of the source latency and the network latency. Pre-CTS, an estimated latency is considered during synthesis; after CTS, the propagated latency is considered.

Source Delay or Source Latency

It is defined as "the delay from the clock origin point to the clock definition point in the design": the time the clock signal takes to propagate from its ideal waveform origin point to the beginning of the clock tree (the clock definition point).


Network Delay (Latency) or Insertion Delay

It is defined as "the delay from the clock definition point to the clock pin of the register": the time the clock signal (rise or fall) takes to propagate from the clock definition point to a register clock pin.

Figure below shows example of latency for a design without PLL.


(Figure: latency for a design without PLL)


The latency definitions for designs with PLL are slightly different.

Figure below shows latency specifications of such kind of designs.

The latency from the PLL output to the clock input of the generated-clock circuitry becomes the source latency; from that point onward, up to the flops driven by the generated clock, the delay is the network latency. Note that part of the network latency, the clock-to-Q delay of the flip-flop (of the divide-by-2 circuit in this example), is a known value.

(Figure: latency for a design with PLL)


Jitter

Jitter is the short-term variation of a signal with respect to its ideal position in time.

It is the variation of the clock period from edge to edge; an edge can deviate by up to ± the jitter value.

From cycle to cycle the period and duty cycle can change slightly due to the clock generation circuitry. Jitter can also be generated by a PLL, known as PLL jitter. Possible jitter values should be considered for a proper PLL design. Jitter can be modeled by adding uncertainty regions around the rising and falling edges of the clock waveform.

Sources of Jitter

Common sources of jitter include:

  • Internal circuitry of the phase-locked loop (PLL)

  • Random thermal noise from a crystal

  • Other resonating devices

  • Random mechanical noise from crystal vibration

  • Signal transmitters

  • Traces and cables

  • Connectors

  • Receivers


Multiple Clocks

If more than one clock is used in a design, the clocks can be defined to have different waveforms and frequencies; these are known as multiple clocks. The logic triggered by each individual clock is known as a “clock domain”.


If clocks have different frequencies there must be a base period over which all waveforms repeat.

The base period is the least common multiple (LCM) of all the clock periods.
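This can be sketched in a few lines; the clock periods below (in ns) are illustrative assumptions.

```python
from math import gcd

# The base period over which multiple synchronous clock waveforms all
# repeat is the LCM of the individual clock periods.

def base_period(periods):
    lcm = periods[0]
    for p in periods[1:]:
        lcm = lcm * p // gcd(lcm, p)
    return lcm

# Clocks with periods 10, 15 and 4 ns all line up again every 60 ns.
print(base_period([10, 15, 4]))   # -> 60
```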


Asynchronous Clocks

In a design with multiple clock domains, if the clocks do not have a common base period they are called asynchronous clocks. Clocks generated from two different crystals or PLLs are asynchronous. Clocks of different frequencies generated from a single crystal or PLL are not asynchronous but synchronous clocks.


Gated clocks

Clock signals that pass through gates other than buffers and inverters are called gated clocks. These clock signals are under the control of gating logic. Clock gating is used to turn off the clock to some sections of the design to save power.

Generated clocks

Generated clocks are clocks that are generated from other clocks by a circuit within the design, such as a divider or multiplier circuit.


Static timing analysis tools such as PrimeTime will automatically calculate the latency (delay) from the source clock to the generated clock if the source clock is propagated and you have not set source latency on the generated clock.


(Figure: generated clock)

‘Clock’ is the master clock, and a new clock is generated from the F1/Q output. The master clock is defined with the ‘create_clock’ constraint. Unless the new clock is defined as a ‘generated clock’, timing analysis tools will not treat it as one; to accomplish this, use the “create_generated_clock” command. The ‘CLK’ pin of F1 is then treated as the clock definition point for the generated clock, so the clock path delay up to F1/CLK contributes to source latency, whereas the delay from F1/CLK onward contributes to network latency.
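The divide-by-2 generated clock in this example can be modelled behaviourally; sampling the master clock twice per cycle is an illustrative simplification, not part of the original description.

```python
# Behavioural sketch of a divide-by-2 generated clock: a toggle
# flip-flop that flips its output Q on every rising edge of the
# master clock, so the generated clock has twice the period.

def divide_by_2(master_levels):
    """master_levels: master clock level sampled once per half-period."""
    q, prev, out = 0, 0, []
    for level in master_levels:
        if prev == 0 and level == 1:   # rising edge of the master clock
            q ^= 1                     # toggle flip-flop
        out.append(q)
        prev = level
    return out

master = [0, 1] * 8                    # 8 master clock cycles
generated = divide_by_2(master)
print(generated)                       # half the frequency of 'master'
```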


Virtual Clocks

A virtual clock is a clock that is not logically connected to any port of the design and does not physically exist. It is used when a block does not contain a port for the clock that an I/O signal is coming from or going to. Virtual clocks are used during optimization; they do not really exist in the circuit.


Virtual clocks exist in memory but are not part of the design; they serve as a reference for specifying input and output delays relative to a clock even though there is no actual clock source in the design. Assume the block to be synthesized is “Block_A”. The clock signal “VCLK” would be a virtual clock, and the input and output delays would be specified relative to this virtual clock.