
Best practices for FPGA prototyping of MATLAB and Simulink algorithms

As the complexity of modern FPGAs and ASICs increases, engineers are finding that HDL simulation alone is not sufficient to fully verify system-level design requirements efficiently and on schedule.

 

Many engineers today use FPGAs for algorithm prototyping and acceleration. Using FPGAs to process large amounts of data allows engineers to quickly assess necessary tradeoffs in algorithms and architecture, as well as test designs in real-world conditions, avoiding the long run times of HDL simulation. System-level design and test tools such as MATLAB and Simulink allow engineers to take advantage of these benefits by rapidly prototyping their algorithms on FPGAs.

This paper details best practices for model-based design in relation to prototyping on FPGAs with MATLAB and Simulink. Best practices are listed below and highlighted in Figure 1. 

(1) Analyze the effect of fixed-point quantization early in the design process and optimize wordlength in order to produce smaller, more energy-efficient implementations.

(2) Use automatic HDL code generation to create prototypes on FPGAs faster.

(3) Reuse system-level test benches with HDL cosimulation to analyze HDL implementations using system-level metrics.

(4) Speed up verification with FPGA-in-the-loop simulation.

 

Why prototype on FPGAs?

 

Prototyping an algorithm on an FPGA increases confidence that it will work in a real-world situation. In addition to running test vectors and simulation scenarios at high speed, engineers can use FPGA prototypes to exercise software functionality and adjacent system-level functions, such as RF and analog subsystems. 

Furthermore, because FPGA prototypes run faster, larger data sets can be used, offering the potential to uncover flaws that a simulation model would miss. 

Model-Based Design using HDL code generation allows teams to produce the first prototype sooner than with a manual workflow, as seen in Figure 2. 

In addition, this approach gives engineers the ability to make changes to algorithms at the system level rather than at the implementation level, which speeds up hardware iterations.

 

Case Study: Digital Down Converter

 

In order to illustrate best practices for prototyping on FPGAs using Model-Based Design, a Digital Down Converter (DDC) will be used as a case study. A DDC is a common part of many communications systems (see Figure 3). 

This element transforms a high-speed passband input, which requires considerable computational resources to process, into a low-speed baseband output, which can be easily processed with DSP algorithms that require less computational power. 

 The main components of a DDC are (see Figure 4):

– Numerically Controlled Oscillator (NCO)

– Mixer

– Digital filter chain
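
To make these three stages concrete, the following is a minimal floating-point DDC sketch in MATLAB. The sample rate, carrier frequency, filter order, and decimation factor are illustrative assumptions, not the parameters of the model described in this paper.

% Minimal floating-point DDC sketch; all parameters are assumptions.
fs = 64e6;                                     % passband sample rate (assumed)
fc = 16e6;                                     % carrier frequency (assumed)
n  = (0:4095).';
x  = cos(2*pi*fc*n/fs) + 0.01*randn(size(n));  % noisy passband input

nco   = exp(-1j*2*pi*fc*n/fs);                 % NCO: complex carrier
mixed = x .* nco;                              % mixer: shift down to baseband

lp = fir1(63, 1/16);                           % low-pass filter (assumed cutoff)
y  = downsample(filter(lp, 1, mixed), 16);     % filter chain: decimate by 16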

 

 

Best Practice #1: Analyze the effect of fixed-point quantization early in the design process 

Engineers often test new ideas and develop initial algorithms using floating-point data types. However, hardware implementation on FPGAs and ASICs requires conversion to fixed-point data types, which often introduces quantization errors. In a manual workflow, fixed-point quantization is typically performed during HDL coding. In that workflow, the engineer cannot easily assess the effect of fixed-point quantization by comparing the fixed-point representation against a floating-point reference, nor easily analyze the HDL implementation for overflows. 

To make correct decisions about the required word lengths, engineers need a way to compare floating-point simulation results with fixed-point simulation results before starting the HDL coding process. Increasing the fraction length reduces quantization error, but it also increases the word length required, which translates into more area and higher power consumption.

For example, Figure 5 illustrates the differences between floating-point and fixed-point simulation results for stage 1 of the low-pass filter in the DDC filter chain. These differences are due to fixed-point quantization. The upper plot shows the floating-point and fixed-point simulation results superimposed; the lower plot shows the quantization error at each point. Depending on the design specification, engineers may need to increase fraction lengths to reduce the quantization error introduced.
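
The comparison illustrated in Figure 5 can be sketched in a few lines of MATLAB using fi objects from Fixed-Point Designer. The filter, stimulus, and word and fraction lengths below are assumptions for illustration, and the sketch models coefficient and input quantization only.

% Floating-point vs. fixed-point comparison (requires Fixed-Point
% Designer); filter, stimulus, and lengths are illustrative assumptions.
b    = fir1(31, 0.25);              % example low-pass FIR coefficients
x    = randn(1024, 1);              % stimulus
yRef = filter(b, 1, x);             % floating-point reference

wl = 16; fl = 14;                   % assumed word and fraction lengths
bQ = double(fi(b, 1, wl, fl));      % quantized coefficients
xQ = double(fi(x, 1, wl, fl));      % quantized input
yQ = filter(bQ, 1, xQ);             % accumulators remain double precision

qErr = yRef - yQ;                   % quantization error, analogous to the
                                    % lower plot of Figure 5
fprintf('max quantization error: %g\n', max(abs(qErr)));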

In addition to selecting fraction lengths, engineers must optimize word lengths to achieve low-power, area-efficient designs. In the DDC case study, Fixed-Point Designer was used to reduce the word length of some parts of the digital filter chain by up to 8 bits (see Figure 6). 

 

Best Practice #2: Use automatic HDL code generation to produce prototypes on FPGAs faster

 

HDL code is needed to produce a prototype on an FPGA. Traditionally, engineers have written Verilog or VHDL code by hand; generating HDL code automatically with HDL Coder offers significant advantages. Engineers can:

– Quickly assess whether the algorithm can be implemented in hardware. 

– Quickly evaluate different implementations of algorithms and choose the best one.

– Create algorithm prototypes on FPGAs faster.

In the DDC case study, 5,780 lines of HDL code were generated in 55 seconds. The generated code is readable and understandable to engineers (see Figure 7). Automatic code generation allows engineers to make changes in the system-level model and produce an updated HDL implementation in minutes by regenerating the HDL code.
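
For a Simulink subsystem, code generation comes down to a few HDL Coder commands. The model name ddc_model and subsystem name DDC below are hypothetical.

% Sketch of automatic HDL generation with HDL Coder; the model and
% subsystem names are hypothetical.
load_system('ddc_model');
hdlsetup('ddc_model');                % apply HDL-friendly model settings
makehdl('ddc_model/DDC', ...          % generate HDL for the subsystem
        'TargetLanguage', 'VHDL');    % or 'Verilog'
makehdltb('ddc_model/DDC');           % also generate an HDL test bench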

 

Best Practice #3: Reuse system-level test benches for HDL verification with HDL cosimulation

 

HDL cosimulation allows engineers to reuse Simulink models to send stimuli to the HDL simulator and perform system-level analysis of the simulation output interactively (Figure 8).

While HDL simulation provides only digital waveform output, HDL cosimulation provides full visibility into the HDL code, as well as access to Simulink's full suite of system-level analysis tools. 

When a difference is observed between the predicted results and the HDL simulation results, cosimulation allows engineers to better understand the effect of the discrepancy at the system level. 

For example, in Figure 9 the spectrum display allows the engineer to make an informed decision and ignore the discrepancy between the predicted results and those of the HDL simulation, as the differences lie in the attenuated band. 

The digital waveform output, by contrast, only indicates the discrepancy between the expected results and the HDL simulation as an error. 

Using only the HDL simulation, the engineer could reach the same conclusion, but it would take more time to perform the necessary analysis. 
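
As a sketch of the analysis behind Figure 9, the error between the golden reference and the cosimulation output can be inspected in the frequency domain. Here yRef, yCosim, and fs are assumed to have been logged from the Simulink model.

% Frequency-domain view of the cosimulation error; yRef, yCosim, and fs
% are assumed to be captured from the model (e.g., To Workspace blocks).
err = yRef - yCosim;
[Pxx, f] = pwelch(err, [], [], [], fs);   % error power spectral density
plot(f, 10*log10(Pxx)); grid on
xlabel('Frequency (Hz)'); ylabel('Error PSD (dB/Hz)');
% Error energy confined to the stopband can often be waived (Figure 9).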

Best Practice #4: Speed up verification with FPGA-in-the-loop simulation

 

After it has been verified through HDL simulation or HDL cosimulation, the DDC algorithm is ready for deployment on an FPGA board. FPGA-based verification of the algorithm, also known as FPGA-in-the-loop simulation, increases confidence that the algorithm will work in a real-world situation. It also allows engineers to run test scenarios faster than with traditional HDL simulation. 

In the case of the DDC algorithm, the Simulink model is used to send input stimuli to the FPGA and to analyze the FPGA output (Figure 10). As with HDL cosimulation, the results are available in Simulink for analysis. 
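
In HDL Verifier, this setup is normally created interactively; the command below only launches the FPGA-in-the-loop wizard, in which board selection and signal mapping are performed.

% Launch the HDL Verifier FPGA-in-the-loop wizard; board choice and
% I/O mapping are configured interactively and are not shown here.
filWizard;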

 Table 1 compares the two verification methods (HDL cosimulation and FPGA-in-the-loop simulation) used for the design of the DDC. 

In this case, FPGA-in-the-loop simulation was 23 times faster than HDL cosimulation. Such speed increases allow engineers to run larger sets of test cases and perform regression testing on their designs. This allows them to identify potential problem areas that need further analysis. 

Although slower, HDL cosimulation provides more visibility into the HDL code. It is therefore better suited for detailed analysis of the problem areas detected during FPGA-in-the-loop simulation.

 

Summary

 

By following the four best practices outlined in this article, engineers can prototype on FPGAs much faster and with greater confidence than with the traditional manual workflow. They can also continue to refine their models throughout development and quickly regenerate code for implementation on FPGAs. 

This capability allows design iterations to be much shorter than in a traditional workflow, which relies on handwritten HDL code. 

To learn more about the workflow described here or to download a technical kit, visit:

http://www.mathworks.com/programs/techkits/techkit_asic_response.html.