Towards fixed-point formats determination for Faust programs

Agathe Herrou

Florent de Dinechin, Stéphane Letz, Yann Orlarey, Anastasia Volkova

An introductory example

import("stdfaust.lib");

N = 100;
freq = 100;
process = sum(i, N, os.osc(freq*(i+1))/(i+1));

Single-precision compilation:

    faust saw.dsp

Double-precision compilation:

    faust -double saw.dsp

The Émeraude team

  • INSA Lyon/Inria/Grame CNCM
  • Embedded Audio
  • FPGAs
  • Syfala: a Faust-to-FPGA compiler

FPGAs

  • Reconfigurable circuits ✅
  • Very low latency (one-sample buffer) ✅
  • Difficult to program ❌
    $\Ra$ Faust

CPU vs FPGA

Format optimisation for FPGAs

  • Limited space on FPGAs: optimise resources
  • One optimisation avenue: number formats
  • Use as few bits as possible without sacrificing audio quality

Outline

  • Basics of numerical formats
    • Floating-point vs fixed-point
  • Real-to-fixed-point conversion
    • Pseudo-injectivity
    • Interval library

Basics of numerical formats

Floating-point numbers

  • Equivalent to scientific notation:
    • $\pi \approx 3.14\times 10^0 \approx 1.5 \times 2^1$
    • General formula: $(-1)^s\times 1.F\times 2^{E-E_0}$


  • Range: $x \in [-(1+F_{max})\cdot2^{E_{max} - E_0}; (1+F_{max})\cdot2^{E_{max} - E_0}]$
  • Precision: $F_{min}\cdot2^{E_{min} - E_0}$
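
As a concrete illustration of the formula above, here is a minimal C++ sketch (assuming IEEE-754 binary32, i.e. $E_0 = 127$ and a 23-bit fraction) that extracts $s$, $E$ and $F$ from a float and rebuilds its value; all names are ours, not part of any Faust API:

#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Decode sign s, biased exponent E and fraction F of an IEEE-754 binary32 float
// and rebuild the value with the formula (-1)^s * (1 + F/2^23) * 2^(E - 127).
int main() {
    float x = 3.14159265f;
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);   // reinterpret the bit pattern
    unsigned s = bits >> 31;               // sign bit
    int      E = (bits >> 23) & 0xFF;      // biased exponent
    unsigned F = bits & 0x7FFFFF;          // fraction (implicit leading 1 not stored)
    double value = (s ? -1.0 : 1.0) * (1.0 + F / 8388608.0) * std::ldexp(1.0, E - 127);
    std::printf("s=%u E=%d F=0x%06X  value=%.7g\n", s, E, F, value);
    return 0;
}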

Fixed-point numbers

  • $\pi \approx 3.125$
  • Two parameters:
    • $m$: Most Significant Bit
    • $l$: Least Significant Bit



  • Range: $x \in [-2^m; 2^m - 2^l]$
  • Precision: $2^l$
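
A minimal C++ sketch of what the parameters $m$ and $l$ mean, using a hypothetical quantize helper (not the Faust runtime): representable values are multiples of $2^l$ saturated to $[-2^m; 2^m - 2^l]$.

#include <cmath>
#include <cstdio>

// Quantise x to the signed fixed-point format with MSB weight 2^m and LSB weight 2^l.
double quantize(double x, int m, int l) {
    double step = std::ldexp(1.0, l);           // 2^l
    double lo   = -std::ldexp(1.0, m);          // -2^m
    double hi   =  std::ldexp(1.0, m) - step;   // 2^m - 2^l
    double q    = std::floor(x / step) * step;  // truncate to a multiple of 2^l
    return std::fmin(std::fmax(q, lo), hi);     // saturate to the representable range
}

int main() {
    // pi in format (m = 2, l = -3): multiples of 1/8 in [-4; 4 - 1/8]  ->  3.125
    std::printf("%g\n", quantize(3.14159265358979, 2, -3));
    return 0;
}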

Comparison


  • Dynamic range
  • Distribution of representable values

Comparison

Floating-point numbers          Fixed-point numbers
Large dynamic range ✅          Small dynamic range ❌
Expensive on FPGA ❌            Cheap on FPGA ✅

$\Ra$ We use fixed-point numbers for hardware audio programs

Physical interpretation

  • Extreme floating-point values have no physical meaning
  • Audio signals have no use for such a wide range of values

Fixed-point format determination

The Faust compilation chain

  • Interval library
  • Operations on signals in normal form

The interval library

  • Compute the range of values a variable can take
  • Inductive computation of properties: from inputs to outputs
  • Our addition: precision computation

Range

  • Interval analysis
  • If $x$ is represented with an MSB of $m$, then $x \in [-2^m; 2^m)$
  • $x \in [\underline{x}; \overline{x}] \Ra f(x) \in f([\underline{x}; \overline{x}])$
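
A short C++ sketch of this propagation (hypothetical helpers; the actual Faust interval library handles many more operations): an interval is pushed through a multiplication and the MSB needed to hold the result is derived from its bounds.

#include <algorithm>
#include <cmath>
#include <cstdio>

// Minimal interval type; a sketch of the idea, not the Faust interval library.
struct Interval { double lo, hi; };

// Interval addition (shown for completeness).
Interval add(Interval a, Interval b) { return {a.lo + b.lo, a.hi + b.hi}; }

// Interval multiplication: take the extrema of the four corner products.
Interval mul(Interval a, Interval b) {
    double c[4] = {a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi};
    return {*std::min_element(c, c + 4), *std::max_element(c, c + 4)};
}

// Smallest m such that the interval fits in [-2^m; 2^m)
// (assumes a non-zero interval; edge cases ignored in this sketch).
int msb(Interval a) {
    double bound = std::max(std::fabs(a.lo), std::fabs(a.hi));
    return (int)std::ceil(std::log2(bound));
}

int main() {
    Interval phase  = {0.0, 1.0};              // sawtooth in [0; 1]
    Interval two_pi = {6.2831853, 6.2831853};  // constant 2*pi
    Interval arg    = mul(phase, two_pi);      // argument of the sine
    std::printf("arg in [%g; %g], MSB = %d\n", arg.lo, arg.hi, msb(arg));
    return 0;
}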

Precision

  • Approximation error:
    • $\pi \approx \pi_1 = 3$
      Error: $\epsilon_1 = |\pi - \pi_1| \approx 0.1415\ldots \lt 1$
    • $\pi \approx \pi_2 = 3.14$
      Error: $\epsilon_2 = |\pi - \pi_2| \approx 0.0015\ldots \lt 0.01$
    • $x\in [0;1] \approx \hat{x} \in [0;1]_{-2}$
      Error: $\epsilon = |x - \hat{x}| \lt 2^{-2}$
  • Error propagation:
    if $\hat{x} = x + \epsilon$,
    $f(\hat{x}) \approx f(x) + \epsilon \cdot f'(x)$
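
A small worked example in C++ (made-up values) comparing the true output error of $f(x) = \sin(2\pi x)$, after quantising $x$ to LSB $-8$, against the first-order estimate $\epsilon \cdot f'(x)$:

#include <cmath>
#include <cstdio>

// First-order error propagation: f(x_hat) ~ f(x) + eps * f'(x), with f(x) = sin(2*pi*x).
int main() {
    const double two_pi = 6.283185307179586;
    double x    = 0.3;                                          // exact value
    double xhat = std::ldexp(std::floor(std::ldexp(x, 8)), -8); // x truncated to a multiple of 2^-8
    double eps  = xhat - x;                                     // quantisation error, |eps| < 2^-8

    double true_err = std::sin(two_pi * xhat) - std::sin(two_pi * x); // f(x_hat) - f(x)
    double estimate = eps * two_pi * std::cos(two_pi * x);            // eps * f'(x)
    std::printf("true error = %g, first-order estimate = %g\n", true_err, estimate);
    return 0;
}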

Pseudo-injectivity

  • Input and output precisions are chosen so that distinguishable input values $\Ra$ distinguishable output images
  • No two output images are rounded to the same value
  • $\lfloor x \rfloor_{l_{in}} \neq \lfloor y \rfloor_{l_{in}} \Ra \lfloor f(x) \rfloor_{l_{out}} \neq \lfloor f(y) \rfloor_{l_{out}} $

Complete compilation example

import("stdfaust.lib"); // needed for ma.PI
process = +(1/64) ~ %(1) : *(2*ma.PI) : sin;

  • Recursive loop +(1/64) ~ %(1):
    • Incremented by 1/64
    • Constrained to [0; 1] by %(1)
    • $\Ra$ unipolar sawtooth from 1/64 to 1
  • Multiply the output by $2\pi$
  • Feed into the sine function
  • Computation graph annotated with:
    • Intervals
    • Precisions
  • Output precision depends on input interval and input precision

Automatic fixed-point C++ code generation

// sfx_t(m, l): cast to the fixed-point format with MSB m and LSB l
// phase update: fRec0[0] = (fRec0[1] mod 1) + 1/64
fRec0[0] = sfx_t(31,-24)(sfx_t(1,-38)(
          sfx_t(0,-32)(fmodfx(fRec0[1], sfx_t(0,-32)(1.0)))
        + sfx_t(-6,-38)(0.015625)));
// output: sin(2*pi*phase), converted back to FAUSTFLOAT
output0[i0] = FAUSTFLOAT(sfx_t(0,-109)(
          sinfx(sfx_t(31,-54)(sfx_t(3,-30)(6.2831855)
        * fRec0[0]))));
fRec0[1] = fRec0[0];

Results

Program                              Signal-to-noise ratio
Sine with increment $\frac{1}{64}$   32
Sine with increment 0.01             25
Karplus-Strong                       33

$$SNR = \log_{10}\left(\frac{\lVert s\rVert_2^2}{\lVert s - \hat{s}\rVert_2^2}\right)$$ where $s$ is the floating-point signal (considered as ground truth)
and $\hat{s}$ is the fixed-point signal.
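
For reference, a C++ sketch of this measure applied to synthetic signals (a truncated sine, purely illustrative; not the programs measured in the table above):

#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// SNR as defined on the slide above (log10 of the power ratio, no factor of 10).
double snr(const std::vector<double>& s, const std::vector<double>& s_hat) {
    double sig = 0.0, err = 0.0;
    for (std::size_t i = 0; i < s.size(); ++i) {
        double e = s[i] - s_hat[i];
        sig += s[i] * s[i];
        err += e * e;
    }
    return std::log10(sig / err);
}

int main() {
    std::vector<double> s, s_hat;
    for (int i = 0; i < 48000; ++i) {
        double v = std::sin(2.0 * 3.14159265358979 * 440.0 * i / 48000.0);
        s.push_back(v);                                                  // reference signal
        s_hat.push_back(std::ldexp(std::floor(std::ldexp(v, 16)), -16)); // truncated to LSB -16
    }
    std::printf("SNR = %g\n", snr(s, s_hat));
    return 0;
}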

Conclusion

Results

  • Automation of analysis and code generation
  • Promising preliminary results
  • Preserved audio quality
  • Formats still too large
  • Recursions remain hard to handle

Code available on the master-dev branch of
https://github.com/grame-cncm/faust/

Future work

  • Output-to-input propagation of precision: tighten the intervals
  • Targeted optimisations
    • C++ function sinPi
    • LTI filters: dedicated methods for determining precision in the loop
  • Probabilistic pseudo-injectivity

Thanks for your attention!

Details on pseudo-injectivity

  • Definition: $\lfloor x \rfloor_{l_{in}} \neq \lfloor y \rfloor_{l_{in}} \Ra \lfloor f(x) \rfloor_{l_{out}} \neq \lfloor f(y) \rfloor_{l_{out}} \vee f(x) = f(y)$
  • Closed formula for the tightest output precision: $$l_{out} = \lfloor \min\limits_{x \in X} \log_2(|f(x+u) - f(x)|)\rfloor$$ where $u = 2^{l_{in}}$
  • Using $f(x+u) - f(x) \approx u \cdot f'(x)$, this can be approximated by $$l_{out} = l_{in} + \lfloor \min\limits_{x \in X} \log_2(|f'(x)|)\rfloor$$
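
A brute-force C++ sketch of both formulas for $f = \sin$ on the example interval $X = [0; 1]$ (our choice, where $f'$ does not vanish), scanning $X$ on the input grid of step $u = 2^{l_{in}}$:

#include <cmath>
#include <cstdio>

int main() {
    const int    l_in  = -16;
    const double u     = std::ldexp(1.0, l_in);  // u = 2^l_in
    const double x_min = 0.0, x_max = 1.0;       // example interval X

    // Closed formula: floor( min_x log2 |f(x+u) - f(x)| ), x scanned on the input grid.
    double min_diff = 1e300;
    for (double x = x_min; x + u <= x_max; x += u)
        min_diff = std::fmin(min_diff, std::fabs(std::sin(x + u) - std::sin(x)));
    int l_out_closed = (int)std::floor(std::log2(min_diff));

    // First-order approximation: l_in + floor( min_x log2 |f'(x)| ); on [0; 1], min |cos| = cos(1).
    int l_out_approx = l_in + (int)std::floor(std::log2(std::cos(x_max)));

    std::printf("closed formula: %d, approximation: %d\n", l_out_closed, l_out_approx);
    return 0;
}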

Audio results

  • Sine with $\frac{1}{64}$ increment: [audio example]
  • Sine with 0.01 increment: [audio example]
  • Karplus-Strong: [audio example]