# Lecture 5 - Wavelet Compression of 1D Signals

Abstract: This lecture lets you manipulate an entropic coder (arithmetic coding) in order to compress signals using a transform/quantization/coding scheme. It aims at explaining the difference between wavelet approximation (transform then thresholding) and wavelet coding (transform, then quantization, then coding).

## Entropy and entropic coding.

• Entropic coding of a 1D signal (download the functions perform_arithmetic_coding.m and perform_arithmetic_coding_mex.mexglx, or perform_arithmetic_coding_mex.dll if you are working under Windows). The input x of the coding process is a vector of symbols, which are just regular integers. The output is a stream of bits (0 or 1). These bits are packed into bytes (groups of eight bits), which are stored as integers in the range 0...255 in a Matlab array y.

n = 512;
% creation of a binary signal with probability p = P(x=1)
p = 0.1;
x = (rand(n,1)<p);
% coding
[y,nb] = perform_arithmetic_coding(x,+1);
h1 = nb/n; % number of bits per symbol
% comparison with entropy bound
h = ...; % you have to use here the formula of the entropy
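To check your answer, the entropy bound of a binary source can be sketched as follows (in Python/NumPy purely for illustration; the Matlab formula is a one-liner along the same lines):

```python
import numpy as np

def binary_entropy(p):
    """Entropy in bits/symbol of a binary source with P(x=1) = p."""
    if p in (0.0, 1.0):
        return 0.0  # a deterministic source carries no information
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

# With p = 0.1, an ideal coder needs about 0.47 bits/symbol,
# far below the 1 bit/symbol of the raw binary representation.
h = binary_entropy(0.1)
```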

• Comparison between the coder and the entropy for a more complex signal (download the function rand_discr.m).

n = 1024; % size of the signal
p = [10 2 1 1 1]; p = p/sum(p); % this is an example of probability distribution
x = rand_discr(p, n); % draw according to the distribution p
% check we have the correct distribution
v = hist(x, 1:length(p))/n;
% coding
[y,nb] = perform_arithmetic_coding(x,+1);
h1 = nb/n; % number of bits per symbol
% Shannon bound
h = ...; % use here the entropy formula, implemented using sum in Matlab
h1 - h
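A sketch of the two ingredients used above, in Python/NumPy for illustration: drawing samples from a discrete distribution (the role played by rand_discr) and evaluating the Shannon entropy with a single sum (the Matlab version uses sum in the same way):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits/symbol: h = -sum(p .* log2(p))."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # by convention 0 * log2(0) = 0
    return float(-np.sum(p * np.log2(p)))

def rand_discr(p, n, rng=np.random.default_rng()):
    """Draw n symbols in {1, ..., len(p)} according to the distribution p."""
    return rng.choice(np.arange(1, len(p) + 1), size=n, p=np.asarray(p, float))

p = np.array([10, 2, 1, 1, 1], dtype=float); p /= p.sum()
h = entropy(p)  # about 1.56 bits/symbol for this distribution
```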

• Display the behavior of the coder when the size grows.

% Shannon bound of entropy
h = -sum( p.*log2(p) );
% compute the coding overhead (rate minus entropy) for signals of varying length
err = []; a = 4:14;
for n = 2.^a
x = rand_discr(p, n);
% coding
option.use_mex = 1;
[y,nb] = perform_arithmetic_coding(x,+1);
h1 = nb/n; % number of bits per symbol
err(end+1) = h1 - h;
end
plot(a, err, '.-');
xlabel('log2(size)');
ylabel('|entropy-nbr.bits|');
 Comparison between the number of bits/symbol of the arithmetic coder and of an ideal scheme that would reach the entropy bound. The coder gets closer to the entropy bound for signals that are long enough.
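The trend in this figure can be mimicked without the arithmetic coder itself: the empirical entropy of a drawn signal converges to the Shannon bound as the size grows, and the coder's rate tracks this quantity. A Python/NumPy sketch of that convergence (this is an illustration, not the arithmetic coder):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([10, 2, 1, 1, 1], dtype=float); p /= p.sum()
h = -np.sum(p * np.log2(p))  # Shannon bound

err = []
for a in range(4, 15):
    n = 2 ** a
    x = rng.choice(len(p), size=n, p=p)
    # empirical histogram of the drawn symbols
    v = np.bincount(x, minlength=len(p)) / n
    v = v[v > 0]
    h_emp = -np.sum(v * np.log2(v))
    err.append(abs(h_emp - h))
# err shrinks (on average) as the signal gets longer
```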

## Quantization of a wavelet transform.

f = load_signal('Piece-Regular', n); % signal of size n
Jmin = 2; % minimum scale of the transform
fw = perform_wavelet_transform(f, Jmin, 1);
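perform_wavelet_transform implements a full wavelet transform; to make the forward/inverse pair concrete, here is a minimal Haar version (the simplest orthogonal wavelet) sketched in Python/NumPy. This is an illustration only, not the filters used by the Matlab toolbox:

```python
import numpy as np

def haar_forward(f, Jmin=2):
    """Orthogonal Haar transform of a signal of length 2^J, down to scale Jmin."""
    fw = np.asarray(f, dtype=float).copy()
    j = int(np.log2(len(fw)))
    while j > Jmin:
        c = fw[:2 ** j]
        avg = (c[0::2] + c[1::2]) / np.sqrt(2)  # coarse approximation
        dif = (c[0::2] - c[1::2]) / np.sqrt(2)  # detail coefficients
        fw[:2 ** (j - 1)] = avg
        fw[2 ** (j - 1):2 ** j] = dif
        j -= 1
    return fw

def haar_inverse(fw, Jmin=2):
    """Invert haar_forward, rebuilding finer scales one by one."""
    f = np.asarray(fw, dtype=float).copy()
    J = int(np.log2(len(f)))
    for j in range(Jmin, J):
        avg = f[:2 ** j].copy()
        dif = f[2 ** j:2 ** (j + 1)].copy()
        c = np.empty(2 ** (j + 1))
        c[0::2] = (avg + dif) / np.sqrt(2)
        c[1::2] = (avg - dif) / np.sqrt(2)
        f[:2 ** (j + 1)] = c
    return f
```

Since the transform is orthogonal, it preserves the energy of the signal, which is why thresholding or quantizing the coefficients translates directly into a controlled error on the reconstructed signal.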

• Display the difference between thresholding and quantizing a decomposition (download the function perform_quantization.m).

% thresholding
T = 5; % it is up to you to test various thresholds
fw_seuil = fw .* (abs(fw)>T);
% quantization
[fw_quant, fw_quant_val] = perform_quantization(fw, T);
% reconstruction
f_seuil = perform_wavelet_transform(fw_seuil,Jmin,-1); % inverse transform
f_quant = perform_wavelet_transform(fw_quant,Jmin,-1); % inverse transform
% computation of the error
...
% display
clf; subplot(3,1,1);
plot(f); axis tight; title('Original signal');
subplot(3,1,2);
plot(f_seuil); axis tight; title('Approximation using thresholding');
subplot(3,1,3);
plot(f_quant); axis tight; title('Approximation using quantization');
 Comparison between the signal reconstructed after thresholding (approximation) and after quantization (compression). Both look quite similar; the reconstruction is slightly worse after quantization because the coefficients above the threshold are also modified a little.
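The distinction can be made explicit with a dead-zone uniform quantizer, a common choice in wavelet coding (a Python sketch for illustration; the exact convention used by perform_quantization may differ):

```python
import numpy as np

def quantize(x, T):
    """Dead-zone uniform quantizer: integer bin index of each coefficient.
    Coefficients with |x| < T are mapped to 0, like thresholding."""
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.floor(np.abs(x) / T)

def dequantize(q, T):
    """Reconstruct each coefficient at the center of its quantization bin."""
    q = np.asarray(q, dtype=float)
    return np.sign(q) * (np.abs(q) + 0.5) * T

T = 1.0
x = np.array([0.4, 1.7, -2.3])
q = quantize(x, T)       # [0., 1., -2.]
xr = dequantize(q, T)    # [0., 1.5, -2.5]
```

Thresholding only performs the first effect (killing coefficients below T, keeping the others exactly); quantization additionally rounds the surviving coefficients to bin centers, which explains the slightly larger reconstruction error.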

• Compare the histogram of the signal in the spatial domain with the histogram of its wavelet coefficients (download the function compute_histogram.m).

T = 2; % use a threshold >1 in spatial coordinates
[hs,xs] = compute_histogram(f,T);
T = 0.2; % use a finer scale in the wavelet domain
[hw,xw] = compute_histogram(fw,T);
% display the histograms
q = 15; % number of coefficients to display
subplot(1,2,1);
plot( xs((end-1)/2-q:(end-1)/2+q), hs( (end-1)/2-q:(end-1)/2+q ) ); axis tight;
subplot(1,2,2);
plot( xw((end-1)/2-q:(end-1)/2+q), hw( (end-1)/2-q:(end-1)/2+q ) ); axis tight;
% entropy
h_spacial = sum( -hs.*log2(hs) )
h_wavelet = sum( -hw.*log2(hw) )
 Comparison of the signal histogram and of the wavelet-coefficient histogram. The wavelet histogram is very sparse (most of the coefficients are zero), so the entropy of these coefficients is very low. It will thus require fewer bits to code the wavelet decomposition than the original signal samples.
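One pitfall in the entropy lines above: empty histogram bins give 0*log2(0) = NaN in Matlab, so zero bins must be skipped. A guarded version of the computation (Python/NumPy sketch, for illustration):

```python
import numpy as np

def hist_entropy(h):
    """Entropy in bits of a histogram, skipping empty bins."""
    h = np.asarray(h, dtype=float)
    h = h / h.sum()          # make sure the histogram sums to 1
    h = h[h > 0]             # by convention 0 * log2(0) = 0
    return float(-np.sum(h * np.log2(h)))

# A sparse histogram (most mass on one bin) has low entropy, mirroring
# the wavelet case; a flat histogram has maximal entropy (2 bits here).
h_sparse = hist_entropy([0.97, 0.01, 0.01, 0.01])
h_flat = hist_entropy([0.25, 0.25, 0.25, 0.25])
```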

## Compression of the wavelet decomposition for various thresholds.

T_list = 0.3 * 2.^(0:1:12); % list of thresholds
nbr_bit_list = []; psnr_list = [];
for T = T_list
disp( sprintf('Coding for T=%.2f', T) );
% quantization
...
% arithmetic coding of quantized values
...
% reconstructed function
...
% recording of the number of bits and the error
nbr_bit_list(end+1) = ?;
psnr_list(end+1) = psnr(?,?);
end
plot(nbr_bit_list, psnr_list, '.-');
 Distortion (measured using the PSNR) of a wavelet coder. There are two distinct regimes: for a number of bits/symbol above 1, we are in the high-resolution quantization regime, where theory predicts that the distortion curve is a straight line. For a number R of bits/symbol below 1, we are in the non-linear approximation regime, where theory predicts that the error |f-fR|^2 decays like (log(R)/R)^alpha, where alpha is the regularity of the signal away from its singularities.
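The structure of the loop above can be sketched in Python/NumPy on a stand-in coefficient vector (a Laplacian sample, since wavelet coefficients of piecewise-regular signals have a similarly peaked histogram). The quantizer convention and the PSNR normalization are assumptions for illustration; the rate is estimated by the empirical entropy of the quantized symbols, which is what the arithmetic coder approaches:

```python
import numpy as np

def psnr(f, g, vmax=None):
    """Peak signal-to-noise ratio in dB; vmax defaults to max(|f|)."""
    f = np.asarray(f, dtype=float); g = np.asarray(g, dtype=float)
    if vmax is None:
        vmax = np.max(np.abs(f))
    mse = np.mean((f - g) ** 2)
    return float(10 * np.log10(vmax ** 2 / mse))

rng = np.random.default_rng(0)
fw = rng.laplace(scale=1.0, size=4096)  # stand-in for wavelet coefficients

nbr_bit_list, psnr_list = [], []
for T in 0.3 * 2.0 ** np.arange(0, 8):
    q = np.sign(fw) * np.floor(np.abs(fw) / T)   # quantization
    fw_r = np.sign(q) * (np.abs(q) + 0.5) * T    # dequantization
    # rate estimate: empirical entropy of the quantized symbols
    _, counts = np.unique(q, return_counts=True)
    p = counts / q.size
    nbr_bit_list.append(float(-np.sum(p * np.log2(p))))  # bits/symbol
    psnr_list.append(psnr(fw, fw_r))
# larger T: fewer bits/symbol, lower PSNR
```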

• Try with other signals (with more discontinuities, smoother regions, etc.).