# What is this type of average called?

I was wondering if you could tell me what this average is called.

You start with 1 number, you add a number and then divide the total by 2. You then add your next value onto the total, then divide it by 2, and so on and so forth.

For example you have 5,2,6,2,6:

(5+2)/2=3.5

(3.5+6)/2 = 4.75

(4.75+2)/2= 3.375

(3.375 + 6)/2 = 4.6875

(I don't know if I've made a mistake in the calculations, I did them very quickly lol)

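As a quick sanity check, the scheme above can be sketched in a few lines of Python (the function name `running_half_average` is just for illustration); it reproduces the calculations in the post, which are in fact correct:

```python
def running_half_average(values):
    """The scheme from the post: keep a running value, and for each new
    number add it on and divide the total by 2."""
    acc = values[0]          # start with the first number
    out = []
    for x in values[1:]:
        acc = (acc + x) / 2  # add the next value, then divide by 2
        out.append(acc)
    return out

print(running_half_average([5, 2, 6, 2, 6]))
# [3.5, 4.75, 3.375, 4.6875]
```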


#4

a 2 point moving average for 5,2,6,2,6 goes....

3.5

4

4

4

you find the average of 1st & 2nd, then 2nd & 3rd, then 3rd & 4th, then 4th & 5th.

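For comparison, a 2-point moving average can be sketched like this (the function name is illustrative); note each output uses two *inputs*, never a previous output:

```python
def moving_average_2(values):
    """2-point moving average: mean of 1st & 2nd, then 2nd & 3rd, ..."""
    return [(a + b) / 2 for a, b in zip(values, values[1:])]

print(moving_average_2([5, 2, 6, 2, 6]))
# [3.5, 4.0, 4.0, 4.0]
```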


#5

(Original post by **VirgoStrain**) A 2-point moving average.


#6

(Original post by **atsruser**) It's not a 2-point moving average. It may be some kind of smoothing algorithm though. Maybe one for the signal processing specialists. Any of them around?

(Original post by **the bear**) a 2 point moving average for 5,2,6,2,6 goes....

3.5

4

4

4

you find the average of 1st & 2nd, then 2nd & 3rd, then 3rd & 4th, then 4th & 5th.

(Original post by **wagwanpifftingg**) I know, there is a name for it though, something along the lines of a specific type of moving average?


#7

Looks like a 2-tap moving average Finite Impulse Response (FIR) filter.


#9

(Original post by **uberteknik**) Looks like a 2-tap moving average Finite Impulse Response FIR filter.

Its impulse response is

1, 0.5, 0.25, 0.125, ...

which is clearly not going to be zero at any time in the future, so it's infinite: an IIR, not an FIR.
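That impulse-response argument can be checked with a short sketch, assuming the post's scheme of passing the first sample through unchanged (the function name is illustrative):

```python
def impulse_response(n):
    """First n terms of the scheme's response to an impulse 1, 0, 0, ...
    (seeding with the first input sample, as in the original post)."""
    acc = 1.0                  # the impulse itself is the first sample
    out = [acc]
    for _ in range(n - 1):
        acc = (acc + 0.0) / 2  # every subsequent input is 0
        out.append(acc)
    return out

print(impulse_response(5))
# [1.0, 0.5, 0.25, 0.125, 0.0625]
```

The response halves forever and never reaches exactly zero, which is what makes the filter IIR rather than FIR.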


#10

(Original post by **wagwanpifftingg**) Thank you, are you able to break down each part into English for me!

https://www.dsprelated.com/freebooks...e_Digital.html

It's implemented as a so-called "tapped delay line" i.e. a line of memory elements that store the last N values, whose values we can "tap off" to add together.

These things are called "filters" for this reason: your data stream is described in terms of its values at different times. However you can also represent it in terms of its Fourier series i.e. the set of sine and cosine signals at various frequencies that can be added to reconstruct it. A filter removes or alters some of those frequencies, and an IIR or FIR filter does just that e.g. we can see that your algorithm reduces the distance between large adjacent values so it is removing some high frequency components from the Fourier series.

However, the IIR/FIR terminology only describes how the filter is implemented rather than what it does to the frequencies so it isn't actually particularly useful information here - we really want to characterise the algorithm by what precise effect it has on the data, not by its inner workings.

It's clear that yours is some kind of low pass filter (i.e. it allows lower frequencies to remain in the o/p) but apart from that I don't know if it is common enough to have a particular name.
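As a rough illustration of the tapped-delay-line idea (not code from the linked page; the 4-tap length is an arbitrary choice), an N-tap moving-average FIR filter might be sketched like this:

```python
from collections import deque

def fir_moving_average(samples, n_taps=4):
    """N-tap moving-average FIR filter: the deque acts as the 'tapped
    delay line' holding the last n_taps inputs, which are tapped off
    and averaged to form each output sample."""
    delay_line = deque([0.0] * n_taps, maxlen=n_taps)
    out = []
    for x in samples:
        delay_line.append(x)  # newest sample evicts the oldest
        out.append(sum(delay_line) / n_taps)
    return out

print(fir_moving_average([4, 4, 4, 4]))
# [1.0, 2.0, 3.0, 4.0]
```

Unlike the recursive scheme in the original post, this filter's output depends only on the last `n_taps` inputs, so its impulse response really does go to zero.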


#11

(Original post by **atsruser**) It's clear that yours is some kind of low pass filter (i.e. it allows lower frequencies to remain in the o/p) but apart from that I don't know if it is common enough to have a particular name.

It's perhaps worth saying that this kind of filter is the kind of "good enough" solution that gets hacked up all the time by programmers who need *some* level of filtering on an input, but don't want to spare the time and effort or CPU cycles to do anything complicated.

This is particularly true in the context of calling it an "average" (rather than a filter). If you want to show the average of the last 200 values, you need to store the last 200 values, which means you need to implement some kind of ring buffer, and you've either got to make an incremental tally of the average (and potentially worry about accumulation of round off error), or you have to add up 200 values every time you want the new average.

Instead, you can just take "old_average * 0.99 + new_value * 0.01". It doesn't behave exactly the same (one could argue whether it is better or worse, depending on the task), but it's quick and easy.

The point on this is that if you see this average in code (or a spreadsheet or whatever), before going to great details on analysing why this exact filter response was chosen, it's definitely worth trying to check if it was just done because it was the easy option.
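A sketch contrasting the two approaches described above (the names are illustrative, and a small window stands in for the 200-value buffer to keep the example short):

```python
from collections import deque

def windowed_average(stream, n=200):
    """True average of the last n values: needs a ring buffer of size n."""
    buf = deque(maxlen=n)
    for x in stream:
        buf.append(x)
        yield sum(buf) / len(buf)

def cheap_average(stream, alpha=0.01):
    """The hacked-up alternative: old_average * 0.99 + new_value * 0.01.
    Stores a single value, no buffer; similar but not identical behaviour."""
    avg = None
    for x in stream:
        avg = x if avg is None else avg * (1 - alpha) + x * alpha
        yield avg

data = [0.0] * 5 + [1.0] * 5
print(list(windowed_average(data, n=3))[-1])      # 1.0 (window is fully past the step)
print(list(cheap_average(data, alpha=0.5))[-1])   # 0.96875: still lagging the step
```

The windowed version forgets old data completely after n samples; the cheap version only ever fades it out, which is one concrete way their behaviour differs.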


#12

(Original post by **DFranklin**) I've seen it called a first order low pass filter. (Which as you know, basically just says what we know, single tap from the previous result, and it's low pass.)


#13

I thought the order was based on the index shift; certainly you **have** first order IIRs, and it's hard to see how you could have any meaningful IIR with one less term.

or maybe just hacked up by someone who never cracked open a DSP book in their lives...

Edit: to be clear, I don't know much about formal DSP.


#16

(Original post by **Doonesbury**) It's a flux capacitor.


#17

(Original post by **DFranklin**) I thought the order was based on the index shift; certainly you **have** first order IIRs, and it's hard to see how you could have any meaningful IIR with one less term.

Was talking about resizing an image from 100% to 101% with someone, and they explained how they would first create an image 101 times larger before filtering. In the real world, you don't do that. (To be fair, you don't do that in real-world DSP either, it's a conceptual idea. But 'tis a weird one, if you ask me.)

Under some circumstances which I completely forget about the frequencies in the original image (Nyquist, Shannon etc are your friends) this gives you the same result as if you sampled it at twice the rate in the first place. And otherwise you get at least a viewable image. It can be done at reasonable frame rates with suitable hardware implementations.


#18

(Original post by **atsruser**) In fact, believe it or not, you can do precisely that, for small images at least, to "upsample" an image. You take the pixels of the image, insert say one (or more) 0 pixel between each, then perform a low pass filter via a 2D convolution over the image to "fill in" the zeros. Under some circumstances which I completely forget about the frequencies in the original image (Nyquist, Shannon etc are your friends) this gives you the same result as if you sampled it at twice the rate in the first place. And otherwise you get at least a viewable image. It can be done at reasonable frame rates with suitable hardware implementations.

If you think about it, none of the zero pixels actually contribute anything to the output (after convolution), so you're basically massively increasing the resolution of the image simply in order to determine the convolution weights of the pixels you **do** want.

What you actually do is perform a weighted sum of the pixels in the original image where the coefficients depend on the 'subpixel position' you're effectively wanting to sample. This does essentially the same thing but without the huge increase in image size.

As I understand it, a high performance DSP algorithm may *conceptually* do what you say, but in practice it will do what I describe.

Note: I work in the film+video industry, my company makes a hardware upscaling product (although it's a lot more complicated than this, using motion estimation to combine images from multiple frames. I don't work on it myself).

Edit: for context, I first saw the DSP approach in the context of an engineer with a DSP background explaining to me how he planned to upscale an image from 1998 to 2048 by first creating a roughly 40000 x 40000 image. The chorus of "er, don't do that" was pretty deafening...


#19

(Original post by **DFranklin**) As I understand it, a high performance DSP algorithm may *conceptually* do what you say, but in practice it will do what I describe.

I don't know what the state of the art is, but hardware implementations have been developed to do precisely that, though it is certainly not efficient in software. It's relatively straightforward in dedicated hardware with suitable frame buffer manipulation logic to insert 0s and dedicated hardware convolvers. Many years ago, I in fact wrote the firmware for such a device.

#20

(Original post by **atsruser**) I don't know what the state of the art is, but hardware implementations have been developed to do precisely that, though it is certainly not efficient in software. It's relatively straightforward in dedicated hardware with suitable frame buffer manipulation logic to insert 0s and dedicated hardware convolvers. Many years ago, I in fact wrote the firmware for such a device.

It's *conceptualised* as you describe, but in practice you're going to virtualize the zeroes (i.e. they don't take up storage, they don't take up convolution (multiply-add) slots, because there's no point in multiplying by 0), and you end up with the weighted averages.

e.g. Image after upsampling looks like

0 0 0 0 A 0 0 0 0 0 0 B 0 0 0 0 0 0

kernel looks like (not a realistic kernel as I'm being lazy):

1 2 3 4 5 6 7 6 5 4 3 2 1

then the calculation for a typical pixel might look like:

1 * 0 + 2 * 0 + 3 * 0 + 4 * 0 + 5 * A + 6 * 0 + 5 * 0 ... 2 * B + 1 * 0 = 5 * A + 2 * B (actual values will depend on the pixel being sampled).

I've never seen an implementation that wouldn't essentially short-circuit almost all of this and just calculate 5 * A + 2 * B.
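The worked example above can be checked numerically; this sketch uses the post's (admittedly unrealistic) kernel and confirms that full convolution of the zero-stuffed signal produces the same value as the short-circuited sum 5 * A + 2 * B:

```python
def convolve(signal, kernel):
    """Plain full convolution: out[i + j] accumulates signal[i] * kernel[j]."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

A, B = 10.0, 20.0
upsampled = [0.0] * 18
upsampled[4], upsampled[11] = A, B  # A and B with zeros stuffed between

kernel = [1, 2, 3, 4, 5, 6, 7, 6, 5, 4, 3, 2, 1]
filtered = convolve(upsampled, kernel)

direct = 5 * A + 2 * B              # the short-circuited weighted sum
# filtered[12] is the output sample where A lands on weight 5 and B on weight 2
print(filtered[12], direct)
# 90.0 90.0
```

Every other multiply in that output sample hits a stuffed zero, which is exactly why a real implementation short-circuits them.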
