
What is this type of average called?

I was wondering if you could tell me what this average is called.
You start with 1 number, you add a number and then divide the total by 2. You then add your next value onto the total, then divide it by 2, and so on and so forth.

For example you have 5,2,6,2,6:
(5+2)/2=3.5
(3.5+6)/2 = 4.75
(4.75+2)/2= 3.375
(3.375 + 6)/2 = 4.6875

(I don't know if I have made a mistake in the calculations, I did it very quickly lol)
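
In code, the procedure described above is roughly this (a minimal Python sketch, just to illustrate):

def running_halved_average(values):
    # Start with the first number, then repeatedly add the next value
    # and halve the running total, as described above.
    result = values[0]
    for v in values[1:]:
        result = (result + v) / 2
    return result

print(running_halved_average([5, 2, 6, 2, 6]))  # 4.6875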


you get a different answer if the numbers are reversed... 6,2,6,2,5

:dontknow:
Original post by the bear
you get a different answer if the numbers are reversed... 6,2,6,2,5

:dontknow:


I know, there is a name for it though, something along the lines of a specific type of moving average?
a 2 point moving average for 5,2,6,2,6 goes....

3.5, 4, 4, 4

you find the average of 1st & 2nd, then 2nd & 3rd, then 3rd & 4th, then 4th & 5th.
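
(As a quick Python sketch of that, purely illustrative:)

def two_point_moving_average(values):
    # Average each adjacent pair: (1st, 2nd), (2nd, 3rd), (3rd, 4th), ...
    return [(a + b) / 2 for a, b in zip(values, values[1:])]

print(two_point_moving_average([5, 2, 6, 2, 6]))  # [3.5, 4.0, 4.0, 4.0]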
Original post by VirgoStrain
A 2-point moving average.


It's not a 2-point moving average. It may be some kind of smoothing algorithm though. Maybe one for the signal processing specialists. Any of them around?
Original post by atsruser
It's not a 2-point moving average. It may be some kind of smoothing algorithm though. Maybe one for the signal processing specialists. Any of them around?


Original post by the bear
a 2 point moving average for 5,2,6,2,6 goes....

3.5, 4, 4, 4

you find the average of 1st & 2nd, then 2nd & 3rd, then 3rd & 4th, then 4th & 5th.


Original post by wagwanpifftingg
I know, there is a name for it though, something along the lines of a specific type of moving average?


Looks like a 2-tap moving average Finite Impulse Response (FIR) filter.
Original post by uberteknik
Looks like a 2-tap moving average Finite Impulse Response (FIR) filter.


Thank you, are you able to break down each part into English for me?
Original post by uberteknik
Looks like a 2-tap moving average Finite Impulse Response (FIR) filter.

That's what I thought.
Original post by uberteknik
Looks like a 2-tap moving average Finite Impulse Response (FIR) filter.


Isn't it an IIR filter? We feed the output back in to generate the next output, and its impulse response would be:

0.5, 0.25, 0.125, 0.0625, ...

which is clearly never going to be zero at any time in the future.
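
(A small Python check, assuming the recursion y[n] = (y[n-1] + x[n]) / 2 matches the algorithm in the question, showing the geometric tail:)

# Feed a unit impulse through the recursion and watch the output decay.
x = [1] + [0] * 7                 # unit impulse
y, response = 0.0, []
for sample in x:
    y = (y + sample) / 2
    response.append(y)
print(response)  # [0.5, 0.25, 0.125, 0.0625, ...] -- never exactly zero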
Original post by wagwanpifftingg
Thank you, are you able to break down each part into English for me?


In fact the algorithm you've described is a so-called IIR filter, since it uses feedback - it uses the last output value to compute the next. An FIR filter uses no feedback, and essentially just stores the last N values of the data stream and adds them together with some weighting to produce the output. Here's a brief overview:

https://www.dsprelated.com/freebooks/filters/Finite_Impulse_Response_Digital.html

An FIR filter is implemented as a so-called "tapped delay line", i.e. a line of memory elements that store the last N values, which we can "tap off" and add together.

These things are called "filters" for this reason: your data stream is described in terms of its values at different times. However, you can also represent it in terms of its Fourier series, i.e. the set of sine and cosine signals at various frequencies that can be added together to reconstruct it. A filter removes or alters some of those frequencies, and an IIR or FIR filter does just that: for example, we can see that your algorithm smooths out large jumps between adjacent values, so it is removing some of the high-frequency components from the Fourier series.

However, the IIR/FIR terminology only describes how the filter is implemented rather than what it does to the frequencies so it isn't actually particularly useful information here - we really want to characterise the algorithm by what precise effect it has on the data, not by its inner workings.

It's clear that yours is some kind of low pass filter (i.e. it allows lower frequencies to remain in the output), but apart from that I don't know if it is common enough to have a particular name.
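
(An illustrative Python sketch of the FIR/IIR distinction, not anyone's exact implementation; the tap weights here are just examples:)

def fir_filter(x, taps):
    # FIR: a weighted sum of the most recent len(taps) *input* samples
    # (a "tapped delay line"); no feedback from the output.
    return [sum(w * x[n - k] for k, w in enumerate(taps))
            for n in range(len(taps) - 1, len(x))]

def iir_feedback(x):
    # The algorithm from the question: the next output depends on the
    # previous *output*, which is what makes it IIR.
    y, out = x[0], [x[0]]
    for sample in x[1:]:
        y = (y + sample) / 2
        out.append(y)
    return out

data = [5, 2, 6, 2, 6]
print(fir_filter(data, [0.5, 0.5]))  # 2-tap moving average: [3.5, 4.0, 4.0, 4.0]
print(iir_feedback(data))            # running-halved average: [5, 3.5, 4.75, 3.375, 4.6875]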
Original post by atsruser
It's clear that yours is some kind of low pass filter (i.e. it allows lower frequencies to remain in the output), but apart from that I don't know if it is common enough to have a particular name.
I've seen it called a first order low pass filter. (Which, as you know, basically just restates what we already know: a single tap from the previous result, and it's low pass.)

It's perhaps worth saying that this kind of filter is the kind of "good enough" solution that gets hacked up all the time by programmers who need some level of filtering on an input, but don't want to spare the time and effort or CPU cycles to do anything complicated.

This is particularly true in the context of calling it an "average" (rather than a filter). If you want to show the average of the last 200 values, you need to store the last 200 values, which means you need to implement some kind of ring buffer, and you've either got to make an incremental tally of the average (and potentially worry about accumulation of round off error), or you have to add up 200 values every time you want the new average.

Instead, you can just take "old_average * 0.99 + new_value * 0.01". It doesn't behave exactly the same (one could argue whether it is better or worse, depending on the task), but it's quick and easy.

The point of this is that if you see this average in code (or a spreadsheet or whatever), before going into great detail analysing why this exact filter response was chosen, it's definitely worth checking whether it was just done because it was the easy option.
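
(A rough Python comparison of the two approaches; the 0.99/0.01 weighting is just the example from the post above, not a recommendation:)

from collections import deque
import random

window = deque(maxlen=200)   # ring buffer for the "true" average of the last 200 values
ewma = None                  # the cheap "old_average * 0.99 + new_value * 0.01" version

for _ in range(1000):
    value = random.random()
    window.append(value)
    true_avg = sum(window) / len(window)   # re-add the stored values every time
    ewma = value if ewma is None else ewma * 0.99 + value * 0.01

print(true_avg, ewma)   # similar, but not identical, behaviour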
Original post by DFranklin
I've seen it called a first order low pass filter. (Which, as you know, basically just restates what we already know: a single tap from the previous result, and it's low pass.)


Isn't it second order? We have $y[n] = \frac{1}{2}y[n-1] + \frac{1}{2}x[n-1]$ where x is the input and y the output.

The point of this is that if you see this average in code (or a spreadsheet or whatever), before going into great detail analysing why this exact filter response was chosen, it's definitely worth checking whether it was just done because it was the easy option.


or maybe just hacked up by someone who never cracked open a DSP book in their lives...
Original post by atsruser
Isn't it second order? We have $y[n] = \frac{1}{2}y[n-1] + \frac{1}{2}x[n-1]$ where x is the input and y the output.
I thought the order was based on the index shift; certainly you have first order IIRs, and it's hard to see how you could have any meaningful IIR with one less term.

or maybe just hacked up by someone who never cracked open a DSP book in their lives...
DSP people have their own way of looking at things, which I have to say comes across as extremely weird to people in other areas of the computing industry. I was talking about resizing an image from 100% to 101% with someone, and they explained how they would first create an image 101 times larger before filtering. In the real world, you don't do that. (To be fair, you don't do that in real world DSP either, it's a conceptual idea. But 'tis a weird one, if you ask me.)

Edit: to be clear, I don't know much about formal DSP.
It's a flux capacitor.
Original post by DFranklin
I've seen it called a first order low pass filter. (Which, as you know, basically just restates what we already know: a single tap from the previous result, and it's low pass.)

It's perhaps worth saying that this kind of filter is the kind of "good enough" solution that gets hacked up all the time by programmers who need some level of filtering on an input, but don't want to spare the time and effort or CPU cycles to do anything complicated.

This is particularly true in the context of calling it an "average" (rather than a filter). If you want to show the average of the last 200 values, you need to store the last 200 values, which means you need to implement some kind of ring buffer, and you've either got to make an incremental tally of the average (and potentially worry about accumulation of round off error), or you have to add up 200 values every time you want the new average.

Instead, you can just take "old_average * 0.99 + new_value * 0.01". It doesn't behave exactly the same (one could argue whether it is better or worse, depending on the task), but it's quick and easy.

The point of this is that if you see this average in code (or a spreadsheet or whatever), before going into great detail analysing why this exact filter response was chosen, it's definitely worth checking whether it was just done because it was the easy option.


I'm only doing a GCSE in Computer Science. The easy way is good enough for me! I just need the name of it so I can write about it!
Original post by Doonesbury
It's a flux capacitor.


I have just implemented this smoothing algorithm on one of my transistorised computing devices, and it is now drawing 1.21 jiggawatts.
Original post by DFranklin
I thought the order was based on the index shift; certainly you have first order IIRs, and it's hard to see how you could have any meaningful IIR with one less term.

I think that the term "order" for filters originally comes from the order of the differential equation required to model an analogue RLC filter circuit. I'm not entirely sure if it is used very rigorously in DSP, so you may be right (or wrong...)

I was talking about resizing an image from 100% to 101% with someone, and they explained how they would first create an image 101 times larger before filtering. In the real world, you don't do that. (To be fair, you don't do that in real world DSP either, it's a conceptual idea. But 'tis a weird one, if you ask me.)

In fact, believe it or not, you can do precisely that, for small images at least, to "upsample" an image. You take the pixels of the image, insert, say, one (or more) zero pixels between each pair, then perform a low pass filter via a 2D convolution over the image to "fill in" the zeros.

Under some circumstances, which I completely forget, concerning the frequencies in the original image (Nyquist, Shannon etc. are your friends), this gives you the same result as if you had sampled it at twice the rate in the first place. And otherwise you get at least a viewable image. It can be done at reasonable frame rates with suitable hardware implementations.
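
(A 1-D Python sketch of the idea, with a deliberately crude triangular kernel; a real design would use something closer to a windowed sinc:)

def upsample_by_two(samples, kernel=(0.5, 1.0, 0.5)):
    # Insert a zero after each sample, then low-pass filter by convolution
    # to "fill in" the zeros.
    stuffed = []
    for s in samples:
        stuffed.extend([s, 0.0])
    half = len(kernel) // 2
    out = []
    for n in range(len(stuffed)):
        acc = 0.0
        for k, w in enumerate(kernel):
            i = n + k - half
            if 0 <= i < len(stuffed):
                acc += w * stuffed[i]
        out.append(acc)
    return out

print(upsample_by_two([5, 2, 6, 2, 6]))  # originals interleaved with filled-in values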
Original post by atsruser
In fact, believe it or not, you can do precisely that, for small images at least, to "upsample" an image. You take the pixels of the image, insert, say, one (or more) zero pixels between each pair, then perform a low pass filter via a 2D convolution over the image to "fill in" the zeros.
Yes, you *can* do that, but it is horrendously inefficient (particularly in the +1% scenario I described).

If you think about it, none of the zero pixels actually contribute anything to the output (after convolution), so you're basically massively increasing the resolution of the image simply in order to determine the convolution weights of the pixels you do want.

What you actually do is perform a weighted sum of the pixels in the original image, where the coefficients depend on the 'subpixel position' you're effectively wanting to sample (sketched at the end of this post). This does essentially the same thing but without the huge increase in image size.

Under some circumstances, which I completely forget, concerning the frequencies in the original image (Nyquist, Shannon etc. are your friends), this gives you the same result as if you had sampled it at twice the rate in the first place.
As long as there are no frequencies above the Nyquist limit, a sinc kernel will theoretically restore the original image at arbitrary resolution (but note the infinite support!). Again, however, you don't need to upsample into a largely null image in order to do this.

And otherwise you get at least a viewable image. It can be done at reasonable frame rates with suitable hardware implementations.
As I understand it, a high performance DSP algorithm may conceptually do what you say, but in practice it will do what I describe.

Note: I work in the film+video industry; my company makes a hardware upscaling product (although it's a lot more complicated than this, using motion estimation to combine images from multiple frames. I don't work on it myself).

Edit: for context, I first saw the DSP approach in the context of an engineer with a DSP background explaining to me how he planned to upscale an image from 1998 to 2048 by first creating a roughly 40000 x 40000 image. The chorus of "er, don't do that" was pretty deafening...
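
(For contrast, a minimal 1-D Python sketch of the direct approach: sample the original data at fractional positions using a weighted sum of neighbouring values, with linear weights here purely for illustration:)

def resample(samples, scale):
    out = []
    for n in range(int(len(samples) * scale)):
        pos = n / scale                          # position in the original data
        i = int(pos)
        frac = pos - i                           # the "subpixel position"
        a = samples[i]
        b = samples[min(i + 1, len(samples) - 1)]
        out.append(a * (1 - frac) + b * frac)    # weights depend on frac
    return out

print(resample([5, 2, 6, 2, 6], 1.01))  # ~1% more samples, no giant intermediate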
Original post by DFranklin

As I understand it, a high performance DSP algorithm may conceptually do what you say, but in practice it will do what I describe.

I don't know what the state of the art is, but hardware implementations have been developed to do precisely that, though it is certainly not efficient in software. It's relatively straightforward in dedicated hardware with suitable frame buffer manipulation logic to insert 0s and dedicated hardware convolvers. Many years ago, I in fact wrote the firmware for such a device.
Original post by atsruser
I don't know what the state of the art is, but hardware implementations have been developed to do precisely that, though it is certainly not efficient in software. It's relatively straightforward in dedicated hardware with suitable frame buffer manipulation logic to insert 0s and dedicated hardware convolvers. Many years ago, I in fact wrote the firmware for such a device.
As I say, in actual implementations I've seen, it could be conceptualised as you describe, but in practice you're going to virtualize the zeroes (i.e. they don't take up storage, they don't take up convolution (multiply-add) slots, because there's no point in multiplying by 0), and you end up with the weighted averages.

e.g. Image after upsampling looks like

0 0 0 0 A 0 0 0 0 0 0 B 0 0 0 0 0 0

kernel looks like (not a realistic kernel as I'm being lazy):
1 2 3 4 5 6 7 6 5 4 3 2 1

then the calculation for a typical pixel might look like:

1 * 0 + 2 * 0 + 3 * 0 + 4 * 0 + 5 * A + 6 * 0 + 7 * 0 + ... + 2 * B + 1 * 0 = 5 * A + 2 * B (actual values will depend on the pixel being sampled).

I've never seen an implementation that wouldn't essentially short-circuit almost all of this and just calculate 5 * A + 2 * B.
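
(A quick Python check that the zero-stuffed convolution really does collapse to 5 * A + 2 * B for that output position; A = 10 and B = 100 are just dummy values:)

A, B = 10, 100
row = [0, 0, 0, 0, A, 0, 0, 0, 0, 0, 0, B, 0, 0, 0, 0, 0, 0]
kernel = [1, 2, 3, 4, 5, 6, 7, 6, 5, 4, 3, 2, 1]

centre = 6   # the output position that lines up with the example above
full = sum(w * row[centre + k - 6] for k, w in enumerate(kernel))
print(full, 5 * A + 2 * B)   # both 250: the zeros contribute nothing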
