
What is this type of average called?

wagwanpifftingg (Thread Starter):
    I was wondering if you could tell me what this average is called.
    You start with 1 number, you add a number and then divide the total by 2. You then add your next value onto the total, then divide it by 2, and so on and so forth.

    For example you have 5,2,6,2,6:
    (5+2)/2=3.5
    (3.5+6)/2 = 4.75
    (4.75+2)/2= 3.375
    (3.375 + 6)/2 = 4.6875

    (I don't know if I've made a mistake in the calculations, I did it very quickly lol)
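For anyone who wants to play with it, the scheme described above can be sketched in a few lines of Python (the function name is just illustrative):

```python
def running_halving_average(values):
    # Start with the first number, then repeatedly add the next
    # value and divide the running total by 2, as described above.
    total = values[0]
    results = []
    for v in values[1:]:
        total = (total + v) / 2
        results.append(total)
    return results

print(running_halving_average([5, 2, 6, 2, 6]))
# [3.5, 4.75, 3.375, 4.6875]
```

This reproduces the worked example above exactly, so the quick hand calculations are in fact correct.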

the bear:
    you get a different answer if the numbers are reversed... 6,2,6,2,5

    :dontknow:

wagwanpifftingg (Thread Starter):
    (Original post by the bear)
    you get a different answer if the numbers are reversed... 6,2,6,2,5

    :dontknow:
    i know, there is a name for it though, something along the lines of a specific type of moving average????

VirgoStrain:
    A 2-point moving average.

the bear:
    a 2 point moving average for 5,2,6,2,6 goes: 3.5, 4, 4, 4.
    you find the average of 1st & 2nd, then 2nd & 3rd, then 3rd & 4th, then 4th & 5th.
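As a quick sketch (function name illustrative), that ordinary 2-point moving average is just:

```python
def moving_average_2pt(values):
    # Average each adjacent pair: (1st+2nd)/2, (2nd+3rd)/2, ...
    return [(a + b) / 2 for a, b in zip(values, values[1:])]

print(moving_average_2pt([5, 2, 6, 2, 6]))
# [3.5, 4.0, 4.0, 4.0]
```

Note the difference from the thread starter's scheme: here each output depends only on two inputs, never on a previous output.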

atsruser:
    (Original post by VirgoStrain)
    A 2-point moving average.
    It's not a 2-point moving average. It may be some kind of smoothing algorithm though. Maybe one for the signal processing specialists. Any of them around?

uberteknik (Study Helper):
    (Original post by atsruser)
    It's not a 2-point moving average. It may be some kind of smoothing algorithm though. Maybe one for the signal processing specialists. Any of them around?
    (Original post by the bear)
    a 2 point moving average for 5,2,6,2,6 goes: 3.5, 4, 4, 4.
    you find the average of 1st & 2nd, then 2nd & 3rd, then 3rd & 4th, then 4th & 5th.
    (Original post by wagwanpifftingg)
    i know, there is a name for it though, something along the lines of a specific type of moving average????
    Looks like a 2-tap moving average Finite Impulse Response (FIR) filter.

wagwanpifftingg (Thread Starter):
    (Original post by uberteknik)
    Looks like a 2-tap moving average Finite Impulse Response (FIR) filter.
    thank you, are you able to break down each part into English for me?

A TSR Support Team / Study Helper member:
    (Original post by uberteknik)
    Looks like a 2-tap moving average Finite Impulse Response (FIR) filter.
    That's what I thought.

atsruser:
    (Original post by uberteknik)
    Looks like a 2-tap moving average Finite Impulse Response (FIR) filter.
    Isn't it an IIR filter? - we feed the previous output back in to generate the next one, and its impulse response would be:

    1, 0.5, 0.25, 0.125, ...

    which is clearly not going to be zero at any time in the future.
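That impulse response can be checked numerically by feeding a unit impulse through the recurrence, seeding the output with the first input as the thread starter does (a minimal sketch):

```python
impulse = [1, 0, 0, 0, 0]

y = impulse[0]          # seed with the first value, as in the thread
response = [y]
for x in impulse[1:]:
    y = (y + x) / 2     # the recurrence described in the opening post
    response.append(y)

print(response)
# [1, 0.5, 0.25, 0.125, 0.0625]
```

The output halves forever but never reaches zero, which is exactly the "infinite impulse response" property being claimed.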

atsruser:
    (Original post by wagwanpifftingg)
    thank you, are you able to break down each part into English for me?
    In fact the algorithm you've described is a so-called IIR filter, since it uses feedback - it uses the last output value to compute the next. An FIR filter uses no feedback, and essentially just stores the last N values of the data stream and adds them together with some weighting to produce the output. Here's a brief overview:

    https://www.dsprelated.com/freebooks...e_Digital.html

    It's implemented as a so-called "tapped delay line" i.e. a line of memory elements that store the last N values, whose values we can "tap off" to add together.

    These things are called "filters" for this reason: your data stream is described in terms of its values at different times. However you can also represent it in terms of its Fourier series i.e. the set of sine and cosine signals at various frequencies that can be added to reconstruct it. A filter removes or alters some of those frequencies, and an IIR or FIR filter does just that e.g. we can see that your algorithm reduces the distance between large adjacent values so it is removing some high frequency components from the Fourier series.

    However, the IIR/FIR terminology only describes how the filter is implemented rather than what it does to the frequencies so it isn't actually particularly useful information here - we really want to characterise the algorithm by what precise effect it has on the data, not by its inner workings.

    It's clear that yours is some kind of low pass filter (i.e. it allows lower frequencies to remain in the o/p) but apart from that I don't know if it is common enough to have a particular name.
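A minimal sketch of the FIR structure described above (a tapped delay line, no feedback); the names are illustrative, and the 2-point moving average from earlier in the thread is used as the example taps:

```python
from collections import deque

def fir_filter(xs, taps):
    # Keep the last len(taps) inputs in a delay line and output their
    # weighted sum. No feedback: each input eventually falls off the
    # end of the line, so the impulse response is finite.
    delay = deque([0.0] * len(taps), maxlen=len(taps))
    out = []
    for x in xs:
        delay.appendleft(x)
        out.append(sum(w * v for w, v in zip(taps, delay)))
    return out

# the 2-point moving average as a 2-tap FIR with weights 1/2, 1/2:
print(fir_filter([5, 2, 6, 2, 6], [0.5, 0.5]))
# [2.5, 3.5, 4.0, 4.0, 4.0]
```

(The first output differs from the hand-computed moving average only because the delay line starts out filled with zeros.)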

DFranklin:
    (Original post by atsruser)
    It's clear that yours is some kind of low pass filter (i.e. it allows lower frequencies to remain in the o/p) but apart from that I don't know if it is common enough to have a particular name.
    I've seen it called a first order low pass filter. (Which, as you know, basically just restates what we can see: a single tap from the previous result, and it's low pass.)

    It's perhaps worth saying that this kind of filter is the kind of "good enough" solution that gets hacked up all the time by programmers who need some level of filtering on an input, but don't want to spare the time and effort or CPU cycles to do anything complicated.

    This is particularly true in the context of calling it an "average" (rather than a filter). If you want to show the average of the last 200 values, you need to store the last 200 values, which means you need to implement some kind of ring buffer, and you've either got to make an incremental tally of the average (and potentially worry about accumulation of round off error), or you have to add up 200 values every time you want the new average.

    Instead, you can just take "old_average * 0.99 + new_value * 0.01". It doesn't behave exactly the same (one could argue whether it is better or worse, depending on the task), but it's quick and easy.

    The point on this is that if you see this average in code (or a spreadsheet or whatever), before going to great details on analysing why this exact filter response was chosen, it's definitely worth trying to check if it was just done because it was the easy option.
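The "quick and easy" version above can be sketched like this (names illustrative):

```python
def exponential_average(values, alpha=0.01):
    # new_average = old_average * (1 - alpha) + new_value * alpha.
    # Only the previous average is stored: no ring buffer, and no
    # re-summing of 200 values on every update.
    avg = values[0]
    for v in values[1:]:
        avg = avg * (1 - alpha) + v * alpha
    return avg

print(exponential_average([0, 4], alpha=0.5))
# 2.0
```

With alpha = 0.5 this is exactly the thread starter's scheme; alpha = 0.01 gives the "weighted towards roughly the last hundred values" behaviour described above.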

atsruser:
    (Original post by DFranklin)
    I've seen it called a first order low pass filter. (Which, as you know, basically just restates what we can see: a single tap from the previous result, and it's low pass.)
    Isn't it second order? We have y[n] = (1/2) y[n-1] + (1/2) x[n], where x is the input and y the output.

    The point on this is that if you see this average in code (or a spreadsheet or whatever), before going to great details on analysing why this exact filter response was chosen, it's definitely worth trying to check if it was just done because it was the easy option.
    or maybe just hacked up by someone who never cracked open a DSP book in their lives...

DFranklin:
    (Original post by atsruser)
    Isn't it second order? We have y[n] = (1/2) y[n-1] + (1/2) x[n], where x is the input and y the output.
    I thought the order was based on the index shift; certainly you have first order IIRs, and it's hard to see how you could have any meaningful IIR with one less term.

    or maybe just hacked up by someone who never cracked open a DSP book in their lives...
    DSP people have their own way of looking at things, which I have to say comes across as extremely weird to people in other areas of the computing industry. I was talking about resizing an image from 100 to 101% with someone, and they explained how they would first create an image 101 times larger before filtering. In the real world, you don't do that. (To be fair, you don't do that in real world DSP either, it's a conceptual idea. But 'tis a weird one, if you ask me.)

    Edit: to be clear, I don't know much about formal DSP.

Doonesbury (Section Leader):
    It's a flux capacitor.

wagwanpifftingg (Thread Starter):
    (Original post by DFranklin)
    I've seen it called a first order low pass filter. (Which, as you know, basically just restates what we can see: a single tap from the previous result, and it's low pass.)

    It's perhaps worth saying that this kind of filter is the kind of "good enough" solution that gets hacked up all the time by programmers who need some level of filtering on an input, but don't want to spare the time and effort or CPU cycles to do anything complicated.

    This is particularly true in the context of calling it an "average" (rather than a filter). If you want to show the average of the last 200 values, you need to store the last 200 values, which means you need to implement some kind of ring buffer, and you've either got to make an incremental tally of the average (and potentially worry about accumulation of round off error), or you have to add up 200 values every time you want the new average.

    Instead, you can just take "old_average * 0.99 + new_value * 0.01". It doesn't behave exactly the same (one could argue whether it is better or worse, depending on the task), but it's quick and easy.

    The point on this is that if you see this average in code (or a spreadsheet or whatever), before going to great details on analysing why this exact filter response was chosen, it's definitely worth trying to check if it was just done because it was the easy option.
    I'm only doing a GCSE in Computer Science. The easy way is good enough for me! I just need the name of it so I can write about it!

atsruser:
    (Original post by Doonesbury)
    It's a flux capacitor.
    I have just implemented this smoothing algorithm on one of my transistorised computing devices, and it is now drawing 1.21 jiggawatts.

atsruser:
    (Original post by DFranklin)
    I thought the order was based on the index shift; certainly you have first order IIRs, and it's hard to see how you could have any meaningful IIR with one less term.
    I think that the term "order" for filters originally comes from the order of the DE required to model an analogue RLC filter circuit. I'm not entirely sure if it is used very rigorously in DSP, so you may be right (or wrong...)

    I was talking about resizing an image from 100 to 101% with someone, and they explained how they would first create an image 101 times larger before filtering. In the real world, you don't do that. (To be fair, you don't do that in real world DSP either, it's a conceptual idea. But 'tis a weird one, if you ask me.)
    In fact, believe it or not, you can do precisely that, for small images at least, to "upsample" an image. You take the pixels of the image, insert say one (or more) 0 pixel between each, then perform a low pass filter via a 2D convolution over the image to "fill in" the zeros.

    Under some circumstances (which I completely forget) concerning the frequencies in the original image (Nyquist, Shannon etc. are your friends), this gives you the same result as if you had sampled it at twice the rate in the first place. And otherwise you get at least a viewable image. It can be done at reasonable frame rates with suitable hardware implementations.
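A toy 1-D sketch of that zero-stuffing idea (a simple triangular kernel stands in for a proper low-pass filter; all names are illustrative):

```python
def upsample_zero_stuff(xs, factor=2):
    # Insert (factor - 1) zeros between the samples...
    stuffed = []
    for x in xs:
        stuffed.append(float(x))
        stuffed.extend([0.0] * (factor - 1))
    # ...then low-pass filter to "fill in" the zeros. A triangular
    # kernel [0.5, 1, 0.5] performs linear interpolation for factor 2.
    kernel = [0.5, 1.0, 0.5]
    padded = [0.0] + stuffed + [0.0]
    return [sum(k * padded[i + j] for j, k in enumerate(kernel))
            for i in range(len(stuffed))]

print(upsample_zero_stuff([1, 2, 3]))
# [1.0, 1.5, 2.0, 2.5, 3.0, 1.5]
```

The original samples survive unchanged and the zeros between them become interpolated midpoints (the trailing 1.5 is an edge effect from the zero padding).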

DFranklin:
    (Original post by atsruser)
    In fact, believe it or not, you can do precisely that, for small images at least, to "upsample" an image. You take the pixels of the image, insert say one (or more) 0 pixel between each, then perform a low pass filter via a 2D convolution over the image to "fill in" the zeros.
    Yes, you *can* do that, but it is horrendously inefficient (particularly in the +1% scenario I described).

    If you think about it, none of the zero pixels actually contribute anything to the output (after convolution), so you're basically massively increasing the resolution of the image simply in order to determine the convolution weights of the pixels you do want.

    What you actually do is perform a weighted sum of the pixels in the original image where the coefficients depend on the 'subpixel position' you're effectively wanting to sample. This does essentially the same thing but without the huge increase in image size.

    Under some circumstances (which I completely forget) concerning the frequencies in the original image (Nyquist, Shannon etc. are your friends), this gives you the same result as if you had sampled it at twice the rate in the first place.
    As long as there are no frequencies above the Nyquist limit, a sinc kernel will theoretically restore the original image at arbitrary resolution (but note the infinite support!). Again, however, you don't need to upsample into a largely null image in order to do this.

    And otherwise you get at least a viewable image. It can be done at reasonable frame rates with suitable hardware implementations.
    As I understand it, a high performance DSP algorithm may conceptually do what you say, but in practice it will do what I describe.

    Note: I work in the film+video industry; my company makes a hardware upscaling product (although it's a lot more complicated than this, using motion estimation to combine images from multiple frames. I don't work on it myself).

    Edit: for context, I first saw the DSP approach in the context of an engineer with a DSP background explaining to me how he planned to upscale an image from 1998 to 2048 by first creating a roughly 40000 x 40000 image. The chorus of "er, don't do that" was pretty deafening...
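The direct approach described above (a weighted sum at each output position, no intermediate giant image) might be sketched in 1-D like this, using linear interpolation as the simplest possible kernel; the names are illustrative:

```python
def resample_linear(xs, new_len):
    # For each output sample, find its fractional ("subpixel")
    # position in the input and blend the two neighbouring input
    # samples. Weights are computed per output sample; no huge
    # zero-stuffed intermediate buffer is ever built.
    scale = (len(xs) - 1) / (new_len - 1)
    out = []
    for i in range(new_len):
        pos = i * scale
        lo = int(pos)
        hi = min(lo + 1, len(xs) - 1)
        frac = pos - lo
        out.append(xs[lo] * (1 - frac) + xs[hi] * frac)
    return out

print(resample_linear([1, 2, 3], 5))
# [1.0, 1.5, 2.0, 2.5, 3.0]
```

For a 2x upsample this produces the same interpolated values as the zero-stuff-and-filter route, while only ever touching the samples that actually contribute.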

atsruser:
    (Original post by DFranklin)
    As I understand it, a high performance DSP algorithm may conceptually do what you say, but in practice it will do what I describe.
    I don't know what the state of the art is, but hardware implementations have been developed to do precisely that, though it is certainly not efficient in software. It's relatively straightforward in dedicated hardware with suitable frame buffer manipulation logic to insert 0s and dedicated hardware convolvers. Many years ago, I in fact wrote the firmware for such a device.
 
 
 