Reply 1
That is hard. I think A-level maths students would know it. Can someone tell me how this kind of maths would be related to IT-type jobs in the future?

It's quite interesting that you have to do maths in your course. I thought a multimedia and internet technology course wouldn't have such hard maths content in it.
Reply 2
Umm, I can help you with understanding what all those symbols mean if you want, because I did Shannon's Law and information content in my first and second years. It's really pretty basic, and this stuff isn't really taught at GCSE anyway.

What specifically do you need to know?
Reply 3
That is the way they like to market it, because it would put students off if they thought it had any significant maths content. However, there is less maths content than in computer science.

That course is almost A-level maths; in fact, in computer science you have to sit an A-level-standard maths module in the first year if you haven't done it before university.

Up until now this is the most advanced maths we have done, and I am not sure we need to understand those formulas; we still have to apply the theory to our own encoding system.

I know a lot of logarithms are involved in it, but they are supposed to be quite easy. There is a lot of basic physics on my course, i.e. you need to understand stuff like fret, but you don't need to understand all the maths behind it.
Reply 4
KingsComp
Umm, I can help you with understanding what all those symbols mean if you want, because I did Shannon's Law and information content in my first and second years. It's really pretty basic, and this stuff isn't really taught at GCSE anyway.

What specifically do you need to know?


Well, basically, what exactly is entropy? From what I gathered, it is the amount of data that needs to be sent before it becomes information (i.e. enough to be encoded).

Just seen the rest of it; it looks like I need to go to the library and read up on Huffman, because there is too much stuff I need to know.

I didn't understand the first thing about that entropy formula, btw. He might explain it all to us next week; it's just background reading at this stage.
Reply 5
AT82
That is the way they like to market it, because it would put students off if they thought it had any significant maths content. However, there is less maths content than in computer science.

That course is almost A-level maths; in fact, in computer science you have to sit an A-level-standard maths module in the first year if you haven't done it before university.


Some unis (especially the good ones) have two maths modules in the first year. One maths module is for people who did GCSE maths and got a grade C or above, while the second one is for people who have done A-level maths. That said, some unis have A-level maths content in the first year, and the entry requirements are sometimes GCSE maths and/or A-level maths.

I compared the maths content in computing-type courses. Most of them don't have calculus in them; it's stuff like set theory, discrete mathematics, etc. I think that kind of maths would be useful.
Reply 6
Oh god... formulae... damn. I used to be able to do this sort of thing, and I'm going to have to again when I go back to uni. I've just realised that I've completely forgotten everything about maths I've ever learned. Uh-oh. And... err... pretty much everything else I learned at school. Taking a year out destroys your brain! AAH!
Reply 7
oooh Huffman coding (that takes me back :biggrin:)

OK, let's see if I remember it all. Basically, most data will not have a random (uniform) distribution in the frequency count of its symbols. For instance, in the English language the letter 'e' will occur more frequently than the letter 'h', which in turn will occur more frequently than 'q', and so on.
Information theory (this is where entropy comes in) basically says that if you have, say, N symbols, and the probability of symbol Si occurring in the data is Pi, then the information content in bits per symbol is equal to:

- sigma (i = 1 to N) Pi log2 Pi

Remember that if all the symbols occur with equal probability, then Pi = 1/N and the sum is -log2(1/N) = log2 N (the upper bound).

Also remember that if the frequency distribution in the data is non-random, then the information conveyed decreases, even if you are using the same number of bits to store the data.
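
If it helps, here's a rough Python sketch of that formula (just my own illustration with made-up probabilities, nothing from your course notes):

```python
import math

def entropy(probabilities):
    """Information content in bits per symbol: -sum of Pi * log2(Pi)."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Four symbols, all equally likely: entropy hits the upper bound log2(4) = 2 bits.
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0

# Same four symbols with a skewed (non-random) distribution: entropy drops below 2 bits.
print(entropy([0.7, 0.1, 0.1, 0.1]))      # roughly 1.36
```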

Hope that makes sense; sorry if I haven't made it clear, and I hope I haven't missed anything out, because I did do it quite a while ago. You will definitely have to check out some books as well to give you a firm grounding in the topic, but overall it's not too bad and is pretty basic once you get the overall concepts.
Reply 8
Thanks, I think I get it. I still don't get the formula though; what does stigma mean?

I guess it's like the Morse code example we were given in the lecture: the letter E appears more than the letter Z, so the letter E takes up less space to transmit.
Reply 9
AT82
Thanks, I think I get it. I still don't get the formula though; what does stigma mean?

I guess it's like the Morse code example we were given in the lecture: the letter E appears more than the letter Z, so the letter E takes up less space to transmit.


http://en.wikipedia.org/wiki/Stigma
Reply 10
Hehe, OK, what I'll do is give you an example.

Assume you have a question:
the code set is {A, B, C, D} and the respective probabilities are {0.5, 0.25, 0.125, 0.125}

then the information content will be:

using information content = - sigma (over all symbols i) Pi log2 Pi

-(0.5 log2 (0.5) + 0.25 log2 (0.25) + 0.125 log2 (0.125) + 0.125 log2 (0.125))

which works out to 1.75 bits :biggrin:

Sigma basically means "the sum of"; it has lower and upper bound limits, which are shown below and above the sigma sign.
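
If you want to check that number exactly, here's a tiny Python sketch of the same sum (again just my own illustration):

```python
import math

probs = [0.5, 0.25, 0.125, 0.125]  # probabilities for A, B, C, D
info_content = -sum(p * math.log2(p) for p in probs)
print(info_content)  # 1.75 bits per symbol
```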
Reply 11
Also, just to add, Huffman coding is basically there to increase the efficiency of the encoding. For instance, if you have a non-random frequency distribution (such as the alphabet, as I said earlier), then you use Huffman coding, which varies the number of bits used to encode each letter depending on its probability.

For instance, since E occurs very frequently in English text, it is better to encode it with few bits, as it will come up a lot. On the other hand, Z occurs quite infrequently, so it's better to encode it with more bits, as there is a much smaller probability. Using this concept of probabilities, Huffman coding allocates the bits used to represent each symbol.
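
If you want to see how that allocation actually happens, here's a rough Python sketch of the standard heap-based Huffman construction (my own toy version with made-up probabilities, not anything from your lecture notes):

```python
import heapq
import itertools

def huffman_codes(probabilities):
    """Repeatedly merge the two least probable subtrees; symbols in a merged
    subtree get an extra bit prepended, so rare symbols end up with longer codes."""
    tie_breaker = itertools.count()  # keeps the heap from ever comparing symbol lists
    heap = [(p, next(tie_breaker), [sym]) for sym, p in probabilities.items()]
    codes = {sym: "" for sym in probabilities}
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        for s in syms1:
            codes[s] = "0" + codes[s]
        for s in syms2:
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (p1 + p2, next(tie_breaker), syms1 + syms2))
    return codes

# E is far more probable than Z here, so E gets a 1-bit code and Z a 3-bit code.
print(huffman_codes({"E": 0.5, "A": 0.25, "T": 0.125, "Z": 0.125}))
# {'E': '0', 'A': '10', 'T': '110', 'Z': '111'}
```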

You should also read up on run length encoding, as it's along the same lines, but it encodes series of data using the bits themselves (the same way fax machines work, with 0 representing white space).
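
Here's the rough idea of run length encoding as a tiny Python sketch too (again just my own toy example, not exactly what real fax machines do):

```python
def run_length_encode(data):
    """Collapse each run of repeated values into a [count, value] pair."""
    encoded = []
    for value in data:
        if encoded and encoded[-1][1] == value:
            encoded[-1][0] += 1  # extend the current run
        else:
            encoded.append([1, value])  # start a new run
    return encoded

# A scan line that's mostly white space (0s) collapses into a handful of pairs.
print(run_length_encode([0, 0, 0, 0, 0, 1, 1, 0, 0, 0]))  # [[5, 0], [2, 1], [3, 0]]
```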
Reply 12
AT82
Thanks, I think I get it. I still don't get the formula though; what does stigma mean?

Don't you mean 'sigma'? Sigma basically means the sum of all the values, or in other words, you add everything together (the stuff after the sigma symbol).
Reply 13
Chris87
Don't you mean 'sigma'? Sigma basically means the sum of all the values, or in other words, you add everything together (the stuff after the sigma symbol).


OK, thanks, that sounds easy enough. It's amazing what you don't get taught at school (or at least what I didn't).
