    (Original post by atsruser)
I don't know what the state of the art is, but hardware implementations have been developed to do precisely that, though it is certainly not efficient in software. It's relatively straightforward in dedicated hardware: suitable frame-buffer manipulation logic inserts the 0s, and dedicated hardware convolvers do the multiply-adds. Many years ago, I in fact wrote the firmware for such a device.
The current vogue is to use a Field Programmable Gate Array (FPGA), mainly because it offers the best compromise between performance and cost, especially for low-volume OEM applications.

If ultimate performance is the target and cost is not an issue, then ASICs do the job, since performance is limited only by the device physics and architecture.

In between are discrete designs built around RISC devices (programmed in assembly and machine code), which offer flexibility in choosing the highest-performance A/D and D/A conversion etc., whilst keeping a hold on production costs.

As always, the actual application determines the chosen implementation: performance, design and development costs, size, power requirements, reliability, longevity, production costs, etc. are all factored in and important.

    Such is the life of an engineer!
    (Original post by wagwanpifftingg)
I'm only doing a GCSE in Computer Science. The easy way is good enough for me! I just need the name of it so I can write about it!
    I suspect some of this thread has gone slightly beyond GCSE Comp Sci...

    I hope you are taking notes.



    (Original post by DFranklin)
As I say, in the actual implementations I've seen, it could be conceptualised as you describe, but in practice you're going to virtualize the zeroes (i.e. they don't take up storage, and they don't take up convolution (multiply-add) slots, because there's no point in multiplying by 0), and you end up with weighted averages.
I can see that you're a hard guy to convince, so I won't try too much more, but, in practice, it has been done precisely as I described. It's not an approach to be recommended in software, but it's suitable for a high-performance (by the standards of the day) hardware implementation on smallish images, since it's relatively easy to implement in hardware, and for a dedicated device we don't care if it's memory-inefficient: the memory won't be used for anything else anyway.
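The scheme described above can be sketched in a few lines. This is purely illustrative (the factor, kernel, and sample values are invented, not from any actual device): insert zeros between samples, then convolve with an interpolation kernel, exactly as a frame buffer plus hardware convolver would.

```python
# Illustrative sketch of 1-D zero-insertion upsampling followed by direct
# convolution, i.e. the "insert 0s, then convolve" hardware scheme described
# in the thread. All values here are made up for demonstration.

def upsample_zero_insert(signal, factor):
    """Insert (factor - 1) zeros between consecutive samples."""
    out = [0.0] * (len(signal) * factor)
    for i, s in enumerate(signal):
        out[i * factor] = s
    return out

def convolve(signal, kernel):
    """Direct (full) convolution, as a dedicated hardware convolver performs it."""
    n, k = len(signal), len(kernel)
    out = [0.0] * (n + k - 1)
    for i in range(n):
        for j in range(k):
            out[i + j] += signal[i] * kernel[j]
    return out

# 2x upsample with a triangle (linear-interpolation) kernel:
x = [1.0, 2.0, 4.0]
up = upsample_zero_insert(x, 2)    # [1.0, 0.0, 2.0, 0.0, 4.0, 0.0]
y = convolve(up, [0.5, 1.0, 0.5])  # originals preserved, midpoints averaged in
```

With the triangle kernel, the original samples pass through unchanged and each inserted zero is replaced by the average of its neighbours, which is the weighted-average behaviour mentioned earlier. Note the inner loop really does spend half its multiply-adds on zeros, which is the inefficiency being debated.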
    (Original post by DFranklin)
    As long as there are no frequencies above the Nyquist limit, a sinc kernel will theoretically restore the original image at arbitrary resolution (but note the infinite support!).
    This is true, but I was being a bit circumspect about the conditions, as there is a whole theory dedicated to reconstruction from a sub-Nyquist sampled signal, which can be done if more info is available about the signal. Unfortunately I can remember almost nothing about this, though I once had a long discussion with a DSP specialist about how it could be implemented - ISTR I spent most of the time nodding and smiling and not understanding much.
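The ideal reconstruction mentioned in the quote can be sketched directly, truncation caveats and all. This is a minimal illustration, not a practical resampler: the frequency and sample counts are arbitrary, and a real implementation would use a windowed, finite-support kernel precisely because of the infinite support noted above.

```python
import math

# Illustrative sketch of ideal (sinc) reconstruction of a band-limited
# signal from its samples. The direct sum below is truncated to the
# available samples, so it is only approximate near the edges.

def sinc(x):
    """Normalised sinc: sin(pi x) / (pi x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, t):
    """Value at continuous time t (in sample periods) via the sinc sum."""
    return sum(s * sinc(t - n) for n, s in enumerate(samples))

# Sample a sinusoid well below the Nyquist limit of 0.5 cycles/sample,
# then evaluate at an off-grid point far from the edges:
f = 0.1  # cycles per sample
samples = [math.sin(2 * math.pi * f * n) for n in range(200)]
mid = reconstruct(samples, 100.5)  # approximates sin(2*pi*f*100.5)
```

The approximation error here comes entirely from truncating the infinite sum to 200 terms; away from the edges it is small, which is why windowed-sinc kernels work well in practice.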

    Anyway this thread has gone way off topic, so that's my final contribution.
    (Original post by atsruser)
I can see that you're a hard guy to convince, so I won't try too much more, but, in practice, it has been done precisely as I described. It's not an approach to be recommended in software, but it's suitable for a high-performance (by the standards of the day) hardware implementation on smallish images, since it's relatively easy to implement in hardware, and for a dedicated device we don't care if it's memory-inefficient: the memory won't be used for anything else anyway.
Sorry if I've seemed stubborn; it's just that I had a similar discussion (well, it started fairly similarly) with the DSP guy just out of uni, and when I skimmed through some DSP material online trying to find out where he was coming from, the algorithms they gave ended up as I described (the summations were reworked to avoid summing over zeroes).

    If you're sure it's actually been done as you describe, then I obviously stand corrected. (I could see it making sense if you, for example, had a literal "hardware convolver" that had no ability to do anything else).
    (Original post by DFranklin)
    If you're sure it's actually been done as you describe, then I obviously stand corrected.
Yes, it's been done, I promise, Cub's honour. As I said earlier, I worked on the development of such a project.

    (I could see it making sense if you, for example, had a literal "hardware convolver" that had no ability to do anything else).
Such things existed at one time, e.g. the LSI Logic L64240. However, I suspect their heyday has long gone, since:

    a) Moore's law has meant that it's feasible to do a lot more in nice, rewritable, easily(ish) changeable software developed in a comfortable high level language with a nice development system.

b) more general-purpose DSP devices exist (and existed) anyway, e.g. the Texas Instruments TMS320 and whatever modern variants there are now, so dedicated hardware only ever made sense for a relatively small number of applications.
    (Original post by atsruser)
    Yes, it's been done, I promise, cubs honour. As I said earlier, I worked on the development of such a project.
I wasn't doubting that, just noting that sometimes "firmware support" means "not actually that involved with the algorithm side of things". (This is largely how it works where I am; added to which, the main algorithm guy is extremely secretive, so there's rather more "no-one knows how this actually works but him" than is perhaps ideal.)

As far as "moving technology" goes, for an awful lot of image-related things now, it's hard to beat GPUs. They may not work exactly how you'd want (they're massively parallel), but they have so much more raw performance per dollar that often even a "bad" (*) algorithm will work perfectly well. (Which is perhaps not dissimilar to the situation you've described.)

    (*) bad in terms of "there are much better ways of doing this if you are working serially".
Updated: October 24, 2017