
Re^5: Image Analysis

by BrowserUk (Patriarch)
on Oct 26, 2010 at 11:08 UTC ( #867440=note )

in reply to Re^4: Image Analysis
in thread Image Analysis

Are you thinking strobe?

No. It's well known that if you take two images from the same viewpoint with only minor scene changes (e.g. these two), and subtract one from the other, then provided they are well registered, you isolate the differences. Like this.
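In NumPy terms (Python used here purely for illustration; the image contents and sizes are made up), that subtraction step might look like:

```python
import numpy as np

def difference(a, b):
    # Widen to int16 first so the subtraction cannot wrap around in
    # unsigned arithmetic, then take the absolute per-pixel difference.
    # Well-registered, unchanged regions cancel to 0; only the scene
    # changes survive.
    return np.abs(a.astype(np.int16) - b.astype(np.int16)).astype(np.uint8)

base = np.full((4, 4), 100, dtype=np.uint8)   # the unchanged scene
snap = base.copy()
snap[1, 2] = 180                              # the minor scene change
diff = difference(base, snap)
print(int(diff[1, 2]), int(diff[0, 0]))       # 80 0
```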

If you then convert that to greyscale, stretch the contrast either side of some median value, and then eliminate the noise, you get a clear B&W picture of the differences: like so.
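A rough NumPy rendering of those three steps (the `spread` value and the 3x3 median window are my guesses, not the author's numbers):

```python
import numpy as np

def to_grey(rgb):
    # Standard luma weights; collapses (H, W, 3) colour to (H, W) grey.
    return (rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def stretch(grey, spread=30):
    # Expand the band of +/- `spread` around the median to the full
    # range: values below the band saturate to 0, above it to 255.
    m = float(np.median(grey))
    out = (grey.astype(float) - (m - spread)) * (255.0 / (2 * spread))
    return np.clip(out, 0, 255).astype(np.uint8)

def denoise(grey):
    # 3x3 median filter: isolated speckle pixels get voted out by
    # their neighbourhood.
    h, w = grey.shape
    padded = np.pad(grey, 1, mode="edge")
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return np.median(windows, axis=0).astype(np.uint8)
```

The stretch is what makes the result effectively B&W: the roughly-cancelled background is pushed hard to black and the genuine differences hard to white.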

A long time ago I discovered a twist on this. If there is (or you introduce) a slight offset in registration between two such images, then it acts as a very effective edge detector.

So, starting with these two images, which have just such a slight registration error, you do the subtraction, greyscaling, contrast stretching and denoising.
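A sketch of that offset trick with a synthetic image and a one-pixel horizontal offset (note `np.roll` wraps at the border, so the leading column carries a spurious response):

```python
import numpy as np

def offset_edges(img, dx=1, dy=0):
    # Subtract the image from a copy of itself shifted by (dy, dx).
    # Flat regions cancel to 0; intensity steps survive as thin bands,
    # which is why a small registration error acts as an edge detector.
    shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return np.abs(img.astype(np.int16) - shifted.astype(np.int16)).astype(np.uint8)

img = np.full((5, 5), 50, dtype=np.uint8)
img[:, 3:] = 200              # a vertical edge between columns 2 and 3
edges = offset_edges(img)
print(edges[2].tolist())      # [150, 0, 0, 150, 0] (column 0 is wrap junk)
```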

That final image was the basis of the idea. If your snapshots could be so processed against a 'blank' base image of the same scene--this came before you mentioned the sludge--then you end up with an image suitable for edge detection and then a line discrimination algorithm like the Hough Transform.

Even with the rippling sludge, if your frame rate and conveyor rate were synchronised, then it might be possible to re-register subsequent images closely enough to allow the subtraction to eliminate the vast majority of the background (sludge) through averaging, thereby making any sharp-edged objects stand out. But with the variability of the procession rates, achieving close enough registration for the subtraction to be effective would be very difficult.

I realise that you said you've already managed to process the images with edge detection, and are only looking to be able to determine whether the image actually contains any hard lines. To that extent, the Hough Transform is the baby. But Hough is very processor intensive, as are most edge detection algorithms. The nice thing about the above algorithm is that, although there are 4 passes, those passes are very fast, and at least 3, if not all 4, can be combined into a single pass. That would have left more time for the Hough.
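As a guess at what combining the passes might look like in NumPy (the median for the contrast stretch still needs its own scan of the data, which matches the "at least 3, if not all 4" remark; a simple threshold stands in for the denoise step):

```python
import numpy as np

def one_pass(a, b, spread=30, thresh=128):
    # Subtract, greyscale, stretch and binarise in one vectorised
    # expression chain instead of four separate image passes.
    d = np.abs(a.astype(np.int16) - b.astype(np.int16)).astype(float)
    if d.ndim == 3:                       # colour input: collapse to grey
        d = d @ np.array([0.299, 0.587, 0.114])
    m = float(np.median(d))               # the one extra scan the stretch needs
    s = np.clip((d - (m - spread)) * (255.0 / (2 * spread)), 0, 255)
    return np.where(s >= thresh, 255, 0).astype(np.uint8)
```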

If you decide to look at the Hough, see if you can find the original 1962 version. It only looks for straight lines, where the later versions look for ellipses as well, and hence are slower. In any case, if you're going to attempt this at 5 fps on 6400x1200 images, then you will certainly need to invest in some serious hardware. Very powerful GPUs are available for very reasonable prices now. The biggest problem will be finding libraries that use them well, and that either have the algorithms you need, or some well-thought-through primitives that allow you to construct your own.

Good luck.
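For reference, a bare-bones straight-line Hough accumulator (this uses the now-standard rho-theta parametrisation rather than the slope-intercept form of the 1962 patent; step sizes and image dimensions are illustrative only):

```python
import numpy as np

def hough_lines(edge_img):
    # Each edge pixel votes, for every candidate angle theta, for the
    # line at distance rho = x*cos(theta) + y*sin(theta) from the origin.
    # Collinear pixels pile their votes into the same (rho, theta) bin,
    # so peaks in the accumulator correspond to straight lines.
    h, w = edge_img.shape
    thetas = np.deg2rad(np.arange(-90, 90))     # 1-degree steps
    diag = int(np.ceil(np.hypot(h, w)))         # max possible |rho|
    acc = np.zeros((2 * diag + 1, len(thetas)), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, np.arange(len(thetas))] += 1
    return acc, thetas, diag

# A vertical line at x == 2 should peak at rho == 2, theta near 0.
img = np.zeros((10, 10), dtype=np.uint8)
img[:, 2] = 255
acc, thetas, diag = hough_lines(img)
rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
print(rho_idx - diag)                           # 2
```

The inner loop is O(pixels x angles), which is exactly why it gets expensive on 6400x1200 frames and why GPU-friendly formulations matter.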

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
