Not strictly Java, but any Java code would be fine as an answer.
How do you move from using an edge-detection kernel, which highlights edges within an image, to being able to store in a program values for where a circle is within the image? Or a line, if that's simpler.
There are several possible approaches. It gets trickier depending on how ambiguous the image is likely to be.
One I've used recently is to scan a 4 x 4 box over the image, classifying each box into states like "top left corner", "left vertical side", "edge crossing", etc., and hooking them up into polygons accordingly. Or you could have a small window actually follow an edge around the picture.
Inevitably though you're going to wind up with polygons or paths which you will then probably have to further classify.
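To make the box-classification idea concrete, here's a minimal sketch of the state lookup, marching-squares style. I've used a 2 x 2 window for simplicity (the post above uses 4 x 4, which gives richer states), and the state names and class name are just illustrative; it assumes the edge image is already a binary boolean array:

```java
public class CellClassifier {
    // Labels for the 16 possible states of a 2 x 2 window over a binary
    // image; which label you give each bit pattern is up to your application.
    static final String[] STATES = {
        "empty", "top-left corner", "top-right corner", "top edge",
        "bottom-left corner", "left vertical side", "anti-diagonal", "missing bottom-right",
        "bottom-right corner", "diagonal", "right vertical side", "missing bottom-left",
        "bottom edge", "missing top-right", "missing top-left", "full"
    };

    // Classify the 2 x 2 window whose top-left pixel is (x, y).
    static String classify(boolean[][] img, int x, int y) {
        int state = 0;
        if (img[y][x])         state |= 1;  // top-left pixel set
        if (img[y][x + 1])     state |= 2;  // top-right pixel set
        if (img[y + 1][x])     state |= 4;  // bottom-left pixel set
        if (img[y + 1][x + 1]) state |= 8;  // bottom-right pixel set
        return STATES[state];
    }

    public static void main(String[] args) {
        boolean[][] img = {
            { true, false },
            { true, false }
        };
        System.out.println(classify(img, 0, 0)); // prints: left vertical side
    }
}
```

Sliding this over the image and chaining adjacent non-empty states together is what gives you the polygons mentioned above.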
Thanks, I suppose it'd help to say then that it would be an iris - so I'm looking for the pupil outer edge, the iris outer edge and enough width of the upper and lower eyelid to extend across the iris should they cover the iris in part.
For finding circles/ellipses you may want to consider performing a circle Hough transform (either in the real or frequency domain). This is a very robust algorithm that can handle partial occlusions. It's a bit slow though (basically it can be implemented as a convolution). Better performance is often obtained if you run an edge-enhancement filter (a high-pass filter) over your image before performing the circle Hough transform.

The result of the Hough transform is an image in which the largest values correspond to the centre points of circles that match your template/pattern, and their magnitude is proportional to the size of the circle that was matched.
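To show the basic concept without any optimisation, here's a minimal single-radius circle Hough transform in Java starting from a BufferedImage. The class name and the synthetic test in main are my own; a real detector would sweep a range of radii (a 3D accumulator) and threshold the peaks rather than just taking the single maximum:

```java
import java.awt.image.BufferedImage;

public class CircleHough {
    // Accumulate votes for circle centres at one fixed radius.
    // Assumes 'edges' is a binary edge map: any non-black pixel is an edge point.
    static int[][] accumulate(BufferedImage edges, int radius) {
        int w = edges.getWidth(), h = edges.getHeight();
        int[][] acc = new int[w][h];
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                if ((edges.getRGB(x, y) & 0xFFFFFF) == 0) continue; // skip non-edge pixels
                // Each edge point votes for every centre exactly 'radius' away from it.
                for (int deg = 0; deg < 360; deg++) {
                    double t = Math.toRadians(deg);
                    int cx = (int) Math.round(x - radius * Math.cos(t));
                    int cy = (int) Math.round(y - radius * Math.sin(t));
                    if (cx >= 0 && cx < w && cy >= 0 && cy < h) acc[cx][cy]++;
                }
            }
        }
        return acc;
    }

    public static void main(String[] args) {
        // Synthetic test: draw a circle of radius 20 centred at (50, 50).
        BufferedImage img = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        for (int deg = 0; deg < 360; deg++) {
            double t = Math.toRadians(deg);
            img.setRGB((int) Math.round(50 + 20 * Math.cos(t)),
                       (int) Math.round(50 + 20 * Math.sin(t)), 0xFFFFFF);
        }
        int[][] acc = accumulate(img, 20);
        int bx = 0, by = 0;
        for (int x = 0; x < 100; x++)
            for (int y = 0; y < 100; y++)
                if (acc[x][y] > acc[bx][by]) { bx = x; by = y; }
        System.out.println("peak at (" + bx + ", " + by + ")");
    }
}
```

Running an edge filter first (as suggested above) keeps the number of voting pixels small, which is where most of the cost goes.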
I've been working on cleaning up the image before looking for the circles, and testing it against an applet (Mark someone's... it's easy enough to find with a Google search).
It's great for what I want, but now I'd like to move past this stage, which means I need code to incorporate into my own app. I don't really want to use the code from the applet: it doesn't start with a BufferedImage, it uses a MediaTracker (which requires a Component, and as I understand it that means I can't use it in a console app), and basically his code is too bloated for me to understand and slim down.
Can you (anyone) offer me a simple implementation of a circle Hough transform starting with a BufferedImage? I'm really after something that's specifically just the HT so I can get my head around the basic concepts before I start optimising it.
Hey. I saw your posts about edge detection and detecting objects in an image back in 2004. I am working on a similar project where I have to detect the pupil and the iris from an image. Did you make any progress with your project? Did you acquire any useful knowledge that you could share with me?