A14: Pattern Recognition

In this activity, we classify different objects based on features extracted from images.

Features are first extracted from a training set (Figure 1) composed of images representative of each class or object type. In our example, which distinguishes between dry roasted sea salt and chocolate-coated macadamia nuts, the surface roughness and color of the objects were used as features.


Figure 1. Left: dry roasted sea salt macadamia. Right: Maltesers (eep we ran out of chocolate-coated macadamia nuts but they look the same)

To quantify surface roughness, the contour of each object was determined using Scilab's follow command. From the contour we determine the position of the object's centroid (x0,y0). We then determine the distance of each contour point from the centroid and take the variance of these distances as a measure of roughness. This is implemented in the following function, which takes in the raw image of the object:
function [r]=checkrough(image),
    imagebw = im2bw(max(image)-image,0.2); //binarize (inverted so the object is white)
    [imageconx,imagecony] = follow(imagebw); //determine contour points
    sizecon = size(imageconx); //no. of contour points
    //determine centroid: get major and minor "radii" by taking half of the
    //distance between the maximum and minimum values (for x and y separately).
    //It is assumed (and is true in my data set) that the objects are elliptical.
    //The centroid coordinates then are the radii shifted by the minimum x or y value.
    imagectrx(1:sizecon(1)) = min(imageconx)+(max(imageconx)-min(imageconx))/2;
    imagectry(1:sizecon(1)) = min(imagecony)+(max(imagecony)-min(imagecony))/2;
    //Euclidean distance of each contour point from the centroid
    imagesurfpxs = sqrt((imagectrx-imageconx).*(imagectrx-imageconx) + (imagectry-imagecony).*(imagectry-imagecony));
    r = mean(variance(imagesurfpxs)); //variance of the distances
endfunction;
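The same roughness measure can be sketched in Python with NumPy, given contour coordinates already extracted (the contour-following step itself is assumed to have been done elsewhere, e.g. by an edge tracer):

```python
import numpy as np

def roughness(contour_x, contour_y):
    """Variance of the contour-point distances from the centroid,
    following the same recipe as checkrough above."""
    # centroid from the midpoints of the coordinate ranges,
    # as in the Scilab version (assumes roughly elliptical objects)
    cx = contour_x.min() + (contour_x.max() - contour_x.min()) / 2
    cy = contour_y.min() + (contour_y.max() - contour_y.min()) / 2
    d = np.sqrt((contour_x - cx) ** 2 + (contour_y - cy) ** 2)
    return d.var(ddof=1)  # sample variance, like Scilab's variance()

# sanity check: a circle is perfectly smooth under this measure
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
r_circle = roughness(10 * np.cos(t), 10 * np.sin(t))
# a bumpy closed curve should score higher
r_bumpy = roughness((10 + np.cos(8 * t)) * np.cos(t),
                    (10 + np.cos(8 * t)) * np.sin(t))
```

As expected, the circle's roughness is essentially zero, while the bumpy contour scores well above it.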

To quantify color, the mean intensity of the pixels inside the object boundary was computed:
function [c]=checkcolor(image),
    imagebw = im2bw(max(image)-image,0.2); //binarize
    [imagecontourx,imagecontoury] = follow(imagebw); //determine contour points
    ctr = 1;
    for j = [min(imagecontourx):max(imagecontourx)], //only determine color within the ROI
        for i = [min(imagecontoury):max(imagecontoury)],
            if imagebw(i,j)==1, //pixel belongs to the object
                cs(ctr) = image(i,j,1)+image(i,j,2)+image(i,j,3); //sum of RGB channels as a grayscale value (the image could also be converted to grayscale beforehand)
                ctr = ctr + 1;
            end;
        end;
    end;
    c = mean(cs); //mean color within the boundary
endfunction;
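With the binary mask in hand, the loop above reduces to a single masked mean. A NumPy sketch of the same quantity (the toy image and mask here are made up for illustration):

```python
import numpy as np

def mean_color(image, mask):
    """Mean of the summed RGB channels over the pixels inside the
    binary mask -- the quantity checkcolor accumulates in cs."""
    gray = image.sum(axis=2)   # R + G + B per pixel
    return gray[mask].mean()

# toy example: a 4x4 image with a 2x2 patch of value 0.5 per channel
img = np.zeros((4, 4, 3))
img[1:3, 1:3] = 0.5            # channel sum inside the patch is 1.5
mask = img.sum(axis=2) > 0     # object mask
c = mean_color(img, mask)      # 1.5
```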

Figure 2 shows the color vs. roughness plot for the training set. The classes appear to be fairly distinguishable both in color and roughness.


Figure 2. Color vs. roughness plot for the training set.

Features are also extracted from images in the test set, and the distance of their feature values from the mean values of the training set allows us to classify the object in each image. The object gets tagged as a member of the class with minimum distance from it. Figure 3 shows the color vs. roughness plot for the test set. The classes appear distinguishable in color but not in roughness: three of the chocolate-coated objects lie closer to the dry roasted sea salt cluster, and two others sit far from their own cluster.
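The minimum-distance tagging step can be sketched as follows; the class-mean feature vectors here are hypothetical (color, roughness) values for illustration, not the actual values measured from the training set:

```python
import numpy as np

def classify(feature_vec, class_means):
    """Tag an object with the class whose training-set mean feature
    vector is nearest in Euclidean distance."""
    dists = {name: np.linalg.norm(feature_vec - mu)
             for name, mu in class_means.items()}
    return min(dists, key=dists.get)

# hypothetical training-set means: (color, roughness)
means = {"sea salt": np.array([0.8, 5.0]),
         "chocolate": np.array([0.2, 1.0])}

label = classify(np.array([0.75, 4.0]), means)  # nearest to "sea salt"
```

Because the two features have different scales, a feature with a larger numeric range dominates the Euclidean distance; normalizing each feature by its training-set spread would be a natural refinement.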


Figure 3. Color vs. roughness plot for the test set.

For my set, the method classified 70% of the test objects correctly. I checked the wrongly classified images and found that those objects were almost as smooth as the other class, but their roughness was still measurably higher than that of every object in the other class across both the test and training sets. Applying a roughness threshold, 100% correct classification is attained.

I give myself a grade of 9 because the method implemented scored better than a random classifier.

I would like to thank my mom, Mrs. Adoracion Monsanto, for letting me use her stash of macadamia nuts and for reimbursing the Maltesers I bought (and ate) for this activity. :)

