Degree Granting Department
Computer Science and Engineering
grouping, spectral methods, graph cut measures, learning automata
Grouping is a vital precursor to object recognition. The complexity of the object recognition process can be reduced to a large extent by using a front-end grouping process. In this dissertation, a grouping framework based on spectral methods for graphs is used. Objects are segmented from the background by means of an associated learning process that decides the relative importance of the basic salient relationships, such as proximity, parallelism, continuity, junctions, and common region. While much of the previous research has focused on simple relationships such as similarity, proximity, continuity, and junctions, this work differentiates itself by using all of the relationships listed above. The parameters of the grouping process are cast as probabilistic specifications of Bayesian networks that need to be learned; the learning is accomplished by a team of stochastic learning automata. One of the stages in the grouping process is graph partitioning.
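The abstract does not spell out the automaton update rule, so as an illustrative sketch only, here is the classical linear reward-inaction (L_R-I) scheme often used by stochastic learning automata: when the chosen action is rewarded, its probability is increased and the others are scaled down; on inaction (no reward), probabilities are left unchanged. The function name and step size `a` are assumptions for this sketch, not the dissertation's actual interface.

```python
def lri_update(p, chosen, reward, a=0.1):
    """Linear reward-inaction (L_R-I) update for one stochastic
    learning automaton over action probabilities p.

    p: list of action probabilities (sums to 1)
    chosen: index of the action just taken
    reward: True if the environment rewarded the action
    a: learning rate in (0, 1) -- illustrative default
    """
    if reward:
        for i in range(len(p)):
            if i == chosen:
                # move the chosen action's probability toward 1
                p[i] += a * (1.0 - p[i])
            else:
                # scale the other actions down, preserving the sum
                p[i] *= (1.0 - a)
    # on no reward, L_R-I leaves p unchanged
    return p
```

A team of such automata, one per parameter, converges toward the action mix that maximizes expected reward, which is the general mechanism the abstract alludes to for learning the Bayesian network specifications.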
There are a variety of cut measures on which partitioning can be based, and different measures yield different partitioning results. This work examines three popular cut measures, namely the minimum, average, and normalized cuts, and provides theoretical and empirical insight into the nature of these partitioning measures in terms of the underlying image statistics.
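For concreteness, the three measures can be evaluated for any bipartition of a weighted affinity graph using their standard definitions: the minimum cut criterion scores a partition by the total inter-set weight, the average cut normalizes by set sizes, and the normalized cut normalizes by each set's total association with the whole graph. The following sketch assumes those standard definitions from the spectral partitioning literature; the function name is illustrative.

```python
import numpy as np

def cut_measures(W, mask):
    """Return (cut, average cut, normalized cut) for a bipartition.

    W: symmetric affinity matrix (n x n)
    mask: boolean array, True for nodes in set A, False for set B
    """
    A = np.where(mask)[0]
    B = np.where(~mask)[0]
    cut = W[np.ix_(A, B)].sum()          # total weight crossing the partition
    assoc_A = W[A].sum()                 # total weight from A to all nodes
    assoc_B = W[B].sum()                 # total weight from B to all nodes
    avg_cut = cut / len(A) + cut / len(B)
    ncut = cut / assoc_A + cut / assoc_B
    return cut, avg_cut, ncut
```

On a toy graph of two unit-weight pairs joined by a weak 0.1 link, all three measures correctly score the "natural" split lowest, which is the regime where the measures agree; the theoretical results below characterize when they diverge.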
In particular, the questions addressed are as follows: For what kinds of image statistics would optimizing a measure, irrespective of the particular algorithm used, result in the correct partitioning? Is the quality of the groups significantly different for each cut measure? Are there classes of images for which grouping by partitioning is not suitable? Does a recursive bi-partitioning strategy separate the groups corresponding to K objects from each other? The major conclusion is that optimizing none of the above three measures is guaranteed to result in the correct partitioning of K objects, in the strict stochastic order sense, for all image statistics. Qualitatively speaking, the minimum cut measure is optimal only under the very restrictive condition that the average inter-object feature affinity is very weak compared with the average intra-object feature affinity.
The average cut measure is optimal for graphs whose partition width is less than the mode of the distribution of all possible partition widths. The normalized cut measure is optimal for a more restrictive subclass of graphs whose partition width is less than the mode of the partition width distribution and in which the inter-object links are six times weaker than the intra-object links. The learning framework described in the first part of the work is used to empirically evaluate the cut measures. Rigorous empirical evaluation on 100 real images indicates that, in practice, the quality of the groups generated using minimum, average, or normalized cuts is statistically equivalent for object recognition; that is, the best, the mean, and the variation of the qualities are statistically equivalent.
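The bi-partitioning step underlying these comparisons is typically computed spectrally. As a minimal sketch, assuming the standard normalized-cut relaxation (thresholding the Fiedler vector, the eigenvector of the second-smallest eigenvalue of the symmetric normalized Laplacian), a bipartition of an affinity graph can be obtained as follows; recursive application of this step yields the K-object groupings discussed above.

```python
import numpy as np

def spectral_bisect(W):
    """Bipartition a graph by thresholding the Fiedler vector of the
    symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.

    W: symmetric affinity matrix with positive degrees.
    Returns a boolean mask: True for one side of the partition.
    """
    d = W.sum(axis=1)                          # node degrees
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt
    vals, vecs = np.linalg.eigh(L_sym)         # eigenvalues in ascending order
    fiedler = vecs[:, 1]                       # 2nd-smallest eigenvector
    return fiedler >= 0                        # threshold at zero
```

On graphs with strong intra-object and weak inter-object affinities, the sign pattern of the Fiedler vector recovers the object partition; the dissertation's conclusions concern precisely when such recovery is (and is not) guaranteed across image statistics.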
Another conclusion is that for certain image classes, such as aerial images and scenes with man-made objects in man-made surroundings, the performance of grouping by partitioning is the worst, irrespective of the cut measure.
Scholar Commons Citation
Soundararajan, Padmanabhan, "Core issues in graph based perceptual organization: Spectral cut measures, learning" (2004). Graduate Theses and Dissertations.