GNM – The Gradient Network Method

Most color image segmentation algorithms struggle to identify objects formed by slowly varying shades of color. Yet such objects, commonly found in outdoor scenes or scenes affected by environmental luminance, are easily distinguished as single objects by the human eye.

Here we present a generic segmentation technique robust to this type of problem, the Gradient Network Method (GNM). GNM achieves this by looking for a higher degree of organization in the structure of the scene, searching for and identifying continuous, smooth color gradients. Starting from a super-segmented image, the method constructs a network of gradients that is then solved to group gradient paths with a logical/inductive connection.
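As a rough illustration of the network-construction step, the sketch below builds the adjacency structure of a super-segmented label map, the kind of graph over which gradient paths would be grouped. The tiny label map and the 4-connectivity choice are illustrative assumptions; this is not the paper's exact data structure or merging rule:

```python
# Minimal sketch: collect the region-adjacency edges of a super-segmented
# label map. GNM would then analyze color change along such edges to find
# continuous, smooth gradient paths; that logic is not reproduced here.
import numpy as np

def adjacency_network(labels):
    """Return the set of neighboring region-label pairs (4-connectivity)."""
    edges = set()
    h, w = labels.shape
    for dy, dx in ((0, 1), (1, 0)):          # right and down neighbors
        a = labels[: h - dy, : w - dx]
        b = labels[dy:, dx:]
        mask = a != b
        for u, v in zip(a[mask], b[mask]):
            edges.add((int(min(u, v)), int(max(u, v))))
    return edges

# A toy 3x3 pre-segmentation with three regions (0, 1, 2).
labels = np.array([[0, 0, 1],
                   [0, 2, 1],
                   [2, 2, 1]])
print(sorted(adjacency_network(labels)))     # → [(0, 1), (0, 2), (1, 2)]
```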

The algorithm is now in its second version, GNM2, which introduces several new features: a new similarity evaluation function based on the CIE L*a*b* color space, which handles perceptual differences between colors more automatically; a merge cost that takes the size of shared borders into account; and an iterative formulation that gradually improves segmentation quality.
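To make the first two features concrete, the sketch below combines a perceptual color difference in CIE L*a*b* (the simple Delta-E 1976 formula) with a weight derived from the shared border size. The cost function shown is an illustrative assumption; GNM2's actual weighting is not reproduced here:

```python
# Illustrative sketch only: Delta-E 1976 distance in CIE L*a*b*, discounted
# by how much of a region's perimeter it shares with its neighbor. GNM2's
# actual merge-cost formula may differ.
import math

def delta_e76(lab1, lab2):
    """Euclidean (Delta-E 1976) distance between two L*a*b* triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

def merge_cost(lab1, lab2, shared_border, perimeter):
    """Assumed cost: perceptual difference scaled down for long shared borders."""
    return delta_e76(lab1, lab2) * (1.0 - shared_border / perimeter)

# Two similar greens sharing half of one region's perimeter:
print(merge_cost((50.0, -40.0, 30.0), (52.0, -38.0, 31.0), 20, 40))  # → 1.5
```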

Besides providing the image results obtained with both our most recent algorithm, GNM2, and the original GNM, we aim to compare their quality objectively with other state-of-the-art algorithms. To this end, we selected two distance measures developed for image segmentation evaluation: Rand [Rand71] and Bipartite Graph Matching (BGM) [Cheng et al. 01]. Both yield values in the [0,1] interval, where the closer to 0, the better the segmentation.
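As a rough illustration of how a Rand-style distance scores a segmentation against a ground truth, the sketch below computes the fraction of pixel pairs on which two labelings disagree; it is 0 for identical partitions and invariant to label permutations. The benchmark's exact variant may differ, so treat this as an assumption:

```python
# Illustrative Rand-style distance: for every pair of pixels, check whether
# the two segmentations agree on "same region vs. different region", and
# return the fraction of disagreements (0 = identical partitions).
from itertools import combinations

def rand_distance(seg_a, seg_b):
    pairs = list(combinations(range(len(seg_a)), 2))
    disagreements = sum(
        (seg_a[i] == seg_a[j]) != (seg_b[i] == seg_b[j]) for i, j in pairs
    )
    return disagreements / len(pairs)

print(rand_distance([0, 0, 1, 1], [1, 1, 0, 0]))  # same partition → 0.0
print(rand_distance([0, 0, 1, 1], [0, 1, 0, 1]))  # 4 of 6 pairs disagree
```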

Both Rand and BGM are ground-truth-based evaluation measures, so every selected image set must have a group of ground-truth images available for evaluating the experiments. For this reason, we selected our test sets from the Berkeley Segmentation Dataset and Benchmark [Martin et al. 01], a well-known dataset of natural images that provides at least 5 and up to 7 ground truths for every image.

We compare our results with those obtained with the following techniques:

  • CSC [RehrmannPriese98]
  • Mumford-Shah [MumfordShah89] functional-based segmentation method [Megawave06].
  • EDISON [ComaniciuMeer02]
  • Watershed [VincentSoille91]
  • JSEG [DengManjunath01]
  • Blobworld [Carson et. al.02]
  • RHSEG [Tilton06]
An example
[Figure: a color image; after pre-segmentation; after GNM2 segmentation]

The images above represent the stages a GNM2 segmentation goes through. On the left is the original image, a pair of flowers. The center image is a segmentation produced with carefully chosen parameters, the kind of pre-segmentation the GNM2 algorithm expects as input. On the right is the result of the GNM2 segmentation of the pre-segmented image. Notice that the final segmentation separates the flowers from the background while keeping other relevant features of the image intact.

Browsing (under construction – adding images)

Dataset

  • By images: a list of the segmentation image results for all tested images and algorithms. Each image set also displays graphs containing its Rand and BGM scores.

Evaluation results

  • By graphs: several graphs comparing the results from all tested algorithms over the selected image dataset.
  • By tables: several tables showing the measure results for all tested algorithms over the selected image dataset.

The smaller images shown in these pages are all linked to their full-size versions.

Downloads

Algorithm

A more detailed description of the original GNM algorithm can be found in our paper, published in Pattern Recognition Letters. The paper can be accessed through this link.

Results

We provide the results obtained for each selected dataset to allow comparison with other algorithms. They can be downloaded as image files or as segmentation files following the format described here. The tables containing all the evaluated data can also be downloaded as an xls file.

If you test GNM and compare it with other algorithms not listed here, we would like to see those results, so, if possible, contact us.

References

[Cheng et al. 01] H.D. Cheng, X.H. Jiang, Y. Sun and J. Wang. Color image segmentation: advances and prospects. Pattern Recognition 34 (2001), pp. 2259-2281.

[Rand71] W. M. Rand. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, vol. 66, pp. 846-850, 1971.

[ComaniciuMeer02] D. Comaniciu, P. Meer. Mean Shift: A Robust Approach Toward Feature Space Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence; 2002; 24 (5); 603-619.

[VincentSoille91] L. Vincent and P. Soille. Watersheds in digital spaces: An efficient algorithm based on immersion simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 6, pp. 583-598, 1991.

[Carson et al. 02] C. Carson, S. Belongie, H. Greenspan, and J. Malik. Blobworld: image segmentation using expectation-maximization and its application to image querying. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(8):1026-1038, August 2002.

[DengManjunath01] Deng, Y., Manjunath, B.S. Unsupervised segmentation of color-texture regions in images and video. IEEE Transactions on Pattern Analysis and Machine Intelligence; 2001; 23(8):800-810.

[Martin et al. 01] Martin, D., Fowlkes, C., Tal, D., Malik, J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: Proc. 8th Int'l Conf. Computer Vision, vol. 2, 2001, pp. 416-423.

[MumfordShah89] Mumford, D. and Shah, J. Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math., vol. 42, 1989: 577-684.

[RehrmannPriese98] Rehrmann, V. and Priese, L. Fast and Robust Segmentation of Natural Color Scenes. ACCV (1), 1998: 598-606.

[Megawave06] http://www.cmla.ens-cachan.fr/Cmla/Megawave/. Access in: 14 September 2006.

[Tilton06] Tilton, J.C. D-dimensional formulation and implementation of recursive hierarchical segmentation, Disclosure of Invention and New Technology: NASA Case No. GSC 15199-1, May 26, 2006.
