Abstract for kim_bmvc06

Proc. British Machine Vision Conference 2006


Tae-Kyun Kim, Josef Kittler, Roberto Cipolla

Sept 2006

Orthogonal subspaces are effective models for representing sets of object images (and, more generally, sets of high-dimensional vectors). Canonical correlation analysis of the orthogonal subspaces provides a good basis for discriminating between objects represented by image sets. In such a recognition task, efficient learning over a large volume of image sets, which may grow over time, is important. In this paper, an incremental method of learning orthogonal subspaces is proposed that updates the principal components of the class correlation and total correlation matrices separately, yielding the same solution as the batch computation at far lower computational cost. A novel concept of local orthogonality is further proposed to cope with non-linear manifolds of data vectors and to find a more effective set of orthogonal subspaces for neighbouring object image sets. In experiments using 700 face image sets, the locally orthogonal subspaces outperformed both the orthogonal subspaces and relevant state-of-the-art methods in accuracy. Note that the locally orthogonal subspaces remain amenable to incremental updating owing to their linear form.
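As a rough illustration of the set-matching idea underlying the abstract (not the paper's actual algorithm): the canonical correlations between two linear subspaces are the singular values of the product of their orthonormal basis matrices, and can serve as a similarity measure between image sets. The data sizes and function names below are purely illustrative assumptions.

```python
import numpy as np

def orthonormal_basis(X, dim):
    """Top-`dim` principal components (orthonormal columns) of a data
    matrix X whose columns are vectorised images."""
    U, _, _ = np.linalg.svd(X - X.mean(axis=1, keepdims=True),
                            full_matrices=False)
    return U[:, :dim]

def canonical_correlations(U1, U2):
    """Cosines of the principal angles between span(U1) and span(U2):
    the singular values of U1^T U2 (all lie in [0, 1])."""
    return np.linalg.svd(U1.T @ U2, compute_uv=False)

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 40))             # set 1: 40 images, dim 100
B = A + 0.1 * rng.standard_normal((100, 40))   # a perturbed copy of set 1
C = rng.standard_normal((100, 40))             # an unrelated image set

UA, UB, UC = (orthonormal_basis(X, 5) for X in (A, B, C))
cca_sim = canonical_correlations(UA, UB)       # near 1: similar sets
cca_dis = canonical_correlations(UA, UC)       # noticeably smaller
```

The paper's contribution is then to learn these subspaces discriminatively (and incrementally) rather than by plain per-set PCA as sketched here.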

Download: kim_bmvc06.pdf (ftp) | kim_bmvc06.pdf (http)
