# Model-based Transductive Learning of the Kernel Matrix

### Z. Zhang, J.T. Kwok, D.-Y. Yeung

**Abstract:**
This paper addresses the problem of transductive learning of the kernel matrix
from a probabilistic perspective. We place a Wishart process prior on the kernel matrix
and construct a hierarchical generative model for kernel matrix learning. Specifically, we
consider the target kernel matrix as a random matrix following the Wishart distribution with
a positive definite parameter matrix and a degree of freedom. This parameter matrix, in
turn, has the inverted Wishart distribution (with a positive definite hyperparameter matrix)
as its conjugate prior and the degree of freedom is equal to the dimensionality of the feature
space induced by the target kernel. Casting the learning task as a missing data problem, we devise an
expectation-maximization (EM) algorithm to infer the missing data, the parameter matrix, and the
feature dimensionality in a maximum a posteriori (MAP) manner. Using different settings
for the target kernel and hyperparameter matrices, our model can be applied to different types
of learning problems. In particular, we consider its application in a semi-supervised learning
setting and present two classification methods. Classification experiments are reported on
some benchmark data sets, with encouraging results. We also devise an EM
algorithm for kernel matrix completion.
*Machine Learning*, 63(1):69-101, Apr 2006.
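The hierarchical model described in the abstract (an inverted Wishart hyperprior on the parameter matrix, and a Wishart distribution on the kernel matrix with degree of freedom tied to the feature dimensionality) can be sketched generatively. This is a minimal illustration only: the matrix sizes, the identity hyperparameter matrix, and the particular scaling are assumptions for the sketch, not the paper's exact parameterization.

```python
import numpy as np
from scipy.stats import wishart, invwishart

rng = np.random.default_rng(0)
n = 5           # number of data points (toy size, assumed)
q = 8           # degree of freedom ~ feature-space dimensionality (must be >= n)
nu0 = n + 2     # degree of freedom of the inverted Wishart hyperprior (assumed)

# Hyperparameter matrix (identity chosen here purely for illustration).
Psi = np.eye(n)

# Parameter matrix drawn from its inverted Wishart (conjugate) prior.
Sigma = invwishart.rvs(df=nu0, scale=Psi, random_state=rng)

# Target kernel matrix drawn from a Wishart distribution with the
# sampled parameter matrix as scale (divided by q so E[K] = Sigma).
K = wishart.rvs(df=q, scale=Sigma / q, random_state=rng)
```

With `q >= n` and a positive definite scale, the sampled `K` is symmetric positive definite, i.e. a valid kernel matrix.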

Pdf:
http://www.cs.ust.hk/~jamesk/papers/ml06.pdf
