Multiple kernel learning algorithms

Mehmet Gönen, Ethem Alpaydin

Research output: Contribution to journal › Review article › peer-review

1543 Scopus citations


In recent years, several methods have been proposed to combine multiple kernels instead of using a single one. These different kernels may correspond to using different notions of similarity, or may use information coming from multiple sources (different representations or different feature subsets). In order to organize and highlight the similarities and differences between them, we give a taxonomy of and review several multiple kernel learning algorithms. We perform experiments on real data sets for better illustration and comparison of existing algorithms. We see that though there may not be large differences in terms of accuracy, there are differences between them in complexity, as given by the number of stored support vectors, in the sparsity of the solution, as given by the number of used kernels, and in training time complexity. We see that, overall, using multiple kernels instead of a single one is useful, and we believe that combining kernels in a nonlinear or data-dependent way seems more promising than linear combination for fusing information provided by simple linear kernels, whereas linear methods are more reasonable when combining complex Gaussian kernels.
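As an illustration of the simplest setting surveyed above, the sketch below forms a fixed convex (linear) combination of two base kernel matrices, one linear and one Gaussian. This is a minimal example of kernel combination in general, not any specific algorithm from the review; the function names and the equal weights are illustrative assumptions.

```python
import numpy as np

def linear_kernel(X, Y):
    # k(x, y) = <x, y>
    return X @ Y.T

def gaussian_kernel(X, Y, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2)
    sq_dists = (np.sum(X ** 2, axis=1)[:, None]
                + np.sum(Y ** 2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq_dists)

def combined_kernel(X, Y, weights=(0.5, 0.5)):
    # Fixed convex combination of the two base kernels; a convex
    # combination of valid (PSD) kernels is itself a valid kernel.
    return (weights[0] * linear_kernel(X, Y)
            + weights[1] * gaussian_kernel(X, Y))

X = np.random.RandomState(0).randn(5, 3)
K = combined_kernel(X, X)  # 5x5 symmetric PSD Gram matrix
```

In the multiple kernel learning algorithms reviewed in the paper, the weights are not fixed in advance as here but are learned from data, jointly with the classifier.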

Original language: English (US)
Pages (from-to): 2211-2268
Number of pages: 58
Journal: Journal of Machine Learning Research
State: Published - Jul 2011
Externally published: Yes


Keywords

  • Kernel machines
  • Multiple kernel learning
  • Support vector machines

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
  • Control and Systems Engineering
  • Statistics and Probability


