### Abstract

Tuning hyper-parameters is a necessary step in improving the performance of learning algorithms. For Support Vector Machine classifiers, adjusting kernel parameters can drastically increase recognition accuracy. The standard approach is cross-validation over an exhaustive sweep of the parameter space, but the complexity of such a grid search is exponential in the number of optimized parameters. Recently, a gradient descent approach was introduced in [1] that drastically reduces the number of steps needed to reach the optimal parameters. In this paper, we define the LCCP (Log Convex Concave Procedure) optimization scheme, derived from the CCCP (Convex ConCave Procedure), for optimizing kernel parameters by minimizing the radius-margin bound. To apply the LCCP, we prove, for a particular choice of kernel, that the radius is log-convex and the margin is log-concave. The LCCP is more efficient than the gradient descent technique since it ensures that the radius-margin bound decreases monotonically and converges to a local minimum without any step-size search. Experiments on standard data sets are reported and discussed.
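To make the optimized quantity concrete: the radius-margin bound couples R, the radius of the smallest sphere enclosing the training data in feature space, with the margin 1/||w||, and kernel parameters are selected to minimize R^2 * ||w||^2. The Python sketch below is an illustration, not code from the paper: it evaluates this bound for an RBF kernel over a grid of widths, i.e. the exhaustive grid-search baseline the abstract contrasts against. Two simplifications are assumed: the radius is replaced by an upper bound (the largest feature-space distance to the centroid) rather than solving the exact minimal-enclosing-sphere QP, and a large C approximates the hard-margin SVM for which the bound is stated.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

# Toy binary problem; the paper itself uses standard benchmark sets.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

def radius_margin_bound(X, y, gamma, C=1e4):
    """Approximate radius-margin bound R^2 * ||w||^2 for an RBF-kernel SVM.

    ||w||^2 comes from the SVM dual solution; R is replaced by an upper
    bound (largest feature-space distance to the centroid) instead of the
    exact minimal-enclosing-sphere QP. Large C stands in for the
    hard-margin SVM the bound assumes.
    """
    K = rbf_kernel(X, X, gamma=gamma)

    # Margin term: ||w||^2 = sum_ij (alpha_i y_i)(alpha_j y_j) k(x_i, x_j).
    svm = SVC(kernel="precomputed", C=C).fit(K, y)
    a = svm.dual_coef_[0]          # alpha_i * y_i on the support vectors
    sv = svm.support_
    w2 = a @ K[np.ix_(sv, sv)] @ a

    # Radius term via the kernel trick:
    # ||phi(x_i) - centroid||^2 = K_ii - (2/m) sum_j K_ij + (1/m^2) sum_jl K_jl
    d2 = np.diag(K) - 2.0 * K.mean(axis=1) + K.mean()
    return d2.max() * w2

# Exhaustive grid search, as described in the abstract: one full SVM
# training per grid point, so the cost grows exponentially with the
# number of parameters swept jointly.
grid = np.logspace(-3, 1, 20)
bounds = [radius_margin_bound(X, y, g) for g in grid]
print("gamma minimizing the bound:", grid[int(np.argmin(bounds))])
```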

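The claimed advantage over gradient descent, a monotone decrease with no step-size search, is inherited from the CCCP template on which the LCCP is built: split the objective into a convex part u and a concave part v, replace v by its tangent at the current iterate, and minimize the resulting convex surrogate exactly. Since the tangent upper-bounds v, the true objective can never increase. The one-dimensional toy below uses a hypothetical objective, not the paper's; the LCCP applies the same mechanism in the log domain, using the log-convexity of the radius and the log-concavity of the margin.

```python
import numpy as np

# f(x) = u(x) + v(x) with u convex and v concave (toy choice, for
# illustration only):
#   u(x) = x^2
#   v(x) = -log(1 + e^x)   (negative softplus, concave)
def u(x): return x ** 2
def v(x): return -np.logaddexp(0.0, x)           # stable -log(1 + e^x)
def f(x): return u(x) + v(x)
def v_prime(x): return -1.0 / (1.0 + np.exp(-x))  # v'(x) = -sigmoid(x)

# CCCP step: linearize v at x_t and minimize the convex surrogate
# u(x) + v'(x_t) * x exactly. Here that is closed-form:
#   2x + v'(x_t) = 0  =>  x_{t+1} = -v'(x_t) / 2.
x = 2.0
for t in range(8):
    x_next = -v_prime(x) / 2.0
    # The tangent upper-bounds the concave part, so f never increases.
    assert f(x_next) <= f(x) + 1e-12
    x = x_next
    print(f"t={t}  x={x:.6f}  f(x)={f(x):.6f}")
```

Each update is an exact convex minimization rather than a gradient step, which is what removes the step-size search that plain gradient descent on the bound requires.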
Original language | English
---|---
Title of host publication | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Pages | 589-594
Number of pages | 6
Volume | 3697 LNCS
ISBN (Print) | 3540287558, 9783540287551
Publication status | Published - 2005
Externally published | Yes
Event | 15th International Conference on Artificial Neural Networks: Biological Inspirations - ICANN 2005, Warsaw, Poland. Duration: 11 Sep 2005 → 15 Sep 2005

### Publication series

Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
---|---
Volume | 3697 LNCS
ISSN (Print) | 0302-9743
ISSN (Electronic) | 1611-3349

### Other

Other | 15th International Conference on Artificial Neural Networks: Biological Inspirations - ICANN 2005
---|---
Country | Poland
City | Warsaw
Period | 11/9/05 → 15/9/05

### ASJC Scopus subject areas

- Biochemistry, Genetics and Molecular Biology (all)
- Computer Science (all)
- Theoretical Computer Science

### Cite this

Boughorbel, S., Tarel, J. P., & Boujemaa, N. (2005). The LCCP for optimizing kernel parameters for SVM. In *Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)* (Vol. 3697 LNCS, pp. 589-594). Paper presented at the 15th International Conference on Artificial Neural Networks: Biological Inspirations - ICANN 2005, Warsaw, Poland, 11-15 September 2005.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
