๐Ÿ’Summary

The experiments here represent just the tip of the iceberg, since different hyperparameter optimization (HPO) tools can perform differently across datasets. Let's summarize the pros & cons of each tool.

Optuna

Pros

  • Has flexible modularized design

  • Supports both classical machine learning and deep learning, and is easy to learn

  • Provides built-in visualizations for optimization insights (see the sketch below)
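
For example, a minimal sketch of how those built-in plots can be generated; the toy objective and trial count here are made up for illustration, not taken from the original experiments:

```python
import optuna

# Toy objective: minimize (x - 2)^2 over x in [-10, 10]
def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=30)

# Plotly-based visuals that ship with Optuna
optuna.visualization.plot_optimization_history(study).show()
optuna.visualization.plot_param_importances(study).show()
```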

Cons

  • Compared with FLAML and Keras Tuner, it appears less efficient

  • Incomplete integrations, such as the XGBoost integration and pruning within cross-validation

  • Confusing error messages, e.g., setting log=True in trial.suggest_int() for parameters like num_leaves, max_depth, or max_bin can fail with errors that are hard to interpret (see the sketch below)
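
To make that last point concrete, here is a minimal sketch of the pattern in question; the parameter ranges are made up for illustration, and the plain linear form shown sidesteps the log=True pitfall:

```python
import optuna

def objective(trial):
    # Passing log=True to suggest_int() for parameters like these is
    # where the confusing errors mentioned above can appear; the plain
    # linear form below avoids it
    num_leaves = trial.suggest_int("num_leaves", 2, 256)
    max_depth = trial.suggest_int("max_depth", 3, 12)
    max_bin = trial.suggest_int("max_bin", 32, 512)
    # ... train a model with these parameters and return its score ...
    return float(num_leaves)  # placeholder so the sketch runs standalone

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=5)
```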

FLAML

Pros

  • Efficient search: in the experiments here, it appeared more efficient than Optuna

Cons

  • Hard to use for deep learning HPO

  • Incomplete documentation, e.g., on available parameter values and deep learning HPO

  • Challenging to customize the objective function (see the sketch below)
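
For contrast, basic FLAML usage for classical ML is quite compact. A minimal sketch, with a dataset and time budget chosen purely for illustration:

```python
from flaml import AutoML
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

automl = AutoML()
# time_budget is in seconds; metric accepts built-in names or a callable,
# and the callable route is where customizing the objective gets tricky
automl.fit(X_train, y_train, task="classification",
           metric="roc_auc", time_budget=60)
print(automl.best_estimator, automl.best_config)
print("Test accuracy:", (automl.predict(X_test) == y_test).mean())
```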

Keras Tuner

Pros

  • Has great documentation, including keyword search

  • Has efficient search strategy

  • Has flexible modularized design

  • Easy to learn and use (see the sketch below)
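
As an illustration of that ease of use, a minimal sketch where the toy model, search space, and random stand-in data are all made up here:

```python
import numpy as np
import keras_tuner as kt
from tensorflow import keras

def build_model(hp):
    # The search space is declared inline while building the model
    model = keras.Sequential([
        keras.Input(shape=(20,)),
        keras.layers.Dense(hp.Int("units", 32, 256, step=32),
                           activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(
            hp.Float("lr", 1e-4, 1e-2, sampling="log")),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Random stand-in data, just to make the sketch runnable end to end
x, y = np.random.rand(200, 20), np.random.randint(0, 2, 200)

tuner = kt.RandomSearch(build_model, objective="val_accuracy",
                        max_trials=3, directory="kt_demo")
tuner.search(x, y, epochs=2, validation_split=0.2)
print(tuner.get_best_hyperparameters(1)[0].values)
```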

Cons

None found yet! If you know of any weaknesses, feel free to share them here!

Stories Behind the Scenes!

Sometimes, when we're working hard on something, pleasant surprises can arise!

Lady H. was thrilled to receive a notification that FLAML had published their latest release, recognizing her as one of the contributors. This was due to her insightful questions that encouraged the team to make further improvements! 💖
