Summary
The experiments here represent just the tip of the iceberg, as different hyperparameter optimization (HPO) tools can perform differently across various datasets. Let's summarize the pros & cons of these tools.
Optuna
Pros
Has flexible modularized design
Supports both classical machine learning and deep learning; easy to learn
Supports multi-objective optimization
Provides optimization insights in visuals
Cons
Compared with FLAML and Keras Tuner, it appears to be less efficient
Incomplete integrations, such as the XGBoost integration, pruning during cross-validation, etc.
Confusing errors: for example, setting log=True in trial.suggest_int() for parameters like num_leaves, max_depth, or max_bin may raise confusing errors
FLAML
Pros
Has automation intelligence, nice choice for fast prototyping
Has simple but intelligent search strategy
Time-efficient; its cost is governed by the time budget rather than the number of trials
Cons
Hard to use for deep learning HPO
Incomplete documentation, e.g., on available parameter values and deep learning HPO
Challenging to customize the objective function
Keras Tuner
Pros
Has great documentation, including keyword search
Has efficient search strategy
Has flexible modularized design
Easy to learn and use
Cons
None found yet; if you know of any weaknesses, feel free to share them here!
Stories Behind the Scenes!

Sometimes, when we're working hard on something, unexpected surprises can arise!
Lady H. was thrilled to receive a notification that FLAML had published their latest release, recognizing her as one of the contributors. This was due to her insightful questions that encouraged the team to make further improvements!
