Currently, hyperparameter tuning can tune only (1) the optimizer and (2) the learning rate.
It should support more optimizer-specific hyperparameters. At the moment it crashes when a hyperparameter that only some optimizers accept is passed (e.g. `momentum` is valid for SGD but not for Adam).
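A minimal sketch of one way to avoid the crash, assuming a PyTorch-style optimizer setup (the `build_optimizer` helper and the PyTorch dependency are illustrative assumptions, not part of the current code): inspect the chosen optimizer's constructor signature and drop any hyperparameters it does not accept.

```python
import inspect

import torch


def build_optimizer(name, params, **hyperparams):
    """Construct an optimizer, passing only the hyperparameters it accepts.

    Keys the optimizer does not know about (e.g. `momentum` for Adam) are
    dropped instead of raising a TypeError at construction time.
    """
    optimizer_cls = getattr(torch.optim, name)  # e.g. "SGD" or "Adam"
    accepted = inspect.signature(optimizer_cls.__init__).parameters
    filtered = {k: v for k, v in hyperparams.items() if k in accepted}
    return optimizer_cls(params, **filtered)


# Example: `momentum` is used for SGD but silently ignored for Adam.
model = torch.nn.Linear(4, 2)
adam = build_optimizer("Adam", model.parameters(), lr=1e-3, momentum=0.9)
sgd = build_optimizer("SGD", model.parameters(), lr=1e-2, momentum=0.9)
```

Alternatively, the search space itself could be made conditional, so that optimizer-specific hyperparameters are only sampled when the matching optimizer is selected.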