[refactor] Format evaluators (mainly tae, abstract_evaluator, evaluator)
* [refactor] Refactor __init__ of abstract evaluator
* [refactor] Collect shared variables in NamedTuples
* [fix] Copy the budget passed to the evaluator params
* [refactor] Add cross validation result manager for separate management
* [refactor] Separate pipeline classes from abstract evaluator
* [refactor] Increase the safety level of pipeline config
* [test] Add tests for the changes
* [test] Modify queue.empty in a safer way
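The "collect shared variables in NamedTuples" refactor above could look roughly like the following minimal sketch. The class and field names here are hypothetical illustrations, not the actual evaluator fields; the point is that a `NamedTuple` is immutable, which is one way to "increase the safety level" of a shared config.

```python
from typing import NamedTuple


class EvaluatorParams(NamedTuple):
    """Hypothetical bundle of variables shared across evaluators."""
    budget: float
    seed: int
    metric_name: str


params = EvaluatorParams(budget=5.0, seed=1, metric_name="accuracy")
print(params.budget)  # 5.0
# Fields cannot be reassigned, so callers cannot mutate shared state:
# params.budget = 10.0  would raise AttributeError
```

Because instances are immutable, passing the same `params` to several evaluators cannot lead to one evaluator silently modifying another's configuration.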
[fix] Find the error in test_tabular_xxx
The pipeline is updated after the evaluations, and the previous code
updated self.pipeline in the predict method, so the dummy class only
needed to override that method. The new code does this separately,
so I override the get_pipeline method instead, which reproduces the
same results.
[fix] Fix the shape issue in regression and add bug comment in a test
[fix] Fix the ground truth of test_cv
We changed the weighting strategy for cross validation in the
validation phase so that the performance of each model is weighted
proportionally to the size of each VALIDATION split, so the expected
answer had to change.
Note that the previous code weighted the performance proportionally
to the TRAINING splits in both the training and validation phases.
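The new weighting described above can be sketched as follows. The function name and arguments are illustrative, not the actual Auto-PyTorch API: each fold's score is weighted by its validation-split size rather than its training-split size.

```python
def weighted_cv_score(split_scores, val_split_sizes):
    """Weight each fold's score by the size of its VALIDATION split."""
    total = sum(val_split_sizes)
    return sum(score * size / total
               for score, size in zip(split_scores, val_split_sizes))


# Three folds with unequal validation splits:
# (0.9 * 100 + 0.8 * 50 + 0.7 * 50) / 200 = 0.825
print(weighted_cv_score([0.9, 0.8, 0.7], [100, 50, 50]))  # 0.825
```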
[fix] Replace qsize checks with catching queue.Empty, since qsize might not be reliable
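The safer pattern behind this fix is to drain with `get_nowait()` and catch `queue.Empty`, instead of looping on `qsize()`. For `multiprocessing.Queue`, `qsize()` is documented as approximate and even raises `NotImplementedError` on macOS; the sketch below demonstrates the pattern with a plain `queue.Queue` (the helper name is illustrative).

```python
import queue


def drain(q):
    """Drain a queue without relying on qsize()."""
    items = []
    while True:
        try:
            items.append(q.get_nowait())
        except queue.Empty:  # raised once the queue has no more items
            return items


q = queue.Queue()
for i in range(3):
    q.put(i)
print(drain(q))  # [0, 1, 2]
```

The same `try/except queue.Empty` pattern works for `multiprocessing.Queue`, where a `qsize()`-based loop could mis-count items still in flight through the feeder thread.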
[refactor] Add cost for crash in autoPyTorchMetrics
[fix] Fix the issue when taking num_classes from regression task
[fix] Deactivate saving the cv model in the holdout case