feat: enhance precision optimization in model training
- Introduced a plan to modify the Optuna objective function to prioritize precision under a recall constraint of 0.35, improving model performance in scenarios where false positives are costly.
- Updated training scripts to implement precision-based metrics and adjusted the walk-forward cross-validation process to incorporate precision and recall calculations.
- Enhanced the active LGBM parameters and training log to reflect the new metrics and model configurations.
- Added a design document outlining the implementation steps for the precision-focused optimization.

This update aims to refine the model's decision-making process by emphasizing precision, thereby reducing potential losses from false positives.
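The "precision under a recall constraint of 0.35" objective described above can be sketched as follows. This is a minimal, dependency-free illustration, not the repo's actual code: the function name and the explicit threshold sweep are hypothetical. The returned precision is the kind of scalar an Optuna objective (with `direction="maximize"`) would return.

```python
import numpy as np

RECALL_FLOOR = 0.35  # the recall constraint named in the commit message

def precision_under_recall_constraint(y_true, y_score, thresholds):
    """Among candidate thresholds whose recall is >= RECALL_FLOOR,
    return (best_precision, best_threshold); (0.0, None) if none qualify.
    Returning best_precision from an Optuna objective with
    direction="maximize" makes the study prioritize precision while
    discarding trials that cannot meet the recall floor."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    best_p, best_t = 0.0, None
    for t in thresholds:
        pred = y_score >= t
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        fn = np.sum(~pred & (y_true == 1))
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        if recall >= RECALL_FLOOR and precision > best_p:
            best_p, best_t = precision, t
    return best_p, best_t

# Toy usage: picks the threshold with the highest precision whose
# recall still clears 0.35.
best_p, best_t = precision_under_recall_constraint(
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0.9, 0.8, 0.3, 0.2, 0.7, 0.4, 0.1, 0.05],
    thresholds=[0.5, 0.25],
)  # best_p = 2/3 at threshold 0.5
```

Penalizing (rather than zeroing) trials below the recall floor is a common variant; the hard cutoff above matches the constraint as stated in the message.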
@@ -401,5 +401,30 @@
      "reg_lambda": 0.80039
    },
    "weight_scale": 0.718348
  },
  {
    "date": "2026-03-03T00:39:05.427160",
    "backend": "lgbm",
    "auc": 0.9436,
    "best_threshold": 0.3041,
    "best_precision": 0.467,
    "best_recall": 0.269,
    "samples": 1524,
    "features": 23,
    "time_weight_decay": 0.5,
    "model_path": "models/lgbm_filter.pkl",
    "tuned_params_path": "models/active_lgbm_params.json",
    "lgbm_params": {
      "n_estimators": 221,
      "learning_rate": 0.031072,
      "max_depth": 5,
      "num_leaves": 20,
      "min_child_samples": 39,
      "subsample": 0.83244,
      "colsample_bytree": 0.526349,
      "reg_alpha": 0.062177,
      "reg_lambda": 0.082872
    },
    "weight_scale": 1.431662
  }
]
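The log entry above records precision/recall from the walk-forward cross-validation mentioned in the commit message. The exact split logic is not shown in this diff; assuming a standard expanding-window scheme, it can be sketched as (function name and parameters are hypothetical):

```python
import numpy as np

def walk_forward_splits(n_samples, n_folds=5, min_train=500):
    """Yield chronological (train_idx, test_idx) pairs: an expanding
    training window followed by the next contiguous test block, so the
    model is never evaluated on data older than its training data."""
    fold = (n_samples - min_train) // n_folds
    for k in range(n_folds):
        train_end = min_train + k * fold
        yield (np.arange(train_end),
               np.arange(train_end, train_end + fold))

# Per-fold precision/recall computed on each test block would then be
# aggregated into a log entry like the one above ("samples": 1524).
for train_idx, test_idx in walk_forward_splits(1524):
    assert train_idx.max() < test_idx.min()  # no look-ahead leakage
```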