Today, while training LightGBM on a binary-classification dataset:
import lightgbm as lgbm

lgbm1 = lgbm.LGBMClassifier(num_leaves=10, learning_rate=0.05, n_estimators=2000)  # n_estimators is the number of boosting rounds, i.e. the number of trees
lgbm1.fit(X_train1, y_train1, eval_set=[(X_test1, y_test1)], eval_metric='auc', early_stopping_rounds=500, verbose=20)
This sets the total number of boosting rounds to 2000, the early-stopping patience to 500 rounds, and prints a log line every 20 iterations.
The training output is as follows:
Training until validation scores don't improve for 500 rounds
[20] valid_0's auc: 0.49375 valid_0's binary_logloss: 0.601725
[40] valid_0's auc: 0.50625 valid_0's binary_logloss: 0.716622
[60] valid_0's auc: 0.5125 valid_0's binary_logloss: 0.687962
[80] valid_0's auc: 0.525 valid_0's binary_logloss: 0.684701
[100] valid_0's auc: 0.4625 valid_0's binary_logloss: 0.715965
[120] valid_0's auc: 0.45 valid_0's binary_logloss: 0.743966
[140] valid_0's auc: 0.45 valid_0's binary_logloss: 0.757433
[160] valid_0's auc: 0.45 valid_0's binary_logloss: 0.800245
[180] valid_0's auc: 0.45 valid_0's binary_logloss: 0.840092
[200] valid_0's auc: 0.425 valid_0's binary_logloss: 0.872955
[220] valid_0's auc: 0.425 valid_0's binary_logloss: 0.892881
[240] valid_0's auc: 0.45 valid_0's binary_logloss: 0.881704
[260] valid_0's auc: 0.45 valid_0's binary_logloss: 0.874494
[280] valid_0's auc: 0.4625 valid_0's binary_logloss: 0.873715
[300] valid_0's auc: 0.4625 valid_0's binary_logloss: 0.880587
[320] valid_0's auc: 0.4625 valid_0's binary_logloss: 0.890775
[340] valid_0's auc: 0.4625 valid_0's binary_logloss: 0.910106
[360] valid_0's auc: 0.4625 valid_0's binary_logloss: 0.939372
[380] valid_0's auc: 0.4625 valid_0's binary_logloss: 0.963717
[400] valid_0's auc: 0.4875 valid_0's binary_logloss: 0.973739
[420] valid_0's auc: 0.4875 valid_0's binary_logloss: 0.984934
[440] valid_0's auc: 0.4875 valid_0's binary_logloss: 0.990671
[460] valid_0's auc: 0.4875 valid_0's binary_logloss: 0.99936
[480] valid_0's auc: 0.5 valid_0's binary_logloss: 0.994803
[500] valid_0's auc: 0.5 valid_0's binary_logloss: 1.00327
Early stopping, best iteration is:
[1] valid_0's auc: 0.45 valid_0's binary_logloss: 0.551583
Notice that while the AUC gradually improves, the binary_logloss keeps getting worse. The best iteration the model finally reports is iteration [1], at which point the AUC is still poor (only 0.45), but the binary_logloss (0.551583) is at its best.
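The reported round can also be read off the fitted model; a quick check using the best_iteration_ and best_score_ attributes of LightGBM's sklearn API (variable names as in the snippet above):

# After fit() with early stopping, the sklearn wrapper records the chosen round
# and the validation scores at that round.
print(lgbm1.best_iteration_)  # -> 1 for the run above
print(lgbm1.best_score_)      # e.g. {'valid_0': {'auc': 0.45, 'binary_logloss': 0.551583}}
# predict()/predict_proba() use the best iteration by default after early
# stopping, so the final model here is effectively a single tree.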
So I was puzzled: why don't the two metrics improve together?
After looking into it (see https://blog.csdn.net/weixin_34023863/article/details/89130052), the reason is as follows:
First, it helps to know how AUC is computed: AUC only cares about the ranking of the predictions, i.e. whether positive samples score higher than negative ones.
Logloss, on the other hand, measures how close the predicted probabilities are to the true labels 0 and 1, and smaller is better.
Suppose the true labels are 1 1 0 1 and the predicted values are 0.5 0.5 0.3 0.5.
Then the AUC is 1, because the only negative sample scores below every positive one. But since the predicted probabilities all sit near 0.5, the classification is correct while the gap from the true values (0 and 1) is still large, so the binary_logloss is large.
If we raise the predictions to 0.7 0.7 0.4 0.7,
the AUC is still 1,
but because the predicted probabilities are now closer to the 0 and 1 boundaries, the logloss improves considerably (verified numerically in the sketch below).
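The toy example is easy to reproduce with sklearn's metrics; a minimal check:

from sklearn.metrics import roc_auc_score, log_loss

y_true = [1, 1, 0, 1]

# Ranking is perfect but the probabilities hover around 0.5.
p1 = [0.5, 0.5, 0.3, 0.5]
print(roc_auc_score(y_true, p1))  # 1.0
print(log_loss(y_true, p1))       # ~0.609

# Same ranking, probabilities pushed toward the 0/1 boundaries.
p2 = [0.7, 0.7, 0.4, 0.7]
print(roc_auc_score(y_true, p2))  # 1.0  (AUC unchanged)
print(log_loss(y_true, p2))       # ~0.395 (much better)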
As for how the lgbm algorithm combines these two metrics when deciding on the best iteration, that is still not entirely clear to me.
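From reading LightGBM's early-stopping callback, it appears the two metrics are not fused at all: each metric in the eval set is tracked independently, and training stops as soon as any one of them has failed to improve for early_stopping_rounds; the reported best iteration belongs to whichever metric triggered the stop. In the log above that was binary_logloss, whose best value came at iteration 1. To make early stopping follow AUC alone, the early_stopping callback takes a first_metric_only flag. A sketch, assuming a recent LightGBM (>= 3.3) where early_stopping_rounds/verbose are passed as callbacks rather than fit() arguments, and assuming auc (listed first in the log) counts as the first metric:

import lightgbm as lgbm

lgbm1 = lgbm.LGBMClassifier(num_leaves=10, learning_rate=0.05, n_estimators=2000)
lgbm1.fit(
    X_train1, y_train1,
    eval_set=[(X_test1, y_test1)],
    eval_metric='auc',  # binary_logloss is still logged as the objective's default metric
    callbacks=[
        # Only the first metric (auc here) is monitored for early stopping;
        # binary_logloss can no longer trigger the stop at iteration 1.
        lgbm.early_stopping(stopping_rounds=500, first_metric_only=True),
        lgbm.log_evaluation(period=20),
    ],
)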