Introducing LibAUC 1.3.0
We are thrilled to release LibAUC 1.3.0! In this version, we have made several improvements and added new features to the library. We have also released a new documentation website at https://docs.libauc.org/, where you can browse the source code and its documentation. Finally, we are happy to announce that our LibAUC paper has been accepted by KDD 2023!
Major Improvements
- Improved the implementations of `DualSampler` and `TriSampler` for better efficiency.
- Merged the `DataSampler` used by `NDCGLoss` into `TriSampler` and added a new string argument `mode` to switch between classification mode (for multi-label classification) and ranking mode (for movie recommendations).
- Improved `AUCMLoss` and added a new version, v2 (requires `DualSampler`), which removes the class prior `p` needed by the previous version, v1. To choose a version, set `version='v1'` or `version='v2'` in `AUCMLoss`.
- Improved `CompositionalAUCLoss`, which now allows multiple updates for optimizing the inner loss by setting `k` in the loss. As with `AUCMLoss`, we introduced a v2 version of this loss that does not use the class prior `p`. By default, `k` is 1 and the version is v1.
- Improved the code quality of `APLoss` and `pAUCLoss` (including `pAUC_CVaR_Loss`, `pAUC_DRO_Loss`, and `tpAUC_KL_Loss`) for better efficiency and readability.
- API change for all `optimizer` methods: please pass `model.parameters()` to the optimizer instead of `model`, e.g., `PESG(model.parameters())`. A minimal sketch of the new usage follows this list.
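To make the `AUCMLoss` v2 option and the new optimizer calling convention concrete, here is a minimal training sketch. The toy dataset, the `DualSampler` arguments, and the `PESG` keyword arguments (`loss_fn`, `lr`) are illustrative assumptions rather than the exact 1.3.0 signatures; please consult https://docs.libauc.org/ for the authoritative API.

```python
import torch
from libauc.losses import AUCMLoss
from libauc.optimizers import PESG
from libauc.sampler import DualSampler

# Toy imbalanced binary dataset (~10% positives). DualSampler is assumed to read
# the labels from a `targets` attribute, as the LibAUC dataset wrappers do.
class ToyDataset(torch.utils.data.Dataset):
    def __init__(self, n=1000, d=128):
        self.data = torch.randn(n, d)
        self.targets = (torch.rand(n) < 0.1).float()
    def __len__(self):
        return len(self.targets)
    def __getitem__(self, idx):
        return self.data[idx], self.targets[idx]

train_set = ToyDataset()
model = torch.nn.Sequential(torch.nn.Linear(128, 1), torch.nn.Sigmoid())

# v2 drops the class prior p but expects mini-batches drawn by DualSampler
# (sampling_rate here is an assumed keyword from earlier releases).
sampler = DualSampler(train_set, batch_size=64, sampling_rate=0.5)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, sampler=sampler)

loss_fn = AUCMLoss(version='v2')
# New in 1.3.0: pass model.parameters() (not the model object) to the optimizer.
optimizer = PESG(model.parameters(), loss_fn=loss_fn, lr=0.1)

for data, targets in loader:
    loss = loss_fn(model(data), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# CompositionalAUCLoss gained the same version switch plus a `k` argument
# controlling the number of inner-loss updates, e.g.:
#   CompositionalAUCLoss(k=1, version='v2')
```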
New Features
- Launched an official documentation website at http://docs.libauc.org/ for access to the source code and parameter information.
- Introduced a new library logo for X-Risk, designed by Zhuoning Yuan and Tianbao Yang.
- Introduced MIDAM for multi-instance learning. It supports two pooling functions: `MIDAMLoss('softmax')` for softmax pooling and `MIDAMLoss('attention')` for attention-based pooling (see the sketch after this list).
- Introduced a new `GCLoss` wrapper for contrastive self-supervised learning, which can be optimized by two algorithms in the backend: SogCLR and iSogCLR.
- Introduced iSogCLR for automatic temperature individualization in self-supervised contrastive learning. To use iSogCLR, set `GCLoss('unimodal', enable_isogclr=True)` or `GCLoss('bimodal', enable_isogclr=True)`.
- Introduced three new multi-label losses: `mAPLoss` for optimizing mean AP, `MultiLabelAUCMLoss` for multi-label AUC, and `MultiLabelpAUCLoss` for multi-label partial AUC.
- Introduced `PairwiseAUCLoss` to support optimization of traditional pairwise AUC losses.
- Added more evaluation metrics: `ndcg_at_k`, `map_at_k`, `precision_at_k`, and `recall_at_k`.
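As a quick illustration of the option strings mentioned above, the sketch below instantiates the MIDAM and GCLoss objects. The import path `libauc.losses` is an assumption based on how the existing losses are organized, and any additional constructor arguments are omitted; see https://docs.libauc.org/ for the authoritative signatures.

```python
# Import path assumed; check https://docs.libauc.org/ for the exact location.
from libauc.losses import MIDAMLoss, GCLoss

# Multi-instance learning (MIDAM): choose the pooling function by name.
midam_softmax = MIDAMLoss('softmax')      # softmax pooling
midam_attention = MIDAMLoss('attention')  # attention-based pooling

# Contrastive self-supervised learning: GCLoss is optimized by SogCLR by default;
# enable_isogclr=True switches to iSogCLR, which learns a temperature per sample.
gcl_unimodal = GCLoss('unimodal', enable_isogclr=True)  # single-modality pairs (e.g., image-image)
gcl_bimodal = GCLoss('bimodal', enable_isogclr=True)    # cross-modality pairs (e.g., image-text)
```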
Acknowledgment
Team: Zhuoning Yuan, Dixian Zhu, Zi-Hao Qiu, Gang Li, Tianbao Yang (Advisor)
Feedback
We value your thoughts and feedback! Please fill out this brief survey to help guide our future development. Thank you for your time! For other questions, please contact us at Zhuoning Yuan [[email protected]] and Tianbao Yang [[email protected]].