Publications (10 of 86)
Hallberg Szabadváry, J. & Löfström, T. (2026). Beyond conformal predictors: Adaptive Conformal Inference with confidence predictors. Pattern Recognition, 170, Article ID 111999.
2026 (English). In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 170, article id 111999. Article in journal (Refereed). Published.
Abstract [en]

Adaptive Conformal Inference (ACI) provides finite-sample coverage guarantees, enhancing prediction reliability under non-exchangeability. This study demonstrates that these desirable properties of ACI do not require the use of Conformal Predictors (CP). We show that the guarantees hold for the broader class of confidence predictors, defined by the requirement of producing nested prediction sets, a property we argue is essential for meaningful confidence statements. We empirically investigate the performance of Non-Conformal Confidence Predictors (NCCP) against CP when used with ACI on non-exchangeable data. In online settings, NCCP offers significant computational advantages while maintaining comparable predictive efficiency. In batch settings, inductive NCCP (INCCP) can outperform inductive CP (ICP) by utilising the full training dataset without requiring a separate calibration set, leading to improved efficiency, particularly when data are limited. Although these initial results highlight NCCP as a theoretically sound and practically effective alternative to CP for uncertainty quantification with ACI in non-exchangeable scenarios, further empirical studies are warranted across diverse datasets and predictors.
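The ACI mechanism the abstract builds on is simple enough to sketch. Below is a minimal, illustrative Python implementation (all names are our own, not taken from the paper) pairing the standard ACI level update, alpha_{t+1} = alpha_t + gamma * (alpha - err_t), with a plain residual-quantile interval predictor, i.e. a non-conformal confidence predictor that produces nested prediction sets.

```python
def aci_intervals(y_stream, point_forecast, alpha=0.1, gamma=0.05):
    """Run ACI online over a stream; return a list of (lo, hi, covered).

    The confidence predictor here is a residual-quantile interval around
    `point_forecast(t)` -- a nested-set predictor, not a conformal one.
    ACI adapts the working miscoverage level alpha_t at each step.
    """
    alpha_t = alpha
    residuals = []
    results = []
    for t, y in enumerate(y_stream):
        mu = point_forecast(t)
        if residuals:
            # Empirical (1 - alpha_t) residual quantile; clamp alpha_t to (0, 1).
            a = min(max(alpha_t, 1e-6), 1 - 1e-6)
            srt = sorted(residuals)
            q = srt[min(len(srt) - 1, int((1 - a) * len(srt)))]
        else:
            q = float("inf")  # no information yet: predict everything
        lo, hi = mu - q, mu + q
        covered = lo <= y <= hi
        # ACI update: widen after a miss (err=1), narrow after a cover (err=0).
        alpha_t = alpha_t + gamma * (alpha - (0.0 if covered else 1.0))
        residuals.append(abs(y - mu))
        results.append((lo, hi, covered))
    return results
```

The update is the point of the paper: nothing in it requires the interval predictor to be conformal, only that larger alpha_t yields smaller (nested) sets.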

Place, publisher, year, edition, pages
Elsevier, 2026
Keywords
Adaptive Conformal Inference (ACI), Confidence predictors, Conformal prediction (CP), Coverage guarantee, Non-exchangeable data, Computational efficiency, Uncertainty analysis, Forecasting
National Category
Computer Sciences
Identifiers
urn:nbn:se:hj:diva-69345 (URN)
10.1016/j.patcog.2025.111999 (DOI)
001521413200005 ()
2-s2.0-105008929689 (Scopus ID)
HOA;;1026806 (Local ID)
HOA;;1026806 (Archive number)
HOA;;1026806 (OAI)
Funder
Knowledge Foundation, 20220187
Available from: 2025-07-15. Created: 2025-07-15. Last updated: 2025-10-13. Bibliographically approved.
Yapicioglu, F. R., Aksoy, M., Löfström, T., Vitali, F. & Rigenti, A. (2026). ConformaSegment: A conformal prediction-based, uncertainty-aware, and model-agnostic explainability framework for time-series forecasting. In: Riccardo Guidotti, Ute Schmid & Luca Longo (Ed.), Explainable Artificial Intelligence: Third World Conference, xAI 2025, Istanbul, Turkey, July 9–11, 2025, Proceedings, Part IV. Paper presented at Third World Conference, xAI 2025, Istanbul, Turkey, July 9–11, 2025 (pp. 218-242). Cham: Springer
2026 (English). In: Explainable Artificial Intelligence: Third World Conference, xAI 2025, Istanbul, Turkey, July 9–11, 2025, Proceedings, Part IV / [ed] Riccardo Guidotti, Ute Schmid & Luca Longo, Cham: Springer, 2026, p. 218-242. Conference paper, Published paper (Refereed).
Abstract [en]

Time-series forecasting is crucial for data-driven decisions across finance, healthcare, and environmental monitoring. Despite technological advances, identifying significant temporal segments impacting predictions remains challenging. We introduce ConformaSegment, a model-agnostic explainability framework that enhances time-series interpretability by identifying critical segments while quantifying prediction uncertainty. The framework integrates conformal prediction to generate reliable prediction intervals with guaranteed coverage rates, enabling users to understand which temporal segments most significantly influence forecasting outcomes. Our approach was validated across diverse real-world datasets using LSTM, RNN, and GRU models, demonstrating substantial performance improvements over existing techniques such as Saliency Maps and Integrated Gradients. ConformaSegment achieved mean R² improvements of 42% and 18%, respectively, over these methods, while enhancing prediction interval coverage by 25.73% and 40.15%. These results demonstrate that ConformaSegment effectively identifies critical time segments in forecasting tasks, improving both interpretability and uncertainty quantification, thus enhancing model trustworthiness for applications in healthcare, industrial maintenance, and other time-sensitive domains.

Place, publisher, year, edition, pages
Cham: Springer, 2026
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 2579
Keywords
Conformal Prediction, Model-Agnostic Explainability, Reliable Time Series Explainability, Uncertainty-Aware Explainability, Prediction models, Time series forecasting, Prediction intervals, Temporal segments, Interpretability, Uncertainty analysis, Forecasting
National Category
Computer Sciences Probability Theory and Statistics
Identifiers
urn:nbn:se:hj:diva-70148 (URN)
10.1007/978-3-032-08330-2_11 (DOI)
2-s2.0-105020237686 (Scopus ID)
978-3-032-08329-6 (ISBN)
978-3-032-08330-2 (ISBN)
Conference
Third World Conference, xAI 2025, Istanbul, Turkey, July 9–11, 2025
Projects
PREMACOP
Funder
Knowledge Foundation, 20220187
Note

Copyright year: 2026. Published online: October 2025.

Available from: 2025-11-10. Created: 2025-11-10. Last updated: 2025-11-10. Bibliographically approved.
Löfström, T., Yapicioglu, F. R., Stramiglio, A., Löfström, H. & Vitali, F. (2026). Fast calibrated explanations: Efficient and uncertainty-aware explanations for machine learning models. In: Riccardo Guidotti, Ute Schmid & Luca Longo (Ed.), Explainable Artificial Intelligence: Third World Conference, xAI 2025, Istanbul, Turkey, July 9–11, 2025, Proceedings, Part V. Paper presented at Third World Conference, xAI 2025, Istanbul, Turkey, July 9–11, 2025 (pp. 340-363). Cham: Springer
2026 (English). In: Explainable Artificial Intelligence: Third World Conference, xAI 2025, Istanbul, Turkey, July 9–11, 2025, Proceedings, Part V / [ed] Riccardo Guidotti, Ute Schmid & Luca Longo, Cham: Springer, 2026, p. 340-363. Conference paper, Published paper (Refereed).
Abstract [en]

This paper introduces Fast Calibrated Explanations, an extension of an existing explanation method, Calibrated Explanations, designed for generating rapid, uncertainty-aware explanations for machine learning models. By incorporating perturbation techniques from ConformaSight, a global explanation method, into the core elements of Calibrated Explanations, we achieved significant speedups. These core elements include local feature importance with calibrated predictions, both of which retain uncertainty quantification. While the extension sacrifices some degree of detail, it excels in computational efficiency, making it ideal for high-stakes, real-time applications. Fast Calibrated Explanations applies to probabilistic explanations in classification and thresholded regression tasks, providing the probability of a target being above or below a user-defined threshold. This approach maintains the versatility of Calibrated Explanations for both classification and thresholded regression, making it suitable for a range of predictive tasks where uncertainty quantification is crucial.

Place, publisher, year, edition, pages
Cham: Springer, 2026
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 2580
Keywords
Calibrated Explanations, ConformaSight, Uncertainty Quantification, Explainable AI, Conformal Predictive Systems, Venn-Abers
National Category
Computer Sciences
Identifiers
urn:nbn:se:hj:diva-70151 (URN)
10.1007/978-3-032-08333-3_16 (DOI)
2-s2.0-105020736421 (Scopus ID)
978-3-032-08332-6 (ISBN)
978-3-032-08333-3 (ISBN)
Conference
Third World Conference, xAI 2025, Istanbul, Turkey, July 9–11, 2025
Projects
PREMACOP, ETIAI
Funder
Knowledge Foundation, 20220187, 20230040
Available from: 2025-11-11. Created: 2025-11-11. Last updated: 2025-11-17. Bibliographically approved.
Löfström, T., Löfström, H., Johansson, U., Sönströd, C. & Matela, R. (2025). Calibrated explanations for regression. Machine Learning, 114(4), Article ID 100.
2025 (English). In: Machine Learning, ISSN 0885-6125, E-ISSN 1573-0565, Vol. 114, no 4, article id 100. Article in journal (Refereed). Published.
Abstract [en]

Artificial Intelligence (AI) methods are an integral part of modern decision support systems. The best-performing predictive models used in AI-based decision support systems lack transparency. Explainable Artificial Intelligence (XAI) aims to create AI systems that can explain their rationale to human users. Local explanations in XAI can provide information about the causes of individual predictions in terms of feature importance. However, a critical drawback of existing local explanation methods is their inability to quantify the uncertainty associated with a feature's importance. This paper introduces an extension of a feature importance explanation method, Calibrated Explanations, previously only supporting classification, with support for standard regression and probabilistic regression, i.e., the probability that the target is below an arbitrary threshold. The extension for regression keeps all the benefits of Calibrated Explanations, such as calibration of the prediction from the underlying model with confidence intervals, uncertainty quantification of feature importance, and allows both factual and counterfactual explanations. Calibrated Explanations for regression provides fast, reliable, stable, and robust explanations. Calibrated Explanations for probabilistic regression provides an entirely new way of creating probabilistic explanations from any ordinary regression model, allowing dynamic selection of thresholds. The method is model agnostic with easily understood conditional rules. An implementation in Python is freely available on GitHub and for installation using both pip and conda, making the results in this paper easily replicable.
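As a hedged illustration of the probabilistic-regression idea (our own sketch, not the paper's implementation, which builds on conformal predictive systems in the Calibrated Explanations package), the threshold probability can be read off an empirical conformal predictive distribution formed by shifting held-out calibration residuals onto the test prediction:

```python
def cps_threshold_probability(cal_residuals, y_hat, threshold):
    """Approximate P(target <= threshold) for a test prediction y_hat.

    cal_residuals: values y_i - y_hat_i from a held-out calibration set.
    """
    # Candidate target values implied by each calibration residual.
    candidates = [y_hat + r for r in cal_residuals]
    below = sum(1 for c in candidates if c <= threshold)
    # The (n + 1) denominator mirrors the smoothing in conformal p-values.
    return below / (len(candidates) + 1)
```

Because the threshold enters only at query time, the same fitted model and residuals support any dynamically chosen threshold, which is the property the abstract highlights.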

Place, publisher, year, edition, pages
Springer, 2025
Keywords
Explainable AI, Feature importance, Calibrated explanations, Uncertainty quantification, Regression, Probabilistic regression, Counterfactual explanations, Conformal predictive systems
National Category
Artificial Intelligence
Identifiers
urn:nbn:se:hj:diva-67398 (URN)
10.1007/s10994-024-06642-8 (DOI)
001427670500004 ()
2-s2.0-85218409420 (Scopus ID)
HOA;;1004935 (Local ID)
HOA;;1004935 (Archive number)
HOA;;1004935 (OAI)
Funder
Knowledge Foundation
Available from: 2025-03-04. Created: 2025-03-04. Last updated: 2025-10-13. Bibliographically approved.
Szabadvary, J. H., Löfström, T., Johansson, U., Sönströd, C., Ahlberg, E. & Carlsson, L. (2025). Classification with reject option: Distribution-free error guarantees via conformal prediction. Machine Learning with Applications, 20, Article ID 100664.
2025 (English). In: Machine Learning with Applications, E-ISSN 2666-8270, Vol. 20, article id 100664. Article in journal (Refereed). Published.
Abstract [en]

Machine learning (ML) models always make a prediction, even when they are likely to be wrong. This causes problems in practical applications, as we do not know whether we should trust a prediction. ML with reject option addresses this issue by abstaining from making a prediction when it is likely to be incorrect. In this work, we formalise the approach to ML with reject option in binary classification, deriving theoretical guarantees on the resulting error rate. This is achieved through conformal prediction (CP), which produces prediction sets with distribution-free validity guarantees. In binary classification, CP can output prediction sets containing exactly one, two or no labels. By accepting only the singleton predictions, we turn CP into a binary classifier with reject option. Here, CP is formally put in the framework of predicting with reject option. We state and prove the resulting error rate and give finite-sample estimates. Numerical examples illustrate the derived error rate in several different conformal prediction settings, ranging from full conformal prediction to offline batch inductive conformal prediction. The former has a direct link to sharp validity guarantees, whereas the latter offers weaker validity guarantees but is practical to use. Error-reject curves illustrate the trade-off between error rate and reject rate, and can help a user set an acceptable error rate or reject rate in practice.
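The singleton-acceptance rule is straightforward to sketch. The following illustrative Python (our own toy nonconformity score, distance to a one-dimensional class mean, not the paper's) turns inductive conformal p-values into a binary classifier with reject option:

```python
def icp_predict_with_reject(cal_x, cal_y, x, epsilon=0.2):
    """Return 0, 1, or None (reject) for a scalar input x.

    Labels whose conformal p-value exceeds epsilon enter the prediction
    set; only singleton sets are accepted, all others are rejected.
    """
    def score(v, lab):
        # Toy nonconformity: distance to the calibration mean of `lab`.
        pts = [cx for cx, cy in zip(cal_x, cal_y) if cy == lab]
        return abs(v - sum(pts) / len(pts))

    # Calibration scores use each example's true label.
    a_cal = [score(cx, cy) for cx, cy in zip(cal_x, cal_y)]
    pred_set = []
    for label in (0, 1):
        a_test = score(x, label)
        # Conformal p-value: rank of the test score among calibration scores.
        p = (sum(1 for a in a_cal if a >= a_test) + 1) / (len(a_cal) + 1)
        if p > epsilon:
            pred_set.append(label)
    return pred_set[0] if len(pred_set) == 1 else None
```

Points near a class mean yield a singleton set and are accepted; ambiguous points yield an empty or two-label set and are rejected, exactly the abstain behaviour the abstract describes.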

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
Reject option, Conformal prediction, Binary classification, Abstain prediction, Refrain prediction, Error-reject curve
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:hj:diva-68026 (URN)
10.1016/j.mlwa.2025.100664 (DOI)
001492648200001 ()
2-s2.0-105027852801 (Scopus ID)
GOA;intsam;1020344 (Local ID)
GOA;intsam;1020344 (Archive number)
GOA;intsam;1020344 (OAI)
Available from: 2025-06-02. Created: 2025-06-02. Last updated: 2026-02-05. Bibliographically approved.
Maalej, A., Johansson, U. & Löfström, T. (2025). Evaluating Calibration Techniques for Reliable Predictions. In: Letian Huang (Ed.), Machine Learning and Soft Computing: 9th International Conference, ICMLSC 2025, Tokyo, Japan, January 24–26, 2025, Revised Selected Papers, Part II. Paper presented at the 9th International Conference on Machine Learning and Soft Computing, ICMLSC 2025, Tokyo, Japan, 24–26 January 2025 (pp. 159-175). Springer
2025 (English). In: Machine Learning and Soft Computing: 9th International Conference, ICMLSC 2025, Tokyo, Japan, January 24–26, 2025, Revised Selected Papers, Part II / [ed] Letian Huang, Springer, 2025, p. 159-175. Chapter in book (Refereed).
Abstract [en]

In data-driven decision support, having access to reliable confidence measures for individual predictions is crucial. Machine learning algorithms can provide probabilistic predictions, but these are often poorly calibrated, resulting in misleading decision support. This study empirically evaluates a set of readily available state-of-the-art calibration techniques, including both scaling and binning approaches. Using four different underlying models, and in total 40 publicly available datasets, the results analyzed using rigorous statistical testing show that calibration is generally successful. Specifically, applying a post-hoc calibration will reduce both log losses and calibration errors, without significantly lowering the predictive accuracy. However, the choice of calibration technique should depend on both the underlying model and the size of the dataset, resulting in the following guidelines for calibration: (i) we recommend Venn-Abers for decision trees and naïve Bayes; (ii) Beta calibration for Extreme Gradient Boosting (XGB); and (iii) Platt scaling for small datasets and Venn-Abers for larger ones when using random forests.
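One of the compared scaling techniques is easy to illustrate. The sketch below (a minimal implementation of our own, not the one evaluated in the chapter) fits Platt scaling, a logistic map from raw scores to calibrated probabilities, by gradient descent on the log loss:

```python
import math

def platt_scale(scores, labels, lr=0.1, steps=2000):
    """Fit Platt scaling; return a score -> calibrated probability map."""
    a, b = 1.0, 0.0
    n = len(scores)
    for _ in range(steps):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            ga += (p - y) * s / n   # d(log loss)/da
            gb += (p - y) / n       # d(log loss)/db
        a -= lr * ga
        b -= lr * gb
    return lambda s: 1.0 / (1.0 + math.exp(-(a * s + b)))
```

The log loss is convex in (a, b), so plain gradient descent with a small step size converges; binning methods such as Venn-Abers instead partition the score range and need no parametric assumption.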

Place, publisher, year, edition, pages
Springer, 2025
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 2488
Keywords
Calibration, Decision support, Machine learning, Probabilistic prediction, Reliability, Calibration techniques, Confidence measures, Individual predictions, Machine learning algorithms, Scaling
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:hj:diva-68648 (URN)
10.1007/978-981-96-6403-0_14 (DOI)
2-s2.0-105007227591 (Scopus ID)
978-981-96-6402-3 (ISBN)
978-981-96-6403-0 (ISBN)
Conference
9th International Conference on Machine Learning and Soft Computing, ICMLSC 2025, Tokyo, Japan, 24–26 January 2025
Available from: 2025-06-17. Created: 2025-06-17. Last updated: 2025-10-13. Bibliographically approved.
Szabadváry, J. H., Löfström, T. & Matela, R. (2025). Online-cp: a Python Package for Online Conformal Prediction, Conformal Predictive Systems and Conformal Test Martingales. In: Nguyen K.A., Luo Z., Papadopoulos H., Lofstrom T., Carlsson L., Bostrom H. (Eds.), Proceedings of Machine Learning Research. Paper presented at the 14th Symposium on Conformal and Probabilistic Prediction with Applications, COPA 2025, 10–12 September 2025, London (pp. 595-614). ML Research Press, 266
2025 (English). In: Proceedings of Machine Learning Research / [ed] Nguyen K.A., Luo Z., Papadopoulos H., Lofstrom T., Carlsson L., Bostrom H., ML Research Press, 2025, Vol. 266, p. 595-614. Conference paper, Published paper (Refereed).
Abstract [en]

Conformal prediction (CP) has gained increasing attention in machine learning owing to its ability to provide reliable prediction sets with well-calibrated uncertainty estimates. While most existing CP implementations focus on inductive conformal prediction (ICP), full conformal prediction—also known as online or transductive CP—offers the strongest validity guarantees but has been largely absent from open-source software due to its computational complexity. In this paper, we introduce online-cp, a Python package designed for online conformal prediction, conformal predictive systems (CPS), and conformal test martingales. The package implements several online CP algorithms, enabling efficient and principled uncertainty quantification in streaming data scenarios. Additionally, it includes tools for testing the exchangeability assumption by using conformal test martingales. We demonstrate the functionality of online-cp through classification and regression examples as well as applications to predictive systems and exchangeability testing. By making online CP methods accessible, online-cp provides a foundation for the broader adoption and further development of conformal prediction in real-time machine learning applications. 
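The package itself should be consulted for its actual API; as a generic illustration of online conformal regression (our own sketch, not online-cp code), the following yields a valid prediction interval before each label arrives, using absolute residuals of a fixed point predictor as nonconformity scores:

```python
import math

def online_conformal(stream, predict, epsilon=0.1):
    """stream: iterable of (x, y); predict: x -> point prediction.

    Returns the interval issued at each step BEFORE y was revealed;
    the residual is folded in afterwards.
    """
    residuals = []
    out = []
    for x, y in stream:
        mu = predict(x)
        n = len(residuals) + 1
        k = math.ceil((1 - epsilon) * n)   # rank of the conformal quantile
        if k <= len(residuals):
            q = sorted(residuals)[k - 1]
        else:
            q = math.inf                   # too few examples: whole real line
        out.append((mu - q, mu + q))
        residuals.append(abs(y - mu))
    return out
```

Under exchangeability each interval covers its label with probability at least 1 - epsilon; the early infinite intervals are the price of validity before enough scores have accumulated.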

Place, publisher, year, edition, pages
ML Research Press, 2025
Series
Proceedings of Machine Learning Research, ISSN 2640-3498 ; 266
Keywords
Conformal predictive systems, Conformal test martingales, Exchangeability testing, Machine learning, Online conformal prediction, Python package, Uncertainty quantification, Open source software, Forecasting, Uncertainty analysis
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:hj:diva-69687 (URN)
001595063100029 ()
2-s2.0-105013965466 (Scopus ID)
Conference
14th Symposium on Conformal and Probabilistic Prediction with Applications, COPA 2025, 10–12 September 2025, London
Funder
Knowledge Foundation, 20220187
Available from: 2025-09-04. Created: 2025-09-04. Last updated: 2025-12-19. Bibliographically approved.
Papadopoulos, H., An Nguyen, K., Luo, Z., Löfström, T., Carlsson, L. & Boström, H. (Eds.). (2025). Preface. Paper presented at the 14th Symposium on Conformal and Probabilistic Prediction with Applications, COPA 2025, 10–12 September 2025, London. ML Research Press, 266
2025 (English). Conference proceedings (editor) (Other academic).
Place, publisher, year, edition, pages
ML Research Press, 2025. p. 786
Series
Proceedings of Machine Learning Research, ISSN 2640-3498
National Category
Computer Sciences
Identifiers
urn:nbn:se:hj:diva-69697 (URN)
2-s2.0-105013962679 (Scopus ID)
Conference
14th Symposium on Conformal and Probabilistic Prediction with Applications, COPA 2025, 10–12 September 2025, London
Available from: 2025-09-05. Created: 2025-09-05. Last updated: 2025-10-13. Bibliographically approved.
Pettersson, T., Riveiro, M. & Löfström, T. (2025). Real-Time OCR-Based Grocery Product Recognition with Orientation Alignment and Embedding-Driven Classification. In: : . Paper presented at 18th International Conference on Machine Vision (ICMV 2025), October 19-22, 2025, Paris, France.
2025 (English). Conference paper, Published paper (Refereed).
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:hj:diva-70327 (URN)
Conference
18th International Conference on Machine Vision (ICMV 2025), October 19-22, 2025, Paris, France
Available from: 2025-12-05. Created: 2025-12-05. Last updated: 2025-12-05. Bibliographically approved.
Löfström, T., Löfström, H. & Johansson, U. (2024). Calibrated explanations for multi-class. In: Simone Vantini, Matteo Fontana, Aldo Solari, Henrik Boström & Lars Carlsson (Ed.), Proceedings of the Thirteenth Symposium on Conformal and Probabilistic Prediction with Applications: . Paper presented at The 13th Symposium on Conformal and Probabilistic Prediction with Applications, 9-11 September 2024, Politecnico di Milano, Milano, Italy (pp. 175-194). PMLR, 230
2024 (English). In: Proceedings of the Thirteenth Symposium on Conformal and Probabilistic Prediction with Applications / [ed] Simone Vantini, Matteo Fontana, Aldo Solari, Henrik Boström & Lars Carlsson, PMLR, 2024, Vol. 230, p. 175-194. Conference paper, Published paper (Refereed).
Abstract [en]

Calibrated Explanations is a recently proposed feature importance explanation method providing uncertainty quantification. It utilises Venn-Abers to generate well-calibrated factual and counterfactual explanations for binary classification. In this paper, we extend the method to support multi-class classification. The paper includes an evaluation illustrating the calibration quality of the selected multi-class calibration approach, as well as a demonstration of how the explanations can help determine which explanations to trust.

Place, publisher, year, edition, pages
PMLR, 2024
Series
Proceedings of Machine Learning Research ; 230
National Category
Computer Sciences
Identifiers
urn:nbn:se:hj:diva-66433 (URN)
001347148200011 ()
2-s2.0-85216630934 (Scopus ID)
Conference
The 13th Symposium on Conformal and Probabilistic Prediction with Applications, 9-11 September 2024, Politecnico di Milano, Milano, Italy
Projects
PREMACOP, AFAIR, ETIAI
Funder
Knowledge Foundation, 20220187, 20200223, 20230040
Available from: 2024-10-17. Created: 2024-10-17. Last updated: 2026-01-19. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-0274-9026