Rule Extraction with Reject Option
Jönköping University, School of Engineering, JTH, Department of Computing. ORCID iD: 0009-0009-0404-2586
Jönköping University, School of Engineering, JTH, Department of Computing, Jönköping AI Lab (JAIL). ORCID iD: 0000-0003-0412-6199
2025 (English). In: Machine Learning and Soft Computing: 9th International Conference, ICMLSC 2025, Tokyo, Japan, January 24–26, 2025, Revised Selected Papers, Part I / [ed] L. Huang, Springer, 2025, Vol. 2487, p. 278-300. Conference paper, Published paper (Refereed)
Abstract [en]

In many applications, it is vital to simultaneously obtain the best possible predictive performance and be able to understand the underlying logic. Rule extraction can then be employed: predictions are made by an opaque model, while an extracted transparent model provides explanations and allows inspection and analysis of the relationships found. In this setup, the quality of both global and local explanations depends on the fidelity of the extracted model. In this paper, a novel approach to rule extraction, which includes a reject option based on well-calibrated fidelity estimates, is presented and empirically evaluated. With the proposed method, the user can balance the required fidelity against the number of instances explained by rejecting explanations in parts of feature space where the extracted model is generally weak at approximating the opaque model, or when the two models do not agree. The transparent model can be visualized at different rejection levels, thus identifying the parts of feature space where it is a good approximation of the opaque model. Empirical investigation, using a number of benchmark data sets, shows that the extracted models are highly faithful, while often being small enough to be comprehensible. Most importantly, it is demonstrated how both local and global explanations can be provided through rule extraction with a reject option by leveraging well-calibrated fidelity estimates.
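The general setup described in the abstract can be illustrated with a small sketch. This is not the paper's algorithm: the models, the per-leaf agreement rate used as a fidelity estimate, and the rejection threshold below are all illustrative stand-ins for the well-calibrated fidelity estimates the authors actually use.

```python
# Hypothetical sketch of pedagogical rule extraction with a reject option.
# All model choices, the per-leaf fidelity estimator, and the threshold
# are illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# 1. Opaque model with strong predictive performance.
opaque = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# 2. Pedagogical rule extraction: fit a small, transparent tree to the
#    opaque model's predictions rather than to the true labels.
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, opaque.predict(X_train))

# 3. Crude per-leaf fidelity estimates from a held-out calibration set:
#    how often the tree agrees with the opaque model in each leaf.
cal_leaves = tree.apply(X_cal)
agree = tree.predict(X_cal) == opaque.predict(X_cal)
leaf_fidelity = {leaf: agree[cal_leaves == leaf].mean() for leaf in np.unique(cal_leaves)}

# 4. Reject option: only explain test instances that fall in leaves whose
#    estimated fidelity clears the threshold. Raising the threshold trades
#    coverage (instances explained) for fidelity.
threshold = 0.90
test_leaves = tree.apply(X_test)
fid = np.array([leaf_fidelity.get(leaf, 0.0) for leaf in test_leaves])
accepted = fid >= threshold

coverage = accepted.mean()
if accepted.any():
    tree_pred, opaque_pred = tree.predict(X_test), opaque.predict(X_test)
    fidelity_acc = (tree_pred[accepted] == opaque_pred[accepted]).mean()
    print(f"explained {coverage:.0%} of instances, fidelity on explained: {fidelity_acc:.0%}")
else:
    print("all instances rejected at this threshold")
```

Sweeping `threshold` over a grid reproduces, in miniature, the trade-off the abstract describes: higher thresholds reject more instances but leave the transparent model explaining only regions where it approximates the opaque model well.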

Place, publisher, year, edition, pages
Springer, 2025. Vol. 2487, p. 278-300
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 2487
Keywords [en]
calibration, explanations, reject option, rule extraction, Benchmark data, Data set, Empirical investigation, Explanation, Feature space, Predictive performance, Rejection levels, Rules extraction
National Category
Information Systems
Identifiers
URN: urn:nbn:se:hj:diva-68365
DOI: 10.1007/978-981-96-6400-9_21
Scopus ID: 2-s2.0-105006899968
ISBN: 978-981-96-6399-6 (print)
ISBN: 978-981-96-6400-9 (electronic)
OAI: oai:DiVA.org:hj-68365
DiVA, id: diva2:1967279
Conference
9th International Conference, ICMLSC, 2025, Tokyo, Japan, January 24–26, 2025
Funder
Knowledge Foundation
Available from: 2025-06-11 Created: 2025-06-11 Last updated: 2026-01-19
Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Sönströd, Cecilia
Johansson, Ulf
