Twist and Snap: A Neuro-Symbolic System for Affordance Learning of Opening Jars and Bottles
Jönköping University, School of Engineering, JTH, Department of Computing, Jönköping AI Lab (JAIL). ORCID iD: 0009-0005-6771-7967
Jönköping University, School of Engineering, JTH, Department of Computing, Jönköping AI Lab (JAIL). ORCID iD: 0000-0001-8308-8906
2024 (English). In: 2024 7th Iberian Robotics Conference (ROBOT) / [ed] Lino Marques & Manuel Ferre, IEEE, 2024. Conference paper, Published paper (Refereed).
Abstract [en]

Autonomous robotic systems need a flexible and safe method to interact with their surroundings. When encountering unfamiliar objects, the agents should be able to identify and learn the involved affordances in order to apply appropriate actions. Focusing on affordance learning, we introduce a neuro-symbolic AI system with a robot simulation capable of inferring the appropriate action. The system's core is a visuo-lingual attribute detection module coupled with a probabilistic knowledge base. The system is accompanied by a Unity robot simulation that is used for evaluation. The system is evaluated through its caption-inferring capabilities, using image captioning and machine translation metrics, on a case study of opening containers. The two main affordance-action relation pairs are jar/bottle lids that are opened using either a ‘twist’ or a ‘snap’ action. The results show the system succeeds in opening all 50 containers in the test case, based on an accurate attribute captioning rate of 71%. The mismatch is likely due to the ‘snapping’ lids also opening after a twisting motion. Our system demonstrates that affordance inference can be successfully implemented using a cognitive visuo-lingual method that could be generalized to other affordance cases.
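The attribute-to-action inference described above can be sketched as a minimal probabilistic knowledge-base lookup. This is an illustrative sketch, not the authors' implementation: the attribute names, probability values, and function names are hypothetical, chosen only to mirror the twist/snap case study.

```python
# Illustrative sketch (hypothetical, not the paper's code): a tiny
# probabilistic knowledge base mapping a detected lid attribute to the
# most probable opening action. Values are made up for illustration.

LID_ACTION_PROBS = {
    # P(action | detected lid attribute)
    "twist-off lid": {"twist": 0.95, "snap": 0.05},
    # 'snapping' lids may also open after a twisting motion,
    # which the paper notes as a likely source of mismatch
    "snap-on lid": {"twist": 0.40, "snap": 0.60},
}

def infer_action(attribute: str) -> str:
    """Return the most probable opening action for a detected attribute."""
    probs = LID_ACTION_PROBS.get(attribute)
    if probs is None:
        raise KeyError(f"unknown attribute: {attribute!r}")
    return max(probs, key=probs.get)
```

In the actual system this lookup would be driven by the visuo-lingual attribute detection module, with the selected action executed in the Unity simulation.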

Place, publisher, year, edition, pages
IEEE, 2024.
Keywords [en]
Autonomous robots, Hybrid learning, Affordances, Inference mechanisms, Hybrid intelligent systems
National Category
Robotics and automation
Identifiers
URN: urn:nbn:se:hj:diva-66903
DOI: 10.1109/ROBOT61475.2024.10797390
ISI: 001420208700079
Scopus ID: 2-s2.0-85216020388
ISBN: 979-8-3503-7636-4 (electronic)
OAI: oai:DiVA.org:hj-66903
DiVA id: diva2:1924546
Conference
2024 7th Iberian Robotics Conference (ROBOT), 6-8 November 2024, Madrid, Spain
Available from: 2025-01-07. Created: 2025-01-07. Last updated: 2025-10-13. Bibliographically approved.

Open Access in DiVA
No full text in DiVA

Other links
Publisher's full text (via DOI); Scopus

Authors
Aina, Jorge Aguirregomezcorta; Hedblom, Maria M.

Organisation
Jönköping AI Lab (JAIL)
