Automatic checkout systems are designed to predict a complete shopping receipt from an image of the checkout area. These systems require high classification accuracy across numerous classes and must operate in real time, despite domain differences between training data and real-world conditions. Building on recent advancements, we propose a method that outperforms current solutions and can run in real time in automatic checkout systems. Our method leverages the Segment Anything Model to extract high-quality masks from lab product images, which are then composed into synthetic checkout images and adapted to the real domain using contrastive unpaired translation. We train a product recognition model with data augmentation, named SCA+Y8, and further improve it through fine-tuning on pseudo-labels from unlabeled checkout images, yielding an improved model called SCAFT+Y8. SCAFT+Y8 substantially improves on the state of the art, with an average receipt classification accuracy of 97.58%, and retains strong performance at smaller model sizes, indicating its potential for deployment on low-cost edge devices.