
A multimodal multitask deep learning framework for vibrotactile feedback and sound rendering

The study developed a data-driven system that models and generates vibrotactile feedback and sound simultaneously. Data were collected manually, guided by visual cues in a user interface: a custom haptic recording device captured the contact vibrations (acceleration signals) and sounds produced as a tool interacted with a textured surface. Six textured surface samples were used for the experiments.

Deep learning models then predicted acceleration signals and sounds from user actions. The models used an encoder-decoder network framework built from transformer layers and 1D CNNs, and multitask learning combined features from the two modalities to improve performance. Training used Adam optimization with an RMSE loss; minimal sketches of the architecture and training loop are given below.

At run time, the system rendered the vibrotactile signal to a voice-coil actuator and the virtual sound to headphones as a stylus pen interacted with texture images on a tablet screen. When the user's actions did not match the collected data, initial data were generated by interpolation, with threshold values on the user actions preventing continuous generation of initial data (a sketch of this step follows the training example). The system then continued to generate subsequent data until the user lifted the pen from the screen.
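The paper's exact network is not reproduced here, but a minimal sketch of the kind of multitask encoder-decoder the summary describes might look like the following, assuming PyTorch. The layer sizes, the two-dimensional action input (e.g., pen speed and force), and the shared-encoder/two-head split are illustrative assumptions, not the authors' configuration.

```python
# A minimal sketch, assuming PyTorch: a 1D-CNN + transformer encoder shared
# across tasks, with separate decoder heads for vibration and sound.
import torch
import torch.nn as nn

class MultitaskTextureModel(nn.Module):
    def __init__(self, action_dim=2, hidden=128):
        super().__init__()
        # Encoder: 1D CNNs over the action sequence, then transformer layers.
        self.conv = nn.Sequential(
            nn.Conv1d(action_dim, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        enc_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Multitask: both decoder heads share the encoded features.
        self.vib_head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.snd_head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, actions):
        # actions: (batch, seq_len, action_dim), e.g. assumed speed and force.
        x = self.conv(actions.transpose(1, 2)).transpose(1, 2)
        shared = self.encoder(x)                  # (batch, seq_len, hidden)
        vib = self.vib_head(shared).squeeze(-1)   # acceleration waveform
        snd = self.snd_head(shared).squeeze(-1)   # sound waveform
        return vib, snd
```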
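Training can then follow the summary's recipe of Adam optimization with an RMSE loss. The equal weighting of the two task losses and the random placeholder batch below are assumptions for illustration; this reuses `MultitaskTextureModel` from the sketch above.

```python
# A hedged training sketch: Adam optimizer, RMSE loss summed over the
# vibration and sound tasks. Data here is a synthetic stand-in for the
# recorded action/acceleration/sound triples.
import torch

def rmse(pred, target):
    return torch.sqrt(torch.mean((pred - target) ** 2))

model = MultitaskTextureModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder batch: action sequences with aligned target waveforms.
actions = torch.randn(8, 100, 2)
accel_target = torch.randn(8, 100)
sound_target = torch.randn(8, 100)

for step in range(100):
    vib_pred, snd_pred = model(actions)
    # Assumed equal task weighting; the paper may weight tasks differently.
    loss = rmse(vib_pred, accel_target) + rmse(snd_pred, sound_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```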
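Finally, a sketch of the initial-data step: when a new user action falls between recorded action values, an initial waveform is interpolated from the nearest recordings, and a threshold keeps this from being triggered continuously. The one-dimensional action grid, the linear interpolation, and the threshold value are all illustrative choices, not details taken from the paper.

```python
# A minimal sketch, assuming NumPy, of threshold-gated interpolation of
# initial data between recorded signals.
import numpy as np

ACTION_THRESHOLD = 0.05  # assumed minimum action magnitude to trigger output

def initial_signal(action, recorded_actions, recorded_signals):
    """Interpolate an initial waveform for an action value that falls
    between recorded ones; return None below the threshold so initial
    data is not generated continuously."""
    if action < ACTION_THRESHOLD:
        return None
    # Linearly blend the two recorded signals nearest to this action value.
    idx = np.searchsorted(recorded_actions, action)
    idx = np.clip(idx, 1, len(recorded_actions) - 1)
    lo, hi = recorded_actions[idx - 1], recorded_actions[idx]
    w = (action - lo) / (hi - lo)
    return (1 - w) * recorded_signals[idx - 1] + w * recorded_signals[idx]

# Example: ten recorded waveforms at action values 0.1 .. 1.0.
recorded_actions = np.linspace(0.1, 1.0, 10)
recorded_signals = np.random.randn(10, 256)
print(initial_signal(0.37, recorded_actions, recorded_signals).shape)  # (256,)
```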


Source link: https://www.nature.com/articles/s41598-024-64376-y
