End-to-End Semantically Aware Tactile Generation
Mohammad Dastjerdi, Abbas Akkasi, Hilaire Djani et al.

Research Square, Journal Year: 2024, Volume and Issue: unknown

Published: Nov. 6, 2024

Abstract Tactile graphics are an essential tool for conveying visual information to visually impaired individuals. However, translating 2D plots, such as Bézier curves, polygons, and bar charts, into an effective tactile format remains a challenge. This paper presents a novel, two-stage deep learning pipeline that automates this conversion process. Our method leverages the Pix2Pix architecture, employing a U-Net++ generator network for robust image generation. To improve the perceptual quality of tactile representations, we incorporate an adversarial loss function alongside a gradient penalty. The pipeline operates in a sequential manner: first converting the source plot into a grayscale representation, then transforming it into a channel-wise equivalent. We evaluate the performance of our model on a comprehensive synthetic dataset consisting of 20,000 source-target pairs encompassing various plot types. To quantify performance, we utilize fuzzy versions of established metrics such as pixel accuracy, the Dice coefficient, and the Jaccard index. Additionally, a human study is conducted to assess the generated graphics. The proposed approach demonstrates promising results, significantly streamlining the conversion of plots into tactile graphics. It paves the way for the development of fully automated systems, enhancing accessibility for visually impaired individuals.
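
The abstract pairs the adversarial loss with a gradient penalty but does not give the formulation. A minimal sketch of the standard WGAN-GP penalty (Gulrajani et al., 2017), a common way to realize that combination, is shown below in PyTorch; the `critic` network, the image shapes, and the weight `lambda_gp=10` are illustrative assumptions, not details taken from the paper.

```python
# Sketch of an adversarial gradient penalty (WGAN-GP style).
# Assumption: `critic` maps a batch of images to one score per image.
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Penalize the critic's gradient norm on real/fake interpolations."""
    batch = real.size(0)
    # Random per-sample interpolation between real and generated images.
    eps = torch.rand(batch, 1, 1, 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True,
    )[0]
    grads = grads.view(batch, -1)
    # Drive each sample's gradient norm toward 1.
    return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

# Critic objective combining the adversarial term with the penalty:
# d_loss = critic(fake).mean() - critic(real).mean() \
#          + gradient_penalty(critic, real, fake)
```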
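
The fuzzy evaluation metrics are likewise not defined in the abstract. One standard fuzzy-set generalization replaces set intersection and union with element-wise min and max over soft masks in [0, 1]; the NumPy sketch below follows that reading and should be taken as one plausible interpretation rather than the authors' implementation.

```python
# Sketch of fuzzy overlap metrics on soft masks in [0, 1].
# Assumption: min/max play the roles of intersection/union.
import numpy as np

def fuzzy_dice(pred, target, eps=1e-8):
    inter = np.minimum(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def fuzzy_jaccard(pred, target, eps=1e-8):
    inter = np.minimum(pred, target).sum()
    union = np.maximum(pred, target).sum()
    return (inter + eps) / (union + eps)

def fuzzy_pixel_accuracy(pred, target):
    # 1 minus mean absolute disagreement; on hard {0, 1} masks this
    # reduces to ordinary pixel accuracy.
    return 1.0 - np.abs(pred - target).mean()

pred = np.clip(np.random.rand(256, 256), 0, 1)    # e.g. generator output
target = np.clip(np.random.rand(256, 256), 0, 1)  # ground-truth tactile map
print(fuzzy_dice(pred, target), fuzzy_jaccard(pred, target))
```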

Language: English

What AIs are not learning (and why)
Mark Stefik

AI Magazine, Journal Year: 2025, Volume and Issue: 46(1)

Published: March 1, 2025

Abstract Today's robots do not yet learn the general skills necessary to provide home care, be nursing assistants, interact with people, or do household chores nearly as well as people do. Addressing the aspirational goal of creating service robots requires improving how they are created. Mainstream AIs are not created by agents learning from experiences doing tasks in real-world contexts and interacting with people: sensing, acting, running experiments, and collaborating. Future robots will need such learning in order to be ready for robust deployment in human applications. This paper investigates what future autonomous, human-compatible robots will need to know. It recommends developing experiential (robotic) foundation models (FMs) and bootstrapping them.

Language: English

Citations: 0
