An Experimental Interface to Present Audio Description of Video for the Blind
Daisuke Sato (佐藤 大介), IBM Research, Tokyo Research Laboratory
Hironobu Takagi (高木 啓伸), IBM Research, Tokyo Research Laboratory
Chieko Asakawa (浅川 智恵子), IBM Research, Tokyo Research Laboratory
The number of video files distributed via the Internet has increased rapidly in recent years with the wide adoption of broadband, and with it the number of videos that are inaccessible to the blind. The W3C WCAG 2.0 working draft seeks to make video content understandable by providing audio descriptions and extended audio descriptions. However, creating such alternatives for video is expensive and requires expert knowledge and skills. In addition, it has not been clarified whether extended audio descriptions actually help blind users. We therefore developed an interface that provides interactive audio descriptions generated from text descriptions by a Text-To-Speech (TTS) engine, allowing users to obtain audio descriptions easily through interaction with the interface. We also conducted a pilot study and a listening test with two blind users, and we discuss the impact of the audio descriptions.
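To illustrate the distinction the abstract draws between audio descriptions and extended audio descriptions, the following is a minimal sketch, not the authors' implementation: a description spoken by TTS either fits in a natural gap between dialogue, or the video must be paused for the remainder (the "extended" case in WCAG 2.0). The speaking-rate constant and function names are assumptions for illustration.

```python
# Hypothetical sketch of scheduling extended audio descriptions
# (WCAG 2.0 terminology); not the paper's actual system.

WORDS_PER_SECOND = 3.0  # assumed average TTS speaking rate


def estimate_speech_seconds(text: str) -> float:
    """Rough duration estimate for a TTS rendering of `text`."""
    return len(text.split()) / WORDS_PER_SECOND


def schedule_description(text: str, gap_seconds: float):
    """Return (speech_seconds, pause_seconds) for one description.

    `gap_seconds` is the silent interval available in the video.
    A positive pause means playback must halt while the TTS finishes
    (an "extended" audio description); zero means the description
    fits within the natural gap.
    """
    speech = estimate_speech_seconds(text)
    pause = max(0.0, speech - gap_seconds)
    return speech, pause


# A short description fits a 3-second gap; a long one forces a pause.
print(schedule_description("A man enters the room", 3.0))
print(schedule_description(
    "A man in a gray coat enters the dimly lit room carrying "
    "a stack of old leather-bound books", 3.0))
```

Under this sketch, an interactive interface like the one described could let the user trigger such pauses on demand rather than baking them into the video.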