Experience the future of multimedia with subtitles and 3D sign language avatars—bridging the gap between sound and sign for an inclusive world.
AuralFlix is an AI-powered multimedia system designed to enhance communication accessibility between the deaf and hearing communities. The platform integrates advanced technologies such as voice-to-text transcription, emotion-aware subtitle coloring, text-to-sign language video generation, 3D sign language avatar animation, and an emergency sound detection system that alerts users to critical sounds like alarms or sirens in their environment.
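One of the features above, emotion-aware subtitle coloring, can be sketched in a few lines. This is a minimal illustration only, assuming an upstream emotion classifier that emits labels such as "joy" or "anger"; the label names and hex colors here are hypothetical, not AuralFlix's actual palette.

```python
# Hypothetical sketch: map emotion labels from an upstream classifier to
# subtitle colors. Labels and hex values are illustrative assumptions.

EMOTION_COLORS = {
    "joy": "#FFD700",      # warm gold for happy speech
    "anger": "#FF4C4C",    # red for angry speech
    "sadness": "#4C7CFF",  # blue for sad speech
    "fear": "#B266FF",     # purple for fearful speech
    "neutral": "#FFFFFF",  # default white
}

def color_subtitle(text: str, emotion: str) -> str:
    """Wrap a subtitle line in an HTML span colored by its detected emotion."""
    color = EMOTION_COLORS.get(emotion, EMOTION_COLORS["neutral"])
    return f'<span style="color:{color}">{text}</span>'

print(color_subtitle("I can't believe we won!", "joy"))
```

In a real player the same mapping would more likely be applied through WebVTT cue styling than inline HTML, but the idea is the same: the classifier's label selects the cue's color.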
AuralFlix intelligently synchronizes all media components—audio, subtitles, sign animations, and emergency alerts—through a frame alignment algorithm, ensuring seamless and context-aware playback. By combining artificial intelligence with human-centered design, the system provides a fully inclusive multimedia experience that bridges accessibility gaps, enhances safety awareness, and promotes equal participation for all users.
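The frame alignment idea described above can be sketched as follows: given a playback frame rate, timestamped subtitle cues and sign-animation clips are converted to frame indices so every track advances in lockstep. The data layout and the 30 fps rate are assumptions for illustration, not the project's actual algorithm.

```python
# Illustrative frame-alignment sketch: convert timestamped tracks
# (subtitles, sign-animation clips) into one frame-indexed timeline.
# Frame rate and track format are assumptions for this example.

FPS = 30  # assumed playback frame rate

def to_frame(t_seconds: float, fps: int = FPS) -> int:
    """Map a timestamp in seconds to the nearest frame index."""
    return round(t_seconds * fps)

def align_tracks(subtitles, sign_clips, fps=FPS):
    """Merge (start, end, payload) cues from both tracks into a
    dict keyed by frame index, so playback can look up all active
    media for the current frame in O(1)."""
    timeline = {}
    for start, end, text in subtitles:
        for f in range(to_frame(start, fps), to_frame(end, fps)):
            timeline.setdefault(f, {})["subtitle"] = text
    for start, end, clip in sign_clips:
        for f in range(to_frame(start, fps), to_frame(end, fps)):
            timeline.setdefault(f, {})["sign"] = clip
    return timeline

timeline = align_tracks(
    subtitles=[(0.0, 1.0, "Hello!")],
    sign_clips=[(0.2, 1.2, "hello_sign.glb")],
)
# At frame 15 (0.5 s) both the subtitle and the sign clip are active.
print(timeline[15])
```

A production player would use interval trees or pre-rendered tracks rather than a per-frame dict, but the lookup-by-frame principle is what keeps audio, subtitles, and sign animation context-aware and in sync.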
This research project is conducted under the Faculty of Computing, Sri Lanka Institute of Information Technology (SLIIT) as part of the IT4010 – Research Project (2025) module.
Advanced machine learning algorithms instantly detect critical emergency sounds—from fire alarms to medical alerts—providing visual notifications to keep deaf and hard-of-hearing individuals safe in any environment.
Ultra-fast response time
Precise sound recognition
Continuous monitoring
Instant visual alerts for smoke and fire alarm sounds
Detects emergency vehicle sirens and warning signals
Recognizes burglar alarms and security system alerts
Identifies medical emergency sounds and vital alarms
Experience peace of mind with 24/7 emergency sound monitoring. Our AI-powered system ensures you never miss a critical alert.
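The detection described above relies on trained machine-learning models; as a simplified stand-in, the sketch below flags a single alarm frequency in an audio frame using the Goertzel algorithm. The sample rate, tone frequency, and threshold are all assumptions chosen for illustration.

```python
import math

# Simplified illustration only: the production system uses trained ML
# classifiers, but a single-tone detector shows the core idea of
# flagging an alarm frequency in an audio frame.

SAMPLE_RATE = 8000  # Hz (assumed)
ALARM_FREQ = 3100   # Hz, roughly a smoke-alarm tone (assumed)

def goertzel_power(samples, target_freq, sample_rate=SAMPLE_RATE):
    """Power of one frequency bin, computed with the Goertzel algorithm."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

def is_alarm(samples, threshold=1e4):
    """True when the alarm-frequency power in this frame exceeds threshold."""
    return goertzel_power(samples, ALARM_FREQ) > threshold

# Synthetic frames: a 3.1 kHz alarm tone vs. silence.
n = 400
alarm = [math.sin(2 * math.pi * ALARM_FREQ * i / SAMPLE_RATE) for i in range(n)]
silence = [0.0] * n
print(is_alarm(alarm), is_alarm(silence))
```

A learned classifier generalizes far beyond one tone (sirens, breaking glass, medical alarms), which is why the real system uses ML rather than fixed-frequency detection; this sketch only demonstrates the continuous frame-by-frame monitoring loop.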
Experience the future of accessible entertainment. AuralFlix seamlessly integrates subtitles, 3D sign language avatars, and emotion-aware technology to create an inclusive viewing experience like never before.
Get started in seconds with our intuitive player
Accessible content for deaf and hearing communities
Simple, intuitive interface that requires no technical knowledge
Regular updates with new features and enhancements
Our research aims to advance accessible multimedia technologies through academic publications and global recognition at prestigious innovation competitions.
International Conference on Advancements in Computing
Novel framework for sign language translation using 3D avatar representation with AI-driven gesture synthesis.
World Conference on Applied Science, Engineering and Technology (WCASET)
Comprehensive framework combining AI, 3D animation, and NLP for accessible sign language communication.
49th World Conference on Applied Science, Engineering & Technology (WCASET-2025) – Bangkok, Thailand
Advanced AI framework enabling natural sign language translation through 3D avatar gesture representation.
48th World Conference on Applied Science, Engineering & Technology (WCASET-2025) – Kuala Lumpur, Malaysia
Research on integrating deep learning with 3D avatar animation for accurate sign language gesture synthesis.
4th International Conference on Advances in Science, Engineering & Technology (ICASET-2025) – New Delhi, India
Innovative approach to accessibility technology through AI-powered sign language avatar generation.
International Conference on Recent Trends in Multi-Disciplinary Research (ICRTMDR-2025) – Riyadh, Saudi Arabia
Multi-disciplinary research combining AI, computer graphics, and accessibility for sign language translation.
We are preparing to showcase AuralFlix at prestigious global competitions to gain recognition and support for accessible multimedia innovation.
Global student technology competition focused on solving real-world problems
Building solutions for local communities powered by Google technology
Global initiative recognizing digital innovation with social impact
Premier global competition for student entrepreneurs making an impact
Experience the seamless journey from video upload to fully accessible content in just four simple steps.