SEE THE SOUND, FEEL THE STORY

AuralFlix: Where Sound Becomes Visible

Experience the future of multimedia with subtitles and 3D sign language avatars—bridging the gap between sound and sign for an inclusive world.

Voice-to-Text with Emotion Recognition
Emotion-Based Subtitle Coloring
Text-to-Sign Language Conversion
3D Sign Language Animation
Frame Synchronization Algorithm
Emergency Sound Detection System (New)
Inclusive Multimedia Player
99.9% Accuracy
98% Accuracy
15+ Languages
PROJECT ABSTRACT

Transforming Accessibility

Through Innovation

AuralFlix is an AI-powered multimedia system designed to enhance communication accessibility between the deaf and hearing communities. The platform integrates advanced technologies such as voice-to-text transcription, emotion-aware subtitle coloring, text-to-sign language video generation, 3D sign language avatar animation, and an emergency sound detection system that alerts users to critical sounds like alarms or sirens in their environment.

AuralFlix intelligently synchronizes all media components—audio, subtitles, sign animations, and emergency alerts—through a frame alignment algorithm, ensuring seamless and context-aware playback. By combining artificial intelligence with human-centered design, the system provides a fully inclusive multimedia experience that bridges accessibility gaps, enhances safety awareness, and promotes equal participation for all users.
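As a rough illustration of the frame-alignment idea described above, the sketch below maps subtitle cue times to video frame indices at a fixed frame rate, so subtitles, sign animations, and alerts can all be keyed to the same frame timeline. The `Cue` type and `align_cues` function are hypothetical names for this sketch, not AuralFlix's actual API.

```python
from dataclasses import dataclass

@dataclass
class Cue:
    start: float  # cue start time in seconds
    end: float    # cue end time in seconds
    text: str

def align_cues(cues, fps=30.0):
    """Map each cue to the inclusive frame range it covers at the given fps."""
    aligned = []
    for cue in cues:
        first = int(cue.start * fps)
        # Last frame is one before the next cue's first frame, never before `first`.
        last = max(first, int(cue.end * fps) - 1)
        aligned.append((first, last, cue.text))
    return aligned

cues = [Cue(0.0, 1.5, "Hello"), Cue(1.5, 3.0, "world")]
print(align_cues(cues, fps=30.0))
```

Once every track is expressed in frame indices, the player only needs to look up the active cue for the current frame, which keeps audio, text, and animation in lockstep.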

This research project is conducted under the Faculty of Computing, Sri Lanka Institute of Information Technology (SLIIT) as part of the IT4010 – Research Project (2025) module.

Objective

Accessible communication for all.

Innovation

Emotion-aware subtitles & 3D sign avatars.

Technology

AI, TensorFlow, Go & CUDA integration.

Impact

Bridging deaf and hearing communities.

ABOUT AURALFLIX

Redefining Accessibility

in Multimedia

An AI-powered multimedia platform bridging communication between hearing and deaf communities through subtitles, emotion-aware coloring, and lifelike 3D sign language avatars.

Subtitles

99.2%

AI-powered speech recognition instantly generates accurate English subtitles

Accuracy

3D Sign Language

150+

Lifelike 3D avatars translate content into sign language with natural gestures

Gestures

Emotion Detection

7 Types

Advanced AI captures and reflects emotional context with color-coded subtitles

Emotions

Frame Sync

<16ms

Perfect synchronization between audio, subtitles, and sign language animations

Latency
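A minimal sketch of the emotion-based subtitle coloring described above, assuming seven basic emotion classes. The class names and hex colors here are illustrative choices for the sketch, not the project's actual palette.

```python
# Assumed mapping from seven emotion classes to subtitle colors (illustrative).
EMOTION_COLORS = {
    "neutral": "#FFFFFF",
    "happy": "#FFD700",
    "sad": "#6495ED",
    "angry": "#FF4500",
    "fear": "#9370DB",
    "surprise": "#40E0D0",
    "disgust": "#9ACD32",
}

def colorize_subtitle(text, emotion):
    """Wrap a subtitle line in a color matching its detected emotion.

    Unknown labels fall back to the neutral color.
    """
    color = EMOTION_COLORS.get(emotion, EMOTION_COLORS["neutral"])
    return f'<span style="color:{color}">{text}</span>'

print(colorize_subtitle("I can't believe it!", "surprise"))
```

The fallback to neutral matters in practice: when the classifier is unsure, a wrong color is more misleading to the viewer than no color at all.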
LIFE-SAVING TECHNOLOGY · NEW

Emergency Sound Detection

<0.5s Detection Speed

98.5% Accuracy Rate

24/7 Always Active

Our AI-powered emergency detection system monitors your environment 24/7, instantly identifying critical sounds like fire alarms, ambulance sirens, security alerts, and medical emergencies. Get visual notifications to stay safe and aware, even in noisy environments.

Fire Alarms

Sirens & Alerts

Security Alarms

Medical Alerts

KEY FEATURES

Features That Redefine Accessibility

Cutting-edge AI technology designed to break down barriers and create inclusive multimedia experiences for everyone.

Subtitles

AI-powered speech-to-text conversion generates accurate, synchronized subtitles instantly.

Powered by Advanced AI
3D Sign Language Avatars

Dynamic, phrase-based sign translations delivered through lifelike 3D avatars.

Powered by Advanced AI
Emotion Recognition

Advanced AI detects tone and emotion to create more meaningful and contextual subtitles.

Powered by Advanced AI
Seamless Synchronization

Ensures perfect frame-by-frame timing between audio, text, and sign language animations.

Powered by Advanced AI
User-Friendly Interface

Simple, intuitive, and accessible interface designed for all users regardless of ability.

Powered by Advanced AI
Multi-Purpose Support

Works seamlessly with movies, educational lectures, presentations, and all video content.

Powered by Advanced AI

Experience the future of accessible multimedia content

Emergency Sound Detection System · Life-Saving

Stay Safe with AI-Powered Emergency Alert Detection

Advanced machine learning algorithms instantly detect critical emergency sounds—from fire alarms to medical alerts—providing visual notifications to keep deaf and hard-of-hearing individuals safe in any environment.

< 0.5s Detection – Ultra-fast response time

98.5% Accuracy Rate – Precise sound recognition

24/7 Reliability – Continuous monitoring

Critical Sounds We Detect

Fire Alarms

Instant visual alerts for smoke and fire alarm sounds

Sirens & Alerts

Detects emergency vehicle sirens and warning signals

Security Alarms

Recognizes burglar alarms and security system alerts

Medical Alerts

Identifies medical emergency sounds and vital alarms

Your Safety is Our Priority

Experience peace of mind with 24/7 emergency sound monitoring. Our AI-powered system ensures you never miss a critical alert.
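As an illustration of the alert-dispatch step such a system might use, the sketch below maps a classifier's (label, confidence) prediction to one of the four alert categories above. The label names and the 0.85 confidence threshold are assumptions for this sketch, not the system's actual values.

```python
# Assumed classifier labels mapped to the four visual-alert categories.
ALERT_CATEGORIES = {
    "fire_alarm": "Fire Alarms",
    "siren": "Sirens & Alerts",
    "security_alarm": "Security Alarms",
    "medical_alert": "Medical Alerts",
}

def dispatch_alert(label, confidence, threshold=0.85):
    """Return the visual-alert category for a confident emergency prediction.

    Returns None for non-emergency sounds or low-confidence predictions,
    so ordinary background noise never triggers a false alarm.
    """
    if label in ALERT_CATEGORIES and confidence >= threshold:
        return ALERT_CATEGORIES[label]
    return None

print(dispatch_alert("siren", 0.97))   # confident siren prediction -> alert
print(dispatch_alert("speech", 0.99))  # ordinary speech -> no alert
```

Thresholding on confidence is the usual trade-off here: a lower threshold catches quieter alarms sooner, a higher one keeps false alerts rare.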

YOUR ACCESSIBLE MEDIA PLATFORM

Start Watching with AuralFlix Today!

Experience the future of accessible entertainment. AuralFlix seamlessly integrates subtitles, 3D sign language avatars, and emotion-aware technology to create an inclusive viewing experience like never before.

100% Accessible
AI-Powered Technology
Universal Design
Privacy Protected
10K+ Active Users
99.8% Uptime
4.9/5 User Rating

🚀 Start exploring now and be part of the future of inclusive entertainment!

Launch & Watch

Get started in seconds with our intuitive player

Ready to Use
Built for All

Accessible content for deaf and hearing communities

Ready to Use
Easy to Use

Simple interface without technical knowledge needed

Ready to Use
Always Improving

Regular updates with new features and enhancements

Ready to Use
PUBLICATIONS & FUTURE GOALS

Research Contributions & Aspirations

Our research aims to advance accessible multimedia technologies through academic publications and global recognition at prestigious innovation competitions.

Recent Publications

Accepted · 2025

AI-Driven 3D Avatar Framework for Sign Language Translation and Gesture Representation

International Conference on Advancements in Computing

Novel framework for sign language translation using 3D avatar representation with AI-driven gesture synthesis.

Accepted – Nominated for Best Paper Award · 2025

AI-Driven 3D Avatar Framework for Sign Language Translation and Gesture Representation

World Conference on Applied Science, Engineering and Technology (WCASET)

Comprehensive framework combining AI, 3D animation, and NLP for accessible sign language communication.

Under Review · 2025

AI-Driven 3D Avatar Framework for Sign Language Translation and Gesture Representation

49th World Conference on Applied Science, Engineering & Technology (WCASET-2025) – Bangkok, Thailand

Advanced AI framework enabling natural sign language translation through 3D avatar gesture representation.

Under Review · 2025

AI-Driven 3D Avatar Framework for Sign Language Translation and Gesture Representation

48th World Conference on Applied Science, Engineering & Technology (WCASET-2025) – Kuala Lumpur, Malaysia

Research on integrating deep learning with 3D avatar animation for accurate sign language gesture synthesis.

Under Review · 2025

AI-Driven 3D Avatar Framework for Sign Language Translation and Gesture Representation

4th International Conference on Advances in Science, Engineering & Technology (ICASET-2025) – New Delhi, India

Innovative approach to accessibility technology through AI-powered sign language avatar generation.

Under Review · 2025

AI-Driven 3D Avatar Framework for Sign Language Translation and Gesture Representation

International Conference on Recent Trends in Multi-Disciplinary Research (ICRTMDR-2025) – Riyadh, Saudi Arabia

Multi-disciplinary research combining AI, computer graphics, and accessibility for sign language translation.

Target Competitions & Recognition

We are preparing to showcase AuralFlix at prestigious global competitions to gain recognition and support for accessible multimedia innovation.

Target 2026
Microsoft Imagine Cup

Global student technology competition focused on solving real-world problems

Target 2026
Google Solution Challenge

Building solutions for local communities powered by Google technology

Target 2026
World Summit Awards (WSA)

Global initiative recognizing digital innovation with social impact

Target 2026
Global Student Entrepreneur Awards (GSEA)

Premier global competition for student entrepreneurs making an impact

Technology Stack

Next.js – Frontend Framework
React.js – UI Library
Tailwind CSS – CSS Framework
Bootstrap – CSS Framework
Go – API Services
Python – ML & Data Processing
TensorFlow – ML Framework
PyTorch – Deep Learning
MMPose – 3D Pose Detection
Whisper AI – Speech-to-Text
OpenCV – Computer Vision
Librosa – Audio Processing
Custom Frame Sync – Python Algorithm
SQLite – Database
Local Storage – Data Management
CUDA – GPU Acceleration
cuDNN – Deep Learning GPU
Docker – Containerization
Shell Scripts – Automation
Virtual Environments – Development Tools
HOW IT WORKS

The Magic Behind AuralFlix

Experience the seamless journey from video upload to fully accessible content in just four simple steps.

01

Upload Your Video

Upload any movie, lecture, or video to the AuralFlix platform. Supported formats include MP4, MOV, MP3, and more.

02

AI Processes the Content

AuralFlix extracts voice, detects emotions, generates subtitles, and converts text to sign language using advanced AI.

03

Enable Subtitles & Sign Language

Choose subtitles and/or 3D sign language avatar while watching your content.

04

Enjoy Accessible Viewing!

Experience perfectly synchronized subtitles and sign animations for a seamless, inclusive multimedia experience.
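The four steps above can be sketched as a minimal processing pipeline. Every function here is a stand-in stub with hypothetical names; the real system uses Whisper for transcription and dedicated emotion-recognition and 3D-avatar models in place of these placeholders.

```python
def transcribe(audio_path):
    """Stub for speech-to-text (step 2a); returns timed subtitle segments."""
    return [{"start": 0.0, "end": 1.5, "text": "Hello"}]

def detect_emotion(segment):
    """Stub for per-segment emotion recognition (step 2b)."""
    return "neutral"

def to_sign_gloss(text):
    """Stub for text-to-sign conversion (step 2c); returns a gloss sequence
    that would drive the 3D avatar's gestures."""
    return text.upper().split()

def process_video(audio_path):
    """Run the pipeline and return one synchronized record per segment,
    carrying everything the player needs: timing, subtitle, emotion, gloss."""
    tracks = []
    for seg in transcribe(audio_path):
        tracks.append({
            "start": seg["start"],
            "end": seg["end"],
            "subtitle": seg["text"],
            "emotion": detect_emotion(seg),
            "gloss": to_sign_gloss(seg["text"]),
        })
    return tracks

print(process_video("movie.mp4"))
```

Because each record carries its own start and end times, the player can enable or disable subtitles and the sign avatar independently (step 3) without re-processing the video.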

Simple, Fast, and Completely Automated