There has never been a more exciting time to work as a developer in the digital learning industry. With the convergence of generative AI, immersive media, and advanced browser APIs, we now have the ability to push learning experience design far beyond traditional boundaries.
That’s exactly what I set out to demonstrate in my recent Articulate Guest Webinar, Unlocking the Senses: Reimagining Interaction in Storyline for Accessibility and User Experience. This live session explored how we can rethink interaction design through the lens of our five senses—sight, sound, touch, smell, and taste—to build more engaging, accessible, and truly modern eLearning experiences.
From Clicks to Senses: A Broader Vision for Interactivity
Storyline developers have long relied on a toolkit of mouse and keyboard inputs to deliver interactive content. But what if we started designing learning content around more natural, sensory-driven modes of interaction?
That was the central question of this session. And thanks to the Execute JavaScript trigger in Articulate Storyline 360, the answer is a resounding “Yes, we can.”
Modern browsers support a rich suite of APIs that enable us to connect with webcams, microphones, touchscreens, game controllers, and even stylus pressure sensors. When combined with the creative power of tools like OpenAI’s GPT-4 Turbo and ElevenLabs for voice synthesis, we suddenly gain the ability to create eLearning interactions that are more intuitive and inclusive than ever before.
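To give a flavour of how little code this takes, here is a minimal sketch of dropping a live webcam feed onto a slide from an Execute JavaScript trigger. The sizing, positioning, and styling values are placeholders rather than anything from the webinar demos.

```javascript
// Minimal sketch: attach the learner's webcam to a <video> element
// created on the fly. Run from an Execute JavaScript trigger.
// The sizing/positioning values below are illustrative placeholders.
(async () => {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    const video = document.createElement("video");
    video.autoplay = true;
    video.playsInline = true;
    video.srcObject = stream;
    video.style.cssText = "position:fixed; bottom:16px; right:16px; width:240px;";
    document.body.appendChild(video);
  } catch (err) {
    // The learner declined camera access, or no camera is available.
    console.warn("Webcam unavailable:", err);
  }
})();
```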
Sight: What the Learning Environment Can See
Rather than focusing only on what the learner sees, I invited the audience to consider what the learning environment itself could “see”. By embedding camera feeds into Storyline slides and sending images through OpenAI’s vision API, we can enable interactions based on facial expressions, hand gestures, or even objects held up to the webcam.
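The pattern behind this is simple: capture a frame from the video element, encode it, and ask a vision-capable model to describe what it sees. The sketch below assumes the gpt-4-turbo chat completions endpoint and a placeholder OPENAI_API_KEY; in a real course you would route the request through your own backend rather than embed a key in published content.

```javascript
// Rough sketch: grab a frame from an existing <video> element and ask
// OpenAI's vision-capable chat model what it sees.
// OPENAI_API_KEY and the prompt are placeholders; never ship a key
// inside a published course, proxy the call through your own server.
async function describeWebcamFrame(video) {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d").drawImage(video, 0, 0);
  const dataUrl = canvas.toDataURL("image/jpeg", 0.8);

  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_API_KEY}`
    },
    body: JSON.stringify({
      model: "gpt-4-turbo",
      messages: [{
        role: "user",
        content: [
          { type: "text", text: "Describe the gesture or object shown in this image in one short sentence." },
          { type: "image_url", image_url: { url: dataUrl } }
        ]
      }]
    })
  });
  const data = await response.json();
  return data.choices[0].message.content;
}

// Example: write the description into a Storyline text variable.
// describeWebcamFrame(myVideoElement).then(text => GetPlayer().SetVar("AIResponse", text));
```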
One demo featured a custom-built Storyline activity that recognised different hand shapes in real time, using only a browser, a webcam, and JavaScript. Another project involved Daisy, a virtual art critic who analyses your on-screen sketches and responds with feedback using high-fidelity voice synthesis.
I also shared experiments with WebGazer.js—a JavaScript library that provides eye-tracking capabilities using only a webcam. By mapping eye movement to slide triggers, developers can introduce gaze-based navigation or interaction options into their Storyline projects.
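Here is a rough sketch of that mapping, assuming the webgazer.js script has already been loaded on the page and using an illustrative Storyline variable called GazeZone:

```javascript
// Sketch: map WebGazer's gaze predictions onto a Storyline variable so
// slide triggers can react when the learner looks at one side of the
// screen. The threshold and variable name ("GazeZone") are illustrative.
webgazer.setGazeListener((data, elapsedTime) => {
  if (!data) return; // no prediction yet
  const zone = data.x > window.innerWidth / 2 ? "right" : "left";
  const player = GetPlayer();
  if (player.GetVar("GazeZone") !== zone) {
    player.SetVar("GazeZone", zone); // a "variable changes" trigger can respond
  }
}).begin();
```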
Sound: Beyond Text, Into Conversation
We explored how voice recognition has evolved from the limitations of the old Web Speech API to the much more powerful, real-time speech capabilities available through platforms like OpenAI and ElevenLabs.
Rather than simply transcribing speech, today’s AI agents can understand intent and perform contextual actions inside the Storyline environment. One demo featured a voice-driven AI assistant that could complete form fields, trigger animations, or even carry on a narrative conversation with learners—all within a Storyline project.
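Even the humble Web Speech API is enough to wire up a first voice-driven trigger, and it remains a useful stepping stone before moving to the AI platforms above. A minimal sketch, assuming a Storyline text variable called SpokenText and acknowledging that browser support varies:

```javascript
// Minimal sketch: push the learner's spoken words into a Storyline text
// variable ("SpokenText") so triggers can react to keywords.
// Web Speech API support varies; Chrome and Edge are the safest bets.
const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
if (SpeechRecognition) {
  const recognition = new SpeechRecognition();
  recognition.lang = "en-GB";
  recognition.continuous = true;
  recognition.interimResults = false;
  recognition.onresult = (event) => {
    const transcript = event.results[event.results.length - 1][0].transcript.trim();
    GetPlayer().SetVar("SpokenText", transcript);
  };
  recognition.start();
} else {
  console.warn("Web Speech API not supported in this browser.");
}
```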
This marks a huge step forward in accessibility and user personalisation, especially for learners who prefer to speak rather than type, or for those with motor impairments who may find traditional interfaces challenging.
Touch: Rethinking How We Engage with Our Fingers
Touch is often overlooked in desktop-first design, yet it’s central to mobile interaction. The Pointer Events API offers far more than what Storyline provides out of the box. It enables developers to track finger gestures, stylus pressure, and multi-touch events with precision.
I demonstrated this with a mobile-friendly tracing game for children, as well as a pressure-sensitive drawing activity using a Surface Pen. By extending Storyline with JavaScript, you can bring app-like tactile interactions directly into your courseware.
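Stripped back to its essentials, pressure-sensitive drawing comes down to a few pointer event listeners. A simplified sketch, assuming a canvas element with the illustrative id "sketchpad" has already been added to the slide:

```javascript
// Simplified sketch of pressure-sensitive drawing with the Pointer
// Events API. Assumes a <canvas id="sketchpad"> exists on the slide;
// the id and the line-width mapping are illustrative.
const canvas = document.getElementById("sketchpad");
const ctx = canvas.getContext("2d");
canvas.style.touchAction = "none"; // stop touch scrolling so strokes register
let drawing = false;

canvas.addEventListener("pointerdown", (e) => {
  drawing = true;
  ctx.beginPath();
  ctx.moveTo(e.offsetX, e.offsetY);
});

canvas.addEventListener("pointermove", (e) => {
  if (!drawing) return;
  // e.pressure ranges 0-1 for pens; mice report 0.5 while pressed.
  ctx.lineWidth = 1 + e.pressure * 10;
  ctx.lineTo(e.offsetX, e.offsetY);
  ctx.stroke();
});

canvas.addEventListener("pointerup", () => { drawing = false; });
```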
Smell and Taste: Humour Meets Innovation
While these senses are clearly more metaphorical in the digital space, they offered the perfect segue into more unexpected forms of interaction—like using game controllers to navigate Storyline content.
Through the Gamepad API, it’s possible to connect traditional game controllers to browser-based learning experiences. I shared two examples, including one from my Advanced Animation Workshop, where learners can use gamepad buttons to interact with and control on-screen content—opening doors to gamified experiences that feel native to the devices many learners already enjoy.
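Under the hood this relies on polling navigator.getGamepads(). A compact sketch, using an illustrative Storyline variable called GamepadButton and assuming a standard-mapping controller:

```javascript
// Sketch: poll the first connected gamepad and surface a button press
// as a Storyline variable ("GamepadButton") that triggers can watch.
// Button index 0 is usually A / Cross, but mappings vary by controller.
window.addEventListener("gamepadconnected", () => {
  let wasPressed = false;
  function poll() {
    const pad = navigator.getGamepads()[0];
    if (pad) {
      const isPressed = pad.buttons[0].pressed;
      if (isPressed && !wasPressed) {
        GetPlayer().SetVar("GamepadButton", "A"); // fire on the press, not while held
      }
      wasPressed = isPressed;
    }
    requestAnimationFrame(poll);
  }
  poll();
});
```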
Future Gazing: The Interface as the Next Frontier
The webinar concluded with a look towards the future. Industry leaders like OpenAI’s Sam Altman and former Apple design chief Jony Ive are already investing in new human-computer interfaces. Valve’s Gabe Newell is exploring neural input technologies. The direction is clear: tomorrow’s interaction design will be even more immersive, adaptive, and sensory-rich.
This has enormous implications for digital learning—not just in terms of creative potential, but also for accessibility. By breaking free from the confines of traditional input methods, we can design learning experiences that meet users where they are, in the most natural ways possible.
Final Thoughts
If you’re a fellow Storyline developer, I hope this session sparks your imagination and encourages you to explore the world of sensory-driven interaction. With tools like JavaScript, modern browser APIs, and AI-powered platforms, you’re no longer limited by the default options in Storyline.
I’d love to continue the conversation with you. Connect with me on LinkedIn, check out more of my projects on YouTube, or get in touch via the Discover eLearning website.