Web-based Interactive Experience

Comp 2

Exploration

What am I doing rn?

Context

Exploratory

Services

Machine Learning, HTML/CSS/JS, Lottie, Motion

What is it?

An in-browser interactive experience that uses your webcam to mirror your presence virtually in a way that makes you reconsider your relationship with tech, emphasizing the importance of physical presence and authentic engagement.

The user should walk away with questions like…

Why do we spend so much time every day in front of a screen?

Why are we not more present while using our devices?

Why do we try to fix our appearance when we see our reflections?

Why is our digital presence important?

How does it work?

Real-time webcam input + a custom-trained ML model (Microsoft Azure Custom Vision, exported to TensorFlow.js) + custom animations (made in After Effects, played with Lottie.js) = a seamless blend of webcam input and animated graphics that detects and responds to specific user actions, creating an interactive experience.
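
As a rough sketch of how the webcam half of that pipeline can be wired up in the browser (the element id and function name here are illustrative, not the project's actual code):

// Pipe the user's webcam into a <video> element so individual frames
// can later be handed to the classifier.
const video = document.getElementById('webcam'); // assumed element id

async function startWebcam() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: false });
  video.srcObject = stream;
  await video.play();
}

startWebcam();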

The primary objective of this project was to develop an interface capable of seamlessly detecting and responding to specific user actions in real-time. The animations integrated into the interface were designed to be triggered based on the detected user behavior and to play out in a visually captivating manner. This project served as an initial step towards the broader goal of training a model to recognize a wide range of user interactions, essentially replicating any action the user might perform.


To start, the focus was placed on training the model to recognize two distinct user actions: taking a sip from a drink and using a phone. The animations associated with these actions needed to be vivid and engaging, at times even playfully mocking the user's behavior, so as to draw the user's attention and encourage them to be more present and engaged with the application.

The model was trained using Microsoft's Azure Custom Vision

There were four main sample sets I made and trained the model to recognize: drinking (taking a sip from the bottle), holding (holding the bottle), idle (the user doing nothing), and phone (the user using their phone).
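
A minimal sketch of how a Custom Vision model exported to TensorFlow.js could be loaded and run against a webcam frame — the model path, input size, preprocessing, and label order below are assumptions for illustration, not values from the project:

// Classify one webcam frame into the four trained classes.
const LABELS = ['drinking', 'holding', 'idle', 'phone'];

let model;
async function loadModel() {
  // Assumed location of the TensorFlow.js export (model.json + weight shards).
  model = await tf.loadGraphModel('model/model.json');
}

function classifyFrame(video) {
  return tf.tidy(() => {
    const input = tf.browser.fromPixels(video)
      .resizeBilinear([224, 224]) // assumed network input resolution
      .expandDims(0)
      .toFloat();
    const scores = model.predict(input).dataSync(); // per-class scores
    const best = scores.indexOf(Math.max(...scores));
    return { label: LABELS[best], confidence: scores[best] };
  });
}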

Illustrations and Animations

Lottie, a popular animation library, was utilized to integrate and control animations.

Different animations were loaded and played within designated HTML containers.
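
Loading an animation into its container with Lottie typically looks like the following — the container id and JSON path are placeholders:

// Load one After Effects animation (exported as Lottie JSON)
// into its own container, paused until the model triggers it.
const drinkAnim = lottie.loadAnimation({
  container: document.getElementById('drink-anim'), // assumed container id
  renderer: 'svg',
  loop: false,
  autoplay: false,
  path: 'animations/drinking.json' // assumed path to the exported JSON
});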

The project introduced a confidence-based control mechanism to trigger animations. For instance, animations related to phone usage would only play when the system was highly confident that the user was using their phone (confidence level > 95%).
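
A sketch of that gating logic, assuming the classifier result has the { label, confidence } shape from the snippet above (names are illustrative):

const CONFIDENCE_THRESHOLD = 0.95;
let phoneAnimPlaying = false;

function handlePrediction({ label, confidence }) {
  // Only react to the 'phone' class once the model is highly confident,
  // and avoid restarting the animation on every frame.
  if (label === 'phone' && confidence > CONFIDENCE_THRESHOLD && !phoneAnimPlaying) {
    phoneAnimPlaying = true;
    phoneAnim.play(); // a Lottie instance loaded as shown earlier
  } else if (label !== 'phone' && phoneAnimPlaying) {
    phoneAnimPlaying = false;
    phoneAnim.stop();
  }
}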


The animations were designed to follow specific sequences. For instance, in the drinking scenario, the animation sequence included "initial," "startsFloating," "floating," and "startsFloating in reversal." In the phone usage scenario, the sequence was "startUPhone," "UPhone," "startUPhone in reversal," and "initial."
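
One way to express such a sequence with Lottie is to play frame segments in order and then play the transition segment backwards; the frame ranges below are made up for illustration, the real ones come from the After Effects composition:

// Segment boundaries (frames) for the drinking scenario — illustrative values.
const SEGMENTS = {
  initial: [0, 30],
  startsFloating: [30, 90],
  floating: [90, 150]
};

function enterFloating(anim) {
  anim.loop = false;
  anim.playSegments(SEGMENTS.startsFloating, true);
  anim.addEventListener('complete', function onDone() {
    anim.removeEventListener('complete', onDone);
    anim.loop = true;
    anim.playSegments(SEGMENTS.floating, true); // hold in the looping "floating" state
  });
}

function exitFloating(anim) {
  anim.loop = false;
  // A reversed frame range plays the transition backwards ("startsFloating in reversal").
  anim.playSegments([SEGMENTS.startsFloating[1], SEGMENTS.startsFloating[0]], true);
}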


The project used the requestAnimationFrame function to continuously update and respond to changes in webcam input, providing a real-time interactive experience. The animations were configured to loop seamlessly and, when necessary, play in reverse to create smooth transitions between different states.
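
Tying it together, the update loop can look roughly like this (in practice one might throttle inference rather than classify on every single frame):

// Classify the current webcam frame on each animation frame and
// route the result to the animation logic.
function loop() {
  if (model && video.readyState >= 2) { // wait until the video has data
    handlePrediction(classifyFrame(video));
  }
  requestAnimationFrame(loop);
}

requestAnimationFrame(loop);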


Most of the tools were new to me, and the most challenging task was to get the model itself to work, which is still buggy. But regardless, it was fun.

This experience welcomes anyone who interacts with digital devices in their daily life. It aims to encourage users to reconsider their relationship with technology, emphasizing the importance of physical presence and authentic engagement. The hope is that, through this experience, individuals rediscover and cherish the uniqueness of their daily interactions, leaving with a renewed appreciation that endures well beyond the interaction itself.

Made with love by Manan Dua
