Open Source AI Material Transformation, Motion Study & Sound for Emotion
Case
An internal Valentine’s Day concept exploring material transformation, controlled motion, and symbolic storytelling using Open Source generative AI pipelines.
Date
An exploratory generative AI use case, developed in February 2026 for a Valentine’s Day release.
tools & techniques
Text-to-image material generation, AI image editing for texture application, first-frame / last-frame video transitions, controlled reveal sequences, and AI audio generation, all built with Open Source AI models and ComfyUI-based pipelines.
Project Overview
VALENTAINS 2026
VALENTAINS is a self-initiated concept project exploring how generative AI can be used to create symbolic, material-driven storytelling through structured pipelines.
The project begins with the creation of a heart shape in 2D, developed in Affinity as both a white and red heart on a black background. The white heart was then translated into a clean white 3D base form, used as the structural foundation for material exploration. The red 2D heart was also converted into a red 3D version, serving as the emotional core of the sequence.
From there, we generated an internal material library using Open Source text-to-image models and our very own prompt-system for materials, inside a ComfyUI pipeline. Materials generated included:
- Cracked asphalt
- Concrete
- Soil and earth
- Ice and snow
- White ceramic
- White canvas paper
- Marble
- Rough stone
- Red stone
- Brown leather
- Egg shell
- Light stone
These materials were generated using the text-to-image Open Source AI model Qwen Image 2512, forming a reusable texture system.
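The prompt system behind the texture library can be sketched as a single reusable template applied across the material list. This is a minimal illustration, assuming a template-per-material approach; the wording below is hypothetical, not the exact prompts used in the project:

```python
# Hypothetical sketch of a material prompt system: one reusable template
# applied across the material list. Template wording is illustrative.
MATERIALS = [
    "cracked asphalt", "concrete", "soil and earth", "ice and snow",
    "white ceramic", "white canvas paper", "marble", "rough stone",
    "red stone", "brown leather", "egg shell", "light stone",
]

TEMPLATE = (
    "seamless {material} texture, top-down macro photo, "
    "even studio lighting, high detail, no objects, no text"
)

def build_material_prompts(materials=MATERIALS, template=TEMPLATE):
    """Return one positive prompt per material, keyed by a slug name."""
    return {m.replace(" ", "_"): template.format(material=m)
            for m in materials}

prompts = build_material_prompts()
# prompts["marble"] starts with "seamless marble texture, ..."
```

Keeping the template separate from the material list is what makes the library reusable: new materials only need a new list entry, not a new prompt.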
We then used another Open Source AI model for image-editing called Qwen Image Edit 2511. We applied these generated textures directly onto the 3D white heart, preserving the exact geometry while transforming the surface material. The result was a series of identical heart forms, each with a distinct material identity.
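The texture-application step lends itself to the same kind of batching: one edit job per material, all sharing the same white 3D heart render. The sketch below is a hypothetical illustration of that structure; the file paths and edit-prompt wording are assumptions, not the project's actual inputs:

```python
# Hypothetical batching sketch for the texture-application step.
# Paths and prompt wording are illustrative placeholders.
BASE_HEART = "renders/white_heart_3d.png"

def build_edit_jobs(material_textures):
    """material_textures: mapping of material name -> texture image path.
    Returns one edit job per material, all reusing the same base render."""
    jobs = []
    for name, texture in material_textures.items():
        jobs.append({
            "base_image": BASE_HEART,
            "reference_texture": texture,
            "prompt": (f"apply the {name} texture to the heart surface, "
                       "keep the exact 3D geometry, lighting and camera "
                       "unchanged"),
        })
    return jobs

jobs = build_edit_jobs({"marble": "textures/marble.png",
                        "concrete": "textures/concrete.png"})
```

Reusing one base render across every job is what preserves the identical geometry between the material variants.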
Now it was time to generate videos. For that we used WAN 2.2, another Open Source AI model for image-to-video, with a first-frame / last-frame workflow. Read more about that further down. The next step in the workflow is upscaling the video (or, as we call it in GENERAITR, Post-Generation). For that we used yet another Open Source AI model called SeedVR2.
And finally, it was time to add some sound effects and music. For that, we used two more Open Source models: Stable Audio, a text-to-audio model, for the sound effects, and Ace-Step 1.5 for the music.
It was then all put together and edited in Final Cut Pro, both as a slightly longer video and as short social media videos of each heart’s heartbeat.
Visual Highlights
Project IMAGES
Expression, Graphics & Motion
The final project consists of 9 short videos, each featuring a different material heart:
- Concrete heart
- Ceramic heart
- Earth and soil heart
- Ice heart
- Brown leather heart
- Red stone heart
- Marble stone heart
- Rough stone heart
- Canvas paper heart
Each heart first appears in its material form, beating in subtle motion. If cracks are present in the material, they follow the rhythm of the heartbeat, reinforcing the illusion of physical structure responding to motion.
Phase 1, The Heartbeat
These are the base motions used before transformation begins.
- Beating
- Pulsing
- Expanding and contracting
- Subtle surface vibration
- Cracks flexing with the rhythm
- Rhythmic deformation
- Internal pressure building
Phase 2, Revealing the Red Heart
- Cracking open
- Breaking apart
- Crumbling
- Shattering
- Melting
- Burning
- Peeling away
- Unfolding
- Cutting in half
- Core breaking through
Not all movements were used in the final videos.
CREATING THE SHAPE
HEARTS TURNING INTO BEATS
MORE HEARTS
Motion & Transformation Exploration
The animation stage was created using the Open Source video model WAN 2.2.
First, we used first-frame / last-frame logic to generate a beating motion for each textured 3D heart. The structural integrity of the material was maintained, while subtle deformation and crack movement followed the pulse.
The second sequence in each video focuses on transformation. Using first-frame / last-frame generation and controlled prompting, each material heart reveals the red core heart through different physical interpretations:
- Breaking
- Crumbling
- Unfolding
- Burning
- Shattering
- Melting
- Cutting
- Cracking
The material surface gives way until the red 3D heart is revealed. The sequence can be interpreted symbolically, as resilience, vulnerability, or the idea that love breaks through hardness.
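The first-frame / last-frame logic described above can be sketched as a planning step: every textured heart supplies the first frame, the red core heart supplies the last frame, and a motion prompt selects the physical interpretation. The `ClipJob` structure, paths, and prompt strings below are hypothetical illustrations, not the actual project assets:

```python
# Illustrative sketch of first-frame / last-frame reveal planning.
# Names, paths and motion prompts are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ClipJob:
    first_frame: str    # still of the textured material heart
    last_frame: str     # still of the revealed red heart
    motion_prompt: str  # how the material gives way

def plan_reveal_jobs(hearts, reveal_styles):
    """Build one job per heart; reveal_styles maps heart name to a
    motion prompt, with a generic cracking motion as the fallback."""
    return [
        ClipJob(
            first_frame=f"stills/{name}_heart.png",
            last_frame="stills/red_heart.png",
            motion_prompt=reveal_styles.get(name, "material cracks open"),
        )
        for name in hearts
    ]

jobs = plan_reveal_jobs(
    ["concrete", "ice", "marble"],
    {"ice": "ice melts away, water drips",
     "marble": "marble shatters into shards"},
)
```

Because every job ends on the same last frame, the nine clips converge on the identical red heart, which is what makes the shared ending transformation possible.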
Each video concludes with a final transformation where the red heart converts into a red plus icon on a black background, generated using the same Open Source video workflow.
Audio & Sound Design
The audio in VALENTAINS is generated using Open Source AI models.
The heartbeat sound was created using Stable Audio, then adjusted in tempo to reinforce the rhythmic pulse of each material heart, though we did not attempt to match each heartbeat exactly.
For the background score, we used Ace-Step 1.5 to generate a more dramatic, hopeful, and uplifting musical layer that evolves alongside the visual transformation.
Together, the sound design strengthens the symbolic arc, from tension to warmth, from resistance to reveal.
Open Source & Infrastructure
VALENTAINS is built entirely using Open Source AI models:
- Qwen Image 2512 for material generation
- Qwen Image Edit 2511 for texture application
- WAN 2.2 for motion and video generation
- Stable Audio for audio generation (heartbeat)
- Ace-Step 1.5 for music generation
- SeedVR2 for upscaling
All pipelines were built and executed in ComfyUI, using structured node-based workflows for repeatability and control.
All generation was performed on EU-based servers, ensuring transparency, data control, and alignment with regulatory requirements.
In Summary
VALENTAINS explores:
- Structured material generation as a reusable texture library
- Image-to-image texture application with preserved geometry
- First-frame / last-frame generation and prompts for controlled motion
- Symbolic material transformation
- Audio and music generation that supports the symbolic arc
- Open Source AI pipelines executed in ComfyUI
- Emotion-driven storytelling through structured generative systems
The project demonstrates how generative AI can move beyond isolated images and into controlled, repeatable audiovisual storytelling systems.
products and services
Next-Level Audiovisual Solutions
Generative AI Audiovisuals
Harnessing generative AI to create dynamic and personalized content through advanced, workflow-ready generative AI pipelines and customized generative AI services.
Real-Time Experiences
Utilizing Unreal Engine for stunning real-world-based or 3D-rendered visuals and immersive interactive experiences that captivate and engage audiences.
Interactive UGC Experiences
Empowering users and players to interact and enjoy while learning, through playable digital twins or interactive music experiences in Fortnite and other UGC platforms.
Let's Collaborate on Your Next Vision
We invite you to explore the possibilities of collaboration with us at TOMLIN STUDIO. Whether you’re looking to innovate in digital media using generative AI workflows, create immersive and interactive experiences, or leverage other cutting-edge audiovisual technology, our team is here to bring your vision to life. Reach out today to discuss how we can work together to achieve extraordinary results.