
Pop Culture Soda - Adding Creative Pop With AI
ROLES: Retouching | Photography | Creative | AI
We're popping off with AI experiments in this case study.
The Pop Culture shoot was an experiment in pushing current AI tools. We like to see what's out there at the moment by actually using the available tech to develop a concept, ideally one without a real deadline or client, but one that still holds to the same technical and creative rigor.
Everything starts with a sketch—always with barely legible notes and indecipherable scribbled doodles that are more like reminders than directions. To be fair, I didn’t originally plan to share the sketch with anyone. That being said, I’m not sure I’d change the sketch; it’s just part of my process. So, you probably can’t read it, but you might see the base visual already there if you squint. The concept was: a can floating in a bubble, surrounded by lemon and ginger. Possibly an explosion or “popping,” playing off the brand colors and brand art—which reminded me of early Flash art, stop motion, paper cutout shapes, and playful illustrations. It also had this nice quality of balanced contrast: clean vector fonts and graphics mixed with looser, colorful, imperfect illustrations that felt like a surreal cartoon moment crashing into live action—or a charming buddy comedy where one friend wears a suit and the other wears flip-flops.


If this hadn't been an AI experiment, I would've loved to build real construction paper sets and reassemble them in stop motion around the can. Still, we started similarly, with the idea of cutout illustrations on paper falling around the product. Enter Midjourney. Except Midjourney wasn't having it. While AI is surprisingly good at random mashups, it still pulls from what's already out there, and there just aren't many references for what we were asking. So it defaulted to things it did understand. It's a common problem we keep running into with AI in unique creative scenarios: getting weird but specific is where you start to see the breaking points.
That did lead to a happy accident, though: an image of a lemon made out of paper—not an illustration on paper, but one folded like origami. That inspired a pivot: making the whole scene from 3D paper elements, rather than 2D illustrations in 3D space. The bubble around the can became a new creative challenge. How do you make a bubble you can see inside feel like it belongs in an origami paper world?



The first image is what led us to try the origami style. The second image is an example of Midjourney getting weird, but not in a useful way. The third image is one of the first that landed in the area we started to like.
In the end, we got something close to what was floating in our heads and generated variations we could chop up in Photoshop and piece back together to better direct the background. The bubble had to be larger to fit the can in proportion to the lemon and ginger, and to act as a stronger focal point. Some elements didn't make sense and needed to be removed, swapped, or scaled. The paper bubble especially needed more presence, so we used Topaz Labs to upscale it with AI-assisted enhancement.



These images show the base image we used, the version after Photoshop editing to make the paper bubble larger, and the Pop Culture can as shot in studio. We had shot the can for a previous quick mock-up test, so in this case we flipped the background to match the lighting on the can.
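If you're curious what that upscaling step looks like in code, here's a minimal open-source stand-in: OpenCV's dnn_superres module running a pretrained EDSR model. This is not Topaz's actual pipeline, just the same idea in sketch form, and the file paths are placeholders.

```python
# A rough open-source stand-in for the Topaz upscaling step:
# OpenCV's dnn_superres module with a pretrained EDSR model.
# Requires opencv-contrib-python and the EDSR_x4.pb weights from
# the OpenCV model zoo. Paths and scale factor are placeholders.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")    # pretrained 4x super-resolution weights
sr.setModel("edsr", 4)        # algorithm name and upscale factor

image = cv2.imread("paper_bubble_plate.png")
upscaled = sr.upsample(image)  # 4x the resolution with learned detail
cv2.imwrite("paper_bubble_plate_4x.png", upscaled)
```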
Once we had the plate, it became a more standard composite. We shot the can in studio, matched the lighting and perspective, retouched it, and blended it into the scene. In the end, we created two origami versions and a third “realistic” non-paper version.
Final version 2 of the paper theme
From there, we brought it into Runway to generate a video clip from the hero image. Considering the effort that goes into a single hero frame, turning it into a short video is a huge value add—something that would’ve otherwise required planning from the start, using a real set or full CGI.
This process surfaced two problems:
One, the paper lemons turned into real lemons almost immediately. AI models are trained on what they’ve seen—and there aren’t many floating origami lemons in slow motion on an even blue background, with a shredded yellow paper ball in the center. We managed to get about 1.5 seconds of usable pull-back before major distortion kicked in, then used Topaz to stretch that into a 10-second slow-mo.
One of the initial Runway outputs. I left it sideways because this was the only aspect ratio Runway supported at the time; it's interesting to see how quickly things progress and how you work around the restrictions. The can drifting in at the end shows the model does understand the scene in a horizontal orientation, so having to flip it isn't ideal. It does get us more pixels to work with, though, which was more important in this context.
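Stretching roughly 1.5 seconds of usable footage into a 10-second slow-mo comes down to synthesizing in-between frames. Topaz does this with motion-aware interpolation; the sketch below shows the simplest form of the same resampling idea, plain linear frame blending with OpenCV. The file names and stretch factor are illustrative, not the project's actual files.

```python
# Naive slow-motion by linear frame blending: stretch a short clip
# to a longer duration at the same frame rate. Topaz uses motion-aware
# ML interpolation; this cross-fade version only illustrates the
# resampling idea. Loads the whole clip into memory, fine for ~1.5s.
import cv2
import numpy as np

def slow_motion(src_path, dst_path, stretch=6.67):
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames = []
    ok, frame = cap.read()
    while ok:
        frames.append(frame.astype(np.float32))
        ok, frame = cap.read()
    cap.release()

    h, w = frames[0].shape[:2]
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for i in range(int(len(frames) * stretch)):
        # Map output frame i back to a fractional source position,
        # then cross-fade the two nearest source frames.
        t = i / stretch
        a = int(t)
        b = min(a + 1, len(frames) - 1)
        mix = t - a
        blended = (1 - mix) * frames[a] + mix * frames[b]
        out.write(blended.astype(np.uint8))
    out.release()

# ~1.5s of usable pull-back stretched toward 10s (10 / 1.5 is ~6.67x)
slow_motion("runway_pullback.mp4", "slowmo_10s.mp4")
```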
Two, the can fell apart instantly. Fonts? Melted. Shapes? Distorted. It's currently a common AI issue with products in video generation. In future tests, I'll try filming a short video of the can turning on a green screen and capturing a few photos of it from different angles to see if I can overlay those more effectively. For now, I pulled the can out of the background, ran Runway on the background only, and layered the can back in using After Effects so I could scale, rotate, and re-shadow it over time. It worked well enough, given the minimal movement, but I'm curious what future workflows might unlock.
The final After Effects composite
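For a rough sense of what that re-layering step involves (minus After Effects' keyframe easing and shadow work), here's a sketch that composites a transparent can cutout over each frame of the background-only clip, with a slight scale ramp so the can tracks the pull-back. The file names, centered placement, and 5% ramp are illustrative assumptions, not the project's actual values.

```python
# Layer a clean product cutout back over an AI-generated background
# clip so the can stays sharp while only the background animates.
# Assumes the cutout PNG (with alpha) is smaller than the video frame.
import cv2
import numpy as np

def composite_can(bg_path, can_png, dst_path):
    can = cv2.imread(can_png, cv2.IMREAD_UNCHANGED)  # BGRA cutout with alpha
    cap = cv2.VideoCapture(bg_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    for i in range(total):
        ok, frame = cap.read()
        if not ok:
            break
        # Shrink the can slightly over the clip so it tracks the
        # background's slow pull-back instead of sitting static.
        scale = 1.0 - 0.05 * (i / max(total - 1, 1))
        cw, ch = int(can.shape[1] * scale), int(can.shape[0] * scale)
        layer = cv2.resize(can, (cw, ch), interpolation=cv2.INTER_AREA)
        x, y = (w - cw) // 2, (h - ch) // 2  # keep the can centered
        # Standard alpha-over blend of the cutout onto the frame region.
        alpha = layer[:, :, 3:4].astype(np.float32) / 255.0
        roi = frame[y:y + ch, x:x + cw].astype(np.float32)
        frame[y:y + ch, x:x + cw] = (
            alpha * layer[:, :, :3] + (1.0 - alpha) * roi
        ).astype(np.uint8)
        out.write(frame)

    cap.release()
    out.release()

composite_can("background_only.mp4", "can_cutout.png", "final_composite.mp4")
```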
Of course, we also pushed through a concept with a realistic look to compare. As always, we created a bunch of extra pieces we could use to fill out the scene, replace things that didn't make sense, or blur and add back in to create a better sense of depth. It's the exact same process we would use shooting all the images in camera, just with digital propping.



The above images show some iterations of the realistic version. The below images show some examples of extra pieces made for compositing afterward.



The final realistic version
AI pushed things forward in ways traditional tools couldn't, even if it stumbled on a number of fronts, especially video. But the process revealed new workflows, expanded creative possibilities, and delivered results that otherwise would've required a far more complex build.