Hero product image of Pop Culture Lemon Ginger Soda in an origami paper world of 3D paper lemons and ginger on a blue background

Pop Culture Soda - Adding Creative Pop With AI

ROLES

Retouching | Photography | Creative | AI

We're popping off with AI Experiments in this case study.

The Pop Culture shoot was an experiment in pushing current AI tools. We like to see what's out there at the moment by actually using the available tech to develop a concept—ideally one without a real deadline or client, but that still sticks to the same technical and creative rigor.

Everything starts with a sketch—always with barely legible notes and indecipherable scribbled doodles that are more like reminders than directions. To be fair, I didn’t originally plan to share the sketch with anyone. That being said, I’m not sure I’d change the sketch; it’s just part of my process. So, you probably can’t read it, but you might see the base visual already there if you squint. The concept was: a can floating in a bubble, surrounded by lemon and ginger. Possibly an explosion or “popping,” playing off the brand colors and brand art—which reminded me of early Flash art, stop motion, paper cutout shapes, and playful illustrations. It also had this nice quality of balanced contrast: clean vector fonts and graphics mixed with looser, colorful, imperfect illustrations that felt like a surreal cartoon moment crashing into live action—or a charming buddy comedy where one friend wears a suit and the other wears flip-flops.

If this hadn’t been an AI experiment, I would’ve loved to build real construction paper sets and reassemble them in stop motion around the can. Still, we started similarly—with the idea of cutout illustrations on paper falling around the product. Enter Midjourney—except Midjourney wasn’t having it. While AI is surprisingly good at random mashups, it still pulls from what's already out there—and there just aren’t a ton of references for what we were asking. So it defaulted to things it did understand, a common problem we keep running into with AI in unique creative scenarios. Getting weird but specific is where you start to see the breaking points.

That did lead to a happy accident, though: an image of a lemon made out of paper—not an illustration on paper, but one folded like origami. That inspired a pivot: making the whole scene from 3D paper elements, rather than 2D illustrations in 3D space. The bubble around the can became a new creative challenge. How do you make a bubble you can see inside feel like it belongs in an origami paper world?

The first image is what led to trying out the origami style. The second image is an example of Midjourney getting weird, but not in a useful way. The third image is one of the first that landed in the area we started to like.

In the end, we got something close to what was floating in our heads and used variations to chop up in Photoshop and piece back together to better direct the background. The bubble had to be larger to fit the can in proportion to the lemon and ginger—and also act as a stronger focal point. Some elements didn’t make sense and needed to be removed, swapped, or scaled. The paper bubble especially needed more presence, so we used Topaz Labs to upscale it with AI-assisted enhancement.

This shows the base image used, the image after editing in Photoshop to make the paper bubble larger, and the Pop Culture can as shot in studio. We shot the can for a previous quick mock-up test, so in this case we flipped the background to match the lighting on the can.

Once we had the plate, it became a more standard composite. We shot the can in studio, matched the lighting and perspective, retouched it, and blended it into the scene. In the end, we created two origami versions and a third “realistic” non-paper version.

Final version 2 of the paper theme

From there, we brought it into Runway to generate a video clip from the hero image. Considering the effort that goes into a single hero frame, turning it into a short video is a huge value add—something that would’ve otherwise required planning from the start, using a real set or full CGI.

This process surfaced two problems:

One, the paper lemons turned into real lemons almost immediately. AI models are trained on what they’ve seen—and there aren’t many floating origami lemons in slow motion on an even blue background, with a shredded yellow paper ball in the center. We managed to get about 1.5 seconds of usable pull-back before major distortion kicked in, then used Topaz to stretch that into a 10-second slow-mo.

One of the initial Runway outputs. I left it sideways because this was the only ratio Runway would work in at the time. It's interesting to see how things progress and how to work around the restrictions. The can coming in at the end shows it does understand the scene in a horizontal perspective, so it's not ideal to have to flip it. It does get us more pixels to work with, though, which was more important in this context.

Two, the can fell apart instantly. Fonts? Melted. Shapes? Distorted. It’s currently a common AI issue with products in video generation. In future tests, I’ll try filming a short video of the can turning on a green screen and capturing a few photos of it from different angles to see if I can overlay those more effectively. For now, I pulled the can out of the background, ran Runway on the background only, and layered the can back in using After Effects so I could scale, rotate, and re-shadow it over time. It worked well enough, given the minimal movement, but I’m curious what future workflows might unlock.
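For anyone curious what that re-compositing step looks like in rough terms, here's a minimal Python/OpenCV sketch of the same idea: the generated background plays underneath while the cut-out can gets a tiny scale and rotation drift on top. The file names and the motion values are invented placeholders; the real pass was done by hand in After Effects with proper shadows and masking.

```python
# Rough sketch: layer a cut-out can (PNG with alpha) back over an
# AI-generated background clip, with a slight scale/rotate drift over time.
# File names and motion values are placeholders, not the real project files.
import cv2
import numpy as np

bg = cv2.VideoCapture("runway_background.mp4")             # hypothetical background clip
can = cv2.imread("can_cutout.png", cv2.IMREAD_UNCHANGED)   # BGRA cut-out, assumed smaller than the frame

fps = bg.get(cv2.CAP_PROP_FPS)
w = int(bg.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(bg.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("composite.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

frame_idx = 0
while True:
    ok, frame = bg.read()
    if not ok:
        break

    # Tiny drift so the can doesn't feel frozen: ~1 degree and ~2% scale over an assumed 10 s clip.
    t = frame_idx / max(fps * 10, 1)
    angle = 1.0 * t
    scale = 1.0 + 0.02 * t

    ch, cw = can.shape[:2]
    M = cv2.getRotationMatrix2D((cw / 2, ch / 2), angle, scale)
    warped = cv2.warpAffine(can, M, (cw, ch), flags=cv2.INTER_LINEAR,
                            borderMode=cv2.BORDER_CONSTANT, borderValue=(0, 0, 0, 0))

    # Alpha-blend the warped can onto the centre of the background frame.
    x0, y0 = (w - cw) // 2, (h - ch) // 2
    roi = frame[y0:y0 + ch, x0:x0 + cw].astype(np.float32)
    rgb = warped[:, :, :3].astype(np.float32)
    alpha = warped[:, :, 3:4].astype(np.float32) / 255.0
    frame[y0:y0 + ch, x0:x0 + cw] = (rgb * alpha + roi * (1 - alpha)).astype(np.uint8)

    out.write(frame)
    frame_idx += 1

bg.release()
out.release()
```

What this can't do, of course, is the re-shadowing and hand masking that make the comp believable; that's where After Effects still earns its keep.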

The final After Effects composite

Of course, we also pushed through a concept with a realistic look to compare. As always, we created a bunch of extra pieces we could use to fill out the scene, replace things that didn't make sense, or add back in blurred out to create a better sense of depth. It's the exact same process we would use shooting all the images in camera, just digitally propping pieces.

The above images show some iterations of the realistic version. The below images show some examples of extra pieces made for compositing afterward.

The final realistic version

AI pushed things forward in ways traditional tools couldn’t, even if it stumbled on a number of fronts, especially video. But the process revealed new workflows, expanded creative possibilities, and delivered more value than would’ve been possible without a far more complex build.

UPDATE: July 2025 - Moonvalley

Sometimes I re-run old projects with new models. It's a great way to see how things have improved, changed, or stayed the same. In this case I was intrigued by Moonvalley: it's trained on fully licensed cinematic video and offers controls like camera movement or movement of objects in the video (and other things less interesting for my needs). This type of control is where we see a continuing split between professional tools and more consumer-level AI. But how well does it actually work?

Moonvalley camera control through a projected depth map

The camera control interface was pretty cool. It makes a depth map and then projects it into a 3D space you can rotate around and pan into. You then create keyframes that it interpolates between to determine the motion. You can see I didn't do anything too crazy: I wanted to push into the scene to the can, rotate around it a little, then pull back out. The rotation when zoomed in was mostly just to add another motion step and see how Moonvalley interpreted that.
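As a loose mental model of those keyframes (this is not Moonvalley's actual code, just the general idea): you hand-set a few camera poses at specific frames, and the tool fills in every frame between them. The sketch below fakes that with simple linear interpolation over an invented dolly/pan path matching the push-in, small rotation, and pull-out described above.

```python
# Toy illustration of keyframe interpolation for a camera path:
# a few hand-set poses (dolly distance, pan angle), linearly filled in per frame.
# Values are invented for illustration; the real tool interpolates full 3D poses.
import numpy as np

# (frame, dolly_z, pan_degrees): push in to the can, rotate a little, pull back out
keyframes = [
    (0,   0.0,  0.0),
    (48,  2.0,  0.0),   # pushed in by frame 48
    (72,  2.0, 15.0),   # small rotation while close
    (120, 0.0,  0.0),   # pulled back out by frame 120
]

frames = np.arange(0, 121)
key_f = np.array([k[0] for k in keyframes], dtype=float)
key_z = np.array([k[1] for k in keyframes], dtype=float)
key_pan = np.array([k[2] for k in keyframes], dtype=float)

dolly = np.interp(frames, key_f, key_z)    # per-frame dolly position
pan = np.interp(frames, key_f, key_pan)    # per-frame pan angle

for f in (0, 24, 48, 72, 96, 120):
    print(f"frame {f:3d}: dolly={dolly[f]:.2f}, pan={pan[f]:.1f} deg")
```

The catch, as the outputs below show, is that the generation doesn't always respect the path it was handed.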

Two outputs from the same Moonvalley prompt and same camera control seen above

The prompt starts with Moonvalley giving you its description of the image you upload. It's great to see what it thinks it sees, and you can then build off it quickly or tweak it. It also has negative prompts, which can help a lot. In this case the issue was that it didn't see anything in the image as paper (typical AI trying to make them into real lemons again), so we added that into the description.

It did a much better job than previous models of keeping the origami objects as paper and not throwing in random real lemons. Things held their shape for a lot longer, and it did a significantly better job of keeping the logo and large text intact than previous models have.

Image to video has notably garbled all fonts from the start, and this still does it with the smaller fonts. AI changes everything every time it's run, so it's trying to change the small fonts just like it's trying to move the origami paper lemon across the field of view. Small details change more, large ones less so. It's just a more complicated version of predictive text; it's why ChatGPT won't just change one part of what you wrote: it starts from scratch each time, changing other words for seemingly no reason. It usually just comes up with most of the same things again.

The most obvious issue with these is that the camera movement is wildly different between them, even when using the exact same camera path input. The second video sort of does what it was supposed to, but not quite. So I decided to try the "real" bubble, lemons, and ginger version to see if it would do better, since it would understand the objects in the scene better.

Two runs of the realistic bubble image through Moonvalley, with the same camera movement inputs as before

This image certainly allows Moonvalley to do a better job of not morphing the individual objects and just moving them in space. Unfortunately, it has a really hard time figuring out where to focus, and if the focus isn't on the can it's pretty much unusable. Also, while the objects all move individually really well, there are definitely issues with it not understanding where they sit in 3D space in relation to one another. Objects that seemed to be further back suddenly appear close to an object that's in front. Nothing morphing through the can or anything that bad, but very clearly unnerving motion shifts.

Final version: one of the Moonvalley outputs, clipped to grab a workable section, expanded longer with slow-mo in Topaz, and made into a loop in Premiere

In the end I took a section of a generation that worked well, expanded it by creating new frames using Topaz, and then brought it into Premiere to make it a loop. It stays a "can" with the logo/text intact just long enough, with just a touch of motion. It's definitely an improvement over the previous Runway attempt in the sense that some products might not need to be composited back in. This specific can also has the benefit of being an even color at the edges, so all the branding is visible and it can rotate somewhat without needing to make up new designs or text that doesn't exist; it just becomes more blue can.
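If you wanted to rough out that stretch-and-loop step without Topaz or Premiere, a crude version looks something like the sketch below: blend an extra frame between each pair for a fake 2x slow motion, then play the clip forward and backward so it loops. To be clear, Topaz synthesizes genuinely new frames with motion-aware interpolation, so this naive blend is only a stand-in, and the file names are placeholders.

```python
# Crude stand-in for the Topaz slow-mo + Premiere loop step:
# blend a new frame between each consecutive pair (naive 2x slow motion,
# not true motion interpolation), then append the reversed frames for a loop.
import cv2

cap = cv2.VideoCapture("moonvalley_clip.mp4")   # the short usable section (placeholder name)
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

# Naive 2x slow motion: insert a 50/50 blend between consecutive frames.
slow = []
for a, b in zip(frames, frames[1:]):
    slow.append(a)
    slow.append(cv2.addWeighted(a, 0.5, b, 0.5, 0))
slow.append(frames[-1])

# Ping-pong loop: forward, then backward (skipping the duplicated end frames).
looped = slow + slow[-2:0:-1]

h, w = frames[0].shape[:2]
out = cv2.VideoWriter("looped.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 24, (w, h))  # 24 fps assumed
for f in looped:
    out.write(f)
out.release()
```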

What I didn't do is re-run it through Runway, Veo3, Kling, etc. That might have worked better for some of the issues, and perhaps we'll test another down the line. This isn't meant to be comprehensive; it was mostly about trying the Moonvalley camera controls. And in that area I'm not convinced it's working great at this time, especially at $1.50 a generation, when you'd clearly need many generation attempts. I do appreciate the intuitive controls, and it might work better on more basic images. To be fair, it did just come out, so I'll definitely still keep my eye on it.

Image to video is still a cool use of AI; it adds more value to a single hero image by potentially justifying more resources than a single still image may have had before. It's also a use that isn't taking over an existing workflow pipeline. This is essentially a new capability, which, personally, is more compelling than simply using a different tool to do the same thing.

However, despite the endless YouTube clickbait, using AI to get specific, unique, quality creative is quite time consuming and not inherently cheap. Most problematically, you can't guarantee you'll EVER get to a certain creative point, or how long it will take, because even with more control it's just endless trial and error instead of steady improvement. That's a huge problem for high-end creative work with lots of stakeholders and specific demands.

With correct expectations set though, it can certainly be leveraged to add value to existing work, and in that sense it's exciting. What do you think?

Have a creative idea?

Also like new socks?

Just want to say hi?

Reach out, let's talk!

White New Socks Creative Logo
