By: Logan Kilpatrick
How to build with Nano Banana for free in Google AI Studio and the Gemini API

Two weeks ago, we launched Nano Banana (aka Gemini 2.5 Flash Image) and it has taken the world by storm. As of the end of September, more than 500,000,000 images have already been edited in the Gemini app alone, with hundreds of millions more across other surfaces. This model, which excels at targeted edits, can be used for some pretty wild use cases. In this blog, we will explore five simple ideas for how you can start using Nano Banana right now to solve actual problems people have. We will be using https://aistudio.google.com, which is completely free, along with the Gemini API.
As always, you are reading this on my personal blog, so you guessed it, these are my personal opinions : )
AI powered interior design and editing with Nano Banana
Personally, this is one of the coolest use cases. I have always struggled to imagine what is possible in a room, but this model makes it super easy. In this example, which you can follow along with in AI Studio, we take a product image plus a scene, and let the user drag the image of the product into the scene, letting the Nano Banana model fuse them together into a single image. If you want to see the prompt used, which was not that complex, you can click the “code” tab, open “geminiService.ts”, and scroll down to line 300. This is a great example of Gemini’s native spatial understanding capabilities coming into play, something no other image model has.
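If you want to wire this up through the Gemini API directly rather than remixing the app, the core of the setup is a single generateContent request that carries both images plus a short instruction. Here is a minimal sketch against the REST endpoint; the model id, helper names, and prompt wording are my own assumptions for illustration, not the demo’s actual code:

```python
import base64

MODEL = "gemini-2.5-flash-image"  # assumption: the Nano Banana model id
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent"
)

def image_part(image_bytes: bytes, mime_type: str = "image/png") -> dict:
    """Wrap raw image bytes as an inline_data part for generateContent."""
    return {
        "inline_data": {
            "mime_type": mime_type,
            "data": base64.b64encode(image_bytes).decode("ascii"),
        }
    }

def build_fusion_request(product_png: bytes, scene_png: bytes,
                         instruction: str) -> dict:
    """One request carrying both images plus the edit instruction."""
    return {
        "contents": [{
            "parts": [
                image_part(product_png),
                image_part(scene_png),
                {"text": instruction},
            ]
        }]
    }

payload = build_fusion_request(
    b"\x89PNG...",  # product image bytes (placeholder)
    b"\x89PNG...",  # scene image bytes (placeholder)
    "Place the product from the first image into the scene from the "
    "second image, matching lighting and perspective.",
)
```

POST the JSON payload to the endpoint with your API key in an `x-goog-api-key` header; the edited image comes back as an `inline_data` part in the response.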

If you want to riff on this example from Google AI Studio, just use the chat bar on the left to prompt the edits you want, and the model will rebuild the app to make that experience possible (this applies to all the other examples we look at as well!).

Character consistency and editing with Nano Banana
So far, I think this has been the use case folks are most awe-struck by, mostly because you can easily upload a picture of yourself and see it in action. The Nano Banana model is exceptionally good at character consistency, meaning you can make targeted edits without distorting the key features of the original character. We made a free example of this called Past Forward in Google AI Studio, where you can visualize what you would look like through the past five decades; it is pretty funny.
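If you want to sketch a Past Forward-style flow yourself through the API, the idea is simply one edit request per decade, each reusing the same source photo so character consistency keeps the subject recognizable. The model id and prompt wording below are my own illustrative guesses, not the example app’s actual code:

```python
import base64

MODEL = "gemini-2.5-flash-image"  # assumption: the Nano Banana model id
DECADES = ["1970s", "1980s", "1990s", "2000s", "2010s"]

def decade_request(photo_png: bytes, decade: str) -> dict:
    """One generateContent payload: the same source photo plus a
    per-decade edit instruction. Character consistency does the rest."""
    return {
        "contents": [{
            "parts": [
                {"inline_data": {
                    "mime_type": "image/png",
                    "data": base64.b64encode(photo_png).decode("ascii"),
                }},
                {"text": (
                    f"Restyle this photo as if it were taken in the {decade}: "
                    "period-accurate clothing, hairstyle, and film grain, "
                    "but keep the same face and identity."
                )},
            ]
        }]
    }

# One request per decade, all sharing the same source photo.
requests_by_decade = {d: decade_request(b"\x89PNG...", d) for d in DECADES}
```

Sending each payload to the generateContent endpoint yields one restyled image per decade; swapping the decade list for haircut descriptions gives you the haircut app variant discussed below.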

The applications of this world-class character consistency are endless. I have already seen apps going viral that help people visualize what they would look like with different haircuts, as an example. And as I showed before, the cool part about this experience in Google AI Studio is that we can actually build that on the fly. Let me try taking the example above and using the prompt “okay now take the same idea we have here with past forward but help me visualize 8 different haircut styles, take into account common men / women styles”. This will take around 90 seconds (I am doing it live while I write this blog), so hopefully it all works and turns out okay!

Okay wow, that is almost exactly what I was looking for (though I am not sure any of these styles are speaking to me). The level of complexity needed to build these types of products continues to go down; it is so cool to see! You really are one prompt away from a great idea these days.
Creative editing with Nano Banana
When I saw this example, I immediately took an image of my childhood home and sent it to my parents; their response was so positive, they loved it. The model’s ability to capture different stylistic treatments, in this case watercolor, is extremely impressive, while still retaining the DNA of the original picture (that is my home, now some AI derivative of it).

In this example, we use the Google Maps API to capture satellite imagery of a location and edit the image into a watercolor style. You can try this yourself in Google AI Studio if you want; it is a lot of fun to play around with! I also imagine there are lots of cool and unique businesses to be created with something like this (for example, an app that lets you retrace a path through satellite images and do something creative with all of them).
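As a rough sketch of the pipeline: fetch a satellite tile with the Maps Static API, then hand that image to Nano Banana with a watercolor instruction. The zoom/size choices and the prompt below are my own assumptions, not the demo’s exact values:

```python
from urllib.parse import urlencode

STATIC_MAPS = "https://maps.googleapis.com/maps/api/staticmap"

def satellite_url(lat: float, lng: float, api_key: str,
                  zoom: int = 18, size: str = "640x640") -> str:
    """Build a Maps Static API request for a satellite tile of a location."""
    params = {
        "center": f"{lat},{lng}",
        "zoom": zoom,
        "size": size,
        "maptype": "satellite",
        "key": api_key,
    }
    return f"{STATIC_MAPS}?{urlencode(params)}"

# Assumed edit instruction; tweak to taste.
WATERCOLOR_PROMPT = (
    "Repaint this satellite image as a loose watercolor: soft washes and "
    "visible paper texture, but keep the layout of roads and buildings."
)
# Step 1: GET satellite_url(...) to download the tile.
# Step 2: send the tile plus WATERCOLOR_PROMPT to the Nano Banana model
#         as an image-editing request.
```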
Virtual “try on” experiences with Nano Banana
One of the biggest questions when someone shops for clothing is “will this look good on me”. For the last 10 years there has been a huge amount of investment and innovation trying to bridge this gap. With Nano Banana, it now “just works”. You can take an image of yourself and a clothing item you want to imagine yourself in, and simply fuse the two together. From a technical POV, this is a near identical setup to the first example I showed above with home remodeling with AI.

The reason I wanted to include this example is that it is widely applicable. Everyone selling a physical product should be using this type of setup to showcase that product in different contexts. You can play around with the try-it-on example app we created in Google AI Studio. You can also imagine ending up with an AI avatar that does something like scrape your email and show you an inventory of all your personal clothes at home, which would be a great app to build : )!
Nano Banana for Video Generation
One of the last use cases I will talk about (even though there are hundreds more) is video generation, specifically with Veo 3 (which we just dropped the price of by ~50%). One of the big challenges of video generation today is that AI-generated videos are only 8 seconds long, so you need to stitch together multiple 8-second videos to create anything useful. Further, one of the most common failure modes is that character consistency between 8-second videos ends up not being good enough and subtly changes in a way that breaks a longer-form video. With Nano Banana, however, you can lean on the model’s character consistency strength to ensure you have a good starting frame for every video you make.

In the example above, we are using tldraw’s canvas which lets you chain together different workflows and do AI explorations visually, including with our models like Nano Banana and Veo 3. You can try this example for free in Google AI Studio (but note that Veo does require a paid API key).
The tldraw canvas is very powerful, and you can put together pretty much anything, but it takes a little while to grok what is going on if you have never used it before. What helped me was putting an image into the main chat UI, selecting the dropdown on the input field, choosing “Generate image”, and then asking for a targeted edit based on the image I provided.
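The stitching logic itself is simple enough to sketch independently of any SDK. Below, generate_image and generate_video are placeholders for calls to Nano Banana and Veo 3 respectively (hypothetical function names, not real API calls); the point is the loop that re-anchors each clip’s opening frame with the image model so characters stay consistent across 8-second segments:

```python
def stitch_clips(scene_prompts, generate_image, generate_video, first_frame):
    """Chain short clips into a longer video.

    generate_image(frame, prompt) -> frame: an image-model edit (Nano Banana)
    generate_video(frame, prompt) -> list of frames: one short clip (Veo 3)
    Both are injected so this sketch stays runnable without API keys.
    """
    clips = []
    frame = first_frame
    for prompt in scene_prompts:
        # Re-anchor the character before each clip: the image model edits
        # the previous clip's last frame into the next opening shot.
        frame = generate_image(frame, f"Set up the opening shot: {prompt}")
        clip = generate_video(frame, prompt)
        clips.append(clip)
        frame = clip[-1]  # last frame of this clip seeds the next one
    return clips
```

Because each clip starts from a frame the image model derived from the previous clip’s ending, drift in character appearance between segments is much smaller than generating each 8-second clip from scratch.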
Overall, there is so much to be built with Nano Banana. I have already seen thousands of new startups spawning around these very simple ideas, and some even going after the most ambitious AI image problems you can imagine. To me, what has made this so much fun is being able to vibe code it all in AI Studio. I am of course biased since I work on AI Studio, but being able to play with or build apps around new frontier AI capabilities and have something up and running in ~90 seconds for free has never happened before. It is amazing to see the trend of democratizing access to build with this technology. Happy building, and please send over any feedback about AI Studio’s build mode or the Nano Banana model!
5 things to build with Google’s new Nano Banana image editing & generation model was originally published in Around the Prompt on Medium, where people are continuing the conversation by highlighting and responding to this story.

