AI is transforming image generation and editing into a smooth workflow. Now, with a single prompt, you can ask your computer to create or edit an image. Google has launched its newest model for image generation and editing, “Nano Banana” – Gemini 2.5 Flash Image. It is a powerful tool that is changing how we think about image generation and manipulation, and it is something you will definitely want in your developer toolkit.
In this article, you will learn how to use Nano Banana for image generation with Gemini 2.5 Flash Image. So, let’s get started!
Table of contents
What is “Nano Banana”?
Nano Banana is Google DeepMind’s latest image generation and editing tool. Forget the formal jargon for a second. Imagine you have an incredibly talented, lightning-fast artist at your beck and call. You can describe anything to them – “an astronaut riding a horse on the moon” – and poof, it appears. Or you hand them a picture of your dog and say, “Put a hat on her head,” and they do it instantly, and the result still looks exactly like your dog.
That is essentially Nano Banana. It is a modern AI model from the Gemini family, engineered especially for fast, intelligent image generation and editing. It understands natural language instructions, which lets you bring complex visual ideas to life or make surgical changes to existing images with amazing ease.
Why “Nano Banana”?
Because it’s small (Flash!), packed with goodness, and it makes you feel like you have just unlocked a new layer of creative possibility. It is fast, effective, and incredibly versatile.
The superpowers you get:
Instant, precise edits: Want to change a background, alter a pose, or add a particular item? Just ask. Nano Banana understands and acts on it.
Character consistency: This is a big deal. If you are creating a story or a series of images, maintaining the appearance of a particular character or object is crucial. Nano Banana takes care of this, ensuring that your main character looks the same whether they are in the jungle or on the moon (see the short sketch after this list).
Visual mashups (multi-image fusion): Have several different visual elements you want to combine? It can blend them seamlessly into a single new image.
And a lot more!
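As a quick illustration of the character-consistency idea, here is a minimal sketch that reuses the image-to-image pattern covered later in this article. The file name pip_the_fox.png and the prompt are hypothetical, and the snippet assumes the model setup shown in the examples below.
from PIL import Image

# Hypothetical sketch: keep a character consistent by passing a previously
# generated character image back in as a reference (model setup as shown below).
base_character = Image.open("pip_the_fox.png")  # hypothetical existing character image
response = model.generate_content([
    "Show this exact same fox character, unchanged, standing on the moon.",
    base_character,
])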
Interested? Let’s get our hands dirty. But wait! To use Nano Banana, you have two ways to do it:
Using Google AI Studio: The quickest and easiest way to create or edit images is Google AI Studio. It is a web-based tool that gives you direct access to Gemini models without writing a line of code. It is by far the best place to start and experiment, and it is useful for developers and non-developers alike. There is no need to install libraries, manage API keys, or write any code.
Building with the Gemini API: This is the way to go if you want a more custom solution for your application. For any serious application – whether it’s a web app, a mobile app, or a backend service – you will need to integrate directly with the Gemini API. This is where the real power lies, as it allows you to automate tasks and create interactive experiences.
In this tutorial, you will see how we can use this tool in our applications using nothing but Python. So, let’s get started.
How to configure your project
Step 1: Get an API key from Google Gemini
The first step to using Nano Banana is to get an API key. Head over to Google AI Studio, click on “Get API key”, and create a new one by selecting a project from your existing Google Cloud projects.

Once you have created an API key, save it somewhere safe.
Step 2: Install the SDK and other dependencies
Open your terminal and run:
pip install google-generativeai pillow python-dotenv
We will use Pillow for easy image handling and python-dotenv to safely manage our API key.
Step 3: Set up your environment
It is very important to keep your API key out of your code for security. We will use environment variables, as is standard practice. So, create a file named .env in the root of your project and add your API key:
GEMINI_API_KEY="YOUR_API_KEY_HERE"
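Before moving on, you can confirm the key loads correctly with a tiny sanity check like the sketch below (not part of the original setup; the variable name GEMINI_API_KEY matches the .env file above).
import os
from dotenv import load_dotenv

# Quick sanity check that the .env file is being picked up.
load_dotenv()
if os.getenv("GEMINI_API_KEY"):
    print("API key loaded.")
else:
    print("API key missing - check your .env file.")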
Step 4: Image Generation and Editing
Example 1: Text to Image Generation
Text-to-image is like having an artist who can draw anything you describe. You simply write a prompt (a phrase or description), even a very detailed one, and the AI will produce a unique, high-quality image that matches it. This is perfect for bringing your most imaginative ideas to life with just a few words.
import os
import google.generativeai as genai
from PIL import Image
from io import BytesIO
from dotenv import load_dotenv
load_dotenv()
genai.configure(api_key=os.getenv("GEMINI_API_KEY"))
model = genai.GenerativeModel('gemini-2.5-flash-image-preview')
prompt = "A golden retriever puppy sitting in a field of daisies, bright and cheerful"
output_filename = "text_to_image_result.png"
def save_image_from_response(response, filename):
    """Helper function to save the image from the API response."""
    # The generated image arrives as inline binary data in the response parts.
    if response.candidates and response.candidates[0].content.parts:
        for part in response.candidates[0].content.parts:
            if part.inline_data:
                image_data = BytesIO(part.inline_data.data)
                img = Image.open(image_data)
                img.save(filename)
                print(f"Image successfully saved as {filename}")
                return filename
    print("No image data found in the response.")
    return None

def main():
    print(f"Generating image for prompt: '{prompt}'...")
    response = model.generate_content(prompt)
    save_image_from_response(response, output_filename)

if __name__ == "__main__":
    main()
Output:

The code used in this example handles everything needed to communicate with the Gemini API and save the image.
- First, we import the required libraries and load the API key from .env using load_dotenv(). This makes the key available so that we can authenticate with the Google service via genai.configure().
- The model we are using is gemini-2.5-flash-image-preview, designed for fast image generation.
- We define a prompt (“A golden retriever puppy...”) and a file name for saving the image.
- The helper function save_image_from_response(...) inspects the API response, extracts the raw image data, and saves it as a PNG file.
- In main(), we call the model with the prompt, then pass the response to the helper function to save the result.
- The if __name__ == "__main__": block ensures that the script runs only when it is executed directly, not when it is imported.
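One practical note: if the model declines a request or returns text instead of an image, save_image_from_response() will report that no image data was found. An optional helper like the sketch below (not part of the original script, but assuming the same response structure used above) can print any text parts so you can see what the model said.
def debug_text_parts(response):
    # Print any text parts in the response, e.g. a refusal or safety message.
    if response.candidates and response.candidates[0].content.parts:
        for part in response.candidates[0].content.parts:
            if getattr(part, "text", None):
                print("Model said:", part.text)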
Example 2: Image to Image Editing
Image-to-image is like having a photo editor. Instead of starting from scratch, you upload an existing picture and describe how to change it. For example, you can request background removal, add new items, or even apply a complete artistic style change.
import os
import google.generativeai as genai
from PIL import Image
from io import BytesIO
from dotenv import load_dotenv
load_dotenv()
genai.configure(api_key=os.getenv("GEMINI_API_KEY"))
model = genai.GenerativeModel('gemini-2.5-flash-image-preview')
input_image_path = "input_dog.png"
prompt = "Make the dog wear a small wizard hat and spectacles."
output_filename = "edited_image_result.png"
def save_image_from_response(response, filename):
    """Helper function to save the image from the API response."""
    if response.candidates and response.candidates[0].content.parts:
        for part in response.candidates[0].content.parts:
            if part.inline_data:
                image_data = BytesIO(part.inline_data.data)
                img = Image.open(image_data)
                img.save(filename)
                print(f"Image successfully saved as {filename}")
                return filename
    print("No image data found in the response.")
    return None

def main():
    print(f"Editing image '{input_image_path}' with prompt: '{prompt}'...")
    try:
        # Open the local image and send it together with the text prompt.
        img_to_edit = Image.open(input_image_path)
        response = model.generate_content([prompt, img_to_edit])
        save_image_from_response(response, output_filename)
    except FileNotFoundError:
        print(f"Error: The file '{input_image_path}' was not found.")

if __name__ == "__main__":
    main()
Output:

This code is very similar to the first example, but there are a few key differences in the core logic.
- input_image_path: This variable now holds the path to the photo you want to edit.
- Image.open(input_image_path): This line uses the Pillow library to open your local image file.
- model.generate_content([prompt, img_to_edit]): This is the most important part. Unlike before, we now send a list to the generate_content function that includes both the text prompt and the image object. This tells the API to use the provided image as the starting point for its generation.
- try...except block: Here we handle errors. The script tries to open the image file, and if it fails (because the file is not there), the except FileNotFoundError branch prints a friendly message to the user instead of crashing.
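If you want the script to be a bit more robust, you could also catch API-side failures (network errors, quota limits, and so on), not just a missing file. This is an optional hardening sketch, not part of the original example:
def main():
    print(f"Editing image '{input_image_path}' with prompt: '{prompt}'...")
    try:
        img_to_edit = Image.open(input_image_path)
        response = model.generate_content([prompt, img_to_edit])
        save_image_from_response(response, output_filename)
    except FileNotFoundError:
        print(f"Error: The file '{input_image_path}' was not found.")
    except Exception as e:
        # Catch-all for API/network errors; in production you might match
        # the SDK's specific exception types instead.
        print(f"Request failed: {e}")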
Example 3: Multi-Image Fusion
Multi-image fusion means combining two or more images or objects. Upload multiple photos, and the AI blends them seamlessly into a single cohesive picture. It is a great way to create new scenes, combine people with backgrounds, or build detailed product shots.
import os
import google.generativeai as genai
from PIL import Image
from io import BytesIO
from dotenv import load_dotenv
load_dotenv()
genai.configure(api_key=os.getenv("GEMINI_API_KEY"))
model = genai.GenerativeModel('gemini-2.5-flash-image-preview')
image1_path = "dog_image.png"
image2_path = "cap_image.png"
prompt = "Make the dog from the first image wear the cap from the second image. The cap should fit realistically on the dog's head."
output_filename = "dog_with_cap_result.png"
def save_image_from_response(response, filename):
    """Helper function to save the image from the API response."""
    if response.candidates and response.candidates[0].content.parts:
        for part in response.candidates[0].content.parts:
            if part.inline_data:
                image_data = BytesIO(part.inline_data.data)
                img = Image.open(image_data)
                img.save(filename)
                print(f"Image successfully saved as {filename}")
                return filename
    print("No image data found in the response.")
    return None

def main():
    print(f"Fusing images '{image1_path}' and '{image2_path}'...")
    try:
        img1 = Image.open(image1_path)
        img2 = Image.open(image2_path)
        # Pass the prompt together with both images so the model can combine them.
        response = model.generate_content([prompt, img1, img2])
        save_image_from_response(response, output_filename)
    except FileNotFoundError:
        print("Error: One or both image files were not found.")

if __name__ == "__main__":
    main()
Output:

The logic of this code is an extension of the image-to-image example.
- image1_path and image2_path: These variables hold the paths of the two images you want to fuse.
- model.generate_content([prompt, img1, img2]): Here, the list passed to the generate_content function contains three items: the text prompt and both image objects. This tells the AI to use the prompt to combine elements from both images into a single output.
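The same pattern should extend to additional reference images by appending more PIL objects to the list, subject to the model’s input limits (an assumption worth checking against the current API documentation). For example, continuing the script above:
# Hypothetical extension: fuse three reference images in a single request.
img3 = Image.open("background_image.png")  # hypothetical third image
response = model.generate_content([prompt, img1, img2, img3])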
Example 4: Image Restoration
This feature can restore old, faded, or damaged images. Upload a photo and ask Gemini to restore it. This includes sharpening low-quality images, colorizing old black-and-white photos, and enhancing textures, making your memories look new again.
import os
import google.generativeai as genai
from PIL import Image
from io import BytesIO
from dotenv import load_dotenv
load_dotenv()
genai.configure(api_key=os.getenv("GEMINI_API_KEY"))
model = genai.GenerativeModel('gemini-2.5-flash-image-preview')
input_image_path = "old_photo.png"
prompt = "Restore this old, faded photograph. Sharpen the details, remove any scratches or damage, and enhance the colors to make it look like a new, high-quality photo."
output_filename = "restored_image_result.png"
def save_image_from_response(response, filename):
    """Helper function to save the image from the API response."""
    if response.candidates and response.candidates[0].content.parts:
        for part in response.candidates[0].content.parts:
            if part.inline_data:
                image_data = BytesIO(part.inline_data.data)
                img = Image.open(image_data)
                img.save(filename)
                print(f"Image successfully saved as {filename}")
                return filename
    print("No image data found in the response.")
    return None

def main():
    print(f"Attempting to restore image: '{input_image_path}'...")
    try:
        old_photo = Image.open(input_image_path)
        response = model.generate_content([prompt, old_photo])
        save_image_from_response(response, output_filename)
    except FileNotFoundError:
        print(f"Error: The file '{input_image_path}' was not found.")

if __name__ == "__main__":
    main()
Output:

The structure here is identical to the image-to-image editing example; from a technical point of view, image restoration is simply a form of image editing.
- prompt: This is where the magic happens. The text prompt tells the model exactly what to do with the image, spelling out restoration steps such as “sharpen the details,” “remove the scratches,” and “enhance the colors.” The model’s intelligence allows it to interpret these abstract instructions and apply them to the visual data, giving you a cleaner, more realistic update of your old photo.
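Because restoration is prompt-driven, you can swap in more targeted instructions for specific tasks. For example, a colorization-focused prompt (a hypothetical variant, dropped into the same script) might look like this:
# Hypothetical prompt variant for colorizing a black-and-white photo.
prompt = (
    "Colorize this black-and-white photograph with natural, realistic colors. "
    "Keep every face, pose, and background detail exactly as in the original."
)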
Beyond the basics: What else can you do?
This is just the tip of the iceberg! Nano Banana is incredibly versatile. Here are some ideas for where you can take your projects:
Batch processing: Automatically generate images from a list of prompts (see the sketch after this list).
Creative assets: Design icons, backgrounds, or character sprites for games or apps directly from your scripts.
Data pipelines: Integrate Nano Banana into a data pipeline to programmatically edit or generate images based on data inputs.
AI art gallery: Build a backend service that lets users submit prompts and receive images.
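As a starting point for the batch-processing idea, here is a minimal sketch that reuses the model and the save_image_from_response helper from the examples above; the prompt list and file names are made up for illustration.
# Minimal batch-processing sketch, reusing model and save_image_from_response
# from the examples above. Prompts and file names are illustrative.
prompts = [
    "A lighthouse at sunset, watercolor style",
    "A cozy reading nook with a sleeping cat",
]
for i, p in enumerate(prompts):
    response = model.generate_content(p)
    save_image_from_response(response, f"batch_result_{i}.png")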
Wrapping up
“Nano Banana” (Gemini 2.5 Flash Image) is not just a cool tech demo. It is a practical, powerful tool for developers and creators alike. With just a few lines of code, you can tap into its capabilities and bring your visual ideas to life. It is easy to get started, experiment, and integrate this visual magic into your projects.
If you enjoyed this article and want to discuss AI development, LLMs, or software development, feel free to reach out to me on X/Twitter or LinkedIn, or check out my portfolio on my Blog. I regularly share insights about AI, development, technical writing, and more.
Happy coding, and may your creations be as vibrant as a fresh banana field!