

Photo by Author | Gemini (Nano Banana self-portrait)
Introduction
Image generation with generative AI has become a widely used tool for both individuals and businesses, letting them produce the style they want instantly, without any design skills. In essence, these tools accelerate tasks that would otherwise take considerable time, completing them in seconds.
As the technology matured and competition grew, many modern image generation products were released, such as Stable Diffusion, Midjourney, DALL-E, Imagen, and more. Each offers unique benefits to its users. Recently, however, Google made a significant impact on the image generation landscape with the release of Gemini 2.5 Flash Image (or Nano Banana).
Nano Banana is Google's latest image generation and editing model, featuring realistic image creation, multi-image blending, consistent character identity, targeted prompt-based edits, and broad accessibility. The model offers far more control than Google's previous models or its competitors.
This article will explore Nano Banana's image generation and editing capabilities. We will demonstrate these features using the Google AI Studio platform and the Gemini API in a Python environment.
Let’s get into it.
Testing the Nano Banana Model
To follow this tutorial, you will need to register for a Google account and sign in to Google AI Studio. You will also need to get an API key to use the Gemini API, which requires a paid plan, as a free tier is not available.
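Rather than hard-coding the key into your scripts, a common pattern is to read it from an environment variable. The variable name `GEMINI_API_KEY` below is just a convention for this sketch, not something the SDK requires:

```python
import os

def load_api_key(var_name: str = "GEMINI_API_KEY") -> str:
    """Read the Gemini API key from an environment variable,
    failing loudly if it has not been set."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"Set the {var_name} environment variable to your Gemini API key."
        )
    return key
```

You can then pass `load_api_key()` to `genai.Client(api_key=...)` instead of pasting the key into the file.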
If you prefer to use the API with Python, make sure to install the Google Generative AI library with the following command:
pip install google-genai
Once your account is set up, let’s find out how to use the Nano Banana model.
First, go to Google AI Studio and select the gemini-2.5-flash-image-preview model, which is the Nano Banana model we will use.


With the model selected, you can start a new chat to generate an image from a prompt. As Google has stated, the fundamental principle for achieving the best results is to describe the scene, not just list keywords. In other words, describing the image you envision in full sentences usually produces better results.
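One way to internalize the "describe the scene" principle is to build prompts from descriptive building blocks rather than keyword lists. The helper below is purely an illustrative convention of mine, not part of any API:

```python
def scene_prompt(subject: str, setting: str, lighting: str,
                 lens: str, mood: str) -> str:
    """Compose a descriptive scene prompt from its building blocks.
    The field names are just one convenient convention, not an API."""
    return (
        f"A photorealistic image of {subject}, {setting}. "
        f"{lighting}. Captured {lens}. The overall mood is {mood}."
    )
```

Filling in each field forces you to write full sentences about subject, setting, light, lens, and mood instead of a comma-separated keyword list.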
In the AI Studio chat interface, you will see a prompt box like the one below where you can enter your prompt.


We will use the following prompt to generate a photorealistic image for our example.
A photorealistic close-up portrait of an Indonesian batik artisan, hands stained with wax, tracing a flowing motif on indigo cloth with a canting pen. She works at a wooden table in a breezy veranda; folded textiles and dye vats blur behind her. Late-morning window light rakes across the fabric, revealing fine wax lines and the grain of the teak. Captured on an 85 mm at f/2 for gentle separation and creamy bokeh. The overall mood is focused, tactile, and proud.
The generated image is shown below:


As you can see, the generated image is realistic and follows the prompt faithfully. If you prefer a Python implementation, you can use the following code to generate an image:
from google import genai
from PIL import Image
from io import BytesIO
from IPython.display import display

# Replace 'YOUR-API-KEY' with your actual API key
api_key = 'YOUR-API-KEY'
client = genai.Client(api_key=api_key)

prompt = "A photorealistic close-up portrait of an Indonesian batik artisan, hands stained with wax, tracing a flowing motif on indigo cloth with a canting pen. She works at a wooden table in a breezy veranda; folded textiles and dye vats blur behind her. Late-morning window light rakes across the fabric, revealing fine wax lines and the grain of the teak. Captured on an 85 mm at f/2 for gentle separation and creamy bokeh. The overall mood is focused, tactile, and proud."

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=prompt,
)

# Collect the raw bytes of every image part in the response
image_parts = [
    part.inline_data.data
    for part in response.candidates[0].content.parts
    if part.inline_data
]

if image_parts:
    image = Image.open(BytesIO(image_parts[0]))
    # image.save('your_image.png')
    display(image)

If you provide your API key and your desired prompt, the code above will generate and display the image.
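The image-extraction logic is worth understanding on its own: the response holds a list of candidates, each with content parts, and only some parts carry inline image bytes. Below is a minimal sketch of that logic, exercised against stand-in objects that merely mimic the shape of the SDK response (an assumption for illustration, not the real classes):

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in classes mimicking the response shape: parts may carry either
# text or inline image data.
@dataclass
class InlineData:
    data: bytes

@dataclass
class Part:
    inline_data: Optional[InlineData] = None
    text: Optional[str] = None

def extract_image_bytes(parts) -> list:
    """Collect raw image bytes from every part that carries inline data,
    skipping text-only parts."""
    return [p.inline_data.data for p in parts if p.inline_data]
```

With a real response, you would call `extract_image_bytes(response.candidates[0].content.parts)` and pass the first result to `Image.open(BytesIO(...))`.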
We have seen that the Nano Banana model can produce photorealistic images, but its strengths go further. As mentioned earlier, Nano Banana is particularly powerful for image editing, which we will explore next.
Let’s try prompt-based image editing with the image we just generated. We will use the following prompt to slightly change the artisan’s appearance:
Using the provided image, place a pair of thin reading glasses gently on the artisan’s nose as she draws the wax lines. Make sure the reflections look realistic and the glasses sit naturally on her face without obscuring her eyes.
The resulting image is shown below:


The image above is the same as before, but glasses have been added to the artisan’s face. This shows how Nano Banana can edit an image based on a specific prompt while maintaining overall consistency.
To do this in Python, you can provide your base image and the new prompt using the following code:
from PIL import Image

# This code assumes 'client' has been configured from the previous step
base_image = Image.open('/path/to/your/photo.png')

edit_prompt = "Using the provided image, place a pair of thin reading glasses gently on the artisan's nose..."

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=[edit_prompt, base_image],
)
# Extract and display the image from 'response' as in the previous snippet

Next, let’s test character consistency by generating a new scene where the artisan looks directly at the camera and smiles:
Generate a new photorealistic image using the provided image as an identity reference: the same batik artisan, now looking at the camera with a relaxed smile at the same wooden table. Medium close-up look, 85 mm, soft veranda light, blurred background dye vats.
The resulting image is shown below.


We have successfully changed the scene while maintaining character consistency. To test a more drastic change, let’s use the following prompt to see how Nano Banana performs.
Create a product-styling image using the provided image as an identity reference: the same artisan presents a finished batik cloth, arms extended toward the camera. Soft, even window light, 50 mm look, neutral blurred background.
The result is shown below.


The resulting image shows a completely different scene but maintains the same character. This highlights the model’s ability to produce diverse content from a single reference image.
Next, let’s try image style transfer. We will use the following prompt to convert the photorealistic image into a watercolor painting.
Using the provided image as an identity reference, repaint this scene as a delicate watercolor on cold-press paper: loose washes for the indigo fabric, soft bleeding edges, blooms on the table and background. Keep her pose holding the fabric, the soft smile, and the round glasses. Keep the veranda light soft, with subtle granulation and visible paper texture.
The result is shown below.


The image shows that the style has been converted to watercolor while preserving the subject and composition of the original.
Finally, we will try image fusion, where we add an object from one image to another. For this example, I generated an image of a woman’s straw hat using Nano Banana:


Using the hat image, we will now place it on the artisan’s head with the following prompt:
Keep the same woman posed outdoors in open shade and place the straw hat from the product image on her head. Align the crown and fit it realistically on her head. Tilt it over her right ear (camera left), with the ribbons draping gently under gravity. Use soft skylight as the key light with a gentle rim from the bright background. Maintain the true straw and ribbon texture, natural skin tones, and a believable shadow from the brim across the glasses and upper face. Keep the batik cloth in her hands. Do not change the watercolor style.
This process merges the hat image with the base image to create a new image, with minimal changes to the pose and overall style. In Python, use the following code:
from PIL import Image

# This code assumes 'client' has been configured from the first step
base_image = Image.open('/path/to/your/photo.png')
hat_image = Image.open('/path/to/your/hat.png')

fusion_prompt = "Move the same woman and pose outdoors in open shade and place the straw hat..."

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=[fusion_prompt, base_image, hat_image],
)

For best results, use at most three input images; providing more can reduce output quality.
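Since multi-image fusion sends every input image in the request, it can help to downscale large inputs before calling the API. The helper below is a general preprocessing sketch using Pillow, not something the Gemini SDK requires:

```python
from PIL import Image

def shrink_for_upload(img: Image.Image, max_side: int = 1024) -> Image.Image:
    """Downscale an image so its longest side is at most max_side pixels,
    preserving aspect ratio. Returns the image unchanged if already small."""
    if max(img.size) <= max_side:
        return img
    img = img.copy()
    img.thumbnail((max_side, max_side))  # resizes in place, keeps aspect ratio
    return img
```

You could then pass `contents=[fusion_prompt, shrink_for_upload(base_image), shrink_for_upload(hat_image)]` to keep request sizes down.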
That covers the basics of using the Nano Banana model. In my opinion, this model shines when you have existing images you want to change or edit. It is especially useful for maintaining consistency across a series of generated photos.
Try it for yourself, and don’t be afraid to iterate, as you often won’t get the best image on the first attempt.
Wrapping Up
Gemini 2.5 Flash Image, or Nano Banana, is Google’s latest image generation and editing model. It offers powerful capabilities compared to previous image generation models. In this article, we explored how to use Nano Banana’s prompt-based features, highlighting its ability to maintain consistency and apply stylistic changes.
I hope this has been helpful!
Cornelius Yudha Wijaya is a data science assistant manager and data writer. Based in Indonesia and working full time, he loves to share Python and data tips via social media and written media. Cornelius writes on a variety of AI and machine learning topics.