AMD has released a new Stable Diffusion 3 Medium artificial intelligence (AI) model for XDNA 2 neural processing units (NPUs). The chipmaker claims it is the world's first AI model of its kind to run in the Block FP16 (BF16) format. The model will run on the new Ryzen AI laptops with at least 24GB of RAM, once users download TensorStack's Amuse 3.1 Beta software. Stable Diffusion 3 Medium is an on-device image generation model that does not require an internet connection.
AMD’s Image Generation Model Can Produce Print-Quality Photos
In a press release, the Santa Clara-based tech giant detailed the new image generation model. The AI model is based on Stable Diffusion 3 Medium, optimised for the company's XDNA 2 NPUs, which shipped in 2024 and power the newly released Ryzen AI laptops.
The company claims the model can be used to generate stock-quality images from text prompts. The model produces images at 1024×1024 resolution, which are then upscaled to a 2048×2048 print-ready resolution using the NPU's capabilities.
The new AI model ships as part of the new Amuse 3.1 Beta desktop app, which is free to download and install. Since the image generation model operates completely locally, it works even when the device is not connected to the Internet. All data processing happens on-device, powered by the XDNA 2 NPUs.
AMD said it has reduced the AI model's memory requirements: it now needs 24GB of RAM instead of the 32GB that was necessary for the Stable Diffusion XL Turbo model. In addition, the new image model uses only 9GB of RAM while active. The company achieved this using the memory-efficient Block Floating Point 16, or Block FP16 (BF16), format.
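The memory saving comes from the nature of block floating point, where a group of values shares a single exponent so that only the per-element mantissas need full storage. The sketch below is a conceptual illustration of that idea in Python, not AMD's actual implementation; the block size and mantissa precision are arbitrary choices for the demo.

```python
import numpy as np

def block_fp16_quantize(values, block_size=8):
    """Conceptual sketch of block floating point quantization.

    Each block of `block_size` values shares one exponent (derived from
    the largest magnitude in the block), so each element only needs to
    store a short mantissa. Illustrative only, not AMD's Block FP16.
    """
    values = np.asarray(values, dtype=np.float64)
    pad = (-len(values)) % block_size          # pad so length divides evenly
    padded = np.concatenate([values, np.zeros(pad)])
    blocks = padded.reshape(-1, block_size)
    out = np.empty_like(blocks)
    for i, block in enumerate(blocks):
        # Shared exponent: power of two covering the block's largest magnitude
        max_mag = np.max(np.abs(block))
        exp = np.floor(np.log2(max_mag)) if max_mag > 0 else 0.0
        scale = 2.0 ** exp
        # Quantize mantissas to 8 fractional bits (arbitrary for the demo)
        mantissas = np.round(block / scale * 256) / 256
        out[i] = mantissas * scale
    return out.reshape(-1)[:len(values)]
```

Because one exponent is amortised across the whole block, the per-element footprint shrinks relative to storing a full floating point value for every weight, which is the general trade-off formats like Block FP16 exploit.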
The tech giant highlighted that the Stable Diffusion 3 Medium AI model adheres strictly to prompt structure and layout. AMD said users trying the model should first describe the image type, then the structural components, and finally the details and other context. Negative prompts can be used to remove elements from the image, and full stops play a role in shaping the model's understanding of the prompt's context.
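The recommended ordering above can be sketched as a small helper that assembles a prompt in that sequence. The `build_prompt` function and its field names are hypothetical illustrations, not part of any official Amuse or AMD API.

```python
def build_prompt(image_type, structure, details, negative=None):
    """Assemble a prompt in the recommended order: image type first,
    then structural components, then fine details. Sections are joined
    with full stops, which the guidance says help delimit context."""
    positive = ". ".join([image_type, structure, details]) + "."
    return {"prompt": positive, "negative_prompt": negative or ""}

example = build_prompt(
    image_type="A photorealistic product photo",
    structure="a ceramic mug centered on a wooden table",
    details="soft morning light, shallow depth of field",
    negative="text, watermark, blurry edges",
)
print(example["prompt"])
```

Keeping the broad description ahead of the fine detail mirrors the press release's advice, and the negative prompt field carries the elements to be excluded from the image.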