Overview
- Zhipu AI says GLM-Image was trained end-to-end on Huawei's Atlas 800T A2 servers, which run Ascend processors, using the MindSpore framework.
- The 16‑billion‑parameter model uses a hybrid design combining autoregressive and diffusion approaches, which improves text rendering, spatial control, and layout fidelity.
- The release follows Zhipu AI's placement on a U.S. trade blacklist in 2025, which cut off its access to Nvidia H100 and A100 GPUs, and is presented as proof that an all‑domestic training stack is viable.
- Reuters reported that Chinese customs instructed agents to block imports of Nvidia H200 chips and that officials urged companies to avoid buying them unless necessary.
- GLM-Image is available on Hugging Face and through a paid API. Analysts note that Huawei plans to ramp up Ascend output even though per‑chip performance trails Nvidia's, and Zhipu has not disclosed the scale or speed of the training run.