Google Accelerates XR Web Creation by Merging Gemini and Canvas 🚀

Published on February 23, 2026 | Translated from Spanish

Google is lowering the barrier to entry for developing extended reality experiences on the web. Its strategy combines the Gemini model with the Canvas workspace: a text description serves as the basis for generating interactive 3D prototypes, which can then be refined and exported to WebXR. This workflow aims to streamline the path from a concept to an immersive scene that runs on devices such as virtual reality headsets.

*A designer writes a prompt in Canvas while Gemini generates, in real time, an interactive 3D prototype visualized in a virtual reality headset.*

Technical Workflow: From Prompt to Prototype and Then to WebXR 🔄

The process begins with the user describing a scene in text. Gemini acts as an assistant that interprets the request and generates both the 3D graphic elements and the code needed to visualize them interactively. Canvas provides the environment for modifying and assembling these assets. Once the prototype is defined, the platform facilitates its conversion to the WebXR standard, allowing it to be viewed in browsers compatible with XR headsets.
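The article does not show what the exported code looks like, but the final WebXR step can be sketched with the standard browser API. The sketch below only uses the real `navigator.xr` calls (`isSessionSupported`, `requestSession`); the scene-generation step itself is Google's proprietary pipeline and is not reproduced here. The mode-selection helper is factored out as a pure function, an assumption of this sketch rather than anything Google documents.

```javascript
// Minimal sketch of entering a WebXR session for an exported prototype.
// Only the navigator.xr calls are real Web APIs; the structure around
// them is illustrative.

// Pure helper: pick a session mode from support flags.
// Factored out so the decision logic can run outside a browser.
function pickSessionMode(supportsImmersiveVR, supportsInline) {
  if (supportsImmersiveVR) return "immersive-vr"; // headset available
  if (supportsInline) return "inline";            // render inside the page
  return null;                                    // no XR support at all
}

async function enterXR(xr /* pass navigator.xr */) {
  const mode = pickSessionMode(
    await xr.isSessionSupported("immersive-vr"),
    await xr.isSessionSupported("inline")
  );
  if (!mode) throw new Error("WebXR is not supported on this device");
  // In a real page, requestSession must be triggered by a user gesture.
  return xr.requestSession(mode);
}

// Browser usage (not executed here):
// enterButton.addEventListener("click", () => enterXR(navigator.xr));
```

In practice the returned session would be handed to a renderer (for example, three.js via `renderer.xr.setSession`) to draw the generated scene each frame.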

Goodbye to 3D Modeling Courses, Hello to Creative Writing Ones ✍️

It seems the next in-demand skill won't be mastering Blender, but knowing how to write descriptions like "a room with a plush sofa and a plant that looks like it needs water." Our value as creators may depend on our ability to be more specific than "make something cool in 3D." Who would have thought that the path to virtual reality would run through perfecting the essays we wrote in school?