Adobe Expands Its Generative AI Firefly with Video and External Models

Published on January 05, 2026 | Translated from Spanish
[Image: Adobe Creative Cloud interface showing the Firefly panel with options to generate video from text and a dropdown menu with logos of OpenAI, Runway, Pika Labs, and WeShop.]


Adobe has taken a significant leap, announcing that its generative AI platform Firefly can now handle video content. The company also revealed an openness strategy: integrating artificial intelligence models developed by external partners directly into its Creative Cloud ecosystem. 🚀

Firefly Ventures into Video Territory

The new Firefly video model not only generates clips from scratch from text prompts but also offers powerful features for modifying existing material. Users can extend a clip's duration, alter its entire visual style, or remove unwanted objects simply by describing what they want to do. Adobe emphasizes that the model was trained on licensed and public domain content, making the tools safe for use in commercial projects.

Key capabilities of the video model:
  • Generate from text: Create video sequences based on written descriptions.
  • Extend clips: Add frames coherently to lengthen a shot.
  • Change style: Modify the overall visual appearance of a video.
  • Remove objects: Eliminate specific elements from footage using prompts.
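To make the workflow concrete, here is a minimal sketch of how a text-to-video request of this kind could be expressed programmatically. The endpoint, payload fields, and authentication scheme are illustrative placeholders only; Adobe has not published these details in this announcement, so nothing below should be read as the actual Firefly API.

```python
# Hypothetical sketch only: the endpoint, payload fields, and auth scheme below
# are illustrative placeholders, not Adobe's documented Firefly API.
import os
import requests

API_BASE = "https://api.example.com/v1"            # placeholder, not a real Adobe endpoint
ACCESS_TOKEN = os.environ["EXAMPLE_ACCESS_TOKEN"]  # placeholder credential

def generate_video_from_text(prompt: str, duration_seconds: int = 5) -> dict:
    """Submit a text-to-video generation job and return the service's JSON response."""
    payload = {
        "prompt": prompt,              # written description of the desired clip
        "duration": duration_seconds,  # requested clip length
        "operation": "text_to_video",  # other operations might cover extending a clip,
                                       # restyling it, or removing an object
    }
    response = requests.post(
        f"{API_BASE}/video/generate",
        json=payload,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    job = generate_video_from_text("A drone shot over a foggy pine forest at sunrise")
    print(job)
```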

An Open Ecosystem: Adobe Integrates Third-Party AI

Adobe's strategy goes beyond its own models. The company is launching a program to partner with other AI model developers; the first confirmed partners are OpenAI (with Sora), Runway (Gen-3 Alpha), Pika Labs, and WeShop. This means artists working in apps like Photoshop or Illustrator can invoke these external models without leaving the Adobe environment, maintaining consistency in their projects' style and layer structure.

Benefits of the integration:
  • Unified access: Trigger different AI models from a single interface within Creative Cloud.
  • Seamless workflow: Avoid switching between applications; everything is managed within Adobe software.
  • Creative consistency: Preserve adjustments, styles, and layers when using complementary AI capabilities.
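As an illustration of the "unified access" idea, the sketch below shows one way a host application could route a request to whichever model the artist picks from a single menu. The class and method names are hypothetical; Adobe has not described how its integration works internally, so this is a conceptual pattern rather than its plug-in API.

```python
# Illustrative sketch of the "unified access" idea: one interface that routes a
# request to whichever model provider the artist selects. Provider names mirror
# the article; the classes and methods are hypothetical, not Adobe's plug-in API.
from abc import ABC, abstractmethod

class VideoModelProvider(ABC):
    """Common interface every integrated model must satisfy."""
    name: str

    @abstractmethod
    def generate(self, prompt: str) -> str:
        """Return an identifier (or path) for the generated clip."""

class FireflyProvider(VideoModelProvider):
    name = "Adobe Firefly"
    def generate(self, prompt: str) -> str:
        return f"firefly-clip for: {prompt}"   # stand-in for a real generation call

class SoraProvider(VideoModelProvider):
    name = "OpenAI Sora"
    def generate(self, prompt: str) -> str:
        return f"sora-clip for: {prompt}"      # stand-in for a real generation call

class RunwayProvider(VideoModelProvider):
    name = "Runway Gen-3 Alpha"
    def generate(self, prompt: str) -> str:
        return f"runway-clip for: {prompt}"    # stand-in for a real generation call

# The host application keeps a registry and dispatches to the chosen provider,
# so the artist never leaves the editing environment.
PROVIDERS = {p.name: p for p in (FireflyProvider(), SoraProvider(), RunwayProvider())}

def generate_with(provider_name: str, prompt: str) -> str:
    return PROVIDERS[provider_name].generate(prompt)

if __name__ == "__main__":
    print(generate_with("OpenAI Sora", "Extend this shot of waves by two seconds"))
```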

The New Landscape for Creators

This evolution gives artists a wider array of AI options directly inside their everyday tools. The question now focuses less on which software to use and more on which AI model to choose for a specific task, whether Adobe's native model or one from its partners, while the request is processed in the background. This integration could redefine how digital creative work is conceptualized and executed. 💡