Luma, an a16z-backed AI video and 3D model startup, has unveiled Ray3 Modify, a new model that lets users alter existing video clips using character reference images while preserving the original performance. Users can also specify start and end frames to guide the model in generating seamless transitional footage.
On Thursday, the company said Ray3 Modify addresses a challenge creative studios face: preserving human performances during AI-driven editing or effects generation. According to the startup, the model tracks input footage more faithfully, letting studios build creative and branding projects around human actors. Luma said the model preserves an actor’s original movements, timing, gaze, and emotional expressions even as the scene is transformed.
Ray3 Modify lets users supply a character reference image that transforms a human actor’s appearance in the original footage into the specified character. The feature also helps creators keep elements such as costumes, likeness, and identity consistent throughout a production.
The model also lets users define start and end reference frames for video generation, helping creators guide transitions and control character movement and behavior for smooth continuity across scenes.
“While generative video models offer immense expressiveness, they often lack precise control,” stated Amit Jain, co-founder and CEO of Luma AI. He added, “Today, we are thrilled to unveil Ray3 Modify, which merges the real world with AI’s expressive capabilities, granting creatives complete control. This empowers creative teams to film performances and then instantly alter them to any desired location, change outfits, or even digitally reshoot scenes using AI, thereby eliminating the need for physical reshoots.”
Luma said the new model is available through its Dream Machine platform. The company, which competes with firms such as Runway and Kling, first introduced video modification features in June 2025.
The launch follows the startup’s $900 million funding round, announced in November and led by Humain, an AI company owned by Saudi Arabia’s Public Investment Fund. Existing investors, including a16z, Amplify Partners, and Matrix Partners, also participated. The startup also plans to work with Humain to build a 2GW AI cluster in Saudi Arabia.