Machinima is the umbrella term for filmmaking using real-time 3D engines, including video games and virtual worlds. Several more specific labels are in use:
- Virtual World Machinima: a more precise label for films made within platforms like Second Life, Kitely, OpenSimulator, or VRChat.
- Metaverse filmmaking: a newer term gaining traction, especially with the rise of immersive platforms and social VR. It emphasizes storytelling within persistent, user-generated virtual environments.
- Avatar Cinema: occasionally used to describe narrative films where avatars act as performers in virtual spaces.
- Synthetic Cinema: a broader term that includes machinima but also covers AI-generated or procedurally created cinematic content.
Kitely, built on OpenSimulator, has a rich history of machinima: creators use in-world tools, avatar animations, and scripted environments to produce films, and the platform gives them more control over hosting and scalability. Filmmakers often use screen capture software, in-world camera tools, and post-production editing to shape their narratives.
"12 hours" was created before Animesh actors were widely available and programmable. The film used a live avatar actor, in-world camera techniques, uploaded sound clips played through in-world controls, and an uploaded song by the author played using music box scripts. Other actors were scripted NPCs: an image of an avatar's appearance performing scripted movements is captured, stored in an object, and played back when the object is clicked. Background actors were still-life objects. The film also used follower scripts for the Angel NPC (a minimal sketch follows). Other films are often created with many avatars working collectively in-world, much like theater actors.
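For readers curious about the follower scripts mentioned above, here is a minimal sketch of the classic LSL follower pattern. The avatar name, sensor range, and follow distance are placeholder assumptions, not values taken from the film.

```
// Minimal follower sketch (LSL): a physical object trails a named avatar.
// The name, 20 m sensor range, and 2 m follow distance are placeholders.
string TARGET = "Angel Target";   // hypothetical avatar legacy name
float  FOLLOW_DIST = 2.0;         // stop this many meters short of the target

default
{
    state_entry()
    {
        llSetStatus(STATUS_PHYSICS, TRUE);  // llMoveToTarget needs physics
        // Scan a full sphere for the named avatar once per second.
        llSensorRepeat(TARGET, NULL_KEY, AGENT_BY_LEGACY_NAME, 20.0, PI, 1.0);
    }

    sensor(integer num)
    {
        vector target = llDetectedPos(0);
        // Aim for a point FOLLOW_DIST meters on our side of the target.
        vector offset = llVecNorm(llGetPos() - target) * FOLLOW_DIST;
        llMoveToTarget(target + offset, 0.5);  // 0.5 s damping
    }

    no_sensor()
    {
        llStopMoveToTarget();  // hold position when the target is out of range
    }
}
```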
In virtual world filmmaking—especially in platforms like Second Life or Kitely—non-Animesh actors typically refer to live avatars controlled by users or scripted bots that are not rigged mesh objects. Instead of being autonomous animated mesh entities, these actors are captured visually during performance using screen recording or snapshot tools.
These actors are standard user avatars, not mesh-based NPCs. They:
- Perform actions via user control or scripted gestures.
- Are recorded in real time using in-world camera tools or external screen capture software.
- Do not require rigged mesh or Animesh scripting to animate.

Filmmakers use:
- In-world camera controls: to frame and follow avatar movement.
- Gesture and animation HUDs: to trigger expressions, dances, or dialogue (a minimal HUD sketch follows this list).
- Screen capture software (e.g., OBS Studio, Camtasia): to record scenes.
- Post-production editing: to add voiceovers, effects, and transitions.
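As a rough illustration of the animation-HUD idea, the LSL sketch below toggles a single animation on the wearer each time the HUD is touched. The animation name is hypothetical and would need to exist in the HUD prim's inventory.

```
// Minimal animation-HUD sketch (LSL): worn as a HUD, it toggles a named
// animation on the wearer on each touch. "ACT-bow" is a hypothetical
// animation that must be present in the HUD prim's inventory.
string ANIM = "ACT-bow";
integer playing = FALSE;

default
{
    attach(key id)
    {
        // Ask the wearer for animation permission when the HUD is attached.
        if (id != NULL_KEY)
            llRequestPermissions(id, PERMISSION_TRIGGER_ANIMATION);
    }

    touch_start(integer total)
    {
        if (llGetPermissions() & PERMISSION_TRIGGER_ANIMATION)
        {
            if (playing) llStopAnimation(ANIM);
            else         llStartAnimation(ANIM);
            playing = !playing;
        }
    }
}
```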
This method is especially common in avatar cinema, where the emphasis is on performance, staging, and narrative rather than autonomous animation. It’s akin to live-action filmmaking—except the actors are digital avatars in a virtual set.
It has gotten much easier to create movies in-world with Animesh actors. Programmed Animesh actors are animated, scripted characters used in virtual worlds, especially platforms like Second Life, that combine mesh-based avatars with embedded behaviors, allowing them to perform autonomously or interactively within a scene.
Animesh stands for animated mesh, a feature introduced in Second Life that allows creators to:
- Use rigged mesh objects (like avatars or creatures).
- Animate them using scripting (via Linden Scripting Language, or LSL).
- Place them in-world as non-player characters (NPCs) or props.

Unlike static mesh objects, Animesh can walk, gesture, emote, or perform complex sequences, making them ideal for machinima, storytelling, and immersive experiences.
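A minimal Animesh script can be very short. The sketch below, using a hypothetical animation name, toggles an inventory animation when the Animesh object is clicked; the object must have the Animated Mesh feature enabled and contain the animation.

```
// Minimal Animesh sketch (LSL): toggles an inventory animation on an
// Animesh object when it is clicked. "walk-cycle" is a hypothetical name.
string ANIM = "walk-cycle";
integer playing = FALSE;

default
{
    touch_start(integer total)
    {
        if (playing) llStopObjectAnimation(ANIM);
        else         llStartObjectAnimation(ANIM);
        playing = !playing;
    }
}
```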
Programmed Animesh actors are essentially scripted performers. Creators can:
- Assign animation sequences (e.g., walking, dancing, speaking).
- Trigger behaviors based on events, like proximity, time, or user input (see the sketch after this list).
- Use dialogue scripts or AI-driven responses for interactive scenes.
- Coordinate multiple actors for choreographed performances.
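As a sketch of event-triggered behavior, the script below starts a greeting animation when an avatar comes within sensor range and stops it when no one is nearby. The animation name, range, and greeting text are hypothetical placeholders.

```
// Proximity-trigger sketch (LSL): an Animesh actor waves and greets when
// an avatar comes within range, and stops when the area is empty.
string ANIM  = "greet-wave";
float  RANGE = 5.0;
integer greeting = FALSE;

default
{
    state_entry()
    {
        // Scan a full sphere for any avatar, twice per second.
        llSensorRepeat("", NULL_KEY, AGENT, RANGE, PI, 0.5);
    }

    sensor(integer num)
    {
        if (!greeting)
        {
            llStartObjectAnimation(ANIM);
            llSay(0, "Welcome!");  // one scripted line of dialogue
            greeting = TRUE;
        }
    }

    no_sensor()
    {
        if (greeting)
        {
            llStopObjectAnimation(ANIM);
            greeting = FALSE;
        }
    }
}
```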
These actors can be part of a machinima film, a virtual theater production, or even a museum exhibit, bringing life to environments without needing live users to control them. The typical toolkit:
- LSL scripting: controls movement, animation, and interaction.
- Animation overriders (AOs): customize default behaviors.
- Scene managers: coordinate timing and transitions (a simple cue-based sketch follows).
- Voice sync or gesture triggers: for lip-sync and expressive realism.
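A scene manager can be as simple as one controller prim broadcasting cues on a private channel that every actor listens for. In the sketch below, the channel number, cue names, and timing are hypothetical placeholders.

```
// Scene-manager sketch (LSL): a controller prim broadcasts cues on a
// private channel; each Animesh actor listens and plays its part.
integer CHANNEL = -77123;  // hypothetical private channel

default
{
    touch_start(integer total)
    {
        // The director clicks the controller to start the scene.
        llRegionSay(CHANNEL, "scene1-start");
        llSetTimerEvent(10.0);  // cue the end of the beat in 10 seconds
    }

    timer()
    {
        llSetTimerEvent(0.0);
        llRegionSay(CHANNEL, "scene1-end");
    }
}

// A matching listener inside each Animesh actor might look like this:
// default
// {
//     state_entry() { llListen(-77123, "", NULL_KEY, ""); }
//     listen(integer chan, string name, key id, string msg)
//     {
//         if (msg == "scene1-start")    llStartObjectAnimation("blocking-A");
//         else if (msg == "scene1-end") llStopObjectAnimation("blocking-A");
//     }
// }
```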
