Experiments
Experimental features that consume inference credits. Explore new workflows before they graduate into full features.
Segment objects in images and lift them into 3D meshes with point/box prompts.
Analyze object orientation from images and align GLB assets with inferred camera pose.
Edit and test Gaussian splat scenes with navmeshes, physics, and live preview.
Generate character cards from templates with art, stats, and story fields.
Debug and preview VRM character models with live animation playback and facial expressions.
Stage a 3D reference scene with turntable rotation, a compass overlay, and orbit controls for baking angle references.
Play ROMs in-browser with EmulatorJS and AI-assisted input helpers.
Pick a level and stream a playable browser session with VLM chat controls.
Stream a remote browser over WebRTC with VLM-assisted navigation.
Experiment with AI-generated UI surfaces using the A2UI protocol.
Hold realtime full-duplex voice chats with cloned voice samples, transcript streaming, and Web Audio playback.
Run voice-avatar sessions with LiveKit streaming, gestures, and transcription.
Generate video in realtime from seed images with keyboard/mouse control.
Upscale images in the browser with AI, using WebGPU compute shaders with multi-pass support.
Preview how URLs appear on Telegram, Discord, Slack, X, Facebook, LinkedIn, and WhatsApp.
Test GPT-5.4 native computer use with screen sharing and system input injection.
Chat in real time with conversational AI avatars built from a custom image and voice via RunwayML.
Inspect, edit, and repack u8-encoder binary files with a devtools-style tree viewer.
Generate, remesh, and rig 3D models with the Meshy AI API.
Generate, animate, and segment 3D meshes with the Tripo AI API.
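The link-preview experiment above works because each platform reads Open Graph `<meta>` tags from the page's HTML. A minimal extraction sketch in Python (a regex illustration only; a real preview renderer would use a proper HTML parser and also fall back to Twitter Card and standard meta tags):

```python
import re

def extract_open_graph(html: str) -> dict[str, str]:
    """Pull Open Graph meta tags (og:title, og:image, ...) out of raw HTML.

    Assumes the common attribute order `property` before `content`; this is
    a sketch, not a spec-complete parser.
    """
    pattern = re.compile(
        r'<meta[^>]+property=["\'](og:[^"\']+)["\'][^>]+content=["\']([^"\']*)["\']',
        re.IGNORECASE,
    )
    return {prop: content for prop, content in pattern.findall(html)}

html = """
<head>
  <meta property="og:title" content="Experiments" />
  <meta property="og:description" content="Explore new workflows." />
</head>
"""
print(extract_open_graph(html))
# → {'og:title': 'Experiments', 'og:description': 'Explore new workflows.'}
```

Platforms differ in which tags they honor (e.g. `og:image` aspect-ratio handling), which is exactly what the previewer lets you compare side by side.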