claude-code-video-toolkit
digitalsamba/claude-code-video-toolkit (install with `/plugin install claude-code-video-toolkit`)
Video and audio processing with FFmpeg. Use for format conversion, resizing, compression, audio extraction, and preparing assets for Remotion. Triggers include converting GIF to MP4, resizing video, extracting audio, compressing files, or any media transformation task.
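As a minimal sketch of one such conversion (GIF to MP4), the helper below builds an FFmpeg argument list; the function name and defaults are illustrative, and it assumes `ffmpeg` is on PATH when you actually run the command:

```python
import subprocess

def gif_to_mp4_cmd(src: str, dst: str, fps: int = 30) -> list[str]:
    """Build an ffmpeg argv for GIF -> MP4 (H.264)."""
    return [
        "ffmpeg", "-y", "-i", src,
        # Pad to even dimensions: H.264 with yuv420p requires them.
        "-vf", f"fps={fps},scale=trunc(iw/2)*2:trunc(ih/2)*2",
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-movflags", "+faststart",  # moov atom up front for web playback
        dst,
    ]

cmd = gif_to_mp4_cmd("input.gif", "output.mp4")
# subprocess.run(cmd, check=True)  # uncomment when ffmpeg is installed
```

`yuv420p` plus even width/height is what makes the output play in browsers and on mobile; `+faststart` lets playback begin before the file finishes downloading.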
AI video generation with LTX-2.3 22B — text-to-video, image-to-video clips for video production. Use when generating video clips, animating images, creating b-roll, animated backgrounds, or motion content. Triggers include video generation, animate image, b-roll, motion, video clip, text-to-video, image-to-video.
AI music generation with ACE-Step 1.5 — background music, vocal tracks, covers, stem extraction, audio repainting, and continuation for video production. Use when generating music, soundtracks, jingles, or working with audio stems. Triggers include background music, soundtrack, jingle, music generation, stem extraction, cover, style transfer, repaint, continuation, or musical composition tasks.
Python video composition with moviepy 2.x — overlaying deterministic text on AI-generated video (LTX-2, SadTalker), compositing clips, single-file build.py video projects. Use when adding labels/captions/lower-thirds to LTX-2 or SadTalker outputs, building short ad-style spots in pure Python without Remotion, or doing programmatic video composition. Triggers include text overlay on video, label LTX-2 clip, caption SadTalker output, lower third, build.py video, moviepy, Python video composition, sub-30s ad spot.
Cloud GPU processing via RunPod serverless. Use when setting up RunPod endpoints, deploying Docker images, managing GPU resources, troubleshooting endpoint issues, or understanding costs. Covers all 5 toolkit images (qwen-edit, realesrgan, propainter, sadtalker, qwen3-tts).
Generate AI voiceovers, sound effects, and music using ElevenLabs APIs. Use when creating audio content for videos, podcasts, or games. Triggers include generating voiceovers, narration, dialogue, sound effects from descriptions, background music, soundtrack generation, voice cloning, or any audio synthesis task.
Best practices for Remotion - video creation in React
Embedding images in Remotion using the <Img> component
Typography and text animation patterns for Remotion.
Importing images, videos, audio, and fonts into Remotion
Getting the duration of a video file in seconds with Mediabunny
Transcribing audio to generate captions in Remotion
Light leak overlay effects for Remotion using @remotion/light-leaks.
Measuring text dimensions, fitting text to containers, and checking overflow
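The fitting half of this skill boils down to a binary search over font size. Here is a sketch in Python with a stub measurer; in Remotion the measuring role is played by the framework's text-measurement utilities, and the stub's linear width model is purely illustrative:

```python
def fit_font_size(text, max_width, measure, lo=1.0, hi=200.0, iters=20):
    """Binary-search the largest font size whose rendered width fits.

    `measure(text, size)` returns rendered width in px; here it is a stub.
    """
    for _ in range(iters):
        mid = (lo + hi) / 2
        if measure(text, mid) <= max_width:
            lo = mid  # fits: try a bigger size
        else:
            hi = mid  # overflows: try a smaller size
    return lo

# Stub measurer: width grows linearly with size (0.6em average glyph width).
measure = lambda text, size: len(text) * size * 0.6
size = fit_font_size("Hello, Remotion", 300, measure)
```

Twenty iterations narrow the 1–200 px range to well under a thousandth of a pixel, so the result is effectively exact for layout purposes.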
Make a video parametrizable by adding a Zod schema
Using TailwindCSS in Remotion.
Check if a video can be decoded by the browser using Mediabunny
Scene transitions and overlays for Remotion using TransitionSeries.
Interpolation and timing in Remotion—prefer interpolate with Bézier easing; springs as a specialized option
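Remotion's `interpolate()` is a TypeScript API supporting multi-point ranges and easing functions; the Python sketch below mirrors only its core two-point behavior with the default clamped extrapolation, to show the semantics:

```python
def interpolate(t, input_range, output_range, clamp=True):
    """Linear interpolation in the style of Remotion's interpolate().

    Maps t from input_range onto output_range; extrapolation outside the
    input range is clamped by default (Remotion's "clamp" behavior).
    """
    (i0, i1), (o0, o1) = input_range, output_range
    progress = (t - i0) / (i1 - i0)
    if clamp:
        progress = min(1.0, max(0.0, progress))
    return o0 + progress * (o1 - o0)

opacity = interpolate(15, (0, 30), (0.0, 1.0))  # halfway through -> 0.5
```

A typical use is fading in over the first 30 frames: feed the current frame in as `t` and use the result as opacity.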
Importing .srt subtitle files into Remotion using @remotion/captions
Trimming patterns for Remotion - cut the beginning or end of animations
Adaptive silence detection for video/audio files using FFmpeg loudnorm and silencedetect
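FFmpeg's `silencedetect` filter reports silences as `silence_start` / `silence_end` lines on stderr (e.g. from `ffmpeg -i in.mp4 -af silencedetect=noise=-30dB:d=0.5 -f null -`). A sketch of pairing those lines into intervals, assuming that log format:

```python
import re

SILENCE_RE = re.compile(
    r"silence_(start|end): ([0-9.]+)(?: \| silence_duration: ([0-9.]+))?"
)

def parse_silences(log: str) -> list[tuple[float, float]]:
    """Pair silence_start/silence_end lines into (start, end) intervals,
    in seconds."""
    intervals, start = [], None
    for kind, ts, _dur in SILENCE_RE.findall(log):
        if kind == "start":
            start = float(ts)
        elif start is not None:
            intervals.append((start, float(ts)))
            start = None
    return intervals

log = """\
[silencedetect @ 0x55] silence_start: 1.25
[silencedetect @ 0x55] silence_end: 2.75 | silence_duration: 1.5
"""
print(parse_silences(log))  # [(1.25, 2.75)]
```

The resulting intervals can then feed cut lists or be used to trim leading/trailing silence from voiceover takes.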
Measuring DOM element dimensions in Remotion
Defining compositions, stills, folders, default props and dynamic metadata
Sequencing patterns for Remotion - delay, trim, limit duration of items
Embedding videos in Remotion - trimming, volume, speed, looping, pitch
Getting the duration of an audio file in seconds with Mediabunny
Audio visualization patterns - spectrum bars, waveforms, bass-reactive effects
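In Remotion the per-frame loudness data comes from the framework's audio-analysis utilities; the underlying idea for a bass-reactive effect is just mapping a window's RMS amplitude to a visual parameter. A stdlib-only sketch, with illustrative `base`/`gain` values:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a window of PCM samples in [-1, 1]."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def reactive_scale(samples, base=1.0, gain=0.3):
    """Map loudness to a scale factor, e.g. for a bass-pumping logo:
    silence -> base, full-scale signal -> base + gain."""
    return base + gain * min(1.0, rms(samples))

quiet = [0.0] * 1024
loud = [1.0, -1.0] * 512
```

Applying `reactive_scale` per frame to the window of samples around that frame's timestamp yields the classic beat-synced pulse.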
Using audio and sound in Remotion - importing, trimming, volume, speed, pitch
3D content in Remotion using Three.js and React Three Fiber.
Loading Google Fonts and local fonts in Remotion
Adding AI-generated voiceover to Remotion compositions using TTS
Subtitle and caption rules for Remotion
Dynamically set composition duration, dimensions, and props
Chart and data visualization patterns for Remotion. Use when creating bar charts, pie charts, line charts, stock graphs, or any data-driven animations.
Fundamental animation skills for Remotion
Make map animations with Mapbox
Extract frames from videos at specific timestamps using Mediabunny
Embedding Lottie animations in Remotion.
Displaying GIFs, APNG, AVIF and WebP in Remotion
Including sound effects in Remotion compositions
Getting the width and height of a video file with Mediabunny
Rendering transparent videos in Remotion
Displaying captions in Remotion with TikTok-style pages and word highlighting
AI image editing prompting patterns for Qwen-Image-Edit. Use when editing photos while preserving identity, reframing cropped images, changing clothing or accessories, adjusting poses, applying style transfers, or character transformations. Provides prompt patterns, parameter tuning, and examples.
Create distinctive, production-grade frontend interfaces with high design quality. Use this skill when the user asks to build web components, pages, or applications. Generates creative, polished code that avoids generic AI aesthetics.
Record browser interactions as video using Playwright. Use for capturing demo videos, app walkthroughs, and UI flows for Remotion videos. Triggers include recording a demo, capturing browser video, screen recording a website, or creating walkthrough footage.
Toolkit-specific Remotion patterns — custom transitions, shared components, and project conventions. For core Remotion framework knowledge (hooks, animations, rendering, etc.), see the `remotion-official` skill.
Create professional videos autonomously using claude-code-video-toolkit — AI voiceovers, image generation, music, talking heads, and Remotion rendering.