
    Word-Level Generated Captions


    Generated captions in ScreenKite are word-level. Instead of creating one long subtitle block for a full sentence or clip, ScreenKite creates one caption cue per spoken word. This gives the editor the timing data it needs for short, Screen Studio-style caption reveals and precise agent workflows.

    Before You Generate Captions

    Open Settings -> Transcription and configure the Word-Level tab:

    1. Choose Automatic for the normal setup. ScreenKite uses ElevenLabs when an API key is configured, then falls back to a downloaded WhisperKit model.
    2. Choose ElevenLabs when you want hosted Scribe word timings.
    3. Choose Local when you want on-device WhisperKit word timestamps from a downloaded model.

    OpenAI, Groq, and Azure OpenAI are not used for generated caption timing. They can still be configured under Text & Export for AI cleanup, proofreading, or explicit transcript export workflows.
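The Automatic fallback order described above can be sketched as follows (the provider names come from the text; the function name and config keys are illustrative assumptions, not ScreenKite's actual internals):

```python
def pick_word_level_provider(config):
    """Sketch of the Automatic fallback: ElevenLabs when an API key is
    configured, otherwise a downloaded local WhisperKit model.
    The config dictionary shape is an assumption for illustration."""
    mode = config.get("mode", "automatic")
    if mode == "elevenlabs":
        return "elevenlabs"
    if mode == "local":
        return "whisperkit"
    # Automatic: prefer hosted Scribe word timings, fall back to on-device.
    if config.get("elevenlabs_api_key"):
        return "elevenlabs"
    if config.get("whisperkit_model_downloaded"):
        return "whisperkit"
    raise RuntimeError("no word-level transcription provider available")
```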

✅ For the most reliable generated captions, record microphone narration as its own track. ScreenKite can also generate captions from replacement or main audio when microphone audio is not available.

    Generate Captions

    1. Open a .skbundle project in the Project Editor.
    2. Make sure the project has microphone, replacement, or main audio.
    3. Use the caption generation action in the editor or ask an agent to generate captions.
    4. ScreenKite transcribes the audio with the configured word-level provider.
    5. ScreenKite imports an SRT where each cue maps to one spoken word.

    The result is a caption track made of short word-timed clips instead of sentence-length chunks. If the provider returns no speech, ScreenKite reports that no speech was detected. If the provider returns only sentence segments without word timestamps, generated captions stop instead of creating approximate long captions.
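Because each SRT cue maps to one spoken word, the imported file can be turned into word-timed entries with a few lines of standard-library code. A minimal sketch (the sample cues are invented for illustration):

```python
import re

# One SRT cue: index, "HH:MM:SS,mmm --> HH:MM:SS,mmm", then the text.
CUE = re.compile(
    r"(\d+)\s*\n(\d{2}:\d{2}:\d{2}),(\d{3}) --> (\d{2}:\d{2}:\d{2}),(\d{3})\s*\n(.+?)(?:\n\n|\Z)",
    re.S,
)

def to_seconds(hms, ms):
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s + int(ms) / 1000

def parse_word_srt(srt_text):
    """Return (word, start_s, end_s) tuples; one cue per spoken word."""
    return [
        (text.strip(), to_seconds(h1, ms1), to_seconds(h2, ms2))
        for _, h1, ms1, h2, ms2, text in CUE.findall(srt_text)
    ]

sample = """1
00:00:00,120 --> 00:00:00,400
Welcome

2
00:00:00,400 --> 00:00:00,700
to
"""
words = parse_word_srt(sample)  # [("Welcome", 0.12, 0.4), ("to", 0.4, 0.7)]
```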

    Agent Workflow

    Agents use the same word-level caption path as the app. A prompt can be as direct as:

    codex "Open ~/Desktop/Recording.skbundle and generate word-level captions from the microphone track"
    

    For transcript cuts, filler-word cleanup, or B-roll planning, the agent can reuse the same word timestamps so cuts and visual beats stay aligned with speech.
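Reusing the word timestamps for filler-word cleanup can be sketched like this (the cue shape matches the word-level captions above; the filler list and function are illustrative, not ScreenKite's agent API):

```python
FILLERS = {"um", "uh", "like"}  # illustrative filler list

def plan_filler_cuts(word_cues, fillers=FILLERS):
    """Given (word, start_s, end_s) cues, return the time ranges an agent
    could remove so cuts stay aligned with the spoken words."""
    return [
        (start, end)
        for word, start, end in word_cues
        if word.lower().strip(".,!?") in fillers
    ]

cues = [("So", 0.0, 0.2), ("um,", 0.2, 0.5), ("open", 0.5, 0.8), ("the", 0.8, 0.9)]
cuts = plan_filler_cuts(cues)  # [(0.2, 0.5)]
```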

    Timeline Behavior

    Generated captions appear on a Captions track in the timeline. Because every word has its own cue, you can inspect and edit timing at word granularity.

    Use Timeline & Tracks for track navigation basics, and Agentic Video Editing for transcript-driven editing workflows.
