faster-whisper is a reimplementation of OpenAI's Whisper model built on CTranslate2, a fast inference engine for Transformer models. Because it uses significantly less VRAM, people who don't have 10 GB of VRAM can still run large-v2; an RTX 2060 with 6 GB reportedly runs it without trouble, according to a comment on Quicke