Podcli vs Vizard

Vizard is a hosted clipper with a clean dashboard. Podcli is the local-first open-source version: no monthly cap, no watermark, no upload, scriptable from the CLI.

Vizard: a hosted AI clipping tool with a template-driven editor. Free tier with a watermark and minute cap; paid plans from ~$30/mo.

Choose Podcli when

  • You publish often and hit Vizard's monthly minute cap.
  • You want exports with no watermark regardless of plan.
  • Your podcast files are sensitive enough that uploading them is off the table.
  • You want a CLI and an MCP server, not just a hosted UI.
  • You want to tune caption styles and AI scoring in code rather than presets.

Choose Vizard when

  • A hosted UI with no install is non-negotiable.
  • You do not want to maintain a local toolchain.
  • You like their template library and stay inside the free tier.

Side by side

The features that change the day-to-day for clip creators.

Feature | Podcli | Vizard
Price | Free, MIT licensed | ~$30-$60/mo paid tiers
Free-tier watermark | None | Yes
Monthly minute cap | None | Plan-dependent
Where files go | Local | Vizard cloud
Caption styles | 4 editable React components | Template library
Face tracking | YuNet + mouth-motion split-screen | Built-in
Scriptable / CLI | Yes | Limited
MCP server | Yes (19 tools) | No
Open source | Yes (GitHub) | Closed

Is there a free Vizard alternative?

Yes. Podcli is MIT licensed and free. There is no per-minute pricing, no plan tiers, and no watermark. The only optional cost is the API key you bring for AI clip suggestion (Claude or OpenAI), and that step is skippable.

Minute caps are the volume problem

Vizard and most hosted clippers price around monthly processing minutes. If your show is 90 minutes a week and you batch a few episodes at once, you hit the cap fast. Podcli has no concept of a minute cap. The work happens on your laptop. The cost is wall-clock time plus any AI API calls you choose to make.

Captions: templates vs editable components

Vizard ships a template library you pick from. Podcli ships four styles (branded, hormozi, karaoke, subtle), each a real React component rendered by Remotion.

For most shows the four defaults cover what you would pick anyway. For shows that want a unique signature look, editing JSX beats picking a preset.
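To make "a real React component" concrete: the heart of a karaoke-style caption is just a question of which transcript word is active at the current frame. This is an illustrative sketch, not Podcli's actual component code; the `Word` shape and function name are assumptions, and word timings would come from the Whisper transcript.

```typescript
// Hypothetical sketch of karaoke-caption timing (not Podcli's real code).
// Word timings come from the transcript; fps matches the Remotion composition.
type Word = { text: string; start: number; end: number }; // seconds

// Return the index of the word to highlight at a given video frame,
// or -1 when no word is active (a pause).
function activeWordIndex(words: Word[], frame: number, fps: number): number {
  const t = frame / fps;
  return words.findIndex((w) => t >= w.start && t < w.end);
}
```

In a real Remotion component you would get the frame from `useCurrentFrame()` and render the active word in a highlight color. That is the kind of decision you can change in JSX but not in a hosted template picker.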

Scripting and AI agent integration

Vizard does not expose an MCP server. Podcli does. An AI coding agent (Claude Code, Codex, Cursor) can run the whole pipeline for you, including titles and descriptions, while you do other work. For people already living inside an AI coding tool, this changes the workflow.
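Under the hood, MCP tool calls are plain JSON-RPC 2.0 messages, which is why any MCP-capable agent can drive the pipeline. A minimal sketch of building such a request follows; the tool name `generate_clips` is hypothetical (an agent would discover Podcli's real 19 tool names via `tools/list`).

```typescript
// Build a JSON-RPC 2.0 request for an MCP "tools/call". The envelope
// shape (jsonrpc/id/method/params) follows the MCP spec; the tool name
// used below is a made-up example, not a documented Podcli tool.
function mcpToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): string {
  return JSON.stringify({
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  });
}

const req = mcpToolCall(1, "generate_clips", { input: "episode42.mp4" });
```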

Moving from Vizard to Podcli

The honest version. Steps in the order you'd actually do them.

  1. Install: git clone https://github.com/nmbrthirteen/podcli && cd podcli && ./setup.sh.
  2. Use the web UI or CLI the same way you would use Vizard's dashboard, but pointing at local files.
  3. Pick a caption style. The four built-ins cover the look-and-feel of Vizard's defaults.
  4. Optional: fill in .podcli/knowledge/ with your show identity. This replaces Vizard's brand kit.
  5. Optional: add an API key for Claude or OpenAI to get AI clip scoring.
  6. Run a batch. Output: 1080x1920 MP4s ready to upload.

Questions about switching from Vizard

Direct answers to the searches people run before they decide.

Is Podcli a free Vizard alternative?

Yes. Podcli is free under MIT. There is no paid tier. You only pay for an AI API key if you opt in to AI clip suggestions.

Does Podcli have face tracking like Vizard?

Yes. Podcli uses YuNet for face detection and a per-clip mouth-motion analysis to track the active speaker on split-screen interviews. No diarization is required for 2-person split-screen.
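The intuition behind mouth-motion tracking is simple: on a two-person split screen, the face whose mouth moves more over a window of frames is treated as the active speaker. A toy sketch of that idea (Podcli's real analysis is more involved; the function name and scoring are assumptions):

```typescript
// Illustrative sketch, not Podcli's implementation: pick the active
// speaker from per-frame mouth-openness scores for the two faces.
// We sum frame-to-frame *changes* (motion), not raw openness, so a
// face frozen mid-yawn does not win.
function activeSpeaker(left: number[], right: number[]): "left" | "right" {
  const motion = (xs: number[]) =>
    xs.slice(1).reduce((acc, v, i) => acc + Math.abs(v - xs[i]), 0);
  return motion(left) >= motion(right) ? "left" : "right";
}
```

This is also why no speaker diarization is needed for the 2-person split-screen case: the video alone carries enough signal.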

Can I batch-process multiple episodes with Podcli?

Yes. The CLI and MCP server both support batch processing. There is no monthly cap because nothing runs in the cloud.

What languages does Podcli support?

Whisper supports 99 languages including Spanish, Portuguese, French, German, Mandarin, Hindi, and more. Captions and transcripts work in any of them.

Do I need a GPU to run Podcli?

No. Hardware encoders are used when available (VideoToolbox on Mac, NVENC on NVIDIA GPUs, VAAPI on Linux), but the CPU fallback works on any modern laptop. Whisper transcription is the only step that benefits from a GPU, and even there CPU works fine.
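The fallback order described above can be sketched as a small selection function. The ffmpeg encoder names (`h264_videotoolbox`, `h264_nvenc`, `h264_vaapi`, `libx264`) are real; the selection logic here is illustrative, not Podcli's actual detection code.

```typescript
// Illustrative encoder selection following the FAQ's fallback order:
// platform hardware encoder first, software x264 last.
function pickEncoder(platform: string, hasNvidia: boolean): string {
  if (platform === "darwin") return "h264_videotoolbox"; // macOS VideoToolbox
  if (hasNvidia) return "h264_nvenc"; // NVIDIA NVENC
  if (platform === "linux") return "h264_vaapi"; // Linux VAAPI
  return "libx264"; // CPU fallback, works everywhere
}
```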

Try Podcli in 30 seconds

Open source, MIT, no signup, no watermark, no upload. Clone and run.

$ git clone https://github.com/nmbrthirteen/podcli.git
$ cd podcli
$ ./setup.sh