JSON-to-Video

API service that assembles videos from a JSON spec — image clips, audio tracks, captions, transitions, fonts, all declared as data and rendered server-side. The wiki’s canonical video assembly primitive for the no-code stack: where Remotion is React-components → MP4 for the developer stack, JSON-to-Video is HTTP-POST → MP4 for the n8n stack.

Why it shows up here

JSON-to-Video is the load-bearing API in two of the four n8n-content-pipeline sources in this batch.

In both cases the n8n workflow generates a JSON template (a scenes array, each scene with voice_text, image_prompt, duration, etc.), POSTs it to JSON-to-Video, waits for the render to finish, and pulls down the resulting MP4.
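The POST-and-poll loop the workflow performs can be sketched outside n8n as well. This is a minimal sketch, not the service's real schema: the endpoint URL, the `x-api-key` header, the `id`/`url` response fields, and the `resolution` key are all assumptions to be checked against the JSON-to-Video API docs; only the scenes shape (voice_text, image_prompt, duration) comes from the workflow described above.

```python
import json
import time
import urllib.request

API_URL = "https://api.json2video.com/v2/movies"  # assumed endpoint
API_KEY = "YOUR_API_KEY"                          # placeholder


def build_spec(scenes):
    """Wrap per-scene data (voice_text, image_prompt, duration) in a template."""
    return {
        "resolution": "full-hd",  # assumed template-level field
        "scenes": [
            {
                "voice_text": s["voice_text"],
                "image_prompt": s["image_prompt"],
                "duration": s["duration"],
            }
            for s in scenes
        ],
    }


def render(spec, poll_seconds=10):
    """POST the spec, then poll the job until the MP4 URL appears (sketch)."""
    headers = {"x-api-key": API_KEY, "Content-Type": "application/json"}
    req = urllib.request.Request(
        API_URL, data=json.dumps(spec).encode(), headers=headers
    )
    job = json.load(urllib.request.urlopen(req))
    while True:
        # assumed status endpoint: GET {API_URL}/{job id}
        status_req = urllib.request.Request(f"{API_URL}/{job['id']}", headers=headers)
        status = json.load(urllib.request.urlopen(status_req))
        if status.get("url"):
            return status["url"]  # finished MP4
        time.sleep(poll_seconds)
```

In the n8n flow, the same three steps map onto an HTTP Request node (the POST), a Wait node, and a second HTTP Request node that fetches the job status.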

Pricing model

Subscription-based, with credits consumed per rendered video. A free tier exists for testing; cost scales with rendering minutes and model choice (Flux Pro images and ElevenLabs voices are the expensive line items inside the JSON template, not the assembly itself).

Customization surface

The JSON template controls:

  • Scene composition (image + voice + duration)
  • Fonts, colors, desaturation, subtitle styling
  • Transitions and intro clips
  • Background music tracks
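A template fragment touching each of those knobs might look like the sketch below (written as a Python dict for readability; it serializes directly to the JSON the API consumes). Every field name here is an illustrative assumption showing where each customization lives, not the service's actual schema.

```python
# Illustrative template fragment -- field names are assumptions, not the
# real json2video schema; consult the API docs for the actual keys.
template = {
    "font": "Montserrat",                         # fonts
    "subtitle_style": {                           # subtitle styling
        "color": "#FFFFFF",
        "outline": True,
    },
    "desaturation": 0.3,                          # color treatment
    "music": {                                    # background music track
        "src": "https://example.com/bed.mp3",
        "volume": 0.2,
    },
    "scenes": [                                   # scene composition
        {
            "voice_text": "Opening line of the narration.",
            "image_prompt": "moody desaturated city skyline",
            "duration": 6,
            "transition": "fade",                 # transition to next scene
        }
    ],
}
```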

Why it matters

JSON-to-Video closes the loop on no-code AI video generation. Without it, n8n builders would have to wire up FFmpeg or similar themselves, a non-starter for the no-code audience. With it, the entire faceless-content pipeline becomes a single n8n flow.

Sources

See Also