Hello all, I’d like to formally introduce https://samsar.one, which is now in public alpha.
It has two workflow paths for video creation:
Bottom-up: Create videos by stitching them together one scene at a time in the Studio creator.
Top-down: Create an entire video by entering a prompt and rendering it in the VidGPT creator, then post-process in Studio afterwards.
VidGPT (and its API) is a full-stack video/movie creation agent (including narrative, lip sync, and sound effects) with Studio post-processing capabilities.
It uses GPT-4.5 as the inference model by default and can be configured to use other OpenAI models from settings.
It supports pretty much all publicly available SoTA generative media models for rendering, and all OpenAI models for the assistant and inference roles.
It can be used for a variety of purposes and use cases, including educational content, on-demand content, etc.
Docs here.
Please check it out and let me know what you think.
Noted and thanks for adding the tag.
The agent’s inference model is based on the GPT family of models. The recommended default is GPT-4.5.
We also use dedicated models for specific parts of the pipeline (e.g., o3-mini for the vision pipeline).
Apart from that, the app:
Uses the OpenAI Moderation API to prefilter all user prompts.
Provides DALL·E 3 and DALL·E 3 HD as options for image models.
Provides OpenAI TTS voices as options for TTS models.
Provides the OpenAI family of models as options for the assistant model.
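To make the prefiltering step concrete, here is a minimal sketch of how a moderation gate can consume the Moderation API’s response shape ({"results": [{"flagged": bool, ...}]}). The helper name and wiring are illustrative, not our actual pipeline code; in production the payload would come from the openai SDK’s client.moderations.create(input=prompt) call.

```python
def is_prompt_allowed(moderation_response: dict) -> bool:
    """Return True only if no result in the moderation response is flagged.

    Expects a dict mirroring the OpenAI Moderation API's JSON shape:
    {"results": [{"flagged": bool, ...}, ...]}
    """
    results = moderation_response.get("results", [])
    return not any(r.get("flagged", False) for r in results)


# Example payloads mirroring the API's documented response shape:
clean = {"results": [{"flagged": False}]}
blocked = {"results": [{"flagged": True}]}

print(is_prompt_allowed(clean))    # True  -> prompt proceeds to rendering
print(is_prompt_allowed(blocked))  # False -> prompt is rejected up front
```

Gating on the aggregate flagged field keeps the filter conservative: any flagged result rejects the whole prompt before it reaches the rendering models.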
We try to keep the tech stack as minimal as possible and almost never host our own models or infra.
We are currently focused on building distribution mechanisms for delivering niche, high-quality content, as well as on plugging into AI-media-only apps and other agentic pipelines.