What is Sulphur 2
Sulphur 2 is an uncensored AI video generator: an open-weights, community-released video model fine-tuned from LTX 2.3 and designed for local GPU use.
How to use Sulphur 2
The recommended deployment path for Sulphur 2 is using ComfyUI with the native LTX Video nodes. The release includes ready-to-import workflows.
- Choose a weight format: Select `sulphur_dev_bf16.safetensors` for maximum quality (requires 24-32 GB VRAM) or `sulphur_dev_fp8mixed.safetensors` for tighter memory budgets.
- Optionally stack the distill LoRA: Combine the dev base with `sulphur_distil_bf16.safetensors` or `sulphur_lora_rank_768.safetensors` for faster sampling, accepting potential trade-offs in fidelity.
- Use the bundled ComfyUI workflows: Import the provided JSON workflows (e.g., `ltx23_t2v base`, `ltx23_i2v distilled`) into ComfyUI.
- Add the local prompt enhancer: Download the prompt enhancer GGUF and mmproj files and set them up in LM Studio for on-device prompt rewriting.
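As a rough sketch of where the release files would land in a default ComfyUI install: the folder mapping below (`models/checkpoints` for base weights, `models/loras` for LoRAs) reflects ComfyUI's standard layout and is an assumption — some LTX workflows may expect different subfolders, so check the bundled workflow JSON. Only the filenames come from the release; `COMFY_ROOT` is a placeholder for your install path.

```python
from pathlib import Path

# Assumption: default ComfyUI layout; adjust COMFY_ROOT to your install.
COMFY_ROOT = Path.home() / "ComfyUI"

# Filenames are from the Sulphur 2 release; target subfolders are
# ComfyUI's conventional locations, not confirmed by the release notes.
PLACEMENT = {
    "sulphur_dev_bf16.safetensors": "models/checkpoints",
    "sulphur_dev_fp8mixed.safetensors": "models/checkpoints",
    "sulphur_distil_bf16.safetensors": "models/loras",
    "sulphur_lora_rank_768.safetensors": "models/loras",
}

for fname, subdir in PLACEMENT.items():
    target = COMFY_ROOT / subdir
    target.mkdir(parents=True, exist_ok=True)  # create folders if missing
    print(f"{fname} -> {target}")
```

After the files are in place, the imported workflow's loader nodes should list them by filename in their dropdowns.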
Features of Sulphur 2
- Open-weights uncensored AI video generator: Based on LTX 2.3.
- Native Text-to-Video (T2V) and Image-to-Video (I2V): Supports direct video generation from text prompts or images.
- Distill LoRAs: Includes LoRA weights for potentially faster generation.
- ComfyUI workflows: Ships with four official JSON workflows for T2V and I2V.
- Local prompt enhancer: Bundled with a Qwen 3.5 9B model for on-device prompt rewriting via LM Studio.
- Realism-biased motion: Training data filtered to exclude 2D and animation content, focusing on realism.
- Community-licensed: Distributed under the LTX 2 community license.
Use Cases of Sulphur 2
- Generating videos from text descriptions.
- Creating videos from existing images.
- Experimenting with AI video generation on local hardware.
- Integrating AI video generation into existing LTX 2.3 workflows.
FAQ
Is Sulphur 2 fully open source? It is more accurate to call Sulphur 2 an open-weights, community-licensed LTX 2.3 derivative. The upstream LTX 2.3 ships under the ltx-2-community-license-agreement, and Sulphur 2 and its community GGUF re-quants sit inside that same license envelope rather than being an OSI-approved open-source project.
What architecture does Sulphur 2 use? Sulphur 2 is a fine-tune of Lightricks' LTX 2.3, a DiT-based audio-video foundation model published as ltx-2.3-22b-dev and ltx-2.3-22b-distilled. The 'video' part of Sulphur 2 inherits that backbone, while the bundled Qwen 3.5 9B prompt enhancer is a separate model used only for prompt rewriting.
How much VRAM does Sulphur 2 need? 24-32 GB VRAM is the comfortable range using the dev safetensors. 16 GB VRAM is workable with fp8mixed and the distill LoRA. Community GGUF re-quants (Q8 down to Q3) make 8 GB experimentally viable, but expect quality and stability trade-offs at the low end.
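The VRAM tiers above can be sketched as a simple selection helper. The function name and the treatment of the quoted ranges as hard cutoffs are illustrative only; real headroom depends on resolution, frame count, and what else occupies the GPU.

```python
def recommended_weights(vram_gb: float) -> str:
    """Map a VRAM budget to the weight tier described in the FAQ.

    Thresholds mirror the rough ranges quoted above; they are
    guidance, not enforced limits.
    """
    if vram_gb >= 24:
        return "sulphur_dev_bf16.safetensors (full dev weights)"
    if vram_gb >= 16:
        return "sulphur_dev_fp8mixed.safetensors + distill LoRA"
    if vram_gb >= 8:
        return "community GGUF re-quant (Q8 down to Q3, experimental)"
    return "below the documented minimum"

print(recommended_weights(24))
```

For example, a 16 GB card would be pointed at the fp8mixed base plus the distill LoRA, while an 8 GB card falls into the experimental GGUF range with the quality caveats noted above.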
How does Sulphur 2 differ from Kling, Runway and Sora? Kling and Runway are hosted SaaS creative studios with credit-based pricing; Sora 2 is being wound down with the OpenAI Videos API on a documented retirement path. Sulphur 2's distinct value is local, modifiable open weights with ComfyUI workflows and LoRA support, not a polished hosted UX or vendor SLA.
Is sulphur2.org the official Sulphur 2 site? Public release materials only confirm the Hugging Face repository and the SulphurAI Discord as official channels. sulphur2.org and similar SaaS frontends are not confirmed by the release trail as official, so treat them as third-party wrappers unless that changes.
What does 'uncensored' actually mean here? Per the contributor, training filtered only illegal content and 2D animation, with no broader safety filter applied beyond that scope. 'Uncensored' in Sulphur 2's framing is a training-filter description, not a license, redistribution, or warranty statement.