AI-Generated Content Policy

Last updated: March 25, 2026

This policy sets out how Instavar expects AI-generated or materially AI-altered content to be reviewed, disclosed, and handled. It operates alongside our Terms of Service, Acceptable Use Policy, and Report Abuse process.

1. Scope

This policy applies to content created, edited, rendered, reviewed, exported, or published using Instavar Studio, including video, captions, thumbnails, voiceovers, scripts, and related synthetic media.

2. How We Classify Content

3. Disclosure Expectations

Users are responsible for any disclosure obligations that apply to synthetic media under law, platform policy, contract, or ordinary user expectations. At a minimum, do not publish materially AI-generated or AI-altered content without clear disclosure where any of the following are true:

4. Provenance And Label Integrity

Instavar may add or preserve labels, metadata, watermarks, or other provenance signals for generated content. You may not remove, conceal, or falsify those signals where they are required by law, policy, or our platform rules.

5. Prohibited Synthetic-Media Uses

You may not use Instavar to create or distribute synthetic media that:

6. Review Before Publishing

You are responsible for reviewing generated outputs before publication. Do not assume generated text, voice, visuals, or claims are accurate, lawful, or fit for publication without human review.

7. Reporting Concerns

To report harmful synthetic media, deepfakes, impersonation, or other abuse, email abuse@instavar.com or use our Report Abuse page. For broader legal or policy questions, contact legal@instavar.com.