AI Safety & Transparency Statement
Effective Date: February 1, 2026
Last Updated: December 10, 2025

1. System Classification and Scope
In accordance with the Colorado AI Act (SB 24-205) and emerging global standards, VastAdvisor Enterprise is classified as a High-Risk Artificial Intelligence System. This classification applies because the system is designed for use in the financial, lending, and insurance sectors, where its outputs—specifically the generation of marketing strategies, Ideal Customer Profiles (ICPs), and targeted advertising content—may serve as a substantial factor in making consequential decisions regarding consumer access to financial services.
While VastAdvisor Enterprise functions as an intelligent assistant to marketing and compliance professionals, we acknowledge our role as a Developer and our duty to exercise reasonable care to prevent algorithmic discrimination.
2. Management of Algorithmic Discrimination Risks
VastAdvisor employs a "Defense-in-Depth" technical strategy to identify, mitigate, and monitor known or reasonably foreseeable risks of algorithmic discrimination, including proxy discrimination based on protected characteristics (such as race, color, disability, or age).
Our risk management program integrates the following core pillars:
A. The SEC/FINRA Compliance Engine (Technical Guardrails)
To prevent the generation of biased or non-compliant content, all AI outputs are routed through a deterministic Compliance Engine. This layer functions as a mandatory gatekeeper that:
- Audits generated content against strict financial regulations (SEC/FINRA) and anti-discrimination logic.
- Detects and blocks "promissory language" and "proxy discrimination," such as exclusionary geographic targeting (e.g., ZIP code redlining).
- Produces "Redlines" and "Risk Scores" that must be resolved before content can be deployed.
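As an illustrative sketch only (the patterns, scoring values, and function names below are hypothetical and do not represent VastAdvisor's actual rule set), the gatekeeper behavior described above can be modeled as a deterministic rule-based audit that emits redlines and a risk score, and blocks deployment until redlines are resolved:

```python
import re
from dataclasses import dataclass, field

# Hypothetical rule patterns for illustration only; a production engine
# would maintain a much larger, regulator-reviewed rule library.
PROMISSORY = re.compile(r"\b(guaranteed returns?|risk-free|can't lose)\b", re.I)
ZIP_TARGETING = re.compile(r"\bexclude\s+zip\s*codes?\b", re.I)

@dataclass
class ComplianceReport:
    redlines: list = field(default_factory=list)  # flagged passages to resolve
    risk_score: int = 0                           # 0 (clean) .. 10 (blocked)

def audit(content: str) -> ComplianceReport:
    """Deterministically scan content and record each rule violation."""
    report = ComplianceReport()
    for pattern, label in [(PROMISSORY, "promissory language"),
                           (ZIP_TARGETING, "exclusionary geographic targeting")]:
        for match in pattern.finditer(content):
            report.redlines.append((label, match.group(0)))
            report.risk_score = min(10, report.risk_score + 5)
    return report

def deployable(report: ComplianceReport) -> bool:
    # Mandatory gatekeeper: every redline must be resolved before deployment.
    return not report.redlines
```

The key design property is determinism: the same input always yields the same redlines, so an audit trail can reproduce exactly why a piece of content was blocked.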
B. Data Governance and Provenance (The Two-Layer Stack)
We strictly control the data used to train our models to ensure fairness and accuracy. VastAdvisor utilizes a Two-Layer Architecture:
- Base Model: Trained on broad global finance and marketing knowledge.
- Tenant-Specific Adapter: A fine-tuned layer trained exclusively on a client's "Approved Marketing Materials," "Past Compliance Corrections," and "Firm Voice."
- Bias Auditing: We conduct systematic bias audits on training datasets to ensure historical marketing biases are not ingested or replicated by the Tenant-Specific Adapter.
- Data Isolation: Client data is isolated so that one firm's risk profile does not affect another's.
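The isolation guarantee described above can be sketched as follows. This is a structural illustration under stated assumptions (the class and field names are hypothetical, not VastAdvisor's actual codebase): one shared base layer, and a per-tenant adapter that is keyed by tenant ID and only ever ingests that tenant's own approved corpus.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class BaseModel:
    # Shared layer: broad finance/marketing knowledge, identical for all tenants.
    name: str = "vastadvisor-base"  # hypothetical identifier

@dataclass
class TenantAdapter:
    # Fine-tuned per client; sees only this client's approved documents.
    tenant_id: str
    corpus: list = field(default_factory=list)

    def fine_tune(self, documents: list) -> None:
        # Only "Approved Marketing Materials", "Past Compliance Corrections",
        # and "Firm Voice" samples for THIS tenant are ingested.
        self.corpus.extend(documents)

class TwoLayerStack:
    def __init__(self, base: BaseModel):
        self.base = base
        self._adapters: dict = {}

    def adapter_for(self, tenant_id: str) -> TenantAdapter:
        # Data isolation: each tenant gets its own adapter keyed by ID;
        # one firm's data never reaches another firm's adapter.
        return self._adapters.setdefault(tenant_id, TenantAdapter(tenant_id))
```

The point of the structure is that cross-tenant leakage becomes a type-level impossibility rather than a policy: there is no code path by which firm A's corpus reaches firm B's adapter.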
C. Human Oversight and "AI Metadata"
VastAdvisor is designed to augment, not replace, human decision-making. To facilitate meaningful human oversight by our clients (Deployers):
- Explainability: Every AI output generates "AI Reasoning" metadata articulating the logic used to reach a conclusion.
- Confidence Scoring: Outputs include an "AI Confidence Score" (1–10) and specific "AI Concerns" flags to alert human reviewers to potential risks or hallucinations.
- Mandatory Review: We advise all Deployers that content flagged with low confidence or high compliance risk should undergo manual review before publication.
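A minimal sketch of the review-routing rule implied above, assuming a 1–10 confidence scale as stated; the specific thresholds and the function name are assumptions for illustration, not documented product values:

```python
# Hypothetical thresholds -- a Deployer would tune these to its own risk policy.
LOW_CONFIDENCE = 6   # confidence scores at or below this trigger manual review
HIGH_RISK = 5        # compliance risk scores at or above this trigger review

def needs_manual_review(confidence: int, risk_score: int, concerns: list) -> bool:
    """Route output to a human reviewer if any risk signal fires."""
    return (confidence <= LOW_CONFIDENCE
            or risk_score >= HIGH_RISK
            or bool(concerns))  # any "AI Concerns" flag forces review
```

Note the asymmetry: any single signal is sufficient to force review, so a high confidence score cannot override an outstanding compliance flag.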
D. Continuous Monitoring and Rollback
We utilize real-time observability tools (VoltAgent Console) to monitor system performance post-deployment.
- Canary Deployment: Updates to high-risk models are released incrementally to a small subset of traffic before full rollout.
- Auto-Rollback: If our monitoring detects a spike in Compliance Flags or evidence of performance degradation (drift), the system automatically rolls back to the previous safe version to protect consumers.
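The rollback decision rule can be sketched as a sliding-window monitor. This is an illustrative model only: the window size, spike threshold, and version labels are hypothetical, and the VoltAgent Console referenced above is the actual monitoring surface, not this code.

```python
from collections import deque

class RollbackMonitor:
    """Track recent compliance flags; revert to the safe version on a spike."""

    def __init__(self, window: int = 100, flag_rate_limit: float = 0.05):
        self.events = deque(maxlen=window)   # recent outputs: True = flagged
        self.flag_rate_limit = flag_rate_limit
        self.active_version = "v2-canary"    # hypothetical version labels
        self.safe_version = "v1"

    def record(self, compliance_flagged: bool) -> str:
        self.events.append(compliance_flagged)
        rate = sum(self.events) / len(self.events)
        if rate > self.flag_rate_limit:
            # Spike in Compliance Flags: roll back to the previous safe version.
            self.active_version = self.safe_version
        return self.active_version
```

A sliding window is the natural fit here because it reacts to a recent spike (drift after an update) rather than to the lifetime average, which a long-running healthy deployment would otherwise dilute.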
3. Consumer Transparency and Synthetic Media
In compliance with state laws regarding synthetic media and chatbots:
- AI Disclosure: Any interactive conversational interface (e.g., a chatbot) powered by VastAdvisor is configured to clearly disclose to the consumer that they are interacting with an Artificial Intelligence system.
- Provenance: We apply metadata and/or watermarking to AI-generated visual and audio content to verify its synthetic nature and origin.
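A provenance record of the kind described above might look like the following sketch. The field names are illustrative only, loosely modeled on C2PA-style content credentials, and do not represent VastAdvisor's actual metadata schema:

```python
import hashlib
from datetime import datetime, timezone

def provenance_manifest(asset_bytes: bytes, generator: str = "VastAdvisor") -> dict:
    """Build a hypothetical provenance record for an AI-generated asset."""
    return {
        "claim": "ai_generated",                    # discloses synthetic origin
        "generator": generator,
        "created": datetime.now(timezone.utc).isoformat(),
        # Hash binds the manifest to the exact asset bytes it describes.
        "sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }
```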
4. Modifications and Reporting
We are committed to maintaining the accuracy of this statement. We will update this disclosure no later than 90 days following any intentional and substantial modification to VastAdvisor Enterprise that creates a new reasonably foreseeable risk of algorithmic discrimination.
If we discover that our system has caused or is reasonably likely to have caused algorithmic discrimination, we commit to disclosing this to the Attorney General and affected Deployers without unreasonable delay, in accordance with Colorado Revised Statutes § 6-1-1702(5).
Contact Information
For questions regarding this AI Safety & Transparency Statement or to report a concern regarding algorithmic fairness, please contact our AI Governance Committee at:
ai-governance@vastadvisor.ai
