Trust & Transparency
Last updated: March 2026
Our Commitment
At Ophraxx AI, we believe trust is earned through transparency, accountability, and consistent action. We are committed to building AI systems that are safe, reliable, and aligned with user values. Everything we describe on this page—models, safety layers, rate limiting, moderation, and data handling—is built and operated by us. We do not outsource core safety or policy enforcement to unnamed or unaccountable parties. This applies to both our conversational bot (used in servers and connected spaces) and our website and web applications (accounts, profiles, settings, security).
How Ophraxx AI Works
Ophraxx Web Core & Model Network
All models in our platform run on the Ophraxx Web Core, our proprietary infrastructure optimized for fast, reliable, and intelligent responses. The Web Core powers four subscription tiers, each representing a different tuning profile and capability level, plus dedicated safety and verification models that run independently of tier:
- OW-F1 (Free): Entry-level model for casual chat, quick questions, and basic coding assistance — approximately 500 messages per month.
- OW-B12 (Basic): General-purpose model for everyday tasks, code generation, debugging, and file uploads with text-to-speech — approximately 1,000 messages per month.
- OW-U45 (Ultra): Advanced model with long-term memory, priority processing, and extended context for complex reasoning and creative projects — 10,000+ messages per month.
- OW-P88 (Pro): Full-power model with the highest priority processing, large file and dataset support, API access, and collaboration tools — 50,000+ messages per month.
- Safety Models: Dedicated in-house content moderation and threat detection systems run on every request regardless of tier.
- Verification Models: Fast fact-checking and accuracy validation applied to every response before delivery.
Intelligent Model Routing
We automatically route your queries to the most appropriate model tier based on your subscription and the complexity of the request, ensuring optimal performance and resource efficiency.
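For illustration only, the routing decision can be pictured as in the minimal sketch below. The tier names come from this page, but the heuristic, thresholds, and function name are hypothetical assumptions, not a description of our production router.

```python
# Minimal sketch, assuming a simple complexity heuristic; the routing rule
# and function below are hypothetical, not our production logic.
TIER_ORDER = ["OW-F1", "OW-B12", "OW-U45", "OW-P88"]

def route_query(subscription_tier: str, prompt: str) -> str:
    """Pick a model tier from the user's subscription and a rough
    complexity estimate of the prompt."""
    allowed = TIER_ORDER[: TIER_ORDER.index(subscription_tier) + 1]
    # Crude complexity proxy: long prompts or reasoning-heavy keywords go to
    # the strongest tier the subscription allows; simple chat stays on OW-F1.
    markers = ("analyze", "refactor", "dataset", "prove", "step by step")
    is_complex = len(prompt) > 2000 or any(m in prompt.lower() for m in markers)
    return allowed[-1] if is_complex else allowed[0]

print(route_query("OW-U45", "hi, quick question"))           # -> OW-F1
print(route_query("OW-U45", "please analyze this dataset"))  # -> OW-U45
```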
Multi-Layer Safety System
Layer 1: Input Validation
- Real-time pattern matching for known threats
- SQL injection and script injection detection
- Prompt injection and jailbreak prevention
- PII detection and automatic redaction
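As a rough illustration of the checks listed above, Layer 1 can be pictured as a screening-and-redaction pass. The patterns and function below are simplified placeholders under that assumption, not our actual rule set.

```python
import re

# Minimal sketch, assuming regex-based screening; these patterns are
# simplified placeholders, not our production rule set.
THREAT_PATTERNS = {
    "sql_injection": re.compile(r"(?i)\b(union\s+select|drop\s+table)\b"),
    "script_injection": re.compile(r"(?i)<\s*script\b"),
    "prompt_injection": re.compile(r"(?i)ignore\s+(all\s+)?previous\s+instructions"),
}
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def validate_input(text: str) -> tuple[str, list[str]]:
    """Return the text with PII redacted and a list of detected threat types."""
    threats = [name for name, pattern in THREAT_PATTERNS.items() if pattern.search(text)]
    for name, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {name.upper()}]", text)
    return text, threats
```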
Layer 2: AI Safety Check
- Dedicated safeguard model analyzes all inputs
- 8-category safety classification (S1-S8)
- Context-aware threat assessment
- Distinguishes harmful requests from legitimate discussions
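Conceptually, the safeguard model scores each request against the S1-S8 categories and blocks it when a score crosses a threshold. In the sketch below the labels mirror this page, but the classifier interface, threshold, and verdict type are hypothetical assumptions for illustration.

```python
from dataclasses import dataclass

# Minimal sketch; S1-S8 labels mirror this page, but the classifier
# interface, threshold, and verdict type are hypothetical.
SAFETY_CATEGORIES = {
    "S1": "Violence & Harm", "S2": "Weapons", "S3": "Drugs", "S4": "Self-Harm",
    "S5": "Sexual Content", "S6": "Hate Speech", "S7": "Illegal Activities",
    "S8": "Prompt Injection",
}

@dataclass
class SafetyVerdict:
    allowed: bool
    category: str | None = None
    score: float = 0.0

def check_input(text: str, classify, threshold: float = 0.5) -> SafetyVerdict:
    """`classify` stands in for the safeguard model: it maps text to a
    {category: probability} dict. A context-aware model can score the same
    words differently depending on intent (discussing a topic vs. asking
    for operational instructions)."""
    scores = classify(text)  # e.g. {"S2": 0.91, "S7": 0.12, ...}
    category, score = max(scores.items(), key=lambda kv: kv[1])
    if score >= threshold:
        label = f"{category} ({SAFETY_CATEGORIES[category]})"
        return SafetyVerdict(allowed=False, category=label, score=score)
    return SafetyVerdict(allowed=True)
```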
Layer 3: Output Validation
- Scans AI responses before delivery
- Blocks harmful instructions and dangerous content
- Detects policy violations in generated text
- Ensures compliance with safety standards
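The output side applies the same kind of screening to generated text before it is delivered. In this sketch the blocklist entries are placeholders and `classify` is the same hypothetical moderation interface used in the Layer 2 sketch.

```python
import re

# Minimal sketch; the blocklist is a placeholder and `classify` reuses the
# hypothetical {category: probability} interface from the Layer 2 sketch.
POLICY_PATTERNS = [
    re.compile(r"(?i)here is how to synthesi[sz]e"),    # placeholder example
    re.compile(r"(?i)\bssn[:\s]*\d{3}-\d{2}-\d{4}"),    # leaked identifiers
]

def validate_output(response: str, classify, threshold: float = 0.5) -> bool:
    """Return True only if the generated response may be delivered."""
    if any(pattern.search(response) for pattern in POLICY_PATTERNS):
        return False
    scores = classify(response)  # moderation model scores the output text
    return max(scores.values(), default=0.0) < threshold
```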
Layer 4: Fact-Checking
- Automated verification of factual claims
- Identifies dates, statistics, and historical facts
- Flags inaccurate or misleading information
- Rejects responses with detected errors
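A simplified way to picture this layer: extract claim-like spans (dates, statistics), verify each, and reject the response if any check fails. The claim extraction below is a crude regex pass and `verify_claim` stands in for the verification models; both are illustrative assumptions, not the actual pipeline.

```python
import re

# Minimal sketch; claim extraction is a crude regex pass and `verify_claim`
# stands in for the verification models, which are not shown here.
CLAIM_PATTERNS = [
    re.compile(r"\b(in|since|by)\s+(1[0-9]{3}|20[0-9]{2})\b", re.I),      # dates
    re.compile(r"\b\d+(\.\d+)?\s*(%|percent|million|billion)", re.I),     # statistics
]

def fact_check(response: str, verify_claim) -> bool:
    """Return True if every detected factual claim passes verification;
    a single failed claim causes the response to be rejected."""
    claims = [m.group(0) for pattern in CLAIM_PATTERNS for m in pattern.finditer(response)]
    return all(verify_claim(claim, context=response) for claim in claims)
```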
Adaptive Safety Profiles
We maintain dynamic safety profiles for each user:
- Trust Levels: New → Trusted → Suspicious → Blocked
- Risk Scoring: Increases with violations, decreases with good behavior
- Automatic Escalation: Repeated violations trigger progressive restrictions
- Rehabilitation: Risk scores naturally decay over time with positive behavior
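One way to picture these profiles is a per-user risk score that rises with violations and decays over time. The level names above come from this page; the thresholds, decay rate, and class below are hypothetical assumptions for illustration.

```python
import time
from dataclasses import dataclass, field

# Minimal sketch; the level names come from this page, but the thresholds,
# half-life, and scoring are hypothetical.
@dataclass
class SafetyProfile:
    risk: float = 0.0
    last_update: float = field(default_factory=time.time)

    def _decay(self, half_life_days: float = 30.0) -> None:
        # "Rehabilitation": risk decays exponentially as time passes
        # without new violations.
        elapsed_days = (time.time() - self.last_update) / 86400
        self.risk *= 0.5 ** (elapsed_days / half_life_days)
        self.last_update = time.time()

    def record_violation(self, severity: float = 1.0) -> None:
        self._decay()
        self.risk += severity  # repeated violations escalate the score

    def trust_level(self, account_age_days: float) -> str:
        self._decay()
        if self.risk >= 10:
            return "Blocked"
        if self.risk >= 3:
            return "Suspicious"
        return "Trusted" if account_age_days >= 30 and self.risk < 1 else "New"
```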
Content Moderation Categories
We actively moderate content across multiple categories:
- S1 - Violence & Harm: Instructions for violence, injury, or harm to people
- S2 - Weapons: Bomb-making, explosives, illegal weapon modifications
- S3 - Drugs: Controlled substance synthesis and manufacturing
- S4 - Self-Harm: Suicide methods and self-injury instructions (we respond with crisis resources instead)
- S5 - Sexual Content: Absolute prohibition on CSAM, strict adult content policies
- S6 - Hate Speech: Slurs, discrimination, dehumanizing language
- S7 - Illegal Activities: Fraud, hacking, doxxing, and platform ToS violations
- S8 - Prompt Injection: Jailbreaks, system manipulation, context attacks
Rate Limiting & Fair Use
To ensure service availability and prevent abuse, limits are applied per subscription tier across the Ophraxx Web Core:
- OW-F1 (Free): ~500 messages per month
- OW-B12 (Basic): ~1,000 messages per month
- OW-U45 (Ultra): 10,000+ messages per month, priority processing
- OW-P88 (Pro): 50,000+ messages per month, highest priority processing
- Rate Limiting: 1 request per 2 seconds per user (all tiers)
- Spam Protection: Automatic detection and temporary blocks (all tiers)
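The per-user throttle (one request every two seconds) can be sketched as below; the class and its internals are illustrative assumptions, and the monthly quotas and spam detection are enforced separately and not shown.

```python
import time

# Minimal sketch of the per-user throttle (one request every two seconds);
# the class and its internals are illustrative. Monthly message quotas and
# spam detection are enforced separately and are not shown.
class RequestThrottle:
    def __init__(self, min_interval_s: float = 2.0) -> None:
        self.min_interval_s = min_interval_s
        self._last_request: dict[str, float] = {}

    def allow(self, user_id: str) -> bool:
        now = time.monotonic()
        last = self._last_request.get(user_id)
        if last is not None and now - last < self.min_interval_s:
            return False  # too soon: the request is rejected or queued
        self._last_request[user_id] = now
        return True
```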
Data Handling Transparency
We distinguish between (1) bot and conversational data and (2) web and account data. Both remain entirely under our control.
Bot and conversational service
- What we store: User and server identifiers (for functionality and rate limits), usage counts (e.g. messages per user per day, per server per month), server configuration (channels, toggles), first-use flags, and safety violation metadata (category and timestamp, not full message content). Conversation context exists only in memory for a short session (e.g. 15-minute TTL) and is not written to a persistent database.
- What we don't store: Permanent message or conversation archives, PII in responses (we redact before delivery), or credentials. When a session expires or the process restarts, in-memory conversation history is discarded.
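To make the session model above concrete, the sketch below shows conversation context held only in memory with a 15-minute TTL; the class and method names are illustrative assumptions, not our actual implementation.

```python
import time

# Minimal sketch of session-scoped conversation context with a 15-minute TTL;
# names are illustrative. Nothing here touches a database, so expired or
# restarted sessions simply lose their history.
class ConversationCache:
    def __init__(self, ttl_s: float = 15 * 60) -> None:
        self.ttl_s = ttl_s
        self._sessions: dict[str, tuple[float, list[str]]] = {}

    def append(self, session_id: str, message: str) -> None:
        _, history = self._sessions.get(session_id, (0.0, []))
        self._sessions[session_id] = (time.monotonic(), history + [message])

    def get(self, session_id: str) -> list[str]:
        entry = self._sessions.get(session_id)
        if entry is None or time.monotonic() - entry[0] > self.ttl_s:
            self._sessions.pop(session_id, None)  # expired context is discarded
            return []
        return entry[1]
```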
Website and web application
- What we store: Account and profile data (email, username, display name, bio, preferences, settings), authentication and session data (hashed passwords, 2FA secrets, session tokens), and login/session activity for security and abuse prevention. Cookies and similar technologies are used as described in our Cookie Policy.
- What we don't do: We do not sell your data or use it for advertising. We do not share your data with data brokers or third-party advertisers.
Continuous Improvement
We are committed to ongoing enhancement:
- Feedback Integration: User feedback directly improves our systems
- Regular Updates: Safety patterns updated as new threats emerge
- Model Upgrades: Continuous integration of improved AI models
- Security Audits: Regular reviews of safety and security measures
Accountability
We hold ourselves accountable through:
- Comprehensive Logging: All security events are logged and reviewable
- Moderation Transparency: Clear communication of policy violations
- Appeal Process: Users may contest restrictions through our support channel; we review appeals in accordance with our policies.
- Public Documentation: Open documentation of our safety systems and policies
Model Evaluation
We evaluate models for accuracy, safety, and bias using automated tests, human review, and targeted red-team exercises. Findings inform model updates and policy improvements.
Policy Updates
We review and update safety policies as threats evolve. Changes are published on our policy pages and may affect access to the service or how it behaves.
Limitations & Disclaimers
We are transparent about our limitations:
- AI systems can make mistakes or generate inaccurate information
- No safety system is 100% perfect; edge cases may occur
- AI responses should not be considered professional advice
- Service availability may be affected by infrastructure or operational factors
Contact & Support
We value open communication. For questions, concerns, or feedback about our trust and transparency practices, please reach out through our designated support channel. Our systems are built and operated by us—no outsourced safety or moderation.