Your Data and Model Performance

Last updated: March 12, 2026

Overview

This page explains how Ophraxx AI uses data from interactions on the platform—both our conversational bot (in servers and connected spaces) and our website and web applications (account, profile, settings, and any in-product feedback). We describe what we store, what we do not store, and how your data relates to model performance and safety. Ophraxx AI is currently in public beta. Our data practices are designed to be minimal by default: we collect only what is needed to run the service and improve it safely. All systems described here are built and operated by us.

What We Do Not Store

Conversation messages are not written to any persistent storage. When you send a message to Ophraxx AI on our platform, that message is processed in memory to generate a response. It is held in a short-lived session — scoped to your user account within your server — that expires after 15 minutes of inactivity. When the session times out, the conversation history is discarded entirely.

This means we have no long-term record of what you said or what the bot replied. There is no database of conversation logs that could be searched, audited, or accidentally exposed. If the bot restarts, active sessions are cleared and users start fresh on their next message.
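The session model described above can be sketched roughly as follows. This is an illustrative sketch only — the class, field, and key names are hypothetical, not Ophraxx's actual implementation — but it shows the key property: conversation history lives only in process memory, keyed per user per server, and is dropped after 15 minutes of inactivity or on restart.

```python
import time

SESSION_TTL = 15 * 60  # 15 minutes of inactivity, per the policy above


class SessionStore:
    """Ephemeral, in-memory conversation sessions. Nothing is persisted."""

    def __init__(self):
        # (user_id, server_id) -> {"history": [...], "last_seen": float}
        self._sessions = {}

    def append(self, user_id, server_id, message):
        """Record a message in the caller's session and return the history."""
        self._expire_stale()
        key = (user_id, server_id)
        session = self._sessions.setdefault(key, {"history": [], "last_seen": 0.0})
        session["history"].append(message)
        session["last_seen"] = time.monotonic()
        return session["history"]

    def _expire_stale(self):
        """Discard any session idle longer than the TTL — history and all."""
        now = time.monotonic()
        stale = [k for k, s in self._sessions.items()
                 if now - s["last_seen"] > SESSION_TTL]
        for k in stale:
            del self._sessions[k]

    def clear_all(self):
        # A process restart loses all sessions by construction; this method
        # just mirrors that behavior explicitly.
        self._sessions.clear()
```

Because the store is a plain in-process dictionary, there is no file or database to audit or leak: once a session expires or the process restarts, the history is unrecoverable.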

What We Do Store

A small set of operational data is stored to run the service and enforce usage policies:

  • Usage counts

    We track how many AI requests each user makes per month. These counts are used to enforce rate limits based on subscription tier across the Ophraxx Web Core: ~500/month for OW-F1 (Free), ~1,000/month for OW-B12 (Basic), 10,000+/month for OW-U45 (Ultra), and 50,000+/month for OW-P88 (Pro). Counts reset monthly. No message content is stored alongside these counts, only the numeric totals.

  • Server configuration

    When an administrator configures the service, their choices (e.g. channels, toggles, invoke settings) are stored so the system can apply them consistently. This is operational data required for the bot to function. It does not include any message content.

  • First-use flag

    We store a flag per user indicating whether they have received the one-time welcome or onboarding message. This prevents duplicate sends. It contains no message content—only an account identifier and a boolean.

  • Moderation records

    When a safety violation fires — a blocked message, a spam event, a soft-block — a record is kept per user to inform escalation decisions. We track the category and timestamp of recent violations, not the full content of the triggering message. This data informs automated enforcement and is not used for any other purpose.
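The usage-count mechanism above can be sketched as a simple counter check against per-tier caps. The function name and in-memory dictionary are illustrative assumptions (the real service presumably persists the numeric totals); the tier codes and limits are those listed above, and note that no message content accompanies the count.

```python
# Monthly request caps per subscription tier, as listed above.
TIER_LIMITS = {
    "OW-F1": 500,      # Free
    "OW-B12": 1_000,   # Basic
    "OW-U45": 10_000,  # Ultra
    "OW-P88": 50_000,  # Pro
}

# user_id -> requests made this month; only numeric totals, reset monthly.
usage_counts = {}


def check_and_count(user_id: str, tier: str) -> bool:
    """Return True if the request is allowed, incrementing the user's count.

    Returns False when the user has reached their tier's monthly cap.
    """
    limit = TIER_LIMITS[tier]
    used = usage_counts.get(user_id, 0)
    if used >= limit:
        return False  # rate-limited until the monthly reset
    usage_counts[user_id] = used + 1
    return True
```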

How Data Relates to Model Performance

Ophraxx AI runs on our own model infrastructure. The models are not trained in real time from live conversations — each request is processed by a fixed model version, and the models themselves are updated separately through a deliberate development process, not from ongoing user interactions.

We do collect aggregate, non-identifiable signals that help us evaluate model performance over time. These include feedback from the thumbs-up and thumbs-down buttons that appear on every AI response. These signals tell us only whether a response was helpful — they include neither the content of the message nor the content of the response. We use this data to understand where performance needs to improve across different query types, server types, and personality configurations.

Any future use of interaction data for model training or fine-tuning would be subject to a separate policy update and opt-in or opt-out mechanisms before it is implemented.

Safety Systems and Data

Every message that passes through Ophraxx AI is checked by multiple safety layers before a response is generated and before that response is delivered. These checks happen in memory as part of request processing — they do not involve writing the message to a database for review.

The safety pipeline includes a pattern-based pre-screen, an AI safeguard model that classifies content against nine threat categories, an automated fact-checker that screens AI responses for factual accuracy, a second safeguard pass on the output, and a PII redaction step that strips personal identifiers from responses before they are sent. All of this runs per-request without creating a persistent record of the interaction.
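As a minimal sketch, the per-request flow above might look like the following. Every function and pattern here is an illustrative stand-in, not an Ophraxx component: the real safeguard model classifies against nine threat categories and the real fact-checker is a secondary AI model, while these stubs use trivial placeholder logic. The point is the shape of the pipeline — every check runs in memory on the request path, and nothing writes the message to storage.

```python
import re

BLOCKED_PATTERNS = [re.compile(r"(?i)\bexample-banned-term\b")]  # illustrative only


def matches_blocked_patterns(text: str) -> bool:
    """Stand-in for the pattern-based pre-screen."""
    return any(p.search(text) for p in BLOCKED_PATTERNS)


def classify_threat(text: str):
    """Stand-in for the AI safeguard model (nine categories in production)."""
    return "spam" if "buy now!!!" in text.lower() else None


def fact_check(text: str) -> str:
    """Stand-in for the fact-checking model; real verdicts are
    'approved', 'uncertain', or 'incorrect'."""
    return "approved"


def redact_pii(text: str) -> str:
    """Minimal placeholder for the outbound PII redaction step."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email redacted]", text)


def handle_request(message: str, generate):
    """Per-request pipeline: all checks run in memory; the message is
    never written to persistent storage. Returns None when blocked."""
    if matches_blocked_patterns(message):      # 1. pattern-based pre-screen
        return None
    if classify_threat(message) is not None:   # 2. safeguard pass on input
        return None
    response = generate(message)               # model inference (fixed version)
    if fact_check(response) == "incorrect":    # 3. automated fact-checker
        return None
    if classify_threat(response) is not None:  # 4. safeguard pass on output
        return None
    return redact_pii(response)                # 5. PII redaction before send
```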

When a safety violation fires and results in a soft-block or moderation action, the category and timestamp are stored (as described above under moderation records). The content of the blocked message is not retained.

PII in AI Outputs

As part of our output safety pipeline, every AI response passes through a PII detection and redaction step before it is sent. This step scans the response for personal identifiers — phone numbers, email addresses, and similar data — and removes them before delivery. This is a safeguard against the model inadvertently reproducing personal data that appeared in the conversation context. The redaction happens on the outbound response only; it does not affect what users can send in their own messages.
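A redaction step of this kind can be sketched with simple pattern matching. These two patterns are illustrative assumptions — a production redactor would cover more identifier types and edge cases — but they show the mechanism: the outbound text is scanned and matches are replaced before delivery.

```python
import re

# Illustrative patterns only; a production redactor would be more thorough.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def redact_response(text: str) -> str:
    """Strip personal identifiers from an outbound response before delivery."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} redacted]", text)
    return text
```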

Model Evaluation and Accuracy

Ophraxx AI includes an automated fact-checking layer that evaluates every AI response before it is sent. A secondary AI model reviews the response against verifiable claims — dates, statistics, scientific facts, historical events, technical details — and returns one of three verdicts: approved, uncertain, or incorrect. Responses flagged as incorrect are blocked and not delivered. This layer operates in real time on every request and does not require storing conversation content.
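The three-verdict scheme above implies a simple delivery rule. Note one assumption in this sketch: the policy states only that "incorrect" responses are blocked, and does not say whether "uncertain" responses are delivered, annotated, or retried — the code below assumes they pass through.

```python
from enum import Enum


class Verdict(Enum):
    """The three verdicts returned by the fact-checking layer."""
    APPROVED = "approved"
    UNCERTAIN = "uncertain"
    INCORRECT = "incorrect"


def should_deliver(verdict: Verdict) -> bool:
    """Block only 'incorrect' responses; the handling of 'uncertain'
    is an assumption here, not stated in the policy."""
    return verdict is not Verdict.INCORRECT
```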

We also track thumbs-up and thumbs-down feedback on responses (described above). Over time, patterns in this aggregate signal help identify areas where the models perform well and where improvement is needed. This feedback does not link back to individual conversations or users — it is associated with the message ID only, not with user identity or message content beyond what the feedback signal itself represents.
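The feedback data model above — signals keyed by message ID only, with no user identity or message content — can be sketched as a bare counter. The function name and scoring convention (+1/-1) are illustrative assumptions.

```python
from collections import Counter

# message_id -> net score; no user identity or message content is stored.
feedback = Counter()


def record_feedback(message_id: str, thumbs_up: bool) -> None:
    """Add one thumbs-up (+1) or thumbs-down (-1) signal for a message."""
    feedback[message_id] += 1 if thumbs_up else -1
```

Because the key is the message ID alone, aggregating this table reveals where responses perform well or poorly without ever linking back to a user or a conversation.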

Web Account and Profile Data

On the website and web applications we store your account and profile data (e.g. email, username, display name, bio, preferences, settings) so we can provide the Service and apply your choices. This data is used only to operate the Service and as described in our Privacy Policy. We do not use your profile or account data for advertising or sell it to data brokers. If you delete your account or request deletion of your data, we will process that in accordance with our Privacy Policy and applicable law.

Your Choices

Because we do not store conversation content in persistent form, there is no conversation history to request, correct, or delete once your session expires. If you have concerns about the operational data we do store (usage counts, configuration, moderation records, or web account and profile data), you can contact us through our designated support channel to request access, correction, or deletion where applicable.

You may opt out of having your service interactions used for future model improvement by contacting us through our support channel. Opting out does not affect core service functionality—rate limits, safety systems, and AI responses operate the same regardless of opt-out status. Our models and improvement pipelines are built and operated by us.