AI & Model Use

What the AI coach can see, what it can't, and how it makes decisions

You are talking to an AI, not a human

AI Coach's coaching assistant is powered by large language models from third-party providers (currently Anthropic and OpenAI). The platform clearly identifies the assistant as AI, never as a human coach. Your assigned human coach is a separate, real person: messages you send them through Messages go to a human, not to the AI.

What the AI sees during a session

When you have a coaching conversation, the AI receives the following (sketched as a data structure after the list):

  • The current message you sent
  • The recent message history of this specific session
  • Your memory entries — facts and themes you have explicitly consented to keep across sessions, or that the system has summarised from prior conversations (these entries stay under your control)
  • Your profile context — role, goals, interests you set during the personalisation flow
  • Your active stream framework and competency dimensions — used to ground the conversation in the coaching model your tenant has chosen
  • Your recent learning + reflection signals — when explicitly bridged into a session (e.g. “I just finished module X”)
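
Taken together, this per-turn input can be pictured as one context object. Below is a minimal sketch in TypeScript; the type and field names are illustrative assumptions, not AI Coach's actual schema:

```typescript
// Hypothetical shape of the context assembled for one AI turn.
// All names here are illustrative, not AI Coach's real types.
interface SessionContext {
  currentMessage: string;             // the message you just sent
  recentHistory: ChatMessage[];       // this session only, never other sessions
  memoryEntries: MemoryEntry[];       // consented, cross-session facts and themes
  profile: UserProfile;               // role, goals, interests from personalisation
  framework: StreamFramework;         // tenant's coaching model + competency dimensions
  learningSignals?: LearningSignal[]; // present only when explicitly bridged in
}

interface ChatMessage { role: 'user' | 'assistant'; content: string; }
interface MemoryEntry { theme: string; summary: string; consentedAt: string; }
interface UserProfile { role: string; goals: string[]; interests: string[]; }
interface StreamFramework { name: string; version: string; dimensions: string[]; }
interface LearningSignal { kind: 'module-completed' | 'reflection'; detail: string; }
```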

What the AI does NOT see

  • Other users' conversations or memory — strict tenant + user isolation
  • Your messages to your human coach (Messages and session-request channels)
  • Your assessment scores or competency progression numbers (those are derived data, not surfaced as AI input)
  • Your account credentials or payment information

Your data is not used to train models

AI Coach sends your messages to AI providers for inference only. We have contractual no-training agreements with our model providers — your conversations, reflections, and memory are processed in real time and not retained by the providers for training. Your data is never used to improve the underlying models.

Boundary detection and safety

Every coaching conversation runs through a boundary-detection layer. If the conversation moves toward areas the AI is not qualified to handle (e.g. crisis, medical, legal), the AI redirects you to a human coach or external resource rather than attempting to advise. This system is conservative by design: when in doubt, it escalates.
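
In practice, a conservative policy means any out-of-scope or low-confidence classification routes to escalation. A minimal sketch, assuming a hypothetical upstream classifier and illustrative category names (this is not AI Coach's actual implementation):

```typescript
// Illustrative boundary check: hypothetical names, not AI Coach's real code.
type BoundaryCategory = 'coaching' | 'crisis' | 'medical' | 'legal' | 'unclear';

interface BoundaryResult {
  category: BoundaryCategory;
  confidence: number; // 0..1, from an assumed upstream classifier
}

const OUT_OF_SCOPE: BoundaryCategory[] = ['crisis', 'medical', 'legal'];
const MIN_CONFIDENCE = 0.8; // illustrative threshold

function shouldEscalate(result: BoundaryResult): boolean {
  // Conservative by design: escalate on any out-of-scope category,
  // on an explicit "unclear" label, or when the classifier is unsure.
  if (OUT_OF_SCOPE.includes(result.category)) return true;
  if (result.category === 'unclear') return true;
  return result.confidence < MIN_CONFIDENCE;
}

// Escalation redirects you to a human coach or an external resource
// rather than letting the AI attempt advice.
```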

Provenance disclosure

Where AI generates content that drives decisions (e.g. competency scores, recommendations on the Journey page), AI Coach surfaces a provenance line — framework name, version, and whether the output is rule-derived or AI-analysed. This means you can always trace why a recommendation appears.
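
You can think of the provenance line as structured metadata attached to each decision-driving output. A sketch under assumed field names:

```typescript
// Illustrative provenance record; field names are assumptions, not the real schema.
interface Provenance {
  frameworkName: string;                      // which framework grounded the output
  frameworkVersion: string;                   // which version produced it
  derivation: 'rule-derived' | 'ai-analysed'; // how the output was produced
}

// Example of what a Journey-page recommendation might carry (values invented):
const example: Provenance = {
  frameworkName: 'Leadership Competency Model',
  frameworkVersion: '2.1',
  derivation: 'ai-analysed',
};
```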

AI errors and recourse

AI output can be wrong. If the assistant says something incorrect, harmful, or off-base, you can flag it via the Report-an-issue option in the user menu, or raise it directly with your human coach via Messages. Your feedback feeds into prompt tuning and boundary-detection improvements.

Tenant administrator oversight

Tenant administrators (your organisation's coaches and admins) configure the coaching modes, competency frameworks, and content sources for AI Coach. They cannot read individual user conversations, memory, or reflections — tenant isolation is enforced at the database layer (RLS) and at the application layer (tenant-scoped queries). Administrators see aggregate analytics and their own coaching threads with participants, nothing more.
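
As a sketch of those two layers, the database side might use a Postgres-style row-level-security policy (shown here as an embedded SQL string) while the application side builds only tenant-scoped queries. All table, column, and setting names are assumptions, not AI Coach's actual schema. The pairing is defence in depth: if one layer has a bug, the other still blocks cross-tenant reads.

```typescript
// Illustrative only: table, column, and session-variable names are assumed.

// Database layer: a Postgres-style RLS policy makes rows outside the caller's
// tenant invisible even if application code forgets a filter.
const rlsPolicy = `
  ALTER TABLE conversations ENABLE ROW LEVEL SECURITY;
  CREATE POLICY tenant_isolation ON conversations
    USING (tenant_id = current_setting('app.tenant_id')::uuid);
`;

// Application layer: every query is tenant-scoped by construction.
async function listConversations(db: Db, tenantId: string, userId: string) {
  return db.query(
    'SELECT id, started_at FROM conversations WHERE tenant_id = $1 AND user_id = $2',
    [tenantId, userId],
  );
}

interface Db {
  query(sql: string, params: unknown[]): Promise<unknown[]>;
}
```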

Last updated: 2026-04-25. For technical detail on encryption and isolation, see the Privacy & Security page.