AI trust model

Siit’s AI is designed to be useful and governable. Our trust model gives admins fine‑grained control over what AI can read, propose, and execute, backed by least‑privilege permissions, approvals, and full audit trails. It applies to both agents: AI Assist and the IT Agent.

What “trust model” means in Siit

  • Identity and scope

    • AI acts as a workspace agent with role‑based permissions. It only sees data the requester is allowed to see (service audience, article visibility, directory rules).

    • For third‑party systems, the IT Agent uses dedicated, scoped service accounts or OAuth scopes—never broad admin tokens by default.

  • Knowledge boundaries

    • You choose the knowledge sources (Siit Knowledge, specific collections, selected URLs/docs).

    • Answers cite sources. If no trustworthy source exists, the agent defaults to “I don’t know” and can open/route a request.
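The "cite or decline" behavior above can be sketched as a small grounding check. This is a hypothetical illustration, not Siit's implementation: the `answer_question` function and the toy substring matcher are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    citations: list  # titles of the sources the answer is grounded on

def answer_question(question: str, sources: list) -> Answer:
    # Toy matcher: keep only enabled sources whose content mentions the topic
    matches = [s for s in sources if question.lower() in s["content"].lower()]
    if not matches:
        # No trustworthy source: default to "I don't know"
        # (the agent can then open or route a request instead of guessing)
        return Answer(text="I don't know", citations=[])
    return Answer(text=matches[0]["content"],
                  citations=[m["title"] for m in matches])
```

The key property is that the citation list is never empty for a substantive answer: either the response is grounded in an enabled source, or the agent declines.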

  • Action governance

    • All actions are packaged as playbooks (for example, “Reset Okta MFA,” “Add to App X group”).

    • Per‑playbook autonomy levels:

      • Suggest only: propose steps to an agent.

      • Execute with approval: require human approval (assignee, team lead, manager).

      • Auto‑execute: run automatically when conditions are met.

    • Extra controls: allow/deny lists, parameter whitelists, requester eligibility checks (employment status, manager, location, device posture).
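The gating order described above (eligibility and parameter checks first, then the playbook's autonomy level) can be sketched as a single decision function. All names here (`decide`, the dict fields, the return strings) are illustrative assumptions, not Siit's API.

```python
from enum import Enum

class Autonomy(Enum):
    SUGGEST = "suggest"
    EXECUTE_WITH_APPROVAL = "execute_with_approval"
    AUTO_EXECUTE = "auto_execute"

def decide(playbook: dict, requester: dict, params: dict) -> str:
    """Gate an action on eligibility and parameters before its autonomy level."""
    if requester.get("status") != "active":                  # requester eligibility
        return "deny"
    if not set(params) <= set(playbook["allowed_params"]):   # parameter whitelist
        return "deny"
    level = playbook["autonomy"]
    if level is Autonomy.SUGGEST:
        return "propose_steps_to_agent"
    if level is Autonomy.EXECUTE_WITH_APPROVAL:
        return "request_human_approval"
    return "execute"
```

Note that deny rules run before the autonomy level is even consulted: an ineligible requester is refused even on an auto‑execute playbook.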

  • Human in the loop

    • Any interaction can be escalated to a human. Approvals are private and auditable. Agents can always override or roll back via a paired playbook.

  • Observability and audit

    • Every answer, decision, and external action is logged in the request timeline with who/what/when, inputs, outputs, and the playbook used.

    • Shadow mode: test playbooks with “simulate” to log what would have happened without making changes.
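Shadow mode and the timeline log can be pictured as one code path: the same who/what/when entry is written whether the run is real or simulated, only the execution step differs. This sketch is an assumption for illustration; `run_playbook` and the entry fields are not Siit's actual schema.

```python
import datetime

def run_playbook(playbook: dict, inputs: dict, actor: str,
                 simulate: bool = False) -> dict:
    """Execute (or simulate) a playbook and return its timeline log entry."""
    entry = {
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "who": actor,
        "playbook": playbook["name"],
        "inputs": inputs,
        "simulated": simulate,
    }
    if simulate:
        # Shadow mode: record what would have happened, change nothing
        entry["output"] = f"simulated: would run {playbook['name']}"
    else:
        entry["output"] = playbook["action"](**inputs)
    return entry
```

Because simulated and real runs share one logging path, shadow‑mode logs are directly comparable to production logs when you review a playbook before raising its autonomy.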

  • Safety and privacy

    • Least‑privilege connectors, token rotation, and encryption in transit/at rest.

    • Prompt hardening: sensitive tokens never included in prompts; PII redaction rules for inputs you choose.

    • Rate limits and throttling per user/channel to prevent abuse.

    • We use model providers through “no training” endpoints when available and do not permit providers to use your content for their own model training. Refer to your chosen provider’s data‑handling terms.
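Per‑user/channel rate limiting of the kind mentioned above is commonly built as a sliding‑window counter. The sketch below is a generic example of that technique, not Siit's implementation; the class name and limits are assumptions.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window limiter keyed by user or channel."""

    def __init__(self, max_calls: int, window_s: float):
        self.max_calls = max_calls
        self.window_s = window_s
        self.calls = defaultdict(deque)  # key -> timestamps of recent calls

    def allow(self, key: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.calls[key]
        # Evict calls that have aged out of the window
        while q and now - q[0] > self.window_s:
            q.popleft()
        if len(q) >= self.max_calls:
            return False  # throttled
        q.append(now)
        return True
```

A key like `"user:alice"` or `"channel:#it-help"` makes the same limiter enforce both per‑user and per‑channel quotas.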

How each agent applies the model

AI Assist

  • Reads only the knowledge sources you enable; respects article visibility.

  • Produces short answers with citations; can suggest forms or open a request when needed.

  • Optional behaviors: auto‑tag, classify to a service, language detection and response in the user’s language.

IT Agent

  • Executes only the playbooks you publish and only for eligible users.

  • Typical uses: unlock Okta account, reset MFA, add to group, provision/deprovision app, run Jamf remediation.

  • Guardrails: require approvals for high‑impact operations, restrict to office hours, or limit to P1 incidents.

Admin controls

  • Sources and tone

    • Choose knowledge collections; set answer length and personality.

  • Playbooks and connectors

    • Connect apps with scoped credentials (Okta, Azure/Google, Jamf, etc.).

    • Build playbooks with pre‑checked conditions, required parameters, and post‑actions (notify, tag, update status).

    • Set autonomy: Suggest, Execute with approval, or Auto‑execute.
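A playbook built from these pieces (conditions, required parameters, post‑actions, autonomy) might look like the following. Every field name and value here is a hypothetical example, not Siit's configuration schema.

```python
# Hypothetical playbook definition; field names are illustrative only.
reset_mfa_playbook = {
    "name": "Reset Okta MFA",
    "connector": "okta",                       # scoped credential, not an admin token
    "autonomy": "execute_with_approval",       # suggest | execute_with_approval | auto_execute
    "conditions": {
        "requester_status": "active",          # pre-checked before any run
        "channel": ["portal", "slack"],
    },
    "required_params": ["user_email"],
    "post_actions": [
        {"type": "notify", "target": "requester"},
        {"type": "update_status", "value": "resolved"},
    ],
}
```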

  • Policies

    • Who can invoke which agent and where (channels, DMs, Portal).

    • Office‑hours rules and rate limits.

    • Data filters and redaction preferences.

  • Kill switch

    • Pause an agent or a single playbook instantly; all requests fall back to human.
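The kill‑switch fallback can be sketched as a routing check that runs before any agent handles a request. The function and state fields below are assumptions made for illustration.

```python
def route_request(request: dict, agent_state: dict) -> str:
    """If the agent, or the specific playbook, is paused, fall back to a human."""
    if agent_state.get("paused"):
        return "assign_to_human"
    if request.get("playbook") in agent_state.get("paused_playbooks", set()):
        return "assign_to_human"
    return "handle_with_agent"
```

Pausing is a routing decision, not a data change, which is what makes it instant: in‑flight and new requests simply stop reaching the agent.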

Rollout checklist

  • Start in a pilot channel with Shadow mode for IT Agent playbooks.

  • Enable AI Assist on a small, high‑quality knowledge collection; review answer sources.

  • Add approvals to any playbook that changes identity or device state.

  • Monitor logs and CSAT; gradually increase autonomy where safe.

Model providers and data residency

Which LLMs we use

  • Providers: OpenAI and Mistral AI. Both are supported across our agents (AI Assist and IT Agent).

  • Why two: supporting both lets us meet performance, data‑locality, and compliance requirements on a per‑customer basis.

How we choose a provider for you

  • Workspace default: set a default provider in Settings → Agents.

  • Per‑agent override: select a different provider for AI Assist or IT Agent if needed.

  • Regional preference: we adapt to your company’s location and data‑residency policy:

    • EU residency: route via Mistral AI (EU‑hosted) by default.

    • US/global: route via OpenAI.

    • We can switch providers on request without changing your workflows or playbooks.
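The resolution order described above (per‑agent override, then workspace default, then regional preference) can be sketched as follows. The function name and workspace fields are hypothetical; only the routing rules come from this page.

```python
def pick_provider(workspace: dict, agent: str = None) -> str:
    """Resolve the LLM provider: per-agent override > workspace default > region."""
    overrides = workspace.get("agent_overrides", {})
    if agent in overrides:
        return overrides[agent]            # per-agent override wins
    if workspace.get("default_provider"):
        return workspace["default_provider"]
    # Regional preference: EU residency routes to Mistral AI (EU-hosted),
    # US/global routes to OpenAI
    return "mistral" if workspace.get("residency") == "eu" else "openai"
```

Because the provider is resolved at routing time, switching providers changes none of your workflows or playbooks, matching the behavior described above.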
