MaWiki
AI usage policy

How MaWiki uses AI.

MaWiki uses generative AI to help you find and write things in your wiki. This page explains what AI does inside the product, what it does not do, the safeguards we have in place, and how to report a response that you think is wrong, harmful, or out of place.

Effective: 10 May 2026 · Last updated: 10 May 2026

1. What AI does in MaWiki

MaWiki includes a chat assistant that can answer questions about the documents in your workspace. You ask a question in plain English; the assistant retrieves the relevant snippets from your own content, sends them to a language model along with your question, and shows you the model's response with citations back to the source documents.
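The retrieve-then-answer flow above can be sketched in a few lines. This is an illustrative example only, not MaWiki's actual code: the names (`Snippet`, `retrieve_snippets`, `answer`) are hypothetical, and the word-overlap scoring is a stand-in for real retrieval. The point it shows is that only workspace snippets are selected and cited.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    doc_title: str
    text: str

def retrieve_snippets(question: str, workspace: list[Snippet]) -> list[Snippet]:
    # Stand-in for workspace-scoped retrieval: rank snippets by simple
    # word overlap with the question and keep those with any match.
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(s.text.lower().split())), s) for s in workspace]
    return [s for score, s in sorted(scored, key=lambda p: -p[0]) if score > 0]

def answer(question: str, workspace: list[Snippet]) -> dict:
    snippets = retrieve_snippets(question, workspace)
    # In the real product, the question plus these snippets go to a
    # language model; here we just return the sources that would be cited.
    return {"question": question, "citations": [s.doc_title for s in snippets]}

workspace = [
    Snippet("Onboarding guide", "New starters get a laptop on day one"),
    Snippet("Leave policy", "Annual leave requests go through the HR portal"),
]
result = answer("How do I request annual leave?", workspace)
```

Because retrieval runs only over the `workspace` list, documents outside it can never appear in the citations, which mirrors the workspace-scoping described in section 4.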

AI is also used in a small number of supporting places, such as suggesting titles for new chats and improving the quality of in-product search.

AI is not used to make decisions about your account, your billing, or anyone else's account. AI does not auto-publish content, auto-delete content, or take other irreversible actions in your wiki.

2. Which models we use

MaWiki supports two AI execution paths, and you choose which one is active:

2.1 Cloud AI

When cloud AI is selected, your question and the relevant document snippets are sent to a third-party language-model provider that responds with the generated answer. We currently support models from Anthropic and OpenAI. The model in use at any moment is shown in the chat UI and can be changed in settings.

2.2 Local AI (desktop app only)

When local AI is selected, the language model runs entirely on your own computer. The model files are downloaded once from Hugging Face and then served by a llama.cpp engine bundled with the MaWiki desktop app. Your question and the document snippets never leave your device while local AI is active.

You can switch between cloud AI, local AI, and AI off at any time. The choice is per workspace.

For full detail on the data that flows in each mode, see the AI section of the privacy policy.

3. Known limitations

Generative AI models can be wrong. The output is generated by predicting the most likely next words and is not a verified statement of fact. We reduce that risk by grounding the model in excerpts from your own documents and asking it to cite them, but you should still treat AI answers as a starting point, not a final source of truth.

  • Inaccuracy: the model may misread the source documents, combine unrelated documents, or invent details that are not in the source. Always check the citations.
  • Outdated knowledge: cloud models have a training cutoff. Information added to the world after that date is not in the model unless it is in your own documents.
  • Style and tone variation: the same question can produce different responses on different days. This is normal for generative models.
  • Coverage: the assistant can only answer questions about content that is in your workspace. It will not search the public web or your other systems.

Every AI-generated response in MaWiki is labelled "AI-generated", and a disclaimer about accuracy is shown under the chat input.

4. Safety measures

We use a combination of model-side and product-side safeguards to reduce the chance of harmful or inappropriate output:

  • Trusted providers: cloud AI uses models from established providers (Anthropic, OpenAI) that ship their own content-safety filtering and refuse policy-violating prompts at the model layer.
  • Workspace-scoped retrieval: the assistant can only retrieve content from your own workspace. It cannot read other tenants' documents and cannot fetch arbitrary content from the public internet.
  • Local-AI provenance: when running locally, MaWiki uses curated open-source models distributed via Hugging Face. We document which model files we recommend and ship the configuration in the desktop app, so you can audit and replace the model yourself if you prefer.
  • Per-message reporting: every AI message has a Report button that opens a quick form. Reports go straight to our team for review (see section 5).
  • No silent training: MaWiki does not use your conversation content to train models. Cloud providers' training behaviour follows their own published API terms.

5. Reporting a problem

If an AI response in MaWiki looks wrong, harmful, offensive, or otherwise problematic, please report it. The fastest way is the Report button next to any AI message inside the app. A short form lets you pick a category (inaccurate, harmful, offensive, or other) and add an optional comment.

Reports are logged against your workspace, reviewed by the Edge Software Pty Ltd team, and used to fix prompts, refine retrieval, and identify wider issues with model behaviour. We do not share report content with other users.

If you cannot use the in-app reporter (for example, the issue prevented you from logging in), email us at ai-reports@mawiki.com with as much detail as you can share.

6. Your responsibilities when using AI in MaWiki

Generative AI is a tool, not a substitute for human judgement. By using AI features in MaWiki you agree that:

  • You will not rely on AI output as the sole source for decisions that have legal, financial, medical, or safety consequences.
  • You will not use MaWiki AI to generate content that breaches the law in your jurisdiction, infringes someone else's rights, or violates the published terms of the underlying model providers.
  • You will respect the privacy of other people whose information you might place in your workspace, including by not uploading content you do not have the right to use.

7. Changes to this policy

We update this policy when we add, change, or remove AI features in MaWiki. The date at the top of the page reflects the most recent change. Material changes will be announced in the in-app changelog.

Questions about this policy can be sent to privacy@mawiki.com. For the underlying data-handling rules see the privacy policy.