
Help Centre

Complete guide to AI Council features, setup, and troubleshooting.

What is AI Council?

AI Council helps you get better answers from artificial intelligence by asking multiple AI systems the same question and combining their responses into a single, more reliable answer. It is available for macOS and iOS (iPhone and iPad) as separate apps.

Think of it like consulting a panel of experts rather than relying on a single opinion. Each AI brings its own perspective, and the combined result is often more accurate and comprehensive than any individual response.

Stage 1: Gathering Responses
Your question is sent to multiple AI systems at the same time. Each AI provides its own independent answer without seeing what the others have written.

Stage 2: Peer Review
Each AI then reviews the responses from all the others, but without knowing which AI wrote which answer. This anonymous review prevents any AI from showing favouritism. Each reviewer ranks the responses from best to worst and explains their reasoning.

Stage 3: Synthesis
A designated "Chairman" AI reads all the original responses and the peer reviews, then produces a final answer that incorporates the best insights from everyone. This synthesis is highlighted so you can easily find the conclusion.
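The three stages above can be sketched as a short Python pipeline. Everything here is illustrative: `ask` is a hypothetical stand-in for a real provider API call, and the prompts are simplified placeholders, not the app's actual prompts.

```python
import random

def run_council(question, members, chairman, ask):
    # Stage 1: every member answers independently
    # (the real app runs these in parallel).
    answers = {m: ask(m, question) for m in members}

    # Stage 2: shuffle the answers so reviewers can't tell
    # who wrote what, then have each member rank them.
    shuffled = list(answers.values())
    random.shuffle(shuffled)
    numbered = "\n".join(f"[{i + 1}] {a}" for i, a in enumerate(shuffled))
    reviews = [ask(m, f"Rank these anonymous answers best to worst:\n{numbered}")
               for m in members]

    # Stage 3: the chairman synthesises a final answer from the
    # original responses plus the peer reviews.
    return ask(chairman, f"Question: {question}\nAnswers:\n{numbered}\n"
                         "Reviews:\n" + "\n".join(reviews))
```

With a real `ask` function wired to provider APIs, the return value is the highlighted synthesis shown in the chat.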

Getting Started

Open Settings (Command+,) and go to the Providers tab. Click "Configure" next to any provider to enter your API key. Use "Test Connection" to verify it works and see available models.

Popular providers to get started with:

  • OpenAI: platform.openai.com/api-keys
  • Anthropic: console.anthropic.com
  • Google: aistudio.google.com/apikey
  • xAI: console.x.ai
  • OpenRouter: openrouter.ai/keys (provides access to all providers)

API keys are stored securely in your Mac's Keychain.

Note: Local model support is available on macOS only, not on iOS.

For offline use or complete privacy, go to the Local Models tab in Settings:

Download from Featured or Hugging Face:

  • Browse the Featured section for recommended models, or use the Search Hugging Face tab
  • Check the compatibility badge to ensure the model fits your Mac's RAM
  • Click Download and choose a save location (defaults to Application Support/AI Council/Models)
  • The model will appear in model selection dropdowns throughout the app

Link Existing Models:

  • Go to the My Models tab
  • Click "Link Model" to browse for GGUF files already on your Mac
  • Select any .gguf file to add it to AI Council without copying

Use Ollama Models:

  • Ensure Ollama is running on your Mac
  • Go to Settings > Council tab
  • Click "Add Model" to open the model picker
  • Ollama models appear at the top under "Ollama (Local)"
  • Click any model to add it to your council
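Detection like this typically works by querying Ollama's local HTTP API. The sketch below shows how any client might list installed models from the `/api/tags` endpoint; it is an assumption about the general approach, not AI Council's actual code.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"

def list_ollama_models(fetch=None):
    """Return installed Ollama model names, or [] if Ollama isn't running."""
    if fetch is None:
        def fetch():
            with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags", timeout=2) as r:
                return r.read().decode()
    try:
        payload = json.loads(fetch())
    except OSError:
        return []  # connection refused: Ollama isn't running
    return [m["name"] for m in payload.get("models", [])]
```

If the list comes back empty, the app simply omits the "Ollama (Local)" group from the model picker.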

In Settings, go to the Council tab to choose which AI models participate in your council:

  • Click "Add Model" to open the model picker
  • Browse models grouped by provider, or use search to filter
  • Click any model to add it to your council
  • Remove models by clicking the red minus button next to them
  • Choose which model serves as Chairman for final synthesis
  • Optionally save your configuration as a preset

The status indicators show model availability:

  • Green: Direct API connection available
  • Orange: Available via OpenRouter fallback
  • Blue: Local model (Ollama or downloaded GGUF)
  • Red: No API key configured for this provider

To start your first conversation:

  • Click the plus button or press Command+N to start a new conversation
  • The new conversation appears in the sidebar with a smooth entrance animation
  • A welcome view appears with helpful tips for getting the most from the council
  • The input area pulses gently to draw your attention
  • Type your question and press Return to send
  • Watch as the three stages complete and your synthesised answer appears

Features

AI Council supports over 20 cloud AI providers out of the box:

  • OpenAI: GPT-4, GPT-4o, GPT-4.5, o1, o3, and more
  • Anthropic: Claude Sonnet, Claude Opus, Claude Haiku
  • Google: Gemini 2.0 Flash, Gemini 2.5 Pro, Gemini 3
  • xAI: Grok 2, Grok 3, Grok 4
  • Mistral AI: Mistral Large, Mistral Medium
  • Groq: Ultra-fast inference for open models
  • Together AI: Open-source model hosting
  • Perplexity: Search-augmented AI
  • DeepSeek: Specialised coding models
  • Cohere: Enterprise-focused models
  • Cerebras, SambaNova, AI21, Fireworks, DeepInfra, Hugging Face
  • OpenRouter: Unified access to all providers

The provider registry is updated automatically, so new providers and models become available without app updates. Configure providers in the Providers tab of Settings.

Note: Local model support is available on macOS only, not on iOS.

Run AI models directly on your Mac with complete privacy and zero API costs:

  • Metal GPU Acceleration: Optimised for Apple Silicon (M1/M2/M3/M4) performance
  • Complete Privacy: Your data never leaves your Mac
  • Offline Operation: Works without internet connection
  • Zero API Costs: No per-query charges after download

The Local Models tab in Settings has three sections:

  • Featured: Curated list of popular models from trusted sources (bartowski, TheBloke, and others) with compatibility indicators showing if they fit your Mac's RAM
  • Search Hugging Face: Search Hugging Face for GGUF models, filter by quantisation level, and download with progress tracking
  • My Models: View and manage your models, link existing GGUF files from anywhere on disk, and scan for models from Ollama, LM Studio, or GPT4All

Each model shows a compatibility badge based on your Mac's available RAM, so you know before downloading whether it will run well.

Note: Ollama integration is available on macOS only and is not available on iOS.

If you have Ollama installed on your Mac, AI Council can use your Ollama models directly:

  • Auto-Detection: AI Council automatically detects when Ollama is running
  • Model Picker: Click "Add Model" in Council settings - Ollama models appear at the top under "Ollama (Local)"
  • No Configuration: Ollama models work immediately without API keys
  • Blue Indicator: Local models show a blue status dot in the council configuration

This is ideal if you already use Ollama and want to include those models in your council without downloading them again.

Get started quickly with pre-built prompts for common tasks:

Coding

  • Code Review: Have your code examined for bugs, security issues, and improvements
  • Debug Help: Get assistance with error messages and problematic code
  • Architecture Design: Plan the structure of your software projects

Writing

  • Proofread: Check your text for grammar, clarity, and style
  • Summarise: Get a concise version of longer documents

Research

  • Compare Options: Evaluate multiple choices with pros, cons, and recommendations
  • Explain Concept: Get clear explanations of complex topics

Analysis

  • SWOT Analysis: Examine strengths, weaknesses, opportunities, and threats
  • Risk Assessment: Identify and evaluate potential risks

Select a template from the new conversation menu to start with a structured prompt.

Choose how the council members interact with each other. Different modes suit different types of questions:

  • Standard: Models answer independently in parallel (default)
  • Devil's Advocate: One model actively challenges assumptions and finds flaws
  • Expert Panel: Each model adopts a specialist role (Security Expert, Performance Engineer, UX Designer, Software Architect, QA Engineer, Business Analyst)
  • Debate: Models debate in rounds before the chair synthesises, responding to each other's arguments
  • Consensus Seeker: Models focus on finding common ground and agreement

Select your preferred mode in the Council tab of Settings. The mode indicator appears in the chat interface.

After the council deliberates, AI Council analyses the responses to show you:

  • Confidence Score: A percentage indicating how much the council agrees (higher = more consensus)
  • Consensus Points: Key points where all models agree
  • Disputed Points: Areas where models disagree, showing different perspectives

This helps you understand when to trust the synthesis and when to dig deeper. The consensus card appears above the chair's synthesis and can be expanded for details. Toggle this feature in Settings.
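One simple way to compute an agreement score like this is the fraction of all key points that every model's answer makes. This is purely illustrative; the app's actual scoring formula is not documented here.

```python
def consensus(points_by_model):
    """points_by_model: {model name: set of key points it made}.
    Returns (confidence %, agreed points, disputed points)."""
    all_points = set().union(*points_by_model.values())
    agreed = set.intersection(*points_by_model.values())
    disputed = all_points - agreed
    confidence = round(100 * len(agreed) / len(all_points)) if all_points else 100
    return confidence, agreed, disputed
```

A high score means the models largely said the same things; a low score means the disputed points deserve a closer look.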

Define what matters when models evaluate each other's responses. Choose from built-in presets or create your own:

  • Standard: Accuracy, completeness, clarity, helpfulness
  • Coding: Correctness, performance, maintainability, security, best practices
  • Creative: Originality, engagement, tone, style consistency
  • Research: Rigour, evidence quality, balanced perspective, citations
  • Decision: Actionability, risk awareness, completeness of options, clarity of trade-offs

Select criteria in the Council tab of Settings. A preview shows exactly what the peer reviewers will evaluate.

Organise your work into projects with persistent context. Each project can have:

  • System Prompt: A custom instruction that's automatically prepended to every query within the project
  • Scoped Conversations: Conversations are grouped by project, making it easy to find related discussions
  • Project Overview: Name and description to help you identify each workspace

Create projects from the sidebar using the folder button. Switch between projects or view "All Conversations" to see everything. The active project's context is applied automatically to all new queries.

Choose how the chair formats the final synthesis to match your needs:

  • Standard: Balanced analysis with natural structure
  • Executive Summary: Concise 3-5 bullet points with key takeaways
  • Detailed Analysis: Comprehensive sections with thorough explanations
  • Comparison Table: Side-by-side evaluation of options with trade-offs
  • Action Plan: Numbered steps with clear next actions

Select your preferred style from the controls panel above the message input. The chair's prompt is automatically adjusted to produce the requested format.

Generate structured, professional documents with enforced sections:

  • Product Requirements Document (PRD): Overview, requirements, user stories, success metrics, timeline
  • Code Audit: Executive summary, critical issues, security concerns, recommendations
  • Meeting Notes: Key decisions, action items, discussion summary, next steps
  • Research Summary: Methodology, key findings, analysis, recommendations
  • Technical Specification: Architecture, components, interfaces, data flow

Select a deliverable template from the controls panel. The council's synthesis will follow the template's required sections, creating consistent, exportable documents.

Execute multi-step workflows where each stage feeds into the next:

  • Research Report: Gather Information > Analyse Findings > Write Report
  • Decision Process: Brainstorm Options > Evaluate Options > Recommend > Create Action Plan
  • Code Review & Refactor: Identify Issues > Propose Fixes > Refactor Code
  • SWOT Analysis: Internal Analysis > External Analysis > Strategic Recommendations
  • Content Creation: Create Outline > Write Draft > Refine & Polish
  • Problem Solving: Define Problem > Root Cause Analysis > Develop Solutions

Select a chain from the controls panel before sending your query. The chain progress view shows your current step and allows you to continue to the next step or end the workflow.

When analysing documents, enable Evidence Mode to require citations:

  • Toggle Evidence Mode when you have attachments
  • The council automatically chunks your documents into numbered sections
  • The chair must cite specific section numbers for every claim
  • Claims without supporting evidence are flagged as "No evidence found"

This is essential for professional document analysis where you need to verify sources.
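The chunk-and-cite mechanics can be sketched as follows: split the document into numbered sections, then flag claims whose citations are missing or point at sections that don't exist. The section size and the `[§n]` citation format are assumptions for illustration, not the app's internals.

```python
import re

def chunk(document, size=500):
    """Split a document into numbered sections of roughly `size` characters."""
    return {i + 1: document[start:start + size]
            for i, start in enumerate(range(0, len(document), size))}

def check_citations(claim, sections):
    """Find citations like [§3] in a claim and flag missing evidence."""
    cited = [int(n) for n in re.findall(r"\[§(\d+)\]", claim)]
    if not cited:
        return "No evidence found"
    missing = [n for n in cited if n not in sections]
    return f"Missing sections: {missing}" if missing else "OK"
```

In Evidence Mode, every sentence of the synthesis is effectively run through a check like `check_citations`, so unsupported claims are surfaced rather than silently accepted.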

Record important decisions with full council context:

  • Click "Log Decision" on any synthesis card
  • Enter the decision outcome and your reasoning
  • The decision is saved with the full council context (models, rankings, synthesis)
  • Access all logged decisions from the sidebar
  • Export decisions as a formatted document for records

This creates an auditable trail showing how AI-informed decisions were made.

Presets provide optimised model combinations for different tasks, and they automatically adapt based on which providers you have configured:

  • Balanced Council: Diverse mix of major providers for well-rounded perspectives
  • Speed Focus: Fast models (GPT-4o-mini, Haiku, Gemini Flash) for quick iterations
  • Code Review: Specialist coding models including DeepSeek Coder and Qwen Coder
  • Creative Writing: Models with strong creative capabilities
  • Research & Analysis: Models suited for factual analysis and research

Presets intelligently filter to show only models from providers you have API keys for. You can also save your own custom presets for configurations you use frequently.

Additional Features

Real-Time Streaming: Watch AI responses arrive in real time as they are generated. Each council member's response streams live with animated progress indicators.

Response Depth Control: Control how detailed you want the council's responses with the depth selector next to the send button:

  • Brief: Concise answers that get straight to the point
  • Standard: Balanced level of detail (default)
  • Detailed: Comprehensive responses with thorough explanations

File Attachments: Attach documents to your questions. Supported formats include PDF, Microsoft Word, plain text, and code files.

Prompt Enhancer: Open with Command+Shift+E, paste in your rough idea, and the AI will help you refine it into a clearer, more effective prompt.

Second Opinions: After receiving the chair's synthesis, request an alternative perspective from a different council member. Click the Second Opinion button on any synthesis card.

Follow-up Conversations: Continue the discussion with the chair without running a new council. Follow-up questions go directly to the model that provided the most recent synthesis.

Health Check: Before sending your question to the council, the app performs a quick health check on all configured models to verify they are responding. Any models that fail are clearly indicated, and the council continues with available models.

Cost Estimation: Before you send a question, AI Council shows an estimate of API costs.

Token Counter: The input area displays an estimate of how many tokens your message contains.
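Token and cost estimates like these are usually rough heuristics. A common rule of thumb is about four characters per token for English text; the per-token prices below are illustrative placeholders, not any provider's actual rates.

```python
def estimate_tokens(text):
    # ~4 characters per token is a common rough heuristic for English.
    return max(1, len(text) // 4)

def estimate_cost(text, models, price_per_1k_tokens):
    """price_per_1k_tokens: {model: USD per 1,000 input tokens} (illustrative)."""
    tokens = estimate_tokens(text)
    # The same prompt goes to every council member, so costs add up.
    return sum(tokens / 1000 * price_per_1k_tokens[m] for m in models)
```

Because a council sends one prompt to several models, the estimate scales with the number of members you have configured.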

Smart Follow-up Suggestions: After receiving the chair's synthesis, AI Council suggests relevant follow-up questions based on the discussion, highlighting unexplored angles and areas of disagreement.

Search: Press Command+Shift+F for full-text search across all conversations.

Export: Save conversations in Markdown or JSON format with Command+E.

Model Analytics: Track how well each AI model performs over time - query counts, win rates, and average rankings. Access with Command+Shift+A.

Customise the app's appearance in Settings:

  • Theme: System (follows your Mac), Light, or Dark mode
  • Accent Colour: Blue (default), Purple, Pink, Orange, Green, Teal
  • Font: System (SF Pro), Google Sans, Inter, Roboto, Source Sans Pro, JetBrains Mono
  • Font Size: Small (13pt), Medium (15pt), Large (17pt)
  • Density: Compact or Comfortable spacing

Troubleshooting

If a model fails to respond:

  • Verify your API key is correct in Settings > Providers
  • Click "Test Connection" to check if the provider is responding
  • Check if you have sufficient credits/quota with the provider
  • Some models may be temporarily unavailable - try again later
  • For local models, ensure Ollama is running or the GGUF file is valid

The council will continue with available models even if some fail.

Note: Local model support is available on macOS only.

If local models run slowly or fail to load:

  • Check the compatibility badge before downloading - it shows if the model fits in your RAM
  • Use quantised models (Q4, Q5) for faster inference on less powerful Macs
  • Close other memory-intensive applications
  • Apple Silicon Macs (M1/M2/M3/M4) perform significantly better than Intel Macs

For the best experience, use models that show a green "Recommended" badge.
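A compatibility badge like this generally compares the model's file size against installed RAM, leaving headroom for the OS and the inference context. The thresholds below are assumptions for illustration, not the app's real ones.

```python
def compatibility(model_size_gb, ram_gb, headroom_gb=4):
    """Rough fit check: the model plus OS headroom must fit in RAM."""
    free = ram_gb - headroom_gb
    if model_size_gb <= free * 0.5:
        return "Recommended"   # plenty of room for context and other apps
    if model_size_gb <= free:
        return "Will run"      # fits, but expect memory pressure
    return "Too large"
```

This is also why quantised models (Q4, Q5) help: a Q4 file is a fraction of the full-precision size, which moves it into the "Recommended" range on machines with less RAM.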

If Ollama models don't appear:

  • Ensure Ollama is running (check for the Ollama icon in your menu bar)
  • Verify Ollama is accessible at localhost:11434
  • Click "Add Model" in the Council settings - Ollama models appear at the top under "Ollama (Local)"
  • If models still don't appear, try restarting both Ollama and AI Council

Keyboard Shortcuts

  • Command + N - New conversation
  • Command + E - Export conversation
  • Command + Shift + E - Open Prompt Enhancer
  • Command + Shift + F - Search conversations
  • Command + Shift + C - Copy last response
  • Command + Shift + A - Open Model Analytics
  • Command + , - Open Settings
  • Return - Send message

Data and Privacy

AI Council stores all your data locally on your device:

  • API keys are stored in your device's secure Keychain
  • Conversations are saved as encrypted files in your application data folder
  • No data is sent to us or to any third-party servers
  • iCloud sync keeps your conversations synchronised across your Mac, iPhone, and iPad when you own both apps

When you ask a question, your query is sent directly to the AI providers you have configured. These providers have their own privacy policies that govern how they handle your data. AI Council itself does not collect, store, or transmit any of your information.

System Requirements

For macOS (€29.99):

  • macOS 14.0 (Sonoma) or later
  • An internet connection (for cloud AI providers)
  • API keys from at least one supported AI provider, OR local models via Ollama/GGUF*

For iOS (€9.99):

  • iOS 17.0 or later (iPhone or iPad)
  • An internet connection
  • API keys from at least one supported AI provider

* Local model support (Ollama, GGUF files) is available on macOS only.

Note: macOS and iOS are separate apps requiring separate purchases. iCloud sync keeps your conversations synchronised across devices when you own both apps.

Still need help?

Send us a message and we'll get back to you as soon as possible.
