
# Discourse AI Plugin

## Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

## Evals

The `evals` directory contains AI evals for the Discourse AI plugin. You may create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.
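The local-config step above can be sketched as follows (run from the plugin root; the paths come from the README, while the use of `$EDITOR` is just an illustration):

```shell
# Copy the sample eval config to a local override, then edit it.
cp config/eval-llms.yml config/eval-llms.local.yml
$EDITOR config/eval-llms.local.yml
```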

To run them, use:

```
cd evals
./run --help
```

```
Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals
```

To run evals you will need to configure API keys in your environment:

```
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
```
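Putting the pieces together, a single eval run might look like the following. The eval name `my-eval` and model name `gpt-4o` are placeholders, not real entries; use `-l` and `--list-models` to discover what is actually configured:

```shell
cd evals
# Export whichever key matches the provider of the model under test.
export OPENAI_API_KEY=your_openai_api_key
./run -e my-eval -m gpt-4o   # placeholder eval and model names
```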