
# Discourse AI Plugin

## Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

## Evals

The `evals` directory contains AI evals for the Discourse AI plugin. You may create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.

To run them, use:

```shell
cd evals
./run --help
```

```
Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals
```

To run evals you will need to configure API keys in your environment:

```
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
```
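For example, in a Bash-style shell the keys can be exported before invoking the runner (the values below are placeholders, not real keys):

```shell
# Placeholder values: substitute the real keys for your provider accounts.
export OPENAI_API_KEY=your_openai_api_key
export ANTHROPIC_API_KEY=your_anthropic_api_key
export GEMINI_API_KEY=your_gemini_api_key
```

Presumably only the keys for the providers whose models you actually evaluate against need to be set.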