
# Discourse AI Plugin

## Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

## Evals

The `evals` directory contains AI evals for the Discourse AI plugin. You may create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.
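For example, a minimal sketch of creating the local override, assuming you run it from the plugin root:

```shell
# Copy the default eval LLM config to a local override,
# then edit the local copy to point at your own models/keys.
cp config/eval-llms.yml config/eval-llms.local.yml
```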

To run them, use:

```shell
cd evals
./run --help
```

```
Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals
```

To run evals you will need to configure API keys in your environment:

```shell
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
```
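Putting the pieces together, a typical session might look like the sketch below. The eval name and model name are placeholders, not values taken from this repository; use `--list` and `--list-models` to discover the real ones:

```shell
# Hypothetical end-to-end example; "example-eval" and "claude" are
# placeholder names — substitute real values from --list / --list-models.
cd evals
export ANTHROPIC_API_KEY=your_anthropic_api_key
./run --list                       # discover available eval names
./run -e example-eval -m claude    # run a single eval against one model
```

Omitting `-m` runs the chosen eval against every configured model.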