# Discourse AI Plugin

## Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

## Evals

The `evals` directory contains AI evals for the Discourse AI plugin. You can create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.

To run them, use:

```shell
cd evals
./run --help
```

```
Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals
```
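Putting the options above together, a typical session might look like the sketch below. The eval and model names here are placeholders, not real entries; use `./run -l` and `./run --list-models` to discover what your config actually provides.

```shell
cd evals

# Discover what is available
./run -l              # list the evals defined in this repo
./run --list-models   # list the models from your eval config

# Run a single eval against a single model (both names are hypothetical)
./run -e my_eval -m my_model

# Omitting -m runs the eval against every configured model
./run -e my_eval
```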

To run evals you will need to configure API keys in your environment:

```shell
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
```