Kris c972bfa239
FIX: show AI gists in mobile messages (#35124)
The outlet used previously doesn't render when there's no category (in
message topic lists), so here I'm moving it to an outlet that does. A spec
was also added to cover this case.

Before:
<img width="400" alt="image"
src="https://github.com/user-attachments/assets/831cd17f-db6b-49d8-a2eb-88a668854e2b"
/>

After:
<img width="400" alt="image"
src="https://github.com/user-attachments/assets/f1e5c672-ca0f-4b84-9241-c0211f12c5e7"
/>
2025-10-01 14:11:25 -04:00
| Path | Last commit | Date |
|---|---|---|
| admin/assets/javascripts/discourse | DEV: Standardize Ember route, controller and template naming (#34417) | 2025-09-25 11:27:45 +01:00 |
| app | DEV: Clean up scope resolution operators in plugins (#34979) | 2025-09-30 14:36:34 +02:00 |
| assets | FIX: show AI gists in mobile messages (#35124) | 2025-10-01 14:11:25 -04:00 |
| config | Update translations (#35065) | 2025-09-30 16:06:14 +02:00 |
| db | FEATURE: Promote Discover to a dedicated feature. (#34846) | 2025-09-23 14:01:45 -03:00 |
| discourse_automation | FEATURE: Add option to flag + delete for llm triage (#34590) | 2025-09-02 09:16:30 +10:00 |
| evals | | |
| lib | DEV: Clean up scope resolution operators in plugins (#34979) | 2025-09-30 14:36:34 +02:00 |
| public/ai-share | | |
| spec | FIX: show AI gists in mobile messages (#35124) | 2025-10-01 14:11:25 -04:00 |
| svg-icons | | |
| test/javascripts | DEV: Fix various lint issues (#33811) | 2025-07-24 15:27:04 +02:00 |
| .prettierignore | | |
| about.json | | |
| plugin.rb | DEV: Clean up scope resolution operators in plugins (#34979) | 2025-09-30 14:36:34 +02:00 |
| README.md | | |

Discourse AI Plugin

Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

Evals

The evals directory contains AI evals for the Discourse AI plugin. To use a local configuration, copy config/eval-llms.yml to config/eval-llms.local.yml and modify the values as needed.
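The copy step can be sketched as below. The demo runs in a throwaway directory so it is safe to execute anywhere; in a real checkout you would simply run the cp from the plugin root, where config/eval-llms.yml ships with the plugin (the placeholder file contents here are an assumption for illustration only).

```shell
# Demo in a temporary directory; in the plugin itself, run the cp from the root.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/config"
# Stand-in contents; the real eval-llms.yml is provided by the plugin.
printf 'placeholder: true\n' > "$tmpdir/config/eval-llms.yml"
# The actual step from the README: create a local, editable copy.
cp "$tmpdir/config/eval-llms.yml" "$tmpdir/config/eval-llms.local.yml"
ls "$tmpdir/config"
```

Changes then go into eval-llms.local.yml, leaving the shipped sample untouched.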

To run them use:

cd evals
./run --help

Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals

To run evals you will need to configure API keys in your environment:

OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
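In a POSIX shell these can be exported before invoking the runner. The placeholder values are the ones from this README; that only the keys for providers you actually evaluate are needed is an assumption about the runner.

```shell
# Export provider API keys for the eval runner; replace the placeholders
# with real keys. Assumption: only providers you evaluate need a key set.
export OPENAI_API_KEY=your_openai_api_key
export ANTHROPIC_API_KEY=your_anthropic_api_key
export GEMINI_API_KEY=your_gemini_api_key
```

Alternatively, the variables can be set inline for a single invocation, e.g. prefixed to the ./run command.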