Introduces two new site settings that apply exponential age-based penalties to semantic topic suggestions, similar to the algorithms used by Reddit/HN:

* `ai_embeddings_semantic_related_age_penalty` (default: 0.3) controls penalty strength: 0.0 disables the penalty, 0.3 gives a gentle bias toward newer content, and 1.0+ gives a strong recency preference.
* `ai_embeddings_semantic_related_age_time_scale` (default: 365 days) controls the time horizon: use 365 for a yearly scale, 90 for a quarterly scale, etc.

Formula: `similarity_score / POWER(age_in_days / time_scale + 1, penalty)`

This allows sites to de-prioritize older topics in suggestions while remaining configurable for forums with different content lifecycles. Performance is optimized with conditional JOINs that are added only when the penalty is > 0. "Age" here uses `bumped_at`, so the feature also works for communities with long-lived mega topics.

Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>
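The penalty formula above can be sketched in plain Ruby. This is an illustration only, not the plugin's actual implementation (the plugin applies the formula in SQL via `POWER()`); the constant values mirror the defaults stated above, and `adjusted_score` is a hypothetical helper name:

```ruby
# Sketch of the age-penalty formula described above (assumed helper, not
# the plugin's code). Defaults mirror the settings' stated defaults.
AGE_PENALTY = 0.3   # ai_embeddings_semantic_related_age_penalty
TIME_SCALE  = 365.0 # ai_embeddings_semantic_related_age_time_scale (days)

# Mirrors: similarity_score / POWER(age_in_days / time_scale + 1, penalty)
def adjusted_score(similarity_score, age_in_days,
                   penalty: AGE_PENALTY, time_scale: TIME_SCALE)
  return similarity_score if penalty <= 0.0 # 0.0 disables the penalty
  similarity_score / ((age_in_days / time_scale + 1) ** penalty)
end

adjusted_score(0.9, 0)    # a just-bumped topic keeps its full score: 0.9
adjusted_score(0.9, 365)  # one time_scale old: divided by 2**0.3, ~0.73
adjusted_score(0.9, 3650) # ten years old: divided by 11**0.3, ~0.44
```

Note how gentle the default is: even a decade-old topic keeps roughly half its similarity score, so strong semantic matches can still surface ahead of weak recent ones.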
# Discourse AI Plugin

## Plugin Summary
For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco
## Evals
The `evals` directory contains AI evals for the Discourse AI plugin.

You may create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.
To run them, use:

    cd evals
    ./run --help
    Usage: evals/run [options]
        -e, --eval NAME    Name of the evaluation to run
            --list-models  List models
        -m, --model NAME   Model to evaluate (will eval all models if not specified)
        -l, --list         List evals
To run evals you will need to configure API keys in your environment:

    OPENAI_API_KEY=your_openai_api_key
    ANTHROPIC_API_KEY=your_anthropic_api_key
    GEMINI_API_KEY=your_gemini_api_key