discourse/plugins/discourse-ai
Natalie Tay b7d7f99c04
FEATURE: Allow re-localization twice a day if post version has changed (#34023)
This commit is a continuation of
https://github.com/discourse/discourse-ai/pull/1422.

Previously, we skipped / disallowed re-localization entirely. With this PR:

- For topics, we only enqueue the translate-title job if the post revisor indicates a title change.
- For posts, we only enqueue the translate-post job if there is a post version change.

Both jobs are enqueued with a delay of `SiteSetting.editing_grace_period` or `5 minutes`, whichever is larger. Each topic or post may be re-translated to a given locale at most twice a day (sketched below).
2025-08-04 10:58:30 +08:00
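
A minimal sketch of the scheduling described above, under stated assumptions: `:translate_post` is a placeholder job name and the Redis-backed daily counter is only a guess at how the twice-a-day cap could be tracked, not the plugin's actual implementation.

```ruby
# Sketch only -- the job name and rate-limit storage are illustrative
# assumptions, not the plugin's real classes.
module DiscourseAi
  module Translation
    MAX_LOCALIZATIONS_PER_DAY = 2

    # Enqueue with the larger of the editing grace period and 5 minutes.
    def self.enqueue_post_localization(post)
      delay = [SiteSetting.editing_grace_period.to_i, 5.minutes.to_i].max
      Jobs.enqueue_in(delay, :translate_post, post_id: post.id) # placeholder job name
    end

    # Increment a hypothetical per-post, per-locale daily counter in Redis
    # and allow at most two localizations per day.
    def self.localization_allowed?(post, locale)
      key = "post-localized:#{post.id}:#{locale}:#{Date.today}"
      count = Discourse.redis.incr(key)
      Discourse.redis.expire(key, 1.day.to_i)
      count <= MAX_LOCALIZATIONS_PER_DAY
    end
  end
end
```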
| Path | Latest commit | Date |
| --- | --- | --- |
| admin/assets/javascripts/discourse | DEV: Reapply gjs-codemod in d-ai (#33758) | 2025-07-23 12:05:40 +02:00 |
| app | FEATURE: Allow re-localization twice a day if post version has changed (#34023) | 2025-08-04 10:58:30 +08:00 |
| assets | DEV: Enable ember/no-classic-components (#33978) | 2025-07-30 14:54:24 +02:00 |
| config | Update translations (#34004) | 2025-07-31 16:18:53 +02:00 |
| db | FIX: Remove old code reference on Discourse AI migration (#33943) | 2025-07-29 18:08:36 -03:00 |
| discourse_automation | | |
| evals | | |
| lib | FEATURE: Allow re-localization twice a day if post version has changed (#34023) | 2025-08-04 10:58:30 +08:00 |
| public/ai-share | | |
| spec | FEATURE: Allow re-localization twice a day if post version has changed (#34023) | 2025-08-04 10:58:30 +08:00 |
| svg-icons | | |
| test/javascripts | DEV: Fix various lint issues (#33811) | 2025-07-24 15:27:04 +02:00 |
| .prettierignore | | |
| about.json | | |
| plugin.rb | | |
| README.md | | |

Discourse AI Plugin

Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

Evals

The directory evals contains AI evals for the Discourse AI plugin. You may create a local config by copying config/eval-llms.yml to config/eval-llms.local.yml and modifying the values.

To run them use:

cd evals
./run --help

Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals

To run evals you will need to configure API keys in your environment:

OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
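
For example, to run a single eval against one model (the eval and model names below are placeholders; use --list and --list-models to see what is actually available):

cd evals
OPENAI_API_KEY=your_openai_api_key ./run -e my_eval -m gpt-4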