Don’t trust “sandboxed” templating when any part of template text (or dangerous expressions) can be influenced by an untrusted party. For security-sensitive code, assume Jinja2 (and similar template engines) is dangerous with untrusted inputs and enforce strict trust boundaries.
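To illustrate the risk: when an attacker controls the template source itself, Jinja2 evaluates their expressions at render time. A minimal sketch of the unsafe pattern (the payload here is a harmless stand-in):

```python
from jinja2 import Template

# UNSAFE: the template *source* comes from an untrusted party.
# A real attacker would use a payload far worse than arithmetic.
attacker_template = "{{ 7 * 7 }}"

# The expression is evaluated, not treated as inert text: renders "49"
result = Template(attacker_template).render()
```

Passing untrusted data as a render variable is safe; passing it into the template source is not.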
Apply this rule: never compile or render template text that an untrusted party can influence; pass untrusted data only as template variables.
Example (safe pattern: allowlisted templates; user data only as inert values):
from jinja2 import Template

ALLOWED_TEMPLATES = {
    "welcome": "Hello {{ name }}!",
}

def render_welcome(template_id: str, user_name: str) -> str:
    # Block attacker-controlled templates
    if template_id not in ALLOWED_TEMPLATES:
        raise ValueError("Unknown template")
    # autoescape=True escapes rendered values for HTML output contexts
    tmpl = Template(ALLOWED_TEMPLATES[template_id], autoescape=True)
    # Treat user_name as inert data: it is escaped, never evaluated
    return tmpl.render(name=user_name)
If your system needs to accept user-provided templates, require additional hardening and security review rather than assuming Jinja2 sandboxing is sufficient.