Prompt
Sanitize or escape all user-provided input before using it in a sensitive context, to prevent injection attacks. This applies in several contexts:
- SQL/OData queries: Use parameterized queries (see the sketch after this list), or properly escape special characters:

  ```diff
  -request = request.filter(`contains(subject, '${query}')`);
  +const escapedQuery = query.replace(/'/g, "''");
  +request = request.filter(`contains(subject, '${escapedQuery}')`);
  ```
- HTML content: Use a library like DOMPurify to sanitize HTML before rendering (usage sketch below):

  ```diff
  -return `<div dir="ltr">${latestReplyHtml}</div>`;
  +return DOMPurify.sanitize(`<div dir="ltr">${latestReplyHtml}</div>`);
  ```
- XML construction: Escape special characters in XML (an example escape helper follows this list):

  ```diff
  -<rule_name>${rule.name}</rule_name>
  +<rule_name>${escape(rule.name)}</rule_name>
  ```
- AI prompts: Sanitize inputs before inclusion in prompts (see the sanitizeText sketch below):

  ```diff
  -${user.about ? `<user_info>${user.about}</user_info>` : ""}
  +${user.about ? `<user_info>${sanitizeText(user.about)}</user_info>` : ""}
  ```
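For the SQL case, parameterized queries are generally preferable to manual escaping because the driver keeps user data separate from query text. A minimal sketch using the node-postgres (`pg`) client; the `messages` table and `subject` column are illustrative, not taken from the diff above:

```typescript
import { Client } from "pg";

// Hypothetical search helper. The $1 placeholder is a bound parameter:
// the driver transmits `query` as data, so quotes or SQL fragments in
// the input cannot alter the statement itself.
async function findMessages(client: Client, query: string) {
  const result = await client.query(
    "SELECT * FROM messages WHERE subject ILIKE $1",
    [`%${query}%`],
  );
  return result.rows;
}
```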
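For the HTML case, DOMPurify's `sanitize` also accepts an optional config; restricting the allowed tags and attributes tightens the output further. A sketch, assuming a browser or jsdom environment; the allow-list shown is an illustrative choice, not a DOMPurify requirement:

```typescript
import DOMPurify from "dompurify";

// Render an email reply body safely: any markup outside the allow-list
// (scripts, event handlers, iframes, ...) is stripped before rendering.
function renderReply(latestReplyHtml: string): string {
  return DOMPurify.sanitize(`<div dir="ltr">${latestReplyHtml}</div>`, {
    ALLOWED_TAGS: ["div", "p", "br", "a", "b", "i", "ul", "ol", "li"],
    ALLOWED_ATTR: ["href", "dir"],
  });
}
```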
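The XML diff calls an `escape` helper that the snippet does not define. A minimal sketch of what such a helper might look like, replacing the five characters XML reserves:

```typescript
// Hypothetical helper: escape the five characters that are special in
// XML text and attribute values.
function escapeXml(value: string): string {
  return value
    .replace(/&/g, "&amp;") // must run first, or it re-escapes the others
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&apos;");
}

// e.g. escapeXml(`Spam & "phishing" rules`)
//   -> 'Spam &amp; &quot;phishing&quot; rules'
```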
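Similarly, `sanitizeText` in the AI-prompt diff is left undefined. One plausible sketch: since the prompt wraps user text in XML-style tags, neutralizing angle brackets keeps the input from closing or forging those delimiters:

```typescript
// Hypothetical helper: user text is interpolated inside <user_info> tags,
// so escape angle brackets to prevent the input from spoofing or closing
// the delimiters. Escaping (rather than deleting) preserves the text.
function sanitizeText(input: string): string {
  return input.replace(/</g, "&lt;").replace(/>/g, "&gt;");
}
```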
Input sanitization is your first line of defense against injection vulnerabilities and should be applied consistently throughout your codebase.