When providing context to LLMs, choose the most efficient method based on the nature and size of the context data. For bounded, static context (like feature flag mappings or configuration options), inject the information directly into the system prompt rather than using tool calls. This approach is faster, more cost-effective, and reduces complexity.
Prefer prompt injection when:
- The context is bounded and small enough to fit comfortably in the prompt (e.g. a known set of feature flags or configuration options).
- The data is static for the duration of the session, so there is nothing to fetch at request time.

Use tool calls when:
- The context is large or unbounded, so only the relevant slice should be retrieved per request.
- The data is dynamic and must reflect current state at the moment the model needs it.
Example:
# Good: Inject bounded feature flag context into the prompt
enhanced_system_prompt = SURVEY_CREATION_SYSTEM_PROMPT
if feature_flag_context:
    enhanced_system_prompt += f"\n\n## Available Feature Flags\n{feature_flag_context}"

# Avoid: Using a tool call for static, bounded context.
# This adds unnecessary complexity and cost.
def retrieve_flag_id(feature_key):  # Tool call - overkill for small, static data
    return api_call_to_get_flag_id(feature_key)
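By contrast, a tool call is worth the extra round trip when the context is dynamic or unbounded, for example per-user flag values that change at request time. A minimal sketch of that path, using an OpenAI-style tool schema; the endpoint, function, and schema names here are illustrative, not from the codebase:

import requests

FLAG_SERVICE_URL = "https://flags.internal.example.com"  # hypothetical endpoint

# Appropriate: a tool call for dynamic, unbounded context that cannot be
# embedded in the prompt up front.
def get_flag_values_for_user(user_id: str, flag_keys: list[str]) -> dict:
    """Fetch the current flag values for one user at request time."""
    response = requests.get(
        f"{FLAG_SERVICE_URL}/users/{user_id}/flags",
        params={"keys": ",".join(flag_keys)},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()

# Tool schema the LLM sees; only the keys it asks for are fetched.
GET_FLAG_VALUES_TOOL = {
    "name": "get_flag_values_for_user",
    "description": "Return current feature flag values for the given user.",
    "parameters": {
        "type": "object",
        "properties": {
            "user_id": {"type": "string"},
            "flag_keys": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["user_id", "flag_keys"],
    },
}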
Also ensure your prompts only reference capabilities the LLM actually has: avoid instructing the model to manipulate internal state variables it cannot access.
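For example, the first prompt fragment below asks the model to change state it cannot touch, while the second keeps the model inside its real capability of producing output that the backend then acts on (both strings are illustrative):

# Avoid: instructs the model to manipulate state it cannot access.
BAD_INSTRUCTION = (
    "When the user confirms the survey, set survey.status = 'active' "
    "and update the feature_flag_context variable."
)

# Better: the model only produces output; the backend owns the state change.
GOOD_INSTRUCTION = (
    "When the user confirms the survey, respond with the JSON object "
    '{"action": "activate_survey", "survey_name": "<name>"} and nothing else.'
)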