When documenting AI applications that run local models or perform inference, always provide comprehensive infrastructure requirements including hardware specifications, platform-specific drivers, and setup instructions. This ensures users can successfully run AI models without encountering compatibility issues.
Include the following details, illustrated by this example documentation structure:
### Hardware Requirements
- **RAM/VRAM**: 8GB minimum (3B models), 16GB recommended (7B models)
- **CPU**: ARM and x86 architectures supported
- **GPU**: NVIDIA supported (via llama.cpp); AMD and Intel support planned
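Hardware requirements like these are easier to act on when the documentation ships a small preflight check. The sketch below is a minimal, hedged example in Python (not taken from any particular project): it reads total RAM from `/proc/meminfo` (Linux-specific) and queries NVIDIA VRAM via `nvidia-smi` when present; the thresholds mirror the example figures above and should be adapted to your models.

```python
import shutil
import subprocess

# Thresholds taken from the example requirements above; adjust per model.
MIN_RAM_GB = 8    # 3B models
REC_RAM_GB = 16   # 7B models

def total_ram_gb() -> float:
    """Total system RAM in GB, read from /proc/meminfo (Linux only)."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / 1024 / 1024  # kB -> GB
    raise RuntimeError("MemTotal not found in /proc/meminfo")

def nvidia_vram_gb() -> float | None:
    """Total VRAM in GB via nvidia-smi, or None if no NVIDIA driver is found."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return int(out.splitlines()[0]) / 1024  # MiB -> GB

if __name__ == "__main__":
    ram = total_ram_gb()
    if ram >= REC_RAM_GB:
        verdict = "OK for 7B models"
    elif ram >= MIN_RAM_GB:
        verdict = "OK for 3B models"
    else:
        verdict = "below documented minimum"
    print(f"RAM: {ram:.1f} GB ({verdict})")

    vram = nvidia_vram_gb()
    print(f"VRAM: {vram:.1f} GB" if vram is not None
          else "No NVIDIA GPU detected")
```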
### Platform Setup
- **Windows**: Install NVIDIA drivers if GPU available
- **Linux**: Install CUDA Toolkit if GPU available
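The platform setup steps can likewise be verified programmatically. As a hedged sketch under the same assumptions (Python, tools discovered via `PATH`), this snippet detects the operating system and checks whether the NVIDIA driver (`nvidia-smi`) and, on Linux, the CUDA Toolkit compiler (`nvcc`) are installed:

```python
import platform
import shutil

def check_gpu_setup() -> None:
    """Report whether the platform-specific GPU prerequisites look installed."""
    system = platform.system()           # "Windows", "Linux", or "Darwin"
    has_driver = shutil.which("nvidia-smi") is not None

    if system == "Windows":
        print("NVIDIA driver found." if has_driver
              else "No NVIDIA driver detected; install one if a GPU is present.")
    elif system == "Linux":
        has_cuda = shutil.which("nvcc") is not None  # CUDA Toolkit compiler
        print(f"NVIDIA driver: {'yes' if has_driver else 'no'}; "
              f"CUDA Toolkit (nvcc): {'yes' if has_cuda else 'no'}")
    else:
        print(f"{system}: no GPU setup steps documented in this example.")

if __name__ == "__main__":
    check_gpu_setup()
```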
Documenting requirements at this level of detail prevents user frustration and ensures AI applications run as intended across different hardware configurations.