Phi-4
Best-in-class 14B SLM for STEM reasoning and coding; fits consumer hardware.
- Creator: Microsoft
- Lifecycle: Active
- Context: 16.4K tokens
- Max output: 16.4K tokens
- Released: Dec 12, 2024
- Status: unknown
- Input: $0.13 / 1M tokens
- Output: $0.52 / 1M tokens
- Cached read: —
- Cached write: —
- Batch discount: —
- Source: Phi-4 pricing
- Verified: Apr 5, 2026 (High)
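The per-token rates above can be turned into a quick cost estimate. A minimal sketch, assuming the quoted rates; the helper name is illustrative, not part of any SDK:

```python
# Quoted Phi-4 rates from the table above (USD per 1M tokens).
INPUT_USD_PER_MTOK = 0.13
OUTPUT_USD_PER_MTOK = 0.52

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request at the quoted rates."""
    return (input_tokens * INPUT_USD_PER_MTOK
            + output_tokens * OUTPUT_USD_PER_MTOK) / 1_000_000

# Example: a 10K-token prompt with a 2K-token completion.
print(round(estimate_cost(10_000, 2_000), 6))  # → 0.00234
```

At these rates, even a full 16K-token context costs well under a cent per request.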
Capabilities
- Modalities: text→text
- Capabilities: batch support, function calling, structured outputs
- Strengths: beats models 3× its size on STEM; fits in 16 GB RAM
- Tradeoffs: short 16K context; text-only
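Phi-4 is an open-weight model, so structured outputs depend on the serving stack rather than a first-party API. A minimal sketch of a structured-output request body for an OpenAI-compatible endpoint (e.g. a vLLM deployment); the model id, schema, and field names here are assumptions for illustration:

```python
import json

# Illustrative request body for an OpenAI-compatible /chat/completions
# endpoint serving Phi-4. The "json_schema" response_format constrains
# the model to emit JSON matching the given schema.
payload = {
    "model": "phi-4",  # assumed deployment name
    "messages": [
        {"role": "user", "content": "Extract the answer: 12 * 7 = ?"}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "answer",
            "schema": {
                "type": "object",
                "properties": {"value": {"type": "integer"}},
                "required": ["value"],
            },
        },
    },
    "max_tokens": 64,
}

print(json.dumps(payload, indent=2))
```

The same payload shape works for batch submission where the serving stack supports it; only the transport differs.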
Migration Guidance
Best open-weight SLM for constrained compute. Upgrade to Phi-4 MoE or Llama 3.x for larger context.