
Phi-4

Best-in-class 14B small language model (SLM) for STEM reasoning and coding; runs on consumer hardware.

Creator
Microsoft
Lifecycle
Active
Context
16.4K
Max output
16.4K
Released
Dec 12, 2024
Status
unknown
Input
$0.13 / 1M tokens
Output
$0.52 / 1M tokens
Source
Phi-4 pricing
Verified
Apr 5, 2026 (High)
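The per-token rates above translate directly into per-request costs. A minimal sketch (rates copied from the pricing table; the helper name is illustrative):

```python
# Phi-4 rates in USD per 1M tokens, from the pricing table above.
INPUT_RATE = 0.13
OUTPUT_RATE = 0.52

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from its token counts."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# e.g. a 10K-token prompt with a 2K-token completion:
print(f"${estimate_cost(10_000, 2_000):.6f}")  # → $0.002340
```

At these rates, even prompts that fill the full 16.4K context cost well under a cent per call.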

Capabilities

Modalities
Text in, text out
Capabilities
Batch support, function calling, structured outputs
Strengths
Outperforms models 3× its size on STEM benchmarks; fits in 16 GB of RAM
Tradeoffs
Short 16K context window; text-only
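Structured outputs means the model can be asked to constrain its reply to valid JSON. As a minimal sketch of what a request to an OpenAI-compatible chat endpoint might look like (the model name follows the `phi-4` alias from the release table below, but the exact serving details and supported parameters depend on the host):

```python
import json

# Hypothetical request payload for an OpenAI-compatible chat endpoint.
# "phi-4" matches the published alias; "response_format" is the usual way
# to request structured (JSON) output on such APIs, but host support varies.
payload = {
    "model": "phi-4",
    "messages": [
        {"role": "system", "content": "Reply in JSON with keys 'answer' and 'confidence'."},
        {"role": "user", "content": "What is 12 * 7?"},
    ],
    "response_format": {"type": "json_object"},  # structured-output request
    "max_tokens": 100,  # must stay within the 16.4K output cap
}

print(json.dumps(payload, indent=2))
```

Function calling works the same way: add a `tools` array to the payload and the model returns a tool call instead of free text.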
Benchmark Coverage

Benchmark | Version | Score | Date | Source | Notes
MMLU | 2024 | 84.8% | Dec 1, 2024 | Microsoft | Vendor-reported
MATH | 2024 | 80.4% | Dec 1, 2024 | Microsoft | Vendor-reported
HumanEval | 2024 | 82.6% | Dec 1, 2024 | Microsoft | Vendor-reported

Release History

Release | Alias | Lifecycle | Release Date | Deprecation | Shutdown | Summary
Phi-4 | phi-4 | Active | Dec 12, 2024 | n/a | n/a | Current published model family snapshot.

Host Coverage

Host | Type | Context | Pricing Note | Differences
Azure AI Foundry | first-party | 16.4K | $0.13/$0.52 per MTok | n/a

Migration Guidance

Best open-weight SLM for constrained compute. Upgrade to Phi-4 MoE or Llama 3.x for larger context.

Change Events

Date | Type | Title | Description | Source
Dec 12, 2024 | family_added | Phi-4 published | Initial public model family launch. | Phi-4 release notes

Other models from Microsoft

Phi-4 Mini