Open Source LLM Picker
Find the right open-source model for your project. Filter by task, model size, and license to compare Llama, Mistral, Phi, Gemma, Qwen, and Code Llama, with VRAM requirements and deployment options.
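The picker's filter logic can be sketched as a plain data structure plus a filter function. The names, licenses, and VRAM figures come from the cards on this page; the field names and function signature are illustrative assumptions, not the picker's actual implementation.

```python
# Illustrative sketch of the picker's filtering; field names are assumptions.
# Figures are taken from the model cards on this page (4 of 9 shown).
MODELS = [
    {"name": "Llama 3.1 8B", "license": "Meta Community", "vram_gb": 6},
    {"name": "Mistral 7B",   "license": "Apache 2.0",     "vram_gb": 5},
    {"name": "Phi-3 Mini",   "license": "MIT",            "vram_gb": 3},
    {"name": "Mixtral 8x7B", "license": "Apache 2.0",     "vram_gb": 26},
]

def pick(models, license=None, max_vram_gb=None):
    """Return models matching an optional license and VRAM budget."""
    out = models
    if license is not None:
        out = [m for m in out if m["license"] == license]
    if max_vram_gb is not None:
        out = [m for m in out if m["vram_gb"] <= max_vram_gb]
    return out

# Apache-licensed models that fit on an 8 GB GPU:
print([m["name"] for m in pick(MODELS, license="Apache 2.0", max_vram_gb=8)])
# → ['Mistral 7B']
```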
Llama 3.1 8B (8B parameters)
License: Meta Community | VRAM: ~6 GB
Best for: Lightweight general tasks, chatbots, edge deployment
Llama 3.1 70B (70B parameters)
License: Meta Community | VRAM: ~40 GB
Best for: Strong reasoning and coding, enterprise workloads
Llama 3.1 405B (405B parameters)
License: Meta Community | VRAM: ~230 GB (multi-GPU)
Best for: Maximum open-source performance, research
Mistral 7B (7B parameters)
License: Apache 2.0 | VRAM: ~5 GB
Best for: Fast inference, EU compliance, resource-constrained environments
Mixtral 8x7B (46.7B total parameters, MoE; ~12.9B active per token)
License: Apache 2.0 | VRAM: ~26 GB
Best for: Multilingual tasks, efficient MoE architecture, coding
Phi-3 Mini (3.8B parameters)
License: MIT | VRAM: ~3 GB
Best for: Edge and mobile deployment, on-device AI, lightweight tasks
Gemma 2 (9B / 27B parameters)
License: Gemma License | VRAM: ~6-18 GB
Best for: Research, instruction following, Google ecosystem integration
Qwen 2 (7B / 72B parameters)
License: Apache 2.0 | VRAM: ~5-42 GB
Best for: Multilingual (strong CJK), coding, math reasoning
Code Llama (7B / 13B / 34B parameters)
License: Meta Community | VRAM: ~5-20 GB
Best for: Code generation, code completion, programming-specific tasks
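The VRAM figures above roughly follow a standard rule of thumb: parameter count times bytes per parameter (0.5 bytes at 4-bit quantization, 2 bytes at fp16), plus headroom for activations and the KV cache. A minimal sketch, where the ~20% overhead factor is an assumption rather than a measured value:

```python
def vram_estimate_gb(params_billions, bits_per_param=4, overhead=1.2):
    """Rough VRAM estimate: quantized weight size plus ~20%
    headroom for activations and KV cache (assumed factor)."""
    weight_gb = params_billions * bits_per_param / 8
    return weight_gb * overhead

# 8B model at 4-bit: in the same ballpark as the ~6 GB listed above
print(round(vram_estimate_gb(8), 1))   # → 4.8
# 70B model at 4-bit: close to the ~40 GB listed above
print(round(vram_estimate_gb(70), 1))  # → 42.0
```

Real requirements vary with context length, batch size, and runtime, so treat the card figures as minimums for short-context, single-user inference.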
Deploy LLMs on real AWS infrastructure
Deploy open-source models on Amazon Bedrock and SageMaker through guided, hands-on missions. Real infrastructure, real learning.
Start building free →
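As a taste of what the Bedrock route looks like, here is a minimal sketch of invoking a Llama model via `boto3`. The model ID and the `prompt`/`max_gen_len`/`temperature` request shape follow Bedrock's documented format for Meta Llama models, but check the model catalog for the exact ID available in your region; everything else here is illustrative.

```python
import json

# Assumed Bedrock model ID for Llama 3.1 8B; verify in your region's catalog.
MODEL_ID = "meta.llama3-1-8b-instruct-v1:0"

def build_request(prompt, max_gen_len=512, temperature=0.5):
    """Build the JSON body Bedrock expects for Meta Llama models."""
    return json.dumps({
        "prompt": prompt,
        "max_gen_len": max_gen_len,
        "temperature": temperature,
    })

def invoke(prompt):
    """Call Bedrock; requires AWS credentials and model access enabled."""
    import boto3
    client = boto3.client("bedrock-runtime")
    resp = client.invoke_model(modelId=MODEL_ID, body=build_request(prompt))
    return json.loads(resp["body"].read())["generation"]

body = build_request("Summarize the Apache 2.0 license in one sentence.")
print(json.loads(body)["max_gen_len"])  # → 512
```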