IRCC
External & API Cluster ($)
Anthropic/Claude
xAI/Grok
Google/Gemini
OpenAI/ChatGPT
Research Cluster
Open LLMs
Meta/Llama
Meta Llama Models (Llama 2 / Llama 3)
• Strengths: General-purpose reasoning, multilingual capabilities, strong base for fine-tuning.
• Best for: Custom chatbots, research, instruction following.
• Vicuna – Chat-focused.
• Alpaca – Instruction-following.
• OpenHermes – Balanced, conversational and general reasoning.
Mistral
Mistral 7B / Mixtral (Mixture of Experts)
• Strengths: Fast, efficient, and very strong performance for their size.
• Best for: Instruction-following, chatbots, and even coding tasks.
• Mistral 7B – Lightweight and powerful.
• Mixtral – Uses only part of the model per query (efficient Mixture of Experts), great for multitask dialogue and cost-effective serving.
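The "uses only part of the model per query" idea above is top-k expert gating: a router scores all experts, but only the highest-scoring few actually run. A minimal sketch of that routing step, with NumPy and purely illustrative sizes and names (this is the general technique, not Mixtral's actual implementation):

```python
import numpy as np

def moe_layer(x, gate_w, expert_ws, k=2):
    """Route input x to the top-k experts by gate score and mix their outputs."""
    scores = x @ gate_w                       # one gate score per expert
    top = np.argsort(scores)[-k:]             # indices of the k best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                  # softmax over the selected experts only
    # Only the chosen experts compute; the other experts' weights stay idle,
    # which is why per-query cost is a fraction of the full parameter count.
    return sum(w * (x @ expert_ws[i]) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 8                           # toy dimensions, not Mixtral's
x = rng.normal(size=d)
gate_w = rng.normal(size=(d, n_experts))
expert_ws = [rng.normal(size=(d, d)) for _ in range(n_experts)]
y = moe_layer(x, gate_w, expert_ws, k=2)      # uses 2 of 8 experts for this input
```

With k=2 of 8 experts, roughly a quarter of the expert parameters are touched per token, which is the source of Mixtral's cost-effective serving.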
MS Phi
Phi-1.5 / Phi-2 (Microsoft)
• Strengths: Small model, trained on textbook-quality data.
• Best for: Lightweight tutoring bots, educational tools, reasoning on a tight budget.
• Phi-2 can do math, logic puzzles, and basic reasoning surprisingly well for its 2.7B size.
EleutherAI
GPT-J (6B) / GPT-NeoX (20B)
• Strengths: Early open-weight models, good at natural language generation.
• Best for: Writing, summarization, general content generation.
• Note: GPT-J is easier to run locally; NeoX has higher performance but needs more resources.
Administrative Cluster
Teaching/Learning Cluster
Applications/Users
Compute
Databases
AI Clusters
Large Language Models (LLMs)
Applications/Users