Open Source LLM Models

Prompt Engineering

Few-Short Prompt

Need of Few-shot technique

Prompt Manipulation techniques

Comparison among LLM models

Reliable few-shot selection

Zero-Shot Prompts

Effectiveness and need of Zero-shot

Types of Zero-shot

Comparison with other advance PE techniques

Fine-Tuning & Pre-training

Why pretrained model is required for finetuning?

Pretraining, Finetuning Techniques

Instruction Fine-tunning and Transfer Learning

Implementation, Resources, Memory requirements

Prompt Injection

Prompt Injection Types

Prompt Injection Impacts over LLMs

Knowledge Distillation

Model Compression

Teacher & Student Model

Prediction Layer distillation

Intermediate Layer distillation

Low Rank Adaption

Quantization (QLora)

Improvements overlay

Creating LLMs

Software limitations for LLMs

Open Source Models Comparison with Open AI

Model Evaluation

Models Limitations based on parameters and size

Open Source model leaderboard,

datasets for pre-training and Fine-tuning

Cost effectiveness and size limitation of LLM

Cost Estimation of Open source and Open AI models

Challenges faced by LLM models

Strategies for optimization

Cost Calculation

Hardware Selection

Model compression, distillation, and prunning based cost

Data Cleaning and Preprocessing for LLM Training

Data Cleaning techniques,

Data Cleaning Tools

Data preprocessing techniques, tools

Data Privacy and Security in LLMS

Security concerns in Open-source Vs Closed source LLMs

Data Privacy and cost on premises

Data Privacy and cost on Cloud services

Introduction to LLM

LLMs Need, Architecture, Applications

click to edit