Understanding LLM engineering and structure with the GPT decoder-only architecture --- mlabonne's LLM course
- LLM Decoding Strategies: generate output sequences with decoding strategies such as greedy decoding, beam search, top-k sampling, and nucleus sampling
- LLM Weight Quantization: compress model parameters and reduce the memory footprint with zero-point and absmax quantization
- LLM Fine-tuning: adapt pre-trained LLMs to specific tasks or domains with transfer learning
- Transformers Architecture: explore LLMs from the transformer architecture through to deployment
- SkimLiterature: categorizing abstract sentences with NLP --- Zero to Mastery TensorFlow course
- NLP Model Experimentation: exploring recurrent neural networks (RNNs) and convolutional neural networks (CNNs) to classify text
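As a rough illustration of the decoding strategies listed above, here is a minimal NumPy sketch of greedy decoding, top-k sampling, and nucleus (top-p) sampling for a single decoding step. The vocabulary size and logits are made up for the example; a real model would produce the logits.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    z = logits - logits.max()          # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def greedy(logits):
    """Greedy decoding: always pick the single most likely token."""
    return int(np.argmax(logits))

def top_k_sample(logits, k=5):
    """Top-k sampling: sample only among the k highest-scoring tokens."""
    top = np.argsort(logits)[-k:]
    probs = softmax(logits[top])
    return int(rng.choice(top, p=probs))

def nucleus_sample(logits, p=0.9):
    """Nucleus (top-p) sampling: sample from the smallest set of tokens
    whose cumulative probability mass reaches p."""
    order = np.argsort(logits)[::-1]
    probs = softmax(logits)[order]
    cutoff = int(np.searchsorted(np.cumsum(probs), p)) + 1
    keep = order[:cutoff]
    return int(rng.choice(keep, p=softmax(logits[keep])))

logits = rng.normal(size=50)  # stand-in for one step of model output
```

Beam search extends the greedy idea by keeping the `b` best partial sequences at each step instead of just one; it is omitted here because it operates over whole sequences rather than a single step.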
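The two quantization schemes mentioned above can be sketched in a few lines of NumPy. Absmax quantization is symmetric (it scales by the largest absolute value), while zero-point quantization is asymmetric (it maps the full `[min, max]` range onto the int8 range). The weight matrix here is random toy data, not real model weights.

```python
import numpy as np

def absmax_quantize(x, bits=8):
    """Symmetric quantization: scale by the absolute maximum value."""
    qmax = 2 ** (bits - 1) - 1                         # 127 for int8
    scale = np.max(np.abs(x)) / qmax
    q = np.round(x / scale).astype(np.int8)
    return q, scale

def zeropoint_quantize(x, bits=8):
    """Asymmetric quantization: shift by a zero point so the full
    [min, max] range uses all int8 levels."""
    qmin, qmax = -2 ** (bits - 1), 2 ** (bits - 1) - 1  # -128, 127
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

weights = np.random.default_rng(1).normal(size=(4, 4)).astype(np.float32)

q_abs, s_abs = absmax_quantize(weights)
deq_abs = q_abs.astype(np.float32) * s_abs             # dequantize

q_zp, s_zp, zp = zeropoint_quantize(weights)
deq_zp = (q_zp.astype(np.float32) - zp) * s_zp         # dequantize
```

Dequantizing and comparing against the original weights shows the reconstruction error stays within about one quantization step, which is the trade-off these schemes make for a 4x smaller footprint (float32 to int8).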
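To make the CNN-for-text idea above concrete, here is a minimal forward pass of a 1D-convolutional text classifier in plain NumPy: embed the tokens, slide a convolution over the sequence, global-max-pool, and apply a sigmoid. All sizes and weights are made-up placeholders; the notebooks use trained Keras models instead.

```python
import numpy as np

rng = np.random.default_rng(42)

vocab, embed_dim, seq_len = 1000, 16, 20
n_filters, kernel = 8, 3

# Randomly initialized parameters (a real model would learn these)
E  = rng.normal(size=(vocab, embed_dim)) * 0.1        # embedding table
W  = rng.normal(size=(n_filters, kernel, embed_dim)) * 0.1
b  = np.zeros(n_filters)
Wo = rng.normal(size=(n_filters, 1)) * 0.1            # output layer

def conv_text_classifier(token_ids):
    """1D convolution over token embeddings + global max pooling + sigmoid."""
    x = E[token_ids]                                   # (seq_len, embed_dim)
    windows = np.stack([x[i:i + kernel]                # sliding windows
                        for i in range(seq_len - kernel + 1)])
    feats = np.einsum('wke,fke->wf', windows, W) + b   # (windows, filters)
    pooled = np.maximum(feats, 0).max(axis=0)          # ReLU, then max pool
    logit = pooled @ Wo
    return 1 / (1 + np.exp(-logit))                    # P(class = 1)

tokens = rng.integers(0, vocab, size=seq_len)          # fake token ids
prob = conv_text_classifier(tokens)
```

An RNN variant would replace the convolution-plus-pooling step with a recurrent cell consuming one embedding per time step; the embedding and output layers stay the same.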
The concepts and methodologies explored in this repository draw inspiration from various sources, including research papers, online courses, and community contributions.
To get started with the LLM exploration and NLP projects, refer to the respective Jupyter notebooks provided in this repository.