My Git Cheatsheet
Some commands I often forget when using git.
1. Basics. RLHF: Reinforcement Learning from Human Feedback. SFT: Supervised Fine-Tuning. RL trains neural networks through trial and error. When fine-tuning a language model with RLHF, the model produces some text and then receives a score/reward from a human annotator that captures the quality of that text. We then use RL to fine-tune the language model to generate outputs with high scores. In this case, we cannot apply a loss function that trains the language model to maximize human preferences with supervised learning, because there is no easy way to express the score humans give as a mathematical function of the neural network's output. In other words, we cannot backpropagate a loss applied to this score through the rest of the neural network: doing so would require differentiating (i.e., computing the gradient of) the system that generates the score, which is a human subjectively evaluating the generated text. ...
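As a rough illustration of this point (not code from the post itself), here is a minimal PyTorch-style sketch of a REINFORCE-style policy-gradient update. The toy model, the prompt tokens, and the reward value are all placeholders; the point is only that the human score enters the loss as a plain scalar, so backpropagation never has to go through the scorer.

```python
# Minimal sketch, assuming a toy "language model" and a made-up reward value.
import torch
import torch.nn as nn

vocab_size, hidden = 100, 32
model = nn.Sequential(
    nn.Embedding(vocab_size, hidden),
    nn.Flatten(),
    nn.Linear(hidden * 4, vocab_size),  # toy next-token head over 4-token contexts
)

context = torch.randint(0, vocab_size, (1, 4))        # hypothetical prompt tokens
logits = model(context)                               # next-token logits
dist = torch.distributions.Categorical(logits=logits)
action = dist.sample()                                # the model "generates" a token

reward = 0.7  # score from a human annotator: a constant, no gradient flows into it

# Policy-gradient loss: -reward * log p(action). The reward only scales the
# gradient of the log-probability; we never differentiate the human scorer.
loss = -(reward * dist.log_prob(action)).mean()
loss.backward()
```

In a real RLHF setup the scalar would come from a learned reward model and the update would typically use PPO rather than plain REINFORCE, but the gradient still flows only through the policy's log-probabilities, never through the source of the score.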
Introduction to memory coalescing with Nsight Compute.
This blog post discusses the arithmetic intensity of large language models and how it affects their performance.
A brief talk on speculative decoding in large language models.
How do I work with WSL?
How to create a LibTorch project.
My vim configuration.
This post shows how to configure launch.json in VSCode for debugging C++.
This post shows how to configure launch.json in VSCode for debugging Python.