Deep (Learning) Focus
Finetuning LLM Judges for Evaluation
The Prometheus suite, JudgeLM, PandaLM, AutoJ, and more...
Dec 2 • Cameron R. Wolfe, Ph.D.
November 2024
Automatic Prompt Optimization
Practical techniques for improving prompt quality without manual effort...
Nov 4 • Cameron R. Wolfe, Ph.D.
September 2024
Model Merging: A Survey
From modern LLM applications to the early days of machine learning research...
Sep 16 • Cameron R. Wolfe, Ph.D.
July 2024
Using LLMs for Evaluation
LLM-as-a-Judge and other scalable additions to human quality ratings...
Jul 22 • Cameron R. Wolfe, Ph.D.
June 2024
Summarization and the Evolution of LLMs
How research on abstractive summarization changed language models forever...
Jun 3 • Cameron R. Wolfe, Ph.D.
April 2024
Modern Advances in Prompt Engineering
Distilling and understanding the most rapidly-evolving research topic in AI...
Apr 29 • Cameron R. Wolfe, Ph.D.
DBRX, Continual Pretraining, RewardBench, Faster Inference, and More
A deep dive into recent and impactful advancements in LLM research...
Apr 1 • Cameron R. Wolfe, Ph.D.
March 2024
Mixture-of-Experts (MoE): The Birth and Rise of Conditional Computation
What two decades of research taught us about sparse language models...
Mar 18 • Cameron R. Wolfe, Ph.D.
Decoder-Only Transformers: The Workhorse of Generative LLMs
Building the world's most influential neural network architecture from scratch...
Mar 4 • Cameron R. Wolfe, Ph.D.
February 2024
Dolma, OLMo, and the Future of Open-Source LLMs
How making open-source LLMs truly open will change AI research for the better...
Feb 20 • Cameron R. Wolfe, Ph.D.
A Practitioner's Guide to Retrieval Augmented Generation (RAG)
How basic techniques can be used to build powerful applications with LLMs...
Feb 5 • Cameron R. Wolfe, Ph.D.
January 2024
Sleeper Agents, LLM Safety, Finetuning vs. RAG, Synthetic Data, and More
Notable advancements and topics in LLM research from January of 2024...
Jan 22 • Cameron R. Wolfe, Ph.D.