I enjoyed this article by Ken about production LLM use cases with OpenAI models. When it comes to prompts, less is more.
Logs
I enjoyed Martin's article on preserving your shell history. I implemented some of his approaches in my system config.
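I won't reproduce Martin's exact settings here, but as a rough sketch, a zsh history setup along these lines keeps history large, persistent, and shared (option names are standard zsh; the specific values are my own choices, not necessarily Martin's):

```shell
# ~/.zshrc -- keep a large, deduplicated, immediately-persisted history.
HISTFILE=~/.zsh_history
HISTSIZE=1000000             # lines kept in memory
SAVEHIST=1000000             # lines kept in $HISTFILE
setopt INC_APPEND_HISTORY    # write each command as it runs, not at shell exit
setopt EXTENDED_HISTORY      # record timestamps and durations too
setopt HIST_IGNORE_ALL_DUPS  # drop older duplicates of a repeated command
setopt SHARE_HISTORY         # share history across concurrently open shells
```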
Got Gemini 1.5 Pro up and running. I've said this before but I will say it again -- the fact that I don't need to deal with GCP to use Google models gives me joy.
Today, I learned about the Command R model series from Cohere via Shawn's great AI newsletter (AINews). I searched to see whether a plugin was available for llm, and Simon had literally authored one 8(!)...
A great article by Manuel about the forever-growth of companies. I too wish we'd be more willing to celebrate "enough."
I've been digging more into evals. I wrote a simple Claude completion function in openai/evals to better understand how the different pieces fit together. Quick and dirty code:
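The snippet itself isn't reproduced here, so as a sketch of the shape such a thing takes: openai/evals expects a completion function that duck-types its `CompletionFn` protocol (callable on a prompt, returning an object with `get_completions()`). The class and helper names below are mine, and the wiring to the Anthropic Messages API is an assumption about one reasonable way to do it, not the code referenced above:

```python
from typing import Any, Union


class ClaudeCompletionResult:
    """Mirrors evals' CompletionResult: expose get_completions() -> list[str]."""

    def __init__(self, text: str) -> None:
        self.text = text

    def get_completions(self) -> list[str]:
        return [self.text]


def to_messages(prompt: Union[str, list[dict]]) -> list[dict]:
    """evals passes either a bare string or OpenAI-style chat messages;
    normalize both into Anthropic's messages format."""
    if isinstance(prompt, str):
        return [{"role": "user", "content": prompt}]
    # Anthropic takes the system prompt as a separate parameter,
    # so keep only user/assistant turns here.
    return [m for m in prompt if m.get("role") in ("user", "assistant")]


class ClaudeCompletionFn:
    """Drop-in completion function for openai/evals, backed by Claude."""

    def __init__(self, model: str = "claude-3-opus-20240229", max_tokens: int = 1024):
        self.model = model
        self.max_tokens = max_tokens

    def __call__(self, prompt: Union[str, list[dict]], **kwargs: Any) -> ClaudeCompletionResult:
        import anthropic  # lazy import so the module loads without the SDK installed

        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        response = client.messages.create(
            model=self.model,
            max_tokens=self.max_tokens,
            messages=to_messages(prompt),
        )
        return ClaudeCompletionResult(response.content[0].text)
```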
I can't believe I am saying this, but if you play around with language models locally, a 1 TB drive might not be big enough for very long.
As someone learning to draw, I really enjoyed this article: <https://maggieappleton.com/still-cant-draw>. I've watched the first three videos in this playlist so far and have been sketching random...
I'm taking a break from sketchybar for now.
I got this result twice in a row.