Local VLMs Have Improved
About 6 months ago, I experimented with running a few different multi-modal (vision) language models on my MacBook. At the time, the results weren't so great.
One challenge I've continued to have is figuring out how to use the models on Hugging Face. There are usually Python snippets to "run" models, but these often seem to require GPUs and always seem to run into some sort of issue when installing the various Python dependencies. Today, I learned how...
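For context, the snippets I mean look roughly like the sketch below, which uses the transformers pipeline API to run a small image-captioning model on CPU. The model name and image path here are illustrative assumptions, not from the original posts:

```python
# A minimal sketch of running a vision model from Hugging Face locally.
# Assumes `pip install transformers torch pillow`; the model choice and
# image path are illustrative.
from transformers import pipeline

# "image-to-text" pipelines download the model weights on first use.
captioner = pipeline(
    "image-to-text",
    model="Salesforce/blip-image-captioning-base",  # small enough for CPU
)

# Accepts a local file path or a URL to an image.
result = captioner("photo.jpg")
print(result[0]["generated_text"])
```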
I spent some time experimenting with multi-modal models (also called vision models on the ollama site) to see how they perform. You can try these out from the CLI with ollama run <model>, but I opted to use the ollama Python client.
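A minimal sketch of the Python-client approach, assuming ollama serve is running and a vision model such as llava has been pulled; the image path is a placeholder:

```python
# A rough sketch of querying a vision model via the ollama Python client.
# Assumes `pip install ollama`, `ollama serve` is running, and the model
# has been pulled with `ollama pull llava`.
import ollama

response = ollama.chat(
    model="llava",
    messages=[
        {
            "role": "user",
            "content": "Describe this image in one sentence.",
            "images": ["photo.jpg"],  # local path to the image
        }
    ],
)
print(response["message"]["content"])
```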
On macOS, a Launch Agent is a launchd-managed process that runs in the background and performs various tasks or services on behalf of the user. Having recently installed ollama, I've been playing around with various local models. One annoyance about having installed ollama using Nix via nix-darwin is that I need to...
I've spent almost a week, on and off, trying to install ollama using Nix in such a way that ollama serve will be run and managed automatically in the background. Initially, I had tried to install ollama via home-manager. This was straightforward, but finding a way to have ollama serve run...