quickly accessing llama3.2 from a terminal (or any other model)
I spend a considerable amount of my time in terminal(s) and I've gotten used to incorporating large language models into my workflow. But occasionally it's just too annoying to switch over to a browser to ask a model something.
My previous solution for that was that I had