quickly accessing llama3.2 from a terminal (or any other model)
I spend a considerable amount of my time in terminal(s), and I’ve gotten used to incorporating large language models into my workflow. But occasionally it’s just too annoying to switch over to a browser to ask something. My previous solution was a Telegram bot I’d chat with (convenient, and faster than the web), but I felt something was missing. And today, with my procrastination spiking, I’ve finally solved it! ...