Clouds
Current Wx: Temp: 57.13°F Pressure: 1013hPa Humidity: 91% Wind: 3.69mph · Words: 1067
Between researching the new house, warmer weather luring me outside, a grueling lower-body workout, and getting sidetracked into investigating local LLMs, the marmot has been neglected.
This was the sky last night. It had been overcast for much of the day, but it was warm out so I spent some time outside and in the garage. It cleared up toward the latter part of the afternoon, and we sat outside on the porch just watching the clouds blow by and making some vitamin D whenever the sun appeared. It was very windy, which seems to be a regular thing around here. I've put two 15lb dumbbells in the grill to keep it from getting blown over. Seems to be working, but I'd rather have the weights for side raises.
Well, getting back to where we left off: Brad the Builder came by on Saturday and we had a productive meeting reviewing the revised draft our designer had sent us. Brad recommended a change that will save us some money, going from 8" concrete cores to 6". It will also get us an extra 4" in both interior dimensions if we keep the same footprint, which we are. And we can use those inches. It nets out to 24 sq ft, but you can do a lot with an extra 24 sq ft.
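If you're curious how 4" in each direction adds up to 24 sq ft, the arithmetic is easy to sketch. The interior dimensions below are made-up examples, not our actual plan:

```python
# Extra floor area from adding a few inches to both interior dimensions.
# The 30' x 42' interior is an illustrative assumption, not the real plan.
def area_gain(length_ft: float, width_ft: float, gain_in: float = 4.0) -> float:
    """Square footage gained by adding gain_in inches to both dimensions."""
    g = gain_in / 12.0  # convert inches to feet
    return (length_ft + g) * (width_ft + g) - length_ft * width_ft

print(round(area_gain(30, 42), 1))  # about 24 sq ft for these dimensions
```

The gain is roughly (length + width) / 3 sq ft, so any interior whose sides sum to about 72 feet nets you the same two dozen square feet.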
Sunday we had a session with our trainer and focused on lower body, which I hadn't worked in quite a while. Oof! I probably shouldn't have done 30 minutes on the elliptical before the workout. I spent most of the rest of Sunday in the recliner, and my legs were so sore yesterday. Happy to do it though. I think you get the most benefit in overall health from lower body workouts in terms of the signals they send to other systems in the body. You need to train your upper body for activities of daily living, like moving furniture, but it's your glutes that really tell your body that you're serious about this and it ought to get with the program. Or so I've read.
When I wasn't in the recliner, I was reading about running local LLMs. It seems like data center AI services are starting to really take off, and there will be some scarcity of compute in the near future. I've been mostly pleased with the results I've been getting exploring home construction in Claude, and I've held my nose and signed up for a month-to-month subscription with ChatGPT for comparing responses. (Claude seems better.)
But what about local LLMs? What are they good for? From my research I can confidently say, I don't know. But they seem like an area that is also rapidly progressing. I installed Apfel on my M3 MBP and used ChatGPT to figure that out, because it was a bit of an ordeal. Apfel gives you access in the terminal to Apple's Foundation model, which is a small LLM built into the OS. I got it working, but I haven't done much with it. It's the model that supports text summaries, and some other services in the OS. It also knows that Paris is the capital of France.
The rabbit hole led to other models like Google's Gemma 4, which comes in a number of sizes. I have a 24GB M3 MBP, which is notionally capable of running Gemma 4:26b, supposedly a fairly robust on-device LLM. I haven't gotten around to installing it yet because I started researching windows for the house, but I hope to get to it today or tomorrow.
My thought here is that cloud-based AI is going to be the most capable system for the foreseeable future, but cost and availability may become problematic. I may not need a frontier model that has been trained on everything, if I can get a model that can help me interpret data that is already otherwise available on the web. For the moment, it appears that 24GB of unified memory is barely sufficient to run something like Gemma 4:26b. I expect that situation to improve over time, where 24GB will be a robust system for hosting a decently capable LLM; but for the near term, having something with a little more RAM headroom would probably offer a greater range of potential solutions and better performance.
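A back-of-envelope sketch shows why 24GB is tight for a 26-billion-parameter model. The rule of thumb is that the weights alone take parameters times bytes-per-weight, and the KV cache, the OS, and everything else you're running still need room on top. These numbers are illustrative estimates, not measurements of any particular model:

```python
# Rough memory estimate for hosting an LLM locally.
# Weights take (parameters * bits-per-weight / 8) bytes; runtime
# overhead (KV cache, OS, other apps) comes out of what's left.
def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight size in GB at a given quantization level."""
    return params_billion * bits_per_weight / 8

for bits in (16, 8, 4):
    print(f"26B params @ {bits}-bit: ~{weights_gb(26, bits):.0f} GB of weights")
```

At full 16-bit precision the weights alone would dwarf 24GB; even quantized down to 4 bits they'd eat roughly 13GB of unified memory before the model does any work, which is why a little extra headroom matters.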
All of that led to a certain amount of irrational anxiety, which I addressed by buying a refurbished M5 14" MBP with 32GB of RAM and a 2TB SSD for $2K. (The 14" M3 MBP I'm writing this with was $3K with 24GB and 2TB when I bought it a couple of years ago.) The M5 gets me a couple more cores, faster memory bandwidth, and an extra 8GB of RAM. The higher tier processors are just too expensive, in my opinion. Though if you consider the inflation-adjusted prices of all the early Apple IIs I bought back in the day, they're probably comparable.
Anyway, I freely admit that I know next to nothing about AI and LLMs; but I'm getting the very strong impression that they can be a genuine asset in exploiting the capabilities of the computer they're hosted on, and as an aid to understanding data found on the web. I want to try to explore that possibility. I think the M3 with 24GB could probably do most of what I'd like to do, but I think I'd be bumping up against some frustrating limitations.
Probably a dumb idea, particularly in light of other demands on my income at the moment. Wouldn't be the first time I wasted money on something. Wise or foolish? I don't know. Maybe time will tell.
We had a productive meeting with the designer yesterday afternoon, and I'm still feeling pretty excited about the progress we've made. Mitzi has some concerns about the windows and the west elevation, which is supposed to be the highlight feature of the design. I'm pretty happy with them now. We'll see how that evolves over time.
We should get another draft tomorrow and then we'll meet again early next week to try and nail everything down. Perhaps that's a pun?
At least all this is distracting me from the unfolding catastrophe taking place before us. I only hope that the consequences don't impair our ability to get this project to completion.
In the meantime, the beat goes on...
✍️ Reply by email