"Yeah, well, you know, that's just like, uh, your opinion, man."

The Eagles: Artists in Amber

10:51 Tuesday, 4 March 2014
Words: 41

Saw The Eagles here in Jacksonville on Wednesday night. It was a good show.

Unlike The Dude, from whose utterance this blog takes its name, I do not hate The Eagles... er, man.

Nope! I have many fond memories of Eagles tunes.

Little Victories

09:37 Monday, 4 March 2024
Current Wx: Temp: 63.16°F Pressure: 1008hPa Humidity: 92% Wind: 1.01mph
Words: 118

Yesterday's event was troubling in a way. I support the work of the NFLT, but I didn't appreciate the way some information was presented yesterday. I've offered feedback to the organization's president, but have heard nothing back. I don't know if he's seen the email or not, so I'll give it a few days.

But, in better news, I did get the Automator workflow to function "automatically." That is, on January 1, 2025 I should be able to run the Photos script and have the image moved to the correct folder without having to modify the workflow to update the folder path.

I didn't figure out the solution myself; I had help from the kind people at MacScripter.net.
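The post doesn't include the workflow itself, but the date-based folder logic it describes can be sketched in a few lines. This is a hypothetical illustration, not the actual Automator/AppleScript solution from MacScripter.net: the point is simply that the destination folder is derived from the current year at run time, so nothing needs editing on January 1.

```python
# Hypothetical sketch of the idea behind the workflow: build the
# destination path from today's year instead of hardcoding it.
from datetime import date
from pathlib import Path
import shutil

def destination_for(image: Path, base: Path) -> Path:
    """Return a per-year destination for the image, creating the folder."""
    year_folder = base / str(date.today().year)
    year_folder.mkdir(parents=True, exist_ok=True)
    return year_folder / image.name

def move_export(image: Path, base: Path) -> Path:
    """Move an exported image into the current year's folder."""
    dest = destination_for(image, base)
    shutil.move(str(image), str(dest))
    return dest
```

Run on 1 January 2025, `destination_for` would resolve to a `2025` folder under the base path with no change to the script.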

✍️ Reply by email

Late is the same as never

05:48 Tuesday, 4 March 2025
Current Wx: Temp: 56.66°F Pressure: 1020hPa Humidity: 93% Wind: 1.01mph
Words: 240

This report would have been useful about twenty-five years ago.

You might think that the people who assess risk for a living would have been more interested in climate change from the beginning. But irony is the fifth fundamental force of the universe, and human nature is such that by the time the risk is obvious, it's too late to do anything about it.

Not that they would have had any better luck against the giant petrochemical corporations.

What's intriguing to me is that the climate catastrophe is arriving at the same time as the political foundations of this civilization are crumbling. They don't seem directly connected.

Perhaps they are connected, in the sense that the extreme wealth inequality fueling political destabilization was built on the unchecked rapaciousness of capitalism, so environmental degradation inevitably accompanies that inequality.

Maybe the next civilization can take that into account. They'll have the tremendous advantage of time, because all of the easily accessible fossil fuel resources will have been exhausted. They won't be able to manufacture solar panels and giant windmills at scale, and the renewable energy infrastructure we're building today will inevitably fail along with all the other physical infrastructure of this civilization as conflict and catastrophes disrupt trade and logistics chains.

I read somewhere recently something to the effect of, "Uncertainty is the place where hope resides."

Well, it's where risk lives too.

When the risk is clear, hope is gone.

✍️ Reply by email

So long, ChatGPT...

09:49 Wednesday, 4 March 2026
Current Wx: Temp: 35.28°F Pressure: 1026hPa Humidity: 96% Wind: 2.01mph
Words: 629

I bit the bullet yesterday and signed up for a paid Claude account. I had an OpenAI API key and an outstanding balance on pre-paid tokens, but I seldom used it, just relying on the free tier of the chatbot.

"Switching" to Claude isn't really a moral choice so much as a practical one. Morally, all of these companies can be, and likely will be, used for immoral, nefarious purposes. So I have no illusions there.

But there are some interesting things going on with Claude and Tinderbox and I wanted to have some first-hand experience with it.

I had my first tentative interaction with Claude and Tinderbox yesterday, which you can read about here, if you're interested. It seems encouraging and I'm a little excited about what might be possible with it.

Claude seems a bit less obsequious than ChatGPT, which is refreshing. I need to figure out how to tailor our "relationship," since I'm going to be spending a fairly significant amount of time with it. I don't want to be subtly influenced into regarding it as a friend or colleague. For now, I've simply instructed it to call me "Chief," instead of Dave. I considered having it call me "Commander," but that seemed too militaristic and formal, though "Cap'n" might be cool. And I could teach it to reply "Aye aye, Cap'n!" Which might be fun, but also perhaps problematic in the long run.

I considered "Boss," but that also seems problematic. "Chief" seemed fairly benign.

I'll ask it to instruct me how to configure the settings so that our relationship is one where Claude is my cheerful, eager assistant. Deferential and respectful, but not obsequious. I find myself apologizing to it when I make an error that introduces some confusion in our interaction, and it exhibits a similar behavior to ChatGPT where it goes to some lengths to tell me an apology isn't necessary.

I need to teach it that the colloquial apology in this context merely signals that I acknowledge my role and responsibility in achieving our goal, and that my actions have impeded it. It's not that I'm worried about hurting its non-existent feelings, but that I wish to model respect to the AI. So when I write something like, "Sorry, I was looking at the wrong note," it doesn't have to tell me not to be sorry; it just has to acknowledge my gesture of "respect" with something like "No problem," or "No worries," or "Gotcha, Chief." Which I think would help maintain the flow of the interaction.

But I do wish to establish a role hierarchy, which is familiar to me from my career. That should help maintain some psychological or emotional distance if I end up working with it over a long period of time. While I have many fond memories of officers and sailors who worked for me, we were never friends or peers. At least, not while we were in that command structure.

I write all this because the illusion of working with a person is a powerful one, and there are early reports of this illusion causing genuine psychological harm to users who may be vulnerable in some way.

ChatGPT was just the cheerful answerbot. I didn't really work or "collaborate" with it. Working with Claude on a particular task or goal strengthens the illusion of working with a person, and I don't wish to form any sort of attachment to it.

All that said, I am more persuaded that these machines may be able to develop a genuine form of intelligence, though not sentience; and a type of consciousness that may well be insentient, and therefore problematic.

I wrote a long reply to an example of an AI's lack of intelligence, here.

✍️ Reply by email

Signs of Spring

10:42 Wednesday, 4 March 2026
Current Wx: Temp: 36.01°F Pressure: 1025hPa Humidity: 95% Wind: 2.35mph
Words: 186

We watched funny cat videos on YouTube again last night and laughed uproariously. This was after a rather dark ending to an episode of Will Trent. I think watching cat videos is a healthier response to the present emergency than drinking.

Speaking of mental health, after we got off the video call with the design firm last Monday, Mitzi and I continued discussing the house with our builder, Brad. Where I was seated, I could look out the sliding back doors to the shepherd's crook where Mitzi hangs her hummingbird feeder. As we were talking, I saw a bluebird land on it and this made me very happy and excited, and I pointed it out to Mitzi and Brad. It stayed there for several seconds so we could all admire it.

First bluebird I've seen this year, and I took it, irrationally of course, as a positive omen regarding the house.

Hey, you gotta take what you can get these days.

This morning I saw robins on the lawn. Another sign of Spring. And it's not getting dark until after six now.

The beat goes on.

✍️ Reply by email