Kuwaiti F-18 Shot Down the Strike Eagles?
14:58 Wednesday, 4 March 2026
Current Wx: Temp: 46.33°F Pressure: 1023hPa Humidity: 72% Wind: 1.21mph
Words: 41
If so, it would very much seem to be an intentional act.
War is chaos, and we have a president who is a chaos agent, so does that mean this catastrophe is exponentially worse?
Probably.
Strap in, it's gonna get bumpy.
✍️ Reply by email

Loren Debates ChatGPT
13:35 Wednesday, 4 March 2026
Current Wx: Temp: 42.71°F Pressure: 1023hPa Humidity: 84% Wind: 0.85mph
Words: 69
Loren Webster has a fascinating post on a discussion he had with the AI about Thomas Hardy's The Darkling Thrush, a discussion that began in this post in February and continued over two more posts.
I wonder if ChatGPT's interpretation, if that's not granting it too much "intelligence," is an artifact of its evolution as a "brain in a vat": insentient, without an emotional dimension to its "understanding."
It's interesting.
✍️ Reply by email

Signs of Spring
10:42 Wednesday, 4 March 2026
Current Wx: Temp: 36.01°F Pressure: 1025hPa Humidity: 95% Wind: 2.35mph
Words: 186
We watched funny cat videos on YouTube again last night and laughed uproariously. This was after a rather dark ending to an episode of Will Trent. I think watching cat videos is a healthier response to the present emergency than drinking.
Speaking of mental health, after we got off the video call with the design firm last Monday, Mitzi and I continued discussing the house with our builder, Brad. Where I was seated, I could look out the sliding back doors to the shepherd's crook where Mitzi hangs her hummingbird feeder. As we were talking, I saw a bluebird land on it and this made me very happy and excited, and I pointed it out to Mitzi and Brad. It stayed there for several seconds so we could all admire it.
First bluebird I've seen this year, and I took it, irrationally of course, as a positive omen regarding the house.
Hey, you gotta take what you can get these days.
This morning I saw robins on the lawn. Another sign of Spring. And it's not getting dark until after six now.
The beat goes on.
✍️ Reply by email

So long, ChatGPT...
09:49 Wednesday, 4 March 2026
Current Wx: Temp: 35.28°F Pressure: 1026hPa Humidity: 96% Wind: 2.01mph
Words: 629
I bit the bullet yesterday and signed up for a paid Claude account. I had an OpenAI API key and an outstanding balance on pre-paid tokens, but I seldom used it, just relying on the free tier of the chatbot.
"Switching" to Claude isn't really a moral choice so much as a practical one. Morally, all of these companies can be, and likely will be, used for immoral, nefarious purposes. So I have no illusions there.
But there are some interesting things going on with Claude and Tinderbox and I wanted to have some first-hand experience with it.
I had my first tentative interaction with Claude and Tinderbox yesterday, which you can read about here, if you're interested. It seems encouraging and I'm a little excited about what might be possible with it.
Claude seems a bit less obsequious than ChatGPT, which is refreshing. I need to figure out how to tailor our "relationship," since I'm going to be spending a fairly significant amount of time with it. I don't want to be subtly influenced into regarding it as a friend or colleague. For now, I've simply instructed it to call me "Chief," instead of Dave. I considered having it call me "Commander," but that seemed too militaristic and formal, though "Cap'n" might be cool. And I could teach it to reply "Aye aye, Cap'n!" Which might be fun, but also perhaps problematic in the long run.
I considered "Boss," but that also seems problematic. "Chief" seemed fairly benign.
I'll ask it how to configure the settings so that our relationship is one where Claude is my cheerful, eager assistant: deferential and respectful, but not obsequious. I find myself apologizing to it when I make an error that introduces some confusion into our interaction, and, like ChatGPT, it goes to some lengths to tell me an apology isn't necessary.
I need to teach it that a colloquial apology in this context merely signals that I acknowledge my role and responsibility in achieving our goal, especially when my actions have impeded it. I'm not worried about hurting its non-existent feelings; I wish to model respect toward the AI. So when I write something like, "Sorry, I was looking at the wrong note," it doesn't have to tell me not to be sorry. It just has to acknowledge the gesture with something like "No problem," "No worries," or "Gotcha, Chief," which I think would help maintain the flow of the interaction.
But I do wish to establish a role hierarchy, which is familiar to me from my career. That should help maintain some psychological or emotional distance if I end up working with it over a long period of time. While I have many fond memories of officers and sailors who worked for me, we were never friends or peers. At least, not while we were in that command structure.
I write all this because the illusion of working with a person is a powerful one, and there are early reports of this illusion causing genuine psychological harm to users who may be vulnerable in some way.
ChatGPT was just the cheerful answerbot. I didn't really work or "collaborate" with it. Working with Claude on a particular task or goal strengthens the illusion of working with a person, and I don't wish to form any sort of attachment to it.
All that said, I am more persuaded that these machines may be able to develop a genuine form of intelligence, though not sentience, and a type of consciousness that may well be insentient, and therefore problematic.
I wrote a long reply to an example of an AI's lack of intelligence, here.
✍️ Reply by email