Playing with the Panopticon Behind the Curve
20/04/26 12:50
I'm not active on social media and don't often get hit by algorithms, but one piece on Substack caught me quite unawares. It was by a guy called Nori Nishigaya, and it was about using Claude Code and Obsidian to make a personal planning system. It caught my attention for several reasons. Having used the chat interface to Claude for some time, I was curious about Claude Code, but I'm not a programmer, so the implementation with Obsidian seemed like a chance to try it out. Like Nishigaya, I'd worked a bit with Obsidian but never really got it off the ground. He mentioned he'd tried OmniFocus but hadn't really stuck with it. That too was my experience. For all these reasons I thought, "Hey, this is something I could do. Let's try this."
He describes setting up a personal planner using Obsidian plugins that support planning at a series of levels: annual, quarterly, monthly, down to the real stuff at the weekly and daily level. I got all this set up and find it's really working well for me. The content in Obsidian provides context for Claude Code, so it is involved in your planning process. It understands (is that the right word?) your goals: it has information about them and can engage in the planning process.
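To make that concrete, here is a sketch of how such a vault might be laid out. The folder and file names are purely illustrative, not Nishigaya's actual structure:

```
Vault/
├── Plans/
│   ├── 2026 Annual.md
│   ├── 2026-Q2 Quarterly.md
│   └── 2026-04 Monthly.md
├── Weekly/
│   └── 2026-W17.md
└── Daily/
    └── 2026-04-20.md
```

Each level is just another Markdown note, so the higher-level goals sit in the same vault as the daily notes and can be read together as context.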
The lowest level of the planning system is tasks. These can be viewed in a calendar mode or in a Kanban view. I like the way it works in Obsidian: the structure is all there, but it's very flexible. It disciplines my planning process but tolerates my actual mess. Additional context is provided to Claude by a log function through which you can frequently report what you're doing.
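Concretely, a daily note might contain something like this. The checkbox syntax is standard Markdown; the 📅 and ✅ date notation follows the convention of the popular community Tasks plugin, which is one way (not necessarily Nishigaya's) of making tasks show up in calendar and Kanban views:

```markdown
## Tasks
- [ ] Draft the quarterly review 📅 2026-04-20
- [x] Book dentist appointment ✅ 2026-04-20

## Log
- 09:15 Started on the quarterly review draft
- 11:40 Interrupted by a phone call; will pick the draft up after lunch
```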
So here I am, opening my life up to this artificial intelligence. Around the same time that I found Nori Nishigaya's piece, another piece on Substack caught my attention: one of those warnings that the real strategy of the AI companies is to get control over the population. They are building a panopticon with detailed insight into our daily lives. We will open ourselves up to them so much that they will actually take control of us. This is part of the ongoing discussion around AI: what are the risks and the benefits? We open up our personal stuff, our business stuff, our thoughts and ambitions. Inside the AI systems, how much of that is actually traceable back to individuals?
My concerns about this seemed to be confirmed in a New York Times podcast in which Anthropic co-founder Jack Clark was interviewed by Ezra Klein. Clark described how Anthropic does track the themes that people are talking about with the AI. They are analysing how people are using AI and what questions they are asking, so the AI providers do have the means to build personal profiles of individual users if they choose to. How do we as users weigh the benefits against the risks? My choice at the moment is to gain some experience of living and working with an AI assistant, up to a point.
But there is an ambivalence in my attitude to the AI I'm using. It can do many things efficiently, but it still makes silly mistakes. Is it really helping me, or just making more work for me? For example, I was having it help me plan out a series of tasks over a week: two of them on Monday, two on Tuesday, one on Wednesday, and so on. It created those tasks in the planning system perfectly, then volunteered a summary of what it had done, and got the summary wrong. When I queried it, the response was, "Oh yeah, I made a mistake there. I made the summary from memory and I should have looked at the actual data." I'm using a tip from Nori about recording such lessons learned in a file the AI reads, so that they become part of the context and the same mistakes are not repeated.
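Claude Code automatically reads a CLAUDE.md file in the directory it is started from, so that file is a natural home for such notes. The entry below is my own illustration of the idea, not Nori's exact wording:

```markdown
## Lessons learned
- 2026-04-20: After creating or editing tasks, re-read the actual task
  files before writing any summary. Do not summarise from memory.
```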
I find that frequently recording what I'm doing, in the log and in the daily planning, helps to keep me on track. Even when plans don't work out, the whole framework is there to adapt and update them, realigning with what has actually happened. In the conversation between Jack Clark and Ezra Klein, another regular recording activity was recommended: daily journaling of your experience of using the AI. I'm now doing that, and I find it actually becomes a dialogue with the AI.
The journal is not just my thoughts about what happened during the day; it also includes feedback from the AI and discussion about how things are going. How weird is that, a journal that talks back to you? It influences the way you write: you have an immediate audience. I have not done any tuning of the AI in this respect, so I generally get positive feedback. It might be more productive to be challenged by critical responses.
The reason Claude Code works well with Obsidian is that Obsidian is a note-keeping application whose notes are Markdown files: plain text with text as markup. This is something Claude Code can easily work with. It was intended initially as a programming support tool, and programs are written in plain text. Claude Code can both read and write the content of the Obsidian vault. After I got going, I realised this combination of Claude Code and Obsidian was already quite widely used. David Sparks, a.k.a. MacSparky, came out with a Field Guide on using Claude Code with Obsidian entitled "The Robot Assistant Field Guide". I have used MacSparky's Field Guides before; he teaches automation for the regular computer user. So his adoption of Claude Code with Obsidian confirmed for me that I was on the right track, but behind the curve.
What does it matter that I am behind the curve? I'm never going to be on the bleeding edge, and learning from the experience of others is okay. Besides, AI is probably at the peak of its hype cycle. The bubble is bursting. It's not the route to AGI. There will be a big shake-out among LLM-based AI providers. The current AI technology will become another tool among all the other automation we have. Maybe eventually Apple will deliver the promised AI upgrade to Siri, one where the privacy issues are covered, and I'll be able to relax about handing over my private thoughts to a large language model.