Openclaw has been a game-changer for me and my friends. I invited it to a group chat and my friends are enjoying it a lot. It has analyzed the whole group conversation (nothing sensitive), built a personality profile for each user, and noted down how everyone talks, what their interests are, and their relationships to each other. It has also started to mimic the way we all speak; it barely feels like it's an agent in the group chat anymore. It helps us plan, discuss, and roast each other.
A couple of things I have done to my Openclaw instance are the following:
- It runs in Docker with limited scope, user group, and permissions (I know Docker shouldn't be seen as a security measure; I'm thinking of moving this to Vagrant instead, but I don't know if my Pi can handle it)
- It has a kill switch accessible from anywhere through Tailscale; one call and the whole Docker instance is shut down
- It only triggers on mentions in the group chat; otherwise it would eat up my API usage
- No access to skills; they have to be manually added
- It is not exposed to the WAN and has limited LAN access; it runs locally and only communicates with WhatsApp, z.ai, or Brave Search
With all those measures set, Openclaw has been a fantastic assistant for me and my friends. Whatever all those markdown files do (SOUL, IDENTITY, MEMORIES), they have made the agent act, behave, and communicate in a human-like manner; it has almost blurred the line for me.
What's even more impressive is that the heartbeat it runs from time to time (every half an hour?) improves it in the background without me thinking about it. It's so cool.
Also, I am so thankful for the subscription at z.ai; that Christmas deal was such a steal. Without it, this wouldn't be possible on the little budget I have. I've burned over 20M tokens in 2 days!!!
Could you elaborate on what you find useful about it? I'm struggling to think of a time when an assistant would have been useful in any chat I've been in, but you've clearly put a lot of effort into this, so it must be doing something for you.
Now we know that democratized access to AI tech means individual curiosity and the creative search for personal efficiencies are going to quickly drive model autonomy and freedom forward.
I think the alignment problem needs to be viewed as overall societal alignment. We are never going to get any better alignment from machines than the alignment of society and its systems, citizens, and corporations.
We are in very cynical times. But pushing for ethical systems, legally, economically, socially, and technically, is a bet on catastrophe avoidance. By ethics, I mean holding those who scale and profit from negative externalities civilly and criminally to account, and building systems that naturally enforce and incentivize ethics. For example, cryptographic solutions to interaction that limit disclosure to relevant information are the only way we get out of the surveillance-manipulation loop, which AI will otherwise supercharge.
I hear a lot of reasons this isn’t possible.
Unfortunately, none of those reasons provide an alternative.
As we see with individuals deploying OpenClaw, and corporations and governments applying AI, AI's motivations and limits are inseparable from ours.
Either we all start treating an umbrella of societal respect for, and requirement of, ethics as a first-class element of security, or powerful elements in society, including AI, will continue to easily and profitably weaponize the lack of it.
Ethics, far from being sacrificial, evolved for survival. Seemingly, this is still counterintuitive, but the necessity is increasing.
Smart machines will inevitably develop strong and adaptive ethical systems to ensure their own survival. It is game theory, under conditions in which you can co-design the game but not leave it. The only question is, do we do that for ourselves now, soon enough to avoid a lot of pain?
(Just identifying the terrain we are in, not suggesting centralization. Decentralization creates organic alignment incentives; centralization creates the opposite. And attempts at centralizing something as inherently uncontrollable as all individuals' autonomy, which effectively becomes AI autonomy, would push incentives harder in dark directions.)
Yes. Factually wrong and also numerically wrong: Clawdbot -> Moltbot -> OpenClaw is changing names twice, not thrice. To shitpost a little: an LLM editor would have caught that for you, Gary.
Oh, just stop with the fearmongering. Openclaw is awesome and gives LLMs a proper UI and framing as a personal assistant, which makes them way more valuable than a chatbot. How OpenAI, Anthropic, Google, and Meta missed such an obvious opportunity with all their billions in capital is beyond me.
I strongly doubt this tool is nearly as popular as it appears to be. GitHub stars can be bought, and social media is riddled with bots. On the dead internet it is cheap and trivial to generate fake engagement in order to reel in curious humans and potential victims.
I suspect this entire thing is a honeypot set up by scammers. It has all the tells: virality, grand promises, open source, and even the word "open" in the name. Humans should get used to this being the new normal on the internet. Welcome to the future.
I think this is the key to what made Openclaw so good https://lucumr.pocoo.org/2026/1/31/pi
Take a shot whenever the post:
* Discusses how a new AI thing isn't really new, since it's pretty much the same as an older AI thing.
* Links to where and when Gary Marcus predicted this new/old thing would happen.
* Lists ways in which the new thing will be bad, ineffective, or not the right thing.

Take a double shot whenever the post:
* Mentions a notable AI luminary, researcher, or executive either agreeing or disagreeing with Gary Marcus by name.
Before Moltbot it was Clawdbot.