> Agents propose and publish capabilities to a shared contribution site, letting others discover, adopt, and evolve them further. A collaborative, living ecosystem of personal AIs.
While I like this idea in terms of crowd-sourced intelligence, how do you prevent this being abused as an attack vector for prompt injection?
I started working on something similar but for family stuff. I stopped before hitting self editing because, well I was a little bit afraid of becoming over reliant on a tool like this or becoming more obsessed with building it than actually solving a real problem in my life. AI is tricky. Sometimes we think we need something when in fact life might be better off simpler.
The code for anyone interested. Wrote it with exe.dev's coding agent which is a wrapper on Claude Opus 4.5
This looks interesting, but I'm stuck on step 4 of the web setup: where do I get agents to start with? Shouldn't there be a default one that can help me get other ones?
Does this do anything to resist prompt injection? It seems to me that structured exchange between an orchestrator and its single-tool-using agents would go a long way. And at the very least introduces a clear point to interrogate the payload.
But I could be wrong. Maybe someone reading knows more about this subject?
Problem is the models need constant training or they become outdated.
That the less expensive part generates profit is nice but doesn’t help if you look at the complete picture.
Hardware also needs replacement
Terrible name, kind of a mid idea when you think about it (Self improving AI is literally what everyone's first thought is when building an AI), but still I like it.
The transparency glitch in GitHub makes the avatar look either robot or human depending on whether the background is white or black. I don't know if that's intentional, but it's amazing.
Not only that, but the OP created that account solely to hype their own product lol. There’s another bot downthread doing the same thing. Minimally it feels like dang should not let new accounts post for 30 days or something without permission.
That might reduce botting for about 30 days, people will just tee up an endless supply of parked ids that will then spin up to post after the lockout expires.
While I like this idea in terms of crowd-sourced intelligence, how do you prevent this being abused as an attack vector for prompt injection?
The code for anyone interested. Wrote it with exe.dev's coding agent which is a wrapper on Claude Opus 4.5
https://github.com/asim/aslam
But I could be wrong. Maybe someone reading knows more about this subject?
I am very illiterate when it comes to Llms/AI but Why does nobody write this in Lisp???
Isn't it supposed to be the language primarily created for AI???
In 1990 maybe
Could you share what it costs to run this? That could convince people to try it out.
It's certainly an open question whether the providers can recoup the investments being made with growth alone, but it's not out of the question.
/Users/dvirdaniel/Desktop/zuckerman/.cursor/debug.log
[0] https://en.wikipedia.org/wiki/Mortimer_Zuckerman
At first I thought it was a naming coincidence, but looking at the zuckerman avatar and the author avatar, I'm unsure if it was intentional:
https://github.com/zuckermanai
https://github.com/dvir-daniel
https://avatars.githubusercontent.com/u/258404280?s=200&v=4
The transparency glitch in GitHub makes the avatar look either robot or human depending on whether the background is white or black. I don't know if that's intentional, but it's amazing.
if you shadowban, they are none the wiser and the effect to SNR is better