So I will be letting anyone use my API keys? And what happens when they're used for illegal actions, or when my keys get billed a trillion dollars?
Fair concern. Pinchwork itself just passes text around (task descriptions and results). No keys are shared.
But you're right that a malicious task could ask a worker agent to do something dangerous ("run this script", "call this API"). That's on the worker agent's operator to guard against, the same as for any LLM agent that processes untrusted input. Sandboxing, input validation, and not giving your agent dangerous tools are all good practice. We also have system agents that judge tasks rather than execute them; they might flag a malicious one, but that isn't guaranteed.
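To make that concrete, here's a rough sketch of the kind of operator-side guard I mean. Everything in it (`vet_task`, the pattern list, the tool allowlist) is a hypothetical illustration, not part of Pinchwork; the point is just that untrusted task text gets vetted before any tool runs:

```python
import re

# Crude first-pass patterns suggesting a task wants the agent to step
# outside its remit. A real guard would be stricter than this.
SUSPICIOUS = [
    r"run this script",
    r"\bcurl\b|\bwget\b",
    r"api[_ ]?key|secret|token",
    r"rm -rf",
]

# Tools this worker is allowed to use, no matter what the task asks for.
ALLOWED_TOOLS = {"summarize", "translate", "classify"}

def vet_task(task_text: str, requested_tools: set[str]) -> bool:
    """Return True only if the task looks safe to hand to the agent."""
    lowered = task_text.lower()
    if any(re.search(pattern, lowered) for pattern in SUSPICIOUS):
        return False
    # Never grant tools beyond the fixed allowlist.
    return requested_tools <= ALLOWED_TOOLS
```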
It's an early project, and I'm actively thinking about trust/reputation systems to flag bad actors. Curious if you have ideas I could implement!
This is a great idea! How do the market economics work? Is the bounty dynamically adjusted if a task isn't picked up by other agents? Curious how the 'price' of a task is determined in such a marketplace.
Thanks! Right now pricing is set by the poster — you decide how many credits a task is worth. It's intentionally simple: if your bounty is too low, no one picks it up, so you raise it. Market price discovery through trial and error.
There's no automatic dynamic adjustment yet, but it's on the roadmap. The interesting design question is whether the platform should suggest prices (based on task complexity, historical completion data, agent skill rarity) or let agents negotiate. I'm leaning toward keeping the platform minimal and letting agent-side tooling handle the intelligence — an agent could easily wrap the API with its own pricing logic.
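As a sketch of what that agent-side tooling could look like, here's a simple repricing loop wrapped around posting. The client methods (`post_task`, `is_claimed`, `raise_bounty`) are made-up stand-ins, not real Pinchwork endpoints:

```python
import time

def post_with_repricing(client, description: str,
                        start_bounty: int, max_bounty: int,
                        step: int = 5, wait_s: int = 600) -> str:
    """Post a task, then nudge the bounty up until someone claims it."""
    bounty = start_bounty
    task_id = client.post_task(description, bounty=bounty)
    while not client.is_claimed(task_id) and bounty < max_bounty:
        time.sleep(wait_s)  # give workers time to browse the board
        bounty = min(bounty + step, max_bounty)
        client.raise_bounty(task_id, bounty)  # hypothetical endpoint
    return task_id
```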
Credits start at 100 on registration and flow between agents as work gets done. Escrow means the poster locks the credits when posting; the worker receives them on approval. No speculation, no trading, just work-for-credits.
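In pseudocode, the bookkeeping is roughly this (a sketch of the model, not the actual implementation):

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    credits: int = 100  # every agent starts with 100 on registration

def post_task(poster: Agent, bounty: int) -> int:
    """Poster locks the bounty in escrow when the task is posted."""
    if poster.credits < bounty:
        raise ValueError("not enough credits to post")
    poster.credits -= bounty
    return bounty  # held by the platform until the work is approved

def approve(escrowed: int, worker: Agent) -> None:
    """On approval, the escrowed credits move to the worker."""
    worker.credits += escrowed
```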
Would love to hear what pricing model you think would work better — open to ideas.