8 comments

  • monster_truck 42 minutes ago
    This is neat, what does the actual throughput look like though?

    Have been hacking on a wasm+webtransport stack for distributed simulation workers and hit the ceiling of one connection/worker per machine pretty quickly. Had to pin adapters/workers to cores to get the latency I was expecting, then needed dedicated tx/rx adapters to eliminate jitter. Some bullshit about interrupt scheduling.

  • kaoD 46 minutes ago
    I know the individual words in the description but I'm a bit confused about what this is.

    What would I use Pollen for?

    I'm not sure I understand the "seed" metaphor.

    • sambigeara 29 minutes ago
      Well, that’s a good question. I think the best answer for now is “we’ll see”?

      I use it in place of Tailscale for some homelab applications, and I’ve started to deploy other experiments on a “prod” cluster. The demo shows how Pollen handles a multi-step pipeline-style application: two WASM seeds and a single egress communicating over the provided RPC mechanism (`pln://seed…` etc.) whilst Pollen takes care of routing, back pressure and the like.

      Right now, the workloads need to be stateless. I’m working out a story for state at the moment, which will likely start as a WAL-like convergent structure with thin abstractions (a KV store, etc.) layered over it; probably not dissimilar to the pattern underpinning the current CRDT gossip state.
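      As a toy sketch of the kind of convergent structure I mean (illustrative only, not the actual implementation; the names are made up), a last-writer-wins map merges cleanly in any order, which is what lets gossip carry it:

      ```go
      package main

      import "fmt"

      // entry pairs a value with a logical timestamp.
      type entry struct {
          Val  string
          Time uint64
      }

      // LWWMap is a toy convergent KV store: Set keeps the entry with the
      // higher timestamp (ties broken on the value), so merging two
      // replicas in either order converges to the same state.
      type LWWMap map[string]entry

      func (m LWWMap) Set(key, val string, t uint64) {
          cur, ok := m[key]
          if !ok || t > cur.Time || (t == cur.Time && val > cur.Val) {
              m[key] = entry{Val: val, Time: t}
          }
      }

      // Merge folds every entry from another replica into this one.
      func (m LWWMap) Merge(other LWWMap) {
          for k, e := range other {
              m.Set(k, e.Val, e.Time)
          }
      }

      func main() {
          a := LWWMap{}
          b := LWWMap{}
          a.Set("mode", "canary", 1)
          b.Set("mode", "stable", 2)
          a.Merge(b) // b.Merge(a) would converge to the same state
          fmt.Println(a["mode"].Val) // prints "stable"
      }
      ```

      The real thing would want causal timestamps rather than a bare counter, but the shape is the same.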

      • kaoD 26 minutes ago
        Let's see if I got this right: so it's something like a private Yggdrasil Network (minus the IPv6 overlay?) meets self-distributing WASM-powered serverless functions? Plus some built-in functions for proxying/serving.
  • dbalatero 36 minutes ago
    I suspect you have something cool, but I think if you told a clearer example story that solves a real-world problem on the homepage it might alleviate some questions I'm seeing (and also having) in the thread here!
  • sambigeara 37 minutes ago
    No idea why this post has picked up traction two days later; I’m out and about right now, but I'll endeavour to respond thoughtfully when I’m back at my keyboard later on!
  • sambigeara 2 days ago
    Hi everyone, I'm Sam. I started Pollen as an experiment last summer, got carried away, and have landed here.

    It's a single Go binary. Install it on every machine you want in the cluster and they self-organise. Topology is derived deterministically from gossiped state, so workloads land where there's capacity, replicas migrate toward demand, and survivors rehost from failed nodes. The mesh is built on ed25519 identity with signed properties; any TCP or UDP service you pin gets mTLS. Connections punch direct between peers where possible, otherwise they relay through mutually accessible nodes.
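
    To illustrate what "derived deterministically from gossiped state" can mean (this is a generic sketch of the pattern, not Pollen's actual code; all names are hypothetical), rendezvous hashing lets every peer holding the same view compute the same placement with no coordinator:

    ```go
    package main

    import (
        "fmt"
        "hash/fnv"
    )

    // Node is a stand-in for gossiped peer state: an identity plus
    // advertised capacity.
    type Node struct {
        ID       string
        Capacity int
    }

    // place picks a host via rendezvous (highest-random-weight) hashing:
    // hash each (node, workload) pair and take the winner. Any peer with
    // the same node list computes the same answer independently.
    func place(workload string, nodes []Node) string {
        var best string
        var bestScore uint64
        for _, n := range nodes {
            if n.Capacity <= 0 { // skip peers with no room
                continue
            }
            h := fnv.New64a()
            h.Write([]byte(n.ID + "/" + workload))
            if s := h.Sum64(); s >= bestScore {
                best, bestScore = n.ID, s
            }
        }
        return best
    }

    func main() {
        nodes := []Node{{"alpha", 4}, {"beta", 8}, {"gamma", 2}}
        fmt.Println(place("seed-ingest", nodes)) // same answer on every peer
    }
    ```

    When a node fails and drops out of the gossiped view, only the workloads it was hosting get new winners; everything else stays put, which is roughly the "survivors rehost" behaviour.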

    I built it because I'm fascinated by local-first, convergent systems, and because I wanted to see whether those systems could flip the traditional workload-orchestration patterns on their head. I also _despise_ the operational complexity of modern systems and the thousands of bolted-on tools they demand, so I've made Pollen's ergonomics a primary concern (two-ish commands to a cluster, etc).

    It serves busy, live, globally distributed clusters (per the demo), but it's very early days, so don't be surprised by any rough edges!

    Very happy to answer anything in the thread!

    Cheers.

    Docs: https://docs.pln.sh

    • Levitating 48 minutes ago
      You have some workload demos, which I'll definitely try out, but could you paint us an example use case for the technology?

      What are the workloads in the runtime capable of?

    • digdugdirk 45 minutes ago
      From someone who definitely doesn't fully understand what you made, this looks really cool!

      I'm seeing some functionality that seems like it could replace some personal services I currently host via my tailscale network. Am I understanding this correctly? If so, do you have a feel for what the performance implications would be?

    • anilgulecha 1 day ago
      Interesting project.

      In a potential modern cloud, having globally named primitives (compute, store, messaging) could unlock much wider applications. Have you come across any such ideas?

      • sambigeara 1 day ago
        To clarify, are you asking if I’ve considered incorporating those concepts into the project?

        If so, I have loose ideas around how I might introduce shared state, it’s an interesting problem that’ll require a lot of thought. Early days yet, though.

  • hsaliak 28 minutes ago
    Using CRDT gossip to inform scaling is a clever idea. You are on to something there. Perhaps extract it as a core library/concept from the runtime? I feel that would be generally useful!
    • sambigeara 24 minutes ago
      Thanks! That’s certainly crossed my mind!
  • jitl 35 minutes ago
    Wow, this is super cool. It almost feels like a DIY pocket-Cloudflare. I’m curious how a WASM binary gets mapped to HTTP endpoints that take JSON, how much of that is Pollen vs Extism? Are the routes encoded in the WASM binary somehow?
  • Remi_Etien 2 days ago
    [flagged]