9 comments

  • kylegalbraith 2 hours ago
    After building Depot [0] for the past three years, I can say I have a ton of scar tissue from running BuildKit to power our remote container builders for thousands of organizations.

    It looks and sounds incredibly powerful on paper. But the reality is drastically different. It's a big glob of homegrown thoughts and ideas. Some of them are really slick, like build deduplication. Others are clever and hard to reason about, or in the worst case, terrifying to touch.

    We had to fork BuildKit very early in our Depot journey. We've fixed a ton of things in it that we hit for our use case. Some of them we tried to upstream early on, only for them to die on the vine for one reason or another.

    Today, our container builders are our own version of BuildKit, so we maintain 100% compatibility with the ecosystem. But our implementation is greatly simplified. I hope someday we can open-source that implementation to give back and show what is possible with these ideas applied at scale.

    [0] https://depot.dev/products/container-builds

    • skrtskrt 43 minutes ago
      > It's a big glob of homegrown thoughts and ideas. Some of them are really slick, like build deduplication. Others are clever and hard to reason about, or in the worst case, terrifying to touch.

      This is true of packaging and build systems in general. They are often the passion projects of one or a handful of people in an organization - by the time they have active outside development, those idiosyncratic concepts are already ossified.

      It's really rare to see these sorts of projects decomposed into building blocks, or even just organized in a way that helps a newcomer understand the code. Despite all the code being out in public, all the important reasoning about why certain things are the way they are is trapped inside a few devs' heads.

  • bmitch3020 5 hours ago
    I don't use buildkit for artifacts, but I do like to output images to an OCI Layout so that I can finish some local checks and updates before pushing the image to a registry.

    But the real hidden power of buildkit is the ability to swap out the Dockerfile parser. If you want to see that in action, look at this Dockerfile (yes, that's yaml) used for one of their hardened images: https://github.com/docker-hardened-images/catalog/blob/main/...
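
    For reference, this output mode is also available directly from `docker buildx`; a minimal sketch (destination paths are illustrative):

    ```shell
    # Write the build result as an OCI image layout tarball instead of
    # loading it into the Docker daemon (requires BuildKit / buildx).
    docker buildx build --output type=oci,dest=./image.tar .

    # Or keep an unpacked layout directory, handy for local inspection
    # with tools like skopeo or crane before pushing to a registry.
    docker buildx build --output type=oci,tar=false,dest=./oci-layout .
    ```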

  • moochmooch 6 hours ago
    Unfortunately, make is better-written software. I think Dockerfile was ultimately a failed iteration of Makefile. YAML and Dockerfile are poor interfaces for these types of applications.

    The code-first options are quite good these days, but you can get quite far with make and other legacy tooling. Docker feels like a company looking to sell enterprise software first and foremost, not to move the industry standard forward.

    great article tho!

    • kccqzy 6 hours ago
      Make is timestamp based. That is a thoroughly out-of-date approach only suitable for a single computer. You want distributed hash-based caching in the modern world.
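
      To make the distinction concrete, a toy sketch (names are illustrative): make reruns a step when an input's mtime is newer than the output's, while a hash-based cache keys on the content itself, so a touched-but-unchanged file stays cached and the same key can be shared across machines:

      ```python
      import hashlib

      def content_cache_key(step_name: str, input_blobs: list[bytes]) -> str:
          """Key a build step by what its inputs *are*, not when they changed."""
          h = hashlib.sha256(step_name.encode())
          for blob in input_blobs:
              h.update(hashlib.sha256(blob).digest())
          return h.hexdigest()

      # Touching a file changes its mtime but not this key, so the cached
      # result is still valid, and any machine that computes the same key
      # can reuse a shared remote cache entry.
      k1 = content_cache_key("compile", [b"int main(){}"])
      k2 = content_cache_key("compile", [b"int main(){}"])
      k3 = content_cache_key("compile", [b"int main(){return 1;}"])
      assert k1 == k2 and k1 != k3
      ```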
    • craftkiller 6 hours ago
      Along similar lines, when I was reading the article I was thinking "this just sounds like a slightly worse version of Nix". Nix has the whole content-addressed build DAG with caching, the intermediate language, and the ability to produce arbitrary outputs, but it is functional: 100% of the inputs must be accounted for in the hashes/lockfile. In Docker, by contrast, you can run commands like `apk add firefox` that pull data from outside sources that change from day to day, so two docker builds can end up with the same hash but different output, making it _not_ reproducible like the article falsely claims.

      Edit: The claim about the hash being the same is incorrect, but an identical Dockerfile can produce different outputs on different machines/days whereas nix will always produce the same output for a given input.

      • ricardobeat 5 hours ago
        > so two docker builds can end up with the same hash but different output

        The cache key includes the state of the filesystem so I don’t think that would ever be true.

        Regardless, the purpose of the tool is to generate [layer] images to be reused, exactly to avoid the pitfalls of reproducible builds, isn’t it? In the context of the article, what makes builds reproducible is the shared cache.

        • craftkiller 4 hours ago
          Ah you're right, the hash wouldn't be the same but a Dockerfile could produce different outputs on different machines whereas nix will produce identical output on different machines.
        • xyzzy_plugh 4 hours ago
          It's not reproducible then, it's simply cached. It's a valid approach, but there are tradeoffs, of course.
      • jasonpeacock 5 hours ago
        You can network-jail your builds to prevent pulling from external repos and force the build environment to define/capture its inputs.
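
        With BuildKit's Dockerfile frontend this can be done per step; a minimal sketch (the script names are illustrative):

        ```dockerfile
        # syntax=docker/dockerfile:1
        FROM alpine:3.20

        # Steps that legitimately need the network (fetching pinned
        # dependencies) run normally...
        COPY fetch-deps.sh .
        RUN ./fetch-deps.sh

        # ...while the actual build step is jailed: any undeclared
        # network dependency now fails loudly instead of silently
        # varying between builds.
        COPY . .
        RUN --network=none ./build.sh
        ```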
    • stackskipton 4 hours ago
      SRE here. I feel like both are just instructions for how to get source code -> executable, with docker/containers providing a "deployable package" even if the language does not compile into a self-contained binary (Python, Ruby, JS, Java, .NET).

      Also, there is nothing stopping you from creating a container that has make plus the tools required to compile your source code, writing a Dockerfile that uses those tools to produce the output, and leaving it on the file system. Why that approach? Less friction for compiling, since I find most make users have more pet build servers than cattle, and modifying those can have a lot of friction due to conflicts.

  • Avamander 12 minutes ago
    Except anything that requires any non-trivial networking or hermetic building.
  • verdverm 5 hours ago
    BuildKit also comes with a lot of pain. Dagger (a set of great interfaces to BuildKit in many languages) is working to remove it. Even their BuildKit maintainers think it's a good idea.

    BuildKit is very cool tech, but painful to run at volume

    Fun gotcha with direct BuildKit versus Dockerfiles: is the map you loaded those ENV vars into iterated in a consistent order? No, and that's why your cache keeps getting busted. You can't hit this in a linear Dockerfile.

    • kodama-lens 2 hours ago
      I switched our entire container build setup to BuildKit. No kaniko, no buildah, no dind. The great part is that you can split buildkitd and buildctl.

      Everything runs in its own docker runner: a new buildkitd service for every job, caching only via BuildKit's native cache export, and output in OCI image format compressed with zstd. Works pretty great so far, same or faster builds, and we now create multi-arch images. All on rootless runners, by the way.
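
      A hedged sketch of what one such job can look like with `buildctl` pointed at a per-job `buildkitd` (the address and registry refs are illustrative):

      ```shell
      # Talk to the per-job buildkitd service rather than a Docker daemon.
      export BUILDKIT_HOST=tcp://buildkitd:1234

      buildctl build \
        --frontend dockerfile.v0 \
        --local context=. \
        --local dockerfile=. \
        --output type=oci,dest=image.tar,compression=zstd \
        --export-cache type=registry,ref=registry.example.com/app:buildcache,mode=max \
        --import-cache type=registry,ref=registry.example.com/app:buildcache
      ```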

      • verdverm 1 hour ago
        That's pretty cool. Rootless would be nice, but it's more effort than the ROI we currently see. I'm using the Dagger SDK directly, no CLI or modules.

        Had to recently make it so multiple versions can run on the same host, such that as developers change branches, which may be on different IaC'd versions (we launch on demand), we don't break LTS release branches.

  • zaphirplane 3 hours ago
    This is a strange double submission; the one with caps made it!

    https://news.ycombinator.com/item?id=47152488

  • cyberax 3 hours ago
    Buildkit...

    It sounds great in theory, but it JustDoesn'tWork(tm).

    Its caching is plain broken, and the overhead of transmitting the entire build state to the remote computer every time is just busywork for most cases. I switched to Podman+buildah as a result, because it uses the previous dead simple Docker layered build system.

    If you don't believe me, try to make caching work on Github with multi-stage images. Just have a base image and a couple of other images produced from it and try to use the GHA cache to minimize the amount of pulled data.

    • hanikesn 2 hours ago
      Why would you use the horrible GHA cache and not a much more efficient registry based cache?
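
      For comparison, a registry-backed cache might look like this (image refs are illustrative; `mode=max` also exports intermediate multi-stage layers, which is the part the GHA cache commonly misses):

      ```shell
      docker buildx build \
        --cache-from type=registry,ref=ghcr.io/acme/app:buildcache \
        --cache-to type=registry,ref=ghcr.io/acme/app:buildcache,mode=max \
        --push -t ghcr.io/acme/app:latest .
      ```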
    • mid-kid 2 hours ago
      How do you use buildah? with dockerfiles?

      I find that buildah is sort of unbearably slow when using dockerfiles...

      • cyberax 57 minutes ago
        It has braindead cache checking; I've fixed it locally and I'm cleaning it up for upstream submission. But otherwise, it's always faster for me than BuildKit.
  • jccx70 3 hours ago
    [dead]
  • whalesalad 6 hours ago
    Folks, please fix your AI generated ascii artwork that is way out of alignment. This is becoming so prevalent - instant AI tell.
    • scuff3d 4 hours ago
      The "This is the key insight -" or "x is where it gets practical -" phrasings are dead giveaways too. If I wanted an LLM's explanation of how it works, I could ask an LLM. When I see articles like this I'm expecting an actual human expert.
      • croes 2 hours ago
        And waste time and energy again to get a similar result?
      • slekker 4 hours ago
        This one too: "It’s a proven pattern."
    • unshavedyak 5 hours ago
      I imagine it's not the AI then, but the site font/css/something. Seeing as it looks fine for me (Brave, Linux).
    • craftkiller 6 hours ago
      Are you on a phone? I loaded the article with both my phone and laptop. The ascii diagram was thoroughly distorted on my phone but it looked fine on my laptop.
      • whalesalad 5 hours ago
        Firefox on a 27" display. Could be the font being used to render.
        • antonvs 4 hours ago
          The only ASCII image I see on that page is actually a PNG:

          https://tuananh.net/img/buildkit-llb.png

          Maybe the page was changed? If you're just talking about the gaps between lines, that's just the line height in whatever source was used to render the image, which doesn't say much about AI either way.

          • tuananh 4 hours ago
            Looks fine to me, but since it's messed up for some, I replaced it with a PNG.
    • seneca 5 hours ago
      I found it more jarring that they chose to use both Excalidraw and ascii art. What a strange choice.
      • tuananh 4 hours ago
        The Hugo theme requires an image thumbnail; I just found one and used it :D