I think it's a bad addition since it pushes people towards a worse solution to a common problem.
Using "go tool" forces you to have a bunch of dependencies in your go.mod that can conflict with your software's real dependency requirements, when there's zero reason those matter. You shouldn't have to care if one of your developer tools depends on a different version of a library than you.
It also means the tools themselves are run against dependency versions they weren't tested with.
If, for example, you used "shell.nix" or a Dockerfile with the tool built from source, the tool's dependencies would match its go.mod.
Now they have to merge with your go.mod...
And then, of course, you _still_ need something like shell.nix or a flox environment (https://flox.dev/) since you need to control the version of go, control the version of non-go tools like "protoc", and so you already have a better solution to downloading and executing a known version of a program in most non-trivial repos.
Yep, unfortunately this concern was mostly shrugged off by the Go team when it was brought up (because it would've required a lot of work to fix IIRC, which I think is a bad excuse for such a problem). IMO, a `go tool` dependency should've worked exactly the same way that doing `go install ...` works with a specific @tag: it should resolve the dependencies for that tool completely independently. Because it doesn't, you really, really shouldn't use this mechanism for things like golangci-lint, unfortunately. In fact, I honestly just recommend not using it at all...
No, it really is a problem with this design and not an issue with golangci-lint.
The trouble is that MVS will happen across all of your dependencies, including direct and other tool dependencies. If everything very strictly followed Go's own versioning guidelines, then this would be OK since any breaking change would be forced off into a separate module identity. However, even Google's own modules don't always follow this rule, so in reality it's just kind of unrealistic.
You don't need something huge like golangci-lint to run into problems. It's just easier to see it happen because the large number of dependencies makes it a lot more likely.
As long as a team agrees to have .golangci-version as the source of truth, the people using the tool don't have to worry about having the right version installed, as the wrapper fetches it on demand.
Having the wrong version installed between collaborators is problematic as then they may get different results and spend time wondering why.
Interesting - would something like `make lint` (which then installs to `$PWD/bin`) work? That's how I've been doing it on the projects I've been working on, and it's worked nicely - including automated updates via Renovate (https://www.jvt.me/posts/2022/12/15/renovate-golangci-lint/)
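Roughly what that looks like in the Makefile, as a sketch (the version is just a placeholder, pin whatever you use):

    GOLANGCI_LINT_VERSION ?= v1.63.4

    # installs into ./bin so nothing global is touched
    bin/golangci-lint:
        GOBIN=$(CURDIR)/bin go install github.com/golangci/golangci-lint/cmd/golangci-lint@$(GOLANGCI_LINT_VERSION)

    lint: bin/golangci-lint
        ./bin/golangci-lint run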
A simpler approach is to use `go run` with a specific version. e.g.:
go run github.com/golangci/golangci-lint/cmd/golangci-lint@v1.63.4 run
Easy enough to stuff in a Makefile or whatever.
Even better in Go 1.24 since, according to this article, the invocations of go run will also be cached (rather than just relying on the compilation cache). So there shouldn't be much of an advantage to pre-emptively installing binaries anymore versus just `go run`ning them.
Why do ecosystems continue to screw up dependency management in 2025? You would think the single most widely used feature by programmers everywhere would be a solved problem by now.
Go launched without any dependency management. Go people believe that anything non-trivial is either a useless abstraction, or too difficult for the average developer. So their solution is to simply not add it, and tell anyone claiming to need it that their mind has been poisoned by other languages...
Until that time when they realize everyone else was right, and they add an oversimplified, bad solution to their "great" language.
Your reply is polemical and somewhat detached from the experience of working with Go.
From my experience, Go's dependency management is far better than anything else I've worked with (across languages including Java, Scala, Javascript/Typescript, Python).
Your criticism is perhaps relevant in relation to language features (though I would disagree with your position) but has no basis in relation to tooling and the standard library, both of which are best in class.
You seem to only use languages with bad dependency management. Which sounds tongue in cheek, but it's true. These are the languages (along with Go) where people hate the dependency management solutions.
Haha perhaps - though I think you'll find that (with the exception of Scala) these are some of the most popular programming languages.
Having said that, I don't really know anyone with significant Go experience who dislikes its dependency management - most critiques are now quite out of date/predate current solutions by several years.
Finally, I'm not really sure which language you are proposing has better dependency management than Go - whether in relation to security, avoiding the 'diamond' problem, simplicity, etc. Rustaceans like to critique Go (no idea if you are one, but my experience is that criticism often comes from this crowd) but I'm not sure how, from my own Rust experience/knowledge, it is actually better. In fact, I think there are several grounds to think that it is worse (even if still very solid compared to some of the aforementioned languages).
I think in this case you should give Rust a try, if only for Cargo, because the mentioned issues are nonexistent there. Also because it's the language most often referred to as being completely on the other side of the spectrum when it comes to language design philosophy.
I have no beef with Rust, but objectively I think its dependency management, general tooling, and standard library are significantly worse than Go's. The language, as you say, has a quite different philosophy from Go which many favour - but that is only part of the story.
Could you elucidate which of the 'mentioned issues' you think are present for Go (in relation to tooling) that do not apply to Rust/cargo? Is your critique based solely on the new `go tool` command or more widespread? And are you aware that the parent criticism is at least partially misguided given it is possible to store tooling in a separate .mod file or simply ignore this feature altogether?
Each Rust crate is compiled on its own. That means if a library you pull in needs the same crate as another dependency, even at an incompatible version, it doesn't matter. Cargo also understands the difference between test and build dependencies next to the dependencies for the actual lib/binaries.
The fact that each library and its dependencies get compiled separately adds quite a lot to build time, depending on how many crates you reference. But you usually don't fight with dependency version issues. The only thing which is not expressed in crates is the minimum Rust version needed. This is a pain point when you want or need to stay on a specific toolchain version. Because transitive dependencies are declared by default as "major.minor.patch" without any locking, Cargo will pull the latest compatible version during an update or fresh checkout (meaning anything that is still compatible in terms of semantic versioning; e.g. "1.1.0" resolves to a version that is >= 1.1.0 && < 2.0.0). And because toolchain requirements usually bump in minor updates, it happens that a build suddenly fails after an update. Hope this makes sense.
Speaking as someone who comes from the opposite end of the spectrum (Scala both professionally and by personal preference) and who doesn't enjoy Go as a language, I think there's a lot to be said for a language that evolves that way. Being able to make decisions with literally years of hindsight is powerful. Looking at the top critiques here and how they've been addressed, this seems like a pretty thoroughly baked approach.
I would rather scratch my eyes out than use Go for the kind of code I write day-to-day, but Go looks amazing for the kinds of things where you can achieve high levels of boringness, which is what we should all be striving for.
Go is more like "decades in hindsight", though. Literally the case with generics, just to give you one example - and for all the talk about how other languages aren't doing it right and they're waiting to figure it out, the resulting design is exactly the same as those other languages, except that now they had to bolt it onto the language that evolved without them for a decade, with all the inconsistencies this creates.
Different languages speak to different people. I feel the same way about Scala and most other functional languages. To me, they’re fun and all, but I wouldn’t build anything large-scale with them. My problem space is interesting enough that I don’t have to worry about Go being boring.
To be clear, I mean "boring" in the positive engineering sense of well-defined and reliable. The kind of work that I don't like to do in Go is "interesting" in the negative sense of lots of special cases, complicated branching based on combinations of data, complicated error handling... stuff where pattern matching and use of types like Option and Either shine (hell, even exception handling helps!) Basically the kind of stuff that good engineers would design out of existence if the product managers and salespeople would let us.
If all dependencies live in source code repos, no one ever migrates their projects elsewhere, and URLs are placed directly in the source code as imports, then no thanks.
This isn't exactly true, as the dependency management was simple - clone the relevant project. Dependencies being source is great as it encourages open source.
Of course, the problem with that was that it was pretty much impossible to version properly, causing breakages and such.
If you don't like Go, why did you come and comment in this thread? didn't your mother ever tell you that if you don't have something nice to say, you shouldn't say anything?
This situation has improved over the last 9+ years or so. I agree that's where things started and I felt much the same way at the time.
That said, I still strongly dislike that the Go ecosystem conflates a library's repo URL with its in-language URI. Having `github.com` everywhere in your source code just ignores other use-cases for dependency management, like running an artifactory in an enterprise setting. My point being: there's still room for improvement.
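(To be fair, the fetch side can already be pointed at an internal mirror without changing the import paths, something like the following with made-up internal URLs. It's the URIs baked into the source that bother me.)

    export GOPROXY=https://artifactory.example.com/api/go/go-virtual   # hypothetical internal proxy
    export GOPRIVATE=github.com/your-org/*                             # hypothetical; disables public checksum DB checks for these paths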
Go has been a masterclass in disparaging industry learnings as "unnecessary" (dep management, generics, null safety), then gradually bolting them into the language. It's kinda hilarious to watch.
Because everyone has their own opinion about it, like most other standards.
Personally, I think the way PHP handles dependencies is vastly preferable to every other ecosystem I've developed in, for most types of development - but I know its somewhat inflexible approach would be a headache for some small percentage of developers too.
I do not; I'm just aware of it as a hip tool in the general space of "I want to install and version tools per-project, but I don't want to learn the nix programming language"
> Using "go tool" forces you to have a bunch of dependencies in your go.mod that can conflict with your software's real dependency requirements, when there's zero reason those matter. You shouldn't have to care if one of your developer tools depends on a different version of a library than you.
Heh, were the people who made 'go tool' the same people who made Maven? Would make sense :P
I agree — tools should be shared artifacts that the team downloads and can guarantee are the same for everyone. I usually set up a flake.nix for everyone, but flox, earthly, devenv, and jetify are all great alternatives. Ideally your tools are declaratively configured and shared between your team and your CI/CD environment, too.
The `go tool` stuff has always seemed like a junior engineering hack — literally the wrong tool for the job, yeah sure it gets it working, but other than that it's gross.
The feature itself seems reasonable and useful, especially if most of your tooling is written in Go as well. But this part caught my attention:
> user defined tools are compiled each time they are used
Why compile them each time they are used? Assuming you're compiling them from source, shouldn't they be compiled once, and then have the 'go tool' command reuse the same binaries? I don't see why it compiles them at the time you run the tool, rather than when you're installing dependencies. The benchmarks show a significant latency increase. The author also provided a different approach which doesn't seem to have any obvious downsides, besides not sharing dependency versions (which may or may not be a good thing - that's a separate discussion IMO).
Ok, I'm trying to suss out what this means, since `go tool` didn't even exist before 1.24.
The functionality of `go tool` seems to build on the existing `go run` and the latter already uses the same package compilation cache as `go build`. Subsequent invocations of `go run X` were notably faster than the first invocation long before 1.24. However, it seems that the final executable was never cached before, but now it will be as of 1.24. This benefits both `go run` and `go tool`.
However, this raises the question: do the times reported in the blog reflect the benefit of executable caching, or were they collected before that feature was implemented (and/or is it not working properly)?
"Shared dependency state" was my very first thought when I heard about how it was built.
Yeah I want none of that. I'll stick with my makefiles and a dedicated "internal/tools" module. Tools routinely force upgrades that break other things, and allowing that is a feature.
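The setup is nothing fancy; roughly this (stringer is just an example tool, and the layout is whatever you like):

    internal/tools/
      go.mod      # its own module, so tool upgrades never touch the app's dependency graph
      tools.go

    // internal/tools/tools.go
    //go:build tools

    package tools

    // blank imports pin the tool versions in internal/tools/go.mod
    import _ "golang.org/x/tools/cmd/stringer"

    # Makefile: build the tool out of the tools module into ./bin
    bin/stringer:
        cd internal/tools && GOBIN=$(CURDIR)/bin go install golang.org/x/tools/cmd/stringer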
same. tools are not part of the codebase, nor dependencies.
you got to have isolation of artefact and tools around to work with it.
it is bonkers to start versioning tools used to build project mixed with artefact dependencies itself. should we include version of VSCode used to type code? how about transitive dependencies of VSCode? how about OS itself to edit files? how about version of LLM model that generated some of this code? where does this stop?
The state of things with some projects I've touched is "We have a giant CI thing that does a bunch of things. It is really very large. It might or might not be broken, but we work around it and it's fine."
I think some of the euphoria around tracking tooling in the repo is "yes, now I can run a command in the repo and it's as if I spun up a docker container with the CI locally, but it's just regular software running on my machine!" This is a huge improvement if you're used to the minutes-or-hours-long CI cycle being your only interaction with the "real environment."
The reductio ad absurdum that you describe is basically "a snapshot of the definitions of all the container images in our CI pipeline." It's not a ridiculous example, it's how many large projects run.
I am in the same boat with you that this had better be versioned, reproducible, and standardised.
my key concern is whether tools used to build a project have to be in the same pool as the project itself (which may or may not use tools to build/edit/maintain/debug it).
It makes sense to some extent when the toolchain can tap into native language specific constructs when you need that REPL-like iteration loop to be tight and fast. But that kind of thing is probably only required in a small subset of the kind of tooling that gets implemented.
The tradeoff with this approach is that you lose any sort of agnosticism when you drop into the language specific tooling. So now if you work at a corporation and have to deal with multiple toolchains every engineer now needs to learn and work with new build tooling X times for each supported language. This always happens to some extent - there’s always going to be some things that use the language’s specific task runner constructs - but keeping that minimal is usually a good idea in this scenario.
Your complaint feels to me like it is about poorly implemented CI systems that heavily leverage container-based workflows (of which there are many in the wild). If implemented properly with caching, really the main overhead you are paying in these types of setups is the virtualization overhead (on Macs) and the cold start time for the engine. For most people and in most cases neither will make a significant difference in the wall clock time of their loop, comparatively.
This take absolutely boggles the mind. You don't want people compiling your code with different versions of tools so you have to debug thousands of potential combinations of everything. You don't want people running different versions of formatters/linters that leave conflicting diffs throughout your commit history.
so where does it stop? let's include version of OS on laptop of people who edit code? it is getting ridiculous.
you got to draw a line somewhere.
in my opinion, "if dependency code is not linked nor compiled-into nor copied as a source (e.g. model weights, or other artefacts) then it must not be included into dependency tree of project source code"
that still means, you are free to track versions/hashes/etc. of tools and their dependencies. just do it separately.
Ideally it stops at the point where the tools actually affect your project.
Does everyone need to use the same IDE? Obviously not. Same C++ compiler? Ideally yes. (And yes you can do that in some cases, e.g. Bazel and its ilk allow you to vendor compilers.)
It's not that uncommon to have the OS version fixed in CI/CD pipelines. E.g. a build process intended to produce artefacts for Apple's App Store depends on Xcode, and Xcode may be forced to upgrade in case of a macOS upgrade, and that may break things. So the OS version becomes a line in the requirements. It's kinda disappointing, but it's the real state of affairs.
how about the hardware that the software is supposed to run on? that certainly can have an effect. let's pin that too into the project repo. don't want to continue this thread. but to re-iterate my point, you have to stop somewhere with what to pin/version and what not to. I think my criteria is reasonable, but ultimately it is up to you. (so long as whole ecosystems and dependency trees do not become bags of everything, in which case Go here is alright so far. after digging deeper, what v1.24 proposes will not cause a massive dependency apocalypse of tools propagating everywhere, and only what you actually use in main is going to be included in go.mod, thanks to module pruning.)
yes let's have a meta project that can track the version of my tools "separately", and the version of my repo.
linters, formatters, reproducible codegen should be tracked. their output is deterministic and you want to enforce that in CI in case people forget. the rest doesn't really affect the code (well windows OS and their CRLF do but git has eol attributes to control that).
agree it has to be deterministic. my major concern was whether tools dependencies are mixed with actual software they are used to build. hopefully they are not mixed, so that you can have guarantees that everything you see in dependency tree is actually used. because otherwise, there is not much point in the tree (since you can't say if it is used or not).
again, v1.24 seems to be okay here. go mod pruning should keep nodes in tree clear from polluting each other.
No one says that this is a one-size-fits-all solution, but for some use cases (small tools that are intimately connected to the rest of the codebase, even reuse some internal code/libraries) it's probably helpful...
"Popular" and "good" have no relation to each other. They correlate fairly well, but that's all.
Blending those dependencies already causes somewhat frequent problems for library owners / for users of libraries that do this. Encouraging it is not what I would consider beneficial.
I always think it's a shame that these features end up getting built into ecosystem-specific build tools. Why do we need separate build systems for every language? It seems entirely possible to have build system that can do all this stuff for every language at once.
From my experience at Google I _know_ this is possible in a Megamonorepo. I have briefly fiddled with Bazel and it seems there's quite a barrier to entry, I dunno if that's just lack of experience but it didn't quite seem ready for small projects.
Maybe Nix is the solution but that has barrier to entry more at the human level - it just seems like a Way of Life that you have to dive all the way into.
Nonetheless, maybe I should try diving into one or both of those tools at some point.
> Why do we need separate build systems for every language?
Because being cross-language makes them inherit all of the complexity of the worst languages they support.
The infinite flexibility required to accommodate everyone keeps costing you at every step.
You need to learn a tool that is more powerful than your language requires, and pay the cost of more abstraction layers than you need.
Then you have to work with snowflake projects that are all different in arbitrary ways, because the everything-agnostic tool didn't impose any conventions or constraints.
The vague do-it-all build systems make everything more complicated than necessary. Their "simple" components are either a mere execution primitive that make handling different platforms/versions/configurations your problem, or are macros/magic/plugins that are a fractal of a build system written inside a build system, with more custom complexity underneath.
OTOH a language-specific build system knows exactly what that language needs, and doesn't need to support more. It can include specific solutions and workarounds for its target environments, out of the box, because it knows what it's building and what platforms it supports. It can use conventions and defaults of its language to do most things without configuration.
General build tools need build scripts written, debugged, and tweaked endlessly.
A single-language build tool can support just one standard project structure and have all projects and dependencies follow it. That makes it easier to work on other projects, and easier to write tooling that works with all of them. All because focused build system doesn't accommodate all the custom legacy projects of all languages.
You don't realize how much of a skill-and-effort black hole build scripts are until you use a language where a build command just builds it.
But this just doesn't match my experience with Blaze at all. For my internal usage with C++ & Go it's perfect. For the weird niche use case of building and packaging BPF programs (with no support from the central tooling teams, we had to write our own macros) it still just works. For Python where it's a poor fit for the language norms it's a minor inconvenience but still mostly stays out of the way. I hear Java is similar.
For vendored open source projects that build with random other tools (CMake, Nix, custom Makefile) it's a pain but the fact that it's generally possible to get them building with Blaze at all says something...
Yes, the monorepo makes all of this dramatically easier. I can consider "one-build-tool-to-rule-them-all isn't really practical outside of a monorepo" as a valid argument, although it remains to be proven. But "you fundamentally need a build tool per language" doesn't hold any water for me.
> That makes it easier to work on other projects, and easier to write tooling that works with all of them.
But... this is my whole point. Only if those projects are in the same language as yours! I can see how maybe that's valid in some domains where there's probably a lot of people who can just do almost everything on JS/TS, maybe Java has a similar domain. But for most of us switching between Go/Cargo/CMake etc is a huge pain.
Oh btw, there's also Meson. That's very cross-language while also seeming extremely simple to use. But it doesn't seem to deliver a very full-featured experience.
I count C++ projects in the "worst" bucket, where every project has its own build system, its own structure, own way to run tests, own way to configure features, own way to generate docs.
So if a build system works great for your mixed C++ projects, your build system is taking on the maximum complexity to deal with it, and that's the complexity I don't want in non-C++ projects.
When I work with pure-JS projects, or pure-Go projects, or pure-Rust projects, I don't need any of this. npm, go, and rust/cargo packages are uniform, and trivial to build with their built-in basic tools when they don't have C/C++ dependencies.
(I worked on source control at FB for many years.)
The main argument for not overly genericizing things is that you can deliver a better user experience through domain-specific code.
For Bazel and buck2 specifically, they require a total commitment to it, which implies ongoing maintenance work. I also think the fact that they don't have open governance is a hindrance. Google's and Meta's internal monorepos make certain tradeoffs that don't quite work in a more distributed model.
Bazel is also in Java I believe, which is a bit unfortunate due to process startup times. On my machine, `time bazelisk --help` takes over 0.75 seconds to run, compared to `time go --help` which is 0.003 seconds and `time cargo --help` which is 0.02 seconds. (This doesn't apply to buck2, which is in Rust.)
This is likely because you are running it in some random PWD that doesn't represent a bazel workspace. When running in a workspace the bazel daemon persists. Inside my workspace the bazelisk --help invocation needs just 30ms real time.
Running bazel outside of a bazel workspace is not a major use-case that needs to be fixed.
> When running in a workspace the bazel daemon persists. Inside my workspace the bazelisk --help invocation needs just 30ms real time.
It still has a slow startup time, bazel just works around that by using a persistent daemon, so that it is relatively fast after as long as the daemon is running.
Bazel prints a message when you invalidate the in-memory cache in a perhaps accidental way; you can supply it with a flag to make this an error and skip the cache invalidation.
If you try to run two Bazel invocations in parallel in the same workspace, one waits for the other to be done.
GraalVM’s native image has been a thing for a while now. This could overcome the daemon issue partially. The daemon does more, of course, as it keeps some state in memory. But at least the binary start time is a solved problem in Java land.
I think the problem is basically because the build system has to be implemented using some ecosystem, and no other ecosystem wants to depend on that one.
If your "one build system to rule them all" was built in, say, Ruby, the Python ecosystem won't want to use it. No Python evangelist wants to tell users that step 1 of getting up and running with Python is "Install Ruby".
So you tend to get a lot of wheel reinvention across ecosystems.
I don't necessarily think it's a bad thing. Yes, it's a lot of redundant work. But it's also an opportunity to shed historical baggage and learn from previous mistakes. Compare, for example, how beloved Rust's cargo ecosystem is compared the ongoing mess that is package management in Python.
A fresh start can be valuable, and not having a monoculture can be helpful for rapid evolution.
> No Python evangelist wants to tell users that step 1 of getting up and running with Python is "Install Ruby".
True, but the Python community does seem to be coalescing around tools like UV and Ruff, written in Rust. Presumably that’s more acceptable because it’s a compiled language, so they tell users to “install UV” not “install Rust”.
Not sure why that's in jest. Perl is pretty much everywhere and could do the job just fine. There's lots of former (and current) Perl hackers still around.
I've had exactly the same thought, after hitting walls repeatedly with limitations in single-language ecosystems. And likewise, I've had the same concerns around the complexity that comes with Bazel/Buck/Nix.
It's been such a frustration for me that I started writing my own as a side project a year or two ago, based on using a standardized filesystem structure for packages instead of a manifest or configuration language. By leaning into the filesystem heavily, you can avoid a lot of language lock-in and complexity that comes with other tools. And with fingerprint-based addressing for packages and files, it's quite fast. Incremental rebuild checks for my projects with hundreds of packages take only 200-300ms on my low-end laptop with an Intel N200 and mid-tier SSD.
One other alternative I know of that's multi-language is Pants(https://www.pantsbuild.org/), which has support for packages in several languages, and an "ad-hoc" mode which lets you build packages with a custom tool if it isn't officially supported. They've added support for quite a few new tools/languages lately, and seem to be very much an active project.
I agree. In my opinion, if you can keep the experience of Bazel limited to build targets, there is a low barrier to entry even if it is tedious. Major issues show up with Bazel once you start having to write rules, tool chains, or if your workspace file talks to the Internet.
I think you can fix these issues by using a package manager around Bazel. Conda is my preferred choice because it is in the top tier for adoption, cross platform support, and supported more locked down use cases like going through mirrors, not having root, not controlling file paths, etc. What Bazel gets from this is a generic solution for package management with better version solving for build rules, source dependencies and binary dependencies. By sourcing binary deps from conda forge, you get a midpoint between deep investment into Bazel and binaries with unknown provenance which allows you to incrementally move to source as appropriate.
Additional notes: some requirements limit utility and approach being partial support of a platform. If you require root on Linux, wsl on Windows, have frequent compilation breakage on darwin, or neglect Windows file paths, your cross platform support is partial in my book.
Use of Java for Bazel and Python for conda might be regrettable, but not bad enough to warrant moving down the list of adoption and in my experience there is vastly more Bazel out there than Buck or other competitors. Similarly, you want to see some adoption from Haskell, Rust, Julia, Golang, Python, C++, etc.
JavaScript is thorny. You really don't want to have to deal with multiple versions of the same library with compiled languages, but you have to with JavaScript. I haven't seen too much demand for JavaScript bindings to C++ wrappers around a Rust core that uses C core libraries, but I do see that for Python bindings.
> You really don't want to have to deal with multiple versions of the same library with compiled languages, but you have to with JavaScript.
Rust handles this fine by unifying up to semver compatibility -- diamond dependency hell is an artifact of the lack of namespacing in many older languages.
Conda unifies by using a sat solver to find versions of software which are mutually compatible regardless of whether they agree on the meaning of semver. So, both approaches require unifying versions. Linking against C gets pretty broken without this.
The issue I was referring to is that in Javascript, you can write code which uses multiple versions of the same library which are mutually incompatible. Since they're mutually incompatible, no sat-solve or unifier is going to help you. You must permit multiple versions of the same library in the same environment. So far, my approach of ignoring some Javascript libraries has worked for my backend development. :)
Rust does permit multiple incompatible versions of the same library in the same environment. The types/objects from one version are distinct from the types/objects of the other, it's a type error to try mix them.
But you can use two versions of the same library in your project; I've done it by giving one of them a different name.
My experience with Bazel is it does everything you need, and works incredibly well once set up, but is ferociously complex and hard to learn and get started with. Buck and Pants are easier in some ways, but fundamentally they still look and feel mostly like Bazel, warts and all
I've been working on an alternate build tool, Mill (https://www.mill-build.org), that tries to provide the 90% of Bazel that people need at 10% of the complexity cost. From a greenfield perspective it's a lot of work to try and catch up to Bazel's cross-language support and community. I think we can eventually get there, but it will be a long slog
I like that Go decided to natively support this. But since it’s keeping the dev dependencies in the same go.mod, won’t it make the binary larger?
In Python’s uv, the pyproject.toml has separate sections for dev and prod dependencies. Then uv generates a single lock file where you can specify whether to install dev or prod deps.
But what happens if I run ‘go run’ or ‘go build’? Will the tools get into the final artifact?
I know Python still doesn’t solve the issue where a tool can depend on a different version of a library than the main project. But this approach in Go doesn’t seem to fix it either. If your tool needs an older version of a library, the single go.mod file forces the entire project to use the older version, even if the project needs—or can only support—a newer version of the dependency.
No. The binary size is related to the number of dependencies you use in each main package (and the dependencies they use, etc). It does not matter how many dependencies you have in your go.mod.
Ah, thanks. This isn't much of an upgrade from the `tools.go` convention, where the tools are underscore-imported. All it does is provide an indication in the `go.mod` file that some dependencies come from tools.
Plus, `go tool <tool-name>` is slower than `./bin/<tool-name>`. Not to mention, it doesn’t resolve the issue where tools might use a different version of a dependency than the app.
Exactly. You lose build isolation for those tools, but you have the convenience of shipping something tested and proven (ideally) alongside the project they support. At the same time, this mess all stays isolated from the parent environment which may be the bigger fight that devs have on their hands - not everyone is using Nix or container isolation of some sort.
I also see this as sugar for `go build` or even `go run`. Or as something way easier than the `go generate` + `//go:generate go run` hack. So we can look at this as a simple refinement for existing practices.
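i.e. the difference is roughly this (stringer is just an example tool here):

    // the old hack: the tool's version comes from a tools.go + go.mod pin (or an inline @version)
    //go:generate go run golang.org/x/tools/cmd/stringer -type=Color

    // Go 1.24: declare it once with `go get -tool golang.org/x/tools/cmd/stringer`, then
    //go:generate go tool stringer -type=Color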
go tool is only slower when (re-)compilation is needed, which is not often. You'd have to pay the same price anyway at some point to build the binary placed in ./bin.
I'm actually not 100% on this; there is a cache, and it should speed things up on subsequent runs, but maybe not as much as one might think: https://news.ycombinator.com/item?id=42864971
> In Python’s uv, the pyproject.toml has separate sections for dev and prod dependencies.
If you want, you can have multiple ".mod" files and set "-modfile=dev-env.mod" every time you run the "go" binary with the "run" or "build" command. For example, you can take what @mseepgood mentioned:
> go tool -modfile=tools.mod mytool
Plus, in recent versions of Go we have workspaces [0][1]. It is yet another way to easily switch between various environments or have isolated modules in a monorepo.
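Something along these lines for the separate-file variant (untested sketch; "mytool" is a placeholder and I'm assuming tools.mod already exists):

    go get -modfile=tools.mod -tool example.com/some/cmd/mytool@v1.2.3
    go tool -modfile=tools.mod mytool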
This seems handy, but often the tools run by `go generate` are outside of the Go ecosystem, or need to be binaries.
So I think a general solution would work better, and not be limited to Go. There are plenty of tools in this space to choose from: mise, devenv, Nix, Hermit, etc.
Mise is right on the edge of being pretty killer. I’m bullish on it. It also includes a lot of nice to haves that you can declare, like k9s, which isn’t exactly a dev tool but becomes expected
> Due to module pruning, when you depend on a module that itself has a tool dependency, requirements that exist just to satisfy that tool dependency do not usually become requirements of your module.
Go uses Minimum Version Selection (MVS) instead of a SAT solver. There are no ranges in any go dependency specifications. It's actually a very simple and elegant algorithm for dependency version selection
> Minimal version selection assumes that each module declares its own dependency requirements: a list of minimum versions of other modules. Modules are assumed to follow the import compatibility rule—packages in any newer version should work as well as older ones—so a dependency requirement gives only a minimum version, never a maximum version or a list of incompatible later versions.
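Concretely, selection looks something like this (module names made up):

    your go.mod:            require example.com/lib v1.2.0
    a dependency's go.mod:  require example.com/lib v1.4.0
    result: the build uses example.com/lib v1.4.0
    (the highest of the stated minimums; nothing newer is ever pulled in just because it exists upstream)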
That sounds like a non-starter. You almost never want to unintentionally 'upgrade' to the next 'major' ver. There's also occasionally broken or hacked/compromised minor vers.
thankfully major versions in go have different names (module, module/v2, module/v3) enforced by tooling, so you'll never upgrade to the next major (except v0 -> v1).
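e.g. (made-up module) the two majors even show up as separate requires:

    require example.com/foo v1.9.0      // the v0/v1 line
    require example.com/foo/v2 v2.3.1   // v2 is a different module path, imported as "example.com/foo/v2"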
I don't understand why it's a good idea to couple tooling or configuration or infrastructure (e.g. Aspire.NET, which I'm also not convinced of being a good idea) so tightly with the application. An application should not need to be aware of how whatever tools are implemented or how configuration or infrastructure is managed. The tooling should point to the application as dependency. The application should not have any dependency on tooling.
I appreciate that "tools" that are used to build the final version of a module/cli/service are explicitly managed through go.mod.
I really dislike that now I'm going to have two problems, managing other tools installed through a makefile, e.g. lint, and managing tools "installed" through go.mod, e.g. mocks generators, stringify, etc.
I feel like this is a net negative on the ecosystem, again. Each release the Go team adds things to manage and makes it harder to interact with other codebases. In this case, each company will have to decide if they want to use "go tool" and when to use it. Each time I clone an open source repo I'm going to have to check how they manage their tools.
My current approach has been setting GOBIN to a local project bin via direnv and go installing bins there. install commands themselves are cached by me with a naive checksum check for the install script itself when I run my commands. Therefore all `go install`s run in parallel if I edit the install script, and go decides what to reinstall or not. At this point I don't feel it's worth migrating to `go tool` having this setup, we'll see when it's stable
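For reference, the direnv side is just a couple of lines in .envrc (sketch):

    # .envrc
    export GOBIN=$PWD/bin   # `go install` drops binaries here
    PATH_add bin            # direnv helper that prepends ./bin to PATH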
what is also concerning: years ago the Go team did a small vote, a small survey of positive occurrences, and decided to enforce it globally for everyone.
the old design gave people the option to use the `tools.go` approach, or another one, or nothing at all. now they are enforcing this `tools.go` standard. Go looks to be moving into very restrictive territory.
what about surveying opposing views? what about the people who did not use `tools.go`?
> Due to module pruning, when you depend on a module that itself has a tool dependency, requirements that exist just to satisfy that tool dependency do not usually become requirements of your module.
I don't love the pollution in the go.mod or being forced to have multiple files to track dependencies.
Being able to run tools directly with `go generate` + `go run` [1] already works well enough, and I frankly don't see any benefits in this new approach compared to it.
I think it's great that there's prior art that can be used as examples, proof of value, and opportunities for learning. I commend .NET for investing the resources in researching and developing this feature.
I don't like this. When I install a tool, I want to use it with their dependency versions at the moment they released it.
When I use `go tool`, it uses whatever I have in go.mod; and, in the opposite direction, it will update my go.mod for no real reason.
But some people do this right now with tools.go, so... whatever, it's a better version of the tools.go pattern. And I can still do it my preferred way with `go install <tool>@version` in a makefile. So, eh.
Since I started using nix devShells this is kind of useless.
What if I have 1 tool that isn't a Go tool, what do I do with it? So I have that one "exception" and here we go again...
This is actually a valid downside of Go, in that it has its own set of tools for things like debugging and whatnot instead of being compatible with existing tools.
A note for the author in case they are reading: "i.e." means "that is", "e.g." means "for example". You should be able to substitute these meanings and find the sentence makes sense. In all cases here you wanted "e.g.".
I wish the Go team would focus on performance rather than adding new features that nobody asked for. The http stack is barely able to beat NodeJS these days, ffs.
Go is a statically typed, AOT-compiled language, unlike V8. The kind of task that had countless hours of R&D poured into it for decades, much of it public research and open source code.
I'm not sure how GC is relevant here at all given that JS is also a garbage-collected language.
I have tested it and probably will use it but the fact that it pollutes your go.mod's indirect dependency list (without any distinction indicating it's for a tool) is very annoying.
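For anyone who hasn't tried it yet, after a `go get -tool ...` the go.mod ends up looking roughly like this (trimmed, names and versions made up):

    tool example.com/some/linter/cmd/linter

    require (
        example.com/some/linter v1.8.0 // indirect
        example.com/some/dep v1.5.0 // indirect  <- only here for the tool, but nothing marks it as such
    )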
We went back and forth on this a lot, but it boiled down to wanting only one dependency graph per module instead of two. This simplifies things like security scanners, and other workflows that analyze your dependencies.
A `// tool` comment would be a nice addition, it's probably not impossible to add, but the code is quite fiddly.
Luckily for library authors, although it does impact version selection for projects that use your module, those projects do not get `// indirect` lines in their go.mod, because those packages are not required when building their module.
Thank you for working on it. It is a nice feature and still better than alternatives.
I'm not a library author and I try to be careful about what dependencies I introduce to my projects (including indirect dependencies). On one project, switching to `go tool` makes my go.mod go from 93 lines to 247 (excluding the tools themselves) - this makes it infeasible to manually review.
If I'm only using a single feature of a multi-purpose tool for example, does it matter to me that some unrelated dependency of theirs has a security issue?
>If I'm only using a single feature of a multi-purpose tool for example, does it matter to me that some unrelated dependency of theirs has a security issue?
How is anyone supposed to know whether there's an issue or not? To simplify things, if you use the tool and the dependency belongs to the tool, then the issue can affect you. Anything more advanced than that requires analyzing the code.
What if I'm already using techniques, such as sandboxing, to prevent the tools from doing anything unexpected? Why bring this entire mess of indirect dependencies into my project if I'm just using a tool to occasionally analyze my binary's output size? Or a tool to lint my protobuf files?
If it's a build dependency, then you have to have it. If you don't like the size of the tool then take it up with the authors. I'm not a Go programmer by the way, this is all just obvious to me.
The functionality we're discussing can be used for tools that are not build dependencies. They may be important for your project and worth having contributors be on the same version but not part of the build.
It will still add the dependencies of those tools as indirect dependencies to your go.mod file, that is what's being discussed.
If you use the tool to develop your project then it is basically a build dependency. That is a sweeping generalization, but it's essentially correct in most cases.
Reachability analysis on a tool that could be called by something outside of the project? We're talking about tools here after all - anything that can run `go tool` in that directory can call it. The go.mod tool entry could just be being used for versioning.
That is a bit much to ask for IMO. In any case, the project may not be aware of how any given developer will use the tool. So who is to say that if you change the order of two parameters to the tool, the tool might not take a different path and proceed to hack your computer? You really don't want any of this problem. What you should ask for is for the tools' dependencies to be listed separately, and for each tool to follow the Unix philosophy of "do one thing well."
There is a lot of merit to this statement, as applied to `go tool` usage and to security scanning. Just went through a big security vendor analysis and POCs. In the middle I saw Filippo Valsorda post [1] about false positives from the one stop shops, while govulncheck (language specific) did not have them. At the same time, there was one vendor who did not false positive with the reachability checks on vulns. While not always as good, one-stop-shops also add value by removing a lot of similar / duplicated work. Tradeoffs and such...
The similar/duplicated stuff can be rolled into libraries. Just don't make the libraries too big lol. I suspect there's less duplicated stuff than you think. Most of it would be stuff related to parsing files and command parameters, I guess.
it probably doesn't, and good vulnerability scanners like govulncheck from the go team won't complain about them, because they're unreachable from your source code.
now, do you care about some development tool you're running locally has a security issue? if yes, you needed to update anyway, if not, nothing changes.
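(for anyone who hasn't used it, it's just:

    go install golang.org/x/vuln/cmd/govulncheck@latest
    govulncheck ./...

and by default it only reports vulnerabilities in code your project actually reaches)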
> it boiled down to wanting only one dependency graph per module instead of two
Did you consider having tool be an alias for indirect? That would have kept a single dependency graph per module, while still enabling one to read one's go.mod by hand, rather than needing 'go mod' to know where each dependency came from and why?
I know, a random drive-by forum post is not the same as a technical design …
Having not looked at it deeply yet, why require building every time it's invoked? Is the idea to get it working then add build caching later? Seems like a pretty big drawback (bigger than the go.mod pollution, for me). Github runners are sllooooow so build times matter to me.
`go tool` doesn't require a rebuild, but it does check that the tool is up to date (which requires doing at least a bit of work).
This is one of the main advantages of using `go tool` over the "hope that contributors have the right version installed" approach. As the version of the tool required by the project evolves, it continues to work.
Interestingly, when I was first working on the proposal, `go run` deliberately did not cache the built binary. That meant that `go tool` was much faster because it only had to do the check instead of re-running the `link` step. In Go 1.24 that was changed (both to support `go tool`, but also for some other work they are planning) so this advantage of `go tool` is not needed anymore.
Dark background, too many colors, inconsistent spacing, inconsistent font-size and/or family, some links appear fully pink with pink underline, some links aren't pink and only have the underline, inline <code> is blue, but large code blocks are the same color as regular text – on black background, etc.
Sorry to hear that - are there any particular tweaks you think would work to reduce the impact? Is it e.g. the blue used by code snippets? Or because there's also the diff syntax which has green/reds?
I'm not OP, but my brain also quickly noped out of reading that page. I can appreciate the care that went into the formatting of commands and links, but it's a bit much to parse all at once. I think monospaced/preformatted text usually looks best with a different background (like the dedicated code blocks towards the end.) Also on my browser the preformatted text is decently larger than the normal paragraph text. This combined with the blue color is a bit jarring.
Choosing colors is like making music. This color scheme feels discordant, like a jumble of loud notes. Maybe try looking at color palette creators online?
(not OP) some ideas: make the <pre>'s not go all the way out to the edge of the window. Make the diff colors and code colors less dramatically different from their surroundings. Increase the contrast of the default text. Use blue for links. Drop the orange
> are there any particular tweaks you think would work to reduce the impact?
Yeah, remove all CSS.
I clicked on View -> Page Style -> No Style in Firefox and suddenly I could read it. Reader mode in Safari also worked. Reading mode in Chrome didn't work properly.
What’s wrong with copying from other projects if they’re indeed offering good ideas worth copying? You say it as if the Golang community MUST only ever have unique ideas no one else has ever thought of — something that’s increasingly rare and unlikely.
go.mod and the golang tooling is a horror show. I absolutely LOVE the language, but dealing with the tooling is horrendous. I should blog about the specifics, but if you want a short version:
* not using posix args
* obscure incantations to run tests
* go.mod tooling is completely non-deterministic and hard to use, they should have just left the old-style vendor/ alone (which worked perfectly) and wrapped a git-submodules front-end on top for everyone who was afraid of submodules. Instead they reinvented this arcane new ecosystem.
If you want to rewrite the golang tooling, I'll consult on this for free.
MVS, the algo for dep version selection, is deterministic, given the same inputs you will get the same outputs. Go has invested a lot of effort in creating reproducible builds through the entire toolchain
The _tooling_ is not reproducible. Take a not small golang project with some number of dependencies and there should be a single list of the latest versions for the entire project. And exactly what golang commands do you run to generate that list? It's totally broken. This is why so many tools cropped up like go-mod-upgrade and so on.
Everyone downvoting obviously doesn't understand the problem.
You are being downvoted for being wrong and talking about downvoting, which is called out as something not to do in the posting & commenting guidelines
the versions in go.mod are an enforcement of the versions required by your dependencies, and those your module requires. asking for it to be reproducible from scratch is like deleting package.json in a node project and asking it to magic all your >= < version constraints out of thin air; it's impossible because you're deleting the source.
Given Go’s approach to “metaprogramming” has long relied on tools written in Go, this does seem like a feature gap that needed closing. Even the introduction of `go generate` long ago formalized this approach, but still left installing the tools as an exercise for the reader. You can’t have consistent code gen if you don’t have consistent tooling across a team/CI.
> Even the introduction of `go generate` long ago formalized this approach
It did, but if you recall it came with a lot of "We have no idea why you need this" from Pike and friends. Which, of course, makes sense when you remember that they don't use the go toolchain inside Google. They use Google's toolchain, which already supports things like code generation and build dependency management in a far more elegant way. Had Go not transitioned to a community project, I expect we would have seen the same "We have no idea why you need this" from the Go project as that is another thing already handled by Google's tooling.
The parent's experience comes from similar sized companies as Google who have similar kinds of tooling as Google. His question comes not from a "why would you need this kind of feature?" in concept, but more of a "why would you not use the tooling you already have?" angle. And, to be fair, none of this is needed where you have better tooling, but the better tooling we know tends to require entire teams to maintain it, which is unrealistic for individuals to small organizations. So, this is a pretty good half-measure to allow the rest of us to play the same game in a smaller way.
> if you recall it came with a lot of "We have no idea why you need this" from Pike and friends
The blog post and design document, both authored by Rob Pike at the time[0], contain none of that sentiment. The closest approach comes from the blog post, which states:
> Go generate does nothing that couldn’t be done with Make or some other build mechanism, but it comes with the go tool—no extra installation required—and fits nicely into the Go ecosystem.
This, taken alone, would seem to support “we have no idea why you need this,” until you read the hope from the design document:
> It is hoped, however, that it may replace many existing uses of make(1) in the Go repo at least.
These are not words of someone who doesn’t understand why users would need this.
Also, I am at a FAANG and my experience differs from the parent—`go tool` is sorely needed by my teams.
I am both strongly of the opinion that this was already done much better in Bazel, and that the go-native version seems clean, clear, and simple and should probably be adopted by pure go shops.
The digraph problem of build tooling is hardly new, though the ability to checksum all of your build tools, executables, and intermediate outputs to assure consistency has only relatively recently become feasible. Bazel is a heavy instrument and making it work as well as it does was a hard problem even for Google. I don't know anyone making the same investment, and doubt it makes the slightest hint of sense for anyone outside the Fortune 500.
i think it's more an argument to not use the dumpster fire that is golangci-lint
I've been working on a version manager for golangci-lint in the quiet moments [1]
It already works, but it's work in progress. Happy to get feedback
[1] https://github.com/anttiharju/vmatch
What's the use case you have over "just" using the official install process (`curl | sh`)?
Works across branches and projects.
Interesting - would something like `make lint` (which then installs to `$PWD/bin`) work? That's how I've been doing it on the projects I've been working on, and it's worked nicely - including automated updates via Renovate (https://www.jvt.me/posts/2022/12/15/renovate-golangci-lint/)
A simpler approach is to use `go run` with a specific version. e.g.:
Easy enough to stuff in a Makefile or whatever.
Even better in Go 1.24 since, according to this article, the invocations of go run will also be cached (rather than just utilizing the compilation cache). So there shouldn't be much of an advantage to pre-emptively installing binaries anymore versus just go running them.
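To make the suggestion above concrete, a minimal sketch (the version string is only illustrative; substitute whatever your project pins):

```
# Pin the linter version in one place and let `go run` fetch, build, and cache it.
GOLANGCI_LINT_VERSION=v1.63.4
go run "github.com/golangci/golangci-lint/cmd/golangci-lint@${GOLANGCI_LINT_VERSION}" run ./...
```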
> Using "go tool" forces you to have a bunch of dependencies in your go.mod
No, it doesn't. You can use "go tool -modfile=tools.mod mytool" if you want them separate.
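For anyone who hasn't tried that, a rough sketch of the separate-modfile workflow. This assumes tools.mod already exists, and that `go get` accepts `-modfile` together with the new `-tool` flag; the stringer path is just an example tool:

```
# record the tool in tools.mod instead of the main go.mod
go get -tool -modfile=tools.mod golang.org/x/tools/cmd/stringer@latest
# run it through the same alternate module file
go tool -modfile=tools.mod stringer
# keep the alternate file tidy on its own
go mod tidy -modfile=tools.mod
```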
I built a simple tool to generate binstubs and work around this problem: https://github.com/jcmfernandes/go-tools-binstubs
However, having multiple tools share a single mod file still proves problematic occasionally, due to incompatible dependencies.
This should almost certainly be the default recommendation.
That seems like it should have been the default
Why do ecosystems continue to screw up dependency management in 2025? You would think the single most widely used feature by programmers everywhere would be a solved problem by now.
Go launched without any dependency management. Go people believe that anything non-trivial is either a useless abstraction, or too difficult for the average developer. So their solution is to simply not add it, and tell anyone claiming to need it that their mind has been poisoned by other languages...
Until that time when they realize everyone else was right, and they add an oversimplified, bad solution to their "great" language.
Your reply is polemical and somewhat detached from the experience of working with Go.
From my experience, Go's dependency management is far better than anything else I've worked with (across languages including Java, Scala, Javascript/Typescript, Python).
Your criticism is perhaps relevant in relation to language features (though I would disagree with your position) but has no basis in relation to tooling and the standard library, both of which are best in class.
> Java, Scala, Javascript/Typescript, Python
You seem to only use languages with bad dependency management. Which sounds tongue in cheek, but it's true. These are the languages (along with Go) where people hate the dependency management solutions.
Haha perhaps - though I think you'll find that (with the exception of Scala) these are some of the most popular programming languages.
Having said that, I don't really know anyone with significant Go experience who dislikes its dependency management - most critiques are now quite out of date/predate current solutions by several years.
Finally, I'm not really sure which language you are proposing has better dependency management than Go - whether in relation to security, avoiding the 'diamond' problem, simplicity, etc. Rustaceans like to critique Go (no idea if you are one, but my experience is that the criticism often comes from this crowd), but I'm not sure, from my own Rust experience/knowledge, how it is actually better. In fact, I think there are several grounds to think it is worse (even if still very solid compared to some of the aforementioned languages).
I haven't used Java much in the last few years, but it was pretty straightforward with Maven, with the slight caveat that you use an IDE to do everything.
Python isn't too bad nowadays with poetry, although the repo spec could use more constraints.
I think in this case you should give rust a try if only for cargo. Because the mentioned issues are non existent there. Also because it’s the language mostly referred to as being completely on the other side of the spectrum when it comes to language design philosophy.
I have no beef with Rust, but objectively I think its dependency management, general tooling, and standard library are significantly worse than Go's. The language, as you say, has a quite different philosophy from Go which many favour - but that is only part of the story.
Could you elucidate which of the 'mentioned issues' you think are present for Go (in relation to tooling) that do not apply to Rust/cargo? Is your critique based solely on the new `go tool` command or more widespread? And are you aware that the parent criticism is at least partially misguided given it is possible to store tooling in a separate .mod file or simply ignore this feature altogether?
Each Rust crate is compiled on its own, which means that if a library you pull in needs the same crate as another dependency, even with incompatible versions, it doesn't matter. Cargo also understands the difference between test and build dependencies, next to the dependencies for the actual lib/binaries.
The fact that each library and its dependencies gets compiled separately adds quite a lot to build time, depending on how many crates you reference. But you usually don't fight with dependency version issues. The only thing not expressed in crates is the minimum Rust version needed, which is a pain point when you want or need to stay on a specific toolchain version. Because transitive dependencies are declared by default like "major.minor.patch" without any locking, cargo will pull the latest compatible version during an update or fresh checkout (meaning anything still compatible in terms of semantic versioning; e.g. 1.1.0 resolves to a version that is >= 1.1.0 && < 2.0.0). And because toolchain version bumps usually happen as minor updates, it happens that a build suddenly fails after an update. Hope this makes sense.
I fail to see how working directly from code with direct URL mappings to source code repositories is better than Maven, Gradle, NuGet.
Anything is better than npm, with one function per package.
Python, thankfully I only need batteries for OS scripting.
Speaking as someone who comes from the opposite end of the spectrum (Scala both professionally and by personal preference) and who doesn't enjoy Go as a language, I think there's a lot to be said for a language that evolves that way. Being able to make decisions with literally years of hindsight is powerful. Looking at the top critiques here and how they've been addressed, this seems like a pretty thoroughly baked approach.
I would rather scratch my eyes out than use Go for the kind of code I write day-to-day, but Go looks amazing for the kinds of things where you can achieve high levels of boringness, which is what we should all be striving for.
Go is more like "decades in hindsight", though. Literally the case with generics, just to give you one example - and for all the talk about how other languages aren't doing it right and they're waiting to figure it out, the resulting design is exactly the same as those other languages, except that now they had to bolt it onto the language that evolved without them for a decade, with all the inconsistencies this creates.
Different languages speak to different people. I feel the same way about Scala and most other functional languages. To me, they’re fun and all, but I wouldn’t build anything large-scale with them. My problem space is interesting enough that I don’t have to worry about Go being boring.
To be clear, I mean "boring" in the positive engineering sense of well-defined and reliable. The kind of work that I don't like to do in Go is "interesting" in the negative sense of lots of special cases, complicated branching based on combinations of data, complicated error handling... stuff where pattern matching and use of types like Option and Either shine (hell, even exception handling helps!) Basically the kind of stuff that good engineers would design out of existence if the product managers and salespeople would let us.
go mod is a best-in-class dependency manager; it is way better than what most mainstream languages have, it's not even close.
I have worked with C++/Ruby/Python/JS/Java and C#; go mod is superior to all of them.
The one that is similar is Cargo (Rust).
If all dependencies have to live in source code repos, no one can ever migrate their projects elsewhere, and URLs are placed as imports directly in the source code, then no thanks.
This isn't exactly true, as the dependency management was simple - clone the relevant project. Dependencies being source is great as it encourages open source.
Of course, the problem with that was that it was pretty much impossible to version properly, causing breakages and such.
If you don't like Go, why did you come and comment in this thread? didn't your mother ever tell you that if you don't have something nice to say, you shouldn't say anything?
at least Go doesn't have virtualenvs
...yet
This situation has improved over the last 9+ years or so. I agree that's where things started and I felt much the same way at the time.
That said, I still strongly dislike that the Go ecosystem conflates a library's repo URL with its in-language URI. Having `github.com` everywhere in your source code just ignores other use cases for dependency management, like running an artifactory in an enterprise setting. My point being: there's still room for improvement.
Go has been a masterclass in disparaging industry learnings as "unnecessary" (dep management, generics, null safety), then gradually bolting them into the language. It's kinda hilarious to watch.
It would be, if they hadn't hit the jackpot with Docker and Kubernetes adoption, and now many of us in the DevOps space have to deal with it in some form.
A bit like C ignoring previous experience in safe systems, getting lucky with UNIX's adoption, and we are still trying to fix that to this day.
Because everyone has their own opinion about it, like most other standards.
Personally, I think the way PHP handles dependencies is vastly preferable to every other ecosystem I've developed in, for most types of development - but I know its somewhat inflexible approach would be a headache for some small percentage of developers too.
I ask this out of curiosity, not accusation, do you work for Flox? Can't say I've ever seen it mentioned "in the wild".
I do not, just aware of it as a hip tool in the general space of "I want to install and version tools per-project, but I don't want to learn the nix programming language"
Hehe, I was also wondering this.
> Using "go tool" forces you to have a bunch of dependencies in your go.mod that can conflict with your software's real dependency requirements, when there's zero reason those matter. You shouldn't have to care if one of your developer tools depends on a different version of a library than you.
Heh, were the people who made 'go tool' the same people who made Maven? Would make sense :P
At least build time dependencies are separated from runtime dependencies.
I agree — tools should be shared artifacts that the team downloads and can guarantee are the same for everyone. I usually set up a flake.nix for everyone, but flox, earthly, devenv, and jetify are all great alternatives. Ideally your tools are declaratively configured and shared between your team and your CI/CD environment, too.
The `go tool` stuff has always seemed like a junior engineering hack — literally the wrong tool for the job, yeah sure it gets it working, but other than that it's gross.
well now it leans into Go's reproducible tooling, so it's declaratively configured and shared between your team and your CI/CD environment too.
There are many tools that aren't written in Go, and therefore not manageable via this approach. I don't really understand why I would use this.
control the Go version with the `toolchain` directive; replace protoc with buf.build (which is way better anyway).
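For the Go-version half of that, a minimal sketch (the version is just an example; I believe the `toolchain` directive and the matching `go mod edit` flag have existed since Go 1.21):

```
# record the toolchain the module expects; newer go commands will download and use it
go mod edit -toolchain=go1.24.0
```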
Found this a lot easier to follow. https://blog.howardjohn.info/posts/go-tools-command/
And didn't quite understand the euphoria.
There's also the (draft) release notes: https://go.dev/doc/go1.24 And the docs: https://go.dev/doc/modules/managing-dependencies#tools
I've done the blank import thing before, it was kinda awkward but not _that_ bad.
The feature itself seems reasonable and useful, especially if most of your tooling is written in Go as well. But this part caught my attention:
> user defined tools are compiled each time they are used
Why compile them each time they are used? Assuming you're compiling them from source, shouldn't they be compiled once, and then have the 'go tool' command reuse the same binaries? I don't see why it compiles them at the time you run the tool, rather than when you're installing dependencies. The benchmarks show a significant latency increase. The author also provided a different approach which doesn't seem to have any obvious downsides, besides not sharing dependency versions (which may or may not be a good thing - that's a separate discussion IMO).
There is a cache, and they aren't re-compiled unless they change or the cache is cleared.
cached* as of Go 1.24, prior to that they were re-compiled each time
Ok, I'm trying to suss out what this means, since `go tool` didn't even exist before 1.24.
The functionality of `go tool` seems to build on the existing `go run` and the latter already uses the same package compilation cache as `go build`. Subsequent invocations of `go run X` were notably faster than the first invocation long before 1.24. However, it seems that the final executable was never cached before, but now it will be as of 1.24. This benefits both `go run` and `go tool`.
However, this raises the question: do the times reported in the blog reflect the benefit of executable caching, or were they collected before that feature was implemented (and/or is it not working properly)?
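One rough way to check which behaviour a given toolchain gives you (the tool and version here are only placeholders; the second invocation should be noticeably faster once the executable itself is cached):

```
# first run may need to build and link; subsequent runs should come from the cache
time go run golang.org/x/tools/cmd/stringer@v0.29.0 -help
time go run golang.org/x/tools/cmd/stringer@v0.29.0 -help
```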
"Shared dependency state" was my very first thought when I heard about how it was built.
Yeah I want none of that. I'll stick with my makefiles and a dedicated "internal/tools" module. Tools routinely force upgrades that break other things, and allowing that is a feature.
same. tools are not part of the codebase, nor dependencies.
you've got to have isolation between the artefact and the tools used to work with it.
it is bonkers to start versioning the tools used to build a project mixed in with the artefact's own dependencies. should we include the version of VSCode used to type the code? how about the transitive dependencies of VSCode? how about the OS itself used to edit the files? how about the version of the LLM model that generated some of this code? where does this stop?
The state of things with some projects I've touched is "We have a giant CI thing that does a bunch of things. It is really very large. It might or might not be broken, but we work around it and it's fine."
I think some of the euphoria around tracking tooling in the repo is "yes, now I can run a command in the repo and it's as if I spun up a docker container with the CI locally, but it's just regular software running on my machine!" This is a huge improvement if you're used to the minutes-or-hours-long CI cycle being your only interaction with the "real environment."
The reductio ad absurdum that you describe is basically "a snapshot of the definitions of all the container images in our CI pipeline." It's not a ridiculous example, it's how many large projects run.
I am in the same boat with you that this had better be versioned, reproducible, and standardised.
my key concern is whether the tools used to build a project have to be in the same pool as the project itself (which may or may not use tools to build/edit/maintain/debug it).
It makes sense to some extent when the toolchain can tap into native language specific constructs when you need that REPL-like iteration loop to be tight and fast. But that kind of thing is probably only required in a small subset of the kind of tooling that gets implemented.
The tradeoff with this approach is that you lose any sort of agnosticism when you drop into the language specific tooling. So now if you work at a corporation and have to deal with multiple toolchains every engineer now needs to learn and work with new build tooling X times for each supported language. This always happens to some extent - there’s always going to be some things that use the language’s specific task runner constructs - but keeping that minimal is usually a good idea in this scenario.
Your complaint feels to me like it is about poorly implemented CI systems that heavily leverage container-based workflows (of which there are many in the wild). If implemented properly with caching, the main overhead you are really paying in these types of setups is the virtualization overhead (on Macs) and the cold start time for the engine. For most people and in most cases neither will make a significant difference in the wall-clock time of their loop, comparatively.
This take absolutely boggles the mind. You don't want people compiling your code with different versions of tools so you have to debug thousands of potential combinations of everything. You don't want people running different versions of formatters/linters that leave conflicting diffs throughout your commit history.
so where does it stop? should we include the version of the OS on the laptops of the people who edit the code? it is getting ridiculous.
you've got to draw a line somewhere.
in my opinion, "if dependency code is not linked nor compiled-into nor copied as a source (e.g. model weights, or other artefacts) then it must not be included into dependency tree of project source code"
that still means, you are free to track versions/hashes/etc. of tools and their dependencies. just do it separately.
Ideally it stops at the point where the tools actually affect your project.
Does everyone need to use the same IDE? Obviously not. Same C++ compiler? Ideally yes. (And yes you can do that in some cases, e.g. Bazel and its ilk allow you to vendor compilers.)
It's not that uncommon to have the OS version fixed in CI/CD pipelines. E.g. a build process intended to produce artefacts for Apple's App Store depends on Xcode, and Xcode may be forced to upgrade when macOS upgrades, which may break things. So the OS version becomes a line in the requirements. It's kinda disappointing, but it's the real state of affairs.
how about the hardware the software is supposed to run on? that can certainly have an effect. let's pin that too in the project repo. I don't want to continue this thread, but to reiterate my point: you have to stop somewhere with what to pin/version and what not to. I think my criteria are reasonable, but ultimately it is up to you. (so long as whole ecosystems and dependency trees do not become bags of everything; in this respect Go is alright so far. after digging deeper, what v1.24 proposes will not cause a massive dependency apocalypse of tools propagating everywhere, since only what you actually use in main is going to be included in go.mod, thanks to module pruning.)
yes let's have a meta project that can track the version of my tools "separately", and the version of my repo.
linters, formatters, and reproducible codegen should be tracked. their output is deterministic and you want to enforce that in CI in case people forget. the rest doesn't really affect the code (well, the Windows OS and its CRLF do, but git has eol attributes to control that).
agree it has to be deterministic. my major concern was whether the tools' dependencies are mixed with the actual software they are used to build. hopefully they are not mixed, so that you can have guarantees that everything you see in the dependency tree is actually used. because otherwise, there is not much point in the tree (since you can't say whether something is used or not).
again, v1.24 seems to be okay here. go module pruning should keep nodes in the tree from polluting each other.
Tbh, I think you missed this part: "[..] mixed with artefact dependencies itself".
An artifact depends on the tools used to build it.
This is why we pin versions. Go tool is common sense, allowing for any old tool version in the build chain invites failure.
I'm not debating that. I'm pointing out that the person being replied to said there's no reason to mix it together with the artifact dependencies.
In other words, no need to mix "dependencies" and "dev dependencies" together.
No one says that this is a one-size-fits-all solution, but for some use cases (small tools that are intimately connected to the rest of the codebase, even reuse some internal code/libraries) it's probably helpful...
The way it's implemented is the way that almost everyone already does it. It's just more convenient now.
"Popular" and "good" have no relation to each other. They correlate fairly well, but that's all.
Blending those dependencies already causes somewhat frequent problems for library owners / for users of libraries that do this. Encouraging it is not what I would consider beneficial.
Ah, okay. So this achieves yet more feature parity with npm/yarn? Sweet.
Yeah it's a nice QOL improvement. Not some game changer ...
I always think it's a shame that these features end up getting built into ecosystem-specific build tools. Why do we need separate build systems for every language? It seems entirely possible to have build system that can do all this stuff for every language at once.
From my experience at Google I _know_ this is possible in a Megamonorepo. I have briefly fiddled with Bazel and it seems there's quite a barrier to entry, I dunno if that's just lack of experience but it didn't quite seem ready for small projects.
Maybe Nix is the solution but that has barrier to entry more at the human level - it just seems like a Way of Life that you have to dive all the way into.
Nonetheless, maybe I should try diving into one or both of those tools at some point.
> Why do we need separate build systems for every language?
Because being cross-language makes them inherit all of the complexity of the worst languages they support.
The infinite flexibility required to accommodate everyone keeps costing you at every step.
You need to learn a tool that is more powerful than your language requires, and pay the cost of more abstraction layers than you need.
Then you have to work with snowflake projects that are all different in arbitrary ways, because the everything-agnostic tool didn't impose any conventions or constraints.
The vague do-it-all build systems make everything more complicated than necessary. Their "simple" components are either a mere execution primitive that make handling different platforms/versions/configurations your problem, or are macros/magic/plugins that are a fractal of a build system written inside a build system, with more custom complexity underneath.
OTOH a language-specific build system knows exactly what that language needs, and doesn't need to support more. It can include specific solutions and workarounds for its target environments, out of the box, because it knows what it's building and what platforms it supports. It can use conventions and defaults of its language to do most things without configuration. General build tools need build scripts written, debugged, and tweaked endlessly.
A single-language build tool can support just one standard project structure and have all projects and dependencies follow it. That makes it easier to work on other projects, and easier to write tooling that works with all of them. All because focused build system doesn't accommodate all the custom legacy projects of all languages.
You don't realize how much of a skill-and-effort black hole build scripts are until you use a language where a build command just builds it.
But this just doesn't match my experience with Blaze at all. For my internal usage with C++ & Go it's perfect. For the weird niche use case of building and packaging BPF programs (with no support from the central tooling teams, we had to write our own macros) it still just works. For Python where it's a poor fit for the language norms it's a minor inconvenience but still mostly stays out of the way. I hear Java is similar.
For vendored open source projects that build with random other tools (CMake, Nix, custom Makefile) it's a pain but the fact that it's generally possible to get them building with Blaze at all says something...
Yes, the monorepo makes all of this dramatically easier. I can consider "one-build-tool-to-rule-them-all isn't really practical outside of a monorepo" as a valid argument, although it remains to be proven. But "you fundamentally need a build tool per language" doesn't hold any water for me.
> That makes it easier to work on other projects, and easier to write tooling that works with all of them.
But... this is my whole point. Only if those projects are in the same language as yours! I can see how maybe that's valid in some domains where there's probably a lot of people who can just do almost everything on JS/TS, maybe Java has a similar domain. But for most of us switching between Go/Cargo/CMake etc is a huge pain.
Oh btw, there's also Meson. That's very cross-language while also seeming extremely simple to use. But it doesn't seem to deliver a very full-featured experience.
I count C++ projects in the "worst" bucket, where every project has its own build system, its own structure, own way to run tests, own way to configure features, own way to generate docs.
So if a build system works great for your mixed C++ projects, your build system is taking on the maximum complexity to deal with it, and that's the complexity I don't want in non-C++ projects.
When I work with pure-JS projects, or pure-Go projects, or pure-Rust projects, I don't need any of this. npm, go, and rust/cargo packages are uniform, and trivial to build with their built-in basic tools when they don't have C/C++ dependencies.
No I'm not talking about mixed projects (although the ability to do that is very important).
I'm saying that using it for separate C++ and Go projects is extremely convenient and ergonomic.
(I worked on source control at FB for many years.)
The main argument for not overly genericizing things is that you can deliver a better user experience through domain-specific code.
For Bazel and buck2 specifically, they require a total commitment to it, which implies ongoing maintenance work. I also think the fact that they don't have open governance is a hindrance. Google's and Meta's internal monorepos make certain tradeoffs that don't quite work in a more distributed model.
Bazel is also in Java I believe, which is a bit unfortunate due to process startup times. On my machine, `time bazelisk --help` takes over 0.75 seconds to run, compared to `time go --help` which is 0.003 seconds and `time cargo --help` which is 0.02 seconds. (This doesn't apply to buck2, which is in Rust.)
This is likely because you are running it in some random PWD that doesn't represent a bazel workspace. When running in a workspace the bazel daemon persists. Inside my workspace the bazelisk --help invocation needs just 30ms real time.
Running bazel outside of a bazel workspace is not a major use-case that needs to be fixed.
> When running in a workspace the bazel daemon persists. Inside my workspace the bazelisk --help invocation needs just 30ms real time.
It still has a slow startup time, bazel just works around that by using a persistent daemon, so that it is relatively fast after as long as the daemon is running.
That's good to know, thank you!
Do you encounter cache invalidation bugs with daemonization often? I've had pretty bad experiences with daemonized dev tools in the past.
Bazel prints a message when you invalidate the in-memory cache in a perhaps accidental way; you can supply it with a flag to make this an error and skip the cache invalidation.
If you try to run two Bazel invocations in parallel in the same workspace, one waits for the other to be done.
I assumed they meant an error of improperly using cached results. I am sure bazel has its flaws but it assiduously avoids that.
Yes, unless you're using persistent workers. Then you may very well run into the same issues they mention.
GraalVM’s native image has been a thing for a while now. This could partially overcome the daemon issue. The daemon does more, of course, as it keeps some state in memory. But at least binary start time is a solved problem in Java land.
I think the problem is basically because the build system has to be implemented using some ecosystem, and no other ecosystem wants to depend on that one.
If your "one build system to rule them all" was built in, say, Ruby, the Python ecosystem won't want to use it. No Python evangelist wants to tell users that step 1 of getting up and running with Python is "Install Ruby".
So you tend to get a lot of wheel reinvention across ecosystems.
I don't necessarily think it's a bad thing. Yes, it's a lot of redundant work. But it's also an opportunity to shed historical baggage and learn from previous mistakes. Compare, for example, how beloved Rust's cargo ecosystem is compared to the ongoing mess that is package management in Python.
A fresh start can be valuable, and not having a monoculture can be helpful for rapid evolution.
> No Python evangelist wants to tell users that step 1 of getting up and running with Python is "Install Ruby".
True, but the Python community does seem to be coalescing around tools like UV and Ruff, written in Rust. Presumably that’s more acceptable because it’s a compiled language, so they tell users to “install UV” not “install Rust”.
Note that installing python stdlib installs tkinter and thus tcl.
https://wiki.tcl-lang.org/page/Python-Tcl-Interactions
I tend to think that is more the Rust community using Python, and RIIR stuff, than the Python community themselves.
I have known Python since version 1.6, and this has never been a thing until Rust.
Same applies to the RIIR going on JavaScript side.
Including tools that were already written in compiled languages (just not Rust), or where someone had an idea to make a startup around them.
Partly in jest, you can often find a Perl / bash available where you can't find a Python, Ruby, or Cargo.
Not sure why that's in jest. Perl is pretty much everywhere and could do the job just fine. There's lots of former (and current) Perl hackers still around.
Sounds like the only way out of this is to design language agnostic tooling protocols that anybody can implement.
I've had exactly the same thought, after hitting walls repeatedly with limitations in single-language ecosystems. And likewise, I've had the same concerns around the complexity that comes with Bazel/Buck/Nix.
It's been such a frustration for me that I started writing my own as a side project a year or two ago, based on using a standardized filesystem structure for packages instead of a manifest or configuration language. By leaning into the filesystem heavily, you can avoid a lot of the language lock-in and complexity that comes with other tools. And with fingerprint-based addressing for packages and files, it's quite fast. Incremental rebuild checks for my projects with hundreds of packages take only 200-300ms on my low-end laptop with an Intel N200 and mid-tier SSD.
It's an early stage project and the documentation needs some work, but if you're interested: https://github.com/somesocks/dryad https://somesocks.github.io/dryad/
One other alternative I know of that's multi-language is Pants(https://www.pantsbuild.org/), which has support for packages in several languages, and an "ad-hoc" mode which lets you build packages with a custom tool if it isn't officially supported. They've added support for quite a few new tools/languages lately, and seem to be very much an active project.
Not loving the cutesy names (https://somesocks.github.io/dryad/docs/02-concepts/01-the-ga...). I want my build tool to be boring.
I agree. In my opinion, if you can keep the experience of Bazel limited to build targets, there is a low barrier to entry even if it is tedious. Major issues show up with Bazel once you start having to write rules, tool chains, or if your workspace file talks to the Internet.
I think you can fix these issues by using a package manager around Bazel. Conda is my preferred choice because it is in the top tier for adoption and cross-platform support, and it supports more locked-down use cases like going through mirrors, not having root, not controlling file paths, etc. What Bazel gets from this is a generic solution for package management with better version solving for build rules, source dependencies, and binary dependencies. By sourcing binary deps from conda-forge, you get a midpoint between deep investment into Bazel and binaries of unknown provenance, which allows you to incrementally move to source as appropriate.
Additional notes: some requirements limit utility and amount to only partial support of a platform. If you require root on Linux, WSL on Windows, have frequent compilation breakage on Darwin, or neglect Windows file paths, your cross-platform support is partial in my book.
Use of Java for Bazel and Python for conda might be regrettable, but not bad enough to warrant moving down the list of adoption and in my experience there is vastly more Bazel out there than Buck or other competitors. Similarly, you want to see some adoption from Haskell, Rust, Julia, Golang, Python, C++, etc.
JavaScript is thorny. You really don't want to have to deal with multiple versions of the same library with compiled languages, but you have to with JavaScript. I haven't seen too much demand for JavaScript bindings to C++ wrappers around a Rust core that uses C core libraries, but I do see that for Python bindings.
> You really don't want to have to deal with multiple versions of the same library with compiled languages, but you have to with JavaScript.
Rust handles this fine by unifying up to semver compatibility -- diamond dependency hell is an artifact of the lack of namespacing in many older languages.
Conda unifies by using a SAT solver to find versions of software which are mutually compatible, regardless of whether they agree on the meaning of semver. So, both approaches require unifying versions. Linking against C gets pretty broken without this.
The issue I was referring to is that in JavaScript, you can write code which uses multiple versions of the same library which are mutually incompatible. Since they're mutually incompatible, no SAT solve or unifier is going to help you. You must permit multiple versions of the same library in the same environment. So far, my approach of ignoring some JavaScript libraries has worked for my backend development. :)
Rust does permit multiple incompatible versions of the same library in the same environment. The types/objects from one version are distinct from the types/objects of the other, it's a type error to try mix them.
But you can use two versions of the same library in your project; I've done it by giving one of them a different name.
My experience with Bazel is it does everything you need, and works incredibly well once set up, but is ferociously complex and hard to learn and get started with. Buck and Pants are easier in some ways, but fundamentally they still look and feel mostly like Bazel, warts and all
I've been working on an alternate build tool, Mill (https://www.mill-build.org), which tries to provide the 90% of Bazel that people need at 10% of the complexity cost. From a greenfield perspective it's a lot of work to try and catch up to Bazel's cross-language support and community. I think we can eventually get there, but it will be a long slog.
Brazil performs dependency resolution in a language-agnostic way.
https://gist.github.com/terabyte/15a2d3d407285b8b5a0a7964dd6...
I like that Go decided to natively support this. But since it’s keeping the dev dependencies in the same go.mod, won’t it make the binary larger?
In Python’s uv, the pyproject.toml has separate sections for dev and prod dependencies. Then uv generates a single lock file where you can specify whether to install dev or prod deps.
But what happens if I run ‘go run’ or ‘go build’? Will the tools get into the final artifact?
I know Python still doesn’t solve the issue where a tool can depend on a different version of a library than the main project. But this approach in Go doesn’t seem to fix it either. If your tool needs an older version of a library, the single go.mod file forces the entire project to use the older version, even if the project needs—or can only support—a newer version of the dependency.
> won’t it make the binary larger?
No. The binary size is related to the number of dependencies you use in each main package (and the dependencies they use, etc). It does not matter how many dependencies you have in your go.mod.
Ah, thanks. This isn't much of an upgrade from the `tools.go` convention, where the tools are underscore-imported. All it does is provide an indication in the `go.mod` file that some dependencies come from tools.
Plus, `go tool <tool-name>` is slower than `./bin/<tool-name>`. Not to mention, it doesn’t resolve the issue where tools might use a different version of a dependency than the app.
Exactly. You lose build isolation for those tools, but you have the convenience of shipping something tested and proven (ideally) alongside the project they support. At the same time, this mess all stays isolated from the parent environment which may be the bigger fight that devs have on their hands - not everyone is using Nix or container isolation of some sort.
I also see this as sugar for `go build` or even `go run`. Or as something way easier than the `go generate` + `//go:generate go run` hack. So we can look at this as a simple refinement for existing practices.
go tool is only slower when (re-)compilation is needed, which is not often. You'd have to pay the same price anyway at some point to build the binary placed in ./bin.
I'm actually not 100% on this; there is a cache, and it should speed things up on subsequent runs, but maybe not as much as one might think: https://news.ycombinator.com/item?id=42864971
> In Python’s uv, the pyproject.toml has separate sections for dev and prod dependencies.
If you want, you can have multiple ".mod" files and pass "-modfile=dev-env.mod" every time you run the "go" binary with the "run" or "build" command. For example, you can take what @mseepgood mentioned:
> go tool -modfile=tools.mod mytool
Plus, in recent versions of Go we have workspaces [0][1]. It is yet another way to easily switch between various environments or to have isolated modules in a monorepo (a rough sketch follows the links below).
[0]: https://go.dev/blog/get-familiar-with-workspaces
[1]: https://go.dev/doc/tutorial/workspaces
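A rough sketch of the workspace variant, with hypothetical paths (the app and its tooling live in separate modules that only meet in go.work):

```
# stitch independent modules together for local development
go work init ./app ./internal/tools
# add another module to the workspace later if needed
go work use ./cmd/somethingelse
```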
This seems handy, but often the tools run by `go generate` are outside of the Go ecosystem, or need to be binaries.
So I think a general solution would work better, and not be limited to Go. There are plenty of tools in this space to choose from: mise, devenv, Nix, Hermit, etc.
Mise is right on the edge of being pretty killer. I’m bullish on it. It also includes a lot of nice to haves that you can declare, like k9s, which isn’t exactly a dev tool but becomes expected
better motivation to rewrite it in Go...
but are there really that many tools you need in a Go project that aren't written in Go?
So it's just dev-dependencies?
a bit worse: it is all mixed up. to keep a separate dependency tree for the tools you need to use the old approach with a dedicated go.mod. it is actually even worse now.
UPD:
1. it is a single tree
2. BUT tools will not propagate downstream through the dependency tree, thanks to go module pruning
check this comment: https://github.com/golang/go/issues/48429#issuecomment-26184...
official docs: https://tip.golang.org/doc/modules/managing-dependencies#too...
> Due to module pruning, when you depend on a module that itself has a tool dependency, requirements that exist just to satisfy that tool dependency do not usually become requirements of your module.
"usually"?
Yes, except it does not support version ranges.
Go uses Minimum Version Selection (MVS) instead of a SAT solver. There are no ranges in any go dependency specifications. It's actually a very simple and elegant algorithm for dependency version selection
https://research.swtch.com/vgo-mvs
> Minimal version selection assumes that each module declares its own dependency requirements: a list of minimum versions of other modules. Modules are assumed to follow the import compatibility rule—packages in any newer version should work as well as older ones—so a dependency requirement gives only a minimum version, never a maximum version or a list of incompatible later versions.
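A concrete, hypothetical illustration of how that plays out when a tool dependency is in the mix:

```
# Suppose your module requires example.com/lib v1.2.0 and a tool directive pulls in
# something that requires example.com/lib v1.5.0. MVS selects v1.5.0 for everything:
# the highest of the declared minimums, never a newer release that nothing asked for.
go mod graph | grep example.com/lib
go list -m example.com/lib   # prints the version that was actually selected
```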
That sounds like a non-starter. You almost never want to unintentionally 'upgrade' to the next 'major' ver. There's also occasionally broken or hacked/compromised minor vers.
thankfully major versions in go have different names (module, module/v2, module/v3) enforced by tooling, so you'll never upgrade to the next major (except v0 -> v1).
Yep, that's the intent
lol yea
I don't understand why it's a good idea to couple tooling or configuration or infrastructure (e.g. Aspire.NET, which I'm also not convinced is a good idea) so tightly with the application. An application should not need to be aware of how whatever tools are implemented or how configuration or infrastructure is managed. The tooling should point to the application as a dependency. The application should not have any dependency on tooling.
I appreciate that "tools" that are used to build the final version of a module/cli/service are explicitly managed through go.mod.
I really dislike that now I'm going to have two problems: managing other tools installed through a makefile (e.g. lint) and managing tools "installed" through go.mod (e.g. mock generators, stringify, etc.).
I feel like this is, again, not a net positive for the ecosystem. Each release, the Golang team adds another thing to manage and makes it harder to interact with other codebases. In this case, each company will have to decide if they want to use "go tool" and when to use it. Each time I clone an open source repo I'm going to have to check how they manage their tools.
My current approach has been setting GOBIN to a project-local bin via direnv and go-installing binaries there. The install commands themselves are cached by me with a naive checksum check of the install script, so when I run my commands, all the `go install`s run in parallel if I've edited the install script, and go decides what to reinstall or not. At this point I don't feel it's worth migrating to `go tool` with this setup; we'll see when it's stable.
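A minimal sketch of that setup, assuming direnv and a project-local bin directory (the paths are whatever your project uses):

```
# .envrc: direnv loads this per project, so `go install ...` drops binaries into ./bin
export GOBIN="$PWD/bin"
export PATH="$GOBIN:$PATH"
```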
what is also concerning: years ago the Go team did a small vote, a small survey of positive occurrences, and decided to enforce it globally for everyone.
the old design gave people the option to use the `tools.go` approach, or something else, or nothing at all. now they are enforcing this `tools.go` standard. Go looks to be moving into very restrictive territory.
what about surveying opposing views? what about the people who did not use `tools.go`?
what is going on in Google, in the Go team?
UPD: check this github comment. https://github.com/golang/go/issues/48429#issuecomment-26184...
basically, go tool relies heavily on go module pruning, so transitive dependencies from tools are not propagated downstream.
also, official docs say this: https://tip.golang.org/doc/modules/managing-dependencies#too...
> Due to module pruning, when you depend on a module that itself has a tool dependency, requirements that exist just to satisfy that tool dependency do not usually become requirements of your module.
you don't have to use this feature if you don't like it...
I was so concerned, because it seemed to me "you will not have any other choice".
after digging deeper, it is alright
I don't love the pollution in the go.mod or being forced to have multiple files to track dependencies.
Being able to run tools directly via go generate with go run [1] already works well enough, and I frankly don't see any benefits in this new approach compared to it.
[1] https://github.com/golang/go/issues/42088
It doesn't look like much of an improvement over `tools/tools.go` with blank imports like this:
```
//go:build tools

package tools

import (
	_ "github.com/xxx/yyy"
	...
)
```
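For comparison, a sketch of what the Go 1.24 equivalent of that file looks like (same placeholder import path as above):

```
# adds a `tool github.com/xxx/yyy` directive to go.mod instead of a blank import
go get -tool github.com/xxx/yyy
# then invoke it by the last element of its package path
go tool yyy
```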
Funny to see a little go library you wrote [1] show up in a blog post years later. I need to update it now that go has iterators and generics.
Another great blog post [2] covers performance issues with go tool
[1] https://github.com/dprotaso/go-yit
[2] https://blog.howardjohn.info/posts/go-tools-command/
I’m sick of Go modules, and now they want to pile the mess even higher.
This has been a thing in dotnet tool for years, now.
I think it's great that there's prior art that can be used as examples, proof of value, and opportunities for learning. I commend .NET for investing the resources in researching and developing this feature.
proposal is closed and accepted by Go team. [~sigh]
you can still leave comments in discussion issue: https://github.com/golang/go/issues/48429
I don't like this. When I install a tool, I want to use it with their dependency versions at the moment they released it.
When I use `go tool`, it uses whatever I have in go.mod; and, in the opposite direction, it will update my go.mod for no real reason.
But some people do this right now with tools.go, so... whatever, it's a better version of the tools.go pattern. And I can still do it my preferred way with `go install <tool>@version` in a makefile. So, eh.
Since I started using nix devShells this is kind of useless. What if I have one tool that isn't a go tool, so what do I do with it? So I have that one "exception" and here we go again...
This is actually a valid downside of Go, in that it has its own set of tools for things like debugging and whatnot instead of being compatible with existing tools.
And why is that tool always protoc?
Yeah, you are right :) I mean, I could use buf build, but I think the problem is just more general.
A note for the author in case they are reading: "i.e." means "that is", "e.g." means "for example". You should be able to substitute these meanings and find the sentence makes sense. In all cases here you wanted "e.g.".
I wish the Go team would focus on performance rather than adding new features that nobody asked for. The http stack is barely able to beat NodeJS these days, ffs.
The Go HTTP stack is not optimized for raw performance though; there's a number of alternative HTTP stacks written in Go optimized for performance, like fasthttp (https://github.com/valyala/fasthttp). (source: https://www.techempower.com/benchmarks/)
Likewise, the standard library NodeJS http stack will not be as performant as a performance optimized alternative.
That said, if raw performance is your primary concern, neither Go nor NodeJS will be good enough. There's many more factors to consider.
Agreed on fasthttp.
> There's many more factors to consider.
Erlang / Elixir?
nodejs' v8 has had millions of man-hours spent on optimization alone, since half the internet frontend runs on v8, if not more.
Go being garbage collected and still beating v8 is one hell of an achievement.
If you need faster Go Http you're using the wrong tool.
Go is a statically typed, AOT-compiled language, unlike V8. The kind of task that had countless hours of R&D poured into it for decades, much of it public research and open source code.
I'm not sure how GC is relevant here at all given that JS is also a garbage-collected language.
I have tested it and probably will use it but the fact that it pollutes your go.mod's indirect dependency list (without any distinction indicating it's for a tool) is very annoying.
Primary contributor to the feature here.
We went back and forth on this a lot, but it boiled down to wanting only one dependency graph per module instead of two. This simplifies things like security scanners and other workflows that analyze your dependencies.
A `// tool` comment would be a nice addition, it's probably not impossible to add, but the code is quite fiddly.
Luckily for library authors, although it does impact version selection for projects that use your module, those projects do not get `// indirect` lines in their go.mod, because those packages are not required when building their module.
Thank you for working on it. It is a nice feature and still better than alternatives.
I'm not a library author and I try to be careful about what dependencies I introduce to my projects (including indirect dependencies). On one project, switching to `go tool` makes my go.mod go from 93 lines to 247 (excluding the tools themselves) - this makes it infeasible to manually review.
If I'm only using a single feature of a multi-purpose tool for example, does it matter to me that some unrelated dependency of theirs has a security issue?
>If I'm only using a single feature of a multi-purpose tool for example, does it matter to me that some unrelated dependency of theirs has a security issue?
How is anyone supposed to know whether there's an issue or not? To simplify things, if you use the tool and the dependency belongs to the tool, then the issue can affect you. Anything more advanced than that requires analyzing the code.
What if I'm already using techniques, such as sandboxing, to prevent the tools from doing anything unexpected? Why bring this entire mess of indirect dependencies into my project if I'm just using a tool to occasionally analyze my binary's output size? Or a tool to lint my protobuf files?
If it's a build dependency, then you have to have it. If you don't like the size of the tool then take it up with the authors. I'm not a Go programmer by the way, this is all just obvious to me.
The functionality we're discussing can be used for tools that are not build dependencies. They may be important for your project and worth having contributors be on the same version but not part of the build.
It will still add the dependencies of those tools as indirect dependencies to your go.mod file, that is what's being discussed.
If you use the tool to develop your project then it is basically a build dependency. That is a sweeping generalization, but it's essentially correct in most cases.
In addition, a good dependency security scanning tool can analyze reachability to answer this question for you
Reachability analysis on a tool that could be called by something outside of the project? We're talking about tools here after all - anything that can run `go tool` in that directory can call it. The go.mod tool entry could just be being used for versioning.
I'm speaking of tools and processes independent of this "go tool" stuff that we already use in our CI pipelines
Big fan of Dagger over this go tool thing
I generally loathe the use of comments for things other than comments.
That is a bit much to ask for IMO. In any case, the project may not be aware of how any given developer will use the tool. So who is to say that if you change the order of two parameters to the tool, the tool might not take a different path and proceed to hack your computer? You really don't want any of this problem. What you should ask for is for the tools' dependencies to be listed separately, and for each tool to follow the Unix philosophy of "do one thing well."
> for each tool to ... "do one thing well."
There is a lot of merit to this statement, as applied to `go tool` usage and to security scanning. Just went through a big security vendor analysis and POCs. In the middle I saw Filippo Valsorda post [1] about false positives from the one stop shops, while govulncheck (language specific) did not have them. At the same time, there was one vendor who did not false positive with the reachability checks on vulns. While not always as good, one-stop-shops also add value by removing a lot of similar / duplicated work. Tradeoffs and such...
[1] https://bsky.app/profile/filippo.abyssdomain.expert/post/3ld...
The similar/duplicated stuff can be rolled into libraries. Just don't make the libraries too big lol. I suspect there's less duplicated stuff than you think. Most of it would be stuff related to parsing files and command parameters, I guess.
it probably doesn't, and good vulnerability scanners like govulncheck from the go team won't complain about them, because they're unreachable from your source code.
now, do you care that some development tool you're running locally has a security issue? if yes, you needed to update anyway; if not, nothing changes.
> it boiled down to wanting only one dependency graph per module instead of two
Did you consider having tool be an alias for indirect? That would have kept a single dependency graph per module, while still allowing one to read one's go.mod by hand, rather than using 'go mod', to know where each dependency came from and why.
I know, a random drive-by forum post is not the same as a technical design …
Having not looked at it deeply yet, why require building every time it's invoked? Is the idea to get it working then add build caching later? Seems like a pretty big drawback (bigger than the go.mod pollution, for me). Github runners are sllooooow so build times matter to me.
`go tool` doesn't require a rebuild, but it does check that the tool is up to date (which requires doing at least a bit of work).
This is one of the main advantages of using `go tool` over the "hope that contributors have the right version installed" approach. As the version of the tool required by the project evolves, it continues to work.
Interestingly, when I was first working on the proposal, `go run` deliberately did not cache the built binary. That meant that `go tool` was much faster because it only had to do the check instead of re-running the `link` step. In Go 1.24 that was changed (both to support `go tool`, but also for some other work they are planning) so this advantage of `go tool` is not needed anymore.
Thanks for the explanation and contribution! Very much appreciated :-)
There's module graph pruning https://go.dev/ref/mod#graph-pruning
What solution would you propose?
At least a comment if not its own section.
Yeah, I'm still rather excited about it, but less-so given it /does/ impact the `go.mod` for consumers
I came to the comment section for this comment.
Dark background, too many colors, inconsistent spacing, inconsistent font-size and/or family, some links appear fully pink with pink underline, some links aren't pink and only have the underline, inline <code> is blue, but large code blocks are the same color as regular text – on black background, etc.
I had to stop reading unfortunately.
Sorry to hear that - are there any particular tweaks you think would work to reduce the impact? Is it, e.g., the blue used by code snippets? Or because there's also the diff syntax which has green/reds?
I'm not OP, but my brain also quickly noped out of reading that page. I can appreciate the care that went into the formatting of commands and links, but it's a bit much to parse all at once. I think monospaced/preformatted text usually looks best with a different background (like the dedicated code blocks towards the end.) Also on my browser the preformatted text is decently larger than the normal paragraph text. This combined with the blue color is a bit jarring.
Choosing colors is like making music. This color scheme feels discordant, like a jumble of loud notes. Maybe try looking at color palette creators online?
Thanks - this is based on the Srcery theme (https://srcery.sh/) but maybe needs some tweaks, as per some suggestions in the thread
My first thought was Monokai (the default theme in Sublime Text 3 and earlier)
IMO keep it simple applies here. The page linked below does pretty much everything your page does (minus code diffs) and is MUCH easier to read.
https://go.dev/doc/tutorial/getting-started#code
(not OP) some ideas: make the <pre>'s not go all the way out to the edge of the window. Make the diff colors and code colors less dramatically different from their surroundings. Increase the contrast of the default text. Use blue for links. Drop the orange.
> are there any particular tweaks you think would work to reduce the impact?
Yeah, remove all CSS.
I clicked on View -> Page Style -> No Style in Firefox and suddenly I could read it. Reader mode in Safari also worked. Reading mode in Chrome didn't work properly.
Are we just copying .NET now?
What’s wrong with copying from other projects if they’re indeed offering good ideas worth copying? You say it as if the Golang community MUST only ever have unique ideas no one else has ever thought of — something that’s increasingly rare and unlikely.
To be snide, .NET was copying Java, Java was copying C, etc etc etc.
I don't understand your comment when you should know copying proven features is a good thing.
go.mod and the golang tooling is a horror show. I absolutely LOVE the language, but dealing with the tooling is horrendous. I should blog about the specifics, but if you want a short version:
* not using posix args
* obscure incantations to run tests
* go.mod tooling is completely non-deterministic and hard to use, they should have just left the old-style vendor/ alone (which worked perfectly) and wrapped a git-submodules front-end on top for everyone who was afraid of submodules. Instead they reinvented this arcane new ecosystem.
If you want to rewrite the golang tooling, I'll consult on this for free.
MVS, the algorithm for dependency version selection, is deterministic: given the same inputs you will get the same outputs. Go has invested a lot of effort in creating reproducible builds through the entire toolchain.
https://research.swtch.com/vgo-mvs
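A tiny illustration of that determinism, with made-up module paths and versions: if your module's go.mod asks for example.com/lib v1.2.0 and one of your dependencies asks for v1.4.0, MVS always selects v1.4.0 (the highest of the stated minimums), never whatever happens to be "latest" that day:

    # module paths/versions are hypothetical
    go mod graph | grep example.com/lib
    # example.com/app example.com/lib@v1.2.0
    # example.com/other@v1.1.0 example.com/lib@v1.4.0
    go list -m example.com/lib
    # example.com/lib v1.4.0   <- the selected version, identical on every machine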
The _tooling_ is not reproducible. Take a non-trivial Go project with some number of dependencies: there should be a single list of the latest versions for the entire project. And exactly what Go commands do you run to generate that list? It's totally broken. This is why so many tools like go-mod-upgrade cropped up.
Everyone downvoting obviously doesn't understand the problem.
`go.mod` contains the dependency list and minimum version required
`go.sum` records checksums for the exact module versions in use, so they can't silently change (ensures reproducibility)
`go mod graph` will produce the dependency graph with resolved versions
`go list -deps ./...` will give you all packages used by a module or directory, depending on the args you provide
`go get -u ./...` will update all dependencies to their latest version
Here is a post about Go toolchain reproducibility and verification: https://go.dev/blog/rebuild
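To make the list above concrete, a few invocations that answer the "what commands generate that list?" question (output elided, and of course specific to your own module):

    go list -m all                    # the full set of module versions selected for this build
    go list -m -u all                 # the same list, annotated with available upgrades
    go get -u ./... && go mod tidy    # upgrade everything, then prune go.mod/go.sum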
You are being downvoted for being wrong and for talking about downvoting, which is called out as something not to do in the posting & commenting guidelines.
The versions in go.mod are an enforcement of the versions required by your dependencies and those your module requires. Asking for it to be reproducible from scratch is like deleting package.json in a node project and asking it to magic all your >= < version constraints out of thin air; it's impossible because you're deleting the source.
As someone who’s been using Go since 2013 at companies like Apple, Microsoft, and Uber, this all seems quite unnecessary.
That said, if it helps people do “their thing” in what they believe is an easier (more straightforward) way, then I welcome the new changes.
Given Go’s approach to “metaprogramming” has long relied on tools written in Go, this does seem like a feature gap that needed closing. Even the introduction of `go generate` long ago formalized this approach, but still left installing the tools as an exercise for the reader. You can’t have consistent code gen if you don’t have consistent tooling across a team/CI.
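For a concrete (if hypothetical) sketch of how the two features compose in Go 1.24, using stringer as the stand-in generator:

    # pin the generator; this adds a `tool` directive to go.mod
    go get -tool golang.org/x/tools/cmd/stringer

    # in a Go source file:
    #   //go:generate go tool stringer -type=Color

    # every collaborator and CI job now regenerates with the same pinned version
    go generate ./...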
> Even the introduction of `go generate` long ago formalized this approach
It did, but if you recall it came with a lot of "We have no idea why you need this" from Pike and friends. Which, of course, makes sense when you remember that they don't use the go toolchain inside Google. They use Google's toolchain, which already supports things like code generation and build dependency management in a far more elegant way. Had Go not transitioned to a community project, I expect we would have seen the same "We have no idea why you need this" from the Go project as that is another thing already handled by Google's tooling.
The parent's experience comes from companies of a similar size to Google, with similar kinds of tooling. His question comes not from a "why would you need this kind of feature?" in concept, but more of a "why would you not use the tooling you already have?" angle. And, to be fair, none of this is needed where you have better tooling, but the better tooling we know of tends to require entire teams to maintain it, which is unrealistic for individuals and small organizations. So, this is a pretty good half-measure to allow the rest of us to play the same game in a smaller way.
> if you recall it came with a lot of "We have no idea why you need this" from Pike and friends
The blog post and design document, both authored by Rob Pike at the time[0][1], contain none of that sentiment. The closest approach comes from the blog post, which states:
> Go generate does nothing that couldn’t be done with Make or some other build mechanism, but it comes with the go tool—no extra installation required—and fits nicely into the Go ecosystem.
This, taken alone, would seem to support “we have no idea why you need this,” until you read the hope from the design document:
> It is hoped, however, that it may replace many existing uses of make(1) in the Go repo at least.
These are not words of someone who doesn’t understand why users would need this.
Also, I am at a FAANG and my experience differs from the parent—`go tool` is sorely needed by my teams.
[0] https://go.dev/blog/generate
[1] https://go.googlesource.com/proposal/+/refs/heads/master/des...
I am strongly of the opinion both that this was already done much better in Bazel, and that the Go-native version seems clean, clear, and simple and should probably be adopted by pure Go shops.
The digraph problem of build tooling is hardly new, though the ability to checksum all of your build tools, executables, and intermediate outputs to assure consistency has only recently become feasible. Bazel is a heavy instrument, and making it work as well as it does was a hard problem even for Google. I don't know anyone else making the same investment, and I doubt it makes the slightest hint of sense for anyone outside the Fortune 500.