Software Engineering
versioning dependencies semantic-versioning
Updated Mon, 13 Jun 2022 01:48:30 GMT

Why are minor versions of dependencies pinned, despite possibly having bugs?

I am an amateur developer and I deploy my (home-oriented) code to containers. This is usually Python and JavaScript.

In JavaScript, npm records the exact versions it installed in the lockfile, so a later npm install reproduces them. It is also possible (and recommended) to pin exact versions in Python via requirements.txt.
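For illustration, here is what the different pip version specifiers look like in a requirements.txt (the version numbers are just examples):

```
# Exact pin: pip installs this version and nothing else
requests==2.28.1

# Compatible-release: accepts patch updates within 2.28.x
requests~=2.28.1

# Range: accepts anything from 2.28.1 up to (excluding) 3.0
requests>=2.28.1,<3
```

Only the first form gives fully reproducible installs; the other two delegate the choice of version to whatever is newest at install time.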

This allows for strong reproducibility: you always know exactly what is built from what.

The drawback is that one ends up with outdated libraries. That may not be a problem when there are good tests (you may be relying on a "feature" that is actually a bug, but if it works and passes the tests, it may not matter), except that outdated versions may contain security vulnerabilities, and you will be unaware of them.

My question: why is the default to pin the exact version, rather than only the major one?

My understanding is that, depending on the library, "major" may mean different things, and "minor" versions may still break things even when the library is used according to the documentation (and not through a bypass, shortcut, or undocumented feature).
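Pinning only the major version is what npm's caret range (`^1.2.3`) expresses. A minimal Python sketch of that rule, assuming plain `MAJOR.MINOR.PATCH` versions and ignoring the special-case handling of `0.x` versions and pre-release tags:

```python
def parse(version: str) -> tuple[int, ...]:
    """Split 'MAJOR.MINOR.PATCH' into a tuple of ints for comparison."""
    return tuple(int(part) for part in version.split("."))

def caret_matches(base: str, candidate: str) -> bool:
    """True if `candidate` satisfies ^`base`: same major, and not older."""
    b, c = parse(base), parse(candidate)
    return c[0] == b[0] and c >= b

print(caret_matches("1.2.3", "1.9.0"))  # True: minor/patch bumps allowed
print(caret_matches("1.2.3", "2.0.0"))  # False: major bump excluded
print(caret_matches("1.2.3", "1.2.1"))  # False: older than the base
```

The point of the question is exactly that the `1.9.0` case is allowed automatically, even though nothing technically stops a maintainer from shipping a breaking change in it.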

On the other hand, if a library cannot commit to non-breaking changes for documented usage, it may not be a good library to begin with.


The trade-off being made here favors highly reproducible builds over always having the latest dependencies.

Why would you want highly reproducible builds? There are a lot of reasons.

You can't rely on the dependency's versioning. Although Semantic Versioning has rules, there's no guarantee that a third-party dependency follows them. Even when maintainers try, a mistake can introduce a breaking change into a minor or patch release of a semantically versioned package.

If you have a legacy system, your test coverage may be weak in some areas. You know exactly how a particular version of a dependency behaves and don't want anything else, since you may not easily detect a breaking change in behavior.

If you are operating in an environment where you need to keep a highly managed configuration, you want to account for the use of any modified version of a dependency, perform appropriate risk assessment, and update at a time of your choosing.

There are probably more cases as well. The short story is that if you're building a system in a professional context, there are more reasons to favor slightly more manual upgrading of dependencies than not. Also, if you're working in a professional context, you're probably using tools that tell you when your dependencies are out of date, or that monitor your dependencies for vulnerabilities and report on them. Such a report would trigger a review and perhaps a planned update, based on the value of the changes in the dependency.

It seems that pinning to the exact version by default, and making an upgrade an explicit choice, does the most good for the greatest number of people: every developer building commercial-grade software, and the users and customers they support.

Comments (1)

  • +0 – All good reasons, and I'd add that even if you do have good test coverage, there are right and wrong times for a system to require investigation or fixing. In the middle of a ten-step process not primarily related to that system is very much the wrong time, even if the issue is clear. — Jul 13, 2020 at 18:08