Maybe for your use cases that’s OK, but there are many situations where the size savings and ease of upgrading provided by shared libraries are worthwhile. For example, it would suck to have to push a 40+ GB binary to a fleet of systems with a poor or unreliable internet connection. You could try to mitigate this sort of thing by splitting the application up into microservices, but that adds complexity and isn’t always a viable tradeoff if maximizing compute efficiency is also a concern.
I’m not so sure that dynamic libraries always reduce size, especially with libraries that are linked by only a single binary.
With static libraries, you can conditionally compile only the features you’re actually going to use (sketched below). With dynamic libraries, however, the whole library has to be built with every feature included.
EDIT: just to clarify, I’m not saying that static libraries always result in a smaller size. I’m saying that it’s not a black-and-white issue.
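Here’s a minimal sketch of the conditional-compilation point. The names (`mylib_*`, `MYLIB_ENABLE_JSON`) are made up for illustration, not from any real library:

```c
/* mylib_demo.c - sketch of compile-time feature gating.
 * All names here are hypothetical examples. */
#include <stdio.h>

void mylib_core(void) {
    puts("core functionality, always compiled in");
}

#ifdef MYLIB_ENABLE_JSON
/* Pretend this drags in a large parsing dependency. If the flag is not
 * defined at build time, none of this code exists in the object file,
 * so it can never end up in a statically linked binary. */
void mylib_parse_json(const char *text) {
    printf("parsing: %s\n", text);
}
#endif

int main(void) {
    mylib_core();
#ifdef MYLIB_ENABLE_JSON
    mylib_parse_json("{\"answer\": 42}");
#endif
    return 0;
}
```

Build with `cc mylib_demo.c -o demo` to leave the feature out, or `cc -DMYLIB_ENABLE_JSON mylib_demo.c -o demo` to include it. When you build the library statically per application, you can leave unneeded features out entirely (and options like `-ffunction-sections -Wl,--gc-sections` or LTO can strip unreferenced code on top of that). A shared libmylib.so installed system-wide generally has to be built with every optional feature enabled, because the packager can’t know which features each consumer will need.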