Linking against different libraries than the module was compiled for is never a good idea. If you want to do a static build, you should compile everything that will go into it. Those __isinf and __rawmemchr and similar references are there (usually) because of macros and inline functions in glibc's headers, not because the source code mentions them. They will go away if you recompile against the correct headers.
Statically linking glibc is not permitted for non-(L)GPL code; and for GPL code you have no reason to do it, since the user can always recompile if their system has an ancient libc. The LGPL only allows non-(L)GPL code to link dynamically.
Full static binaries are definitely possible on Windows. Well, you still link dynamically against kernel32.dll and ntdll.dll, but those are how the Win32 API is defined. They should be considered system calls, not libraries.
However, on Windows, code must be compiled differently depending on whether it will be linked statically or dynamically. So you need to make or get appropriate builds of all your dependencies (many ship them, but some, e.g. Python, don't).
Python uses dlopen to load extension modules, and this causes serious problems in static builds due to the one definition rule. On ELF-based systems (like Linux) it will work provided that all modules are linked against the same dependencies. On Windows it won't work, ever: whenever you use any DLLs on Windows, you must use the dynamic C runtime.
And for a good reason, too. NSS (name service switch) is a security sensitive piece of code. Whenever it has a security issue fixed, any code that linked to it statically would have to be recompiled. By only permitting dynamic linking the distributions ensure that any fix applies to the whole system.
For the same reason, most distributions donât accept statically linked packages.
On Windows, "libc" is msvcrNN.dll, and that can be linked statically. kernel32.dll and ntdll.dll are the kernel interface.
As for macOS I don't know, but iOS does not support shared libraries at all.