Static binary support in Rust

It's an approach that effectively uses the container as a packaging format, along with the runtime isolation and resource constraints offered by namespaces and cgroups that you wouldn't get if you were executing the binary directly on the host machine. It's the primary application deployment strategy for projects like CoreOS and Kubernetes. Everything is in a container and all the orchestration tools are built around manipulating containers and scheduling them across a cluster. I use a different Docker image to actually build the program and produce the binary. This means that the runtime container is much smaller because it doesn't contain an OS and all the build tools. The space savings between the two is quite significant when you start to have lots of applications in your cluster.
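
That build/runtime split can be sketched with a multi-stage Dockerfile (a feature added in later Docker releases; the image tag, target path, and binary name here are hypothetical):

```dockerfile
# Stage 1: full toolchain image, used only to produce the static binary
FROM rust:latest AS builder
WORKDIR /src
COPY . .
RUN cargo build --release --target x86_64-unknown-linux-musl

# Stage 2: runtime image containing nothing but the binary
FROM scratch
COPY --from=builder /src/target/x86_64-unknown-linux-musl/release/myapp /myapp
ENTRYPOINT ["/myapp"]
```

The `scratch` base is empty, so the final image is roughly the size of the binary itself, which is where the space savings come from.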

You’re talking space savings compared to putting the actually used dynamic libraries in there and nothing else, right?

That's actually not something I've seen commonly in the Docker ecosystem today. Most of the time people just use a slimmed-down OS image (commonly Debian) as the base. That base is technically unnecessary, and is kept either out of convenience, to simplify the build process, or out of pure ignorance. This silliness aside, I still think it's very worthwhile to pursue static binaries in Rust.

I’m bumping this again in the hopes of getting the attention of whomever has control of the build and release process for Rust. I’d really like to see a musl-targeting rustc binary for Linux as one of the install options on (Discussion moved from per Steve’s request.)

/cc @alexcrichton

You’re waiting for this issue I think - now llvm 3.7 has been released there’s no need to engage in svn checkout madness, musl rust can theoretically be set up as a first-class target on the buildbots. It’s also now even easier to build your own!

Unfortunately the rust buildbots are not fully documented so it’s a little tricky if you were to want to help - I understand it’s on a todo list.

Don't worry, we haven't forgotten about this! We're still working on getting the infrastructure in shape to get to the point where we can easily ship new targets and you can install them super easily as well.

In the meantime, though, you can either build MUSL from source (instructions) or I've also started playing around with Docker a lot recently and I've created a bunch of images which should have things like MUSL or Android installed and ready to go by default:

Onlookers: if you want to do it manually (rather than use the dockerfiles) I wouldn't look at the stable docs for now - nightly docs have been updated for llvm 3.7, which doesn't need svn checkouts, per my previous message.

I’m not very advanced in containers, but I thought they are ideal for just the opposite: requesting exactly the dynamic libraries your application wants to use, and having them baked into the container filesystem, isolated from the libraries of the host system. If done right, you can even get the read-only pages of library images shared in memory across containers, rather than replicating the entire userland stack linked into each application binary over and over, trashing the CPU caches with redundant copies of frequently used code.

Hello, first time posting here! Sorry if I'm necro-bumping, but I've been waiting for essentially the --target=x86_64-unknown-linux-musl option in the regular nightly with bated breath for some time now, so that I don't have to keep building my own Rust compiler from source.

I’ve noticed that the aforementioned issue was closed almost a month ago.

Is there any hope of this entering into nightly (or stable even)?

Thanks so much :smiley:

It is indeed available now! You can install a MUSL build of the Rust standard library via our cross-compiled standard library support, and then it should be as simple as cargo build --target x86_64-unknown-linux-musl!


There aren’t enough :heart_eyes: emoji to give you right now

Hello again, so I’ve almost switched my custom toolchain over completely to the new setup (which is amazing btw), but I’m having one last issue.

So the first thing that I noticed was that a static libc is gone, as you might use in

Correct me if I’m wrong, but it looks like you have replaced libc.a, libpthread, and libm with liblibc-<rusthash>.rlib (this is a great move w.r.t. static linking).

Second thing is that I’m using an unusually complicated link sequence, which is essentially:

  1. first compile the Rust code with a musl-equipped rustc and emit an object file
  2. link that object file, passing in special flags, to create a final Position Independent Executable (PIE); the relevant flag is -pie for ld

So, I’ve noticed that when using -pie with the new system, and custom linking against the liblibc wrapper, I get relocation errors related to -fPIC.

Here is a script which illustrates this point:


# assumes you have run ` --channel=nightly --with-target=x86_64-unknown-linux-musl`
# NOTE: the next two variable definitions were lost from the original post and
# are reconstructed here so the script runs end to end
DERP=deadbeef
SONAME=$DERP
RUSTHASH=$(ls $RUSTLIB/ | grep libstd | grep -oe "-[[:alnum:]]*" | grep -oe "[[:alnum:]]*") # yup you can make fun of me it's cool

cd /tmp
rm $DERP.rs 2> /dev/null

cat <<EOF >> $DERP.rs
pub extern fn deadbeef (){
  let deadbeef: u64 = 0xdeadbeef;
  println!("{:x}", deadbeef)
}
EOF

rustc --target=x86_64-unknown-linux-musl $DERP.rs -g --emit obj -o $DERP.o

# -pie causes link errors like:
# ld: /usr/local/lib/rustlib/x86_64-unknown-linux-musl/lib/liblibc-18402db3.rlib(sysconf.o): relocation R_X86_64_32S against `.rodata' can not be used when making a shared object; recompile with -fPIC
# /usr/local/lib/rustlib/x86_64-unknown-linux-musl/lib/liblibc-18402db3.rlib: error adding symbols: Bad value

ld --gc-sections -soname $SONAME -pie -o $SONAME $DERP.o "$RUSTLIB/libstd-$RUSTHASH.rlib" "$RUSTLIB/libcore-$RUSTHASH.rlib" "$RUSTLIB/librand-$RUSTHASH.rlib" "$RUSTLIB/liballoc-$RUSTHASH.rlib" "$RUSTLIB/libcollections-$RUSTHASH.rlib" "$RUSTLIB/librustc_unicode-$RUSTHASH.rlib" "$RUSTLIB/liballoc_system-$RUSTHASH.rlib" "$RUSTLIB/libcompiler-rt.a" "$RUSTLIB/liblibc-$RUSTHASH.rlib"

If you remove the -pie option, it links correctly, and you get an ET_EXEC binary; but I need an ET_DYN for a PIE…

Yeah we’ve actually just slurped up the MUSL libc.a and shoved it into liblibc, so that contains just the raw object files of the MUSL distribution.

The error you’re seeing does seem kinda weird? I wonder if we’re not compiling with -fPIC? Although I thought MUSL did that by default… You can see how we’re compiling MUSL here

MUSL should be compiled with -fPIC.

I have noticed that:

rustc --target=x86_64-unknown-linux-musl -C relocation-model=pic yields an ET_EXEC,

but rustc --target=x86_64-unknown-linux-gnu -C relocation-model=pic yields an ET_DYN (a PIE).

It might seem a little strange to be asking to create a completely static executable which is also a PIE, but to give context to my linking requirements: a dynamic linker needs to be a shared object (since it exports dlopen, etc.), and it is loaded at an arbitrary memory address by the kernel and executed starting from e_entry; those are exactly the requirements for a PIE.

Here is the actual script I currently use to create the dynamic linker:

Actually, everything can be replaced with the new rustc system, except the final link against libc.a, which baffles me, since liblibc.rlib should just wrap the libc objects that are generated…

Yeah unfortunately things like ET_EXEC and ET_DYN are beyond what I know, so I may not be of much help :frowning:

Ah sorry, I should have been more clear. ET_EXEC vs. ET_DYN is just the kind of binary recorded in the ELF header (the e_type field); if you run readelf -e or objdump -f you’ll see similar output.

So readelf -e /bin/ls will show the type as EXEC, whereas if you run it on a binary compiled with the standard Rust toolchain on Linux (i.e., just plain rustc), readelf -e main should report DYN.

A PIE is just a shared object (DLL, dynamic library, or ET_DYN) with an entry point, so it can be loaded anywhere in memory (and thus can take advantage of ASLR) and executed. For example, this is a requirement on the android platform now iirc.
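
If readelf isn’t handy, the e_type field can also be read directly: it’s a 16-bit little-endian value at byte offset 16 of the ELF header. A quick sketch with od:

```shell
# e_type lives at byte offset 16 of the ELF header, as a little-endian u16:
# 2 = ET_EXEC (fixed-position executable), 3 = ET_DYN (shared object / PIE)
od -An -tu2 -j16 -N2 /bin/sh
```

On distros that build their system binaries as PIEs this prints 3; on older ones, 2.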

So, grepping through both liblibc-<hash>.rlib and the libc.a from my own musl build: the rustlib has the problematic R_X86_64_32S relocation, but my libc.a does not.

After spending a bunch of time on this, I just realized that I originally had this problem a while back, and I added this line to the musl ./configure invocation in my script, which was adapted from the chapter on advanced linking and essentially mirrors your build setup:

CFLAGS=-fPIC ./configure --disable-shared --prefix=$PREFIX

I’ve just tested, and if I build musl without the CFLAGS=-fPIC environment variable given to configure (i.e., ./configure --disable-shared rather than CFLAGS=-fPIC ./configure --disable-shared), then it will have those problematic relocations.
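
The difference can be reproduced outside of musl entirely with a tiny C file (a sketch assuming a C compiler and binutils readelf are installed; the filenames are arbitrary):

```shell
cat > reloc_demo.c <<'EOF'
static const char msg[] = "hello";
const char *get(void) { return msg; }
EOF

# without PIC, the address of msg is embedded as an absolute 32-bit relocation
cc -fno-pic -c reloc_demo.c -o nopic.o
# with PIC, only PC-relative/GOT relocations are emitted, which are safe
# to use in shared objects and PIEs
cc -fPIC -c reloc_demo.c -o pic.o

# absolute relocations show up only in the non-PIC object
readelf -r nopic.o | grep -E 'R_X86_64_32S?' || true
readelf -r pic.o | grep -E 'R_X86_64_32S?' || true
```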

So, your initial intuition was correct!

Additionally, if you look at the musl Makefile, you’ll see that it passes -fPIC when building the shared object (which we disable):


but not to the static libc version :]

The above CFLAGS=-fPIC is an ugly hack, but it seems to work for me.

I believe you can also add this line in the musl Makefile (but it’s more invasive):


How is the liblibc-<hash>.rlib generated? I’d like to create the artifact with the -fPIC'd libc.a to verify it works correctly, etc.

Whoa, thanks for the investigation @m4b! Looks like -fPIC is indeed missing from our MUSL build, so it sounds like we should add it in.

How is the liblibc-<hash>.rlib generated?

This is generated by the compiler using LLVM's implementation of ar (used as a library). The way this works is to find libc.a on disk, open it up, and transfer all the object files within to the liblibc-<hash>.rlib archive.
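
That transfer can be sketched with command-line ar (the real implementation drives LLVM's ar as a library; the archive and member names below are dummies standing in for libc.a and its object files):

```shell
# stand-in members; ar doesn't care what the member contents are
printf 'o1' > a.o
printf 'o2' > b.o
ar crs libc_demo.a a.o b.o        # plays the role of MUSL's libc.a

# extract every member of the archive, then repack them into the
# rlib-style archive, mirroring what the compiler does
mkdir -p extracted && cd extracted
ar x ../libc_demo.a
ar crs ../liblibc-demo.rlib *.o
cd ..

ar t liblibc-demo.rlib            # lists the transferred members
```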

I just pushed a commit to enable -fPIC for the MUSL that we build, and I'll try to get that deployed today, so tomorrow's MUSL nightly should have this bundled in!

awesome awesome awesome :heart_eyes_cat:

keeping fingers crossed this resolves it

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.