From "life before main" to "common life in main"

Yes, but more automated and declarative. Also, what if we want to load the same dylib twice? (Is that even possible?)

Anyway, the mod attribute won't allow this, but a trait-based one would. Imagine: you have a trait describing everything the library has to offer, including types, statics(?), and functions, and you have a function that loads a library from a given path and returns a trait object exposing all of those items.

This would require:

  • extern types in traits;
  • associated statics;

You can't load a dylib twice. The dynamic linker merely increments a reference counter when asked to dlopen a dylib a second time and returns the same handle, so dlclose has to be called twice before the dylib gets unloaded (if the libc implementation supports unloading in the first place).

The declarations are not enough to link against Rust crates; you need the full crate metadata. You can't know every single thing that influences, say, symbol names (which can in principle be unique on every compilation; in practice they are more stable, but there are no guarantees). Also, for things like `fn foo() -> impl Trait` you need to know the function body to know the size of the returned value and which drop glue to run when dropping it.

Some dynamic linkers actually can, if you load it with dlmopen/LM_ID_NEWLM. That may require some care, though, and it is a pretty niche thing to do.

  • This is for full dynamic linking, with generics and existential types enabled. A limited variant that works at runtime (without shenanigans) is reified as the object safety rules. There is an ongoing effort to make some kinds of unsized return types object safe, but I think we will end up with some custom syntax to specify where to allocate an unsized return from a trait.
  • Generic module/trait members are most likely never going to work (without a .NET-like runtime) in fully dynamic cases.

IOW, we need a way to declare the set of items exported by a library. Given the above, are extern blocks the best fit?

To put things together:

On library consumer side:

```
// the path option is self-explanatory; for the crate option see below
#[library(path = "path/to/lib", crate = "crate")]
extern mod $name {
    // list of imported items
}
```

(Since in general we cannot load a library twice, we are not going to provide an object-like API, and thus there is no connection with traits at all; hence an extern mod.)

On library side:

Given that we aim for crate-level dynamic linking, we want to produce a linkable artifact from a crate.

Edit: I forgot to mention that we may want to add some additional metadata to our dylibs, especially identifying the crates they are compiled from.

  • We can go with a new crate type, where every item from lib.rs gets a symbol of the same name in the resulting artifact. In this case underlying mod declarations cannot be pub and are not named in the resulting library.
  • Another variant I imagine is allowing a crate to build a set of dynamic libraries, described under a lib subfolder in the Cargo project: there we place $name.rs files, each with an extern block containing all the items the crate exports through it.
    All these libraries are intended to be facades for a Cargo library project; the kind of code sharing between the artifacts depends on the crate type (lib, dylib, staticlib, etc.).
    If the crate type is dylib, for example, we'd end up with all the specified .so files (on Linux) residing in ./target and carrying a dynamic dependency on the crate's .so.
    The best part is that we still make no guarantees about the contents of the main crate's artifact.

How is that path specified? I hope it would be the DLL name (PE/Windows), library id (Mach-O/macOS), or SONAME (ELF/*nix), with the search path being more runtime-dependent (possibly with the help of information in the loading binary itself). Any full-path solution is likely to be unworkable given different distro layouts, how Windows tends to work, etc.

Given that platforms that provide such namespaces also seriously limit their number (16 on most distros), this is a seriously constrained resource that I'd expect to be left to applications, not libraries.

In case it's useful to anyone else, here's a PR I did to drop the dependency on inventory: tests: Start porting to inventory by cgwalters · Pull Request #2528 · ostreedev/ostree

One thing I think I saw somewhere: someone mentioned they had code that verified that everything starting with test_ was referenced in the hand-maintained static array, or possibly generated that array via build.rs.

Yes, the path to the dylib itself, relative to the executable's current directory, with no absolute path support (on Unix, use symlinks if the artifact is shared). Hence we use only relative paths; UNIX-style paths would fit all the needs (we can transform them into Windows ones for Windows targets at compile time).

The path of the executable itself or of getcwd()? If the latter, this is basically the folly of adding . to PATH on Windows. Please don't do that.

How do various runtime loaders handle $ORIGIN rpath references in the presence of these symlinks? Same with macOS and @loader_path/ library id references for that matter.

The ELF gABI section on substitution sequences says:

> If the name is ORIGIN, then the substitution sequence is replaced by the dynamic linker with the absolute pathname of the directory in which the object containing the substitution sequence originated. Moreover, *the pathname will contain no symbolic links or use of . or .. components*.

Emphasis mine, but at least it is specified what they should do (i.e., ORIGIN gets resolved to an absolute path, with symlinks resolved by e.g. realpath).
