Can we make a Rusty effect system?

There has been a lot of discussion around adding some sort of effect system (a term I don't like), or asking for the features that would be provided by one. See:

Along with several alternatives:

Most of these proposals have stalled because they require substantive changes to Rust, often introducing new keywords. I would like to find a way to solve these problems without language-level changes, in a way that feels natural in Rust.

I have been thinking about this issue for a while, and I have what I think is a fairly good API and an unacceptably bad implementation. I am posting this because I hope someone here can figure out how to improve upon it.

The basic idea is to provide a way to supply an instance of a type implementing a trait without having to directly pass it as a parameter to each function between where it is created and where it is used.

As a motivating example, consider the idea of providing additional information on each log line, such as a UserId or a RequestId.

The log crate could define a trait it can make use of, such as LogContext, and then obtain the supplied instance:

// LogContext provides additional data you want displayed on each log line.
trait LogContext: Display {}

impl LogContext for String {}
//... etc

//...
fn log(&self, record: &Record) {
    //...
    let ctx = get_context!(LogContext);
    //...
}

Here the get_context! call returns a value which implements the requested trait, LogContext. Further up the stack, a function can supply the context:

fn process(request: Request) {
       provide_context!(LogContext, &request.id);
       //...
       do_stuff();
       //...
}

Now the logger can include the request id in the log output when it is invoked inside of do_stuff() or any function do_stuff calls, even though it is not passed as an argument.

This has a number of use cases beyond the obvious ones of tracing and logging:

fn example_cache() {
    //...
    let cache: &dyn Cache = get_context!(Cache).unwrap_or_else(default_cache);
    //...
}

fn example_alloc() {
    if let Some(custom_allocator) = get_context!(Allocator) {
        //...
    }
    //...
}

Because contexts provide references to items on the stack, they cannot outlive the object they come from. To support this, provide_context! can return a guard which removes the provided item when it is dropped. So in the example below, log calls inside of function_1 and function_2 would see the request id, but those in function_3 would not.

fn process(request: Request) {
    //...
    {
       let _guard = provide_context!(LogContext, &request.id); // dropped at end of this block
       function_1(&request);
       function_2();
    }
    function_3();
}

As an implementation, get_context!(SomeTrait) could expand to something like this (pseudocode):

{
    // Pseudocode: TypeId's internal u64 is not publicly accessible today,
    // so this lookup key would need compiler or library support.
    const KEY: u64 = TypeId::of::<dyn SomeTrait>().t;
    let context_map: &HashMap<u64, TraitObject, BuildNoHashHasher<u64>> = get_context_map!();
    let value: Option<&dyn SomeTrait> = context_map.get(&KEY).map(|v| {
        // Convert the stored raw trait object back into the requested type.
        let val: &dyn SomeTrait = unsafe { mem::transmute(*v) };
        val
    });
    value
}

Basically there is a hash map where the key is a constant and the hasher is a no-op, so the lookup can be done in a handful of CPU cycles. The rest is just type conversion. get_context_map!() could just use thread-local storage.
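
To make the shape of this concrete, here is a minimal sketch of what the thread-local store and the guard could look like. Everything here is hypothetical naming (CONTEXT, ContextGuard, provide_log_context, with_log_context), it is hand-specialized to LogContext rather than macro-generated, and the lifetime erasure is exactly the unsafe part a real implementation would have to make sound:

use std::any::{Any, TypeId};
use std::cell::RefCell;
use std::collections::HashMap;
use std::fmt::Display;

trait LogContext: Display {}
impl LogContext for String {}

thread_local! {
    // One context map per thread. Values are boxed raw trait-object
    // pointers, erased behind dyn Any so one map can hold many traits.
    static CONTEXT: RefCell<HashMap<TypeId, Box<dyn Any>>> =
        RefCell::new(HashMap::new());
}

// Guard returned by provide_log_context; dropping it restores
// whatever entry (if any) it shadowed.
struct ContextGuard {
    key: TypeId,
    previous: Option<Box<dyn Any>>,
}

impl Drop for ContextGuard {
    fn drop(&mut self) {
        CONTEXT.with(|map| {
            let mut map = map.borrow_mut();
            match self.previous.take() {
                Some(prev) => {
                    map.insert(self.key, prev);
                }
                None => {
                    map.remove(&self.key);
                }
            }
        });
    }
}

fn provide_log_context(value: &dyn LogContext) -> ContextGuard {
    // Erase the lifetime. This is the dangerous part: the guard must not
    // outlive `value`, which a real macro would have to enforce.
    let ptr: *const dyn LogContext = unsafe { std::mem::transmute(value) };
    let key = TypeId::of::<dyn LogContext>();
    let previous = CONTEXT.with(|map| map.borrow_mut().insert(key, Box::new(ptr)));
    ContextGuard { key, previous }
}

// Hand the context to a closure instead of returning a reference, so the
// borrow cannot escape the lookup. (Providing a new context from inside
// the closure would panic on the RefCell; a real version needs more care.)
fn with_log_context<R>(f: impl FnOnce(Option<&dyn LogContext>) -> R) -> R {
    CONTEXT.with(|map| {
        let map = map.borrow();
        let ctx = map
            .get(&TypeId::of::<dyn LogContext>())
            .and_then(|any| any.downcast_ref::<*const dyn LogContext>())
            // SAFETY: the guard removes or restores this entry before the
            // referent leaves the stack.
            .map(|ptr| unsafe { &**ptr });
        f(ctx)
    })
}

fn process(request_id: String) {
    let _guard = provide_log_context(&request_id);
    with_log_context(|ctx| {
        if let Some(ctx) = ctx {
            let ctx: &dyn Display = ctx; // upcast to the Display supertrait
            println!("[{}] handling request", ctx);
        }
    });
} // _guard drops here and removes the entry

Handing the context to a closure keeps the borrow from escaping the lookup; the real macro would presumably want something more ergonomic than that.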

The immediate problems with this are:

  • While LLVM has support for thread-local storage, it isn't guaranteed to be particularly fast and may have limitations on some architectures.
  • Using a HashMap is going to require an allocator.

The much bigger problem is how to deal with multiple threads. If a thread is spawned, it will necessarily have its own independent context and not inherit anything from the parent thread. However, with scoped threads, or when using libraries like Rayon, it makes sense to have worker threads use the context map from the caller; otherwise, as soon as an intermediate function did a .par_iter(), the feature would stop working. The same is true for multi-threaded event loops.
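
A minimal illustration of the problem, assuming Rayon and using a single thread-local value (REQUEST_ID) as a stand-in for the context map:

use std::cell::Cell;
use rayon::prelude::*;

thread_local! {
    // Stand-in for the per-thread context map.
    static REQUEST_ID: Cell<Option<u64>> = Cell::new(None);
}

fn process() {
    REQUEST_ID.with(|id| id.set(Some(42)));

    // The context is visible on the current thread...
    assert_eq!(REQUEST_ID.with(|id| id.get()), Some(42));

    // ...but Rayon runs these closures on its worker threads, whose
    // thread-locals were never populated, so the context silently vanishes.
    let seen: Vec<Option<u64>> = (0..4u32)
        .into_par_iter()
        .map(|_| REQUEST_ID.with(|id| id.get()))
        .collect();
    println!("{:?}", seen); // mostly (or even all) None
}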

One solution would be to make a shallow copy of the context map and assign it as the context for the task. This has a few downsides:

  • Even though it's just an array copy, it still adds overhead to having part of a job run on another thread, and this would be incurred even if the feature is not actually used.
  • It requires all the implementations passed to provide_context! to be both Send and Sync. This is enforceable, but not great.

Does anyone see a way around these limitations?


So, in the above, provide_context! would basically enter the effect LogContext? How would you generalize this to multiple effects?

provide_context!() lets you provide an implementation of a trait. There can be many implementations of many different traits; i.e., it's possible to have a LogContext and an Allocator at the same time.

It is not possible to have multiple implementations of the same trait at the same time. If you call provide_context!() with a trait that has already been provided, the implementation provided by the inner call will override the one provided by the outer call until its guard goes out of scope, at which point the outer one is restored.
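
For example, using the proposed macros (log_line is a hypothetical function that logs using the current LogContext):

fn outer() {
    let a = String::from("outer");
    let b = String::from("inner");

    let _g1 = provide_context!(LogContext, &a);
    log_line(); // sees "outer"
    {
        let _g2 = provide_context!(LogContext, &b);
        log_line(); // sees "inner": the inner guard shadows the outer value
    } // _g2 dropped here; the outer value is restored
    log_line(); // sees "outer" again
}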

This reminds me a bit of where I'd like to take illicit, although in an ideal world it would be statically typed with stack references passed down (the latter I'm working on; the former needs language support).

Re: multiple threads, the Send + Sync bounds on types argue for a separate fallback global type registry that can be added to. Unifying this with thread- or task-local APIs will require changes to the accessor APIs, I think. Still working it out.

It's probably worth nitpicking terms a bit here.

To me, and in my experience reading all the past proposals, "effect system" refers to some compile-time system, usually part of the type system, for the language to figure out whether certain types or functions have one or more "effects" based on what other types and functions they interact with. A typical hypothetical example is nopanic: the only way foo() can be nopanic is if foo() has no panic!()s and every function that foo() ever calls, even indirectly, has no panic!()s.

That's different from the problem of making a certain context object or service provider available to every function in a call stack. I'm not aware of a specific term for this, but wanting to solve this problem is typical of "Aspect-Oriented Programming". In fact, logging is the classic motivating example for AOP.

So at least to me, the OP sounds much more interested in AOP than in effect systems.

To the best of my knowledge, most exploration of AOP took place in Java and C#, most famously in Java's Spring framework. I think all the major implementations essentially boiled down to code generation or runtime reflection, so I'm not sure how much of that can be meaningfully applied to Rust.


Opinion time!

Effect systems and AOP are both fascinating ideas for programming language design, but I think it's extremely unlikely that either can be applied to the core Rust language in a useful way, i.e. beyond what 3rd party crates and tools can already do today without any dedicated language features.

Effect systems are fundamentally about abstracting over effects. Abstracting over X, Y, Z involves finding things that X, Y, Z all have in common, and separating them from the things that make X, Y, Z different from each other. I believe the obvious effect candidates for Rust simply don't have enough in common to be usefully abstracted. https://github.com/rust-lang/rfcs/pull/2492 is the RFC discussion which convinced me of this. But in other languages that have a non-trivial runtime and eschew low-level features, effect systems make a lot more sense, e.g. Purescript's effect system seems pretty reasonable to me.

AOP I'm less familiar with, so my opinion here is much weaker. From reading about Java/C# and talking to people who work in those languages (including the Spring AOP stuff), I've always gotten a very strong impression that, in practice, it's just more trouble than it's worth. Plus, some of these techniques rely on runtime reflection which an unmanaged language like Rust doesn't have (and will never have as universally as Java/C#/Python do). But Rust does have a pretty great macro system, so I suspect whatever AOP techniques are valuable in Rust could be done in a 3rd party crate (just as they're done with decorators in Python and annotations in Java).


This would probably be implicits, not effects.


Ah, I think "implicits" is the term I was looking for and not finding before. Weird that Java/C#/Python don't seem to have a term for this. But that hint allowed me to just now discover Scala implicits, which I'd never heard of before.

Now I should probably go find out why google autocomplete's first suggestion for "scala implicits" is "scala implicits are bad"... :thinking: (EDIT: initial impressions are that it's mostly Scala-specific issues, i.e. not relevant to the broader AOP/effect subject of this thread)


I believe Scala implicits are resolved using the lexical scope, and so do not rely on any kind of thread-local storage, although there are a lot of similarities.

I've also seen this used for dependency injection via the "ambient service pattern".


Interestingly, Sun has a patent on the idea of child threads "inheriting" these dependencies, which expired recently: https://patents.google.com/patent/US6820261B1/en


If the hash map were an Arc of a "persistent" hash map, such as the one from the im crate, then the clone needed to move from one thread to another would be very cheap. Instead of inserting, it would call update, which would be a bit slower but would mean the map could be immutable, and hence Sync. It would also mean that the added item would not need to be removed by the guard; rather, the old pointer to the map would be restored (which would be faster).
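
A rough sketch of that shape, assuming the im crate; the type-erased usize values and the with_provided helper are placeholders for what the macros would generate:

use std::any::TypeId;
use std::sync::Arc;

// A persistent map: update() returns a new map that shares structure with
// the old one, so "cloning" the whole context is just an Arc clone.
type ContextMap = im::HashMap<TypeId, usize /* type-erased pointer */>;

fn with_provided<R>(
    current: &mut Arc<ContextMap>,
    key: TypeId,
    erased: usize,
    f: impl FnOnce(&Arc<ContextMap>) -> R,
) -> R {
    let saved = Arc::clone(current);
    // The parent's map is untouched; we just point at a new version.
    *current = Arc::new(current.update(key, erased));
    let result = f(current);
    // "Dropping the guard" restores the old pointer instead of removing a key.
    *current = saved;
    result
}

Handing the map to a worker thread is then just an Arc::clone of the current map, and the parent's view is never mutated out from under it.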

I did a quick benchmark on my system. Retrieving an item from HashMap with a no-op hasher takes 2.0 ns; retrieving an item from im's HashMap takes 2.5 ns. So the added overhead is not too much. For comparison, Vec::new() takes 4.0 ns.

This is still not good enough for overriding the default Allocator: HashMap itself requires an Allocator, and adding 2.5 ns on top of a 4 ns call is a 62.5% increase, which will likely be a big deal for some applications. However, this approach mitigates most of the threading issues and gives acceptable performance for a lot of use cases.

How would you generalize this to multiple effects? :thinking:

First, I don't see how provide_context! and get_context! are meaningfully different from a global god object, in theory or practice. They are shared mutable state, for one thing, and that raises the question: how do you ensure that provide_context! is called before get_context!? What about data races? I doubt you'll be able to do it statically (with all that that implies). Further down the list of problems, provide_context! seems destined to cause heap allocations: not cost-free.

(To be fair to you, you did see these issues, and you solve the get-before-set problem by always returning an Option, the thread-safety issue with thread-locals, and the heap-allocation problem with a heap allocation. But those are some severe tradeoffs that take us well outside cost-free abstraction.)

Beyond the implementation difficulties, $dayjob is a large system that makes data flow implicit (because it's based on the Dagger2 "dependency injection" system). After years of soul-crushing refactoring jobs on this behemoth, I am skeptical of all such systems.

It can be nice to update a utility's dependencies without changing all of that utility's callers, but it can (does) cause problems. Unfortunately the problems only manifest at large scale, so describing the problems that a DI system like Dagger can cause using a small example is hard. (Though I might try in a blog post).

Here are properties I want out of these systems:

  • The compiler knows all of the variables that are used in the body of a function and does its normal borrow and ownership checking and monomorphization.
  • In addition to the compiler, all other code analysis that works on regular rust code keeps working on my code after I tack on "implicits". (In other words, it's only implicit in the unexpanded source code, not hidden from analysis).
  • All of the formal parameters that I use in the body of the function are lexically declared...
  • ...except for the single special case of eliding parameters that I only need because I'm passing them along to other function calls.

The main benefit here, as I see it, is that we want to be able to update our utility method's dependencies ("aspects") and only update the few places where the dependency needs to be resolved differently, not every call site.

What if we use a proc macro to rewrite our function signatures and calls with all the goop we don't care about? (Not sure if this is possible, but I'm interested in finding out.)

#[derive(Debug)]
struct Logger {}
impl Logger {
    fn log(&self, msg: &str) {
        println!("{}", msg);
    }
}

// #[inject] means approximately:
// For every function call in my body, for every parameter that I have not supplied lexically,
// create a new formal input parameter with the same type as the missing parameter,
// and pass the formal parameter through to the call site.
#[inject]
fn my_func(num: u32, logger: Logger) {
    logger.log(&format!("{}", num));
}

// with!(foo) {} means:
// "In this block, any function that takes a foo parameter which I haven't
// supplied gets this one."
fn main () {
    let logger = Logger {};
    
    with!(logger) {
        my_func(13u32);
    }
}

The macros in that example would expand like so:


// Right now, my_func doesn't change during expansion; all formal params
// are lexically specified.
fn my_func(num: u32, logger: Logger) {
    logger.log(&format!("{}", num));
}

fn main () {
    let logger = Logger {};
    // The with! block should expand to this:
    {
        my_func(13u32, logger);
    }
}

Then later, we find that we'd like to update logger.log to take a new parameter, but unfortunately logger.log is called in 13.3k places so just adding a new formal input parameter is not feasible. But, because we're injecting, we can do it:

trait LogContext : Debug {
}

#[derive(Debug)]
struct BarContext {}
impl LogContext for BarContext {}


#[derive(Debug)]
struct Logger {}
impl Logger {
    fn log<C: LogContext>(&self, msg: &str, ctx: C) {
        println!("[{:?}] {}", ctx, msg);
    }
}

// We will have to supply the context from somewhere; maybe we want to do it in main?
fn main () {
    let logger = Logger {};
    
    with!(logger, BarContext {}) {
        my_func(13u32);
    }
}


// Now, my_func needs to receive a LogContext as an input so it can pass it down,
// So #[inject] kicks in to expand my_func() to this:
// Also note how this causes my_func to be rewritten into a generic function.
fn my_func<C: LogContext>(num: u32, logger: Logger, __a: C) {
    logger.log(&format!("{}", num), __a);
}

// Note that my_func is unchanged pre-expansion ... we didn't have to update the logger.log call site.
#[inject]
fn my_func(num: u32, logger: Logger) {
    logger.log(&format!("{}", num));
}

The main problem I see with implementing this is that it's hard to make it work with traits. You'd have to recapitulate the entire trait-based constraint-satisfaction logic in the macro to find which item in the container satisfies the requirements of an injecting call site, and it's extremely easy to wind up with a container where two different values can satisfy a function's type constraints. (What happens if someone writes fn my_func<T: Debug>(debug: T) {}? Just about every type in the container will satisfy that.) Yet I don't think this system feels rusty unless it works with traits.


The trick with proc macros is that they only have syntactic information about what they wrap, and no information about the rest of the code. So they could do parameter injection, but they would have no idea which methods/functions they should inject into unless you syntactically mark them (at which point, you might as well just write the parameter(s)).

So if you want an implicits system, you need the compiler's help.

I'm curious, what do you think of Scala's implicits system? (I've not used it, so I have no opinion. I've just heard that it's overused and abused.)


@masonk I think the above is actually a lot closer to what you are looking for than you may have realized.

The provide_context! macro described above only provides the context in the current scope; it is removed when that scope ends. This means all the normal lifetime and borrowing rules apply. It also solves the "get-before-set" problem in the most obvious way: you have to explicitly provide the value higher up the stack, in the code path that leads to the call.

I do agree that this property is not obvious from the original syntax. You solve that more neatly with the with! syntax, which explicitly adds a scope; I like that better because it makes the property clear. Rewriting the above example to use that syntax:

fn process(request: Request) {
    with_context!(LogContext, &request.id) {
        //...
        do_stuff(); // Can get the request id.
        //...
    }
    do_stuff(); // Can't get the request id.
}

I think you highlight the main issue with attempting to do simple type-based matching.

That is why I explicitly avoided going the single-parameter route and the global-scope route. The caller explicitly passes not just the impl but the trait they want it associated with. So if code asks for an instance of Debug, it will only get an instance that was provided via with_context!(Debug, &foo), as opposed to everything that happens to implement Debug. This avoids having to re-invent the trait system, because the only thing that needs to be enforced is that you provided an instance of the trait you specified.

The nesting also helps here: there can be multiple layers where a value is overridden, with each outer value restored as the stack unwinds.

I’d be very interested to hear your thoughts about a trait-oriented interface for illicit, which has the scoping and stack behaviors you describe but currently indexes by TypeId.

@CAD97, thanks for the reality check. I can see how we're far from able to do this with proc macros. The semantics of this API require a whole-program understanding. In addition, I suppose macro expansion happens too early for the compiler to even expose that required understanding to the macro, so it's not like this can be added just by adding functionality to macros.

As for Scala implicits, I've never used them either. I tried to read about them last night, but they're big and confusing and seem to do multiple orthogonal things. The Scala BDFL is apparently reworking them in Scala 3, so it might be useful to read the lessons he learned from Scala 1 implicits. But he seems to think that implicits are Scala's answer to Rust's trait system and Haskell's Typeclass system, which really confuses me. In addition I note that you can't have implicit arguments added to your function calls unless you've imported those implicit arguments into scope. This seems to spoil the main advantage of implicits for me, which is, again, updating deps without touching a huge number of files.

@tkaitchuck, your system enforces the Rust guarantees at runtime, but not at compile time. In addition, the compiler lays out the stack, so if the compiler doesn't know about your data (the entire call stack in which it is passed down), then it's always going to be implemented using heap allocations.

By the way, I think your concession to say, "When someone asks for Debug, I specifically want them to have this one" is the right way to handle the trait difficulties.

@anp, I'll check it out when I get a chance!
