RFC: macro functions

(I unfortunately don't have the bandwidth to actually put this through a full RFC for language inclusion, but this is a legitimate Request For Comments and proposal for improving the capability of Rust. This was written in one afternoon/evening on my smartphone; please forgive any formatting/grammar/procedural issues. A few sections have todos left to mark where I think detail is probably missing; please feel free to ask clarifying questions even/especially if they ask about the todo areas, as I'm just marking them as likely needing more clarification, not trying to defer the clarification.)

Summary

Add a new type of item, macro fn, which is defined and used almost exactly like a function but which allows deferring name resolution and type checking to instantiation-time like a macro. This allows Rust to express the benefit of C++-style duck-typed function templates where macro_rules! might otherwise be used while still providing a developer experience better than using C++ templates or Rust macro_rules! for the same task.

Motivation

Wait, why are we introducing template errors into Rust? Aren't parametric generics (like Rust traits), which provide pre-monomorphization checks and prevent (most) post-monomorphization errors, better? Didn't C++20 add concepts specifically to enable moving some template type checking forward?

Yes, but:

@Gankra_@twitter.com
say what you will, completely unprincipled trial-and-error specialization based on "did the emitted text compile?" does let you express some absolutely jacked up stuff that parametric systems balk at

@__phantomderp@twitter.com
Now, this might be blasphemy, but...

I kind of want both systems to exist in one language. Mostly because even templates have structure and checks to them.

The main Achilles Heel of parametric generics is that you have to be able to express the consumed API surface with the language provided by the trait system. Additionally, the fact that traits are nominally typed, requiring types to explicitly choose to implement them, while good for API evolution and ensuring semantic agreement, is still more restrictive than a structural system where any type that provides the desired API shape can be used.

Rust's trait system continues to get better over time — GATs and initial async in traits recently stabilized, and future improvements like the "keyword generics" initiative are in the pipeline — but fundamentally, traits will never be able to express the full gamut of what developers would like to express.

The point is not to introduce any sort of template metaprogramming to Rust, but instead to bridge the gap between generic functions and macro_rules!. Macro functions should improve code that would otherwise be using macro_rules! or just not get written.

It's also important to note that these are explicitly not post-monomorphization errors in the way those are typically discussed w.r.t. the current compiler. There are more details in the reference-level explanation, but in short: the errors are post-instantiation, as part of type checking the caller pre-monomorphization. This instantiation may itself produce a function which still needs further monomorphization if called from a monomorphic call site, or even require further instantiation and type inference if called from another macro function.

Guide-level explanation

(after discussing traits and generics)

Rust also offers a more powerful alternative to generic functions: macro functions. Defining a macro function uses the exact same syntax as defining a normal generic function, except that you add the macro keyword qualifier.

macro fn add<T: Add>(lhs: T, rhs: T) -> T::Output {
    lhs + rhs
}

If you compile this code, however, you will see that a warning is generated:

warning: macro function `add` does not need to be a macro
  |
L | macro fn add<T: Add>(lhs: T, rhs: T) -> T::Output {
  | ^^^^^    ^^^
  |
  = note: `#[warn(needless_macro)]` on by default

Because the signature of add fully specifies the signature of the function, it doesn't need to be a macro function, and the compiler is telling us that. What makes a macro function more powerful than a normal generic function is that you can omit some of the type information.

macro fn add<T: Add>(lhs: T, rhs: T) -> _ {

Replacing a type in a macro function's signature with _ means that the type is inferred. However, in this case we still get a warning:

warning: placeholder type `_` should be fully specified
  |
L | macro fn add<T: Add>(lhs: T, rhs: T) -> _ {
  |                                         ^
  |
  = help: replace with the inferred return type: `<T as Add>::Output`
  = note: `#[warn(needless_signature_inference)]` on by default

because in this case the output type is fully constrained by types that we did specify, and as such the compiler is telling us what the inferred type is so that we can specify it. This same diagnostic works with normal functions as well, although there it's an error.
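Applying both suggestions yields an ordinary generic function, which already compiles today without the macro keyword — a minimal sketch of the warning's endpoint:

```rust
use std::ops::Add;

// The fully specified signature needs no `macro` qualifier; this is
// valid stable Rust and is exactly what the diagnostics steer toward.
fn add<T: Add>(lhs: T, rhs: T) -> T::Output {
    lhs + rhs
}

fn main() {
    assert_eq!(add(1, 2), 3);
    assert_eq!(add(1.5, 0.5), 2.0);
}
```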

If we also allow the input types to be inferred, we get a proper warning-free macro function.

macro fn add(lhs: _, rhs: _) -> _ {

In fact, this has made our function more general, as now we can use it to add two different types.
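For comparison, some of that extra generality is already expressible parametrically today by using two independent type parameters, though the Add bound must still be written out (a sketch of the trait-based approximation, not part of the proposal):

```rust
use std::ops::Add;

// Two type parameters allow mixed-type addition wherever a
// heterogeneous `Add` impl exists -- but unlike the inferred macro fn,
// the required API surface must be named in the signature.
fn add<L: Add<R>, R>(lhs: L, rhs: R) -> L::Output {
    lhs + rhs
}

fn main() {
    // `impl Add<&str> for String` exists, so mixed-type addition works.
    assert_eq!(add(String::from("foo"), "bar"), "foobar");
    assert_eq!(add(1u8, 2u8), 3u8);
}
```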

However, using type inference with macro functions isn't without a cost. Because the argument types are inferred from the use site, the macro function can't be type checked on its own. Additionally, when calling add with arguments which don't provide the required API, the error message can no longer just reference the signature and instead needs to show an instantiation trace, making the errors much noisier.

With a generic function, the error can be reasonably clear:

error: cannot add `&str` to `&str`
  |
L |     add("", "");
  |     ^^^ no implementation for `&str + &str`
  |
  = help: the trait `Add` is not implemented for `&str`
note: required by a bound in `add`
  |
L | fn add<T: Add>(lhs: T, rhs: T) -> T::Output {
  |    ^^^ required by this bound in `add`

but gets a bit more difficult with a macro function:

error: cannot call `add` with the arguments `&str, &str`
  |
L |     add("", "");
  |     ^^^ invalid instantiation
  |
note: in `add`, cannot add `&str` to `&str`
  |
L | macro fn add(lhs: _, rhs: _) -> _ {
  |                   ^       ^ inferred to be `&str`
  |                   inferred to be `&str`
L |     lhs + rhs
  |     --- ^ --- &str
  |     |   |
  |     |   `+` cannot be used to concatenate two `&str` strings
  |     &str
  |
  = note: string concatenation requires an owned `String` on the left
help: create an owned `String` from a string reference
  |
L |     lhs.to_owned() + rhs
  |         ++++++++++

This is a fundamental limitation of duck typing in this way; a signature serves as a source of truth as to both what a function caller needs to provide and what the function body can rely on. With macro functions, that source of truth doesn't exist, and it could be either the caller or the function body at fault.

Because of this, normal generic functions are preferred when they can be used. This is why the compiler is eagerly emitting warnings when it can tell inference with a macro function isn't necessary. The type signatures of a function aren't just for the compiler; they're also for the humans writing and maintaining the code. The type signature serves as an important abstraction barrier, allowing the function's caller to not care about the function body, and allowing the function body to change without worry of breaking callers. Using a macro function abandons those benefits.

So when should you use macro functions, then? The primary use case is private helper functions, similar to using closures for the same purpose, but usable in more places due to being a full namespace item rather than a value tied to an ownership scope. They're useful any time that expressing the required functionality as a trait bound is more trouble than it's worth, and you might otherwise reach for an expression macro_rules! macro.

Compared to macro_rules! macros (discussed next), macro functions have the benefit of being used like any other normal function. A bang! macro can accept an arbitrary input syntax and expand to arbitrary code, evaluating its "argument" expressions in whatever order, potentially zero or multiple times, as well as being able to potentially return or break from its calling scope. It's also possible that a bang! macro doesn't represent an expression at all, but expands to statements or items inlined into the calling namespace.
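The multiple-evaluation point can be seen with a tiny macro_rules! example (valid today; the macro name is made up for illustration):

```rust
// A bang! macro may expand its argument expression more than once,
// something a (macro) function call can never do.
macro_rules! twice {
    ($e:expr) => { $e + $e };
}

fn main() {
    let mut calls = 0;
    // The block argument is evaluated twice: first yielding 1, then 2.
    let sum = twice!({ calls += 1; calls });
    assert_eq!(sum, 3);
    assert_eq!(calls, 2);
}
```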

Macro functions, on the other hand, always behave like functions. If instantiation succeeds, calling the function is semantically exactly equivalent to calling an instantiation of any other generic function, providing all of the same predictability resulting from the structured interprocedural control flow. This includes using macro functions as values (e.g. fn() or impl Fn()) in combinators, which can't easily[1] be done with functionlike bang! macros. Additionally, if some argument types aren't left to be inferred, then they're still type checked as much as possible, where caller-inferred types aren't involved.

Power-user details

Some extra user-facing details to highlight are:

  • Inferred argument types must be known before calling a macro function, as when calling inherent methods, and may not rely on constraints back-propagating from lexically later uses.
  • Inferred return types must be fully concretized by the function body and may not be generic post-instantiation. If a generic return type is desired, it must be a named generic.
  • A named generic can also have its interface inferred (including access to inherent items) by bounding it with an inference placeholder, e.g. fn add<T: _>(lhs: T, rhs: T) -> T. (Opt-out default bounds still apply.)
    • If such is used as a return type, it must either also be fully inferable from the argument types or specified by the caller via turbofish.
  • Macro functions are instantiated on an as-needed basis, so can be significantly easier on the compiler than macro-generated trait implementations across a combinatorial set where only a small number of combinations are actually used.
  • Macro functions act like generic functions w.r.t. duplication of the function body; namely, there's no guarantee that two function pointers to the same concretization are actually equivalent, but like generic functions and unlike macro_rules! the compiler tries to avoid unnecessary duplication before inlining. This means using macro functions instead of an equivalent macro_rules! can reduce the amount of IR that the compiler middle- and back-end have to process, making compilation quicker.
  • Macro functions can be used as methods on concrete types, but not in traits.

Reference-level explanation

(todo: more details)

Name resolution

Name resolution for parametrically bound generic types and any types determined from them is done in the defining scope, equivalently to non-macro generic functions. Name resolution for named types/items not derived from generic types is done in the defining scope. Name resolution for _ inferred types and all types derived partially from them is done exclusively in the calling scope's namespace. If the function would like to introduce traits not necessarily in scope in the calling context, it can `use` (import) them in the function body. (The use path is resolved in the defining namespace scope.)

As they are items not associated to types, macros are resolved in the defining scope and expanded with the tokens they received there. (Notably, if a generic inferred type name is used, the macro sees the type name token, and not the name of the instantiating type.)

(todo: more details)

Implementation concerns

Name mangling of macro function monomorphizations directly matches that of generic functions, as they are likewise uniquely† identified by their instantiation types (modulo duplication across compilation units, which applies equally to generics and so is already handled by a disambiguator).

Due to how instantiation of macro functions interleaves with type checking of the caller, this relies on the compiler architecture being able to do macro function instantiation during (and blocking) type inference of the calling scope, rather than relying on the previously name-resolved signature. This doesn't require new capabilities from the compiler, as it's no worse than the macro function's body being (hygienically) inlined, but it's worth noting.

What may require new functionality from a compiler is actually handling instantiations as items of their own. (However, this should be no more difficult than doing the same for C++ template instantiations, so function instantiation unification is something compilers are familiar with doing.) As multiple instantiations of the same macro function with the same (potentially still generic) type arguments should be treated as the same item†, the compiler needs to be able to instantiate each one only once (per compilation unit). Because generic function monomorphizations are allowed to be (and in practice are) duplicated even within the same crate, it is technically legal to instantiate macro functions uniquely for each caller, just like with generic functions, but this is generally a bad idea, and a key benefit of macro functions over macro_rules! is not doing so unnecessarily.

Additionally, macro functions potentially interact with identifier/macro hygiene in a new way, since some identifiers are resolved in the definition scope and some in the callers'.

† Name resolution causes an important caveat to the two claims annotated above: just the instantiation types alone aren't enough to uniquely identify a macro function instantiation, as e.g. x.fmt(…) may resolve to different functions based on traits in scope and even visibility of inherent items if x has an inferred bound. This needs to be resolved in some manner, likely by coalescing name resolution information into a disambiguator hash.

As with the example in the guide-level section, errors resulting from instantiation should display the context stack of what caused the error. However, any suggestions in the stack should not be considered machine-applicable, as they may break other instantiations. Suggestions for changing code outside the current workspace should be suppressed.

Variants, alternatives, and extensions

  • The closest current alternative is using a closure calling a functionlike! macro. This has the advantage of already being stable, and matches the post-expansion semantics very closely, but type inference is impeded by the unannotated closure arguments, whereas a major point of macro functions is to provide access to inherent name resolution.
    • If a typeof!($expr) built-in were to be provided to surface-level code (even if perma-unstable), this could be prototyped decently well externally by a macro wrapping the IIFE pattern. A proc macro doing simple syntactic dataflow analysis could even provide some of the partial pre-instantiation type checking available to macro functions.
  • Inference of the return type is deliberately very restricted to limit the scope of the newly introduced inference. This may be reasonable to relax.
  • No semantics have been provided for what a T: Trait + _ bound means. There are multiple options, each with some merit:
    • forbid this, and say types must either be exclusively inferred or completely parametrically bound;
    • allow this and apply the parametric bound as a requirement on the caller pre-instantiation like other generic bounds, name resolving and type checking uses of the trait at the declaration site, preventing trait items from being shadowed by later instantiation; or
    • allow this and apply the parametric bound as a requirement, but defer name resolution and type checking until after instantiation, allowing trait items to be shadowed by inherent ones.
  • Generalizing over function qualifiers (e.g. const) is not provided for beyond what's available parametrically. A notable feature of C++ templates not provided by macro functions is that constexpr can be used on templated functions with it being dropped from the instantiation if it does something non-constexpr.
    • This may result in a funny state where it's possible to be const-generic over parametric functionality with ~const but not over inferred functionality. Ideally, whatever solution applies to trait bounds should also work for the _ inference type/bound.
  • Macro functions in traits are conservatively not allowed, but could be allowed fairly easily (although obviously not object safe).
  • It may be desirable to restrict macro functions from being (reachable) pub, due to the significant semver hazard and crate-localized inference being more agreeable than cross-crate. A warning or perhaps even deny lint certainly seems reasonable (machine-applicable fix: pub(crate)), but even a hard error can be justified.

Importantly, all semantics of macro functions should be limited to _-bound types, such that their existence does not restrict the evolution of standard parametric generic functions. Notably, this means that macro functions should not allow for variable argument counts until parameter packs for generic functions are decided on. Depending on the semantics decided on for generic function parameter packs (assuming they happen), macro functions may not even need to extend parameter packs beyond the obvious of allowing an _ inference bound to be used. The same goes for optional/default arguments and most other extensions to function syntax that bang! macros are sometimes used to emulate.

Adding macro fn raises the question of other items, like macro impl or macro struct. This RFC does not consider those beneficial like macro fn. Importantly, macro fn is still primarily a repackaging of functionlike! macros which can offer some extra niceties due to being more regular, whereas macro impl or macro struct would significantly change what can be expressed, and as such run a much larger risk of significantly degrading the average developer experience of using Rust.

Drawbacks and mitigations

All of the concerns around clarity of post-monomorphization errors apply. This RFC attempts to specify macro function instantiation such that most of the concerns are mitigated, but the most visible problems with C++-style template errors are fundamental and apply just as much to macro functions:

  • Errors caused by instantiation have to provide a stack of "in the expansion of" for each level of nesting in order to communicate what the error is. This is partially mitigated by the availability of, and encouragement to use, traits and generics instead, as well as the ability for macro functions to be partially parametric, with that part checked at the definition. Additionally, we can elide intermediate frames (similarly to how overly large types are handled, putting them in a details file) as likely irrelevant; this is more likely to be true than with C++ templates, since macro functions deliberately have no equivalent of SFINAE-style overload selection.
  • Instantiating a template results in a potentially significant chunk of code getting type checked, and potentially multiple cascading errors. This is unfortunately just an exercise in building good diagnostics which provide the correct amount of context without being overwhelming. The only mitigation Rust provides is the same encouragement to prefer parametric API wherever possible.
  • It's not clear from the signature that gets included in documentation what the requirements are to call a macro function. This is a fundamental property, but at least macro functions are slightly better than functionlike! macros in this regard.

The analogy to C++ templates opens the path to more requests to support C++ template/concept/constexpr if style metaprogramming. This RFC explicitly does not advocate for such—in fact, explicitly advocates for making more patterns which want it expressible parametrically instead of cfg and macros—but it certainly moves in the direction of that looking reasonable.

Macro functions introduce a new logical step to compiling (macro function "instantiation") which has to be implemented and learned alongside the other transformation steps (e.g. macro "expansion" and generic "monomorphization"). It's unclear how differentiated this step really is; e.g. the process of concretizing/monomorphizing generics is also sometimes referred to as instantiation.

Overuse of macro functions could significantly adversely impact type checking times, as it becomes significantly easier to end up with inference through a lot of code. The restrictions on inference flow through macro functions are intended to help mitigate this risk. Additionally, it should be possible to provide a machine-applicable refactoring to turn a macro function into a nonmacro (potentially generic) function if it's known to only be used from a single call site. (This should probably be clippy's or rust-analyzer's domain rather than rustc's.)

One important nonobvious non-drawback to highlight: it is in fact possible to wrap around / abstract away a macro function with a normal generic function, just the same as you can do for a functionlike! expression macro. If you can satisfy the inferred API of the macro function via trait bounds, then you can use the macro function from a nonmacro function, and the instantiation will use the API you've made available.


(Wow I wrote more than expected. This is significantly lagging the Android Chrome text editor, even.)


  1. It is possible to turn a functionlike expression macro into a proper function value by wrapping it in a closure, e.g. |a, b| m!(a, b).


... @scottmcm how/why did you heart the version of this I accidentally posted early (and quickly deleted) while drafting

Well it showed up for me, so I looked at it. And in the abstract at least I like the idea -- like I did when you mentioned something at least similar a year ago: Blog post: View types for Rust - #35 by CAD97


Why would one prefer

macro fn add(lhs: _, rhs: _) -> _ {
    lhs + rhs
}

over

macro add($lhs:expr, $rhs:expr) {
    $lhs + $rhs
}

? The only notable difference I see is that one requires ! to invoke while the other doesn't. Errors are a secondary thing that could be handled for either situation. Sure, macros could do weird things like return/break or expand to an unexpected AST item, but is that truly an issue? A properly written macro does what one would expect. I don't think I've ever seen a macro that has truly unexpected behavior.


Personally I think the biggest problem with macro_rules! macros is that it's a whole second language you have to be able to read and write. Having a way to write "duck typed" code that doesn't require figuring out what a tt is, or the nuances of allowed repetitions could potentially be a big win for those who don't yet fully understand the macro system.


I'm not sure about calling them macro fns though, that seems fairly confusing with the macros 2.0 declaration syntax of macro


Macros can do anything. They don't even need to expand to blocks, or statements, or expressions, and it's used all the time in the real world. Worse, the semantics of macros are such that there is literally nothing you could say about them pre-instantiation. You can't expect them to produce valid Rust code (even as a syntactic subset). You can't do absolutely any name resolution. You can't even do formatting! They are hard to write, hard to read, hard to maintain, and usually way too powerful for what one really needs to do. They violate hygiene of unsafe and async, they can contain hidden control flow, or inject types and impls into your context.

Imho Rust really needs something less powerful and more sane than macros, but less restrictive than actual functions.

That's cursed and I like it! I've been ruminating on similar ideas recently, although I want to have a less powerful version (no duck-typing in general, but allow non-local control flow and inter-procedural compiler analysis, primarily borrow checking).

However, introducing duck-typing into Rust is really scary stuff. We are all well familiar with the hell that template errors can become. Saying "don't use it" won't help much, if macro functions are way easier to use than proper generics.

I don't think it's reasonable. Macros can already be pub, and they carry much greater hazards. If you think about macro fn not as a semver-breaking function but as a more restrictive macro, then why wouldn't you want them to be your public API, instead of macros?

Also, some applications I have in mind would make no sense with this restriction. Specifically, macro functions could be used to implement &/&mut getters in a way which doesn't run afoul of the borrow checker on multiple borrows (since the caller would "know" when disjoint fields are borrowed).

One downside of this design which isn't mentioned is that there is no way to do any separate compilation for macro functions. Until the parameters are inferred from context, there is literally nothing you could do with the body. This is a problem if macro functions become widely used (and they likely will be, if they're made useful). In particular, nested macro functions will likely be common, and a common source of problems. Personally I would want to restrict macro functions in a way which would allow at least partial separate compilation.


Just because they can doesn't mean they do. Hence my remark about having never seen a macro with truly unexpected behavior. If a macro does something unexpected, that's likely a bug with the macro.

Name resolution is part of the long-unstable decl_macros. Some macros can already be formatted, namely those that use Rust-like syntax. The example I provided is almost certainly able to be handled by rustfmt. Hygiene is a solvable issue — unsafe macro and async macro are logical extensions to the decl_macros feature.

Ultimately I believe that authors are able to exercise restraint in what they're doing with macros. They already are, generally speaking.


I don’t think I have that many macro_rules! with only one branch and no repetitions…except for those that generate trait impls. Which, admittedly, I would appreciate having a language feature for (“crate-scoped blanket impls”?), but that’s off-topic. I’m fully prepared to say this is a failure of imagination on my part, but what specific well-known macros that exist today could even be written as macro fn?


What is an example of something you will never be able to do with traits, but you can do with macro fn?


While I like the general idea, this seems to overlap with macros 2.0.

Personally, I like the circle-lang design for this which is the best of both worlds between this design and the current macro_rules! one. In particular, macros 2.0 should be imo regular compile time functions with Rust code, simply using a few predefined types and APIs (E.g that represent syntax elements) with the additional effect that the return value is emplaced into the target source code.


One very minor inconsistency is trailing commas, as in add(1, 2,): the macro version would need an explicit $(,)? to allow one. This also affects rustfmt, which doesn't really deal with multiline expressions in macros properly:

r#try!(aoudoreid
    .abreodapruedo()
    .arduerocaduearc()
    .adariodiureoadiro()
    .eadriabiraa());
fn_try(
    aoudoreid
        .abreodapruedo()
        .arduerocaduearc()
        .adariodiureoadiro()
        .eadriabiraa(),
);

A second language that is already stable Rust. Adding a new, overlapping feature may be too many features...

... I have of course run into a number of cases where code can be copy+pasted/macro-repeated but not pushed into a function due to differing types (and lack of existing traits covering required operations). This proposal at least appears reasonably simple to understand and use, but like with macros appears to rely heavily on documentation for public macro fns.


Generic functions are still functions, which may be used as function pointers, and work with impl Fn callbacks. They can be methods and participate in type-based dispatch (but I still want method-like macros).


Borrow checker transparency. The simple

fn foo_mut(&mut self) -> &mut Foo { &mut self.foo }
fn bar_mut(&mut self) -> &mut Bar { &mut self.bar }

functions don't use any traits, but have an issue: you can't call both of them at the same time, since the first call mutably borrows all of self. At the same time, using both &mut self.foo and &mut self.bar simultaneously is perfectly fine, because the borrow checker knows that the fields are disjoint. A macro function could pass that information to the call site, without explicitly exposing fields.
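This borrow-checker transparency can already be demonstrated with macro_rules! getters, since they expand at the call site (a sketch; the Thing struct and macro names are made up, and unlike a macro fn this exposes the fields to the caller's scope):

```rust
struct Thing {
    foo: String,
    bar: String,
}

// Because the macros expand at the call site, the borrow checker sees
// the disjoint field accesses directly -- the transparency a macro fn
// would give, minus privacy and method syntax.
macro_rules! foo_mut { ($s:expr) => { &mut $s.foo } }
macro_rules! bar_mut { ($s:expr) => { &mut $s.bar } }

fn main() {
    let mut t = Thing { foo: String::new(), bar: String::new() };
    // Both mutable borrows are live at once; `&mut self` getter
    // methods would be rejected here.
    let f = foo_mut!(t);
    let b = bar_mut!(t);
    f.push_str("foo");
    b.push_str("bar");
    assert_eq!(t.foo, "foo");
    assert_eq!(t.bar, "bar");
}
```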

Is there any design of macros 2.0 anywhere? The tracking issue doesn't contain anything which could be called "design of new macro system", and afaik it doesn't exist anywhere. It's just a bag of internal tricks in the compiler.

Regardless, the use cases are overlapping but different. Macro functions are not expected to cover all possible uses of macros. They can't use custom syntax, they can't generate types or patterns or arbitrary code, but on the other hand they give stronger semantic guarantees than macros.

I don't know what's "unexpected" to you. I'd say recursive macros do something extremely complicated and hard to analyze, even if one can claim they do "expected" things. For example, in the field projection RFC it was mentioned that one of the motivations is that the declarative macro in the pin-project-lite crate is very unreadable.

I'd say quote! macro counts as surprising. At the surface layer it's Rust syntax, but it really isn't, it's just a sequence of tokens. The bodies of branches are simple, but the macro as a whole is absolutely not.

Even if you can claim that absolutely most macro transcribers are really valid Rust (which may well be true), it doesn't help you much in practice. Macros don't respect normal symbol visibility. Looking at the macro definitions, you have no idea what the symbols will resolve to. They will be resolved in the caller's context, and that may be a feature. One may claim that all types mentioned in macros should be fully qualified and all method calls should instead use the explicit UFCS syntax, but that's not how most macros are usually written. And again, it doesn't tell you much, because even "absolute paths" in macros may in practice resolve to anything, depending on which crate was imported with the given root name.

From a static analysis standpoint, there is literally nothing you can claim about macros. You can try to make educated guesses, which will be right more or less often depending on the boldness of your guesses and on your empiric data, but there's nothing you can actually prove.

Would be, if we were designing macros from scratch. But we aren't, and there is too much code that will be broken if we try to retrofit it onto an existing system. We can add them to macros 2.0, but at this point it's a wild guess whether macros 2.0 or macro function would be implemented and stabilized first.


One thing which I would want from macro functions is proper interaction with name resolution and symbol privacy. Currently Pin has to expose a perma-unstable public field, which exists only because pin! needs to access it. Now, pin! specifically likely can't be a macro fn, but there are other macros which similarly expose private functionality which should never be touched. A macro function should be evaluated and privacy-checked in the context of its definition rather than its usage, as much as possible. That would make it possible to provide crate-level encapsulation coupled with the extra flexibility of duck-typed interfaces.

In fact, in my opinion it's wrong to think about macro functions as duck-typed. They are as statically typed as any other functions. Instead, they jettison the rule that type inference cannot be affected by the function's body and that all types must be explicitly declared in the signature.

6 Likes

Is this allowed with macro fns? I was under the impression that they behave the same with respect to the borrow checker.

I don't remember seeing it in the OP, but why wouldn't they? If we can infer types from context and internal usage, why shouldn't we allow the same for lifetimes? This is a feature I would very much want to see, because it addresses all the major motivations of the view-types proposals in an entirely transparent and cognitively lightweight way. The major motivations for view types are disjoint mutable getters and extracting common code into private functions, both of which cause borrow-checker trouble due to the lack of signature transparency.


The syntax in the RFC doesn't seem to allow non-linear constraints on the inferred types. E.g. I can't write fn add(x: _, y: _) -> _ such that the types of x and y must be the same, even if the body requires it. Similarly, I didn't notice a way to put a trait bound on _, even if I expect a certain specific bound to always hold. A trait impl can carry complex semantic API guarantees, so being able to demand that a specific trait is implemented is an API win, even if the details of that type or its inherent methods are inferred from context.

I propose a different syntax (bikeshed warning): declare the inferrable generic parameters as macro type T. E.g. the add example above could look like

```rust
macro fn add_and_frobnicate<macro type T>(x: T, y: T) -> T
where
    T: Add<Output = T>,
{
    if y.is_doodad() {
        x.frobnicate();
    }
    x + y
}
```

We definitely know that we want to add two instances of T, so we encode that in the API, but there are also the open-ended frobnicate and is_doodad calls, which may be hard to encode as a trait for whatever reason (they depend on borrow-checker transparency, or we don't control the implementing types, or it would violate the orphan rules, or introducing the traits would mean a lot of boilerplate for a single generic function call, etc.). We declare the function as macro fn to opt into signature-transparency semantics, but then we also explicitly annotate each generic parameter which should be inferred in a signature-transparent way. Explicit trait bounds can be checked without looking at the function body. If we add a second type parameter R which isn't a macro type, it becomes an ordinary generic parameter which can be used only where its trait bounds allow.

Alternative: just as type generics are declared T rather than type T, we could shorten the above to macro T instead of macro type T. Similarly, we can imagine macro const N, which could e.g. infer the bounds on N, or even its type, but that's out of scope here. Perhaps we should annotate borrow-checker-transparent borrows with macro 'lifetime?

8 Likes

this is actually a nice idea imo

I'm very skeptical of this. I think the complexity of macro_rules is actually a feature, because it pushes people away from using them unless they really have to. Using traits properly is already somewhat annoying, and giving people a worse shortcut will lead to worse code.

I think C macros are a great historical example of this. I saw a shitload of code abusing them where a function would've worked, but because they are so easy to write, people wrote them anyway. In a large, complicated codebase. Maintaining that was not fun.

One of the things I love about Rust is that it tends to discourage these "easy" hacks that lead to long-term problems. I don't have to argue with people to write better code, because they are too lazy to write bad code. (Within some limitations, but still...) I feel like adding such a shortcut would undermine this amazing property of Rust.

And no, a simple warning won't cut it, as you only need to drop the type signature to silence it. If rustc could be smart enough to identify the possibility even without a type signature, and perhaps offered an easy way to auto-refactor, I might consider it. But I think this is near-impossible.

5 Likes

On the contrary, macros in C are an excellent argument for this feature, not against it. The problem is emphatically not that people abuse them, as you put it, but that they are an entirely separate language with different semantics.

I'd also point out that while it's true that trivial macros in C are simpler than functions, this doesn't extend beyond the trivial case. Actually writing macros requires dealing with parentheses and line continuations and other esoteric nonsense, which is arguably less straightforward than regular C functions.

macro_rules! suffers from the same issue: when people do use it (despite it being difficult and therefore discouraged, as you say), we get code that is IMO harder to read and reason about.