Rust 2019: Towards Richer Language Extensions

As of Rust 1.30, Rust supports a very expressive feature called procedural macros. Procedural macros are great because they enable Rust to do more without baking a lot of potentially domain-specific information into the language. For example, automatic serialization of data types and expressive, type-safe abstractions for web servers can both be implemented using procedural macros.

However, while expressive, procedural macros can’t do everything. The most glaring limitation is that procedural macros are inherently local: they don’t have access to the context surrounding the code being modified. So if you were to implement a hypothetical #[task] function macro, it wouldn’t have the access to the surrounding scope required to determine which function calls might be to other tasks. This is pretty much a deal-breaker for implementing something like Regent on top of Rust.

The use of Regent here is just an example, because I’m the creator of Regent and intimately familiar with it. To be clear, I have no immediate plans to rewrite Regent, and the statements here are my personal opinions and do not represent the views of any past or present employers. However, I think it’s illustrative to think through what it would take to implement Regent in Rust, because Regent is representative of a wide variety of language extensions one might want to implement, and if one could implement Regent, one could probably also implement a large number of other extensions as well.

Why Language Extensions?

First though, why language extensions at all? In my opinion:

  1. Language extensions enable Rust to be simpler.

    There are a variety of domain-specific things that aren’t really appropriate to add directly to Rust itself, but would be very nice to have. Without language extensions, there would be constant pressure to keep adding features to Rust in order to satisfy these users. Supporting language extensions lets the core Rust language stay simpler, because these features can live outside the core language. This keeps domain-specific abstractions where they should be: in libraries. It just happens that some libraries provide features that extend the language itself.

  2. Language extensions enable better user experience by directly encoding domain semantics.

    Ultimately, language extensions are about providing a better experience. This can be seen in a limited way in how the Rocket library allows users to write very expressive code while remaining type safe. But it’s possible to go much further with this: e.g. Ebb is a language extension that enables certain kinds of parallel loops to be automatically run on the GPU, while Regent as mentioned above allows apparently-sequential programs to run on supercomputers with thousands of nodes. These languages simply aren’t capable of being encoded in Rust’s existing procedural macro infrastructure; they require too much context for that to work.

  3. Language extensions unlock novel capabilities.

    To give a specific example with a language I’m familiar with: Regent is a language with sequential semantics that automatically runs on parallel and distributed supercomputers with performance competitive with MPI and C/C++ (and in some cases CUDA, though this is experimental). Needless to say, this is not something that is easy to implement without extensions in an existing general-purpose language. Language extensions make it possible to explore novel techniques that have the potential to radically improve on the state of the art in certain fields. Adding language extensions to Rust would make these approaches accessible to Rust users without requiring aggressive changes to the Rust compiler with every new feature.

The bottom line is that language extensions expand on Rust’s promises by growing the set of things that can be done in the language, while preserving type guarantees and enabling optimizations beyond anything one could reasonably expect from a general-purpose language; and they do so while incurring relatively minimal cost to the core Rust language and infrastructure.

But What About Language Feature X?

I’ve hinted at this above, but let me be fully explicit: there is no combination of existing or planned language features for Rust which can accommodate these language extensions I have in mind, either as procedural macros or libraries. For example, Regent is (very, very) mildly dependently typed. A number of language features and key optimizations, which are critical to Regent’s ability to scale to thousands of nodes, depend on Regent’s ability to perform analysis based on its expressive type system.

I do not think it would be prudent for Rust to add these features in order to enable Regent to be encoded directly in Rust, because the additional features would add a burden to all Rust users, but would only benefit those using this specific language extension.

Why Not Fork the Compiler?

In the past, one proposed alternative was to effectively fork the Rust compiler, and add your language extension directly to Rust itself. This is undesirable for two reasons:

  1. The cost of adding language extensions is high.

    This is, for better or worse, simply a fact of life for any mature compiler infrastructure. Compilers are big and complicated, and if the recommended way to add language extensions is to fork and modify an existing compiler, there is a lot of complexity (both inherent and accidental) that comes along with that. While it may seem like an appealing approach because it allows deep integration with the host language, in my experience it isn’t worth it in the end.

    Practically speaking, I’ve seen this approach tried at least three times as extensions to C/C++, and in all three cases I believe the choice of extending a C/C++ compiler was ultimately considered to be a net burden on the project. In the case of one unpublished project, the project effectively never got off the ground because of the complexity of dealing with the Clang C++ infrastructure. In another case, the complexity inherited through the C++ compiler was one of the nails in the coffin that eventually led to the project being abandoned, because making changes to the language became too difficult. In the last case, as far as I know the language extension was successfully implemented, but keeping up with Clang changes continues to be a large, ongoing burden.

    (I am not comfortable mentioning the specific project names in public, but would be happy to discuss in private if anyone has more questions.)

    It’s worth noting that it doesn’t have to be this way! As a counterexample, I would submit Terra, a language designed for meta-programming that has very strong support for language extensions. I’ll get into more detail about why Terra is great for language extensions below, but for now I would simply make the comment that of all the compiler infrastructures I’ve ever worked with, Terra has been by far the fastest and easiest to get started with. At the very least, I think it’s worth learning from Terra’s approach.

  2. Language extensions can’t interoperate if everyone forks the compiler.

    One of the best parts of having first-class support for language extensions is that it allows language extensions to interoperate. For example, it’s theoretically possible to use Ebb, Regent, Opt and Darkroom in the same program. Why? Because these are all language extensions implemented in the Terra programming language. Rust could serve a similar role if it also had first-class language extensions.

Why Rust and Not Another Language?

At this point you might be wondering why we should bother with Rust at all, if these other languages already exist and work adequately.

In my opinion, Rust is a good host for these languages because it has the right balance of simplicity, type system expressivity, minimal runtime (no GC or heavy VM infrastructure), and interoperability with the native system infrastructure (i.e. the C programming language). These benefits have been widely discussed so I won’t belabor the point here.

As a bonus, Rust also has a vibrant user community, which makes me relatively comfortable investing in Rust infrastructure for the long term. A language is a large investment, so any would-be language extension creator has to carefully weigh the risks that come along with the host language. For example, because Regent uses Terra, and Terra uses LLVM, one pain point for us has been the maintenance burden imposed by LLVM’s refusal to support a stable API. (LLVM’s C API is stable-ish, but doesn’t support what Terra needs to get the job done.) Rust also uses LLVM, but the advantage here is that because Rust’s developer community is larger, hopefully the cost of upgrading LLVM can be amortized across more users.

Also, speaking as someone who has developed what is probably the largest Terra language extension ever: while I meant what I said about Terra being the easiest to get started with, there are pain points as well. For some of these, it’s not obvious how one would even address them within the Terra infrastructure. Therefore, my recommendation would be to view Terra as an opportunity for learning lessons, and not so much as a direct competitor. In my view, Terra and Rust occupy different niches anyway.

Lessons Learned from Terra

First of all, it may be worth reviewing what Terra is, in order to ground the following discussion.

At a basic level, Terra should be viewed as a way to meta-program a C-like language from Lua. This is probably easiest to see if we walk through a basic example.

local a = 5
terra add5(x : int)
  return x + a
end
print(add5(2))

The top level of a Terra program is basically a Lua script. When you run this file, it’s literally running in the Lua interpreter. The first line creates a Lua variable, named a, set to the Lua value 5.

The second line starts a Terra function. This is still executing in the Lua interpreter! Think of the script execution as being like compile time in a language like Rust. So when we “execute” the second line in Lua, we’re compiling the function in Terra. In this case we’re defining a function with one parameter. Any unbound references inside the function are looked up in the enclosing Lua scope. In this case, a is a Lua variable at the file level scope. Lua variables are spliced into the Terra function body before the function is compiled. So the body of the function is effectively equivalent to writing return x + 5. Note that this means that Lua variables are effectively read-only from Terra; if you wrote a = 3 in Terra that would expand to 5 = 3, which would result in a compiler error.

After the end keyword we return to normal Lua execution. At this point, add5 is basically a global Lua variable that happens to hold a Terra function object. We can call that function, as in the last line, which results in the code being JIT’d with LLVM, or we could call terralib.saveobj and dump a .o that we could link into another program.

Terra’s support for language extensions basically amounts to two features:

  1. Terra’s parser can be augmented with arbitrary keywords.

    For example, Regent adds a task keyword, which can occur anywhere a Lua statement can start. When Terra sees this keyword, it basically hands the lexer over temporarily to the Regent compiler (which is simply a Lua function). Regent implements everything from the parser on up. When Regent is done, it hands the lexer back, and returns a Lua object representing the task that has been defined. Then control returns to Lua just like when defining a Terra function.

  2. Terra language extensions have full access to Lua, and Terra’s Lua APIs and language features.

    This is surprisingly powerful. For example, Terra functions are Lua objects that just happen to support a certain API. If Regent returns an object that supports the same API, it can be used interchangeably. The same goes for Terra types.

    This also means that Terra objects (functions, types) can be introspected. Regent can even invoke the Terra compiler if it wants to. This feature ends up being extremely useful, because it makes it unnecessary to e.g. replicate parts of the Terra type system. If there is an aspect of Terra’s type system which is non-trivial to reimplement by hand, one can simply invoke the Terra compiler to do the heavy lifting. For example, to check that a type cast is valid in Terra, one can simply do:

    function can_cast(from_type, to_type)
      local function helper()
        local terra f(x : from_type)
          return to_type(x)
        end
      end
      return pcall(helper)
    end

    And then you can use can_cast(S, T) to check if a cast is valid between any types S and T. The cool part is that the code in the body of the internal function f can be arbitrary Terra code, so the same basic strategy can be used to type check any sort of Terra statement or expression. Compare this to the amount of work you’d have to do if you were simply writing this using rustc or Clang internal APIs!

    Similarly, when doing code generation, Terra language extensions can simply use Terra’s support for meta-programming to build up the desired Terra code. Writing a code generator as a series of quasi-quoted expressions is much, much easier than generating LLVM IR by hand.

These two features combine to make writing language extensions quick and surprisingly fun, especially compared to trying to hack something on top of traditional compiler infrastructures. It really is the fastest way to hack something up that I’ve seen to date.

Having said that, I’ve also got a long laundry list of issues with Terra. Not all of these are Terra’s fault, and some are inherited from Terra’s dependencies, but I think it’s still illustrative to learn from them.

The first couple of issues are implementation issues which I believe would pretty much just go away if one were to use Rust. The main exception is CUDA support, which would require more thought if Rust wanted to make it a first-class target.

  1. Mis-features inherited from Lua.

    While in some ways, a minimal, dynamic language is exactly what you want for quickly prototyping a language compiler, Lua also has a number of features which I consider to be flaws and which ultimately end up hurting more than they help in the long run, especially as language extensions become more sophisticated and maintenance becomes more important. These include:

    • Permissive function parameter checking. Lua does not check that the number of arguments matches the number of formal parameters, so it’s possible to pass too many or too few. Unlike Lua’s permissive global variable checking, I’m not aware of any way to fix this without adding manual checks at every call site. In practice, I don’t know anyone who actually writes adequate checking code at the entry points of their functions, so errors tend to propagate far down the line before something gets messed up badly enough to cause the compiler to actually halt.
    • Zero- vs one-based indexing. Lua uses one-based indexing. This might be fine in isolation, but Terra code regularly has to interact with C code which uses zero-based indexing. Terra chooses to align with C here, but either choice would result in a certain amount of cognitive dissonance.
    • No support for user-defined hash functions.
    • No built-in support for pretty-printing basic data structures. Everything in Lua is an object, including basic data structures like lists and maps, so there is no way to determine a priori how an object should be printed. And if you try to implement a custom tostring to fix this, it needs to be supported by all transitive dependencies, or you just end up with the same problem at another level in your dependency tree.
    • Very small standard library, and no official package ecosystem. In practice, dependencies get vendored or simply reimplemented locally because the package ecosystems that do exist for Lua don’t support Terra.
    • Mandatory tail-call elision means stack frames disappear from backtraces. There is no way to work around this except to modify the source code. This is especially infuriating when your function is essentially a large dispatch table over AST node types or similar… there is basically no way to figure out which branch you went down except by printf debugging.
    • Poor quality built-in debugger. Maybe I’m just not smart enough, but I still have not figured out how to use Lua’s debugger effectively, so I end up doing a lot of printf debugging.
    • Backwards-incompatible language changes. Lua is infamous for breaking the language with every release. Terra uses LuaJIT, which is more stable, but LuaJIT doesn’t support all the architectures Terra needs to run on, e.g. PPC64le for the Summit supercomputer, so in certain cases we’ve been forced to go back to normal Lua, which means portable Terra programs need to use a subset of Lua features.

    All of these combine to make maintaining a language extension something of a pain. Most of this could be fixed just by using a nice, statically typed language like Rust.

  2. Inherited maintenance burdens.

    Not all of these are exposed to Terra users per se, but the Terra community is small enough that to some extent Terra’s maintenance burdens become users’ maintenance burdens.

    • LLVM (lacks backwards compatibility, as noted above).
    • CUDA’s NVVM can be extremely picky about what LLVM versions it works with. In practice this forces Terra to support a larger set of LLVM versions than would otherwise be required, because users on supercomputers don’t generally get to choose what CUDA version they use.
    • LuaJIT is stable, but doesn’t necessarily support all architectures that Terra needs to run on, e.g. PPC64le for the Summit supercomputer.

The remaining issues are more fundamental to the approach of using a dynamic language as a meta-programming layer. It is honestly not obvious to me if these can be fixed in Terra:

  1. Inability to statically analyze code.

    Terra compilation involves executing an arbitrary Lua script, which may even add novel syntax via the import keyword. This makes it pretty much impossible to statically analyze Terra programs. This also has knock-on effects on all sorts of other things you’d like to do with code, e.g. refactoring tools, find-definition, linters, and code formatters. This affects not just the downstream users of Regent, etc.; it affects the developers of language extensions as well.

  2. Language extension compile time.

    One challenge in developing a high-quality language extension is maintaining fast compile times. Because the compiler itself is written in Lua, and isn’t the sort of code that necessarily JITs well, it is easy for this code to be slow.

    Optimizations that would improve compile times, like incremental compilation, are challenging to implement. If your language extension allows calls to arbitrary Terra functions, then hashing a function to determine if it has changed means hashing not only the language extension’s own AST, but also the Terra ASTs of any functions it calls. And the Terra AST can reference Terra types, which can be arbitrary Lua objects potentially defined dynamically based on program inputs… The problem gets messy really quickly, and the only way I’ve found to reduce this complexity is to cache the code at the LLVM IR level immediately before running LLVM optimizations on it. However, this misses a lot of potential speedups that could be had by avoiding rerunning the language extension (and Terra) compilers.

Some of these would also (I hope) go away by using a more traditional static compiler infrastructure, and writing language extensions in Rust would make it easier to ensure those extensions are fast and high-quality.

Thoughts on Language Extension Support in Rust

I’ve got fewer ideas here, but I’ll list what comes to mind.

One thing that is clear to me from my experience with Terra is that writing a language extension as a meta-program is by far the easiest way to go. Rust is part of the way here with the syn crate, but syn (and Rust procedural macros in general) is not hygienic and does not even attempt to deal with Rust at a semantic level. It has been a while since I’ve worked on the Rust compiler, so I’m not sure what specifically to suggest, but one thing I am sure about is that mucking around with rustc internal APIs does not sound like fun to me. I understand why for engineering reasons rustc is divided into different stages, but it prevents the sort of “just type check this statement” pattern that you can do so easily in Terra.

My recommendation would be not to expose rustc internal APIs (which I expect the rustc team does not want to commit to stabilizing anyway), but to think about higher-level APIs to accommodate making effective language extensions. I’m not quite sure what that would look like, but I’d recommend making it more along the lines of “type check this token stream as an anonymous function in the current module” rather than exposing individual compiler passes like name resolution. Similarly, while clearly some compiler representations will need to be exposed, I’d recommend limiting this as much as possible; probably types have to be exposed at some level, but e.g. MIR probably does not (and possibly not even HIR, as it may be sufficient to talk about code input and output as token streams with ASTs living in syn or similar).

It’s important to think up front about how types and functions will be extended in the various language extensions. For example, in Regent, tasks are Lua objects that mostly conform to the Terra function API, but expose additional methods (e.g. you can look up the CUDA variant of a task, if it exists). Similarly, Regent types are just Terra structs, but have additional metadata associated which Regent uses during type checking.

In the design of any major feature, it is helpful to keep potential users in mind from the beginning. For this purpose I’d recommend using existing Terra language extensions, such as the Ebb, Regent, Opt, and Darkroom extensions mentioned above, as examples of the sorts of things that language extensions would like to do. The various domain-specific languages associated with the Delite project are also good to keep in mind.

Language extensions enable better usability and, in some cases, aggressive optimizations beyond what any general-purpose compiler can be expected to accomplish. An infrastructure for language extensions in Rust could position Rust to be a leader in high-performance, high-productivity programming in a variety of use cases.


Would it work for you to treat Rust as a compilation target? i.e. have source code in “RustyRegent” syntax, and emit standard Rust source code from your compiler?

I think it depends on what your goals are. If the goal is to have a mostly standalone language with a modest interop capability, then this would work reasonably well. But often the goal of a language extension is to have very tight interop with the host language, i.e.:

  1. The ability to use most or all syntax of the host language in the extension language.
  2. The ability to use host and extension languages in the same source file.
  3. The ability to seamlessly call between host and extension languages.

There are a couple reasons to want this sort of strong interop between the host and extension language:

  • (1) means there's less for users to learn. The extension can be summarized by the things added to and removed from the host.

  • (2) and (3) make it easier to keep the extension small, because users can just write code in the host language for anything that can't be done in the extension. (E.g. I/O is something often omitted from compute-oriented languages.) Otherwise, there would be a constant pressure to grow the extension language to encompass more and more use cases.

In fact, the very first version of Regent (which never made it to publication) was a source-to-source compiler to C++. This early version wasn't a C++ extension at all; it had a completely novel syntax and just used C++ as a compilation target. First, this made Regent harder to learn. Second, interop was a major source of pain, because you couldn't mix C++ and early Regent in the same file, and the early Regent language wasn't expressive enough to do a lot of what people wanted to do. When we moved Regent to a Terra language extension, it was a breath of fresh air, because the tight interop with Terra instantly made a whole class of problems around I/O etc. go away. In all likelihood, Regent would never have made it to the point of being a practically useful language if it had stayed on its original trajectory.

Here are some other thoughts on the tradeoffs between proper extensions and compilation targets, in no particular order:

  • Using a language extension model makes it possible to mix multiple extensions in the same source file. This allows each extension to be minimal, orthogonal, and to focus just on what it's good at.

    This also makes it possible to mix language extensions written by different authors in the same source. This would be valuable for the same reason that using e.g. Serde and Rocket in the same source file is valuable today.

  • Here's a thought experiment: suppose async/await were implemented as a Rust language extension, rather than directly in rustc. I think it's pretty obvious that you'd want (1-3) above, because you'd want the extension to be as minimal and orthogonal as possible and not a completely different source-level language.

    This isn't so different from what you'd want from a hypothetical Regent in Rust. One way to view Regent is that it provides distributed task execution instead of state-machine style concurrent execution on a single node, but otherwise it shares many of the same goals as the async/await support that is scheduled to be baked directly into the Rust language.

  • When (1) is a goal, one of the major challenges is reimplementing the bits of Rust that you carry over into the extension language. If the extension is implemented as a compiler plugin, you can reuse bits of the Rust compiler where necessary and appropriate (though there are open questions about how to do this).

    But if Rust is a compilation target, it's not obvious what you'd do. While it would be theoretically possible to build a complete Rust type checker in a third-party library (similar to syn but for Rust semantics), this seems like a lot of duplication of effort and is likely to be buggy and brittle. On the other hand the portion of rustc you'd need to slice out would be pretty significant, because some type checking only occurs at the MIR level. It seems pretty sticky any way you go about it.

  • On the other hand, one of the nice things about source-to-source is having an explicitly reified representation of the generated code. Raw ASTs are frequently not so nice to look at.

  • Debugging. Source maps are ok but only take you so far. For example, if the language extension includes some sort of generics with a non-trivial mapping into Rust's generics, just pointing to the place where the original definition occurred loses a lot of information. Similarly, it's nice to be able to expose variables as they exist in the original program rather than as they exist in the generated code.

TL;DR: I'd suggest thinking about language extensions as libraries that just happen to add new syntax. Rust should support features that promote extensions that are small, orthogonal, and integrate well with other Rust code and the broader ecosystem, rather than having every language extension be a silo that exists mostly on its own.


I find the idea interesting and in many ways it reminds me of tactics in say Coq or Idris. I think however that the main problem facing you is not a question of motivation (i.e. "is this a good idea") but rather of how to get it done well technically without stagnating Rust as a language. So the devil is in the details.

You mentioned it before, but one significant challenge will be stage separation (i.e. the type system does not yet exist when parsing happens); given how the Rust compiler is architected today, significant changes would need to be made. There's also the issue of locking Rust's specification into a specific compilation model. I think the language team will be cautious about giving too many guarantees here.

This is a good example; the AST/HIR/HAIR/MIR representations are not part of Rust's spec and I don't think we should expose them. Exposing TokenStreams seems more viable, but we will not be able to give very strong guarantees about the contents of those streams. Other potentially viable queries might be "does the type referred to by this TokenStream implement the trait referred to by that TokenStream"... Generally, you should look for solutions that don't impede future language design but still give you the info you want.

freeform CTFE? (file I/O, global state, etc)

You will need to elaborate…

Well, const fn are supposed to be const, but I mean something more like a freeform CTFE where you can just call "arbitrary" non-const fn (as long as Miri can handle it).

So you can do some compile-time I/O and metaprogramming with full-power Rust and inline the results.

With some work, we'd have the ability to generate tokens (statements, maybe?) from it and assemble them together, but I'm not quite sure how that'd work. But I believe the first step is this freeform CTFE.

If by CTFE you mean “compile time function evaluation”, then if I understand the above you are suggesting that MIRI, which interprets the MIR output of a mid-level phase of rustc, would somehow generate high-level input to an earlier phase of rustc while it’s processing the same compilation unit. The required temporal order seems backward: early phases of rustc precede later phases, rather than follow them.

There are languages/compilers with full introspection, such as zig, that are capable of such meta-programming within the compilation unit, but the structure of the rustc compiler was not conducive to such operation when I last examined it a year ago.

Can’t this be done purely via build.rs? In theory it can find and process extension source files, transform them into pure Rust, and continue as usual. Also, I guess you could chain several extensions this way as well. Of course we will need specialized crates and maybe some compiler support to make things nicer (e.g. to be able to report errors in the extension source file and not in the generated output), but I think experimentation in this field can begin even today.

Doesn’t that approach have the same set of problems as current procedural macros? I.e. your extension can’t see anything outside of the immediate context of the code it’s currently processing. It’s true that you’d at least get to see the entire file, instead of just a single function, but you still wouldn’t have any way to type check calls to other files or other crates.

In my understanding, a build script can have access to all files in the current crate, so I don’t think so. You can even overwrite the original .rs files, though of course for obvious reasons that’s not recommended in practice. Maybe it’s possible to write modified .rs files into a different directory and change the src path to it; I am not sure here.

Ok, so then I guess the main thing you can’t do at that point is type check calls to other crates. I suppose in some cases you could work around that by allowing external calls to be untyped, and then allowing rustc to be responsible for type checking those. It still feels ugly to me, but you’re right that if you wanted to hack on this today it sounds like this would be the way to get started.

That still leaves the question of reimplementing bits of Rust included in the extension language. Maybe you could hack that part by calling out to rustc as an external program. Don’t expect it to be fast, though…

As an alternative to looking into all the source files, I wonder how far we are from being able to stick a custom procedural attribute at the top of the crate. That way you’d also get the whole crate, but as a TokenStream instead of having to walk through a filesystem, looking for files.

Indeed, making this work well would require a more flexible compilation model. Instead of "run typechecking on whole crate, run codegen on whole crate, ...", rustc would likely need a more demand-driven model, like "codegen for item X requires typechecking item X which requires typechecking item Y and Z"

Fortunately, I think rustc is already moving in this direction, mainly for the sake of incremental compilation, as well as IDE support, parallelism, etc.:

As described in the high-level overview of the compiler, the Rust compiler is currently transitioning from a traditional "pass-based" setup to a "demand-driven" system.

source: Queries: demand-driven compilation

Similarly, the rust-analyzer project, which is supposed to support IDEs with very low-latency incremental updates and "on-demand" queries, will probably move even further in that direction, if it's actually completed.

On the other hand, intermixing phases within a single logical compilation (and in a single compilation unit) does create the risk of paradoxes. For example (pseudocode):

if !does_type_impl_trait("Foo", "Bar") {
    emit("impl Bar for Foo {}");
}

However, I believe this can be alleviated by adding restrictions on what types of items can be added in what contexts.

Alternatively, it might be possible for the compiler to keep a log of all past queries and explicitly identify when newly introduced items would change their results, in which case it would error out. That might have too much overhead, though...
