Feature request: make every String a smartstring

I discovered the smartstring crate recently and thought "why doesn't the String type do this already"? Would this be desirable in the standard library? smartstring - Rust


I don't think this is possible: the layout of String is already guaranteed, so it cannot be changed without breaking backwards compatibility.


E.g. there are a good number of crates relying on String having a stable address. stable_deref_trait::StableDeref - Rust
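A minimal sketch of the assumption such crates make: the heap buffer keeps its address even when the String value itself is moved. (The function name here is my own, for illustration.)

```rust
// Code relying on StableDeref-style guarantees assumes the heap buffer
// does not move when the String value is moved. An inline small-string
// representation would break this for short strings like "hello".
fn buffer_address_is_stable() -> bool {
    let s = String::from("hello");
    let p1 = s.as_ptr();
    let moved = s; // move the String value to a new stack location
    p1 == moved.as_ptr()
}

fn main() {
    // Holds today; with SSO, the bytes of "hello" would live inline
    // and move together with the value.
    assert!(buffer_address_is_stable());
}
```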


I am not sure whether implementing the small string optimization in std is a good idea or not, but I think it should be possible to do in a backwards compatible way. Unfortunately, it would break a bunch of crates that rely on unsound assumptions about String internals along the way.

According to my vague memories, std does not have the optimization because:

  • It keeps the code base simpler.
  • The behavior and performance of strings do not depend on the compilation target.
  • String data access involves no extra branching.

Can you link the documentation that specifies those guarantees? I don't remember anything like this. If anything, the IP address debacle would point in the other direction. The closest thing is the representation section, but it only says that String has a pointer, a length, and a capacity; it places no restrictions that would rule out implementing the small string optimization.

I think such reliance is quite suspicious and, AFAIK, strictly speaking unsound.

Relevant older topic:


Yes, I think the backcompat issue here probably stops something like this in its tracks. Although I don't want to underestimate the creativity of humans. I wouldn't be too surprised if there was some clever way to get something like this working in a backwards compatible fashion. But I imagine it might be hard to do without imposing some additional and unacceptable cost somewhere. (I say this while fully acknowledging @newpavlov's comments, which if correct, might mean we could do this change in a "compatible" manner, but only where "compatible" is interpreted very narrowly. In practice, I do not think we could get away with it.)

As far as why we didn't do this originally, I think the explanation for that comes straight from the Vec docs that address this specifically:

Vec will never perform a “small optimization” where elements are actually stored on the stack for two reasons:

  • It would make it more difficult for unsafe code to correctly manipulate a Vec. The contents of a Vec wouldn’t have a stable address if it were only moved, and it would be more difficult to determine if a Vec had actually allocated memory.
  • It would penalize the general case, incurring an additional branch on every access.
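To make the second point concrete, here is a hypothetical sketch (my own, not std's String) of what a small-string representation looks like; the inline capacity of 22 bytes is an arbitrary choice for illustration:

```rust
// Hypothetical SSO representation: every access must first branch on
// where the bytes actually live.
enum SsoString {
    Inline { len: u8, buf: [u8; 22] }, // 22 is an arbitrary inline capacity
    Heap(String),
}

impl SsoString {
    fn as_bytes(&self) -> &[u8] {
        // This match is the "additional branch on every access"
        // that the Vec documentation warns about.
        match self {
            SsoString::Inline { len, buf } => &buf[..*len as usize],
            SsoString::Heap(s) => s.as_bytes(),
        }
    }
}

fn main() {
    let mut buf = [0u8; 22];
    buf[..2].copy_from_slice(b"hi");
    let small = SsoString::Inline { len: 2, buf };
    let big = SsoString::Heap(String::from("a string long enough to need the heap"));
    assert_eq!(small.as_bytes(), &b"hi"[..]);
    assert!(big.as_bytes().len() > 22);
}
```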

While String doesn't have the same docs, it is internally just a Vec<u8>. We may(?) be within our rights to change that, but in practice, I don't think it would fly.

I would be interested in seeing whether we can update the String docs to more solidly guarantee what folks have come to rely on, similar to what we did with Vec. (Which is going in the opposite direction of the SSO.)


I agree, I think it would be a better approach than implementing SSO.

hard to do, given String::as_mut_vec exists


Hard to do but not impossible I think. For example, if we implemented SSO, then I imagine as_mut_vec could un-inline the string if necessary and turn it into a Vec. This is kinda what I meant by saying that maybe it's possible to do, but it would likely impose extra costs in places that would be unacceptable. But of course, this still requires using a Vec in the un-inlined case I presume.
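A sketch of what that un-inlining could look like, again on a hypothetical inline variant (this is not std's implementation; the 22-byte capacity is arbitrary):

```rust
// Hypothetical: as_mut_vec would have to "spill" an inline string to
// the heap first, since it must hand out a real &mut Vec<u8>.
enum SsoString {
    Inline { len: u8, buf: [u8; 22] },
    Heap(Vec<u8>),
}

impl SsoString {
    fn as_mut_vec(&mut self) -> &mut Vec<u8> {
        if let SsoString::Inline { len, buf } = self {
            // Un-inline: allocate, copy the bytes out, switch variants.
            let spilled = buf[..*len as usize].to_vec();
            *self = SsoString::Heap(spilled);
        }
        match self {
            SsoString::Heap(v) => v,
            SsoString::Inline { .. } => unreachable!(),
        }
    }
}

fn main() {
    let mut buf = [0u8; 22];
    buf[..2].copy_from_slice(b"hi");
    let mut s = SsoString::Inline { len: 2, buf };
    // Unlike today's as_mut_vec, this call may allocate.
    assert_eq!(s.as_mut_vec().as_slice(), &b"hi"[..]);
}
```

The hidden allocation is exactly the kind of unexpected cost mentioned above: callers of today's as_mut_vec can rely on it being free.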


How would String::into_bytes work for sso? Would it sometimes unexpectedly allocate? Would it return some sort of new smallopt-enabled vec?

Because it's important that there be a basic & predictable version atop which other things can build smarter things.

You can wrap the simple version as part of making fancy versions, but you can't unwrap the fancy stuff to avoid the (slight) overheads it adds if you don't need those things. And, more generally, there's many possible choices to make about "smart"ness, and reasonable people will want different things.

So I think Rust does a good job here: most things interoperate using str, so things that don't need to care which kind can use that, but then the programmer can choose String, Box<str>, Arc<str>, SmartString, SmolString, ... as needed.
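The interoperation point can be shown in a few lines; `shout` is just an illustrative name:

```rust
use std::sync::Arc;

// Functions that only read text can take &str; the caller chooses
// whichever owning representation fits, including third-party SSO
// types that deref to str.
fn shout(s: &str) -> String {
    s.to_uppercase()
}

fn main() {
    let owned: String = String::from("hi");
    let boxed: Box<str> = "hi".into();
    let shared: Arc<str> = "hi".into();

    // Deref coercion makes all of these work with the same function.
    assert_eq!(shout(&owned), "HI");
    assert_eq!(shout(&boxed), "HI");
    assert_eq!(shout(&shared), "HI");
}
```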


This is simply incorrect. The current docs you linked to say this:

A String is made up of three components: a pointer to some bytes, a length, and a capacity. The pointer points to an internal buffer String uses to store its data. The length is the number of bytes currently stored in the buffer, and the capacity is the size of the buffer in bytes. As such, the length will always be less than or equal to the capacity.

This buffer is always stored on the heap.

Which means no SSO for std, since we guarantee that the pointer points to the heap for nonempty strings.

Emphasis mine.
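That heap guarantee is what makes raw-parts round-trips sound even for very short strings; a sketch (the function name is mine):

```rust
// from_raw_parts requires a pointer to a live heap allocation made by
// String/Vec. An inline representation could not hand out such a
// pointer for a short string.
fn roundtrip() -> String {
    let mut s = String::from("hi");
    let (ptr, len, cap) = (s.as_mut_ptr(), s.len(), s.capacity());
    std::mem::forget(s); // give up ownership without freeing the buffer

    // Sound only because even "hi" is guaranteed to live on the heap.
    unsafe { String::from_raw_parts(ptr, len, cap) }
}

fn main() {
    assert_eq!(roundtrip(), "hi");
}
```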


I like the simplicity of String. It may be slow for small strings, but it's always slow. Having the allocation "cliff" at >0 chars is simpler than having it at a length that depends on pointer width and UTF-8 character widths. The representation is always straightforward. All operations on it are easily predictable.

Having said that, I wouldn't mind having some other SSO type in std, and/or an explicit ArrayString<N>, because there are many cases where they're useful.

Or perhaps Cow<'static, str> could be specialized to be clever all the way?


I would note that prior experience with std::string (C++) is not all rosy. There are performance issues because the optimizers have trouble optimizing out the branches on reads/writes introduced by the indirection, issues that do not occur for more straightforward types.

This is sufficient, as far as I am concerned, to avoid using such a "clever" trick as the default representation, and instead aim at offering a parameterizable implementation of String which can be offered in multiple variants:

  • InlineString<N>: len in 0..=N, cap = N.
  • SmallString<N>: stored inline while len <= N, spilling to the heap beyond that.
  • String: regular, default, heap-allocated.

Then people can choose what makes sense for their own usecase, and do not need to pay for fancy features they do not use.
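A minimal sketch of the first variant, under my own simplifying assumptions (fallible push instead of a richer error type, no trait impls):

```rust
// Hypothetical InlineString<N>: a fixed-capacity string whose bytes
// live entirely inline, with no heap allocation at all.
struct InlineString<const N: usize> {
    len: usize,
    buf: [u8; N],
}

impl<const N: usize> InlineString<N> {
    fn new() -> Self {
        InlineString { len: 0, buf: [0; N] }
    }

    // Returns false instead of allocating when capacity is exhausted.
    fn push_str(&mut self, s: &str) -> bool {
        let bytes = s.as_bytes();
        if self.len + bytes.len() > N {
            return false;
        }
        self.buf[self.len..self.len + bytes.len()].copy_from_slice(bytes);
        self.len += bytes.len();
        true
    }

    fn as_str(&self) -> &str {
        // Only whole &str values were copied in, so this is valid UTF-8.
        std::str::from_utf8(&self.buf[..self.len]).unwrap()
    }
}

fn main() {
    let mut s = InlineString::<8>::new();
    assert!(s.push_str("hi"));
    assert!(!s.push_str("far too long to fit"));
    assert_eq!(s.as_str(), "hi");
}
```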

Disclaimer: I am biased.

The idea of offering collections in such variants is what has motivated me to work on the Store API RFC.

The obvious follow-ups of the RFC would be the "Store-ification" of the standard library collection types: Box, Vec, String, etc... and putting them in core.

This, in itself, is not enough to have a truly compact small string. Just swapping out the pointer + allocator of Vec for a handle + store allows defining an InlineVec<N> or SmallVec<N>, but those still retain the two usize fields for len and cap. However, since nobody depends on core::Vec and core::String today (they do not exist), merely defining them -- unstably, at first -- would open the possibility of adding more parameterization to the types, such as defining Length and Capacity traits, or any other combination.

The re-exports made by the alloc or collection crates can then take care of exposing an API matching the current one, for example: pub type String = core::String<usize, usize, Global>;.


Setting aside the (important and in this case fatal) issue of compatibility guarantees for a moment...

My understanding of the small/short string optimization is that it's a neat trick that's often an improvement, but not so often or so reliably that you can safely apply it (with a single fixed size) to an entire general-purpose programming language. Compare arena allocators: When used correctly they're often a huge performance win, but you wouldn't necessarily want a programming language to implicitly arena-ify all of your memory allocations for you; that would cause at least as many problems as it solves (and one could easily argue certain managed languages do have problems like this).

SSO is also the kind of optimization that can't do its best work while staying hidden from application code, because the optimal size of the inline buffer depends heavily on the application. A buffer that's too small or too large can easily be worse than no SSO at all. Plus, many of the apps which juggle lots of short strings should also be thinking about string interning, which is an even more powerful optimization if it applies, but it can't be hidden from app code at all.
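For reference, a bare-bones interner sketch (names and structure are my own); each distinct string is stored once and every caller shares the same allocation:

```rust
use std::collections::HashSet;
use std::rc::Rc;

// Minimal string interner: deduplicates strings so repeated short
// strings share one allocation instead of each owning a copy.
struct Interner {
    set: HashSet<Rc<str>>,
}

impl Interner {
    fn new() -> Self {
        Interner { set: HashSet::new() }
    }

    fn intern(&mut self, s: &str) -> Rc<str> {
        // Rc<str>: Borrow<str>, so we can look up by &str directly.
        if let Some(existing) = self.set.get(s) {
            return Rc::clone(existing);
        }
        let rc: Rc<str> = Rc::from(s);
        self.set.insert(Rc::clone(&rc));
        rc
    }
}

fn main() {
    let mut interner = Interner::new();
    let a = interner.intern("short");
    let b = interner.intern("short");
    // Both handles point at the same allocation.
    assert!(Rc::ptr_eq(&a, &b));
}
```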


I'd add that this is a philosophy thing too.

There's a bunch of languages where there are 10% pessimizations peanut-buttered across most things, but that's considered worth it because those things do a bunch of smarts to make it difficult to do anything particularly bad. And that's often a solid choice for the vast majority of projects, IMHO.

Whereas Rust is more willing to say that it's ok to be slow if someone chooses a poor way, because that flexibility is sometimes important to allow doing it the really-fast way.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.