Why are SocketAddrV4/SocketAddrV6 based on low level sockaddr_in[6]?

A port is always 16 bits (2 bytes) and an IPv4 address is 4 bytes, so the optimal size for SocketAddrV4 is 6 bytes (8 bytes once padded for the 4-byte alignment of Ipv4Addr). However, size_of::<SocketAddrV4>() == 16 (on Linux). That is 2x the expected size :open_mouth:

Hmm... Yes, this is because the implementation is:

pub struct SocketAddrV4 {
    inner: c::sockaddr_in,
}

And sockaddr_in has more data than just the IP and port.
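
For reference, on Linux the C struct being wrapped looks roughly like this (paraphrased from the libc crate definitions), which is where the 16 bytes come from:

// Rough shape of the Linux sockaddr_in that c::sockaddr_in maps to.
#[repr(C)]
pub struct sockaddr_in {
    pub sin_family: u16,   // sa_family_t: address family, e.g. AF_INET
    pub sin_port: u16,     // port, in network byte order
    pub sin_addr: in_addr, // the 4-byte IPv4 address
    pub sin_zero: [u8; 8], // padding required by the C API
}

#[repr(C)]
pub struct in_addr {
    pub s_addr: u32, // address, in network byte order
}
// 2 + 2 + 4 + 8 = 16 bytes, matching size_of::<SocketAddrV4>() on Linux.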

It's also not (currently) possible to make the new constructors for these socket addresses const fns. That's because of how they must transform the Ipv4Addr and u16 arguments into the sockaddr_in representation. See https://github.com/rust-lang/rust/pull/67315.

Because of this system-level representation, the standard library also has to use unsafe in SocketAddrV4::ip.
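
The accessor looks roughly like this (paraphrased from the std source, where c is std's internal libc binding; not verbatim):

// Handing out a &Ipv4Addr means reinterpreting the C in_addr field, hence the unsafe.
pub fn ip(&self) -> &Ipv4Addr {
    unsafe { &*(&self.inner.sin_addr as *const c::in_addr as *const Ipv4Addr) }
}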

So, those are the cons of representing this fundamental address type directly as the system representation. What are the benefits? I guess one benefit is that whenever a SocketAddrV4 or SocketAddrV6 is sent to, or received from, the system, the conversion is just a simple cast. See https://doc.rust-lang.org/1.47.0/src/std/net/addr.rs.html#558-569 for this. That cheap cast is paid for by having to do the conversion in the new constructors instead. I benchmarked this and found that a const fn new for a naively implemented struct holding just an Ipv4Addr and a u16 was almost 2x as fast as the current constructor.
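
Paraphrasing the linked code, the outgoing direction is essentially just a pointer cast plus a length (again, c is std's internal libc binding, so this only compiles inside std):

// Because the address types wrap the C structs directly, handing one to the
// OS is a cast, not a conversion.
fn into_inner(addr: &SocketAddr) -> (*const c::sockaddr, c::socklen_t) {
    match addr {
        SocketAddr::V4(a) => (a as *const SocketAddrV4 as *const c::sockaddr, mem::size_of_val(a) as c::socklen_t),
        SocketAddr::V6(a) => (a as *const SocketAddrV6 as *const c::sockaddr, mem::size_of_val(a) as c::socklen_t),
    }
}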

So... A struct looking like this:

struct SocketAddrV4 {
    ip: Ipv4Addr,
    port: u16,
}

Would...

  • occupy roughly half the memory
  • have faster methods (maybe not all, but some)
  • use less unsafe
  • have been full of const fns many Rust versions ago (see the sketch below)
  • have been in core a long time ago
  • and be way easier to read and understand.
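
As a sketch of what that buys (illustrative code only, not the std implementation; to_sockaddr_in is a hypothetical helper and the C types come from the libc crate):

use std::net::Ipv4Addr;

// With the Rust-native struct above, the constructor is trivially const,
// and the C representation is only built at the syscall boundary.
impl SocketAddrV4 {
    pub const fn new(ip: Ipv4Addr, port: u16) -> SocketAddrV4 {
        SocketAddrV4 { ip, port }
    }

    // Hypothetical helper, called right before e.g. sendto/connect.
    fn to_sockaddr_in(&self) -> libc::sockaddr_in {
        libc::sockaddr_in {
            sin_family: libc::AF_INET as libc::sa_family_t,
            sin_port: self.port.to_be(), // network byte order
            sin_addr: libc::in_addr {
                // the octets are already in network order; reinterpret them as a native u32
                s_addr: u32::from_ne_bytes(self.ip.octets()),
            },
            sin_zero: [0; 8],
        }
    }
}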

Since the internal representation is not exposed from std (right?), I assumed any benefits of this system representation would be limited to usage of sockets from std... But I was a bit shocked to find that mio (and by extension tokio) relies on these structs indeed being implemented directly as libc::sockaddr: https://github.com/tokio-rs/mio/blob/27fbd5f04bb5f52a4d1c358cf0c04c6074a3d46b/src/sys/unix/net.rs#L78-L85. I can't find this specified in the documentation for std::net::SocketAddrV4/6. Does mio invalidly rely on this representation, or have I missed where this contract was set up between std and the rest of the world?

I guess (hope) there is a good explanation for this implementation of std::net::SocketAddrV4 (and V6). The above is a list of all the confusing things I found when I just wanted to check why SocketAddr::new was not yet a const fn. I hope someone will post a good explanation, so people coming after me can find this thread instead of falling down a rabbit hole :smiley:. Given that mio relies on this representation I guess it's too late to change it, even if the assumptions they have made are not correct :frowning:

4 Likes

There was some brief discussion on this as part of

My recollection of the pushback against moving to a Rust native encoding of the types is that the current representation is considered to be more performant, since it does not need translating when calling into OS APIs; but I don't believe this has ever been tested, and I doubt its validity given how trivial the translation is, compared with the potential benefits of the optimisations of repr(Rust) types.

I strongly disagree; the stdlib can't be scared of fixing things just because a large library has made invalid assumptions about layout.

13 Likes

I agree with you here! I'm also 100% against stopping progress for almost any reason, especially when the reason is that someone else broke the rules. But my experience is that the Rust community is very conservative and super afraid of anything even remotely breaking, which gets us stuck with old mistakes. No one would be happier than me if my assumption that this can't be changed turns out to be wrong :slight_smile:

I don't think it's reasonable to assume that it's safe to cast a reference to SocketAddrV4 to a pointer and assume it has the same layout as the underlying platform type. I certainly don't think that's part of the promised ABI.

I do think that changing it shouldn't be done lightly, and in particular, changing it should come with benchmarks to show that it's not a performance hit to do so.

9 Likes

I'll see if I can create some benchmarks. I agree it should not be changed without good reason. But being able to move it to core, getting const fn methods, and cutting the memory usage are pretty good reasons already. So if the benchmarks do not show noticeable regressions, I would be in favor of changing it.

4 Likes

To get the mio issue moving I've opened https://github.com/tokio-rs/mio/issues/1386, but I know there are likely a few other libraries making the same assumption, e.g. socket2 (linked from that issue).

1 Like

See also the previous rust-internals thread:

As noted in that thread and the linked RFC, avoiding dependence on libc::sockaddr would allow potentially moving these types to core, which would be nice for embedded applications.

1 Like

I also highly doubt doing the conversion on the edge where it's used would be more expensive than doing it on instance creation. Every time it is used as the system representation it is going into or coming out of a syscall. And I think doing this conversion is going to be very cheap compared to the syscall itself, so any performance difference is going to be dwarfed by the syscall?

The thread you link to does not talk about performance anywhere I can find. Does anyone happen to know what kind of operation they are afraid might take a performance hit? I want to benchmark this, but I'm not sure exactly what to benchmark. I can create a UDP server and client that burst traffic over loopback and see if there is any measurable performance difference.

3 Likes

You might want to make sure there isn't some kind of raw-sockets functionality that it's used for. That would also be a case where someone would want high performance, although I think it's pretty rare.

Calling UdpSocket::send_to in a tight loop on an unbound socket would probably be the easiest way to measure any performance hit from making SocketAddrV[46] not an OS-level sockaddr. I would suggest sending to a remote destination that will throw them away and not send back anything, so that the receive side of the network stack on the sending machine isn't under any load.
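
For instance, a minimal Criterion benchmark along those lines might look roughly like this (a sketch; it assumes the criterion crate as a dev-dependency, and the destination address is a placeholder for a host that silently discards the packets):

use std::net::{SocketAddr, UdpSocket};
use criterion::{criterion_group, criterion_main, Criterion};

fn send_to_v4(c: &mut Criterion) {
    // Bound to an ephemeral port but not connected, so send_to has to
    // take the destination address on every call.
    let socket = UdpSocket::bind("0.0.0.0:0").unwrap();
    let dest = SocketAddr::from(([198, 51, 100, 1], 9)); // placeholder destination
    let payload = [0u8; 32];
    c.bench_function("UdpSocket::send_to (v4)", |b| {
        b.iter(|| socket.send_to(&payload, dest).unwrap())
    });
}

criterion_group!(benches, send_to_v4);
criterion_main!(benches);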

If there is a performance hit, perhaps it could be resolved by adding new OS_sockaddr_in{,6} types that impl ToSocketAddrs and are guaranteed to be a wrapper around a sockaddr_in{,6}, with an explicit conversion from SocketAddrV[46]. (Abnormal naming convention suggested so it can match what these types are called in C.) This could then be documented as only for use in tight loops calling send_to.
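
A sketch of that idea (the type name and design follow the suggestion above; nothing like this exists in std, and the C types come from the libc crate):

use std::net::SocketAddrV4;

// An explicitly-converted wrapper that is guaranteed to be a sockaddr_in,
// so a hot send_to loop can convert once instead of on every call.
#[allow(non_camel_case_types)]
#[repr(transparent)]
pub struct OS_sockaddr_in(libc::sockaddr_in);

impl From<SocketAddrV4> for OS_sockaddr_in {
    fn from(addr: SocketAddrV4) -> Self {
        OS_sockaddr_in(libc::sockaddr_in {
            sin_family: libc::AF_INET as libc::sa_family_t,
            sin_port: addr.port().to_be(),
            sin_addr: libc::in_addr { s_addr: u32::from_ne_bytes(addr.ip().octets()) },
            sin_zero: [0; 8],
        })
    }
}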

The other thing I might be a little concerned with is minimizing conversion operations in between getaddrinfo and connect: not for performance, but because every conversion in that path is an opportunity for a bug to sneak in. I'm not sure how this works now, but if there were a type that wrapped a struct addrinfo chain and made it impl ToSocketAddrs, then that could maybe also short-circuit conversion of OS_sockaddr_xxx to SocketAddrVx and back.

Yes. This is more or less exactly what I had in mind as well. UdpSocket::send_to was the best thing I could come up with in terms of ratio between conversion of SocketAddr and actual useful work.

Some preliminary benchmarks indicate just what I thought: it won't make any measurable change at all. Converting the SocketAddr to a libc::sockaddr hovers in the single-digit nanosecond range, while UdpSocket::send_to is in the single-digit microsecond range. So even without doing the conversion you would not be able to save more than 0.1% of the time of the entire operation. But I will work more on the benchmarks; maybe I'm testing this completely wrong.

Here is my benchmark code as of right now. I'm waiting for some more toolchain compilation before I can present any stable results, but I'd love feedback on the benchmark code:

2 Likes

I have now managed to run my benchmarks in a clean fashion on two (hopefully) correctly built toolchains. One is the master branch at b1d9f31e043, and the other has my patches applied. I will likely open a PR soon so the code changes can be discussed.

If you are not familiar with Criterion this might look messy. Look for the lines starting with change: to see the time difference. -4.3372% means the time it took to do this operation improved by 4.3% with my patches (Rust native encodings of the network types) applied.

Ipv4Addr::new           time:   [1.6508 ns 1.6617 ns 1.6739 ns]                           
                        change: [-5.1101% -4.3372% -3.6618%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
  2 (2.00%) high mild
  2 (2.00%) high severe

Ipv6Addr::new           time:   [5.4042 ns 5.4340 ns 5.4733 ns]                           
                        change: [-2.1922% -1.1287% -0.1790%] (p = 0.03 < 0.05)
                        Change within noise threshold.
Found 6 outliers among 100 measurements (6.00%)
  2 (2.00%) low mild
  1 (1.00%) high mild
  3 (3.00%) high severe

SocketAddrV4::new       time:   [1.3607 ns 1.3663 ns 1.3734 ns]                               
                        change: [-32.722% -32.132% -31.563%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
  1 (1.00%) low mild
  3 (3.00%) high mild

SocketAddrV6::new       time:   [7.9127 ns 7.9329 ns 7.9537 ns]                               
                        change: [-22.636% -22.255% -21.841%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 5 outliers among 100 measurements (5.00%)
  2 (2.00%) low mild
  2 (2.00%) high mild
  1 (1.00%) high severe

Ipv4Addr::fmt           time:   [126.93 ns 127.94 ns 129.28 ns]                          
                        change: [-1.5430% -0.5077% +0.4429%] (p = 0.34 > 0.05)
                        No change in performance detected.
Found 10 outliers among 100 measurements (10.00%)
  3 (3.00%) low mild
  2 (2.00%) high mild
  5 (5.00%) high severe

Ipv6Addr::fmt           time:   [164.84 ns 165.72 ns 166.99 ns]                          
                        change: [+0.2453% +1.4088% +2.4333%] (p = 0.01 < 0.05)
                        Change within noise threshold.
Found 11 outliers among 100 measurements (11.00%)
  4 (4.00%) low mild
  3 (3.00%) high mild
  4 (4.00%) high severe

SocketAddrV4::fmt       time:   [162.51 ns 163.08 ns 163.69 ns]                              
                        change: [-2.6521% -1.5210% -0.1030%] (p = 0.01 < 0.05)
                        Change within noise threshold.
Found 4 outliers among 100 measurements (4.00%)
  2 (2.00%) low mild
  2 (2.00%) high severe

SocketAddrV6::fmt       time:   [262.10 ns 263.34 ns 264.70 ns]                              
                        change: [-0.8402% -0.1416% +0.6499%] (p = 0.71 > 0.05)
                        No change in performance detected.
Found 6 outliers among 100 measurements (6.00%)
  1 (1.00%) low mild
  1 (1.00%) high mild
  4 (4.00%) high severe

UdpSocket::send_to (v4) time:   [1.7204 us 1.7266 us 1.7332 us]                                     
                        change: [-0.5591% +0.3037% +1.1165%] (p = 0.51 > 0.05)
                        No change in performance detected.
Found 3 outliers among 100 measurements (3.00%)
  2 (2.00%) high mild
  1 (1.00%) high severe

UdpSocket::send_to (v6) time:   [1.6581 us 1.6664 us 1.6754 us]                                     
                        change: [-2.1811% -1.0916% +0.4453%] (p = 0.07 > 0.05)
                        No change in performance detected.
Found 5 outliers among 100 measurements (5.00%)
  1 (1.00%) high mild
  4 (4.00%) high severe

As you can see, Ipv4Addr::new, SocketAddrV4::new, and SocketAddrV6::new improved measurably. All other benchmarks had changes so small that Criterion classifies them as within the noise threshold. Measuring changes in actual network calls seems silly, since the entire struct conversion from the C representation to the Rust representation and back takes just a few nanoseconds.

Feedback and critique on the way this is measured are welcome. But my initial interpretation is that this change would only improve performance, not regress it. Please note that I have only done this on Linux so far. Since the network types have different sizes etc. on different platforms, I expect the results to differ slightly elsewhere.

8 Likes

In the above benchmarks, the types are implemented in this way:

pub struct Ipv4Addr {
    octets: [u8; 4],
}

pub struct Ipv6Addr {
    octets: [u8; 16],
}

pub struct SocketAddrV4 {
    ip: Ipv4Addr,
    port: u16,
}

pub struct SocketAddrV6 {
    ip: Ipv6Addr,
    port: u16,
    flowinfo: u32,
    scope_id: u32,
}
4 Likes

The benchmarks above only cover converting SocketAddr -> c::sockaddr, not the other way around. I realized that UdpSocket::local_addr() does the conversion in the other direction, and it also does not involve any network traffic.
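
The conversion being exercised there is essentially sockaddr_in back into the Rust type, roughly like this (a sketch of the direction only, not the actual code in my PR; the C types come from the libc crate):

use std::net::{Ipv4Addr, SocketAddrV4};

// Rebuilding a SocketAddrV4 from a sockaddr_in the OS filled in
// (e.g. the getsockname call behind UdpSocket::local_addr).
fn from_sockaddr_in(addr: &libc::sockaddr_in) -> SocketAddrV4 {
    SocketAddrV4::new(
        // s_addr and sin_port are stored in network byte order
        Ipv4Addr::from(u32::from_be(addr.sin_addr.s_addr)),
        u16::from_be(addr.sin_port),
    )
}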

Similar results: no measurable change in performance. Around 500 ns both with the current standard library and with the code in my PR:

UdpSocket::local_addr (v4)                                                                             
                        time:   [525.15 ns 527.26 ns 529.65 ns]
                        change: [-0.2401% +0.7189% +1.6319%] (p = 0.14 > 0.05)
                        No change in performance detected.
Found 7 outliers among 100 measurements (7.00%)
  4 (4.00%) high mild
  3 (3.00%) high severe

UdpSocket::local_addr (v6)                                                                             
                        time:   [537.96 ns 540.84 ns 544.13 ns]
                        change: [+0.6489% +1.3014% +1.9054%] (p = 0.00 < 0.05)
                        Change within noise threshold.
Found 4 outliers among 100 measurements (4.00%)
  2 (2.00%) high mild
  2 (2.00%) high severe
1 Like

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.