For example, Alexander Stepanov (the designer and main implementor of the original C++ STL) writes in Elements of Programming Section 3.5 (Parametrizing Algorithms):
Using an operator symbol or a procedure name with the same semantics on different types is called overloading, and we say that the operator symbol or procedure name is overloaded on the type. For example, + is used on natural numbers, integers, rationals, polynomials, and matrices. In mathematics + is always used for an associative and commutative operation, so using + for string concatenation would be inconsistent.
In From Mathematics to Generic Programming he writes:
For many centuries, the symbol “+” has been used, by convention, to mean a commutative operation as well as an associative one. Many programming languages (e.g., C++, Java, Python) use + for string concatenation, a noncommutative operation. This violates standard mathematical practice, which is a bad idea. The mathematical convention is as follows:
• If a set has one binary operation and it is both associative and commutative, call it +.
• If a set has one binary operation and it is associative and not commutative, call it *.
20th-century logician Stephen Kleene introduced the notation
ab
to denote string concatenation (since in mathematics * is usually elided).
IIRC Herb Sutter and Bjarne Stroustrup have made similar complaints. There is a 2017 Stroustrup paper on how to use C++ concepts effectively in which he makes similar remarks to Alex's.
Unless they want to write generic algorithms where they expect commutativity, in which case they require the Add trait (because it conveys commutativity), and users of that generic code pass it a string and get garbage out.
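A minimal Rust sketch of that failure mode (the `Concat` newtype is made up for illustration, since `String` itself only implements `Add<&str>`): a generic helper that silently assumes `+` is commutative and reorders operands works fine for numbers but garbles strings.

```rust
use std::ops::Add;

// Newtype standing in for "a string that implements Add" (hypothetical).
#[derive(Clone, PartialEq, Debug)]
struct Concat(String);

impl Add for Concat {
    type Output = Concat;
    fn add(self, rhs: Concat) -> Concat {
        Concat(self.0 + &rhs.0)
    }
}

// Generic code that assumes + is commutative: it folds from the RIGHT,
// so elements are combined in reverse order. Harmless for numbers.
fn fold_reversed<T: Add<Output = T> + Clone>(items: &[T], init: T) -> T {
    items.iter().rev().fold(init, |acc, x| acc + x.clone())
}

fn main() {
    // Correct for integers, where + really is commutative:
    assert_eq!(fold_reversed(&[1, 2, 3], 0), 6);

    // The same generic code hands back reversed "garbage" for concatenation:
    let words = [Concat("a".into()), Concat("b".into()), Concat("c".into())];
    let joined = fold_reversed(&words, Concat(String::new()));
    assert_eq!(joined, Concat("cba".into())); // caller likely wanted "abc"
}
```

The bug is not in `fold_reversed`; it did exactly what its bound advertised. The bug is that the bound (`Add`) was taken to promise commutativity that concatenation doesn't have.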
Sure, you can use all of the algorithms in the bitwise crate on all of the integer types to manipulate their bits.
If I want to manipulate the bits of an integer as raw bits, I should not need to do any conversions for that. E.g. if I want to do a parallel bit deposit on a signed integer, that should just work. I do not want a type error saying I have to convert it to an unsigned integer, nor do I want garbage out due to the wrong kind of shift being invoked.
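For concreteness, here is a portable software sketch of parallel bit deposit (x86 PDEP) written against `u32`; `pdep32` is a hypothetical helper, not the `bitwise` crate's actual API. The point is the last two lines: calling it on an `i32` forces `as u32` / `as i32` cast noise even though only the raw bits matter.

```rust
/// Software emulation of parallel bit deposit (PDEP): scatter the low
/// bits of `value` into the positions of the set bits of `mask`.
fn pdep32(value: u32, mask: u32) -> u32 {
    let mut result = 0;
    let mut m = mask;
    let mut bit = 1u32;
    while m != 0 {
        let lowest = m & m.wrapping_neg(); // isolate lowest set bit of mask
        if value & bit != 0 {
            result |= lowest;
        }
        m &= m - 1; // clear that mask bit
        bit <<= 1;
    }
    result
}

fn main() {
    // Unsigned: just works.
    assert_eq!(pdep32(0b101, 0b1111_0000), 0b0101_0000);

    // Signed: same bits, but the type system demands a cast round-trip.
    let x: i32 = -1;
    let deposited = pdep32(x as u32, 0b1111) as i32;
    assert_eq!(deposited, 0b1111);
}
```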
Why is generic code that always works correctly and efficiently an anti-pattern?
If you find yourself wanting to do logical shifts on a signed int or arithmetic shifts on an unsigned int, it's a sign that you should rethink the design of your code.
Bits are just bits; logical shifts and arithmetic shifts are two different bit-manipulation algorithms. Which one you use depends on how you want to manipulate the bits. If you want to do integer arithmetic at the bit level, what you say might make sense, but arguably then the issue is that you are doing arithmetic at the wrong level of abstraction, e.g., you should be using a power-of-two function instead of shifting bits willy-nilly, to better express your intent.
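A concrete Rust illustration of both points (this is just stock language behavior, no external crates): `>>` on a signed integer is an arithmetic shift and on an unsigned one a logical shift, so getting "the other" shift on the same bits means a cast round-trip; and where the intent really is arithmetic, a power-of-two expression states that intent directly.

```rust
fn main() {
    let x: i32 = -8; // bit pattern 0xFFFF_FFF8

    // Arithmetic shift: >> on a signed type sign-extends.
    assert_eq!(x >> 1, -4);

    // Logical (zero-filling) shift of the same bits requires casting today.
    assert_eq!(((x as u32) >> 1) as i32, 0x7FFF_FFFC);

    // Arithmetic intent stated as arithmetic: a power-of-two function
    // rather than a shift of raw bits.
    let n = 5;
    assert_eq!(2u32.pow(n), 1u32 << n); // both are 32
}
```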