<algorithm> is a very good header, but most of the containers are substandard at best. `std::unordered_map` is required to be node-based, `std::map` can't be a B-tree, MSVC's implementation of `std::deque` is infamously a glorified `std::list`, and so on.
Pretty much everything else (e.g. iostreams) is horrible.
Don't forget <tuple> not being mandated to be trivial when it could be (and in fact never being trivial with GCC's libstdc++), std::print's performance issues (see P3107R5, P3235R3), etc.
Heck, even std::atomic was designed with only x64 in mind (it clearly shows), and is unusable outside it. Until P3330R0 is approved, one is incentivized to write their own "atomic" class on LL/SC-based ISAs like AArch32 and AArch64.
The idiomatic way to do RMW with std::atomic (outside simple stuff like fetch-increment) is a CAS loop, which maps 1:1 to x64 assembly; and since fetch_update isn't provided, it's also the only way to do it. That's way too close for comfort. See [1] for a comparison.
> Total hyperbole. It's perfectly usable on ARM and other platforms.
It's not hyperbole. std::atomic is portable, but that's all it is.
std::atomic is about 30% to 40% (with outlined atomics on, which is the default) slower than handrolled asm (or custom reimplementations that provide fetch_update -- same thing). See [2] for a benchmark.
Yes, the design of std::atomic probably favors x64 in certain areas. However, you initially claimed that std::atomic has been designed with only x64 in mind. This is simply not true, which is easily proven by the fact that they explicitly support weak memory models.
> std::atomic is about 30% to 40% (with outlined atomics on, which is the default) slower than handrolled asm
Only for certain CAS operations. A 30% or 40% performance penalty doesn't sound too dramatic and certainly makes it "usable" in my book.
I appreciate your insight, but it could have been delivered with less hyperbole.
Sure, but memory ordering is orthogonal to LL/SC vs CAS.
To me, fetch_update not being present from std::atomic's inception is a major design oversight, as CAS can be emulated via LL/SC but not the other way round.
Furthermore, fetch_update code is easier to read and less awkward to write than CAS loops (which are currently the only option std::atomic offers, and that's what I'm complaining about).
> Only for certain CAS operations. A 30% or 40% performance penalty doesn't sound too dramatic and certainly makes it "usable" in my book.
I disagree. Atomic variables (atomic instructions) are usually used to implement synchronization primitives, and are thus often meant to be used in very hot paths. 30% perf drops are actually quite bad, in that regard.
Of course if one is restricting themselves to using only member methods (fetch_add, fetch_or, etc.), then all is fine because these methods are optimized.
All in all, C++'s stdlib (the parts that aren't just __builtin wrappers, to be precise) is actually quite fine for most use-cases, like PC applications. Indeed, it is when one has latency constraints and/or severe memory constraints (e.g. < 512 KiB) that the stdlib feels like a hindrance.
> Sure, but memory ordering is orthogonal to LL/SC vs CAS.
Sure, but your original claim was that std::atomic has been designed with only x64 in mind. That's what I meant to argue against.
I agree that the omission of something like fetch_update() has been an oversight and I hope that it will make it into the C++ standard!
As a side note, here's what the Rust docs say about fetch_update():
> This method is not magic; it is not provided by the hardware. It is implemented in terms of AtomicUsize::compare_exchange_weak, and suffers from the same drawbacks.
Looks like their (Rust's) main motivation was readability, whereas P3330R0 has that plus performance on non-CAS hardware in mind. In any case, Rust's function could be optimized in the future, if they decide on it.
Neither can be compared cleanly because the languages make such different choices, but it's certainly true that C++ has too many garbage "never use this" container types in the standard library. Aria has a brief aside in her "Entirely Too Many Linked Lists" tutorial about why Rust has so few containers in the stdlib (she weeded out all the niche container types before Rust 1.0), and it sincerely feels as though 1995-era C++ could have done with an Aria.
> probably one of the best algorithms and containers libraries in any language
Mostly agree about the algorithms. Another good thing in C++ is the low-level mathematical part of the standard library in <cmath> and <complex>.
Containers are OK, but neither usability nor performance is great, IMO. Node-based containers like the red-black trees and hash maps come with a contract that pointers to elements stay stable; by default that means one malloc() per element, which is slow.
However, there are large areas where the C++ standard library is lacking.
File I/O is questionable to say the least. Networking support is missing. Unicode support is missing: <codecvt> was deprecated in C++17 because it no longer implemented the current Unicode standard, and instead of fixing the library they dropped the support. Date and calendar support only arrived in C++20. There's no built-in way to represent monetary amounts; e.g. C#'s standard library has a fixed-size 16-byte decimal floating-point type, and Java's has arbitrary-precision integers and decimals.
Unicode is a moving target and not something that can be supported in a standard library that cares about long-term backwards compatibility. Every language that has added native Unicode support has suffered for it.
Fixed point types would be nice but can be implemented on your own and integers representing sufficiently small denominations (cents or fractions thereof) work in a pinch to deal with monetary amounts. And for the interface between libraries you will need to deal with things like currencies anyway and that goes well past the scope of a standard library.
Networking is also not something that is all that stable on the OS level beyond the basic sockets API, and you can just use that from C++ if you want to. There is no benefit in cloning the API into the C++ standard.
Same for filesystems - operating systems are different enough here that applications are better off handling the differences directly as can be seen in the unsatisfying attempt to abstract them in std::filesystem.
Pushing every functionality you can think of into the standard library is a mistake IMO. It should be reserved for truly ossified OS interfaces, basic vocabulary types and generic algorithms. Everything else is bloat that will be obsolete anyway sooner rather than later.
> not something that can be supported in a standard library that cares about long-term backwards compatibility
Standard libraries of Java, JavaScript, and C# are counter-examples.
> you can just use that from C++ if you want to
Technically, C++ standard could feature an abstract interface for a stream of bytes. Would be useful not only for TCP sockets, also for files and pipes.
BTW, I've been programming C++ for a living for decades now, and I have never seen production code use the <iostream> header. Instead, the C++ developers I worked with used either <cstdio> or OS-specific APIs.
> applications are better off handling the differences directly
Many other languages have managed to design high-level platform agnostic abstractions over these things, and implemented them in the standard libraries.
> reserved for truly ossified OS interfaces
By now this applies to files, file systems, pipes, and TCP sockets. While there are some features that are hard to abstract away (examples include controlling file permissions, and probably asynchronous I/O), many real-world applications don't need them. They just need the basics: read and write bytes from binary streams, concatenate paths, create/open/delete files, listen on and accept TCP sockets.
> Standard libraries of Java, JavaScript, and C# are counter-examples.
You mean because of the 16-bit character type baked into the language even though Unicode has moved past that?
> Technically, C++ standard could feature an abstract interface for a stream of bytes.
That's what iostreams are. Turns out such abstractions come with overhead and other limitations and you need to use OS-specific APIs for even slightly advanced features anyway.
> Many other languages have managed to design high-level platform agnostic abstractions over these things, and implemented them in the standard libraries.
And these lowest-common-denominator "abstractions" result in developers making software that doesn't work like users of the OS expect.
> By now this applies to files, file systems, pipes, and TCP sockets.
Not at all. Async interfaces are all the rage these days. Meanwhile browsers have moved from TCP to QUIC (which is much more than a stream of bytes so would need a completely different abstraction) and it's not unlikely that other applications will want to move to it too. You can make a basic bitch abstraction for these but if everyone that cares about performance needs to fall back to OS-specific interfaces then that doesn't help that much.
That type is not for characters anymore; all these languages treat these values as UTF-16 code units. Unlike C++, their standard libraries provide functions to convert strings between UTF-8 and UTF-16, apply Unicode normalization, decode strings into a sequence of code points, etc.
> That's what iostreams are
iostreams tried to implement binary streams, text streams, and object formatting with the same API. Predictably, it failed at all three tasks; the requirements are too different. A minimalistic C++ API for a stream of bytes might look something like this:
> result in developers making software that doesn't work like users of the OS expect
Can you elaborate? When I write `string content = File.ReadAllText( path );` in C#, the standard library does exactly what some C++ library could do: open the file for read access, read it to the end, and, because C# strings are UTF-16, convert the bytes from UTF-8 to UTF-16.
> Async interfaces are all the rage these days. Meanwhile browsers have moved from TCP to QUIC
I think both points are exotic stuff; not sure I would want to see them in the C++ standard library. It's also going to be hard to design useful platform-agnostic abstractions for them. OTOH, Unicode strings, file systems, and streams of bytes are literally everywhere in software.
The problem with C++ is not _just_ a bad standard library, but it's also that. There's a lot of "don't use this", including iostreams, std::regex, std::unordered_map, std::variant, and more. Not to mention vestigial parts of failed designs, like std::initializer_list.
Every C++ project worth its salt includes additional dependencies to patch it up, and it looks like that will remain the case in perpetuity, because these problems are unfixable in the holy name of ABI stability.
Don't get me wrong, ABI stability is a worthy goal. The committee should just have realized that the current approach to it is untenable. The result is a very half-baked situation where ABI stability is not technically guaranteed, but nothing can be fixed because of it.
What a mess.
Rust takes a much, much more cautious approach (because of C++'s history), including explicitly not supporting Rust-native ABI stability and flat out discouraging dynamic linking. Also not very great, but it's sensible as long as there are no clearly superior solutions.
> The problem with C++ is not _just_ a bad standard library, but it's also that. There's a lot of "don't use this", including iostreams, std::regex, std::unordered_map, std::variant, and more. Not to mention vestigial parts of failed designs, like std::initializer_list.
Those are the result of a constant stream of people complaining that the C++ standard library is bad because it doesn't contain their pet feature.
Needing additional dependencies beyond the standard library is not a problem but how things should work, because requirements differ and one person's useful dependency is another person's vestigial bloat.
You’re not wrong in principle, but I think it is absolutely fair to expect a standard library to include a good hash map implementation. These aren’t unreasonable demands.
The problem here isn’t that it’s bloated (I don’t particularly think it is), but that the things it provides are often very far from best in class.
The stdlib's unordered_map could certainly have been designed better, but there have also been significant developments in hash map implementations since it was added, so whatever they had specified would be obsolete by now. Meanwhile, adding your favorite hash map to a C++ project doesn't take long at all. The only issue is if you want to pass a hash map between different libraries; there a standard type would be useful. But that also requires a stable implementation that doesn't change, which gets you the same ossification of sub-par implementations that you already have in the stdlib.
The larger point here is that the ossification is a direct result of the combination of two things: the way that monomorphization works, and wanting to maintain ABI stability in perpetuity.
I don't know how to solve it, but it's clear that C++'s general utility is severely hampered by this ABI stability crutch.
Every time I have to work with text in C++ using the STL, it feels like pulling teeth. It has gotten better over the years, but even PHP gives one stdlib envy.
The idea of basing everything on iterators instead of ranges blows up the code (and the number of errors); it would have made sense for pure algorithms, but the STL insists on having ownership of the objects placed in its containers.
No sane way to work with binary data. Inconsistent allocation and exception ideology over the years. Mysteriously missing functionality.
Of what little Ruby, Python, and PHP I have written, nothing has felt as clunky as the STL.
E.g. vector has no "sort" member function. What great benefit has not adding that convenience brought?