

That bug does sound bad, but it is not clear to me how a BTRFS-specific bug relates to it supposedly being more difficult to recover (or back up) when using whole-disk encryption with LUKS. It seems like an entirely orthogonal issue to me



What makes recovery and backup a nightmare to you?
I’ve been running full-disk encryption for many years at this point, and recovery in case of problems with the kernel, bootloader, or anything else that renders my system inoperable is the same as before I started using full-disk encryption:
I boot up a live CD and then fix the problem. The only added step is unlocking my encrypted drive(s), but these days that typically just involves clicking on the drive in the file manager and then entering my password. I don’t even have to drop into a console for that.
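And even when I do need a console, it only takes a couple of commands. Something like this, with the device and mapper names adjusted for your own setup:

```sh
# From the live environment: unlock the LUKS container, then mount it as usual.
# /dev/nvme0n1p2 and "cryptroot" are placeholders; substitute your own names.
sudo cryptsetup open /dev/nvme0n1p2 cryptroot   # prompts for the LUKS passphrase
sudo mount /dev/mapper/cryptroot /mnt           # then chroot/repair as you normally would
```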
I am also not sure why backups would be any different. Are you using something that images entire devices?
Astral clearly are using semantic versioning, as should be obvious if you read the spec you linked.
In fact, one of the examples listed in that spec is 1.0.0-alpha.1.
ETA: It should also be noted that ty is a Rust project, and follows the standards for versioning in that language:
https://doc.rust-lang.org/cargo/reference/manifest.html#the-version-field
That’s not quite true: Yes, your $99 license is a lifetime license, but it only includes 3 years’ worth of updates. After that you have to pay $80 if you want another 3 years’ worth of updates. Of course, the alternative is just putting up with the occasional nag, which is why I still haven’t gotten around to renewing my license
I’ve started converting my ‘master’ branches to ‘main’, because my muscle memory has decided that ‘main’ is the standard name. And I don’t have strong feelings either way
No gods, no masters


It’s unfortunate that it has come to this, since BCacheFS seems like a promising filesystem, but it is also wholly unsurprising: Kent Overstreet seemingly has a knack for driving away people who try to work with him


> For example, the dd problem that prompted all this noise is that uutils was enforcing the full block parameter in slow pipe writes while GNU was not.
> So, now uutils matches GNU and the “bug” is gone.
No, the issue was a genuine bug:
The fullblock option is an input flag (iflag=fullblock) that ensures dd will always read a full block’s worth of data before writing it. Its absence means that dd only performs count reads and hence might read less than blocksize × count worth of data. That is according to the documentation for every other implementation I could find (uutils currently lacks documentation), and nothing suggests that, without fullblock, dd may simply discard the data it did manage to read.
Until recently it was also an extension to the POSIX standard, with none of the tools that I am aware of behaving like uutils, but as of the POSIX.1-2024 standard the option is described as follows (source):
> iflags=fullblock
> Perform as many reads as required to reach the full input block size or end of file, rather than acting on partial reads. If this operand is in effect, then the count= operand refers to the number of full input blocks rather than reads. The behavior is unspecified if iflags=fullblock is requested alongside the sync, block, or unblock conversions.
I also cannot conceive of a situation in which you would want a program like dd to silently drop data in the middle of a stream, certainly not as the default behavior, so conditioning writes on this flag didn’t make any sense in the first place
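The difference is easy to demonstrate with a pipe that delivers its data in more than one chunk (GNU dd shown here):

```sh
# Without fullblock: the first, short read from the pipe counts as the one requested
# block, so everything that arrives after it is silently dropped.
(printf 'first\n'; sleep 1; printf 'second\n') | dd bs=1M count=1 status=none
# -> first

# With fullblock: dd keeps reading until it has a full block or hits EOF,
# so both lines make it through.
(printf 'first\n'; sleep 1; printf 'second\n') | dd bs=1M count=1 iflag=fullblock status=none
# -> first
#    second
```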


This is interesting, but drawing conclusions from only two measurements is not reasonable, especially when the time span measured is on the order of a few ms. For example, the two instances of clang might not be running at the same clock frequency, which could easily explain away the observed difference.
Plus, you could easily generate a very large number of functions to increase the amount of work the compiler has to do. So I did just that (N = 10,000), using the function from the article, and used hyperfine to perform the actual benchmarking.
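The setup boils down to something like this (the function body below is a stand-in, not the one from the article):

```sh
# Sketch: generate test.cpp with 10,000 trivial functions per variant and benchmark
# each variant with hyperfine. The function body here is only a placeholder.
for ret in int auto; do
  : > test.cpp
  for i in $(seq 1 10000); do
    printf '%s f_%d(int a, int b) { return a + b; }\n' "$ret" "$i" >> test.cpp
  done
  echo "=== $ret ==="
  hyperfine --runs 10 'clang -o /dev/null test.cpp -c'
done
```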
int:

    Benchmark 1: clang -o /dev/null test.cpp -c
      Time (mean ± σ):     1.243 s ±  0.018 s    [User: 1.192 s, System: 0.050 s]
      Range (min … max):   1.221 s …  1.284 s    10 runs

auto:

    Benchmark 1: clang -o /dev/null test.cpp -c
      Time (mean ± σ):     1.291 s ±  0.015 s    [User: 1.238 s, System: 0.051 s]
      Range (min … max):   1.274 s …  1.320 s    10 runs
So if you have a file with 10,000 simple functions, using auto increases your compile time by ~4%.
I’d worry more about the readability of auto than about the compile-time cost at that point


Besides this change not breaking user space, the “don’t break user space” rule has never meant that the kernel cannot drop support for file systems, devices, or even entire architectures


What you are describing is something I would label “skepticism of science”, rather than “scientific skepticism”.
So out of curiosity, I did a bit of digging. As andioop mentioned, the term “scientific skepticism” has been used to denote a scientifically minded skepticism for a long time. For example, the Wikipedia article on Scientific Skepticism dates back to 2004 and uses this meaning. Similarly, the well-known skeptic (pro-science/anti-pseudoscience) wiki, RationalWiki, has linked the scientific method and “scientific skepticism” as far back as 2011, and currently straight up equates skepticism with scientific skepticism. You can also find famous skeptics like Michael Shermer using the term back in the early 2000s, in his case in ‘The Skeptic Encyclopedia of Pseudoscience’, published in 2002. It was also used in papers such as this sociology paper by Owen-Smith, 2001. This is the meaning of the term that I am familiar with.
However, since about 2020, there has been more use of the term “scientific skepticism” as a parallel to “climate skepticism” and “vaccine skepticism”. For example, this paper by Ponce de Leon et al. is just one of many I could find via a quick Google Scholar search. This, I take it, is how you use the term.
Personally, I’m probably just gonna keep using “scientific skepticism” to mean “scientifically minded skepticism”, but will keep in mind that it can also mean “skepticism of science”


Wouldn’t scientists be the ones employing “scientific skepticism”?


The issues are listed in Supplementary Table S141 (p. 75 in the SI; 10 issues) and in https://github.com/kobihackenburg/scaling-conversational-AI/blob/main/issue_stances.csv (697 issues)


Thanks! And yeah, with Markdown you need an empty line for it to actually add a paragraph break.
Though I just learned that you can also end a line with two trailing spaces or a backslash (\) to get a line break


Please consider adding paragraph breaks to your posts; a wall of text like this is not pleasant to read


> Like, one of the issues that Linus yelled at Kent about was that bcachefs would fail on big endian machines. You could spend your limited time and energy setting up an emulator of the powerPC architecture, or you could buy it at pretty absurd prices — I checked ebay, and it was $2000 for 8 GB of ram…
It’s not that BCacheFS would fail on big endian machines, it’s that it would fail to even compile, which impacted everyone who had it enabled in their build. And you don’t need actual big endian hardware to compile something for that arch: Just now it took me a few minutes to figure out what tools to install for cross-compilation, download the latest kernel, and compile it for a big endian arch with BCacheFS enabled. Surely a more talented developer than I could easily do the same, and save everyone else the trouble of broken builds.
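Roughly speaking, it boils down to something like this (Debian/Ubuntu package name shown, plus the usual kernel build dependencies; details will vary by distro):

```sh
# Install a big-endian PowerPC cross toolchain (Debian/Ubuntu package name).
sudo apt install gcc-powerpc64-linux-gnu

# In a kernel source tree: start from a 64-bit PowerPC config, enable bcachefs,
# and let kconfig resolve any remaining dependencies.
make ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- ppc64_defconfig
./scripts/config --enable BCACHEFS_FS
make ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- olddefconfig

# Build just the bcachefs directory; a compile error like the big-endian one
# shows up here without ever touching real hardware.
make ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- -j"$(nproc)" fs/bcachefs/
```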
ETA: And as pointed out in the email thread, Overstreet had bypassed the linux-next tree, which would have allowed other people to test his code before it got pulled into mainline. So he had multiple options that did not necessitate the purchase of expensive hardware
> One option is to drop standards. The Asahi developers were allowed to just merge code without being subjected to the scrutiny that Overstreet has been subjected to. This was in part due to having stuff in rust, and under the rust subsystem — they had a lot more control over the parts of Linux they could merge too. The other was being specific to macbooks. No point testing the mac book-specific patches on non-mac CPU’s.
It does not sound to me like standards were dropped for Asahi, nor that their use of Rust had any influence on the standards that were applied to them. It is simply as you said: What’s the point of testing code on architectures that it explicitly does not and cannot support? As long as changes that touch generic code are tested, there is no problem, but those are probably the minority of changes introduced by the Asahi developers


I did enjoy this comment:
> C code with a test suite that is run through valgrind is more trustworthy than any Rust app written by some confused n00b who thinks that writing it in Rust was actually a competitive advantage. The C tooling for profiling and checking for memory errors is the best in the business, nothing else like it.
In other words, a small subset of C code is more trustworthy than Rust code written by “some confused n00b”. Which I would argue is quite the feather in Rust’s cap


IMO, variables being const/immutable by default is just good practice codified in the language and says nothing about Rust being “functional-first”:
Most variables are only written once and then read one or more times, especially once you remove the need for manually updated loop counters. Because of that, code ends up less noisy and more readable when you only need to mark the subset of variables that are going to be updated later, rather than the inverse. Moreover, when variables are immutable by default, you cannot forget to mark them appropriately, unlike when they are mutable by default
I’m surprised that you didn’t mention Zig. It seems to me to be much more popular than either C3 or D’s “better C” mode.
I’d be curious to see examples of people being accused of spreading “FUD” for asking why Rust is const by default. I wasn’t able to find any such examples myself, but I did find threads like this one and this one, both of which were quite amiable.
But I also don’t see why it would be an issue to bring up Rust’s functional-programming roots, though as you say the language did change quite a lot during its early development, before the 1.0 release. IIRC, the first compiler was even implemented in OCaml. The language’s Wikipedia page goes into more detail, for anyone interested. Or you could read this thread in /r/rust, where a bunch of Rust users try to bury that sordid history by bringing it to light
From what I’ve seen, most unsafe Rust code doesn’t look much different from safe Rust code. See for example the Vec implementation, which contains a bunch of unsafe blocks. Which makes sense, since unsafe only adds a few extra capabilities on top of safe Rust. You can end up with gnarly code of course, but that’s true of any non-trivial language. Your code could also get ugly if you try to be extremely granular with unsafe blocks, but that’s more of a style issue, and poor style can make code in any language look ugly.

At this point it feels like an overwhelming majority of the toxicity comes from non-serious critics of Rust. Case in point, many of the posts in this thread