Back in my day, we only had ones and zeros, and sometimes we ran out of ones!!! (From old song, https://youtu.be/p1fBd7UbQPA?t=60 )
Can anyone explain why Wayland exists or who cares about it? X has been around forever, it sucks but it works and everything supports it. Alternatives like NeWS came around that were radically better, but were too soon or relied too much on corporate support, so they faded. The GNU project originally intended to write its own thing, but settled for using X. Now there’s Wayland though, which seems like a slight improvement over X, but mostly kind of a lateral move.
If you’re going to replace X, why not do something a lot better? If not actual NeWS, then something that incorporates some of its ideas. I think Squeak was like that but I don’t know much about it.
Height in centimeters? I don’t entirely get this.
Org mode has a time tracking feature, dunno about report generation.
Off the top of my head, maybe these can get you started:
Hackers, by Steven Levy
The Hacker Ethic, by Pekka Himanen
True Names, by Vernor Vinge
Free Culture, by Lawrence Lessig
A Fire Upon The Deep (SF novel), by Vernor Vinge
There is a famous Erik Naggum rant about XML at, no wait, I better not link it but you can find it with a search engine if you want, which means you don’t get to complain to me about it since you are the one who went looking for it. Very NSFW and VERY politically incorrect. Naggum died in 2009 but anyone who published a thing like that today would be raked over the coals.
Forth is fun but not really suitable for large, long-lasting projects with huge developer communities. Linux isn’t being bootstrapped, it’s already here, has been around for decades, and it’s huge. And I think bootstrapping-by-poking-around on a new architecture has stopped being important. Today, you have compilers and OSes targeting the new architecture under simulation long before there is any hardware, with excellent debugging tools available in the simulator.
I don’t think Ada in the kernel would get any cultural acceptance. Rust has been hard enough. C++ was vehemently rejected decades ago though the reasons made some sense at the time. Adopting C++ today would be pretty crazy. I don’t see much alternative to Rust (or in a different world, Ada) in the monolithic kernel. But Rust seems like it’s still in beta test, and the kernel architecture itself seems like a legacy beast. Do you know of anything else? I can’t take D or Eiffel or anything like that seriously. And part of it is the crappiness of the hardware companies. Maybe it will have to be left to future generations.
I have played with Ada but not done anything “real” with it. I think I’d be ok with using it. It seems better than C in most regards. I haven’t really looked into Rust but from what I can gather, its main innovation is the borrow checker, and Ada might get something like that too (influenced by Rust).
I don’t understand why Linux is so huge and complicated anyway. At least on servers, most Linux kernels are running under hypervisors that abstract away the hardware. So what else is going on in there? Linux is at least 10x as much code as BSD kernels from back in the day (idk about now). It might be feasible to write a usable Posix kernel as a hypervisor guest in a garbage collected language. But, I haven’t looked into this very much.
Here’s an ok overview of Ada: http://cowlark.com/2014-04-27-ada/index.html
It seems more important to ensure Larry Ellison’s good behaviour than Joe Schmoe’s. Ellison is able to be far more destructive. Maybe some surveillance at Oracle HQ could help.
OOP was a 1990s thing that is still around but don’t worry about it too much at first.
The classic intro book is Structure and Interpretation of Computer Programs aka SICP. You can find it online with a web search. It will give you a good grounding in fundamentals. Then you can figure out what to pursue next.
C, C-like, or Rust
As always, Ada gets no respect.
I’ll look at the wiki article again but I can pretty much promise that Ada doesn’t have dependent types. They are very much a bleeding edge language feature (Haskell will get them soon, so I will try using them then) and Ada is quite an old fashioned language, derived from Pascal. SPARK is basically an extra-safe subset of Ada with various features disabled, that is also designed to work with some verification tools to prove properties of programs. My understanding is that the proof methods don’t involve dependent types, but maybe in some sense they do.
Dependent types let types mention values, which means the type checker has to evaluate programs at compile time; that’s how you can have a type like “prime number” and prove number-theoretic properties of functions that operate on them. Apparently something similar is unintentionally possible with C++ template metaprogramming (templates are accidentally Turing-complete), so C++ is listed in the article, but actually trying to use C++ that way is totally insane and impractical.
I remember looking at the wiki article on dependent types a few years ago and finding it pretty bad. I’ve been wanting to read “The Little Typer” (thelittletyper.com) which is supposed to be a good intro. I’ve also played with Agda a little bit, but not used it for real.
Dependent types only make sense in the context of static typing, i.e. compile time. In a dependently typed language, if you have a term with type {1,2,3,4,5,6,7} and the program typechecks at compile time, you are guaranteed that there is no execution path through which that term takes on a value outside that set. You may need to supply a complicated proof to help the compiler.
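To make that compile-time guarantee concrete, here’s a sketch in Lean 4 (not a language from this thread, just a convenient dependently typed system): `Fin 7` is the type of naturals strictly below 7, and constructing a value requires a proof that it’s in range, so an out-of-range value is rejected before the program ever runs.

```lean
-- Fin 7 is the type of natural numbers strictly less than 7.
-- Constructing a value requires a proof that it is in range.
def day : Fin 7 := ⟨3, by omega⟩   -- accepted: `omega` proves 3 < 7

-- def bad : Fin 7 := ⟨9, by omega⟩
-- rejected at compile time: 9 < 7 is unprovable, so this never typechecks.
```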
In Ada you can define an integer type of range 1…7 and it is no big deal. There is no static guarantee like dependent types would give you. Instead, the runtime throws an exception if an out-of-range number gets sent there. It’s simply a matter of the compiler generating extra code to do these checks.
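For contrast, the runtime-check approach can be sketched in Rust with a hypothetical `DayOfWeek` newtype (not from any real library); the range check is just ordinary generated code, much like what an Ada compiler emits for a range 1..7 type.

```rust
// Sketch of an Ada-style range type 1..=7 as a Rust newtype.
// The check happens at runtime, in the constructor.
#[derive(Debug, Clone, Copy, PartialEq)]
struct DayOfWeek(u8);

impl DayOfWeek {
    fn new(n: u8) -> Result<Self, String> {
        if (1..=7).contains(&n) {
            Ok(DayOfWeek(n))
        } else {
            Err(format!("{n} is out of range 1..=7"))
        }
    }
}

fn main() {
    assert!(DayOfWeek::new(3).is_ok());
    assert!(DayOfWeek::new(9).is_err()); // caught at runtime, not compile time
    println!("ok");
}
```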
There is a separate Ada-related tool called SPARK that can let you statically guarantee that the value stays in range. The verification method doesn’t involve dependent types and you’d use the tool somewhat differently, but the end result is similar.
Look on Starlink.com. I don’t expect it’s much worse than your typical evil ISP or phone carrier in terms of privacy. Certainly you could route everything through a VPN and that might help a little.
Edit: oh wait, I confused this thread with a different one when I looked at my inbox. Starlink is a high speed service with a roof antenna. For satellite phone stuff, look at skylo.tech.
I’d either get an older model for cheap, or get a 9 because of the satellite capability. I wonder if GrapheneOS supports the latter, and for that matter whether it supports the 9 at all yet.
In Ada? No dependent types, you just declare how to handle overflow, like declaring int16 vs int32 or similar. Dependent types means something entirely different and they are checked at compile time. SPARK uses something more like Hoare logic. Regular Ada uses runtime checks.
That’s no moon. It’s a space station.
In Ada, the overflow behaviour is determined by the type signature. You can also sometimes use SPARK to statically guarantee the absence of overflow in a program. In Rust, as I understand it, you can control the overflow behaviour of a particular arithmetic operation by wrapping a function or macro call around it, but that is ugly and too easy to omit.
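The Rust per-operation style mentioned above looks roughly like this, using the standard library’s methods on the integer types:

```rust
fn main() {
    let x: u8 = 250;

    // checked_add returns None on overflow instead of wrapping.
    assert_eq!(x.checked_add(10), None);
    assert_eq!(x.checked_add(5), Some(255));

    // wrapping_add silently wraps around (mod 256 for u8),
    // for deliberate modular arithmetic.
    assert_eq!(x.wrapping_add(10), 4);

    // saturating_add clamps at the type's bounds.
    assert_eq!(x.saturating_add(10), 255);

    // A bare `x + 10` panics in debug builds but wraps in release builds
    // (unless overflow-checks is enabled), which is the "too easy to omit" hazard.
    println!("ok");
}
```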
For ordinary integers, an arithmetic overflow is similar to an OOB array reference and should be trapped, though you might sometimes choose to disable the trap for better performance, similar to how you might disable an array subscript OOB check. Wraparound for ordinary integers is simply incorrect. You might want it for modular arithmetic and that is fine, but in Ada you get that by specifying it in the type declaration. Also in Ada, you can specify the min and max bounds, or the modulus in the case of modular arithmetic. For example, you could have a “day of week as integer” ranging from 1 to 7, that traps on overflow.
GNAT imho made an error of judgment by disabling the overflow check by default, but at least you can turn it back on.
The RISC-V architecture designers made a harder to fix error by making everything wraparound, with no flags or traps to catch unintentional overflow, so you have to generate extra code for every arithmetic op.
Don’t forget Algol-60. Per Tony Hoare, “Here is a language so far ahead of its time that it was not only an improvement on its predecessors but also on nearly all its successors.”