Apple may not design for repairability, but what you are saying is not true. I have personally purchased and installed genuine replacement displays on MacBooks with no involvement from Apple.
Apple publishes repair guides for this (e.g., https://support.apple.com/en-us/120768) as does iFixit. Genuine parts are available for purchase and tools are available to rent by individuals (see https://support.apple.com/self-service-repair, which specifically mentions display replacement). Skill and patience are required; replacement by Apple is not.
>Apple may not design for repairability, but what you are saying is not true. I have personally purchased and installed genuine replacement displays on MacBooks with no involvement from Apple.
Which year? It used to be like that, but not anymore.
It is public knowledge that Apple locks its hardware via firmware pairing; repairs must be performed by authorised technicians only.
You can check YouTube: there's that guy in the USA who champions the "right to repair" movement, etc.
The etymology and physical metaphor of "The Singularity" are a bit confused here, and I think it muddles the overall point.
> the singularity is a term borrowed from physics to describe a cataclysmic threshold in a black hole
In his article which popularized the idea of The Singularity, Vinge quotes Ulam paraphrasing von Neumann, and states, "Von Neumann even uses the term singularity". As von Neumann surely knew, "singularity" was a term widely used in mathematics well before the idea of black holes (etymonline dates first use to 1893). Vinge does not say anything about black holes.
> an object is pulled into the center [of] gravity of a black hole [until] it passes a point beyond which nothing about it, including information, can escape. [...] This disruption on the way to infinity is called a singular event – a singularity.
The point at which "nothing" can escape a black hole is the event horizon, not the singularity. What exactly happens to information and what exactly happens when crossing the event horizon are subjects of debate (see "black hole information paradox" and "AMPS/firewall paradox"); however, it's probably fair to say that the most orthodox/consensus views are that information is conserved through black-hole evaporation and that nothing dramatic happens to an observer passing through the event horizon.
> the singularity became a black hole, an impenetrable veil hiding our future from us. Ray Kurzweil, a legendary inventor and computer scientist, seized on this metaphor
While I'm not prepared to go into my personal views in this comment, it's worth noting that the idea that "exponential curves look the same from every point" is not foreign to, e.g., the Kurzweilian view of The Singularity; nevertheless, fitting dramatic, industrial-revolution-sized progress into the fixed scale of a (contemporary) human lifetime would surely be a big deal. This idea, (whether you believe it will happen or not), is obscured by the spurious black hole metaphor.
While others have addressed the programming case for tagged unions, I want to add that, to a logician, tagged unions are the natural construct corresponding to "logical or".
In intuitionistic logic (which is the most basic kind from which to view the Curry-Howard or "propositions-as-types" correspondence), a proof of "A or B" is exactly a choice of "left" or "right" disjunct together with a corresponding proof of either A or B. The "choice tag" is part of the "constructive data" telling us how to build our proof of "A or B". Translated back into the language of code, the type "A | B" would be exactly a tagged union.
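A minimal sketch of this in TypeScript (a discriminated union; the `side` tag is the "constructive data" recording which disjunct was chosen):

```typescript
// "A or B" as a tagged union: a value is a choice of disjunct
// plus evidence (a value) for that disjunct.
type Or<A, B> =
  | { side: "left"; value: A }
  | { side: "right"; value: B };

// A "proof" of "number or string" built from the left disjunct.
const proof: Or<number, string> = { side: "left", value: 42 };
```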
When A and B are disjoint, you don't need the tag, unless for some reason you require that `Maybe<Maybe<T>> != Maybe<T>` hold and don't like the collapsing semantics of untagged unions, where `Maybe<Maybe<T>> == Maybe<T>`.
In practice, the inhabitants of Option/Result types are almost always disjoint, so you spend time wrapping and unwrapping Some/Ok/Err for no added value.
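The collapse being described can be sketched in TypeScript (type names here are illustrative):

```typescript
// Untagged: nesting collapses, so "some(none)" and "none" are the same value.
type UMaybe<T> = T | undefined;
// UMaybe<UMaybe<number>> = number | undefined | undefined = number | undefined

// Tagged: each layer of wrapping is preserved.
type TMaybe<T> = { tag: "some"; value: T } | { tag: "none" };

const inner: TMaybe<number> = { tag: "none" };
const outer: TMaybe<TMaybe<number>> = { tag: "some", value: inner };
// outer ("some(none)") is distinguishable from { tag: "none" },
// which the untagged form cannot express.
```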
I disagree. Something of type "A" should, according to basic propositional logic, also be of type "A or B". That's the case for an untagged union, but not for a tagged union (because of wrapping), which is decidedly illogical.
Well, I have outlined the usual story of logic as it corresponds to programming (as has been accepted for at least some five decades now); it strains credulity to claim that logic is illogical.
Now I do see where you are coming from; under a set-theoretic interpretation with "implies" as "subset", "or" as "union", and "and" as "intersection", the fact that "A implies (A or B)" tells us that an element of the set A is also an element of the set "A union B".
However, this is not the interpretation that leads to a straightforward correspondence between logic and programming. For example, we would like "A and B" to correspond to the type of pairs of elements of A with elements of B, which is not at all the set-theoretic intersection. And while "(A and B) implies A", we do not want to say a value of type "(A, B)" also has type "A". (E.g., if a function expects an "A" and receives an "(A, A)", we are at an impasse.)
So "implies" should not be read programmatically as a subtyping relation; instead, "A implies B" tells us that there is a function taking a value of type A to a value of type B. In the case of "A implies (A or B)", that function takes its input and tags it as belonging to the left disjunct!
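In code, assuming an illustrative `Or` sum type in TypeScript, that tagging function is just:

```typescript
type Or<A, B> = { side: "left"; value: A } | { side: "right"; value: B };

// "A implies (A or B)" computationally: a function that injects its
// argument into the sum, tagging it as the left disjunct.
function inl<A, B>(a: A): Or<A, B> {
  return { side: "left", value: a };
}

const x: Or<number, string> = inl(7);
```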
Another perspective I must mention: given a proof of "A or B" and another of "(A or B) implies C", how can we combine these into a simpler proof of just "C"? A proof of "(A or B) implies C" must contain both a proof of "A implies C" and a proof of "B implies C", and we could insert into one of those proofs a proof of A or a proof of B. But we have to know which one we have! (This is a very short gloss of a much deeper story, where, under Curry-Howard, proof simplification corresponds to computation, and this is another way of describing a function call that does a case-analysis of a tagged union (or "sum type").)
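The case analysis just described can be sketched in TypeScript (illustrative names):

```typescript
type Or<A, B> = { side: "left"; value: A } | { side: "right"; value: B };

// Eliminating "A or B": to get a C you must supply a handler for each
// disjunct, and the tag tells you which piece of evidence you hold.
function orElim<A, B, C>(
  ab: Or<A, B>,
  onLeft: (a: A) => C,   // a proof of "A implies C"
  onRight: (b: B) => C,  // a proof of "B implies C"
): C {
  return ab.side === "left" ? onLeft(ab.value) : onRight(ab.value);
}
```

For example, `orElim<number, string, number>({ side: "left", value: 2 }, n => n + 1, s => s.length)` evaluates the left handler.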
Now "union and intersection" types with the set-theoretic properties you are hinting at have indeed been studied (see, for example, Section 15.7 of Pierce's "Types and Programming Languages"). But they do not replace the much more familiar "sum and product types", and do not play the central role of "or" and "and" in the correspondence of programming to logic.
> However, this is not the interpretation that leads to a straightforward correspondence between logic and programming. For example, we would like "A and B" to correspond to the type of pairs of elements of A with elements of B, which is not at all the set-theoretic intersection. And while "(A and B) implies A", we do not want to say a value of type "(A, B)" also has type "A". (E.g., if a function expects an "A" and receives an "(A, A)", we are at an impasse.)
Intersection types in TypeScript and Scala 3 do work like conventional intersections / conjunctions. Something of type A&B is of type A. For example, a Set&Iterable is a Set. This makes perfect sense and is coherent with how unions work. A&A is then obviously equivalent to A. I'm not sure where you see the problem.
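For concreteness, a small TypeScript sketch of the intersection behaviour described here (interface names are illustrative):

```typescript
// Intersection in TypeScript: one value satisfying both members.
interface Named { name: string }
interface Aged { age: number }

const person: Named & Aged = { name: "Ada", age: 36 };

// A & B is assignable to A (and to B): the subtyping reading of "and".
const justNamed: Named = person;
const justAged: Aged = person;
```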
I suppose I erroneously assumed some familiarity with the correspondence between product types (i.e., types of pairs) and the constructive logical interpretation of "and".
Suffice it to say for now: there is an interpretation of logic that gives a tighter correspondence to programming than the set-theoretic one, under the name "Curry-Howard" or "propositions as types, proofs as programs", and which has been known and cherished by logicians, programming language theorists, and also category theorists for a long time. The logic is constructive as it must be: a program of type A tells us how to build a value of type A, a proof of proposition A tells us how to construct evidence for A. From here we get things like "a proof of A and B is a proof of A together with a proof of B" (the "BHK interpretation"), which connects "and" to product types...
I spoke up because I could not leave untouched the idea that "tagged unions are illogical". On the contrary, tagged unions (aka "disjoint unions", "sum types", "coproducts", etc.) arise directly from an interpretation of logic that is not the set-theoretical one, but is a more fruitful one from which programming language theory begins. You are not wrong that there is also a correspondence between (untagged) union and intersection types and a set-theoretical interpretation of propositional logic, and that union and intersection types can also be used in programming, but you are missing a much bigger and very beautiful picture (which you will find described in most any introductory course or text on PL theory).
But I'm pretty sure that even in intuitionistic logic, "A" implies "A or B". Which is not the case with tagged unions (as I said, because of wrapping).
It's true that in intuitionistic logic "A implies (A or B)"; the usual computational interpretation of that is that "there is a function taking a value of type A and returning a value of type A + B", where + is the tagged union, and, per above, that function is exactly the one which tags its input as belonging to the left disjunct.
I suspect you are still reading "A implies B" as "A is a subtype of B", derived from a set-theoretic interpretation of propositional logic. But the constructive interpretation is that a proof of "A implies B" is a method to take a proof of A and transform it into a proof of B. Computationally, a value of type "A implies B" (typically rewritten "A -> B") is a function that takes values of type A and returns values of type B.
Thanks. I think in the end the question will be: which is better, an algebraic or a set-theoretic type system? Which is more practical to use? Which is more elegant? Should both be mixed?
(One complicating aspect is that there doesn't yet exist a mainstream language with a full set-theoretic type system. TypeScript and Scala 3 currently only support intersections and unions, but no complements, making certain complex types undefinable, e.g. "Int & ~0", the integers without zero.)
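A sketch of the limitation, using TypeScript's built-in `Exclude` helper (which subtracts from finite unions but cannot complement `number`):

```typescript
// Over a finite union, Exclude simulates set subtraction:
type Digit = 0 | 1 | 2 | 3;
type NonZeroDigit = Exclude<Digit, 0>; // 1 | 2 | 3

// But "number & ~0" (all numbers except zero) is not expressible:
// Exclude distributes over union members, and number is not a union
// of its values, so Exclude<number, 0> is still just number.
type Attempt = Exclude<number, 0>;
const stillAllowsZero: Attempt = 0; // type-checks, showing the gap
```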
I agree with your point, but it's worth noting that scientific papers are normally and by default copyrighted works. (In some cases the author may assign the copyright to a publisher.)
Eric's draft contains an unusual statement that says "this work [...] may not be built upon without express permission of the author". To the extent that this refers to derivative works which substantially reuse the text of the paper, this is normal copyright law. To the extent that this refers to the use of scientific ideas or discoveries, this is not enforceable under US copyright law. Copyright cannot prevent anyone from citing or responding to a work. See, e.g., https://www.copyright.gov/circs/circ33.pdf.
For late arrivals to this thread: note that the illustration being discussed has been updated (see the Wayback Machine for the old version). The new version probably still does not show the best second-order approximations, but the obvious qualitative errors have been corrected.
Coming from the type theory side with only a passing glance at Ada, I am nevertheless sure: this is not what type theorists mean when they talk about dependently typed languages. Such languages derive from the formulation of Per Martin-Löf (also called Intuitionistic Type Theory), they include dependent sum and dependent product types, and they allow type checkers to prove complex statements about code. (The history of dependent types is intertwined with the history of formalizing mathematics; dependent types were designed to encode essentially arbitrary mathematical statements.)
The interesting feature of Ada here seems to be what it calls "subtype predicates". As you've explained, these come in a "dynamic" flavor, which are a nice syntax for runtime assertions, and a static flavor, which are compile-time checked but restricted to certain static expressions (per https://ada-lang.io/docs/arm/AA-3/AA-3.2#p15_3_3.2.4).
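For readers coming from other languages: the "dynamic" flavor is roughly the smart-constructor pattern. A hypothetical TypeScript sketch (`Even` and `toEven` are illustrative names, not Ada features):

```typescript
// A branded type whose only constructor validates the predicate at
// runtime, analogous to a dynamic subtype predicate check.
type Even = number & { readonly __brand: "Even" };

function toEven(n: number): Even {
  if (n % 2 !== 0) throw new RangeError(`${n} is not even`);
  return n as Even;
}

const e = toEven(4); // ok; toEven(3) would throw at runtime
```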
An example of something you can do in a dependently typed language is write a sorting function in such a way that the type checker proves that the output will always be in sorted order. I am pretty sure this cannot be done in Ada; checking at runtime does not count!
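As a sketch of what that looks like, here is illustrative Lean 4 (names like `Sorted` and `verifiedSort` are made up for this comment, and the implementation with its proof is elided):

```lean
-- Sorted xs holds when the list is non-decreasing.
def Sorted : List Nat → Prop
  | [] => True
  | [_] => True
  | a :: b :: rest => a ≤ b ∧ Sorted (b :: rest)

-- The return type carries the proof obligation: any implementation
-- must convince the type checker that its output is sorted.
def verifiedSort (xs : List Nat) : { ys : List Nat // Sorted ys } :=
  sorry -- implementation plus proof elided
```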
I do believe (having heard from multiple sources) that Ada's type system was ahead of its time and its success in creating practical programs that are likely to be correct is probably underrated. But I'm not here just to legislate semantics; one should be aware that there is something vastly more powerful out there called "dependent types" (even if that power is not likely to come into most people's day-to-day).
(Unfortunately Wikipedia is quite poor on this topic; you will see, for example, that on the Talk page someone asked "Is Ada really dependently typed?" two years ago; no reply. And it makes no sense to say that Ada has "tactics" but not "proof terms"; tactics are a way of generating proof terms. There are many better resources out there (especially ones associated with the languages Agda, Coq (currently being renamed Rocq), and Lean, e.g. https://lean-lang.org/theorem_proving_in_lean4/dependent_typ...). But be warned, there is no "short version": dependent types cannot be explained in a sentence, and they are not something you will arrive at with enough "hacking away".)
> An example of something you can do in a dependently typed language is write a sorting function in such a way that the type checker proves that the output will always be in sorted order. I am pretty sure this cannot be done in Ada; checking at runtime does not count!
It actually can be done in Ada, but not purely with the type system, instead we rely on SPARK, which converts Ada code and passes it through various automatic theorem provers. Some examples of fully proven sorting functions are here: https://github.com/AdaCore/spark2014/blob/master/testsuite/g...
You can also see from the above code just how good theorem provers and SPARK have become, given the reasonably low number of assertions required both to prove that the output is sorted and to prove that the input and output contain the same elements, not to mention all the hidden proofs relating to integer overflow, out-of-bounds access, etc.
You could maybe do all this with types and SPARK, but it's not the approach that would usually be taken.
Ah, very interesting. It does seem that the Ada community has done serious engineering work to build in powerful formal verification, in a way that is somehow parallel to the (much slower, for practical purposes, if more elegant) arc of type theory...
What is the flow for working through this kind of proof? Is there an interactive proof mode like you find in a lot of dependent type provers? Or is there some other guiding mechanism telling you that you haven't provided enough guidance with asserts?
Do note however that the "proof" part of dependent types requires being able to evaluate arbitrary parts of the program at "compile time". (As a fact of the matter, there isn't even a clean phase separation between compile time and run time in dependently-typed languages; the distinction can only be reintroduced after-the-fact via "program extraction".) So, in a sense, types may depend on values in a dependently-typed language but this is merely an elaborate form of meta-programming, it need not establish a true dependence from runtime values. Whether Ada qualifies as a 'true' dependently-typed language, in this view, would depend on how strong its forms of guaranteed compile-time evaluation and meta-programming are.
It does look like the "discriminant" system of Ada shares key similarities with what dependently-typed languages call a "dependent sum", a generalized record type where "later" elements of the record can depend on "earlier" ones.
"Dependent products", on the other hand, can be viewed as an extended version of generic functions, although they may also suffice to account for, e.g., the examples given by OP of Ada's runtime range types and runtime-sized arrays. The key here is that the representation of a type is indeed given as a "function" of the value parameters that are "depended" on.
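A minimal Lean 4 sketch of both constructs (names illustrative):

```lean
-- Dependent pair (Σ-type): the second component's type depends on the
-- first, like a record whose later field depends on an earlier one.
def Packed : Type := (n : Nat) × Fin (n + 1)

-- Dependent function (Π-type): the return type depends on the argument.
def diag : (n : Nat) → Fin (n + 1) := fun n => ⟨n, Nat.lt_succ_self n⟩

def packedExample : Packed := ⟨3, diag 3⟩
```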
It looks really difficult to prove even a "hello world" algorithm. I'm afraid you can easily run into the problem of not understanding what you're proving, and end up proving something other than what you actually wanted.
What’s nice is that you can do it in steps - you may have a hard time proving the full specification, but you can prove absence of bad behavior like buffer overruns, etc., and go from there.
Maybe they're not implying this kind of limited dependent type system but surely it is still dependently typed? It's just not the "full fat" dependent typing.
Another example of a language with limited dependent typing is Sail. It has "lightweight" dependent types for integers and array lengths (pretty similar to Ada from what it sounds like).
It's very good in my experience - it lets you do a lot of powerful stuff without having to have a PhD in formal verification (again, sounds similar to Ada).
> An example of something you can do in a dependently typed language is write a sorting function in such a way that the type checker proves that the output will always be in sorted order.
Yeah you can't do that but you can have the type checker say things like "n^m is positive if n is positive or even" or "foo(x) -> (bits(n), bits(m)) with m+n=x" (not the actual syntax). I'm pretty sure you can't do that stuff in a type system without dependent types right?
Well, "dependently typed" is widely used to mean something like "derived from Martin-Löf type theory, including arbitrary dependent sums and dependent products"; in other words, "dependent types" means "full fat dependent types", and it's the things that are less powerful that require qualifiers.
(So when Sail says it has "lightweight dependent types", that seems fine to me (it does seem to do more than it could with simple types or polymorphic types), but if it simply asserted that it "had dependent types" I would feel misled.)
The wording is subtle and language does change, but what I want to push back on is the confusion I see from time to time that "if I can write anything that looks like a function from values to types, I have the same thing that everybody talking about dependent types has". If you think this you don't know what you're missing, or even that there is something you're missing, and what you're missing is very cool!
Would you say Lean is a somewhat learnable language for somebody who only has cursory exposure to functional programming and static types? I’ve almost exclusively used typescript for the last few years, except for some clojure in the last few months.
Sometimes I find a neat language, but my very TS-oriented brain has a hard time getting into it.
I would say dependent types are going into the deep end; unless you have a real need to prove things, it may be hard to see the motivation to learn such abstractions.
In between ad hoc types like TypeScript and dependently-typed languages like Agda, Coq/Rocq, and Lean are well-typed, polymorphic (but not dependent) languages like OCaml, F#, or Haskell ("ML family" or "Hindley-Milner" are related terms). Those are what I'd suggest checking out first!
If you’d like to dive into the deep end and learn a “we tried to make as much type system power available as conceptually simply and elegantly as possible” language, Agda is a very fun language to fool around with. It’s superficially a Haskell-like, but you’ll likely only really learn it as a toy that nevertheless blows your mind… like Prolog etc.
The first thing I tried to do after learning all the basic syntax is write a function f: Nat —> Set (from numbers to types), and then stick it in a function signature (i.e. g: f (3).)
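The same exercise rendered in Lean 4 (illustrative names; the original was in Agda):

```lean
-- A function from numbers to types...
def F : Nat → Type
  | 0     => Unit
  | _ + 1 => Nat

-- ...then used in a signature: F 3 computes to Nat, so a Nat checks.
def g : F 3 := (42 : Nat)
```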
> you're not going to be able to cover all valid families that way
In fact the emoji committee backpedaled on family permutations for exactly this reason, and now recommends (exactly as you suggest) "symbolic" family glyphs and juxtaposition of existing people-emoji to describe families in detail.
Even the new proposal still has problems. It still has all of the variations between 1 and 2 parents and 1 and 2 children. IMHO it should all be collapsed down to just two Emoji:
1. Two people without children
2. Two people with children.
The former being short for "just the parents" or childless couples and the latter encompassing all families with kids regardless of the numbers. This is a compromise answer, but I think it serves the purpose best given the intended use of emojis.
An even better solution would be to figure out some iconography that would denote "family" without explicitly depicting the people, but I'm at a loss on that one. I mean, the people are what families are all about; it's hard to divorce the concept.
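The juxtaposition mechanism mentioned upthread is concretely a ZWJ sequence; a quick sketch:

```typescript
// A family emoji is not one code point but a ZWJ sequence:
// individual person emoji joined by U+200D (ZERO WIDTH JOINER).
const zwj = "\u200D";
const family = ["👨", "👩", "👧", "👦"].join(zwj);

console.log(family);             // renders as one family glyph where supported
console.log([...family].length); // 7 code points: 4 people + 3 joiners
```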