"Imagine if today, iOS or Linux had built-in libraries of code that allowed anyone to build a social application that didn’t require cutting a deal with Facebook or using their APIs,” … “[With PLATO], the API was in the operating system and it allowed any app to be social.
… Term Comment … allowed users to leave feedback for developers and programmers at any place within a program where they spotted a typo or had trouble completing a task … the user would simply open a comment box and leave a note right there on the screen. Term Comment would append the comment to the user’s place in the program so that the recipient could easily navigate to it and clearly see the problem, instead of trying to recreate it from scratch on their own system … “If you were doing QA on software, you could quickly comment, and it would track exactly where the user left this comment. We never really got this on the Web, and it’s such a shame that we didn’t.”
Makes you wonder what pioneers back in the 60s and 70s could have accomplished with modern hardware. What does Smalltalk or Engelbart's NLS end up being with gigs of RAM and high-speed network connections? What would the LISP machines have been like? It still kills me to think that an OS crash on one of those took you into the Lisp debugger. And it kills me to think that Smalltalk and Visual Basic had a built-in GUI editor and layout manager, unlike the web.
They'd accomplish a lot, but I fear most of it wouldn't take anyway.
The problem is the Internet. Or, more precisely, other people.
Those systems of old didn't distinguish between "a user" and "a developer", because they assumed all users were trusted (and responsible) people. With the Internet, this assumption immediately becomes false: between people attacking the network and social engineering, it's dangerous to let people have that much control over their environment.
Which is something that pisses me off, because I'd love to work on a modern-day version of a Lisp machine, but I understand this can't ever become mainstream (and I'd be worried about using one to do my banking myself).
Or, as I sometimes phrase it, Star Trek-level tech requires Star Trek-level society.
I suspect there are Laws of Social Computing which guarantee that public networked systems inevitably converge on certain kinds of uses and applications, with certain consistently reinvented “abuses” and failure modes.
Networked computing is just a mirror. We seem surprised when some of the things we see in it aren't very nice, but perhaps we shouldn't be.
> Those systems of old didn't distinguish between "a user" and "a developer", because they assumed all users were trusted (and responsible) people. With the Internet, this assumption immediately becomes false: between people attacking the network and social engineering, it's dangerous to let people have that much control over their environment.
More than that: ads. They really did destroy the internet.
We now have the HW/SW technology to authenticate and isolate code and individual humans, including the creation of parallel universes (copy-on-write disk and memory for transient VMs that are forked from running code) with optionally persistent side effects. Some of these ideas are present in https://cappsule.github.io/faq/
> What would the LISP machines have been like? It still kills me to think that an OS crash on one of those took you into the Lisp debugger.
Sometimes I think a lot of people do use LISP machines. It's just called emacs.
I joked on here once that I think the end game for GNU/HURD was ultimately to become a LISP machine via emacs, and if you look at some dev setups, that's not too far from what macOS and Linux are for them today: a place to run emacs and a web browser.
"GNU will be able to run Unix programs, but will not be identical to Unix. We will make all improvements that are convenient, based on our experience with other operating systems. In particular, we plan to have longer filenames, file version numbers, a crashproof file system, filename completion perhaps, terminal-independent display support, and eventually a Lisp-based window system through which several Lisp programs and ordinary Unix programs can share a screen. Both C and Lisp will be available as system programming languages. We will have network software based on MIT's chaosnet protocol, far superior to UUCP. We may also have something compatible with UUCP."
Not many people today would think of their "GNU/Linux" system as a Lisp machine any more than anybody thinks of their Chromebook as a JavaScript machine, nor is it really comparable to the Lisp Machines of the past.
I fully agree that GNU/Linux is nothing like a Lisp Machine, but that is because things didn't go according to the original plan (otherwise we would be talking about GNU/Trix).
One interesting thing about the original plan is that Unix was essentially text only back then. Part of the idea of making it more Lisp Machine-like was to have a GUI, so when Unix got that (with X11 eventually winning) this part of the plan became less urgent.
I suppose an Emacs OS would be sort of a GUI but not quite?
The MIT-derived Lisp Machines all had an Emacs called Zmacs.
But that was just one application among dozens of others. Applications usually were not based on Zmacs, with only a few exceptions on a Symbolics: Zmail and the Concordia documentation editor application frame. Other tools were not Zmacs-based: the Listener/REPL, the debugger, the directory editor, the font editor, the PEEK tool for viewing system details, the Document Examiner, the tape tool, ...
The problem for me with Emacs is that (while I use it every day for just about all editing, and recently org-mode) it's so damn brittle and hard to configure. It's like editing autoexec.bat on DOS and rebooting to try your settings. Still, it's the best I've found.
I hope you don't reboot Emacs on every config change?
M-x eval-region, M-x eval-last-sexp and M-x eval-buffer are your friends. Emacs is, as suits a Lisp system, strongly runtime-modifiable. It even has its own REPL - M-x ielm.
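For instance, a minimal sketch of the workflow (the setting and function name here are just illustrative): put point after any form in your init file and evaluate it in the running session, no restart needed.

    ;; A throwaway setting to try live: evaluate with C-x C-e
    ;; (eval-last-sexp) after the closing paren, or mark the region
    ;; and run M-x eval-region. No restart required.
    (setq-default fill-column 100)

    ;; Redefine a command on the fly; the new definition takes
    ;; effect for every subsequent call in the running session.
    (defun my/scratch-greeting ()
      "Insert a greeting into the current buffer."
      (interactive)
      (insert ";; hello from a live-patched Emacs\n"))

The same forms can be typed directly at the M-x ielm prompt.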
Then there was access to the complete OS stack and other running applications, and everything was compiled to native code; there wasn't any VM running.
Try doing a (disassemble 'func) in Emacs and see whether you get actual 80x86/ARM, ...
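A minimal sketch of the experiment (the function name is invented):

    ;; Define and byte-compile a trivial function, then disassemble it.
    (defun demo-square (n)
      "Return N squared."
      (* n n))

    (byte-compile 'demo-square)
    (disassemble 'demo-square)

    ;; The *Disassemble* buffer shows byte-code for the Emacs Lisp
    ;; virtual machine (stack operations like dup and mult), not
    ;; x86/ARM instructions. Only with the optional native
    ;; compilation in Emacs 28+ does machine code enter the picture.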
One of the reasons it has never taken off is that the GUI editors and layout managers that have come out for the web (and there have been a few) have never quite gotten the code right. They would produce a page that looked the way the designer wanted... but with terribly written HTML/CSS. So web designers and developers prefer to write their own markup by hand.
The deeper problem is that the tech industry is slow to get rid of HTML, even though HTML was created to exchange documents, and nowadays we mostly use it as a GUI for network software. See "The Problem With HTML":
HTML isn't the problem - declarative markup is a great way of doing GUI layout, non-web GUI frameworks tend to come up with alternatives that look similar. The problem is CSS, which is a fractal of bad design, broken at every level, from selectors to the box model.
CSS works fine for text markup. The problem is that you get two models: one where the browser picks where stuff goes, depending on the browser and local settings, and another where the designer makes that choice. You can't have both things exist at the same time, and on top of that, most designers don't know what they are doing.
It used to be conventional wisdom that using HTML as a layout system is wrong, you know. Because it wasn't meant to be one, and text content was supposed to be independent of the medium on which it is viewed. Well, at the time I had my doubts that it was going to work, but it was (and still is) a nice idea.
I used the 1990s RAD tooling, and in many regards, the results were mediocre at best. Changing a window size could kill your form, to say nothing of font size.
But you would expect 20 years to be enough time to improve RAD tooling in areas it wasn't so great at for modern devices. Instead, at least as far as the web is concerned, it's mostly been missing.
There really isn't any good reason for OS vendors to not add such a library to their operating systems these days .. except for the fact that they're all asleep at the wheel, having been overwhelmed with the idea of making their OSes browser-centric ..
> the entire PLATO system was designed with social sharing in mind, he says, whereas the modern Internet is more fundamentally suited to fetching websites and documents.
> the API was in the operating system and it allowed any app to be social. That was kind of the assumption that the PLATO people had.
Hmm, so why didn't the web emphasize social? Ponder...
Read-only web. The first web browser(s?) were read-write editors. HTTP GET was paired with PUT. But ramp up happened on read-only browsers (Mosaic, Navigator). User input wasn't possible until ISINDEX and then FORMs. Write didn't become part of the culture.
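Mechanically, the pairing is still in the protocol; a sketch with Emacs' bundled url.el (the URL is a placeholder, and most servers will of course refuse the write):

    (require 'url)

    ;; Read a page...
    (url-retrieve-synchronously "http://example.com/page.html")

    ;; ...and write an edited copy back with PUT, the way the early
    ;; read-write browsers intended. Whether the server accepts it
    ;; is another matter.
    (let ((url-request-method "PUT")
          (url-request-extra-headers '(("Content-Type" . "text/html")))
          (url-request-data "<p>edited locally, written back</p>"))
      (url-retrieve-synchronously "http://example.com/page.html"))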
Documents, and separate social. In an ecosystem of ftp, assorted file formats, and nntp (usenet news), there was a missing integrative piece. HTML/URLs addressed that. Gopher had similar motivation. HTML was a static document format. But the 'one piece of an ecosystem' model meant social wasn't tightly integrated. If you wanted social, you did news, not WWW. You posted urls.
News (distributed social) plateaued, then declined, and the focus of social development shifted to web sites. I was never quite sure why news development stalled. There was a multi-year period, where it was clear what needed to be done to advance (eg voting), but it wasn't happening.
One certainly could have built a pervasively social system on top of the web. But it's hard for add-on layers to get traction, even when they were part of the original vision. For example, X Windows' low-level Xlib was intended only for the higher-level libraries everyone would then use, but those were slow to materialize, so everyone just stuck with Xlib.
Part of why the web worked was its initial simplicity and subsequent feature creep. The hypertext community was quite miffed that this toy was getting traction when their various more complex systems were not.
But we never got, say, a news2://... protocol. It seemed dotcom was pulling the oxygen away from a lot of potential infrastructure-ish projects - perhaps that was part of it.
Seems like browser-based PUT is at once too generic and not self-describing enough for a lot of input. (Should I be allowed to change my wife's Facebook details?)
The protocol supports rejecting those edits, but what if I want the form itself to change in response to edits I'm proposing? (Greying out the "change password" button, or telling me "You can't change the relationship status of others!")
Stepping up from my examples a bit, it seems like you'd need a pretty sophisticated model for ACLs and almost arbitrary client-side validations, which we didn't have until JavaScript.
I'm afraid social methods embedded into HTTP would go the same way LINK/UNLINK (HTTP 1.0) and DELETE, PATCH, OPTIONS, and TRACE went: mostly forgotten and rarely used.
Practice has shown that some things are better solved on a higher layer.
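You can still poke at the leftovers, though; a quick url.el sketch (placeholder URL) that asks a server which methods it admits to supporting:

    (require 'url)

    ;; Send the rarely-seen OPTIONS verb and look for the Allow
    ;; header in the response buffer. Emacs searches are
    ;; case-insensitive by default, so "allow:" matches "Allow:".
    (let ((url-request-method "OPTIONS"))
      (with-current-buffer (url-retrieve-synchronously "http://example.com/")
        (goto-char (point-min))
        (if (re-search-forward "^allow: \\(.*\\)$" nil t)
            (message "Server allows: %s" (match-string 1))
          (message "No Allow header in the response"))))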
that is, after you've gotten a copy of the book. Really, everyone in the YC world ought to read this book. It is a completely alien history of computing that has basically nothing to do with Silicon Valley, and in some ways is heresy to Silicon Valley. The PLATO system was built 1000 miles to the east, in Illinois, and had very little (but not zero) cross-fertilization with Silicon Valley thinking. The exception is Xerox PARC, whose people including Kay and Goldberg had regular interactions and trips back and forth with their counterparts at the PLATO lab called CERL at Univ of Illinois. I have a whole chapter on PARC and Kay and how the PLATO project deeply influenced Kay.
But seriously, if you have any interest in the untold histories of:
• flat-panel gas-plasma display invention
• e-learning and online courses (which PLATO was doing, for full credit, as early as the mid-60s)
• MUDs before MUD-1, including 3D dungeon games
• multiplayer graphical games of all sorts
• PLATO Notes as a precursor to USENET and Lotus Notes
• instant messaging in 1973
• chat rooms in 1973 using Talk-o-matic
• screen sharing
• phishing, PLATO-style
• emoticons, long before Internet smileys started
• crowdsourced online newspapers
• the White House threatening, in 1973, to completely shut down ARPANET and PLATO because they had online forums talking about the impeachment of Nixon
• tons more
I had my first job programming PLATO at UofD. I had my second job building hardware to replace the old expensive terminals with PCs running the same software. A few years later, I came to Silicon Valley to help the people who put TCP/IP on the Macintosh fight the networking war (spoiler: TCP/IP won).
PLATO had community, and I even bought my first computer from a guy in Illinois. There was a form of live chat that even spanned different universities, and there were Friday afternoon massively multiplayer flight simulator battles, in the late 70's.
Let me give you a quick taste of programming in TUTOR. Variables were strings, integers, and shared (same value for everyone in the program, used for multiuser experiences). There were 150 integers, called i1, i2, and so on. There were no symbolic variables.
Since you didn't have to make a network call to communicate between users ( that came with the draconian unix process model ), you could make shared experiences far more easily than today.
For a social history of PLATO, see the 50 Years of Public Computing conference that took place at UIUC School of Information (the same year as the Computer History Museum video embedded in the article), http://50years.ischool.illinois.edu/.
"Imagine if today, iOS or Linux had built-in libraries of code that allowed anyone to build a social application that didn’t require cutting a deal with Facebook or using their APIs,” … “[With PLATO], the API was in the operating system and it allowed any app to be social.
… Term Comment … allowed users to leave feedback for developers and programmers at any place within a program where they spotted a typo or had trouble completing a task … the user would simply open a comment box and leave a note right there on the screen. Term Comment would append the comment to the user’s place in the program so that the recipient could easily navigate to it and clearly see the problem, instead of trying to recreate it from scratch on their own system … “If you were doing QA on software, you could quickly comment, and it would track exactly where the user left this comment. We never really got this on the Web, and it’s such a shame that we didn’t.”