If you want to play with a Lisp Machine on Apple hardware, Symbolics is still selling refurbished MacIvory boards. Even though the hardware is ancient, the latest model (MacIvory model 3) costs more than a top-of-the-line retina MacBook Pro. The MacIvory 2 is more affordable, if you have an extra thousand dollars to spend.
I still have my original Apple II from when I was a kid. I've been meaning to fix it up for a while. This is a great reason to do so! In other news, any hardware hackers in Sunnyvale looking for a fun project...
A Lisp machine is designed at the hardware level for running Lisp well. This is a neat project to implement a bare-metal Lisp environment on top of Apple //e hardware.
This is true, of course, but our point is that there is not a lot of stuff separating the lisp from the machine -- no OS for instance. It's just a machine that runs lisp and nothing else.
I think it's fine and a good project. I once used Lisp on my Apple IIc.
Generally a Lisp Machine is also really a piece of hardware designed to run Lisp. The CPU of a Symbolics for example is a (mostly) stack machine which knows all kinds of Lisp data types and Lisp function calling conventions. The memory then also contains Lisp data (with tags for data types and GC which are used by the CPU directly). Additionally the Lisp that runs on top of it is providing OS services: device control, I/O, keyboard, disks, tapes, expansion cards, graphics, sound, network, scheduling, ...
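To make the tagging idea concrete for people who haven't seen it: in software you'd approximate it with something like the C sketch below. The tag layout and values here are invented for illustration only, not the Symbolics layout; the point is that the real machines do this check in hardware, in parallel with the operation, instead of paying for an explicit branch.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 3-bit tag in the low bits of a word -- illustrative only,
       not the actual Symbolics tag layout. */
    #define TAG_BITS   3
    #define TAG_MASK   ((1u << TAG_BITS) - 1)
    #define TAG_FIXNUM 0u
    #define TAG_CONS   1u

    typedef uintptr_t lispword;

    static lispword make_fixnum(intptr_t n)   { return ((uintptr_t)n << TAG_BITS) | TAG_FIXNUM; }
    static intptr_t  fixnum_value(lispword w) { return (intptr_t)w >> TAG_BITS; }
    static unsigned  tag_of(lispword w)       { return (unsigned)(w & TAG_MASK); }

    /* On a Lisp Machine the CPU performs this type check alongside the add;
       in software it costs an explicit test and branch on every operation. */
    static lispword lisp_add(lispword a, lispword b)
    {
        if (tag_of(a) != TAG_FIXNUM || tag_of(b) != TAG_FIXNUM) {
            fprintf(stderr, "type error: + expects fixnums\n");
            return make_fixnum(0);
        }
        return make_fixnum(fixnum_value(a) + fixnum_value(b));
    }

    int main(void)
    {
        lispword r = lisp_add(make_fixnum(20), make_fixnum(22));
        printf("%ld\n", (long)fixnum_value(r));   /* prints 42 */
        return 0;
    }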
Getting a Lisp on an Apple II to the level of the built-in Apple Basic with access to keyboard, graphics, etc. should be fun. I'm looking forward to reading more of your adventures!
Right. Lisp machines came out of the assumption that general purpose hardware would be very bad at running lisp, in an era where custom single-purpose computing hardware seemed to make sense.
As it turned out, Unix workstations ran Lisp just fine. So when the 80s AI winter dried up budgets, AI labs decided to buy cheaper SUN workstations and the Lisp machine companies failed.
No, Lisp Machines came out of the assumption that there was no hardware to run AI development environments as personal workstations. The concept of the Lisp Machine dates back to the early seventies and the first machines appeared mid/end seventies.
Lisp developers early on wanted to use the machines interactively. A single Lisp developer running something like Macsyma could take over a PDP that was probably meant to serve tens or hundreds of terminal users. And the memory of that machine was still a constraint for Macsyma and some of the other larger Lisp programs.
At that time there were no useful 32-bit (or wider) microprocessors. Other machines were either tiny or very large for multiple users. The 68000 appeared in 1979 and was just good enough to run the console of the Symbolics Lisp Machine a few years later.
So Lisp Machines were developed to run a very compact machine code representation of Lisp, with very large main memories (1 Megabyte or more), graphical user interfaces, and all that for a SINGLE user - at $100k per machine.
Thus when they appeared on the commercial market in 1981, there was nothing like it.
Ten years later lots of other processors were fast enough, had enough memory, and were available as capable personal computers or workstations. But initially 'general purpose' hardware was not especially good at running Lisp. The Intel chips were a bad match, the first 68000 was slow with tiny memories, ... It took a few years and the next generations were better: 68020, the SPARC, better Intel processors, ...
But when Lisp Machines were designed at Xerox/PARC (the Alto running Lisp in 1974, then InterLisp-D), at MIT (the CONS in 1975 and CADR), at BBN (the Jericho in 1979), general purpose hardware WAS impractical for running Lisp, Lisp applications and their development environments.
Later in the mid 80s the architectures were brought to microprocessors (TI Explorer Megachip, Symbolics Ivory). There were also attempts to design RISC-based chips: SPUR, Xerox, Symbolics, ... but then the market for Lisp went away.
A good post, and some other analogies I can think of are/were the concept of specialized "word processor" computers, specialized CAD stations, specialized gaming hardware... Pretty much if there was marketing material with words like "accelerator" "turbo" "extreme" it probably fits right in.
I agree w.r.t. the 1970s, but the Lisp Machines were having cost-competition issues even before the Intel/SPARC machines caught up, due to the advent of lower-cost, smallish minicomputers that could be used as single-user machines if you had the budget. By 1985 you could get a MicroVAX II for ~$20k, plus another ~$8k for a copy of VAX Lisp, and have a quite reasonable Lisp system, with the downside of having to store a reasonably big cabinet (but not full-VAX kind of big).
The basic MicroVAX II was quite slow, with tiny memory, just a text terminal interface, ... To get anything comparable to a Symbolics 3640 one would have to invest quite a bit more money and it would still be slower than an 11/780.
The lab at our University was using a lot of Lisp on a DEC 10 and VAX 11/750 and 11/780. Nobody I knew was keen to move their Lisp development to a MicroVAX (though there were some).
There were some Symbolics 3640 users. Then later Apollo/SUN/Compaq/Mac II, ...
> So Lisp Machines were developed to run a very compact machine code representation of Lisp, with very large main memories (1 Megabyte or more), graphical user interfaces, and all that for a SINGLE user - at $100k per machine.
> Later in the mid 80s the architectures were brought to microprocessors
I wonder when someone will build one on an FPGA.
It's the ultra-übergeek version of Jeri Ellsworth's C-One.
I'm currently reading all the papers/talking to people so I can design one (still torn between doing something historically accurate or modern, new, and potentially useful).
Aw crap. Now I may have to set up my Apple IIGSs and begin another marathon Apple hacking session. My wife always finds these times trying/amusing/annoying.
The Apple //e was the hacker-friendliest machine ever. It's probably the ideal learning platform for anyone looking to go into computer science, although perhaps not app programming as a career, which requires a narrower skill set and more focus.
I was a die-hard Apple fan until the Mac came out, which felt like a betrayal to the original Wozniak-driven Apple philosophy. The PC, ironically, became the continuation of what Apple meant to me.
I suspect that their experience with the //e influenced the decision to make the Mac a closed box. Like you, I found the PC to be a much more comfortable transition.
Because the Apple ][ wasn't a locked design, folks started poking directly into ROM and RAM locations, for instance directly manipulating system variables, or even jumping into the middle of ROM subroutines. A book called "What's Where in the Apple II" documented almost every known hook.
My mom had that book. She's my hacker role model.
As a result of the way software interacted with the ROM, it became virtually impossible for Apple to update the ROMs without breaking popular apps.
The next generation of personal computers all represented different approaches to avoiding this problem by providing well-documented entry points while offering no guarantee of long-term code stability. IBM made one mistake: they let people hard-code the address of video RAM, which is what led to the legendary 640k barrier.
Having BASIC (or another language) built into the machine was a big plus and encouraged people to see continuity between the OS, the hardware and the code.
If that BASIC had used built-in functions and data types instead of simply POKE-ing and PEEK-ing as a de facto interface to the underlying machine code, it would still have been hacker-friendly without being inflexible.
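For readers who never used one of these machines, PEEK and POKE were nothing more than raw reads and writes of fixed addresses, many of them hardware soft switches. A rough C equivalent, in the style you'd write for a 6502 C compiler like cc65 -- the addresses are the standard documented Apple II soft switches, but the routine itself is just an illustration:

    /* Rough C analogue of BASIC's PEEK/POKE on an Apple II. */
    #include <stdint.h>

    #define PEEK(addr)      (*(volatile uint8_t *)(addr))
    #define POKE(addr, val) (*(volatile uint8_t *)(addr) = (val))

    #define KBD     0xC000u  /* keyboard data: high bit set when a key is waiting */
    #define KBDSTRB 0xC010u  /* any access clears the keyboard strobe             */
    #define SPKR    0xC030u  /* any access toggles the speaker                     */

    /* Wait for a keypress, click the speaker, and return the ASCII code. */
    uint8_t get_key_with_click(void)
    {
        uint8_t k;
        while ((PEEK(KBD) & 0x80) == 0)
            ;                       /* spin until the keyboard strobe is set */
        k = PEEK(KBD) & 0x7F;       /* low 7 bits are the character          */
        POKE(KBDSTRB, 0);           /* acknowledge the keypress              */
        PEEK(SPKR);                 /* toggle the speaker once: a tiny click */
        return k;
    }

Once thousands of programs baked in addresses like these -- and, worse, jumped into the middle of ROM routines -- the memory layout effectively became part of the public interface, which is exactly the problem described above.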
It was such a weird relationship between desktop machines and UNIXes back then. It's like the desktop designers had to re-learn the lessons of UNIX in the small. So much was forgotten.
I agree. In my view, the "walled garden" of iOS is just the next step in what Apple had already intended to establish for the original Mac. To be charitable, I can imagine it being based on the idea of protecting the user from Bad Software.
But lots of us wanted to write Bad Software such as little programs for our own use, or for limited, specialized use by other people. I found the Mac programming docs (Inside Mac) to be impenetrable, and the overhead for writing Hello World enormous.
Then I fell in love with HyperCard. But we all know what happened to that.
> In my view, the "walled garden" of iOS is just the next step in what Apple had already intended to establish for the original Mac.
I imagine this is a quandary that shows up not just in computing, but anywhere that users interact with a single source of definitive rules.
For example, it might be easier for government to erect certain walled gardens... or for companies to do this with their employees, parents with their kids, etc.
It's not that I don't understand their position or view it as probably the best way to herd cats (I mean, "consumers").
But hacker-friendliness is what gets you the top echelon of users drifting toward your hardware and software. As a result, it's a huge (but invisible) business draw.
My mom was a high school teacher in a small rural district with a Ford plant. In the early 80's, she took a couple of programming classes as a hedge against possible layoff, and ended up starting a class at her school. She paid for the computers out of her own pocket, and class met before the regular school day.
It should be noted that during this time period, outside of a few affluent districts, virtually all teaching of programming at the K-12 level was done on an ad hoc basis by teachers who volunteered their time and money.
Is this sort of anti-hack mentality one of the reasons why, in some versions of ProDOS, there was a conscious decision to omit the assembler, and to encourage programmers to use BASIC? (Our version, in this article, for example, had no assembler; only BASIC and the monitor program.)
When I was young, I learned to program assembly via machine code on an enhanced //e, with no references. I figured out the instruction set by making a program with all possible opcodes and dumping it out, and would write code by inputting the hex directly after assembling it by hand.
The day I discovered the mini-assembler was absolutely mind-blowing.
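For anyone who never did this: "assembling by hand" meant looking up (or, in this case, deducing) each opcode and typing the hex into the monitor. As an illustration, here is a tiny hand-assembled routine expressed as a C byte array, with the hand-assembly in the comments. COUT at $FDED is the standard ROM character-output entry point; everything else is made up for the example.

    #include <stdint.h>
    #include <stdio.h>

    /* A hand-assembled 6502 routine that prints "HI" via the Apple II ROM's
       COUT routine at $FDED, then returns. On the real machine you'd type the
       hex bytes into the monitor at a free address (say $0300) and run 300G. */
    static const uint8_t print_hi[] = {
        0xA9, 0xC8,        /* LDA #$C8    ; 'H' with the high bit set */
        0x20, 0xED, 0xFD,  /* JSR $FDED   ; COUT: print the char in A */
        0xA9, 0xC9,        /* LDA #$C9    ; 'I' with the high bit set */
        0x20, 0xED, 0xFD,  /* JSR $FDED                               */
        0x60               /* RTS                                     */
    };

    int main(void)
    {
        /* Print it the way you'd type it into the monitor. */
        size_t i;
        printf("300:");
        for (i = 0; i < sizeof print_hi; i++) printf(" %02X", print_hi[i]);
        printf("\n");
        return 0;
    }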
The mini-assembler was in the Integer BASIC ROMs. ProDOS didn't support Integer BASIC (hence no mini-assembler), but the later enhanced //e ROMs also had the mini-assembler.
I have very fond memories of the Apple //e (and I have some with broken PSUs that I must get around to fixing). I bootstrapped my own assembler on DOS 3.3 using the mini-assembler.
On ProDOS I bought the full development kit, which came with an assembler and a nice debugger. Wrote a Forth for the Apple with help from Loeliger's Threaded Interpretive Languages and a copy of Starting Forth by Leo Brodie (both borrowed from the library). Good times...
Mini-assembler was in all II, II+ and //e. II had Integer Basic, II+ and //e had Applesoft. Can't remember which ones had SWEET16 though. May also be "all of them".
If all you want is LISP, burn it into an EPROM and replace the stock one.
It was absent from the II+ due to the larger Applesoft BASIC interpreter. If you had a "Language Card" (a 16K expansion that brought the machine to 64K) you could load Integer BASIC on the card and use the assembler.
Also in the original II and II Plus. On those older models, I think you need to type "F666G" at the monitor prompt to get to the mini-assembler. The "!" command was implemented in the enhanced IIe ROM.
Sounds familiar; I used to use it on our II+, so I was rather bemused by all these people telling me they (II+) didn't have it.
I don't have access to my Apples at the moment, so can't check the details, but I'm now curious to know exactly what might have changed between various II+ - what did others gain that we were missing, or why did we have room for it when they didn't...
Same here. I went from an Apple IIe to a Macintosh SE. It was a nice computer for typing high school papers on (yay PageMaker 1.0), and the GUI of System 5 was beautiful on the little black-and-white screen. There was some hacking involving fonts, inits, and ResEdit, but despite getting a C compiler and a few books, I never was comfortable programming on it, and always felt like there was something missing.
Two years later I got a '386 clone, bought Turbo C and all the fun came back at once.
This was my experience exactly. I loved the open Apple //, even though it had limited growth potential. I found the Macintosh to be confining (and still do, although more in an interface design sense now).
I'm gonna stick my neck out here and say these guys are choosing the worst and most convoluted way ever to develop software, much less a Lisp system, for the Apple IIe. And then they're trying to make a sensation out of it.
I was alive and programming back when the Apple IIe was popular. While I mostly wrote code for the Commodore 64 and Atari systems of the time, I did write a bit of code for the Apple II and IIe.
Those computers are primitive by today's standards, but nobody was entering hex codes. That ended with computers like the ELF II. A lot of the machine-code programming was done with a "monitor" program, much like the DOS debug program, that allowed viewing and writing memory and included a single-line assembler, so you could enter programs one instruction at a time in assembly.
But most of the programming was being done using assemblers and compilers like the Aztec/Manx C compiler, which is still available today (google it), so what they are doing and saying is BS.
In those days I'd have been happy just to have a language that let me quickly manipulate 16-bit integers. Applesoft used floating point for all intermediate calculations, which made even simple loops very slow.
I enjoyed using GraFORTH, which was a 10 kb dialect of FORTH with bitmapped and 3D wireframe graphics (!) that compiled to threaded code.
Steve Wozniak's original Apple II Integer Basic had 16-bit signed integers, but no built-in floating point support.
Applesoft (licensed from Microsoft) had 16-bit integer variables (such as A%) as well as floating point, but you are right that it converted them to and from real with every operation, which was slow. They were useful for saving memory (2 bytes instead of 5) and not much else.
There were BASIC extensions published in places like Nibble and Call-A.P.P.L.E. that added native integer math to Applesoft using the & command, so you could write things like "A% = B% &+ C%", and the operation was performed without conversion to real.
Let's also not forget SWEET-16, Woz's software emulation of a 16-bit kind-of-RISC processor on the 6502, that had 16-bit arithmetic. Reading the source code of SWEET-16 blew my young, impressionable mind.
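For anyone who hasn't read the SWEET-16 source: it's essentially a small bytecode interpreter that gives the 6502 sixteen 16-bit registers and 16-bit operations, at the cost of interpretation overhead. The flavor of the idea, very loosely sketched in C -- the opcode encoding here is invented for illustration and is not Woz's actual encoding:

    #include <stdint.h>
    #include <stdio.h>

    /* A toy 16-bit VM in the spirit of SWEET-16: sixteen 16-bit registers
       driven by a small dispatch loop. High nibble = operation, low nibble
       = register, but the specific values are made up. */
    enum { OP_HALT = 0x00, OP_SET = 0x10, OP_ADD = 0x20, OP_SUB = 0x30, OP_PRINT = 0x40 };

    static uint16_t reg[16];

    static void run(const uint8_t *pc)
    {
        for (;;) {
            uint8_t op = *pc++;
            uint8_t r  = op & 0x0F;          /* low nibble selects a register */
            switch (op & 0xF0) {
            case OP_SET:                     /* SET Rn, imm16 (little-endian) */
                reg[r] = (uint16_t)(pc[0] | (pc[1] << 8));
                pc += 2;
                break;
            case OP_ADD:  reg[0] = (uint16_t)(reg[0] + reg[r]); break; /* R0 += Rn */
            case OP_SUB:  reg[0] = (uint16_t)(reg[0] - reg[r]); break; /* R0 -= Rn */
            case OP_PRINT:
                printf("R%u = %u\n", (unsigned)r, (unsigned)reg[r]);
                break;
            default:      return;            /* OP_HALT or anything unknown */
            }
        }
    }

    int main(void)
    {
        static const uint8_t prog[] = {
            0x10, 0xE8, 0x03,   /* SET R0, 1000        */
            0x11, 0x2A, 0x00,   /* SET R1, 42          */
            0x21,               /* ADD R1  -> R0 = 1042 */
            0x40,               /* PRINT R0            */
            0x00                /* HALT                */
        };
        run(prog);
        return 0;
    }

The interpreted code is far slower than native 6502, but it is much denser for 16-bit work, which was the whole point.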
This is very cool. For a slightly easier way of experimenting with a Lisp machine, it's possible to install OpenGenera. There are scattered tutorials online, but here's one I've found:
http://www.lispmachine.net/symbolics.txt
Nice blast from the past. I eventually built a serial port for my Apple ][. I bet you could do a USB-FTDI-Apple interface pretty easily. The audio interface was always really fiddly although we used cassette tapes back then and most of the problems were due to the low fidelity of the tapes.
Interestingly, in practice the laptop-to-A2e audio transmission almost never fails -- the fidelity is probably not an issue anymore. FWIW, we would have picked a better way to transmit the data if we'd had more hardware. This was basically just a hack that ended up being kind of funny and cool.
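For the curious, the general scheme is a rough sketch of cassette-style encoding (not our actual ADTPro-derived code): emit one square-wave cycle per bit, with a shorter cycle for a 0 and a longer one for a 1, plus a leader tone so the receiver can sync. The frequencies below are roughly the classic cassette values, but treat the details as approximate.

    #include <stdint.h>
    #include <stdio.h>

    /* Bits-as-audio sketch: raw 8-bit unsigned mono PCM on stdout. */
    #define SAMPLE_RATE 44100
    #define FREQ_ZERO   2000   /* roughly the classic cassette value for a 0 bit */
    #define FREQ_ONE    1000   /* ...and for a 1 bit */

    static void emit_cycle(int freq)
    {
        int half = SAMPLE_RATE / (2 * freq);        /* samples per half cycle */
        int i;
        for (i = 0; i < half; i++) putchar(0xFF);   /* high half */
        for (i = 0; i < half; i++) putchar(0x00);   /* low half  */
    }

    static void emit_byte(uint8_t b)
    {
        int bit;
        for (bit = 7; bit >= 0; bit--)              /* most significant bit first */
            emit_cycle((b >> bit) & 1 ? FREQ_ONE : FREQ_ZERO);
    }

    int main(void)
    {
        const uint8_t payload[] = { 0xA9, 0xC8, 0x60 };   /* any bytes to send */
        size_t i;
        for (i = 0; i < 200; i++) emit_cycle(770);        /* leader tone */
        for (i = 0; i < sizeof payload; i++) emit_byte(payload[i]);
        return 0;
    }

The output is raw 8-bit unsigned mono PCM at 44.1 kHz, so you can feed it to any audio tool that accepts raw samples and play it into the cassette input.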
One of my personal projects (one that'll most likely never happen due to time constraints) is a Raspberry Pi-ish gizmo merged into an Apple Super Serial card (or something with a built-in terminal). This way, you could plug it into a slot and boot into a terminal on a Linux machine.
But then it wouldn't have been a hack! We wouldn't have had to understand anything to get the job done!
... Though, credit where credit is due, the C code we used to generate the audio signal is actually pretty much a direct port of the corresponding code in ADTPro. One of the reasons it's released separately from our lisp code is that ADTPro is LGPL and we prefer the MIT license. :) One of these days I'll make the port more independent and less derivative, so they can all be in the same project.
Might I suggest adding something to your project description that acknowledges this? I didn't see any mention of ADTPro or David Schmidt on the linked page or the GitHub repository.
David Schmidt has contributed a lot of free software to the Apple II community over the years and, IMHO, deserves recognition for his work by those who build upon it.
The C code that encodes the audio is listed in the dependencies section [1], and ADTPro is freely acknowledged there. However, I'm more than happy to add some reference to the post because I think you're right.
EDIT: Aaaaaand acknowledgement pushed into post. Cheers.
Thanks very much for the acknowledgement. Your project looks like a lot of fun - very cool use of audio. And of course Lisp rocks. Another audio-based project is the Apple Disk Server - loading games directly to the Apple via cassette port:
http://asciiexpress.net/diskserver/readme.html