I was part of the original UNIX porting team at NeXT, and we were just trying to get the most popular non-proprietary operating system (UNIX) modernized (by using Avie's CMU Mach) as the modern core of a standard OS. We didn't even think about any server-type applications. And Steve told us that he was tired of being beaten up at Apple for being closed source - so he wanted UNIX. (As an aside, he refused to allow any object-oriented tools in the system - but in Engineering we realized that Steve wouldn't know an object-oriented system from a hole in the ground, so we ignored his direct orders to not put in anything object oriented. The rest is history.)
I remember those days, and I remember Steve saying many of those things. Steve didn't have any deep understanding of what he was saying, and he spent a lot of time working on the wording of those sentences - practicing them over and over with a few of us in Engineering and Product Management. He created those sentences based on what engineers were telling him - reformulating them into his personal style. Really, Steve would often say that he couldn't do what we were doing, but he could help sell it. He described himself as a marketing guy - and that is what he was.
At the time, and for a long time after, a lot of computer science types were arguing that everything should be implemented from the ground up in very high level OO languages and abstracted runtime environments and this was the inevitable way things would be. Just look at the discussion on HN right now about Smalltalk. Java also came out of this mindset. But Steve came from a highly practical, nuts-and-bolts engineering mindset where perfect is the enemy of the good.
So I think Steve's attitude has to be put into the context of that conflict. But as MrTonyD shows, there were people at NeXT who understood how to translate that attitude into practical terms. And to be fair to Steve, that's because he knew how to hire absolutely tip-top talent and trust them to do their jobs right, even when they disagreed with him.
> At the time, and for a long time after, a lot of computer science types were arguing that everything should be implemented from the ground up in very high level OO languages and abstracted runtime environments and this was the inevitable way things would be.
I don't really see that they were wrong. After all, C and POSIX are also very high level and abstracted compared to the assembler-based OSes which existed before them.
The big problem now is that to be a successful alternate OS one needs to be bug-for-bug compatible with POSIX, and have a C compiler, in addition to one's own language and OS. Perhaps recent years' developments in virtualisation and containerisation will make it easier for a non-Linux host to run a Linux kernel and hand containers to it as needed.
Maybe one day OS and systems research will once again pay off.
My guess: Back at that time, it was harder to afford the extra resources for OO. It's not until the time I graduated college that machines and VMs of managed environments started to get faster than "embarrassingly slow" in mainstream hands. This is also a big part of the reason why there were languages like Eiffel and also why C++ got so entrenched. Also explains the existence of Objective-C.
Thinking like this annoys me, because it assumes a specific implementation for OO. Are people conflating OO with Java? Are they conflating it with Smalltalk? Are they unaware you can write C in an OO style?
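For what it's worth, here's a minimal sketch of what I mean by OO-style C (the Counter type and the function names are just hypothetical examples): the data grouped in a struct, the "methods" as plain functions taking the object as their first argument, and nothing else - no vtable, no runtime support, just a convention.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical "class": the data lives in a struct, the "methods" are
       plain functions taking the object as their first argument. */
    typedef struct {
        int count;
        int step;
    } Counter;

    Counter *counter_new(int step) {
        Counter *c = malloc(sizeof *c);
        c->count = 0;
        c->step = step;
        return c;
    }

    void counter_increment(Counter *c) { c->count += c->step; }
    int  counter_value(const Counter *c) { return c->count; }
    void counter_free(Counter *c) { free(c); }

    int main(void) {
        Counter *c = counter_new(2);
        counter_increment(c);
        counter_increment(c);
        printf("%d\n", counter_value(c));  /* prints 4 */
        counter_free(c);
        return 0;
    }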
> It's not until the time I graduated college that machines and VMs of managed environments started to get faster than "embarrassingly slow" in mainstream hands.
Yes. People are apparently conflating OO with Java or Smalltalk.
OO tends to imply things like dynamic dispatch (for method lookup), implicit extra parameters to functions (the this pointer and so on), accessors (potentially a lot of extra function calls), and placing variables in memory based on their grouping within objects rather than where lookup might be most efficient. Not to mention possible runtime type info, which takes up more memory. Those are all things that are mostly fast enough to be a non-issue now, or are optimized away by compilers, but they are generally slower.
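To make that concrete, here's a rough hand-written sketch in C (hypothetical Shape/vtable names, not any particular runtime's actual layout) of what a dynamically dispatched call boils down to: a vtable pointer stored in every object, an indirect call through it, and the object passed along as a hidden first argument.

    #include <stdio.h>

    /* A hypothetical Shape "class" with one dynamically dispatched method,
       spelled out by hand. */
    typedef struct Shape Shape;

    typedef struct {
        double (*area)(const Shape *self);
    } ShapeVTable;

    struct Shape {
        const ShapeVTable *vtable;  /* extra per-object pointer */
        double a, b;                /* width/height, or radius in 'a' */
    };

    static double rect_area(const Shape *self)   { return self->a * self->b; }
    static double circle_area(const Shape *self) { return 3.14159265 * self->a * self->a; }

    static const ShapeVTable rect_vt   = { rect_area };
    static const ShapeVTable circle_vt = { circle_area };

    int main(void) {
        Shape shapes[2] = { { &rect_vt, 3.0, 4.0 }, { &circle_vt, 1.0, 0.0 } };
        for (int i = 0; i < 2; i++) {
            /* "shapes[i].area()" really means: load the vtable, load the
               function pointer, make an indirect call, and pass &shapes[i]
               as the hidden first parameter. */
            printf("area = %f\n", shapes[i].vtable->area(&shapes[i]));
        }
        return 0;
    }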
Seems like you're mentioning a few features that are either optional or inline-able in C++. It took me a long time to appreciate this aspect, but C++ in particular is all about not making you pay for costs you haven't explicitly asked for.
Once you start removing features, you are only talking about an object-oriented-like programming language, not a true object-oriented programming language.
Inheritance, abstraction, and encapsulation, for example, can be achieved merely by changing syntax. What requires extra resources at runtime is polymorphism, and that's one of the most important, functional aspects of object-oriented programming.
And they say religion is dying out in the civilized world.
Consider if you will that the things you are describing as slow - chiefly the indirection of a virtual call - are slow relative to other things because of relatively recent CPU advancements. If a CPU does no branch predicting or caching, I think the overhead of a virtual call doesn't sound so bad. So, if we're talking about why to avoid OO in the late 80s, I'm not as convinced the cost of a virtual call is the barrier.
That said, my point was that C++ lets you do these things, but you have to ask for them. If you want virtual calls, go nuts. But if you don't, you're not going to suffer from mandatory bloat - your object code will still look good, as if you had written it in a language without such mandatory frivolities.
Well, talking about optimizations and all of that, the objection isn't really about big sections of code being unoptimized. It's death by a thousand paper cuts. Every small object requires its own (extra) information besides its fields. Even if that's just one double word identifying a virtual function table, on a 16-byte object that's 25% more storage space required. Not that a well-implemented object-oriented scheme couldn't work nicely, but a naïve or overblown approach adds up fast.
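Roughly what I mean, as a toy C sketch (names are hypothetical; exact sizes depend on pointer width and padding, and on the 32-bit machines of that era the extra pointer is the 4-bytes-on-16, i.e. 25%, case above):

    #include <stdio.h>

    /* The same 16 bytes of payload, with and without a per-object pointer
       used for dynamic dispatch (a vtable or "isa" pointer, depending on
       the runtime). */
    typedef struct {
        double x, y;            /* 16 bytes of payload */
    } PlainPoint;

    typedef struct {
        const void *dispatch;   /* 4 bytes on a 32-bit machine, 8 on 64-bit */
        double x, y;
    } DispatchablePoint;

    int main(void) {
        printf("plain:        %zu bytes\n", sizeof(PlainPoint));
        printf("dispatchable: %zu bytes\n", sizeof(DispatchablePoint));
        return 0;
    }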
In any case, modern CPUs and compilers can use aggressive optimization, sure, and runtime speed is no more important than ease of development and concerns about reliability.
On an older system, though, a liberal OOP design in an operating system would be akin to creating structs with arrays of function pointers attached to them and routing every function call through those pointers. Watching the machine execute this code, you would reliably see jumps to the same few code points in between function calls, and a lot of wasted time or space. And I'm not sure anyone has mentioned this, but exception handling especially can add a lot of overhead.
Obviously, the benefits significantly outweighed the cons, especially in this case (NeXTSTEP and Objective-C), but I think that was helped substantially by the fact that projects Steve worked on always had much better computer hardware.
The GP is talking about what OO was like back at the time. What the heck are you complaining about?
There were very few fast OO languages, and they were only just emerging at the time. The most prominent examples are the languages the GP names, and they do not exactly live up to the promises of even contemporary OO research, much less modern research.
I share the poster's sentiment. OO is a mental box, and a lot of people end up with overly narrow definitions or have trouble escaping that boxed-in thinking. You can apply the same ideas without language support. It is orthogonal to bytecodes, VMs, GC, or even a "class" keyword. But a lot of people have very specific expectations and will have a hard time seeing this for what it is.
Actually I think this is the beautiful thing about Perl 5's OOP support.
It was clearly an afterthought and a bit of a hack, but it works, and it makes it transparent how everything works. I had been taught Java at university before that, and the whole OOP thing was a bit of a hidden mystery. Perl gave me a far better understanding of OOP.
We are talking about computing in 1985. A 25 MHz workstation would be at least 100× slower than a modern computer, with about 1,000× less memory available. The per-object cost isn't a huge difference in isolation, but adding 4 to 16 bytes of overhead to every data structure, or every object, could in a worst-case scenario mean twice as much memory usage.
Steve Jobs wasn't technically an engineer, so he wouldn't know the specifics here, but at the time object-oriented programming had a stigma attached to it because programmers did know about these drawbacks. Using Objective-C was obviously a fundamental choice by the software engineers.
Writing C in an object oriented style is what a lot of programmers did then. But there's a difference between associating data with methods and supporting polymorphism. Hence the talk of virtual function tables.
And indeed, even when Macs had faster hardware (early PowerPC), Objective-C crippled performance. Windows crashed a lot (like the Mac did before OS X), but it was fast because it had no isolation between components.
There was no Objective-C in Apple software in the days of "when Macs had faster hardware (early PowerPC)." The state of the art at OS X's release was the fourth-generation PowerPC, and there was only one more major iteration of the PowerPC used in Macs. I would call this "later PowerPC." "Early PowerPC" Macs couldn't run OS X.
Steve didn't understand OO at the time. Not in the least. Steve was completely non-technical, and he knew that UNIX was standard and didn't want us putting something new and untested in it. I can't tell you how many times I tried to explain OO concepts to him (lots of us tried, over and over.) So, to him, an OO toolkit was something that would make us too complex for developers looking for an easily understood UNIX system. He told us that he was relying on people's existing understanding of UNIX - that would allow them to adopt our system.
There are always casualties when managers make decisions and set policies based on things they have absolutely no understanding of. At least in this particular case the programmers were able to do it anyway, since Steve had no way of knowing it was happening.
Just a guess here, but true object-orientation is really hard to get right. It's not like OS X was going to be built on Smalltalk. And if you're going to half-ass OO with an implementation like structs+functions, you're usually better off sticking with a functional system. This is especially true when it comes to high availability services, like operating systems.
Given the above link, I deny there's any single useful definition for OO: It's been defined and redefined over and over again for decades, to the point people can't even agree on what languages are capable of writing software in an OO style.
Probably because of the hype. He was promoting OO and thinking of it as an advantage over the competition. He probably didn't want such "powerful" (?) software to be reused by them.