People fret over language syntax too much. In most sane languages, including Objective-C, you simply forget about the syntax after a few months. What’s much more important is the conceptual complexity: the number of language features and the way they fit together. A second important thing is the standard library. Objective-C is a pleasant language by both these metrics: there are just a few language constructs above the C level, they fit together well without creating dark corners, and Cocoa is a very mature and well-thought-out framework. The iOS Objective-C ecosystem does take some time to master, but so does every modern SDK, since the libraries are always huge. Programming is hard, and if somebody is scared by minus signs in front of method names, evil spirits will suck his soul out by the time he gets to thread synchronization.
It's good to remember Jeff Raskin's theorem "intuitive = familiar". If you come from the C++ world, Objective-C is going to look weird. If you come from the Smalltalk world, not all that weird.
Also, syntax is the worst place to start learning Objective-C imho 'cos there can be a lot of it to learn if you come from a non-Smalltalk world. The best place I've found is to dive straight into the core runtime function objc_msgSend. Once you grok that, and see that you could write down the core runtime in a handful of C functions in an hour or so, everything else -- classes, categories, protocols, delayed invocation, remote invocation, proxies, posing, key-value coding -- finds a "natural" slot in your brain. As a bonus, as you get to the more "advanced" features unique to the system (relative to, say, C++) such as key-value coding, you see how the dynamism of the language plays to support all of that. (Disclaimer: Yes, this is how I got it, but I don't know whether it is a generally good way to approach it, though I'd recommend it. Maybe I should write a tutorial on it.)
If you start by looking at the syntax and going "ugh", you'll be missing all the neat ideas in the system ... including, imo, the older memory management system that many complain about. I've, for example, used the "auto release pool" idea in C++ to relieve colleagues of the need to think about ownership and lifetime in relatively isolated corners of a system, while considerably simplifying API design and staying performant. If you're looking for "predictable performant garbage collection", this is a reasonable design candidate.
I couldn't agree more.
Everything is "hard" the first couple of times, but the one recommendation I give people is to stop worrying about the language and go create stuff. Get into the mindset of creating something, even if it's just a simple app. That will push you to research and ask questions, and that's when you learn and progress. Nobody achieves anything by bitching about how hard this and that is.
You're right - I should have called it "Why Objective-C is Hard to Learn"; all of the issues I enumerate are surmountable with enough experience and experimentation.
I couldn't disagree more. Objective-C is a product of the 1980s, when it kind of made sense that your program would crash if you did this:
[NSArray arrayWithObjects:@"Hello", @"World"];
Of course it crashes! You have to add a nil sentinel value to the end of your list of objects, silly. And of course it compiles with only a warning, just like when you leave out the @ that makes the difference between a C string literal and an NSString. Those errors will crash your program as soon as the code is run, but they compile as valid Objective-C. Things like that are just a nuisance if you tell the compiler to treat warnings as errors, though. If you really want to know why Objective-C is hard, why not trust the authorities on Objective-C, namely Apple?
Where Apple tells you you will screw this up is memory management. To start with, there are four different memory management schemes: C memory management for C objects, manual reference counting, automatic reference counting, and garbage collection. You get to choose two, of which C will be one. Objective-C originated as enhancements on top of C, and Objective-C programmers writing Cocoa apps still have to rely on C APIs for some functionality, so you'd think by now they would have provided a better way of managing, say, the arrays of structs you sometimes have to pass to the low-level drawing functions. Nope; everyone still uses malloc and free. Failure to make malloc and free obsolete is hard to forgive.
Of the other three memory management methods, pick one. (Different OS versions support different ones.) Automatic reference counting (ARC) is the latest and apparently the new standard, though manual reference counting is still supported, and GC is still supported on Mac OS. Reference counting requires a little bit more thinking than garbage collection. For example, since the Cocoa APIs were written with reference counting in mind, some objects, notably UI delegates, are held as weak references to avoid reference cycles. You basically have to manage those objects manually: create a strong reference to keep the object alive and then delete the strong reference when you decide it's okay for the object to be collected. (I'm not sure, but I think this is true even if you turn on GC, because delegate references remain weak.)
All reference-counting systems have that problem, but at least they have the benefit of determinism, right? When you pay that much attention to object lifetimes, you get to piggyback other resource management on top of memory management and kill two birds with one stone. (In C++ it's called RAII, and it's the saving grace of C++ that almost completely makes up for C++'s other warts.) However, according to Apple, this technique should not be used with Objective-C:
> You should typically not manage scarce resources such as file descriptors, network connections, and buffers or caches in a dealloc method. In particular, you should not design classes so that dealloc will be invoked when you think it will be invoked.
Why not? Application tear-down is one issue, but that doesn't matter for resources that are recovered by the OS when a process terminates. "Bugs" are given as a reason, but I think they mean bugs in application code, not in the Objective-C runtime. The main reason, then, is that if your Objective-C programs leaked file descriptors and network connections as often as they leaked memory, the world would be in a sorry state:
> Memory leaks are bugs that should be fixed, but....
Remember the "I don't mean to be a ___, but..." discussion?
> Memory leaks are bugs that should be fixed, but they are generally not immediately fatal. If scarce resources are not released when you expect them to be released, however, you may run into more serious problems.
In other words, if you really need something to work reliably, you had better use a different mechanism, because you don't want your management of other resources to be as unreliable as your management of memory. That's a pretty strong statement that you will screw up memory management despite your best efforts.
So apparently Objective-C memory management is hard. That's what Apple thinks, anyway.
> You should typically not manage scarce resources such as file descriptors, network connections, and buffers or caches in a dealloc method. In particular, you should not design classes so that dealloc will be invoked when you think it will be invoked.
Do they propose an alternative mechanism for handling resources other than reference counting? As you state, RAII breathes life into C++, given how well it works for all types of resources.
It sounds like the above statement may have been made in anticipation of the introduction of garbage collection, which makes piggybacking resource destruction non-deterministic. Then again, it could also be read as a very strong reason to favor manual (or maybe automatic) reference counting and eschew GC entirely. I don't know Objective-C very well, but I wonder if the use of GC has generated these arguments against it from within the OS X developer community.
That's an interesting hypothesis, but I can't find any source to confirm or contradict it offhand. The part I took the quotes from only mentions that the order of dealloc'ing objects in a collectable object tree is undefined, as is the thread on which dealloc is called. Both of those are easy to keep in mind while implementing dealloc, though. If a resource has to be freed from a particular thread, then dealloc can schedule it to be released on the right thread using GCD. The non-deterministic order of dealloc'ing would rarely be a problem for releasing resources. After all, if a resource is only used via a particular object, and that object is dealloc'ed, then clearly it's okay to release that resource! Perhaps there are complicated cases where resources have to be released in a particular order, but that's no reason to give up RAII for simple cases.
Apparently it's a feature in Xcode 4.4 in the beta release of the Mountain Lion SDK. There's no developer preview for Lion, though. Fingers crossed that Xcode 4.4 will be released for Lion and not just for Mountain Lion....
Best I can tell, all of the problems you cite are fixed by MacRuby. It shows how surprisingly well Ruby semantics map onto the message-passing semantics of Objective-C. They also found ways to wrap up the C stuff without making you manage your own memory.
Not sure why Apple hasn't been more aggressive in pushing it for Cocoa development. Maybe they don't trust it to perform well enough on iOS devices yet, and don't want to promote it until it can be used everywhere as a replacement for Objective-C.
What in the parent post do you disagree with? It's probably obvious to you, but it's not obvious to me.
I understood his point to be mostly that syntax melts away after time, and you will just see the concepts. It seems that you are objecting to the notion that "Programming in Objective-C is easy," but I don't see that in his post.
" And of course it compiles with only a warning, just like when you leave out the @ that makes the difference between a C string literal and an NSString."
How is the compiler supposed to know whether you meant an NSString or a C string?
I've put together a few things with Objective-C over the years, dating back to OS X 10.1 (yuck, PB sucked then) up to iOS. Most ended up being ported to Java or C#.
The syntax IS absolutely horrible if you ask me, as it results in crazily verbose ways of expressing stuff. Everything is "too meta" and there are very few first-class parts of the language. It still FEELS like it's hacked together with C macros (which is what it originally was).
Add to that the reference-counting implementation (when GC is not enabled, and you can't enable it on iOS) and it's just painful. Also the lack of any decent threading abstraction - ick.
I think there is a lot of hype around it. It's not where we should be in 2012. Android does better with bastardised Java if you ask me.
> Also the lack of any decent threading abstraction - ick.
What? Grand Central Dispatch is a lot easier to work with than most explicit threading mechanisms and with the new block support is a lot less verbose than the typical Java thread-based approach.
It's just a fancy thread pool/task queue with a fugly syntax extension not some magic unicorn that poops rainbows.
Java/C# don't need a language extension - the functionality exists outside the semantic boundary of the language. Another kludge in Objective-C.
C# (ThreadPool/async framework/Windows workflow) and Java (ExecutorService/lots of 3rd party frameworks) have had them for years with well-known communication, thread safe data structures, concurrency and locking semantics.
Most of the verbose mess you see in Java threads is because the person writing it doesn't know much.
Your "fugly syntax extension" is your old friend the closure. The syntax is as good as it's going to get in an Algol derivative. I'll take it over plain Java any day of the week. If Kotlin takes off on Android then we'll have a real contest.