Having more megapixels lets you crop more aggressively without losing detail, effectively zooming in on different areas of the photo in post-processing. It frees you from needing to compose your photo perfectly when you take it, and it makes prime lenses almost as versatile as zooms for getting closer to your subject. It also has an important effect on ergonomics: you can carry a lightweight prime lens instead of a big, heavy zoom and still achieve similar results, or even better ones if you can take advantage of the prime's wider maximum aperture.
Facebook Platform supports OAuth as a standard authentication protocol for third-party apps. It also provides APIs that developers can use to export data users have entered into Facebook (with the user's permission, of course). What do you think is missing in Facebook's support for data portability?
First would be an API to export all of a user's data in a common format. The current solution, where Facebook emails you a large HTML file, is not acceptable: good for archiving, bad for data portability.
Most importantly, though, a clear change to the Terms of Service (or clear guidelines) guaranteeing that moving your data to a competing service would not result in cancellation of your account or revocation of the new service's API key.
As for a distributed protocol, take a look at the functionality in Appleseed's QuickSocial and StatusNet for an idea of what a distributed protocol entails. I should be able to log in to Facebook with an external identity, and add someone as a friend on Facebook, without having an actual Facebook account.
If Facebook is interested in these things, you can have some representatives join the Federated Social Web mailing list, and you can support the W3C working group on federated social networking, as well as supporting the Data Portability Project.
I want to be able to dump out the photos from all the albums which contain photos of me. The API doesn't let me do this without a hack involving walking all my friends' albums.
EDIT: and a full realtime read/write XMPP-style interface for all the data going in/out of my profile please! :)
The theory is that by accepting someone's friend request, you're not automatically granting them the ability to export your email address to any application that asks for it. If that were possible, there's a good chance your inbox would quickly fill with spam from apps your friends use. I know of no social network, including Google's own Orkut, Twitter, and MySpace, which allows this kind of mass export of friends' emails via its API.
Plus, it's pretty clear Facebook itself needs to make a distinction between something like an "export application" (which generates nothing like spam) and something like Farmville (which generates only something like spam).
While this lack of distinction is a serious problem for Facebook ... it hardly qualifies as a good argument for Facebook being able to export from everyone else while not allowing exports to anyone.
So, on the other hand, by emailing someone you are granting them the right to download your contact information into a third-party application which will then ... email you that you should join ... Facebook?
The Facebook API allows 3rd party applications to access most of the user's data, including the user's email, if the user grants the application the required permissions. More details are at http://developers.facebook.com/docs/authentication/permissio....
With the Facebook API, the user doesn't have to give any third party their credentials to let it access their data. Facebook uses OAuth to securely pass an access token to the third party while keeping the user's credentials protected. See http://developers.facebook.com/docs/authentication/ for more info.
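As a sketch of what that token-based access looks like in practice: the endpoint is the Graph API's `/me`, but the token string and field list below are made-up placeholders, and the URL-building helper is mine, not part of any SDK.

```python
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.facebook.com"

def graph_url(path, access_token, **params):
    # The access token obtained via the OAuth dance is passed as a
    # query parameter, so the app never handles the user's password.
    params["access_token"] = access_token
    query = urlencode(sorted(params.items()))
    return "%s/%s?%s" % (GRAPH_BASE, path.lstrip("/"), query)

# With a real token, fetching the user's profile is a single GET, e.g.:
#   urllib.request.urlopen(graph_url("me", token, fields="id,name,email"))
print(graph_url("me", "TOKEN123", fields="id,name,email"))
```

The point is simply that the credential the third party holds is a revocable token scoped by the permissions the user granted, not the password itself.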
Ah, but if I remember correctly, the terms of that OAuth usage explicitly state that while data may be accessed and used, it can't be stored indefinitely (with good reason: sometimes I don't want some toy app to store indefinitely all the information I trust Facebook with). So if a third party uses that API as an export mechanism, their API access should be (rightfully) shut down.
BUT - what if I actually want to export all of my photos into SomeApp.com, and I want to give SomeApp the right to store my photos indefinitely? Is there an API they can use to pull it from Facebook directly?
It used to be the case that apps could only store your data for 24 hours, but we removed this restriction at the last f8 conference.
You can definitely export all your photos into SomeApp.com using the Graph API, and they can store your photos indefinitely. These APIs are documented at http://developers.facebook.com/docs/api.
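A hedged sketch of what such an export could look like: the Graph API pages results as JSON objects with a `data` list and, when more results exist, a `paging.next` URL, so an app like SomeApp.com would just walk that chain. The `fetch` callable and the canned pages below are stand-ins for real authenticated HTTP calls.

```python
def iter_photos(fetch, first_url):
    # Walks the Graph API's page chain: each page is a JSON object
    # with a "data" list and, if more results exist, a "paging.next"
    # URL pointing at the following page.
    url = first_url
    while url:
        page = fetch(url)
        for photo in page.get("data", []):
            yield photo
        url = page.get("paging", {}).get("next")

# Canned two-page response standing in for authenticated /me/photos calls:
pages = {
    "page1": {"data": [{"id": "1"}, {"id": "2"}], "paging": {"next": "page2"}},
    "page2": {"data": [{"id": "3"}]},
}
print([p["id"] for p in iter_photos(pages.get, "page1")])  # ['1', '2', '3']
```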
On the flip side, if the team from 37signals worked for a great technology company like Facebook, Google, or Apple, they could have built products that tens or hundreds of millions of users and hundreds of thousands of developers use every day. They could have worked with some of the best engineers and designers on the planet and had a huge impact on the products they built and their company's success. They could also have created popular open source software that runs on massive infrastructure, written their own blogs, and achieved industry fame.
I'm not trying to argue that entrepreneurship isn't a great path to pursue for many people, but I disagree with the simplistic picture that people sometimes paint wherein entrepreneurship is always glamorous and employment is always dreary.
That's a really good point. There definitely are shades of gray. There are companies where one can have a voice and build some great things. One doesn't have to be the founder to be part of an entrepreneurial effort.
It doesn't always have to be about money. Open source software gives people a chance to build something, break some new ground, and make a change. I guess that's the same drive that gets people to found, or be part of, all kinds of efforts that extend beyond software and startups.
It's no coincidence that founders who get bought by BigCos end up looking miserable 12 months down the line, after being demoted to employees who no longer have full control of their baby.
"People tend to think of Common Lisp style hotpatching where you can dynamically build up your application from scratch without restarting the CL environment. Erlang doesn't really support that, but you can hack it to give you somewhat of the same."
I (almost) never restart my Erlang server, and that lets me develop just as you described. Why do you say it's not supported?
(Btw, it really sucks to go back and use languages that don't have hot code swapping. Developing in them feels terribly slow.)
Fast: if you put is_float() guards on your function definitions, Erlang will optimize their computation. If you worry about memory consumption of strings, use binaries.
"Note that the language was originally built for concurrency rather than parallel computation." I don't follow this argument. Isn't the point of concurrency to implement parallelism?
I am not too sure that an is_float() guard will yield a performance increase in the BEAM interpreter. It will give an increase with HiPE, though, I am sure. As for strings and memory consumption: yes, you can use binaries or, even better, iolists, but I think the string type would then have to go. A good (immutable) string library based on binaries in OTP would be nice.
Concurrency and parallelism: No, they are not the same. A concurrent problem is one where you want to cope with concurrent activities: web servers, phone switch exchanges, routers, etc. A parallel problem is one where you worry about how to split the problem across multiple CPUs/Nodes so that you utilize as many resources as possible while achieving the best speedup possible.
The latter is usually number crunching on big clusters of several hundred machines -- or number crunching on a grid of computers loosely spread all over the world. The former is about coming up with a nice model of computation that lets you express the problem in a logical and straightforward way.
Erlang has a very nice concurrency model. Together with the rock-solid and stable VM, you have a very good starting point for solving concurrent problems. But note that you can implement concurrency well on a single CPU; the key point is having a nice model in which to work when explaining the problem to the computer.
In a parallel problem, it is crucial how you split the problem up. Some problems have subproblems independent of each other; these are called 'embarrassingly parallel' problems, and all of them are map-reducible. But what about a problem which is not? Numerically solving differential equations via successive over-relaxation is a classic example, because each subpart needs to communicate its findings to neighboring subparts. You worry about the speedup gained by adding more CPUs, or, following Gustafson, about keeping the time frame constant and asking how much bigger a problem you can solve as you add CPUs.
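To make the communication pattern concrete, here is a minimal sketch in Python of one relaxation sweep on a 1-D grid (plain Jacobi averaging rather than full SOR, for brevity): every interior point depends on both of its neighbors, so splitting the grid across workers forces a boundary exchange on every iteration.

```python
def jacobi_sweep(grid):
    # One relaxation sweep: every interior point becomes the average
    # of its two neighbours; the boundary values stay fixed. Partition
    # the grid across workers and each partition edge needs a value
    # owned by the neighbouring worker -- that exchange is the
    # communication this kind of problem forces.
    interior = [(grid[i - 1] + grid[i + 1]) / 2 for i in range(1, len(grid) - 1)]
    return [grid[0]] + interior + [grid[-1]]

print(jacobi_sweep([0.0, 0.0, 4.0, 0.0]))  # [0.0, 2.0, 0.0, 0.0]
```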
Erlang, being about 30 times slower than C at numerical computation, needs at least a 30-fold speedup just to compete. Depending on the problem, that can be achieved with, say, 32 CPUs, or perhaps not at all (if the problem has no inherent parallelism), while the C program uses only a single CPU. That is the gap Erlang is currently competing against. If you take the de facto standard, MPI, and use it with C, you pay a lot of development time for a message-passing interface, though one which does allow asynchronous messaging (crucial for hiding communication latency behind computation).
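The break-even arithmetic can be sketched as follows. The 30x figure comes from the comment above; the efficiency parameter is my own assumption, standing in for real-world scaling losses such as communication overhead.

```python
import math

def cpus_to_break_even(slowdown, parallel_efficiency=1.0):
    # How many CPUs the slower language needs, at a given parallel
    # efficiency, just to match the single-CPU C program.
    return math.ceil(slowdown / parallel_efficiency)

print(cpus_to_break_even(30))       # 30 with perfect scaling
print(cpus_to_break_even(30, 0.9))  # 34 at 90% parallel efficiency
```

Anything the C program gains from its own parallelization pushes the break-even point further out again.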