The opening statement misrepresents RMS; when he says "nonfree", he means not-Free (as in freedom). The article's author seems to zero in and focus on the "open source" aspect of it all. Even if you have the unobfuscated JavaScript source code, you probably don't have the right to modify it.
Whatever you think on the matter, it's best if articles explain RMS's stance accurately and with the right weight in the right place.
Yes. Writing an article about software freedom without understanding the difference between "free as in speech" and "free as in beer" in 2013 is a bit strange.
Just because he mentions it doesn't make this an article about software freedom. It's a quote, that's all. It's tangentially related, but that's about it.
I believe the confusion comes partly from freedom software almost always being free/gratis, and in fact that was encouraged in the early days of GNU, if I am not mistaken.
The GNU stance on freedom software seems to be that it should be gratis as well, and that programmers should make money on activities related to their primary product.
Just as RMS drives for "GNU/Linux", we could have a drive for "freedom software" or "libre/liberty software", so there is no mistake about what we mean. We can focus later on the discussion of the price of software and how to compensate the developers.
I guess "liberty software" is a bit too much to swallow for the enterprise types that back open source. "Free software" is easier, but still not easy enough for the open source people. Or did the open source people deploy their term because "free" suggests gratis, so "open source" could be marketed as non-gratis freedom software?
Well, actually, GNU has never discouraged selling Free software. It's right at the top of the list in the introduction to philosophy on http://www.gnu.org/
They haven't encouraged it either. I believe in the early days "free" was used to mean gratis as well; that's the impression I got from RMS's manifesto.
In general my impression is that GNU/Freedom-people avoid the question of compensation.
The takeaway is that an obfuscator that produces a "black-box" version of any program cannot exist, but one that turns any two equivalent programs into indistinguishable obfuscated versions does exist.
I find that the article's most intriguing point is the link between homomorphic encryption and indistinguishable obfuscation.
>They adopt a different definition of obfuscation, called indistinguishability obfuscation. The criterion for success is that an adversary who is given obfuscated versions of two distinct but equivalent programs—they both compute the same function—can’t tell which is which.
I don't follow this part. Obviously, if you have two distinct programs, they are different and you could arbitrarily label one A and the other B. And even if you couldn't do that, so what? Anyone care to explain?
You give me two equivalent programs, A and B. I give you obfuscated versions of both, let's call them C and D. You can't tell whether I produced C from A and D from B, or C from B and D from A.
This property prevents a certain class of cracks, making it harder to pirate software.
Consider the following example: a vendor provides a free demo and a paid version of the same program. Functionality is the same ("computes the same function"): the paid full version uses a good O(log(m) + log(n)) algorithm, while the free demo is deliberately encumbered with a bad O(m * n) algorithm, in hopes of making you pay for heavy use of the program.
Now the property means you can't trace exactly which part of the program encodes the key algorithm, so you can't patch the demo version into the full version.
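A minimal sketch of that idea, with hypothetical containsFast/containsSlow functions standing in for the full and demo versions (binary vs. linear search here, rather than the exact complexities above):

```javascript
// Hypothetical "full version": binary search over a sorted array, O(log n).
function containsFast(sorted, target) {
  let lo = 0, hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (sorted[mid] === target) return true;
    if (sorted[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return false;
}

// Hypothetical "demo version": deliberately slow linear scan, O(n).
function containsSlow(sorted, target) {
  for (let i = 0; i < sorted.length; i++) {
    if (sorted[i] === target) return true;
  }
  return false;
}

// Both compute the same function, so an indistinguishability obfuscator
// could emit versions of each that can't be told apart (runtime aside).
const data = [2, 3, 5, 7, 11, 13];
console.log(containsFast(data, 7) === containsSlow(data, 7)); // true
console.log(containsFast(data, 8) === containsSlow(data, 8)); // true
```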
Yeah, I don't follow this either. If they are equivalent programs (they both compute the same function), then what does it matter which is which? Also, how would this apply?
Or obfuscation could help in producing "crippleware"—try-out versions of a program that have certain functions disabled.
Because then they wouldn't be computing the same thing, would they?
Because implementation details matter, as in the given example: demonstrating the ability to factor large numbers without giving away the algorithm that does so.
In this case, we've defined a function, PrimeFactor(x), which returns the prime factorization of x.
You can write a naive program that computes PrimeFactor(x) by brute force. It will be slow, but it will work. Let's call this ProgramA.
Let's say I write a magic, genius program that computes PrimeFactor(x) in constant time. This is ProgramB.
Given ProgramA, and ProgramB, you can correctly assert that ProgramA(x) == ProgramB(x) for all x.
If you ran A and B through an indistinguishability obfuscator to obtain ObfuscatedProgramC and ObfuscatedProgramD, the assertion that C(x) == D(x) holds, but you'd have no way of telling whether C comes from A or B (apart from runtime, which is ignored here).
By the same token, you couldn't tell if C and D both came from A, or both came from B, or any other possible program that correctly computes PrimeFactor(x).
So in theory, an indistinguishability obfuscator allows someone to know what a program computes, but not how it computes it, and implementation can matter a great deal.
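The naive ProgramA in this sketch might look like the following trial-division factorization (the constant-time ProgramB is of course fictional; no such algorithm is known):

```javascript
// Naive PrimeFactor(x): trial division, slow for large x but correct.
function primeFactor(x) {
  const factors = [];
  let n = x;
  for (let d = 2; d * d <= n; d++) {
    while (n % d === 0) {   // divide out each prime factor fully
      factors.push(d);
      n /= d;
    }
  }
  if (n > 1) factors.push(n); // whatever remains is prime
  return factors;
}

console.log(primeFactor(360)); // [ 2, 2, 2, 3, 3, 5 ]
```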
How is that weaker than (or any different from) the obfuscator that was proven not to exist, namely an obfuscator that preserves functionality and has the black-box property?
I think the key point is the "efficiently compute" in the definition. It seems that the black-box property means C() and an oracle version of A() must be completely indistinguishable, including things like runtime and memory usage.
The weaker obfuscator has no guarantee about that.
I thought it meant two programs which compute the same thing but possibly using different algorithms. If they are indistinguishable then you can't infer from the improved algorithm anything you can't get from the old one.
Side-note: obfuscating variable and method names doesn't offer that much of a size advantage, since gzip will already compress those. Most of the gains in minifying come from removing whitespace & comments, and smart code transformations.
For Google products, huge gains come from running an optimizing compilation (Closure Compiler) which dramatically rearranges JavaScript the same way GCC rearranges C into assembly.
It is not always true that minifying method names is redundant with gzip: a good minifier understands scopes and arranges many of the variables to have both the same name and smaller distances between like symbols. This shifts the frequency distribution of the symbols as well as the match distances, allowing the Huffman coding to be more optimal.
The Closure Compiler on ADVANCED_OPTIMIZATIONS mode is a terrifying beast and only works on carefully-crafted code. It makes some astonishing optimizations, though.
I encourage JS developers who strongly believe in distributing source along with their webapps to use source maps instead of delivering slow, unminified code. They are only downloaded when a user opens the web inspector (no overhead for regular users) and provide transparent access to source as if it had been delivered instead.
Actually it does sort of help. It allows all locals across the program to have the exact same name, which compresses much better than only some locals sharing names.