Acceptance into Mozilla's CA Certificate Program is usually discussed in public. Let's Encrypt is not on the list of pending CA applications[0]. Does anyone know if they are introducing a new root certificate or teaming up with an existing CA?
When you mentioned Discourse serving 19k req/sec, does it hit the Ruby stack at all? If not, serving 19k req/sec of cached HTML doesn't seem that impressive. What am I missing?
With SSH you usually own both endpoints (or at least trust your cloud provider).
The example you give about exchanging a piece of paper is very similar: it's ridiculously hard to do such a thing at scale without trusting intermediaries.
This article[0] is largely about DNSSEC and DANE but it might give you some insights why making registrars the sole authorities isn't such a good idea.
I may be dense but it seems to me that your registrar is still the trusted entity no matter what:
- they sell you the domain name. Doesn't matter how you try to authenticate yourself to clients (cert pinning aside), the registrar can seize the domain at any point.
- they control what your authoritative name servers are. They could easily change these on you.
- they populate the whois database, which is used when you purchase your TLS certs. This means that a registrar can list [email protected] as the contact and have Joe get a completely valid cert.
- one important issue that the article does not mention is that you are forever locked into trusting the site operator. This means that you as a user already must trust another entity.
Thus, what I am proposing is that out of the current trust list, [site owner, registrar, CA], we cut out the CA. Once again, the registrar always trumps the CA in its ability to seize your domain. At the same time, the CA provides zero protection against the registrar misbehaving. This article talks about shifting trust from the CA to the registrar and how that's bad. I posit that you already trust the registrar, forever (or as long as you are willing to use their TLD), so you would be strictly reducing the number of entities you need to trust, never adding new ones.
I'm not sure this is true (anymore?). There are many open source projects that are thriving under non-copyleft licenses.
If someone wanted to be a dick about it they could release modified GPL code as one big diff and publish it on some obscure webpage only their customers have access to. Instead we see more and more companies and individuals becoming friendly open source citizens. Perhaps some because the GPL mandates it, but I would like to think mainly because they feel it's the right thing to do, copyleft or not.
I too share your general optimism about the current state of open source. I really hope that you are correct in your conclusion that the GPL mandate is unnecessary because, as we see here, many projects are trying to get away from it. However, I feel we have a number of historical examples of what happens when constraints are lifted in a space that has a number of very powerful players. None of them look like anything I want the Open Source movement to look like.
Not sure if you mean for-profit and proprietary or just making money in general. For some reason, though, a lot of people think you can't sell GPL'd software, which you absolutely can.
#5 Privileged early access to updates/content files?
All of those strategies have been used to "sell" GPL software, even the last one, despite the fact that customers could repackage the update and redistribute it to non-paying customers.
Not necessarily. For example, there's the model used by the x264 project (and the upcoming x265 project): they offer the encoder as free software under the GPL, but they also sell licenses for using it in proprietary projects, and apparently the developers make good money this way.
Then there's the recent example of OpenShot, a GPL-licensed video editor whose author went to Kickstarter to fund major improvements; the goal was $20,000 and it reached $45,028.
I see a combination of these models as a future popular way of making money out of your open source efforts.
Selling licenses for use in proprietary software is not selling GPL software. In particular, I cannot sell you a proprietary x264 license for software that I received under the GPL.
Dual-licensing effectively says, "We'll let you take away user freedoms, but only if you pay the original copyright holders." Which is akin to openly bribing communist party officials.
As for a steady income, it's true that this type of funding isn't of the steady-paycheck variety, but if you can continue to add value to your project it could become a long-term income.
Either way, I'd say the vast majority of open source desktop software today is written without any compensation, in developers' spare time. Things like Kickstarter open up possibilities for them to actually make money doing what they would likely have done anyway, and can also give them the financial opportunity to work exclusively on the project for a period of time.
Perhaps. But even if no one would pay money for a direct software download or package, that does not preclude companies from sponsoring free software projects they wish to see developed further. Indeed, many large projects work this way.
> If you use a lot of GPL code in your product you have to provide the source.
Fixed:
If you use ANY GPL code in your product you have to provide the source. (There are of course exceptions, e.g. the LGPL.)
Also, it isn't simply derivative works that can be forced to re-license under the GPLv3. If FreeBSD had elected to adopt a GPLv3 compiler and distribute its source (as they do with the compiler), then the FreeBSD kernel itself would become subject to it. In fact, it isn't just gcc which has this restriction; no GPLv3 code is allowed in base.
Nonsense. Just putting GCC source code into the BSD repository wouldn't magically spread the requirement that the entire project become GPL, any more than keeping a copy on my hard drive requires me to give the FSF the complete contents of my /home directory. What matters is the dependency/linkage.
The GCC 'runtime library exception' firewalls all 'target code' from the GPLv3, and even allows proprietary preprocessors, linkers and assemblers (but nothing that would involve modifying GCC to split what it does into more granular stages).
You have to license any code that links with GPL code under the GPL. This does not prevent you from dual-licensing your own portion of the code base, or shipping non-GPL, non-linked code as part of the same product (Although in the case of things like DLL plugins it gets a bit hazy)
> You have to license any code that links with GPL code under the GPL.
Actually, that's a common, but wrong interpretation. The whole work becomes GPL, but the code that links to GPL code doesn't have to be GPL.
Not true. Just see MySQL for one example. Before Oracle bought MySQL AB (for $1 billion), its revenue from commercial support for its largely GPLv2 software was in the tens of millions. The same may already be true for Monty Program AB, who provide commercial support for MariaDB. The closer you get to infrastructure, the more being open source is a competitive advantage.
This is not the case with most open source software.
Actually, the majority of corporations are leechers, using it as "free beer" and only paying back if they really have to, since many corporations won't use software without support contracts.
Or happen to have some FOSS followers taking business decisions.
Otherwise, I can tell you from my experience working on Fortune 500 consulting projects that very seldom is any money or contribution given back.
It's impossible to convince anyone that they should pay you even $1 if exactly the same thing is available for $0 from somewhere else.
Radiohead's In Rainbows, the Humble Bundles and other pay-what-you-want sales disprove that statement. All had average purchase prices well above the minimum. (That doesn't mean such a model is very profitable compared to fixed prices, of course.)
Even in a purely self-interested point of view, there are reasons for paying (e.g. keeping development active).
Those are one-shot successes, hardly a model to follow for anyone who needs a steady source of income to pay for things like a mortgage, employees and such.
Maybe so; can you point out those cases where the average payment for a pay-what-you-want sale was no higher than the absolute minimum?
> hardly a model to follow for anyone who needs a steady source of income to pay for things like a mortgage, employees and such.
That's a very different claim from what I was contesting.
But in any case, it may work when you have modest needs; Joey Hess just got his second year of development of Git-Annex funded[1]. Of course, not everyone is willing to work for $15k/year, even if it comes with few strings attached, particularly obviously talented and experienced developers like him.
If users can't configure their Facebook privacy settings I'm not too optimistic about their ability to manage root certificate trust bits. It's a good start though.
Ideally OS/browser vendors would drop compromised CAs. That happened with DigiNotar. It's not always an option though: the three largest CAs have an 83.97%[0] market share, and dropping any one of them breaks the internet.
Unfortunately a lot of people seem hell-bent on destroying it rather than improving on top of it.
[0] https://wiki.mozilla.org/CA:PendingCAs