The "problem" that basic income is trying to solve is the massive increase in productive capacity due to automation, of which the increase of unemployment is a symptom. It's not so much that supply would meet demand (and so prevent prices going up), as that basic income allows demand to keep up with supply even as automation makes more with less labor.
I have to disagree: There certainly are people who can keep (very) large systems in their minds and refine and extend them, but they are very rare and most of the people who might call themselves "Software Architects" are not.
It might be said that the definition of "size" for a computer system is exactly the degree to which it requires "architecture" to make it.
Under that interpretation one way to try to become a "Software Architect" is to study large systems and how they succeed or fail at their intended purposes.
Well, I can easily tell you how large systems (large to the degree that they require an architect to understand) get successful: they don't.
Successful systems may get large, but as a general rule, large systems do not get successful. Once in a while a large system is so well written that it still fits inside a person's head, and then it has a chance of success. You never get one of those when there's interference from people who do not code directly.
In my opinion, what works in the small also works in the large, and even though there are caveats to this rule (e.g. synchronous vs asynchronous communications, the latter of which may introduce non-determinism), if you stick to the rule of building simple components that do one thing and do it well, and then combine them to build bigger components that are still simple (in the non-entangled sense), you'll definitely do a good job. But that's easier said than done, because ...
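To illustrate what I mean, here's a toy sketch (hypothetical names, Python just for brevity): each piece does exactly one thing, and the "bigger" component is nothing more than their composition, so it stays just as easy to hold in your head:

    def parse(line):             # one job: text -> record
        user, amount = line.split(",")
        return {"user": user, "amount": float(amount)}

    def only_large(records):     # one job: filter
        return (r for r in records if r["amount"] >= 100)

    def total(records):          # one job: reduce
        return sum(r["amount"] for r in records)

    def report(lines):           # the "bigger" component is just the composition
        return total(only_large(parse(l) for l in lines))

    print(report(["alice,250", "bob,40", "carol,120"]))   # 370.0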
There are two types of problems here:
1. making design mistakes because you take a decision you've never had to take before and then get burned by it - as an example, you might introduce unwanted cyclic dependencies between your components (there's a small sketch of that case right after this list), and if you're talking about different address spaces, things can get really problematic; or you might introduce concurrency concerns (like in the communications between components) that could have been avoided - problems that you won't avoid until you get burned by them.
2. some projects have an incredibly complex business logic - I'm working on such a project right now and the business logic is like a freaking fractal. It's that complex, and with the architecture being based on micro-services, the downside is that people are only willing to understand as much as they need to for the components they are responsible for. On one hand that's good, because that's how you parallelize the development effort. But then people can't see the big picture anymore, and that is usually a failure of business analysis: truth be told, most systems are built with a primary purpose (for example, the primary purpose of an email platform is to send emails in bulk), and then the "stakeholders" lose sight of that and come up with often useless and often conflicting features that distract the developers from that primary purpose (e.g. what use is an email platform that's shitty at sending emails?). The end result in our team is that a single man knows all the business logic details, and dealing with that takes so much of his time that he has no time left for software development at all.
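To make the cyclic-dependency point in nr. 1 concrete, here's a minimal sketch with two hypothetical Python modules that each need something from the other at import time; importing either one blows up, and the equivalent mistake spread across two services in different address spaces is much harder to even notice:

    # --- a.py (hypothetical module) ---
    from b import format_total      # a needs b at import time

    def currency(amount):
        return "$%.2f" % amount

    # --- b.py (hypothetical module) ---
    from a import currency          # ...and b needs a right back: the cycle

    def format_total(order):
        return currency(order.total)

    # "import a" now fails with ImportError: cannot import name 'currency'
    # from partially initialized module 'a'. The cycle has to be broken,
    # e.g. by moving the shared bits into a third module both depend on.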
I do not agree with you on studying large systems. Large systems that have to be studied because they are large are, in my opinion, failures of design that shouldn't be copied. IMHO it's much better to develop a nose for simplicity and always strive to build simple things, while also pushing back on the amount of work needed by simplifying the business logic.
While I agree with you that the people who can deal with the above are incredibly rare, I'll stick to my initial claim ... the people good at it are software developers who (1) are always studying new ways of thinking and learning from others and (2) try to do things and get burned a lot.
And nr. 2, the experience you've gained, is incredibly important. There's no amount of studying you can do that will let you reliably recognize and fix or avoid concurrency problems (and btw, I'm not talking about multi-threading here, but about the general issue of concurrency that can show up in all sorts of communications). There's no amount of studying you can do that will help you recognize entanglement that shouldn't exist in an architecture. Sure, books help, but books don't fix the superficial attitude that rookies have towards such problems. In order to learn, one has to build stuff, suffer from such issues, and then improve.
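As a trivial sketch of the kind of thing I mean (hypothetical handler names, Python asyncio standing in for whatever transport you actually have): two messages fired off together can be processed in either order, so any component that quietly assumes one particular order has a bug you only really learn to smell after it has bitten you:

    import asyncio, random

    async def handle(name, state):
        await asyncio.sleep(random.random() / 100)   # simulated network / processing delay
        state.append(name)                           # shared state mutated on arrival

    async def main():
        state = []
        await asyncio.gather(handle("create-user", state),
                             handle("send-welcome-email", state))
        print(state)   # sometimes one order, sometimes the other

    asyncio.run(main())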
The good news is that if one follows the above advice and gets out of one's comfort zone, one can short-circuit the learning process to about 5 years instead of the usual 10.
"... From there it is only a small step to measuring "programmer productivity" in terms of "number of lines of code produced per month". This is a very costly measuring unit because it encourages the writing of insipid code, but today I am less interested in how foolish a unit it is from even a pure business point of view. My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger."
I sometimes use imagery of a toilet flushing and the thought going down, several flushes if necessary (imaginary water is free!). Also, imagining white noise (visual and auditory) washing out the thought, increasing in power until it wipes away the thought like waves on a beach erasing patterns traced in the sand and then fading out, seems to work well. (I suspect using (imaginary) noise might have some actual biophysical basis for why it works, but that's just speculation.)
What, exactly? Which current startups/tech companies are doing something meaningful? I know a few that are not marketplace (Uber) / ads (Google) / finance (Goldman) companies, but there are not many.
There are a few that provide interesting and new ways to connect with and donate to charities (I forgot their names, but you must've seen at least a couple pass by the frontpage).
There was one developing a robotic spoon with anti-shake stabilization for people with Parkinson's. I remember the look on the faces of the elderly people in the video as they could finally bring peas to their mouths again without spilling everything. That was genuine; they didn't need that "I appear in all the startup videos" guy or hired actors.
There's a bunch of startups working to bring encryption and other privacy-enhancing tools to the masses.
Then there are the guys providing a mobile laundromat service for the homeless. Though I dunno if they were making any profit with that.
A major contribution of the tech industry is how it has fundamentally transformed the way we live, work and interact, in a mostly positive way, towards higher efficiency. The effect is everywhere. I don't think you can find many tech companies which are not contributing something to this landscape. Are some of them possibly "more trivial" than others from some perspective? Maybe. But it doesn't change the fact that they are having positive impacts on the world.
Like what? If you're learning a new technology, and you also happen to have a great helpful idea that you want to implement, that's fine, but usually you just want to implement something, so you pick up an idea and roll with it even if it doesn't benefit mankind.
I think people overestimate the number of useful project ideas floating around which don't require unusual expertise. Some "starter projects" might benefit mankind, but certainly they can't be expected to do so.
The point of spiking trees isn't to hurt people; in fact, the tree-huggers tell the loggers that the trees have been spiked so that they won't attempt to cut them. The point of the tactic is to prevent damage to the forest, not to actually hurt anyone.
No, you are not alone. But you are in the minority. The majority sees the language not as a tool for solving specific problems but as a goal in itself, i.e. a way to endlessly chase its own tail. Hence the rudderless pursuit of the new. The whole industry is in ADD mode - the moment they create something useful, they discard it and start a new quest.
Fortran's most recent standard (Fortran 2008, adopted as an ISO standard in September 2010) was actually adopted more recently than the 2.7 release of Python, so, arguably, Python 2.x has been static for longer than Fortran.
(Obviously, if you said, e.g., Fortran 77, the story would be different.)