Somewhat related, there was also a "capitalisation shift" in the late 1600s where everyone (or their editors) started capitalising more nouns (e.g. "The quick brown Fox jumped over the lazy Dog"), and then switched back in the mid-1700s. Interesting to wonder what might have been.
Donald Trump's Twitter account nearly follows this rule, but not quite:
"4.2 million hard working Americans have already received a large Bonus and/or Pay Increase because of our recently Passed Tax Cut & Jobs Bill....and it will only get better! We are far ahead of schedule."
(Feb 11, 2018)
He missed "Schedule" but incorrectly capitalized "passed". Tsk.
Looks like capitalization being used for general emphasis - if you just scan the capitalized words:
Americans Bonus Pay Increase Passed Tax Cut Jobs Bill
Makes a good summary of the points they want to emphasize, that Americans get a bonus because the government is progressing on the Republican+blue-collar agenda.
Missing "Schedule" isn't a big deal. During that period, English speakers didn't capitalize every noun, just the ones deemed important. Evidently Trump doesn't consider "schedule" important.
Didn't a German branch inherit the English monarchy in the early 18th century? That timing would work out -- people adopting whatever style the monarchs used as the standard.
Elector Georg Ludwig of Hanover became King George I of Great Britain in 1714. This happened because the Act of Settlement 1701 declared Georg Ludwig's mother, Sophia, the heir to the throne (this was done specifically to cut off Jacobite claims), but she died a month before Queen Anne (which was a shame: by all accounts she was brilliant, and she was a patron of the sciences), so the throne went to her son.
I remember when I first started learning German, I thought it was useless. But after getting used to it, when I switched back to English, I was like where are all the nouns? Why is everything lowercase? It's all about what you get used to, I guess.
To maximize information content (entropy), we should use all symbols with roughly equal likelihood - which this rule helps with, since it employs capital letters far closer to equal likelihood than their usual rarity.
Furthermore, said symbols must convey previously unknown information - which this rule does not, as the nouns mean the same thing whether capitalized or not, so the "information" conveyed is redundant.
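A quick stdlib sketch of the first point: capitalising some nouns enlarges the symbol set and spreads the character distribution out a little, so per-character entropy nudges up (the example strings are my own, echoing the fox/dog sentence above):

```python
from collections import Counter
from math import log2

def shannon_entropy(text):
    """Bits per character of the text's symbol distribution."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * log2(n / total) for n in counts.values())

plain = "the quick brown fox jumped over the lazy dog"
capped = "the quick brown Fox jumped over the lazy Dog"

# Splitting 'd' into 'd' and 'D' adds a symbol, so entropy rises slightly.
print(shannon_entropy(plain), shannon_entropy(capped))
```

Of course, as the second point says, the extra entropy buys nothing here: the reader already knows which words are nouns.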
Human languages aren’t usually concerned with maximal information density, though. Multiple layers of redundancy are often present. Features like noun-verb agreement, grammatical gender, pleonasms, and even redundant word pairs ("last will and testament", "vim and vigor", etc.) are used in various languages despite the redundancy, to increase clarity.
In English, nouns are often stressed within a sentence, so capitalizing them could serve as a pronunciation cue, much as other punctuation does (e.g. commas for pauses, question marks for a change in intonation). I don't know about German, though.
It's quite hard to exemplify once I actually think about an example, but I'll try. It's more a global vibe of a piece of code rather than its semantics, though semantics do play a part (using Python as an example):
A young person (say 24) has a few years of coding experience but is very enthusiastic, loves reading programming blogs, is a real stickler for doing it the "right" way, and sometimes forgets other people might have to read their code. They are not as pragmatic as they will be 10 years down the road. They would use vim, or emacs in vim mode, because that's more pro.
Their code might have a good few "clever" one liners that do something complex in a very concise manner. Took a while to compress into a single line and will take a week to read back and understand. Clever, but only if it never breaks or never needs maintenance. They write factories, sometimes of factories. They have 110% test coverage, including trivial stuff that doesn't require it. They would refactor a piece of code many times until it feels it's the right shade of clever. I like these coders because I can learn something from them, but if you have tight deadlines they might actually get in the way. Great to have a couple of these in your car, but don't let them drive.
from itertools import chain
first_set = set(['one', 'two']).union(set(chain.from_iterable([next.key for next in some_yielding_iter()])))
other_value = make_value_factory_factory(first_set).make_value_factory()
10 years down the line these guys realise you write code once and it's read many times over, and the
above turns into a more vanilla multi line readable expression of the same idea.
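For instance, the set-building part of the one-liner might unpack into something like this (the `some_yielding_iter` helper was hypothetical in the original, so it is stubbed out here to keep the sketch runnable):

```python
from itertools import chain

# Stub stand-in for the hypothetical some_yielding_iter() helper,
# yielding objects whose .key is itself an iterable of strings.
class _Item:
    def __init__(self, key):
        self.key = key

def some_yielding_iter():
    yield _Item(['three'])
    yield _Item(['four'])

# The one-liner's set construction, unpacked into readable steps.
keys = (item.key for item in some_yielding_iter())
first_set = {'one', 'two'}
first_set.update(chain.from_iterable(keys))

print(sorted(first_set))  # ['four', 'one', 'three', 'two']
```

Same result, but each step now has a name a maintainer can read at a glance.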
A less enthusiastic version of said 24 year old codes because they know how to, but has no aspirations to get better at their craft. A less organised individual. Pep8 is a new variation of Pepsi Cola? They are not bad coders, just sloppy. Their code works but could be better. They would benefit from pairing with the above person.
I think a good exercise is to look at a person you know and try to match their personality to their code style. Then look at some code which might or might not be theirs and try to guess whether it is.
I've been coding professionally for the last 12 years -- I'm 34 now -- and my code definitely still looks like example 2.
Maybe it's because I code in multiple languages daily, but I tend to not use a lot of the language features for each language. A part of it is definitely thinking about the ones that come after me, and wanting everyone -- from every background -- to be immediately able to reason about the code without having to look up what every syntax feature does.
I guess it's a fine line between depth of language features used vs. readability once one is gone from the project.
It's not an exact science, or even close to approximate. It's more for fun than anything else. The correlation between style and personality is 'loose', and a lot of it is in the perspective of the observer and what they know about the person up front. I guess. Consider this a "fun" or "not fun" exercise, and nothing more :)
So what you're saying is that you guess and then rationalize your successes? (If the correlation is loose, you've really got to measure it to know for sure it even exists.)
I'm pretty much the same, been coding since 16 and I'm 28 now. I've gone through phases but I mostly code like the 2nd style. I mostly use JavaScript but occasionally pipe in another program (say python or C) when needed. I use new language features if they help make things easier - ES6 has some really cool things, but I try not to let my code become cryptic. :)
Like another commenter on this thread, I too find the second example the most readable and I write code like the second example even though I have many years of experience with Python.
The third example is cool but I don't see the immediate value in writing code that way unless I am designing a framework of some kind.
I am of the opinion that code should be written in as simple a manner as possible so that programmers from different backgrounds can get up to speed with the code with little trouble.
An OO language that has the ability to do dispatching on types ought to be exercised in that way.
Checking for the types explicitly in your code creates a brittle system. For a one-off bit of code, fine. For anything that may be extended in the future, it's a bad idea and will have to be refactored to become maintainable.
IMO, having functions that accept different types (when the types are unrelated) is an anti-pattern in most cases; it doesn't matter if you make it more "maintainable" by using single dispatch.
It's fine when you have related types, but in the example
`someFunction` accepts both ThisClass and str, which (I assume) have very different functionality.
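For reference, a hedged sketch of what that single-dispatch version looks like, with `ThisClass` and the handler bodies invented for illustration (the point being that the types still share no interface, the dispatch is just tidier):

```python
from functools import singledispatch

# Hypothetical class standing in for the ThisClass mentioned above.
class ThisClass:
    def __init__(self, name):
        self.name = name

@singledispatch
def describe(value):
    raise TypeError(f"unsupported type: {type(value).__name__}")

@describe.register
def _(value: ThisClass):
    return f"instance named {value.name}"

@describe.register
def _(value: str):
    return f"plain string {value!r}"

print(describe(ThisClass("a")))  # instance named a
print(describe("b"))             # plain string 'b'
```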
You see this anti-pattern a lot in dynamically-typed languages. For example, in `knex`, an SQL query builder for node, `.where('id', 1)` and `.where({ id: 1 })` are both valid and build the same query.
The second example is the most readable, I agree. What I was trying to convey there was first the use of "type" vs "isinstance", and second the use of non-PEP8 naming in Python, which is frowned upon. Then I wanted to show there is another way of handling (larger) blocks of "if this type/instance then this, if that then that, etc.". I use it as a getter when I have either an id or an instance, and it feels smoother to me - Python's version of overridden methods. Both are fine of course.
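The id-or-instance getter described above might look something like this minimal sketch (the `User` class and lookup table are invented for illustration):

```python
# Hypothetical model class and lookup table for the sketch.
class User:
    def __init__(self, user_id, name):
        self.user_id = user_id
        self.name = name

_USERS = {1: User(1, "ada")}

def get_user(user_or_id):
    # Accept either a User instance or a bare id, normalise to a User.
    if isinstance(user_or_id, User):
        return user_or_id
    return _USERS[user_or_id]

u = User(2, "bob")
print(get_user(u).name)   # bob
print(get_user(1).name)   # ada
```

Callers never need to care which form they hold, which is the "smoother" feel being described.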
I'm not trying to start an offtopic discussion, but the unenthusiastic 24-year-old's version looks like the clearest and best option to me. Personally, I would refactor it to three lines.
Except grabbing types like that in code is usually a code smell. It can become brittle and hard to extend as time goes on. It's suitable when you need something now or you know you'll only ever have a few (less than 4 or 5, preferably no more than 2) cases. But either way you ought to be refactoring to something more like the third example.
Well, calling type() is usually bad; isinstance() is a little better. But if you're expecting a ThisClass and only falling back on strings, as implied by the example, surely this is the classic situation where duck typing is handy.
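One way that duck-typed fallback might be sketched (EAFP style; `ThisClass` and the `.key` attribute are invented names for illustration): try the rich interface first and fall back to treating the input as a string, with no isinstance check at all.

```python
# Hypothetical class standing in for the ThisClass from the example.
class ThisClass:
    def __init__(self, key):
        self.key = key

def get_key(value):
    try:
        return value.key      # quacks like ThisClass
    except AttributeError:
        return str(value)     # fall back: treat it as a string

print(get_key(ThisClass("abc")))  # abc
print(get_key("xyz"))             # xyz
```

Anything that happens to expose a `.key` attribute works too, which is the whole point of ducking.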
Personally, I consider code I don't understand after reading it thrice a much worse code smell.
Less sarcastically: I think you're being too dogmatic. I don't like the idea of 'code smells', because it encourages you to judge code not by performance, maintainability, or readability, but by the mere fact that someone has decided it's a code smell. Often it's true that the pattern can be used to write utterly shit code. Code smells are, IMHO, being used as a substitute for common sense. My common sense tells me that the first example is the cleanest solution. If the code keeps you from obtaining the performance you need, or if you add more checks in other places which make the code unreadable, by all means, refactor it.
Interesting example to give. I always think it's good to have programmers with quite in-depth knowledge, and programmers who are more superficial in their programming knowledge but get stuff done.
In my experience it's not so much about age, so I'm not sure that part is relevant here. (For example, I'm not sure using vim is something younger programmers tend to learn; I'd guess more of the 'older generation' will know it.)
Yeah, age is not really relevant, but I went for a complete example. As for vim, or emacs in particular, I have actually met a few of these enthusiastic 20-something programmers who went there. Some stayed, some came back frightened. But they all tried (and I modelled the example on them).
I fail to see how this relates more to a person's individual personality than programming habits or attitude. It's not unreasonable to see two programmers of different skill levels producing the same code simply because they're both showing up to collect a paycheck. It wouldn't matter if one is a perfectionist, depressed, hyperactive, high-strung, lazy or proactive. I'm relatively sure code style is tied more to experience rather than personality.
Wow that's a pretty good idea - I'll see if I can find a way to make this. I think I'd need some kind of map API that lets me plot coordinates and define mouse-over text for each pin.
Interested to know what folks think - if there are any obvious mistakes, if you have any requests for strings to map, or any cool ideas we could try with the data.
Thirty spokes share the wheel's hub;
It is the centre hole that makes it useful.
Shape clay into a vessel;
It is the space within that makes it useful.
Cut doors and windows for a room;
It is the holes which make it useful.
Therefore profit comes from what is there;
Usefulness from what is not there.