It's great to see how others are doing serverless development. It's definitely not something you learn to do well overnight. I'm glad to see someone get past the learning curve and write about what works for them.
I've been experimenting with serverless for some time now and came to many of the same conclusions written about here. The biggest takeaway for me is that there are pitfalls to an overreliance on lambdas. You really need to offload as much as you can to the other serverless solutions AWS provides.
I've been using AppSync for my GraphQL API instead of API Gateway + Lambda, and I have had a good experience with it. A lot of logic can be offloaded into mapping templates, making Lambdas unnecessary in many cases. Debugging is still a bit of a pain for me, but the end result is fast, cheap, and reliable.
The privacy risk of installing extensions that have full access to every page you visit is enormous. It includes the risk of exposing passwords and credit card numbers.
Using a bookmarklet is a much better idea since users can control where they run.
And for a simple bookmarklet like this the source would be so trivial that you can read it and verify that all it does is redirect to the computed url.
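As a sketch of how small that source could be: the builder below computes the redirect target, and the bookmarklet is the same expression inlined into a `javascript:` URL. The host and path here are made-up placeholders, not the service's real endpoint.

```javascript
// Hypothetical helper: compute the discussion URL for the current page.
// 'example-comments.test' is a placeholder domain, not the real service.
function discussionUrl(pageUrl) {
  return 'https://example-comments.test/discuss?u=' + encodeURIComponent(pageUrl);
}

// The entire bookmarklet is this one auditable line:
// javascript:location.href='https://example-comments.test/discuss?u='+encodeURIComponent(location.href)
```

Anyone can read that one line and confirm it does nothing but redirect.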
No, the comment indicator would be the missing feature. But personally I would very happily skip that feature for the huge security and privacy advantage.
You could display the comments in-site but then you are bringing back all of the security and privacy concerns. (I guess you could have some middle-ground if the code to add the iframe is trivially simple?)
What are your thoughts on the built-in network request disabling option or Chrome's option to disable unless clicked?
I know you use Firefox, but for me these options seem to be sufficient for Chrome at least (and are already available).
Let me know if I'm mistaken though; I want to make sure that all such concerns are addressed!
I also think the badge could be dropped, but in initial stages when traffic is low I think it would really help with engagement.
It is nice, but if I am not getting a comment indicator anyways I would rather just not trust your extension at all.
> Chrome's option to disable unless clicked?
This is pretty good. But I would still prefer a 1-line auditable bookmarklet rather than a large extension.
> I also think the badge could be dropped, but in initial stages when traffic is low I think it would really help with engagement.
Yeah, I can see that. It would be interesting if you could get it with just URL access. You could still check for comments and open the discussion page, but you wouldn't need the full site-access permission. Due to how the permission model works this would probably need to be a separate extension, but it would be interesting to have a "Netvyne Lite" version with far fewer permissions.
...that being said logging every visited site is still a lot of access. But I can see how the comment count is a valuable feature for some users even if it isn't worth the cost for me.
Also, for the comment-count checks, are you protecting the URL in any way? For example, checking for `sha3("netvyne-" + url)` instead of the raw URL, so you aren't sending the URL of my URL-accessible Google Docs (for example) to your server.
We did try to reduce the permissions as much as possible, but I believe at this time every permission is required by some feature. I get your point though, and I'll try to see what else we can do in this regard.
As mentioned elsewhere in this thread, just to be clear we don't log every visited site (or log browsing in general); the only time data is added to the database is when the user actively adds a comment/creates content.
Currently we send the URL as is; it's only stored, though, if you are writing a comment or sharing the site with friends. I do like the hashing idea, but ultimately when someone does leave a comment or share it, we do store it without hashing so later users can filter it and so on. Please let me know if I misunderstood your question!
> We did try to reduce the permissions as much as possible, but I believe at this time every permission is required by some feature
I appreciate that. And I guess it is partly the fault of the browser that you can't ask for just permissions that the user wants to use. From my point of view the set of features you chose isn't worth giving you the permissions. However I can see that for other users they do want these features. It seems that there isn't a perfect option with one extension here.
> but ultimately when someone does leave a comment or share it...
Yes, but there is a huge difference from "site I left a comment on" and "every page I visit". For example if I have a secret document that is protected by an unguessable URL I am not going to leave comments on it, so it would be great if it wasn't sent to your service. Of course once I do leave a comment it makes sense to send it to your service so that you can fetch metadata and other features. (Although it would be cool if there was an option to never reveal the URL as well.)
As someone with experience as a startup employee and founder, having negotiated startup offers from both sides of the table and seen several unfavorable and favorable liquidity scenarios play out, here is the advice I give people:
Treat stock options in an early stage startup as if they are worthless. Don't make salary/equity tradeoffs and instead, negotiate for both the "high salary" and "high equity".
Stock options have a number of "gotchas" that may not be immediately obvious:
1. Exercise price and exercise window. It takes a lot longer for a startup to exit than most people would like to think (if it exits at all); 10+ years in my experience. You're probably not going to stay with the company that long, so when you leave the company and want to keep your shares, you only have so much time to exercise them (this is the exercise window, typically 3 months). It could cost you many thousands of dollars to exercise, and there is no guarantee your stock will be worth anything. You are essentially now an investor in the company, and you are afforded none of the protections that the company's venture investors received.
2. Liquidation preference. In a liquidity event, the company's venture investors get paid back some multiple of their original investment (typically 1-2x) before any common shareholders get paid (which includes options holders). If the company is not valued above a certain threshold at liquidity, then common shareholders get nothing. As the company takes on new investors, this liquidation preference starts to add up, and as an employee you are not going to be told what this amounts to. You could exercise your shares, pay the company money, have the company exit for an apparently attractive amount, and then get nothing because the liquidation preference threshold wasn't met. The company exits, and you lose money.
3. Tax treatment. Assuming the company exits while you are still an employee (i.e. you have not exercised your shares) or the company has an attractive exercise window (10 years is not uncommon nowadays), and the valuation is high enough not to trigger liquidation preference, then you will make some money. Unfortunately, the amount you earn will get taxed as income, not capital gains, and the difference is significant. To be taxed as capital gains, you have to exercise your options and hold on to the shares for at least a year before selling them. Some companies offer early exercise benefits, but if you do this then you could potentially lose money as I described above.
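To make point 2 concrete, here's a toy waterfall with made-up numbers, simplified to a single non-participating preference tier (real cap tables are messier):

```javascript
// Investors take min(exit, invested * multiple) off the top; whatever
// remains is the pool shared by common holders (including option holders).
function commonPool(exitValue, invested, prefMultiple) {
  const preference = invested * prefMultiple;
  return Math.max(0, exitValue - preference);
}

// $50M raised at a 1.5x preference means a $75M hurdle: a
// headline-friendly $70M exit leaves common with $0, while a
// $100M exit leaves $25M to split among all common shares.
```

If you paid thousands to exercise and the exit lands under that hurdle, you lose money on a "successful" exit.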
RSUs, on the other hand (essentially just plain stock like founders get), do not have to be exercised and are taxed as income on their value the moment they vest (or on the value of the entire grant on the date of issue, assuming you file an 83(b) election with the IRS). These have value, and I would be comfortable with a salary/equity tradeoff for them. This is something you should ask about during negotiation. If RSUs are off the table, then you can try asking the company to pay the [early] exercise price for you as a signing bonus.
I would bet that modem is using a phased antenna array [1]
(I'm guessing this is what "solid state, no moving parts" means on the product page). With the right sensors (gyros, etc), a solid-state system like that should be able to keep a pretty tight lock on the satellite even in the roughest conditions.
Look at the Terminal Equipment tab of the page linked below. It shows the Cobham (a different manufacturer from Thales) antenna for the same satellite service. It looks like a set of six or so patch antennas. It's not clear whether it is switching between patches or combining the signals to/from the patches. If the latter, it is indeed a phased array. That seems likely, because another manufacturer, Intellian, describes their antenna as a 12-element phased array. I'm guessing the Thales unit also uses a phased array.
Thank you for saying this. Despite taking all precautions, I was infected in Jan 2021 before becoming eligible for the vaccine. Once I recovered, I was comfortable venturing outside again and doing some traveling, but there was practically zero information about natural immunity following infection. As states eased travel restrictions, these only applied to people who had been vaccinated. There was no exception for folks who were previously infected (at least in the states I researched). At a time when vaccine appointments were scarce and there were long waiting lists (my how times have changed), I still got the first appointment I could so I could get my vaccine card.
The lack of guidance for those who were previously infected was strange. When I'd read about immunization rates and progress towards "herd immunity", previous infections also weren't included in these stats, which again was strange.
I'm in Europe now, and here it seems like natural immunity is essentially as valid as vaccination, at least for travel.
I can't explain why previous infections seem so widely discounted in the US. It feels like an intentional omission, which makes it hard for me to trust at face value what I hear and read about the pandemic.
Completely agree. Most of the criticism I've seen comes from people who don't have a good grasp of the platform or tools, or haven't adapted their style to the "serverless mindset".
There's definitely a new technique I've had to teach myself in order to build serverless systems effectively. Today I'm getting great results from the approach.
Yes, absolutely. And to be honest, getting started with it is pretty rough for a software developer who doesn't have any "cloud" experience. Suddenly that developer needs an understanding of AWS in general, IAM, CloudFormation, and all the services that can be utilized to avoid having to write custom code. It's a steep and long learning curve.
Fully agree. The problem gets even worse at bigger orgs where there's a separate, security-oriented team responsible for setting/managing IAM roles and KMS keys. I think the AWS ecosystem can be very discouraging for younger engineers in these situations. An innovative PoC in your downtime seems a lot more daunting when you need to learn about 4-5 AWS services that are unrelated to code execution, and you have to make a request to devops for a new IAM role just to locally test a prototype you built over the weekend.
I'm just really sad reading all these comments, knowing that some people will make decisions based on them. PLEASE DON'T: most of the commenters have no idea what they are talking about.
Ugh, these arguments frustrate me, but here I go getting sucked into another...
> Actually, I am setting up a serverless app now. 4-5 lambdas, s3 buckets, RDS, IAM roles, and 6 weeks (easily) getting everything into CFT's and Ansible so that I can deploy this relatively small app.
I'm sorry but if this took you 6 weeks then you're doing it wrong. To be fair, I haven't tried using CFTs and Ansible for my serverless deployments, but then again these seemed like big time overkill to me. Your experience seems to back that up.
Look, I don't want to say infrastructure-as-code is a bad thing, but don't blame the infrastructure when it's your choice of tools that is the problem. The AWS CLI makes it so easy to write a bash script to deploy a small project. But hey if you think an extra 6 weeks is worth it to use Ansible, then by all means...
I can't believe I'm wasting my time on another testing debate.
Speaking as a formerly young and arrogant programmer (now I'm simply an arrogant programmer), there's a certain progression I went through upon joining the workforce that I think is common among young, arrogant programmers:
1. Tests waste time. I know how to write code that works. Why would I compromise the design of my program for tests? Here, let me explain to you all the reasons why testing is stupid.
2. Get burned by not having tests. I've built a really complex system that breaks every time I try to update it. I can't bring on help because anyone who doesn't know this code intimately is 10x more likely to break it. I limp to the end of this project and practically burn out.
3. Go overboard on testing. It's the best thing since sliced bread. I'm never going to get burned again. My code works all the time now. TDD has changed my life. Here, let me explain to you all the reasons why you need to test religiously.
4. Programming is pedantic and no fun anymore. Simple toy projects and prototypes take forever now because I spend half of my time writing tests. Maybe I'll go into management?
5. You know what? There are some times when testing is good and some times where testing is more effort than it's worth. There's no hard-set rule for all projects and situations. I'll test where and when it makes the most sense and set expectations appropriately so I don't get burned like I did in the past.
One of the dark arts of being an experienced developer is knowing how to calculate the business ROI of tests. There are a lot of subtle reasons why they may or may not be useful, including:
- Is the language you're using dynamic? Large refactors in Ruby are much harder than in Java, since the compiler can't catch dumb mistakes
- What is the likelihood that you're going to get bad/invalid inputs to your functions? Does the data come from an internal source? The outside world?
- What is the core business logic that your customers find the most value in / constantly execute? Error tolerances across a large project are not uniform, and you should focus the highest quality testing on the most critical parts of your application
- Test coverage != good testing. I can write 100% test coverage that doesn't really test anything other than physically executing the lines of code. Focus on testing for errors that may occur in the real world, edge cases, things that might break when another system is refactored, etc.
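A tiny illustration of the coverage point, using an invented `discount` function: both tests below yield 100% line coverage, but only one would catch an off-by-one on the boundary.

```javascript
// Invented example: orders of $100 or more get 10% off.
function discount(total) {
  return total >= 100 ? total * 0.9 : total;
}

// Coverage-only "test": executes both branches, asserts nothing useful.
function weakTest() {
  discount(50);
  discount(150);
  return true; // 100% line coverage, zero confidence
}

// Behavior test: pins the boundary and the actual math.
function realTest() {
  return discount(100) === 90      // boundary is inclusive
      && discount(99.99) === 99.99 // just below the boundary
      && discount(200) === 180;
}
```

If someone later changes `>=` to `>`, the coverage-only test still passes; the behavior test fails immediately.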
I now tend to focus on a black box logic coverage approach to tests, rather than a white box "have I covered every line of code" approach. I focus on things like format specifications, or component contract definitions/behaviour.
For lexer and parser tests, I tend to focus on the EBNF grammar. Do I have lexer test coverage for each symbol in a given EBNF, accepting duplicate token coverage across different EBNF symbol tests? Do I have parser tests for each valid path through the symbol? For error handling/recovery, do I have a test for a token in a symbol being missing (one per missing symbol)?
For equation/algorithm testing, do I have a test case for each value domain? For numbers: zero, a negative number, a positive number, min, max, and values that yield the min/max representable output (and one above/below this, to overflow).
I tend to organize tests in a hierarchy, so the tests higher up only focus on the relevant details, while the ones lower down focus on the variations they can have. For example, for a lexer I will test the different cases for a given token (e.g. '1e8' and '1E8' for a double token), then for the parser I only need to test a single double token format/variant as I know that the lexer handles the different variants correctly. Then, I can do a similar thing in the processing stages, ignoring the error handling/recovery cases that yield the same parse tree as the valid cases.
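As a minimal sketch of that layering, here's a stand-in lexer rule for a double token; the grammar (optional sign, digits, optional fraction and exponent) is my assumption, not any particular spec. Lexer-level tests would cover all the variants, and parser-level tests then only need one canonical form such as `1e8`.

```javascript
// Stand-in lexer rule for a double literal: optional sign, digits,
// optional fraction, optional exponent ('e' or 'E').
const DOUBLE = /^-?\d+(\.\d+)?([eE][+-]?\d+)?$/;

function isDouble(tok) {
  return DOUBLE.test(tok);
}
```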
I think you missed an important one, which is: how much do bugs even matter?
A bug can be critical (literally life-threatening) or unnoticeable. And this includes the response to the bug and what it takes. When I write code for myself I tend to put a lot of checks and crash states rather than tests because if I'm running it and something unexpected happens, I can easily fix it up and run it again. That doesn't work as well for automated systems.
You should understand when those tests are low effort: look for frameworks that help you develop those tests more easily, or frameworks that remove the requirement entirely, e.g. Lombok for generating getters/setters. You only have to unit test code that you wrote.
High test coverage comes from a history of writing tests in the codebase. Sadly, people include feature and functional tests in the coverage.
There's an easier answer and that is - as an experienced programmer - don't write any tests for your 'toy project' - at least not at the start.
The missing bit in the discussion is 1) churn, and 2) a devs ability to write fairly clean code.
Early-stage and 'toy' projects may change a lot, in fundamental ways. There may be total rewrites as you decide to change out technologies.
During this phase, it's pointless to try to 'harden' anything because you're not sure what it's entirely supposed to do, other than at a high level.
Trying Amazon DynamoDB, only to find a couple of weeks in that it's not what you need, means it probably wouldn't make sense to run it through the gamut of tests.
Only once you've really settled on an approach, and you start to see the bits of code that look like they're not going to get tossed, does it make sense to start running tests.
Of course, the caveat is that you'll need enough coding experience to move through the material quickly, in that no single bit of code is a challenge; it's just that 'getting it on the screen' takes some labour. The experience of 'having done it already many times' means you know it's 'roughly going to work'.
I usually try to 'get something working' before I think too hard about testing, otherwise you 3x the amount of work you have to do, most of which may be thrown out or refactored.
Maybe another way of saying it, is if a dev can code to '80% accuracy' - well, that's all you need at the start. You just want the 'main pieces to work together'. Once it starts to take shape, you've got to get much higher than that, testing is the way to do that.
This is the approach I take as well, and also think about it in terms of “setting things in stone”.
When you’re starting out a project and “discovering” the structure of it, it makes very little sense to lock things in place, especially when manual testing is inexpensive.
Once you have more confidence in your structure as it grows you can start hardening it, reducing the amount of manual testing you do along the way.
People that have hard and fast rules around testing don’t appreciate the lifecycle of a project. Different times call for different approaches, and there are always trade offs. This is the art of software.
I agree with all your points. Have you looked at any strongly typed functional language from the ML family, like OCaml, F#, or Rust, or something similar like Haskell?
If you do make a slight tweak somewhere, the compiler will tell you there’s something broken in obscure place X that you would find out at runtime say with Ruby or Python.
THAT'S the winning formula. I've written so many tests for Python ensuring a function's arguments are validated, rather than testing its core logic/process.
Not so fast. For some problems it's great, for other ones it's not.
Have you tried writing numeric or machine learning core in Haskell? You'll notice that the type system just doesn't help you enforce correctness. Have you tried writing low level IO? The logic is too complex to capture on types, if you try to use them you'll have a huge problem.
> Have you tried writing low level IO? The logic is too complex to capture on types, if you try to use them you'll have a huge problem.
Rust's got a very Haskell-like type system, but it's a systems programming language. People are literally writing kernels in it. I think this is a pure-functional-is-a-bad-way-to-do-real-time-I/O thing, not a typing thing.
Hum... Pure functional is a bad way to do real time I/O, but my point was about types.
If you try to verify the kind of state machines that low level I/O normally use with Haskell-like types, you will gain a huge amount of complexity and probably end with more bugs than without.
Low-level I/O doesn't seem to have that much complexity, unless you're trying to handle all of the engineers' levels of abstraction at once.
Let's say you're writing a /dev/console driver for an RS-232 connection. Trying to represent "ring indicator", "parity failure", "invalid UTF-8 sequence", "keyboard interrupt", "hup" and "buffer full" at the same level in the type system will fail abysmally, but that's not a sensible way of doing it.
I could definitely implement this while leveraging the power of Rust's type system – Haskell would be a stretch, but only because it's side-effect free and I/O is pretty much all side-effects.
Really give it a go! It is beyond worldly. If you think TypeScript is great, then OCaml/F# will make it look inferior.
If you're doing React + TypeScript, give ReasonML a go; it's syntax sugar on top of OCaml that compiles via BuckleScript. OCaml has the fastest compiler out there.
How's the tooling for that? Haskell has the "best" compiler and garbage tooling that should be built on top of the ol' Rolls-Royce engine it's rocking.
Meanwhile the plugins and IDE integrations for Reason/Ocaml and F# are ready to go from the start and work pretty well.
Just a data point: with my current team, everyone jumped right in and wrote code and tests from the start. The tests were integration tests that depended on the test database. This worked great at first, but then tests started failing sporadically as the suite grew. Turning off parallelism helped a bit, but not entirely. Stories started taking longer too, where features entailed broad changes; it felt like every story was leading to merge conflicts and interdependency, where one person didn't want to implement their fix until someone else finished something that would change the code they were going to work on.
So then I came along and said, "hey, why don't we have any unit testing?" and it turns out because it was pretty impossible to write unit tests with our code. So I refactored some code and gave a presentation on writing testable code - how the point of unit testing isn't just to have lots of unit tests, how it's more that it encourages writing testable code, and that the point of having testable code means that your codebase is then easier to change quickly.
I even showed a simple demonstration based off of four boolean parameters and some simple business logic, showing that if it were one function, you'd have to write 16 tests to test it exhaustively, but if you refactored and used mocking, you'd only have to write 12. That surprised people. Through that we reinforced some simple guidelines of how we'd like to separate our code, focusing on pure functions when possible, making layers mockable. We don't even have a need for a complicated dependency injection framework as long as we reduce the # of dependencies per layer.
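The arithmetic behind 16 vs. 12 works out like this (the exact decomposition is my guess at the shape of that demo): four booleans through one function is 2^4 exhaustive cases, while splitting it into two two-boolean functions plus a combining layer, tested against the mocked results of the inner two, gives three groups of 2^2.

```javascript
// One function of four booleans, tested exhaustively:
const exhaustive = 2 ** 4; // 16 cases

// Refactored: g(a, b) and h(c, d) tested exhaustively (4 each),
// plus the combiner tested over the 2x2 grid of mocked g/h results:
const refactored = 2 ** 2 + 2 ** 2 + 2 ** 2; // 12 cases
```

The gap widens quickly as parameters are added, since the exhaustive count grows exponentially while the decomposed count grows roughly linearly with the number of pieces.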
Since that time we've separated our test suite into integration tests and unit tests, with instructions to rewrite integration tests to unit tests if possible. (Some integration tests are worthwhile, but most were just because unit tests were hard at that time.) We turned parallelism back on for the unit test suite. The unit tests aren't flaky, and now people are running the unit test suite in an infinite loop in their IDE. Over that time our codebase has gotten better structured, we have less interdependence and merge conflicts, morale has improved, velocity has gone up.
Anyway, according to this article it sounds like we've done basically the opposite of what we should have done.
Sorry, nothing that's so good for the general public, but the general gist is that the goal for a test is something that is simultaneously small, fast, and reliable.
And that by following those three principles, it kind of drives you to writing testable code. Because if you don't, you might have tests that are only small (simple integration tests), or only fast and reliable (testing unfactored code with lots of mocking) - and that the only way to do all three is by refactoring to write testable code that has good layer separation and therefore minimal mocking requirements.
There was stuff in there about how mutable state and concurrency leads to non-determinism and therefore unreliable tests, which is part of what justifies pushing towards pure functions that can be easily unit tested without mocking.
Only half your time? You're doing testing wrong if it doesn't take 80% of the time ;-)
I have a love hate relationship with testing. Working for myself as a company of one, some of the benefits testing bring just don't apply. I have a suite of programs built in the style of your point (1). The programs were quick to market and hacked out whilst savings ran out not knowing if I would make a single sale.
Sales came, customer requests came, new features were wanted, sales were promised "if the program could just do xyz". More things were hacked on. The promise of "I will go back and do this properly and tidy up this god-unholy mess of code" slowly slipped away, until I stopped lying to myself that I would do it.
Yes, there was a phase of fix one problem, add another, but I have most of that in my head now, and it has been a long time since that happened.
Not a single test. Developing the programs was "fun" and exciting. Getting requests for features in the morning and having the build ready by lunch kept customers happy.
Now I am redoing the apps as a web app for "reasons". This time I am doing it properly, testing from the start. I know exactly what the program should do and how to do it, unlike the first time, when I really had no idea. But still, I come to a point and realise the design is wrong and I hadn't taken something into consideration. Changing the code isn't so bad; changing the tests, O.M.G.
I am so fed up with the project, I do all I can to avoid it; it is 2 years late, and I wish I'd never started it. The codebase has excellent testing, mocks, no little hacks; engineering-wise I am proud of it. The tests have found little edge cases that would otherwise have been found by customers, so that was avoided. But there is no fun in it. No excitement. It is just a constant drudging slog.
I'm trying to avoid dismissing testing altogether, as I really want to see the benefit of it in a substantial production codebase. If I ever get there. At the moment, the codebase is the best-tested unused software ever written, IMO.
Well, then stop! Delete all the tests right now and do it however you want to do it.
The thing about testing that never really gets talked about is: what's the penalty for regressions? What are the consequences if you ship a bug so bad the whole system stops working?
Well, if you're building a thing that's doing hundreds of millions in revenue, that might be a big deal. But you? You're a team of one! You rollback that bad deploy and basically no one cares!
Your customers certainly don't care if you ship bugs. If it was something important enough where they REALLY cared, they wouldn't be using a company of one person.
So, go for it. Dismiss tests until you get to a point where you fear deploying because of the consequences. Then add the bare minimum of e2e tests you need to get rid of that fear, and keep shipping.
There is another cost, if you try and fix a bug and break something else. If your codebase becomes so brittle that you feel like you can't do anything without breaking something else, that makes it unbearable to keep going with that project.
Having said all that, I find it's better to skip some unit tests when building your own project. It can be better to do the high-level tests (some integration, focused on the system) to make sure the major functionality works. In many cases, for an app that's not too complicated, you can just have a rough manual test plan. Then move to automated tests later on if the app gets popular, or the manual testing becomes too cumbersome.
It's still good to have a few unit tests for some tricky functions that do complicated things so you aren't spending hours debugging a simple typo.
Sure. My point wasn't really whether to write unit tests or not. It's more, do what works for you / your team to enable you to ship consistently. For the OP, spending all of their time writing tests clearly isn't working for them if they haven't shipped at all.
> Well, if you're building a thing that's doing hundreds of millions in revenue, that might be a big deal. But you? You're a team of one! You rollback that bad deploy and basically no one cares!
Human lives, customer faith in the product, GDPR violations, HIPAA violations, lost data, time/resources on space missions.
> But you? You're a team of one! You rollback that bad deploy and basically no one cares!
I somehow doubt that comparing this 'team of one project' to the Mars Climate Orbiter leads to any useful conclusions. It's a nice bit of hyperbole though!
Rollbacks can create data loss. Also, rollbacks are not always a viable option.
Anyway, this was to address the issue of a bug. I took the comment of "it's just a team of one" as a way of trying to justify not putting your engineering due diligence into delivering a product to the customer.
> Rollbacks can create data loss. Also, rollbacks are not always a viable option.
I've delivered a number of products (in the early days of my career) to clients where data loss happened and while not fun, it also didn't significantly harm the product or piss off said client. I saw my responsibility primarily to do the best I could and clearly communicate potential risks to the client.
> I took the comment of "it's just a team of one" as a way of trying to justify not putting your engineering due diligence into delivering a product to the customer.
That I do agree with, but 'due diligence' is a very vague concept. I guess honest communication about the consequence of various choices is perhaps the core aspect?
And of course 'engineering due diligence', in my opinion, includes making choices that might lead to an inferior result from a 'purely' engineering perspective.
> not putting your engineering due diligence into delivering a product to the customer.
Yes. This is exactly what this person should do. Stop worrying about arbitrary rules and just deliver the damn product already. A hacky, shitty, unfinished product in your customer's hands that can be iterated on beats one that never got shipped at all every day of the week.
LOL. I guess I was being a bit conservative with that estimate!
I've worked for myself as well and know what you mean. In my situation, I was able to save myself from testing by telling my customers "this is a prototype so expect some issues".
My observation around codebases that weren't written with/for unit tests is that they always end up being a monolith that you have to run all of in order to run any of. Having decent code coverage means that it's at least possible to run just that one function that fails on the second Tuesday of the month when that one customer in Albania logs in.
Your points are fine, but I do not see how they apply to the blog post.
Overall, the blog post says that unit tests take a long time to write compared to the value they bring; instead (or in addition), focus on more valuable automated integration/e2e tests, because writing them is much easier than it was 10-20 years ago.
My point is that OP is in step 1 of 5. It's not to say there aren't any good thoughts there, but the overall diatribe comes from a place of inexperience, so take their advice with a grain of salt.
I don't think OP is at step 1. OP is not arguing against testing, although the title could lead one to think that. OP is arguing for better, more reasonable testing.
OP appears to be arguing for what you call step 5 of 5. They're not saying you should never unit test, only that unit testing should be avoided where it doesn't make sense, and that this happens more often than step-3 people like to think. Furthermore, the main thrust of the article is that integration testing is a viable replacement for unit testing in many situations, which doesn't relate to your overall point at all.
Step 5 touches on what I like to call "engineering judgment".
One of the things that distinguishes great engineers is that they make good judgment calls about how to apply technology or which direction to proceed. They understand pragmatism and balance. They understand not to get infatuated with new technologies, but not to close their minds to them either. They understand not to dogmatically apply rules and best practices, but not to undervalue them either. They understand the context of their decisions: for example, sometimes code quality is more important, and other times getting it built and shipped is more important.
As in life, good and bad decisions can be the key determiner of where you end up. You can employ a department full of skilled coders and make a few wrong decisions and your project could still end up a failure.
Some people never develop good engineering judgment. They always see questions as black and white, or they can't let go of chasing silver bullet solutions, etc.
Anyway, it's one thing to understand how to write unit tests. It's another thing to understand why you'd use them, what you can and can't get out of them, and what the costs are, and to take all that into account to make good decisions about how and where to use them.
This. These days I write unit tests only for functions whose mechanism is not immediately clear. The tests serve as documentation, specification of corner cases, and assurance for me that the mechanism does what it was intended to do.
I keep tests together with the code, because of their documentation/specification value.
I do not write tests for functions which are compositions of library functions. I do not test pre/post-conditions (these are something different).
And I definitely do not try to have "100% test coverage".
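To illustrate what "tests as documentation" can look like (a minimal sketch; the function and its behavior are my invention, not from the comment): the function below isn't obvious at a glance, so a handful of assertions double as its specification of corner cases.

```python
# Hypothetical example: a function whose mechanism isn't immediately clear,
# with unit tests that serve as its documentation and corner-case spec.
def clamp_percent(value: float) -> int:
    """Round a ratio in [0, 1] to a whole percent, clamping out-of-range input."""
    return min(100, max(0, round(value * 100)))

# The tests document corner cases rather than re-proving library code:
assert clamp_percent(0.5) == 50
assert clamp_percent(1.7) == 100    # clamped, not 170
assert clamp_percent(-0.2) == 0     # clamped, not -20
assert clamp_percent(0.125) == 12   # round() rounds half to even: 12.5 -> 12
```

The last case is exactly the kind of non-obvious mechanism (Python's banker's rounding) that's worth pinning down in a test; a composition of well-understood library calls wouldn't earn one.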
Personally, I fast-tracked through steps 2-4 out of sheer laziness, but that's definitely my progression with regard to testing and pretty much everything related to code quality: comments, abstraction, purity, etc.
More generally:
- Initially, you are a victim of the Dunning–Kruger effect, standing proudly on top of Mount Stupid. You think you can do better than the pros by not wasting time on "useless stuff".
- Obviously, that fails. You realize the pros may have good reasons for working the way they do, so you start reading books (or whatever your favorite learning material is) and blindly follow what's written. This fixes your problems and replaces them with other problems.
- After another round of failure, you start to understand the reasoning behind the things written in the books. Now, you know to apply them when they are relevant, and become a pro yourself.
One thing I do religiously all the time is putting asserts everywhere. It's the only thing you can go crazy on. The rest is indeed always a balancing act.
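A small sketch of that habit (hypothetical function and invariants, just to show the shape): assertions state the invariants inline and fail loudly during development, and in Python they can be stripped in production with the `-O` flag.

```python
# Hypothetical example: asserts document preconditions at the point of use.
# Under `python -O`, assert statements are compiled out entirely.
def take_slice(buffer: bytearray, offset: int, size: int) -> memoryview:
    """Return a writable view of buffer[offset:offset+size]."""
    assert offset >= 0, "offset must be non-negative"
    assert size > 0, "size must be positive"
    assert offset + size <= len(buffer), "slice runs past end of buffer"
    return memoryview(buffer)[offset:offset + size]

view = take_slice(bytearray(16), 4, 8)
assert len(view) == 8
```

The asserts cost almost nothing to write, and when one fires it points at the exact invariant that was violated rather than some downstream symptom.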
> > Start with the book: “Head First JavaScript Programming”.
> Please don't. 6yo book doesn't worth it. Even one year is a lot for modern tech.
I'm sorry, I know you're parroting conventional wisdom here, but have you actually read the book? Did you look at the table of contents? It looks like a great introduction to me, covering the basics that I think are a prerequisite to learning any of the more recent language features. "Don't judge a book by its cover", as they say!
> We believed that it was poor time management and that if we just worked a bit harder and had more self-discipline, we could do the job
It's frustrating that the "experts" still see things this way. As if it's some moral defect that people are too lazy to overcome.
Can't stop procrastinating? "Just work a little harder and get started already"
Depressed? "Just cheer up and get over it."
Addicted to drugs? "Just look at the negative consequences and stop using them"
I guess I could see this article as a step in the right direction, but it's still frustrating to see how the casual stigmatization of behavioral health and its symptoms continues to linger.