In 30 years of software development, I've never quite understood what problems Waterfall presented that warranted such a complete refactor of software development processes that things are now very much in cult-/cargo-cult territory.
There is a natural process that every issue goes through: Analysis, Specifications, Requirements, Design .. then the Developer takes in the materials of each of these steps, does an Implementation and Testing phase .. then the issue has to be validated/verified with a Qualification step, and then it gets released to the end user/product owner, who signs off on it.
This is a natural flow of steps, and it always appears to me that the new-fangled developer cults are always trying to 'prove' that these aren't natural steps, by either cramming every step into one phase (we only do "Analysis" and call it Scrum) or by throwing away steps (Qualification? What's that?) and then wondering why things are crap.
Too many times, projects I've seen fail could've been saved if they'd just gotten rid of the cult-think.
It's nice, therefore, to see someone else making the observation that Waterfall is a natural process and all the other -isms are just sugar/crap-coating what is a natural consequence of organizing computing around the OSI model - something that has been effective for decades now, and doesn't need improvement-from-the-youth antics.
The problem with waterfall, in my experience, is not immediately visible to developers and doesn't directly concern them; it concerns the stakeholders and managers. Almost always, your customer doesn't know what they want - they think they do, but they don't, and will only realize this after you've finished development and have a visible system demo/field test.
Additionally, it is really hard for project management to get metrics/early warnings when something is not going properly (no project - waterfall or agile - is without problems during its lifetime). How would you assess the quality/progress of a project based on a couple of specification or requirements documents 1-2 years into the project, with no (immediately usable) code available? How would you assess, 3 years into a project, whether the approach taken will be accepted by your end users?
Waterfall is just a big black box to management and stakeholders, with the hope that after 5 years something usable will come out of it.
Additionally, as a start-up, we rarely know what the product should be.
We have a first vague idea of the product but don't know what exactly it is.
Commonly, the ideal product is different from the initial design, and we only know that after the actual design, investigation and development.
If that's the nature of software development, the development framework should be capable of accommodating design changes on short notice.
Agile development styles are trying to solve this. Unfortunately, many projects fail to adopt them well because they don't understand the purpose of agile software development.
"Agile" just means "analyse the situation until you have properly fleshed out specifications, great requirements that will solve the problem as described, and a proper plan for development".
Still, "Analyze until Completion" doesn't need to be couched in fancy new-age terms that someone will get paid to teach everyone ..
In startup land, it's often quicker to ship software than analyze, and this tends to win in the market.
The "analyze" worldview assumes that the easiest and most accurate format for the output of an "analyze" process is some natural language text and maybe diagrams and a powerpoint or two. What if the friction of authoring software was reduced to the point that the easiest output of "analyze" was .. working software?
The absolute extreme example of this is of course SpaceX: classical control systems theory can't quite deliver a hoverslam landing that works first time, so they had to iterate by crashing rockets.
>What if the friction of authoring software was reduced to the point that the easiest output of "analyze" was .. working software?
Well, the software world is certainly working hard - on one side - to reasonably attain this goal, while another big portion of it actively resists this advancement in human social services, for that is what it represents.
Perhaps that is the key thing to all of this: "Until all of us are using waterfall, many of us will have a hard time using waterfall." I'm not sure I like where this leads, so I'll just agree with you that sometimes you just have to crash rockets.
> actively resists this advancement in human social services,
> "Until all of us are using waterfall, many of us will have a hard time using waterfall."
Now you've lost me, I've no idea what you're talking about with the first statement and the latter just sounds like cultism? People have valid reasons for iterative processes!
It goes like this: for as long as we are incompletely applying waterfall, someone else will attempt to refactor the natural process and call it something else, instead of just recognizing and then completing the waterfall process. This will be the state of things until either a) everyone abandons waterfall and/or calls it something else, or b) waterfall becomes so well understood that it's treated as a natural law of the industry.
So it's really more of a self-fulfilling prophecy situation, and yes, in that regard, we are in a cult.
>The problem with waterfall, in my experience, is not immediately visible to developers and doesn't directly concern them; it concerns the stakeholders and managers. Almost always, your customer doesn't know what they want - they think they do, but they don't, and will only realize this after you've finished development and have a visible system demo/field test.
This just indicates a failure to perform a proper Analysis/Specification/Requirements phase, with relevant qualifications steps. It doesn't matter if you're a Manager or a lowly Developer - if you can't adequately qualify the requirements and specifications, the analysis is simply not complete.
Pushing the problem to or away from Managers is just one way of saying "I'm too lazy/incompetent to adequately complete the Analysis phase, resulting in poor requirements, lacking or contradictory specifications, and no planning". It's the Analysis that needs attention, i.e. maybe nobody actually knows how to wear the Analyst's hat any more ..
>This just indicates a failure to perform a proper Analysis/Specification/Requirements phase, with relevant qualifications steps. It doesn't matter if you're a Manager or a lowly Developer - if you can't adequately qualify the requirements and specifications, the analysis is simply not complete.
But that view of requirements is not borne out by the reality for most projects. You're presuming that it's possible to gather the "ideal requirements" when the project first starts, with enough due diligence - and also, that those requirements are fixed.
In my current job, we're producing a service that has to court a handful of very large clients. Even if there was a well-defined idea of what the service should eventually look like in 5 years, a lot of feedback is required to discover how it should look now. Which client needs more attention? Where is the biggest opportunity for additional value? How are users actually using the service, ignoring what they said they were going to do with it?
That last part is the most important - requirements are in reality a feedback process for which the existing product is one input. You cannot analyse how users will empirically interact with a product that does not yet exist. Abstract analysis is no substitute for data.
If you can't formulate actionable requirements, you're either not the domain expert, or not communicating properly with the domain expert.
What your service looks like now and what it looks like in five years are obviously two different questions, but a proper analysis will divide the issue between now and 5 years from now and come up with requirements that fill the gaps. This doesn't mean things get set in stone and aren't adaptable to changing needs - when such a change is identified, the manager/developer need only apply the workflow again, revise the specifications with the updated data, and formulate a new development plan. Maybe this is 'agile', but again - it speaks to the fact that waterfall is a naturally occurring phenomenon in engineering/technical matters, and thus should be applied consistently and completely in order to provide fruitful results.
> You cannot analyse how users will empirically interact with a product that does not yet exist.
I don't agree with this, as I believe it is very, very glib. You can of course empirically interact, by wearing the user hat. Too often the developer/manager/user hats are considered adversarial - but being flexible about which of these hats one wears during analysis makes all the difference between whether your product is successful or not. Software is a social service - rigid developers who cannot put on the user hat are not providing the most intrinsic aspect of that service in their field.
>You're presuming that it's possible to gather the "ideal requirements" when the project first starts, with enough due diligence - and also, that those requirements are fixed.
I make the claim that this ideal can be attained, by ensuring that the early steps in the waterfall process are actually applied. You are correct in noting that "when you don't do things right, right things don't happen", however ..
>but a proper analysis will divide the issue between now and 5 years from now and come up with requirements that fill the gaps
The issue with the Waterfall model is that this approach doesn't work. Aiming for "now" means you come out with an outdated product in the future. Aiming for "5 years from now" means speculating on what users will eventually want, which is very error-prone. Trying to adjust course midway through is a nightmare - and completely defeats the point of trying to get requirements "right" the first time.
>You can of course empirically interact, by wearing the user hat.
This is not empirical: it does not involve observation or measurement of the real world. Speculating about user behaviour is no substitute for concrete data about how users actually behave.
"Software is a social service - rigid developers who cannot put on the user hat are not providing the most intrinsic aspect of that service in their field."
All the more reason not to rely on the proper wearing of a user hat (how are you going to know, by the way?), but actually work with the users instead and spend time creating the necessary artefacts to capture their perspective as soon and as well as possible.
> If you can't formulate actionable requirements, you're (...) not the domain expert
Yes, fine. So there is no (true Scotsman) domain expert. Grandparent's point is also just that actually building the application and seeing how users interact with it is a valid method of analysis, whether in a prototype phase or in actual evolving production software.
Is it valid tho? Deploying an app to get feedback is totally backwards. Make an interactive demo with no-code tools, record it and iterate on that based on stakeholder feedback.
You're paying your most expensive employees to build something twice, and initially in a lower-fidelity tool that won't quite mimic the functionality or the look of the final product. So you end up with stakeholders who don't understand it's just an interactive demo and start wondering why things don't quite work. Or they buy into the demo, you build the real solution, and then they question why it doesn't quite work the same, or looks different - because when does an interactive demo in a mockup tool ever really get it 100% right?
> If you can't formulate actionable requirements, you're either not the domain expert, or not communicating properly with the domain expert.
I want to highlight this, because it's almost universally true on any given project that there are no complete experts. But I'll look at "domain" not as the business domain, and instead consider whether you can find a domain expert in the technology stack a project is planned in.
When requirements are written up, there is never a domain expert on all the technologies to be used who is so experienced that they will not get things wrong. Software development is so intertwined and so fast-paced that we are at the mercy of the tech stack we use too.
Imagine you are well-versed in Postgres, but your new project requires you to use Postgres FTS extension. Without learning the ins-and-outs of Postgres FTS extension, you'll make grave estimation and "planning" mistakes. And then you are supposed to deploy on RDS instead, where there is another set of considerations. And this being a new project, you are asked to use asyncio psycopg for the first time, so there will be more gotchas as well.
Basically, the number of "tools" we shift between is so vast that nobody can be an expert in all of them. The rate of change is huge as well. Just count all the Kubernetes tools today, and imagine having to use any one of a dozen mini-k8s set-ups for local development. Just exploring the tech stack is a full-time job for a dozen people, and you wouldn't be getting anything done. While others in the market would.
So, if you are in a world where you've got your tech stack pre-set, you've got plenty of experience with it, and you are a domain expert on the business problem you are looking to solve, sure: a waterfall-like approach will not put you at a disadvantage. But introduce any one of the newer components (cloud scale-out), and an approach of gaining expertise first will have you run down by the competition.
Do you work in software development or IT operations? Or if you did but no longer do, how long ago was it?
Having been in the field for some time, and reading HN and talking with people in the field for as long, I believe your general attitude is one of someone who has an idea for how things should work in a stable, ideal world with unchanging toolsets and unchanging technologies. This is not the world of technology.
The challenge I've always seen with putting the user hat on, so to speak, is the difficulty of behaving as though you don't know how it works. I've seen enough user research where things that were obvious to the team were not at all obvious to the user.
> > You cannot analyse how users will empirically interact with a product that does not yet exist.
You can certainly refine the software over time (A/B testing, if you will) to more closely attain the ideal, which you may not have well defined at the beginning of the project if you don't perform an adequate review of the needs of the user.
But you can certainly also complete a user analysis that produces requirements, which when fulfilled, solve the problem in its entirety for the user. These two extremes are not absolute - sometimes, if the analysis is incomplete, A/B testing can rescue the project regardless of how poorly the first analysis was performed - but A/B testing, it could be argued, is applying waterfall properly: iteratively, as intended... but by all means, call it 'agile' if that makes the difference to the team involved.
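Since A/B testing is doing real work in this argument, here's roughly what that feedback loop computes. This is an illustrative sketch only: the variant counts are invented, and it's a plain two-proportion z-test rather than anything specific to any product mentioned here.

```python
# Illustrative only: a two-proportion z-test, the usual statistic behind
# an A/B comparison. All numbers below are invented for the example.
from math import sqrt, erf

def ab_test(conv_a, n_a, conv_b, n_b):
    """Return (lift, two-sided p-value) comparing conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))         # standard normal CDF
    return p_b - p_a, 2 * (1 - phi)

# Variant B converts 6.25% vs A's 5.0% over 2400 users each.
lift, p = ab_test(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
print(f"lift={lift:.4f}, p={p:.3f}")
```

Either camp can claim this loop; the mechanics are the same: ship a variant, measure, then revise the requirements with the data.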
Wait a minute, you're a proponent of iterative approaches? In modern parlance, most people put non-iterative approaches under the heading "waterfall".
Can you specify which book/process/document(s) you are using in your day-to-day? This could be very interesting in future discussions with clients, to say the least!
You might be interested in [1]. Supposedly it has the diagram that became known as the "waterfall" diagram. I don't think the author is advocating for what we normally associate with waterfall, though, as there are some additional steps they advocate for. They start with a simplified model, criticize it, and then add some more complexity to the model.
Also to note, the paper is titled "Managing the Development of Large Software Systems."
The author outlines 5 steps they believe need to be used to improve the success of a project. One step is called "do it twice" in which the author advocates the development of a prototype well in advance of delivering the final product. I think this is iteration. It's not indefinite iteration, but it's definitely not showing the big bang single release waterfall approach either.
It also advocates to involve the customer throughout the process instead of only at the end.
The summary says to refer to Figure 10 as the process that incorporates the author's steps. This final diagram is pretty far from the simplified and flawed waterfall picture that has been pulled from the front of the document.
Waterfall is indeed a decidedly non-iterative approach. Anyone calling an iterative approach "Waterfall" is, well, confused about the terms. As soon as you add iteration, you're doing something else. Something intelligent. Often some variation of Iterative & Incremental (of which Scrum is an extreme form), Evolutionary, or perhaps the V-Model.
>This just indicates a failure to perform a proper Analysis/Specification/Requirements phase, with relevant qualifications steps. It doesn't matter if you're a Manager or a lowly Developer - if you can't adequately qualify the requirements and specifications, the analysis is simply not complete.
Not really. It indicates a general inability to predict the future.
That's really the core of the issue. Requirements analysis can be done badly, but even when done well it doesn't make you good at predicting the future.
The longer the feedback loop the more the ground changes under your feet.
>Pushing the problem to or away from Managers is just one way of saying "I'm too lazy/incompetent to adequately complete the Analysis phase
I used to believe precisely this in my early 20s.
My "aha" moment came when I built software solely for myself, and even then I realized that this didn't stop the requirements from changing, because of surprises that kept being thrown my way and problems that I uncovered only in retrospect from actually using the software.
For sure not all problem spaces are this chaotic but most software problem spaces are.
When I was a kid, I remember we topped over this hill on I-15. Way down there in the distance, there was an overpass across the road. I remember wondering how my dad could aim the car accurately enough to go under that overpass from that far away. As an adult, I know that he didn't even try to do that. Instead, he steered the car as a continual process, not just once.
That's the same kind of problem with waterfall. You want them to do the analysis perfectly, and then you want the world to not move while the implementation happens. Neither of those is possible. And then you blame the analysts for failing to do the impossible.
The analysis phase is never complete enough. The conditions are never the same by the time the implementation is complete. That's just reality. What are you going to do about it? You need to steer during implementation, not just once at the beginning.
Any analysis of a problem requires understanding WHAT problem needs solving. The issue in the real world is that businesses and markets (as well as technology) change very fast, and any in-depth analysis done two years ago might not have the same outcome if done today. This is a fundamental realization of agile. If you have a problem domain which is not subject to change, waterfall might be the right choice. But you almost never have that. I know it is hard to accept for us devs, often with backgrounds in Math/Physics/CS etc., because the problems we were trained to tackle there are always the same, and the laws of math and physics tend to never change (or change way slower than modern markets).
>This just indicates a failure to perform a proper Analysis/Specification/Requirements phase
No, we're suggesting the analysis/specification phase should consist of what are, effectively, many really small waterfalls. And that any effort to obtain specs for an entire system without first building some minimal version of the system is doomed to failure.
For a small enough feature, yes, you end up doing waterfall. The problem is breaking up a massive system into those small enough pieces.
I consider fleshing out customer requirements part of the engineering process. You cannot expect the customer to properly communicate what they want/need. That doesn't mean Scrum; you have many other tools at your disposal: presenting the customer with scenarios to challenge their requirements, or using interactive prototypes or even paper ones. All of them far cheaper than implementing the wrong thing.
I agree about the pitfalls of not seeing something usable until very late, and about how to measure progress. The opposite is also true, however: you make some easy non-scalable PoC and it looks like huge progress, when it actually can't or shouldn't be used as a foundation for anything.
I'm inclined to think a PoC (or whatever you want to call it) would be useful for some things:
- a tangible and cheap way of showing the customer how you think you can solve their problem
- getting concrete feedback on that solution (you're both talking about the same, tangible thing)
- using it as a foundation for the "big" project. Not its code, but the ideas behind its UX, flow, treatment of data - whatever is the main crux of the solution can be made tangible in a PoC and be used as a reference for the next step, i.e. making a production-worthy application
- the PoC can also show that what the customer wants is in fact a bad idea.
This problem is not a problem of project management methodologies, but rather a problem of business strategy and scope management. If you want to fly to the moon with one shot, that is exactly it. Breaking down scope into smaller and better understood objectives is what will make things work and that should happen before a project starts. Afterwards it can be executed both as one big waterfall sprint for each objective or in some agile way with much higher chances for success and lower costs for cancellation.
Perfect example of an unclearly formulated project goal. You say "fly to the moon with one shot" and think it is completely defined. But the whole, years-long project would change in scope depending on whether you mean "Land on the moon unmanned", "Land a man on the moon", "Land a man on the moon AND return him safely to earth" or "Fly by the moon, get close to the surface and take some pictures".
And if you accept that any of those goals may change into another during the project's runtime, you understand why waterfall is often not the best idea.
>Perfect example of an unclearly formulated project goal
That was exactly my point.
>And if you accept that any of those goals may change into another during the project's runtime, you understand why waterfall is often not the best idea.
I think you missed the entire idea of my comment. There is no sense in accepting that those goals will change and trying to start a project that aims to achieve one of them. They are too ambitious and broad in scope and need to be broken into smaller pieces (see Apollo program or SpaceX roadmap - both are focusing first on smaller objectives, that will clarify the next steps).
> Almost always, your customer doesn't know what they want - they think they do, but they don't
I honestly don't understand this statement. Could you provide an example to elaborate on this point? I have read it in so many places, but it sounds more like the fashionable thing to say.
The "need" has a starting point, maybe a very high-level problem statement, and then through analysis and back-and-forth question answering, you discover the need in more concrete terms instead of the abstract terms you started with.
It is very easy, actually. We often see a business using Excel for their processes and flows - that is, they have Excel sheets that they edit, copy, send around and merge back. Now, we all know the problems that exist with this approach, and even the customer knows them. But that doesn't mean they can perfectly describe what any application that gets rid of those Excel files should look like.
But isn't the problem statement itself what you need from the customer? My idea is that the customer approaches you with the problem statement, and their expectation is that you would provide possible solutions to it, instead of the customer defining the solution exactly and just asking you to "code" it.
You describe the requirements analysis step, which is itself a big issue with traditional waterfall projects. You bring someone (or a team) in, from either outside the company or inside it. At that point you are already falling victim to the fallacy that you can fully and exhaustively analyse the whole problem domain. And even if you think you can do a 100% complete requirements analysis, who says the solution/software you deliver 3 years from now will exactly fulfil these requirements - and, moreover, what makes you assume the problems of today are the same problems the business will have 3 years from now?
Well, the moment you talk about 'possible solutions', you've already accepted the original claim. A problem described in vague terms has many possible solutions. Some of them will actually solve the exact problem, some of them won't. Deciding which is which is not trivial.
How do you go about it? Do you build and deliver all possible solutions, and then the customer gets to choose one? Do you prototype many possible solutions, agree with the customer which prototype is most promising, and turn that into the final deliverable, hoping you have captured the relevant details?
Or do you start working on a basic solution, let the customer use that and provide detailed feedback on what it's doing well or not, rinse and repeat until the customer says 'good enough, thanks'?
Let’s say the high level problem statement is that a company needs to add a Search feature to their Store Inventory product.
Here are some questions that can arise during this course of this project:
1. What fraction of customers are asking for it, and how badly? How do you know their judgment is correct? What if people who aren’t asking for it may also go on to love it?
2. How deep and fast do you want the Search functionality to be?
3. How much time and money are you willing to invest in it? What if you find out after a month of work that Search is much harder than it looks?
4. Let’s say you discover there’s a bug in the indexing system which leaks personally identifiable information even though it’s supposed to hide it. Will you postpone the project till the bug is fixed, or work around it somehow?
It’s nearly impossible to answer these kind of questions right at the start of the project.
That doesn’t mean there aren’t projects where all the requirements are known upfront in precise detail. Usually that correlates with the associated technologies being very mature and their capabilities well understood by the people involved.
Formal research into software development techniques tends to be junk. It's very very hard to conduct a meaningful study of professional-scale development. The costs are too high, the confounding variables too many, and the industry demand too low.
Typically you'll get software researchers with no industry experience conducting studies on "subjects of convenience"—their undergraduate programming classes.
> There is a natural process that every issue goes through: Analysis, Specifications, Requirements, Design .. then the Developer takes in the materials of each of these steps, does an Implementation and Testing phase .. then the issue has to be validated/verified with a Qualification step, and then it gets released to the end user/product owner, who signs off on it.
Doing that per issue is still basically agile. The problem is when you do it per project, and try to get everything right the first time, and then it's months or years between analysis and qualification.
Agile really boils down to starting small and keeping your stakeholders involved. It is an explicit acknowledgement that programmers are bad at estimating time, and that users don't know how to ask for what they want.
All the stuff people complain about is just companies buying snakeoil, and overzealous organizers justifying their own jobs.
That might be its stated goal; I still haven’t seen a single shop that actually implements it. The first thing that happens when implementing agile is estimation, and the justification for that is “predictability”.
To me, the estimates themselves don't really matter. Everybody already knows if something is gonna be worked on or not (at least kind of). The value I find in estimates is that they encourage the team to discuss how something is gonna be built.
The best moments are when everybody estimates that something is "easy" except for exactly one person who says "this is super hard". Then there is a discussion about what the "easy" camp might have missed or perhaps the shortcut the "hard" person didn't know about.
At the end, everybody walks away understanding a little more about what the rest of the team is working on.
The issue is when organizations or bad POs or SMs subvert this and make it all about estimating everything exactly in story points, where everybody has to agree that 2 story points mean 2 person-days of work. And that is the only thing estimations are used for at these places; every team has the same scale and gets compared on how many points they deliver each sprint. And they press the teams to estimate ever smaller.
That is the kind of environment most of the devs who hate Agile or Scrum are living in, and it's no wonder they hate it. It's completely against the spirit of Scrum and agile, so we can't blame them. Unfortunately, they blame agile, which was an attempt to make things better for them.
I do it properly, but I also usually work alone, or in very small teams. I have the luxury of setting the goals, and moving the goalposts, when needed (often).
Waterfall is almost required when you are crossing the streams. Merging separate workflows is a fraught process. You need to know what will be delivered, and when. I spent most of my career working for hardware companies, and saw this every day.
Truly agile processes don't really lend themselves to that; which is actually kind of the point.
But the real deal is that high-level managers and non-techies can't deal with Agile.
Managers have a lot of power. If we don't help them to understand the process, they will make decisions, based on what they know.
It is valuable to have engineers who know how to decouple individual pieces to eliminate (reduce) these dependencies. These dependencies are almost guaranteed points of conflict and risk for the project.
Tell that to a hardware manufacturer that is developing a camera, with firmware, three different processors, drivers, APIs, SDKs, host software, lenses, speedlights, marketing materials, packaging, distribution, documentation, etc., while coordinating manufacturing lines in three different countries, and engineering efforts in two.
It really helps to use vague terms like S/M/L/XL or bike/car/plane or whatever. If you say an epic will take 6 months, it is very likely that in 3 months someone will ask for proof that you're halfway done.
But once you have abstract S/M/L/XL estimates for some tasks, what do you do with that information? How many S tasks are equivalent to 1 M task? If the team completed 3 M tasks and 3 S tasks last sprint, how does that help you plan for your next sprint? While story points have their own issues, at least tasks' relative size differences are clear and different tasks of different sizes can be scheduled.
Story points are relative measures (at least when you do it 'right').
What is so different from estimating things as 1/2/5/8 or S/M/L/XL?
S (or 1) is roughly half as hard as M (or 2).
M (or 2) is roughly half as hard as L (or 5). Notice how 2 is not exactly half of 5. Just roughly.
And so forth.
And the point is that you can't say exactly how many S tasks are equivalent to an L. Estimates tend to get less precise the larger they are: large estimates carry more built-in uncertainty, so once you do the work it might go much faster or much slower because of things you didn't look at closely when estimating. Small things are easier to survey completely, so their estimates are more accurate.
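One rough way to operationalize the question above ("how does last sprint's mix of S and M tasks help you plan the next one?") is to map t-shirt sizes onto the 1/2/5/8 scale also mentioned in this thread and track velocity as a rolling average. A minimal sketch; the mapping and the planning rule are illustrative assumptions, not part of Scrum itself:

```python
# Hypothetical mapping from t-shirt sizes to rough point values,
# following the 1/2/5/8 scale discussed above (purely illustrative).
SIZE_POINTS = {"S": 1, "M": 2, "L": 5, "XL": 8}

def sprint_velocity(completed_sizes):
    """Sum the rough point values of the tasks finished in one sprint."""
    return sum(SIZE_POINTS[size] for size in completed_sizes)

def forecast_capacity(history, window=3):
    """Rolling average of recent sprint velocities: a forecast, not a promise."""
    recent = history[-window:]
    return sum(sprint_velocity(sprint) for sprint in recent) / len(recent)

# Example: last sprint the team finished 3 M tasks and 3 S tasks.
last_sprint = ["M", "M", "M", "S", "S", "S"]
print(sprint_velocity(last_sprint))          # 9 points under this mapping
print(forecast_capacity([last_sprint] * 3))  # 9.0
```

The numbers carry all the imprecision described above, which is exactly why the averaging matters: a single sprint's total is noise, a trend over several sprints is (weak) signal.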
A recent example from my work: there was something the team unanimously estimated as L. I knew it was an S, but I knew the code and they didn't. I let them put the L estimate on it. When the sprint started and I had some time, I did the task myself, and it really was an S in the end. But that's fine; they 'priced in' the uncertainty. If one of them had done it, it might very well have taken longer because of not knowing the code as well.
You don't; you ease your executive/sales/etc partners into the new world where they get better software, faster features, and fewer outages, but they give up release dates and micro-managing the roadmap. Calling an epic "Large" instead of "237 story points" is a way of forcing yourself to accept that you only have a rough idea of how long it will take.
The thing I don't understand about agile is that many projects require a year+ investment before anyone will use it. e.g. say you're building a competitor to Google Docs. How do you get any real user feedback after < a year of work by a substantial team of engineers? No one is going to want to use your prototype that barely does anything. Or worse yet, your backend that doesn't even have a UI yet, and also barely does anything.
In that case you don't have a customer, you have to dog-food it. Most of the time when agile references a customer, customer feedback, and customer interaction it's actually the customer contracting the work. That is a different context than a startup or video game developer or others because there is a clear customer stakeholder who can provide feedback and validation from the start.
This gets to part of the problem in all these discussions. The context that the Agile Manifesto was written in was not in creating Facebook or Google, but in creating software for a defined customer who held a stake in the outcome (had invested capital, along with wanting/needing the results). Once written, something like your Google Docs replacement can get a customer stakeholder, but they won't have it at the start unless you line up early users (and even then, they probably aren't invested).
In that case, I have never worked in the context for which Agile was intended, and neither, I suspect, have a significant proportion of all working software developers. That may explain many of the disconnects in these conversations.
Indeed. The way I understand Agile/Scrum in enterprises is basically a dogmatic process around implementing a bog-standard CRUD app for the Nth time. Minor details keep changing from customer to customer.
It should be clear to folks that complex technology products were not, in the past, developed by the caricature of the Waterfall method that Agile peddlers show on slides. Nor are interesting engineering products today developed by following some "methodology industry".
In this case, you do user research with mock-ups and other extremely low effort prototypes. But the work done in that regard is the same for agile and waterfall. If you truly do have a project where the MVP will take a year, and the prototypes would be useless until that year is up, then it probably doesn’t matter which methodology you use in that first year.
The value of quick iteration cycles (the main alternative, IMO, to a waterfall model) was once explained to me in a simple way:
The teacher drew a very simple chart: y = t. "This is, in a given project lasting from t=0 to t=1, the amount of practical information that you have about how to design this project. At t=1.0, you have 100% of the information. When you start, you have about zero information, just guesses."
Then another line: y = 1 - t. "And this is the amount of design freedom you have during the project. At the beginning, everything is possible, but at the end, big changes cannot be made."
"This is what we call the curse of project management."
It was really enlightening. Make prototypes, be courageous enough to scrap entire designs, and when you have the resources for it, make a total reset for version 2.0 and redo v1.0 correctly.
Of course, this is highly subject dependent. You don't build a bridge the way you build an experimental robot, but this explained well the interest of non-waterfall models.
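The teacher's two lines can be written down directly. A tiny sketch of the "curse" (the linear shapes are, of course, the teacher's idealization, not a measured law):

```python
def information(t):
    """Practical design information available at project time t in [0, 1]."""
    return t

def freedom(t):
    """Design freedom remaining at project time t in [0, 1]."""
    return 1 - t

# The curse: by the time you know enough to design well, you can no
# longer change much. The two lines cross at the halfway point.
assert information(0.5) == freedom(0.5)

# Quick iterations effectively restart the freedom line: each prototype
# begins a new, shorter project in which early learning still meets
# high freedom.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"t={t:.2f}  info={information(t):.2f}  freedom={freedom(t):.2f}")
```

At t=0 you have all the freedom and none of the knowledge; at t=1 the reverse, which is exactly the argument for short cycles and for the v2.0 reset described above.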
And herein lies the rub. If a project is more than 6 months to a year in duration, there is very little chance that what the customer now wants is what they wanted before. Requirements aren't created instantly on day 1 ready for dev, you could easily have several years of requirements, during which time people change, the world changes, the law changes, priority changes and then what?
Even systems that are relatively unchanging like the air traffic control system they built in the UK still had a whole raft of issues that needed addressing and at this point, the documentation becomes out-of-date and changes are extortionately more expensive to resolve.
> If a project is more than 6 months to a year in duration, there is very little chance that what the customer now wants is what they wanted before.
Depends largely on the customer. In defence, aviation, or industrial control systems, more often than not requirements are set in stone, and modifications require approval and consequent fees.
Now, those industries throw a lot of money at actual R&D to know what requirements can be requested, which is very different from your usual software shop, where "R&D" is some brief talk during refinement or at most a dedicated user story to look into the subject.
For "small" projects having requirements set in stone (probably?) works fine.
The customer might claim something different, but for a couple of large and failing projects in industrial automation I've had to fight hard to be able to iterate and rescue them from the jaws of defeat.
This. If you properly understand the problem. If your team is compact enough to be in the same room. And your customer input during concept/requirement writing is high.
If you stray outside that, both waterfall and agile struggle… but waterfall has the potential fatal flaw of delivering a product no one wants (or doesn’t work).
This is why agile was introduced. There were software products being made that ended up just not working… so people saw that problem and tried to fix it with agile. But agile just fixes the “don’t deliver a product that doesn’t work” problem.
It's more subtle than that. What "agile" does (both Scrum and XP, historically) is _protect the delivery team_. With waterfall-style project management, early errors balloon but often don't surface as something that needs addressing until implementation and testing, so the delivery team gets all the stress for blowing out the project schedule.
Agile techniques both surface those errors early, so they're course-corrected quickly, and provide a set of clear rules for the rest of the organisation to abide by which should mean that the delivery team can't get overloaded into a deathmarch.
"You're doing it wrong" is not a compelling argument.
A competent, motivated, invested, and empowered team don't need any particular methodology to succeed. Methodologies are adopted precisely because many people don't do things properly much of the time. A methodology which doesn't account for this and doesn't self-correct when done improperly isn't worth a whole lot.
Waterfall done properly produces great results. Agile done properly produces great results. Both of them produce bad results when done poorly. Proponents of one tend to compare their methodology done properly to alternative methodologies done poorly. That's a pointless conversation to have.
At t=0 you rarely have 0% of the information and usually you can get more information ahead of starting to jump into the water. The problem is that stakeholders often pressure for early results which prevents enough planning and information gathering.
Of course you can also waste too much time on planning and information gathering, but that's not something I have ever witnessed. Usually time is wasted by starting to develop without making a proper plan.
> make a total reset for version 2.0 and redo v 1.0 correctly.
That's how I usually work (in personal projects). I wonder what it would take to convince management to do the same. In theory it should "just" duplicate the budget of any given project (which doesn't sound totally wrong).
It sometimes happens to me, though, that I need to reach v3 in order to go back to v1. I just feel this way of doing software to be the most natural way.
> I wonder what it would take to convince management to do the same.
Build the redo time into your estimates.
Non-technical management has no concept of what's involved in building software. They don't need or particularly want to know. They care about things being "done." You don't need to convince them of anything, you just need to consistently and reliably show progress in a way that they can comprehend.
It's very irritating to see no progress for a month, then see a demo of something that appears to do everything you asked for, then hear that it won't be "ready" (whatever that means) for another month because the whole thing needs to be redone.
It's very reassuring to see demos every two weeks, each one showing an obviously incomplete product, but with clear progress between each demo culminating in a completed and delightful product after two months.
The actual development process for the two scenarios above can be exactly the same, it's just how they're messaged.
"Agile" processes still have all these steps, they just repeat them in a loop on a much smaller timescale. This allows for a lot more flexibility, and for feedback from the qualification of the first iteration to go in as input to the analysis stage of the next iteration.
But it's worth noting that we need to spend more time on task management with a shorter development cycle.
In waterfall or scrum or whatever, one development cycle often comes with listing tasks, prioritizing, planning, developing, retrospection, or something similar.
If the cycle is long, we can spend more time on each process.
But if the cycle is short, we rush to complete each step and can't spend enough time understanding its meaning. So we focus more on consuming as many stories as possible and never think about the product's goal and how we can contribute to it. That means we have to spend more time teaching the purpose of each step, or we end up with a failed agile development.
I've been doing this a long time. As far as I can tell, it comes down to senior leaders in software development wanting to make some sort of success story to further their own careers. It's much easier to buy into somebody else's canned success story plan. Hence the popularity of stuff like "Scaled Agile". Which is funny to me, because in practice, it's heavier than old school MS Project and Waterfall development. Even the diagram they use to summarize the concept is laughably undecipherable and complex[1]. Same reason companies latch onto 6-sigma, Clifton strengths, TOGAF, Total-Quality-Management, and so on.
I do recognize there is some actual "good stuff" in there. I'm a fan of the textual agile manifesto.
.. at this point the customer discovers that what they asked for does not actually solve their problem.
You cannot do any kind of exploratory or innovative work in waterfall, because you can't specify that upfront. You can't easily co-evolve solutions by discovering what works and what doesn't. You can't even do A/B testing really, because every time you do your GANTT chart bifurcates.
There is a history of big, spectacular IT project failures, often commissioned by the public sector, but not always. The evolution of non-waterfall systems has been driven by trying to prevent or mitigate these failures.
>> Analysis, Specifications, Requirement
>These tend to be inadequate, incomplete, or ..
>> Qualification
>.. at this point the customer discovers that what they asked for does not actually solve their problem.
I disagree. If you are properly isolating your work into an "Analysis" phase, you will properly flesh out the issues and the subsequent specifications and requirements will be complete - sufficient to the task of formulating a development/implementation plan. But if you don't take care to complete a proper analysis - then no, you won't formulate specs or req's properly, either.
The important thing is that the individuals involved know the value of a proper analysis - whether they are developers or managers or just paying the bills. Too often, software developers don't realize that they are really providing a social service, and thus they don't deliver proper analyses that can result in action. Alas, that is something other than a tooling/methodology issue and lays more in the motivations of the developer.
It's a lot easier to say you can't be responsible for the mess as a developer if you don't allow the mess to be adequately described in the first place .. so the Agile vs. Waterfall debate is really more of an ethics issue than anything else.
Good developers do Waterfall Complete. Mediocre developers do it, and call it Agile. Terrible developers find excuses for not doing the key things, and end up with a failed project on their hands.
> If you are properly isolating your work into an "Analysis" phase, you will properly flesh out the issues and the subsequent specifications and requirements will be complete - sufficient to the task of formulating a development/implementation plan. But if you don't take care to complete a proper analysis - then no, you won't formulate specs or req's properly, either.
But what if that doesn't happen?
(This seems to parallel the "C is safe if you don't write bugs" argument. Processes that require infallibility are bad processes.)
The level of dysfunctionality that can be achieved is baffling. How can someone fail to deliver a payroll system? https://www.smh.com.au/technology/worst-failure-of-public-ad... - it's one of the oldest applications of business computing, and yet somehow it wastes a billion Australian dollars?
Of course no one is doing the Analysis properly. It's impossible. You plan how to implement something using a library method. But only in the qualification phase you find out that this method actually has a bug on your target platform or is incompatible with another part of your software.
I dunno, I've seen many, many examples of a properly done Analysis phase. But it does require a great deal of competence, a lack of hubris, and plenty of self-doubt/verification/validation on the part of technical contributors to make sure they do actually know what they're talking about .. a Qualifications step usually reveals the nature of this impact on the project - or lets just say, omitting this step is usually the first part of project failure.
If the Analysis has been done properly and we are sure that it has left no room for error, and if we demand similar quality from the Specification and other phases, then why do we need a Qualification phase at the end at all?
Conversely, if we accept that each phase is fallible so Qualification is crucial for any chance at a good working product, what is the recourse for errors in the Analysis phase? Basically if there was an error in the Analysis phase, we will, by definition, only find it in the Qualification phase, requiring us to scrap all of our work and start over from a new Analysis phase.
Royce argued that only in small software systems can you do a little bit of analysis then a little bit of coding then be done with the project. He proposed waterfall as a way to build larger software systems.
He argued that these 2 steps were the only direct value steps in developing software - figure out what the problem is then write a little code to solve it - he acknowledged this was the perfect ideal since the two steps directly benefit the output product but he also argued this can’t scale to larger systems.
So various analysis and testing steps were added (none of which directly contributed to output, all were drags or costs on delivery - with the idea that they catch more problems earlier thus paying for their overhead), with feedback between them to catch earlier errors, and thus waterfall.
The agile mindset revolves around the idea that the other steps added are BS. The agile mindset solves for the impossibility of a 2 step analysis then code process at scale by reducing scale. It does this by using very small iterations and removing handovers between different parties.
The problem is even more fundamental than that. The point of these extra steps in Royce's eyes is to ensure the developers deliver exactly what the customer ordered. Page 335 - "Step 5: Involve the Customer" - Royce wants to push design risk to the customer by committing the customer to sign off specific requirements along the way.
This leaves the project open to delivering the wrong thing. It doesn't matter that the customer specified the wrong thing and everyone else implemented exactly what was asked for, because in the end you all failed to meet the original goal - to solve a specific business problem through the introduction of a software system.
Agile recognises that no one knows exactly what's needed ahead of time - Royce kind of recognises this but only for the software developers with his idea to build then throw away the first system then deliver the second ground up re-write to the customer.
Agile says we'll all learn together as we build, we'll minimise risk at every step by only taking such small steps each iteration that we're completely happy to throw the iteration in the bin if we got it wrong.
Honest opinion: It's not a matter of size. It's a matter of completeness. Agile or waterfall processes fail if you omit the Qualification step, or if you don't ensure that the analysis produces actionable requirements, or clearly formulated specifications that lead to requirements, user/technical/or otherwise.
The whole point of agile is qualification through working software and getting feedback. There is no amount of analysis or requirements gathering that can compete with delivering an understanding of the requirements to serve as a model for how to move forward.
That Qualification step was the challenge I had when working on scrum projects in the past. I might finish implementing a feature in the current sprint #1, but QA won't get to start testing until sprint #2. When QA inevitably finds bugs, when do I fix them? Should I have reserved an unknown amount of time in sprint #2 for bug fixing? Or should I schedule time in sprint #3 to fix those bugs? If so, then the separate implementation/QA/bug-fixing stages are each adding extra latency.
> I've never quite understood, over 30 years of experience in software development, what problems Waterfall presented that warranted a complete refactor of software development processes such that things are now very much in cult-/cargo-cult territory.
It's refreshing to see someone else make this point.
I hate to admit it but, In my experience, agile has brought productivity gains.
When practiced correctly, what agile seeks to do is not skip the steps you mentioned from the traditional waterfall model, but apply them iteratively to individual feature sets in sprints, versus across the entire project.
I have seen many products fail to meet requirements and dramatically slip schedules under the waterfall approach. The project becomes too large and unwieldy, and too many assumptions are made that create dependencies for future features.
The reason I hate to admit agile's success is because it comes at a cost--namely there are frequently externalities such as lack of documentation and developer burnout.
Many agile implementations treat developers as cogs on a production line, responsible for essentially participating in automated CI/CD processes as a piece of software might be expected to do. And the relentless unending sprints can also easily take a toll.
I joined the industry only 20 years ago so missed the heyday of waterfall. I did witness the rise of the RUP though. And I must say that to jr developer me it was very educational despite its rather intimidating volume.
There's one angle I don't see discussed in the last 10 years or so. There was a time when a lot of attention and literature was dedicated to software design. The whole OOP movement, design patterns, the RUP, entire shelves of software architecture books. I'm inclined to call them systematic approaches to building software. I believe feature-driven development was the last such methodology aimed at developers.
Ever since Agile became popular around 2008 I have not seen anything comparable. The OOP school of thought lost its appeal without really being replaced with another well defined body of knowledge. Anything FP-related is extremely community-specific and nobody is really talking casually about, say, monad transformers. Any Agile methodology has very little guidance for developers in comparison with RUP/FDD. It's all about managerial concerns and time tracking.
But there's no question that software development is drastically different now. Using version control (even local one) was not a given in 1999. Unit testing with jUnit in 2001 was a major revelation. And shipping features constantly in SaaS got popular after 2007 or so.
I was never a fan of RUP. But it had the idea of adapting the process to the project. It gave you lots of choices.
With Scrum, every project is approached the same way, and no one asks whether it makes sense or not. McConnell's "Rapid Development" also gives you alternatives to choose from. Nowadays, it is Scrum or sometimes Kanban.
The traditional waterfall model doesn't allow for iteration between the main steps; this was followed historically and caused absolutely massive delays and cost overruns in projects through the 70s, 80s, and 90s. Popping iteration onto the waterfall model and still calling it waterfall is being disingenuous.
And cost overruns and delays in the 00s and 10s! People here on HN keep telling me I'm imagining things, that Waterfall either isn't bad or isn't real. But a 9-figure project I was (fortunately briefly, though not brief enough, around 1 year) on was several years late and delivered only half the features the users needed. Why? Because they made the plan in the mid-00s, and the circumstances changed but the plan didn't. They did what Waterfall describes, they committed to their initial analysis and plan and never looked back. Fortunately they learned after that, but it was a costly exercise.
> .. then the Developer takes in the materials of each of these steps
IME this is the main problem, the "Developer" must be heavily involved from step 1 (and work in a close feedback loop with QA). Everything else follows automatically in ever smaller iteration steps. Software development is first and foremost experimental research work, not a factory line.
E.g. if a specific software development task feels like boring/repeating factory work, it should have been automated already.
It is hard because more people means more opinions.
* Involving the developers in everything that the product team does as well as all of their own work is too inefficient.
* Involving the wrong developer means you might not ask the right questions up-front.
* Some developers are too negative and block things early on
* Some developers are too positive and will say yes to everything even if it is unreasonable
* There is not always a clear authority between product owners and developers
* A lot of decisions are based on company priorities that might need someone outside of the product/development team to argue for
The problem is that the more unknowable the future is, the quicker your plans about the future will be invalidated.
Thus a quick iterative feedback loop where there is a tight lead time between user feedback and users using the software tends to work better because it lets you adapt much more quickly to changing circumstances.
This is why I always aim to minimize that feedback loop as much as is feasible.
Unfortunately you are right that it got bound up in a pseudo cult. Worse, the cultish practitioners tend to do as all cults do and put their emphasis on ritual (e.g. standup) and completely miss everything else.
I kind of hate the term agile now precisely because the movement almost set out to create a cult out of this.
For what it's worth, I've never been on an agile team that did not have some waterfall decision making that was not open to correction, or a waterfall-based process that was totally closed off to testing and making changing along the way. I think we argue about platonic archetypes that don't exist in reality. Most processes and teams I've worked with have resembled each other more than not, if you ignore the terminology they use. I think success or failure of a project comes down to a lot of factors that are hard to quantify, so in some sense our focus on process methodology is a bit ritualistic and cargo culty.
So I'll start by saying I detest the snake-oil-salesman professional scam artists that are "Agile consultants", as they have now begun to leech onto DevOps, which honestly shares in large part the same end goal as agile. These people are largely talentless hacks and parasites who extract value from corporations that are making money and then move on to the next victim, while the actual engineers deal with the infection their ideas have left.
That being said, the issue I've seen and read about with waterfall is that it is very slow and cannot adapt very well. Which honestly is a super great thing in certain fields, like avionics, NASA moon landers, those kinds of things, because along with the lack of speed comes rigour, and a certain level of assurance, quality, and predictability, assuming things retain a certain level of consistency.
However, in the world of newly emerging markets, the .com bubble, and the new SaaS bubble, being slow is a death sentence, because at this point it is better to spray and pray: there are plenty of VCs willing to hand you fistfuls of cash to shoot at the wall, hoping to get in on the next big Facebook or Google (in reality they are now just hoping you will get bought by FB or Google, which is another rant for another time).
That being said, agile offers flexibility and adaptability in return for less predictability. This causes a huge rift, however, when you have in-house tech resources that are part of a larger organization that needs all the assurances and predictability promised by waterfall, but those in-house resources have to compete against SaaS opponents that management is all too happy to use a credit card to purchase.
The end result is tech teams saying they are agile, by which they mean they respond to whoever is yelling the loudest, and success is determined not by value but by politics. Meanwhile no one is taking care of the customers, everyone's data gets resold a dozen times, there are more security holes than an Alabama "don't shoot at signs" sign, and companies spend several million dollars a year to keep some rusting old COBOL application running because it is the only thing they know for sure works right now.
Sorry I might've been unloading some baggage in this post.
When people hear "waterfall" they imagine doing this for the entire project end to end. It's super risky to do this because nobody knows what they want until some of it is built. And if you just spent a year of a teams time building something and "tada" you show the user... odds are not in your favor that it will do what everybody actually wanted. Plus odds are good it might not work because there was no iteration to shake out the rough spots.
To me, the key for "agile" is rapid iteration. It's a bunch of mini waterfalls where every step delivers something of value (even if that value isn't directly visible to the end user yet). Each iteration forms a feedback cycle that helps make sure a team is delivering something the end user actually wants.
In practice I don't think anybody is actually doing "waterfall classic". It's more of a story we tell to remind us how important it is to get in the habit of rapid iteration.
(Sidenote: there are plenty of other reasons to iterate rapidly. The end product is built on a moving target: business processes change over time, the competition changes over time, etc. If you have your specs 100% locked in at the beginning, you'll find that half of them no longer apply after a year because so much has changed in your environment. There is also the inventory cost of keeping so much code out of production for so long.)
It's the symbolism: returning to an earlier stage of planning. Take building a house: the moment you have to redraw the plans, the previous work is usually a tear-down.
Often enough, that is the case in the real world. Berlin Airport was a waterfall project where, several times, additions were made late in construction.
Now, that image carries only halfway into the software world, as a lot of the other abstractions (classes, modules, interfaces) will survive even a replanning and rewrite in waterfall.
I am about as old as you. Unless you started at 13.
The problem was that projects spent months doing analysis and design and never shipped anything. Also, agile recognizes that you don't have all the answers at the beginning; you need to experiment and see how things should work. There was also a tendency to treat people as factory workers to whom you could assign any work and they would just convert specs into code. We still do that with Scrum. :(
> There is a natural process that every issue goes through: Analysis, Specifications, Requirement, Design .. then the Developer takes in the materials of each of these steps, does an Implementation and Testing phase .. then the issue has to be validated/verified with a Qualification step, and then it gets released to the end user/product owner, who sign off on it.
If you do this feature by feature, then you have described an iterative/incremental development process. Scrum is one example of such a process. Yes, Scrum is many little waterfalls.
But with Waterfall you
Do the analysis of the whole project/product, check
Specification of the whole.., check
Design, check
Implementation, check
Internal testing, check
UAT - this is the first time the customer sees the product, it's too late.
If you think it's stupid to do it this way you are right, that's the point. If you haven't seen this in real world then you probably didn't work on a government project or you were very lucky.
I've worked on all kinds of projects, and every single time the project failed, the tooling and methodology were blamed, not the individuals. This is the #1 cause of failed software projects - the inability for individuals to take responsibility for parts of the workflow, for which they are incompetent, and not subsequently working on improving that competence at an individual level.
But true, breaking larger problems down into smaller units (Agile/Scrum) is an effective way for a "manager" to help their developers - but I argue this is non-optimal for sociological reasons, not technical: Adherence to a cargo-cult is a function of laziness, not competence.
So when waterfall doesn't work, it's the fault of the developers, who are doing it badly? Well, I've seen agile work, too, with really good developers. And plenty of people have seen both fail, if you don't have good enough developers. Hmm, maybe it's not the methodology?
> I've never quite understood, over 30 years of experience in software development, what problems Waterfall presented that warranted a complete refactor of software development processes such that things are now very much in cult-/cargo-cult territory.
I'm going to tell you up front that I'm a CSPO (certified scrum product owner), and that I don't exactly know. I don't mean this sarcastically - it's more that I'm not sure what agile/scrum fixes, versus where _any_ process change would naturally cause an org to address its weak points. Example -
For a while I was pretty into scrum - the company I was at transitioned from waterfall and it did give the appearance of faster delivery. In hindsight, I think what it really did was provide explicit authority to a decision-maker (me, or the PO in general), and build a culture of keeping docs updated regularly, and therefore status visible/known. Waterfall didn't break these things, but in the past we were slow to unblock things and nobody really knew where the work was unless they went and asked (and had someone do an adhoc update).
I'm now at an org where a team that isn't mine is trying to be pretty strict about scrum, doing all of the requisite incantations and such. The issue is, they have more management than they do actual developers on the team. Adding this process on top makes the management happy, but it hasn't done a thing to boost anything related to development. It's exactly the cargo cult behavior you describe, when IMHO agile is best thought of as a toolkit that you borrow from selectively to fix the things that need fixing. I think going all-in isolates engineers from the people and processes they're trying to help and reduces them to basically short order cooks pulling tickets off the rack. I get that it's meant to make them more efficient, but I think that isolating your most expensive people whose job description says "solve problems" instead of having them engage directly is the wrong move.
Mind you, I don't think I'm necessarily right about everything but I've seen enough broken shit to know I'm not totally wrong either. Now as a fairly senior leader, I discourage my teams from going all-in on agile and push them toward looking at their process, identifying what's broken, and fixing that (with agile principles or not). It's rare that layering on more process has a positive effect and I like folks to be thoughtful about those changes.
> ...There is a natural process that every issue goes through: Analysis, Specifications, Requirement, Design .. then the Developer takes in the materials of each of these steps...
What do Developers do before it's a green light for them to start the dig? What do Designers do before it's "their step"? What do they do afterwards? What about those in preceding stages?
Well, they might as well do whatever else lands on their desks next, but sure, everyone still cares to see the "final output", that is, the developed and tested product. Yet it's inevitable for them to switch their context to something else in the meantime.
And this detachment is what makes the Waterfall process often protracted and resistant to change. Not to mention that there's also a Customer somewhere in that pipeline, hoping to derive some value and provide some form of input.
Some industries/companies/teams/products may be just fine with waterfall, or with no methodology at all. Other situations may want shorter cycles or even parallel efforts to stay in line with demand/constraints/risks.
Also, there's a "working software" factor too. Call it a PoC/proto/early demo... It not only tests the ideas, it also motivates the team to keep going, or offers assurance that the product will be done "right".
If there is anything natural to any process, it is that most professionals want to do their jobs well and be proud of their effort, preferably rewarded for success and not singled out for blame when things fail.
The question is then how to create such an environment to make that possible.
Agile or not, what breeds cargo cults is the general lack of internal vitality in some industries/companies/teams/products. And as such it affects professional values at all levels, eroding the collective "care" about the result, or even the process itself.
Yes, that is the natural process. However, Waterfall (big-W) is a Big-Bang Big Design Up Front approach, each phase is months long and after the analysis you do little or no revalidation (emphasis on re) and a release planned 1-5 years out. That's where the problem comes in. If you revalidate the plan and change the plan, you're no longer doing Waterfall. You're doing something else, you're using your brain.
If you break that natural process down and apply it to subcomponents and have releases every few months, you're doing something like either the classical Iterative & Incremental or Evolutionary models which are longer cycle versions of what Scrum aims for (which as a target tries to get you under one month, maybe down to 1-2 week cycles). I&I and Evolutionary tend to operate with cycles from 3-6 months.
Adding more developers does not mean the software will be finished more quickly.
There are economies of focus in software, generally not scale.
Which is why you want good developers.
The other factor is that 'Requirements Change' in software more than they do in construction, and that's the other #1 issue that causes problems.
The entire ethos of 'small releases, often' is built around that.
Software is evolved a bit organically for this reason.
Waterfall is obviously counterproductive.
If there is a 'default' methodology in software it's 'Iterative Waterfall' whereby you break the project down into the smallest reasonable phases. There can be an overarching plan but it has to be nimble.
Coders just want to code. Waterfall means you have to wait before you code, and plan, and organize. Agile means you just start coding, and code and code and code until you leave and someone else has to clean up the mess.
This is no different than most other activities in life. See the Stanford marshmallow experiment.
So you are saying that agile happened because of coders and not because there were fundamental problems with using waterfall on large projects?
No part of agile says that you get rid of planning and organizing, it is just done in smaller slices in shorter cycles.
Having used both over 25 years, I wouldn't look back at waterfall for any project except the very smallest. No-one allows you to lock some requirements in for 5 years any more, something that was accepted back in the day. I have plenty of examples of waterfall projects that delivered a number of things that simply weren't required any more and that was a failure of a long-tail project plan.
Agile also allows you to work iteratively on a project that is never finished i.e. SaaS, which is not possible with waterfall.
I think this is half of it. From my experience, the other half is that "agile" is a wonderful excuse for managers and product managers not to have to plan anything or commit to any priorities, and for software companies not to hire project managers. I don't think this is particularly agile's fault; no process can fix broken organizations, and most tech organizations are broken. Luckily the magic money machine papers over grotesque inefficiencies so we can all be gainfully employed without having to become real professionals!
Waterfall is page 2. Reading past page 2 you will see attributes of agile and various techniques described. The one thing that comes to mind is the blind men and the elephant[1].
Quoting from the wikipedia link (which quotes another):
If it helps you can think of it like this: Scrum is just Waterfall, but with faster cycles.
Traditional Waterfall(tm) is when you spend a year in analysis, a year in specifications, a year in requirement, a year in design...
And then you put out a piece of software that's already outdated and doesn't do what the customer wanted or needed. But the company got paid anyway, so off to the next project.
With Scrum you go through the whole waterfall loop every 2-4 weeks and the customer and the team have a chance to amend any of the phases after every iteration.
This way there's less of a chance the product is completely outdated and unnecessary after the whole project (comprising multiple waterfall loops) is done.
>...And then you put out a piece of software that's already outdated and doesn't do what the customer wanted or needed.
Or software that was built on outdated foundations, like frameworks and providers that may have been "in vogue" at the product's conception but had faded or moved into maintenance-only mode by the product's release.
Let's not forget that the competition is not sleeping meanwhile, so Waterfall is equally capable of pushing out half-cooked products just to be ahead or to meet obligations/expectations.
Exactly. I've seen this in military contracts where the rules are REALLY strict (and ancient).
Everyone on the project knows at one point that the resulting product will be outdated and useless, but neither side can call it quits. The one making the product needs to finish or they won't get paid, the one ordering said product can't cancel the project or they'll get sanctions according to the contract.
So they just go through the motions and produce something that'll never get used and is put straight into the bin.
> There is a natural process that every issue goes through: Analysis, Specifications, Requirement, Design .. then the Developer takes in the materials of each of these steps, does an Implementation and Testing phase .. then the issue has to be validated/verified with a Qualification step, and then it gets released to the end user/product owner, who sign off on it.
I've never worked at a place where all these things actually happened. Everything was the responsibility of the developer, who would complain about the absurdity of that.
Then management eventually introduced Scrum, as a way to excuse their behaviour. "We don't need an analyst, we use Scrum now."
Developers who fail to take responsibility for the full workflow, fail.
It's not "Agile's" fault, although this is often used to justify the failure.
Developers have got to realize that they are responsible for the full workflow from beginning to end, and only poor/low-quality developers will work to change that natural law - with negative effect.
Imo analysis needs domain knowledge; the developer can do it on their own, but it won't be as good as when a domain expert does it. A developer doing their own QA will have the same blind spots as during development. And of course, if they also did the analysis, they'll have the same blind spots as during analysis.
Yes, waterfall is a team activity, as all software is necessarily a social service. Developers that fail to understand this - or indeed, resist it as part of their cultural identity - usually get taught this lesson hard in the form of failed projects.
In my 30 years of software development, everything has been a combination of waterfall and agile, and projects that attempt to adhere strictly to one or the other end up failing.
The generalization is that waterfall is a form of organization level open loop control and scrum is a form of organization-level closed loop control of your process. In one case you set a goal and then move towards it (and "damn the torpedos"). In the other case you stop and look where you're going and adjust course at regular intervals iteratively (But "are we there yet?").
In real-time applications, open loop control only works well in narrowly defined situations, but nevertheless can be very useful to get somewhere in a definite amount of time. Closed loop control tends to be more robust in that it is rather more certain to get you to the target in a wider range of situations and in the presence of noise. The downside is that it can be harder to determine when you'll get there.
This way of looking at it, you immediately see the balancing act going on.
Attaching specifically to what you are saying: if you do the Analysis, Specifications, Requirement, Design, Implementation, Testing, Qualification, Signoff just once, your product-objective fit will only be so good. (This is where waterfall stops and calls it a day. It's just a very long "day")
Once a change is signed off; review it in production for a while, take lessons learned, and reapply them to a new Analysis step (and move forward through the steps again).
After the second sign off, in real world practice you're likely to have a much better product market fit than the first iteration.
Wash rinse repeat, after the third sign off, you'll find an even better fit.
This much should stand to reason. You combine your original plan with everything you've learned. Then take the new plan and add in everything you've learned, etc. How could your 5th, 6th or 7th, Nth version NOT be better than the first one?
This is closed loop control.
If I present it this way, it would seem like open loop is always going to be faster than closed loop. Well, in biology this is actually exploited as such: fast processes are often open loop.
Fortunately it turns out that if you're doing closed loop anyway, the self-correcting properties of having a closed loop at all mean that you can get away with cruder/faster steps in the loop. As long as you keep the loop closed. This stands to reason, right? (And if not, it stands up to empirical testing.)
If you're in a hurry, it might be the case that it's ok to merely hit home close to the target, rather than bang on 100%. Despite some theoretical slowness, you might be able to get away with using a closed loop process with fewer/cruder iterations to get to your target more quickly in practice.
Now what a lot of cargo-cultists do in the case of open loop (and waterfall) is apply it to very long projects, but forget that the world might change while they're still working. Open loop works best for short, well defined projects.
In the case of closed loop cargo cultists: They might end up taking cruder faster steps, but then don't actually close the loop by going back to the start and iterating. Now you have a very bad product.
In practice you'll need to tune your process to your own needs.
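The open-loop/closed-loop contrast above can be sketched in a few lines of Python. This is a toy illustration I'm adding (the function names and numbers are invented for the sketch, not from the comment): the open-loop "planner" commits to one imperfect up-front estimate and executes it blindly, while the closed-loop one re-measures the error every cycle and corrects course.

```python
import random

def open_loop(target, steps):
    # Waterfall-style: one imperfect analysis up front, then execute
    # the plan without ever re-checking against the real target.
    estimate = target + random.uniform(-2, 2)  # initial analysis error
    position = 0.0
    for _ in range(steps):
        position += estimate / steps  # follow the original plan blindly
    return position  # ends at the (possibly wrong) estimate

def closed_loop(target, steps, gain=0.5):
    # Iterative-style: re-measure the error each cycle and correct.
    # Crude steps are fine because the loop self-corrects.
    position = 0.0
    for _ in range(steps):
        error = target - position   # "are we there yet?"
        position += gain * error    # move a fraction toward the target
    return position  # converges toward target despite crude steps
```

The open-loop run finishes in a fixed number of steps but carries the full error of the initial estimate to the end; the closed-loop run converges to the target even though each individual step is crude, which is exactly the trade-off described above.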
> In the case of closed loop cargo cultists: They might end up taking cruder faster steps, but then don't actually close the loop by going back to the start and iterating.
Agile isn't about having standups. It's about having fast feedback, so that you can react to what you have learned. If you don't have that, you don't have agile, no matter what process you have in place. (For that matter, any time you are trying to do agile by a rigidly defined process, that's an oxymoron.)
Maybe it is natural, but instinct isn't always best.
All you need to see is that it's obvious that if you don't have continuous design/stakeholder feedback, you can spend a lot of resources building the wrong thing. A lot of work is put into preventing this.
The cargo-culting and snake-oil sales exist, but that's just a fact of life, not exclusive to agile methods.
Planning beyond 90 days out and pretending the plan has any hope of accuracy, that new information won’t change the plans (or design) and that the plan you started with will still even be beneficial when you finish. Brittleness.
Dependency planning has to happen of course. That’s not unique to waterfall, it’s just planning.
"Waterfall out 90 days" is not what is derided as Waterfall. The derided form of Waterfall is for large applications and commitments, 90 days is small in context. That's the prototype phase of a project that Waterfall may be applied to.
The basic model of Waterfall is, indeed, fine if you iterate, which then isn't Waterfall. The basic model + iteration + shorter timeframes (probably under 90 days, certainly under 180) is basically just the same plan-do-check-act cycle from Deming (and others). You want it to be shorter so you can respond to new information (including the results of the partially developed system). Waterfall, the derided form, doesn't give you feedback until final delivery, or maybe the phase called "FQT" (final qualification testing). Importantly, until the customer is re-involved (which may happen in FQT or in delivery) you don't get revalidation of the system.
System engineers learned and applied this as far back as the 1950s, at least. No serious large scale system uses Waterfall and reliably succeeds.
You’re basically describing Scaled Agile. They have big planning meetings every 8-12 weeks (called PI Planning) where devs make a plan based on priorities, work out dependencies and get agreement with the business side of the house about the plan itself.
In my opinion, it’s the ideal balance of concerns.
> I've never quite understood, over 30 years of experience in software development, what problems Waterfall presented that warranted a complete refactor...
Because layers of management who need to continually justify their existence can never, ever keep themselves from putting their big fat thumb in the pie and fucking everything up.
I have about one third of the experience you have, but still I ended up in a place that was doing something very close to the mythical waterfall, and it was a horrible system.
We started with a one year long release plan, with a defined set of features that were supposed to make it in. A few weeks were spent defining functional specs for the features, negotiating them with PMs, and finally handing them over to the QA department (which was entirely separate). Then, dev work would start on all these features, while QA would come up with a series of Acceptance tests, Feature Tests, System Tests, Final Tests, Automation test plans etc. - all based on the FS.
Timelines and Gantt charts would be produced, deciding how to fit the features and required testing into the 1 year timeline.
After many months of dev work, the first builds would start making their way to the QA department, who would start tentatively running the Acceptance tests for the features that were supposed to be delivered. Some rapid iteration on any bugs affecting the Acceptance tests would happen here. Once the Acceptance test pass rate was high enough for the feature, QA would start working on Feature Testing, logging Bugs, while the dev teams would move on to other features. Occasionally QA or other devs would complain about blocking bugs that would have to be addressed ASAP, but otherwise bugs would pile up.
This would continue for all features, with dev teams keeping an eye on the total bug counts, trying to make sure they would stay below limits that had been set in a Bug plan (typically this meant keeping below 200-300 bugs for a team of ~5-6 people).
Finally, all features would be delivered to QA and passed acceptance, reaching the Feature Complete milestone. At this time, the dev team would start work in earnest on bugs, while the QA teams would continue with Feature Testing, logging new bugs.
This would continue for 1-3 months typically, until the Feature Testing would reach a decent pass rate, reaching the Feature Test Complete milestone. QA would now start on Performance testing and Early Final Testing. Often this would be the first time multiple features would truly start being used together. Dev teams would still be on full bug fixing mode.
When bug counts were low enough and Early Final Testing had a good pass rate, the product would reach the Code Chill stage - only important and safe bugs would be allowed to be fixed anymore. At this time, Final Testing would start in earnest, with the most complex cross-functional tests. The dev team would be finalizing the remaining bugs, until Code Freeze was reached - the vast majority of testing done, and only critical issues being allowed to be considered for fixing.
Of course, during this while, deviations from the schedule devised initially would be monitored constantly, with pressure coming not only from upper management, but also between QA and dev, as delays on the dev side would easily translate to delays on the QA side.
Customer feedback on previously released versions of the product, or new opportunities, would be very hard to squeeze in, causing panic and chaos in all the well laid plans. More often than not, it would be left to next year's release, meaning the average time between an opportunity and a release containing a feature to address that opportunity would be ~1.5 years.
Dev and QA would often be at odds, blaming each other for schedule slips, complaining about silly bugs, trying to pass the buck.
Of course, overall the whole thing worked, but slowly moving to a more Agile working model was a massive relief to everyone and created much better team cohesion, much less pressure on deadlines and deferred features (in the past, a feature getting kicked out of the current release meant it would only come one entire year later at best), and a much better product. Huge issues in design would no longer be treated as bugs, discovered weeks after the initial delivery of a feature. Tests would be designed by dev and QA together, often being run iteratively days after the code is written, leading to much better quality and corner cases getting caught early, not late. Process adjustments are now local team decisions, whereas before they required VP level involvement and agreement and coordination between all teams.