It's not an RRT[1] if it doesn't employ the Voronoi bias, i.e. sampling uniformly over the space so that the node owning the largest Voronoi cell is the one most likely to be extended. Since the first paragraph describing the "RRT" implementation admits it didn't explore properly, it really wasn't an RRT. This is a pity, because a proper RRT (or better still, a PRM[2] with the monster walking every edge) would really have been a great way to handle this! (See the sketch below the footnotes.)
[1] Rapidly-Exploring Random Tree. Without the Voronoi bias it's not rapidly-exploring, so it's just a random tree (RT) instead.
[2] Probabilistic RoadMap. Similar to an RRT except that samples are connected to all close samples within some neighbourhood function.
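A minimal RRT sketch in Python, for illustration only (the function name, the `is_walkable(p)` query, and the rectangular `bounds` are assumptions, not anything from the article): sampling uniformly over the whole space and always extending the nearest existing node is what produces the implicit Voronoi bias, since the node that owns the largest empty region is the one most likely to be nearest to a random sample.

```python
import math
import random

def build_rrt(start, is_walkable, bounds, step=1.0, iters=5000):
    """Minimal RRT sketch. is_walkable(p) and bounds = (xmin, xmax, ymin, ymax)
    stand in for whatever walk/collision query the game actually has."""
    nodes = [start]          # tree vertices
    parent = {0: None}       # edges, keyed by child index
    for _ in range(iters):
        # Uniform sample over the whole space: this is where the implicit
        # Voronoi bias comes from, since big empty regions attract more samples.
        q = (random.uniform(bounds[0], bounds[1]),
             random.uniform(bounds[2], bounds[3]))
        # The nearest existing node gets extended, i.e. the node whose
        # Voronoi cell contains the sample.
        i, near = min(enumerate(nodes), key=lambda kv: math.dist(kv[1], q))
        d = math.dist(near, q)
        if d == 0.0:
            continue
        s = min(step, d)
        new = (near[0] + (q[0] - near[0]) / d * s,
               near[1] + (q[1] - near[1]) / d * s)
        if is_walkable(new):
            parent[len(nodes)] = i
            nodes.append(new)
    return nodes, parent
```

A PRM, per footnote [2], would instead drop samples independently and connect each one to every reachable neighbour within some radius; the monster could then patrol the edges of that roadmap.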
Narrow corridors and so-called "bug traps" are known pathological cases for standard RRTs. The RRT approach was never gonna work, implemented correctly or otherwise. See my other comment for a proper way of doing it with guarantees on coverage.
Because I'm...not visually inclined...I've always had a bias against visualization for debugging/analytical purposes compared to mathematical analysis...that is, a bit of code that calculates standard deviation should tell me something is off in a dataset faster than a bar chart will (this is speaking from the perspective of being the creator/analyzer/tinkerer...obviously, visualizations have a vital and unique role when communicating your findings to others).
But between this article and the one that was posted last week (the Nebraska problem http://mollyrocket.com/casey/stream_0015.html)...it's really thought-provoking to see the ways that a visuals-first debugger can at least spark a faster iteration of ideas, if not uniquely identify them...the OP wrote this in another of his Working on the Witness pieces (http://mollyrocket.com/casey/stream_0013.html):
> Instead of stepping through the code or even carefully reading it to see what it was doing, I did what I often do these days and added debug visualization code right away. I’ve been through these kinds of problems enough times that I know, more often than not, if you’re going to debug one thing in a piece of code, you’re going to debug a bunch of things, so you might as well add the debug code early and leverage it throughout the whole process. Why wait for the really hard bug before adding the debug code? It takes the same amount of time to write debug code earlier as it does later, but earlier means you can use it to speed debugging of all those not-as-hard bugs along the way.
It's not test-driven development, obviously, but am I wrong in assuming that this preemptive custom debugging is rare in game development? What kind of frameworks/scaffolding exist to do such testing? I thought the situation described in the OP was interesting in that it showed how one of the most annoying game-breaking bugs can slip through...but if consistency of boundaries depends on designers/artists meticulously cleaning up their "blocks"...and/or a dev type who has hacked together a tool to make inconsistencies more noticeable...I'm surprised that games don't have broken surfaces as a rule rather than an exception. Or maybe way more time goes into mundane, repetitive game testing than I had thought.
As for visual debugging, these tools are really important because they a) convey information in a much denser form than you'd get from textual output and b) convey information which you can digest without already knowing the answer.
For example, unless you'd seen the oversampling in the picture, you probably wouldn't think to ask "Hey, do we have a lot of overlapping coverage in our walk here?". Similarly, in the case of the Nebraska problem, statistical methods would probably give you the a-okay until you actually saw the results in action, at which point you'd know something was off.
In games and graphics it's pretty much essential to have some kind of visual debugging setup so you can easily see what's going on--in very complex systems like physics engines it can be almost impossible to tell that something is off without debug rendering enabled.
I can't speak for all games, but this sort of visualization is something I believe to be common in large AAA games, and it's something I've had (and continue to have) in the small subset I've worked on. On the other hand, it's always been custom code (like the engines themselves), and I would guess it's not something a lot of smaller studios would feel justified spending time on until they needed it. I don't know if that's true. I've found that just providing some simple, easy-to-use debug line and shape drawing that lives outside the rest of the rendering system means people will use it for this sort of thing all the time. After a while there'll be a bunch of useful visualizations built up, and if none of them do quite what you want, it'll probably be easy to modify an existing one.
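As a sketch of what "simple, easy-to-use debug line and shape drawing that lives outside the rest of the rendering system" might look like (the names and structure here are hypothetical, not any particular engine's API): an immediate-mode helper that any system can call into, drained by the renderer once per frame.

```python
# Hypothetical immediate-mode debug drawing. Gameplay, AI, or physics code
# calls debug_line()/debug_circle() from anywhere; the renderer drains the
# queue once per frame. Everything here is illustrative, not a real API.
_debug_queue = []

def debug_line(a, b, color=(1.0, 0.0, 0.0), frames=1):
    _debug_queue.append({"kind": "line", "args": (a, b),
                         "color": color, "frames": frames})

def debug_circle(center, radius, color=(0.0, 1.0, 0.0), frames=1):
    _debug_queue.append({"kind": "circle", "args": (center, radius),
                         "color": color, "frames": frames})

def flush_debug_draw(backend):
    """Called once per frame by the renderer. `backend` is assumed to expose
    line(a, b, color) and circle(center, radius, color), i.e. whatever
    low-level drawing the engine really provides."""
    global _debug_queue
    survivors = []
    for p in _debug_queue:
        getattr(backend, p["kind"])(*p["args"], p["color"])
        if p["frames"] > 1:               # keep multi-frame primitives alive
            p["frames"] -= 1
            survivors.append(p)
    _debug_queue = survivors
```

The point of keeping it a one-line call from the caller's side is that adding a visualization costs almost nothing in the code being debugged, which is what makes people reach for it.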
I do think Unity and Unreal both probably provide a lot of tools in this department, so the situation outside crazy developers still using custom engines is probably a lot better than I think it is.
Also, it's almost certainly true that way more time goes into mundane, repetitive game testing than you thought. The QA budgets for AAA titles are quite large, and a lot of that testing work is horrifyingly dull. Methods like this post outlines help a lot, but there's no substitute for actual people getting paid to try to break your game.
Braid (Jonathan Blow's previous title) was in development for something like 4 years and technically it was significantly less ambitious than The Witness appears to be. In the end it turned out very nicely polished and was one of the best games I've ever played.
Not all years-long development cycles are signs of trouble; generally that only applies when you have taskmasters death-marching a burned-out team toward various failed deadlines. When you have a single creator or a small team of creators iterating on a core game idea with the means to actually release the title "when it's done", good end results often occur.
Which means that back in Dec. 2012, the OP thought this:
> Despite not even being finished, The Witness is already one of my all-time favorite games. I’ve been playing it since Jonathan Blow started developing it several years ago, and I could not be more excited about its upcoming release.
Though he could be just a corporate shill and/or have bad gaming tastes...if he was this excited about it two years ago (and had been playing it since "several years ago")...that's pretty promising. All I know is that the first thing I'll do is check to see how well-arranged the grass is.
Should do a lawnmower pattern (uniform coverage) and, on each obstacle boundary, spawn a boundary explorer.
The boundary explorer would try to move a small distance forward; if it could, it would back up and turn right a small amount, then try moving forward again, looking for the furthest right it could turn while still being able to walk forwards. This would trace the boundary clockwise.
If the lawnmower pattern's grid resolution were half the smallest obstacle diameter, you would be guaranteed to discover the topology of the map (assuming the map walk operator is bidirectional, which it isn't, but should be). A sketch of the boundary explorer follows below.
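A rough Python sketch of that boundary explorer, written as a standard right-hand wall follower (the `can_stand(p)` walkability query, the step size, and the coarse set of candidate turn angles are all assumptions for illustration):

```python
import math

def follow_boundary(start, heading, can_stand, step=0.25, max_steps=10000):
    """Right-hand wall follower: at each step, prefer the sharpest right turn
    that still lands on walkable ground, so the explorer hugs the obstacle
    boundary it was spawned on. can_stand(p) stands in for the game's real
    walkability test."""
    pos, ang = start, heading
    trace = [pos]
    for _ in range(max_steps):
        # Candidate headings, most-right first (negative = clockwise turn):
        # hard right, slight right, straight on, then left and an about-face.
        for turn in (-90, -45, 0, 45, 90, 135, 180):
            a = ang + math.radians(turn)
            nxt = (pos[0] + step * math.cos(a),
                   pos[1] + step * math.sin(a))
            if can_stand(nxt):
                pos, ang = nxt, a
                trace.append(pos)
                break
        else:
            break                         # boxed in: nowhere to go
        if len(trace) > 10 and math.dist(pos, start) < step:
            break                         # back near the start: loop closed
    return trace
```

With a wall on the explorer's right, the hard-right candidate is blocked and it walks straight along the wall; at an outer corner the right turn opens up, and at an inner corner it falls back to turning left, which is what traces out the boundary.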
This is called a boustrophedon decomposition[1] (the name comes from the movement an ox makes when plowing a field). It's used for minesweeping robots among other things. Constructing the decomposition simultaneously covers the space being explored, so a robot can explore and mow grass at the same time. Applied as a coverage algorithm, it's much more efficient than a random walk.
The robot is usually modelled as having a non-zero footprint since coverage is often a goal, but a radius of zero can be used for exploration. To explore a space with minimum obstacle radius R and robot radius r, each sweep can be 2(R+r) units wide.
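A tiny worked sketch of that spacing rule (the rectangular bounds and the function name are mine, not from the comment): with sweeps 2(R + r) apart, the uncovered gap between the strips the robot touches is exactly 2R, so anything of radius at least R at least grazes a sweep; in practice you'd shave the spacing slightly so it strictly overlaps.

```python
def sweep_lines(y_min, y_max, obstacle_radius, robot_radius):
    """Sweep-line y-coordinates for a lawnmower pass over [y_min, y_max].
    A spacing of 2 * (R + r) leaves gaps of 2 * R between covered strips,
    which is the diameter of the smallest obstacle we care about finding."""
    spacing = 2.0 * (obstacle_radius + robot_radius)
    y = y_min + robot_radius              # first pass hugs the lower edge
    lines = []
    while y <= y_max - robot_radius:
        lines.append(y)
        y += spacing
    return lines

# e.g. a 100-unit-tall map, obstacles at least 4 units across (R = 2),
# point-like explorer (r = 0): one sweep every 4 units.
print(sweep_lines(0.0, 100.0, 2.0, 0.0))
```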
Oh, true. So there will also be another type of boundary which has a direction associated with it, and a sort of "don't know" area when you're mid-fall with some limited manoeuvrability (non-holonomic movement). Best not to try to categorise the non-holonomic areas in too much detail, as that's a rabbit hole.
Should do a Roomba pattern: random coverage, and guaranteed to get stuck in an inverted polygon face as soon as you leave the room thinking the build will still run for an hour.