
I don't know. If you are struggling with A* search while claiming to be a great programmer, that sounds contradictory. I am assuming the poster was familiar with graphs and graph searches before he stumbled onto A* search.

Recursive/Non-recursive dfs should be pretty much obvious to a comp sci student. Here is a sample non-recursive dfs:

https://gist.github.com/1789807

A* search is the same, except you change the $graph to include costs, and change the `queue` implementation to pop based on a heuristic function (cost from source to here + estimated cost to destination).
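For what it's worth, here is a rough Python sketch of that idea, assuming a hypothetical `graph` dict mapping each node to a {neighbor: cost} dict (this is just the shape of the algorithm, not the gist's code):

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """A* search: always expand the node minimizing g(n) + h(n).

    `graph` maps each node to {neighbor: edge_cost};
    `heuristic(n)` estimates the remaining cost from n to the goal.
    """
    # Frontier entries: (g + h, g, node, path-so-far).
    frontier = [(heuristic(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in seen:
            continue
        seen.add(node)
        for nbr, cost in graph[node].items():
            if nbr not in seen:
                heapq.heappush(
                    frontier,
                    (g + cost + heuristic(nbr), g + cost, nbr, path + [nbr]))
    return None, float("inf")
```

With `heuristic = lambda n: 0` this degenerates into Dijkstra's algorithm, which is a decent sanity check for any A* sketch.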

I am not saying a good computer programmer must know A* search - I am saying if you are a programmer, your thought process should be molded in a way that you start finding these things obvious.

I can't think of any programmers who I personally know, or otherwise, who aren't well versed in algorithms. Familiarity with algorithms serves two purposes:

1. It gives your mind general elasticity and agility.

2. Many times, it gives you foundations on which you build.

I recently wrote this small Python lib which converts XML to a Python dict and vice versa: https://github.com/thoughtnirvana/xmldict. It came naturally to me because I was simply implementing a recursive-descent parser. Had I not known about parsers, I don't think the solution would have come so naturally.
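For illustration only (this is not xmldict's actual code, just the shape of the idea), a recursive descent over an ElementTree might look like:

```python
import xml.etree.ElementTree as ElementTree

def xml_to_dict(element):
    """Recursively convert an Element into a nested dict.

    Leaf elements map tag -> text; elements with children map
    tag -> dict of their children. (Simplified: ignores attributes
    and repeated sibling tags.)
    """
    children = list(element)
    if not children:
        return {element.tag: element.text}
    value = {}
    for child in children:
        value.update(xml_to_dict(child))
    return {element.tag: value}

root = ElementTree.XML('<person><name>Ada</name><age>36</age></person>')
# xml_to_dict(root) → {'person': {'name': 'Ada', 'age': '36'}}
```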

That is just one example. There are other examples involving dynamic programming, counting sort, etc., which translate nicely to real-world use cases; if you don't know the basic algorithms, you end up badly re-implementing a second-class substitute.



In defence of the poster, I think it depends on how much experience you have solving a certain class of problems, and perhaps more importantly, how you were taught.

I don't recall having studied the A* algorithm before, so I first looked it up on Wikipedia for a few minutes, and then came back to the comments. It's worth me saying that 30 seconds of reading your comment and linked gist did more to aid my comprehension than a few minutes looking through the Wikipedia article.

This is the case because you described the algorithm in terms of building blocks I was already familiar with. It's a depth-first search with a weighted queue; as you say, pretty obvious! But it wasn't immediately obvious from the Wikipedia article, because Wikipedia articles are designed to impart information, not necessarily to teach.

If the poster was taught by a professor who was teaching more like the Wikipedia article (and I've known a lot of lecturers who do this!), I can understand the confusion.


I see where you are coming from. I was arguing more on the side of "familiarity with algorithms" is like "hitting the gym" irrespective of what game/sport you play. It either helps you directly, or gives you general skill.

I was mostly taking exception to the pairing of "great programmer" with "no algorithmic knowledge".

It's great that you figured out A* search in under 30 seconds, but that's because you had the foundation. As I said, I am not claiming A* search (or any other algorithm) is the baseline, but there is a baseline (I won't bother enumerating algorithms - there is no agreed-upon list), and a good programmer should strive to be above it. Simply throwing up your hands and claiming "I can develop large systems fine, hence algorithmic knowledge is not needed" is not really helpful - it is counterproductive.

Again, if it's working fine for the poster, that's great. But the poster might be missing "unknown unknowns."


On a side note, Wikipedia is a horrible source to learn a new algorithm.

I can't say for A*, but many other algorithms (like Bayes' theorem, FFT) I tried to look up made little sense to me on Wikipedia, yet learning from some well-presented material (be it text, graphics, or video) made them easy to comprehend.


I wrote this about 10 years ago and hope to spend some time one day making it simpler, but it seems to have helped a lot of people, judging by the emails I get: http://www.heyes-jones.com/astar.html


Not as concise as the explanation given by irahul (which actually made me get the concept), but your article is very interesting and well written, and it is absolutely worth sharing with a couple of friends of mine who would probably not get it from the Wikipedia article.


Wikipedia has helped me understand many algorithms. There's probably a 50% chance I will get it, but the articles are so short it's worth a first look.


I agree, but I think there are also multiple foundations, and missing out on one or two doesn't necessarily result in a structural collapse into mediocrity.

It's quite easy to end up with a bad teacher and miss the foundation necessary to understand a class of algorithms. In an ideal world you'd backtrack until you understood that missing foundation, but it's easy to get the impression that a particular area of study is beyond your intellect.

I kinda think that's what's happened here. The poster probably has enough foundations to work with normally, but here and there are missing pillars of knowledge.

That said, search is a pretty huge and important class of algorithms! The fact that he had difficulty understanding A* suggests that a rather large pillar is missing from the foundations of his craft.


Something I have found is that with learning algorithms a sideways approach is sometimes better if you find yourself struggling.

I initially had some trouble visualizing the process of algorithm development if I simply followed a book and did exercises (self-taught), so I would pick a complicated looking algorithm and just read about it, then think about it for a while, and try to find somewhere it had been used in code and how it solved some problem.

After a while I would go back to the book and it would sink in, and I was able to better understand the mathematical model and intuition. Several such algorithms later, I am able to pick up new algorithms relatively quickly.


I think an encyclopedia, or at least a list, of algorithms and their real-world effects or analogies would be useful.


> if you are a programmer, your thought process should be molded in a way you start finding these things obvious

> had I not known about parsers, the solution would have come naturally to me

Your use of "obvious" and "come naturally" fits the template of someone who is naturally more inclined to understand this stuff than the OP might be, or (for that matter) I might be.

But rather than abandon algos as irrelevant, the OP should put more effort in.


I disagree that anyone is "naturally inclined" to be better at algorithms, or coding, or abstract thinking. Rather, approach it as if we're blank slates that have picked up techniques and patterns of thinking as, throughout our lives, we've been confronted with problems that required a new way of thinking. I'd reckon that someone you see as "naturally inclined" has simply encountered problems before that lend themselves to abstraction in a similar way as the new problems they face (algorithms, architecture, speaking Japanese).

If you believe my paradigm, then you start to realize that not only is your mind extremely flexible, but that by focusing on a varied set of similar problems, you can start developing a framework to pick up more skills faster, and thus develop that "natural inclination" in the field you practice.


> Here is a sample non-recursive dfs:

I'm pretty sure that's a BFS, not a DFS. A DFS uses a stack, not a queue.


> I'm pretty sure that's a BFS, not a DFS. A DFS uses a stack, not a queue.

In the gist, queue.push adds element to the tail end, and queue.pop removes from the tail end.

For it to behave as BFS, you would have to change queue.pop to queue.shift, which removes elements from the front.

An ideal graph search implementation will take queue as a parameter, and will use queue.enqueue and queue.dequeue. And then the injection -> implementation mapping will be:

  LIFOQueue (stack) -> DFS
  FIFOQueue (regular queue) -> BFS
  PriorityQueue -> some sort of best-first search
  PriorityQueue with A* heuristic -> A* search (duh)

But I didn't want to go through all that trouble to demonstrate a simple snippet.
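For what it's worth, a rough Python sketch of that injection idea, with the frontier's pop behavior passed in as a parameter (illustrative only; the names here are mine, not the gist's):

```python
from collections import deque

def graph_search(graph, start, goal, pop):
    """Generic search: `pop` decides which frontier path to expand next.

    pop = deque.pop      -> LIFO  -> DFS
    pop = deque.popleft  -> FIFO  -> BFS
    """
    frontier = deque([[start]])
    visited = set()
    while frontier:
        path = pop(frontier)
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph.get(node, []):
            if nbr not in visited:
                frontier.append(path + [nbr])
    return None

g = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d']}
bfs_path = graph_search(g, 'a', 'd', deque.popleft)  # ['a', 'b', 'd']
dfs_path = graph_search(g, 'a', 'd', deque.pop)      # ['a', 'c', 'd']
```

Swapping in a heap-based pop keyed on path cost plus a heuristic gives the A* variant, per the mapping above.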


Interesting, I had never seen this formulation where DFS and BFS can be expressed by a single algorithm that differs only in the behavior of the "queue." It's still a bit misleading to call it "queue" since it might be a stack, but I see the intention.

This formulation of DFS (which tracks a stack of nodes to visit) is less efficient than one that tracks a stack of iterators, because the size of your stack is O(vertices) instead of O(graph depth). On the other hand, it's simpler in some ways, so I can see why it would be preferable in some situations.
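A sketch of that iterator-stack formulation, if I understand it right (my own naming, nothing canonical):

```python
def dfs_iterators(graph, start):
    """DFS keeping a stack of neighbor iterators, one per level,
    so the stack size is O(current path depth), not O(vertices)."""
    visited = {start}
    order = [start]
    stack = [iter(graph.get(start, []))]
    while stack:
        try:
            nbr = next(stack[-1])        # next unexplored neighbor at this level
        except StopIteration:
            stack.pop()                  # level exhausted; backtrack
            continue
        if nbr not in visited:
            visited.add(nbr)
            order.append(nbr)
            stack.append(iter(graph.get(nbr, [])))
    return order
```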


Many graph search algorithms can be expressed as a generic graph search algorithm where you have a "white set" of unvisited nodes, a "black set" of visited nodes, and a "grey set" of nodes that you've encountered but have yet to visit. The structure of the grey set determines the algorithm:

Queue = BFS

Stack = DFS

Priority queue by distance from start = Dijkstra's algorithm

Priority queue by distance + heuristic = A*

Bounded priority queue by heuristic = Beam search

Priority queue by connecting edge weights = Prim's algorithm

Pointer reversal = Mark & sweep garbage collector

Check on whether it points into from-space = Copying garbage collector.
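A rough sketch of that generic formulation, instantiated with a priority queue ordered by distance from the start, i.e. Dijkstra's algorithm (the naming of the sets follows the comment above; the code itself is just my illustration):

```python
import heapq

def dijkstra(graph, start):
    """Tri-color search: nodes with a final distance in `dist` are
    "black", the heap is the "grey" set, everything else is "white".
    With the grey set ordered by distance from the start, this is
    Dijkstra's algorithm."""
    dist = {}                      # black set: node -> final distance
    grey = [(0, start)]            # grey set: (tentative distance, node)
    while grey:
        d, node = heapq.heappop(grey)
        if node in dist:           # already black; stale heap entry
            continue
        dist[node] = d
        for nbr, w in graph.get(node, {}).items():
            if nbr not in dist:    # still white or grey
                heapq.heappush(grey, (d + w, nbr))
    return dist

g = {'a': {'b': 1, 'c': 4}, 'b': {'c': 2}}
# dijkstra(g, 'a') → {'a': 0, 'b': 1, 'c': 3}
```

Replacing the priority key with distance + heuristic turns the same skeleton into A*, per the list above.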


Norvig's Practical AI Programming has a great chapter on search which also shows how similar different algorithms look once you have the right abstraction.


In case someone runs into this comment and is interested in checking out the book, the book is called AIMA (Artificial Intelligence: A Modern Approach), and here is the relevant code.

http://aima-python.googlecode.com/svn/trunk/search.py

It's a long file though.


Also, justinh meant this other code instead: http://norvig.com/paip/search.lisp (from Norvig's other book).


Oh. I should have guessed "practical ai programming" was actually PAIP (Paradigms...). I haven't read PAIP, but I have read AIMA, and the topic under discussion occurs in AIMA; so I assumed the OP meant AIMA.


Perfectly reasonable! The treatment is similar but less formal/complete in PAIP.


My bad for getting the name of the book wrong. I thought of it as PAIP for so long I forget what the actual name is :) Great book though, even for those not especially interested in AI or Lisp


I agree, and relevantly to this post it's a deep book on programming where algorithms and algorithm design do appear but aren't really central. They're an essential part of the toolbox but not the focus.


When I overhauled that code for the third edition, I messed up the A* search. I don't remember what the bug was offhand -- just saying, the OP is not the only one who has trouble sometimes.


> DFS and BFS can be expressed by a single algorithm that differs only in the behavior of the "queue."

Correct me if I am wrong, but the naive non-recursive DFS and BFS differ only in the implementation of the queue.

https://gist.github.com/1791284


Isn't that what I said?


My bad. Written communication is confusing - your comment sounded incredulous.

EDIT: In retrospect, you did mention you find it interesting - don't know how it jumped out as incredulous.


Despite the name, he is using the array as a stack, not as a queue.


Only uses #push and #pop, so it's actually a stack. I dunno why it's named queue.


Yeah, that's a nice xmldict you wrote there, except it doesn't do anything good at all. Try to parse this with it: <html>hello</lol>world</html>. So much for your A* search and algorithmic knowledge.

I am no good with algorithms, but I would know better than to make my own xmldict parser in that way. I would instead learn about parsing and the algorithms for successful parsing, instead of taking a half-assed approach.


> Try to parse this with it, <html>hello</lol>world</html>.

Parsing invalid XML with an XML parser throws an error, as it should.

I am using cElementTree for parsing and this is what will happen with your input.

    In [1]: import xml.etree.cElementTree as ElementTree

    In [2]: ElementTree.XML('<html>hello</lol>world</html>')
    ---------------------------------------------------------------------------
    ParseError                                Traceback (most recent call last)
    /home/rahul/musings/python/<ipython-input-2-54e782b0af58> in <module>()
    ----> 1 ElementTree.XML('<html>hello</lol>world</html>')

    /home/rahul/musings/python/<string> in XML(text)

    ParseError: mismatched tag: line 1, column 13

> make my own xmldict parser in that way

What is "that way" you are talking about? There isn't an xmldict implementation for Python (at least I can't google one), I needed one, so I wrote a recursive-descent parser. A recursive-descent parser is a popular choice provided your grammar is LL(k) - plenty of production language implementations use hand-written recursive-descent parsers. Also, recursive-descent parsers are the easiest to handroll.
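To give a sense of what "easiest to handroll" means, here is a toy recursive-descent parser for a made-up grammar of '+'-separated integers (purely illustrative; this has nothing to do with xmldict's actual code):

```python
def parse_sum(tokens, pos=0):
    """Recursive-descent parser for the LL(1) grammar:
         expr -> number ('+' expr)?
    Each grammar rule becomes one function; this grammar has only one.
    Returns (value, next_position)."""
    value = int(tokens[pos])                    # number
    pos += 1
    if pos < len(tokens) and tokens[pos] == '+':
        rhs, pos = parse_sum(tokens, pos + 1)   # '+' expr
        value += rhs
    return value, pos

# parse_sum("1 + 2 + 3".split()) → (6, 5)
```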

PS - There is a way to disagree. Yours isn't the right way.


I don't know a lot about XML, but that doesn't look like (legal) XML to me. Is it?



