
To execute a total functional language, you essentially have to split the processor in two. I'll call these two parts the evaluator and the executor.

* It's the evaluator's job to evaluate expressions. Expressions are primitive recursive, so they always normalize.

* The executor's job is to talk to the outside world: it receives commands from the evaluator and provides data back to it. It also can't get stuck in an infinite loop; after receiving a command, it must eventually return data to the evaluator.

The only way an infinite loop can arise is from the interaction between the two: an unbounded back-and-forth of commands and data, even though each part terminates on its own.

When I said processor before, I was referring to the part that did the processing: i.e. the evaluator.
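
To make the split concrete, here's a toy sketch in Python (the generator protocol, the command names, and the functions are just my illustration, not a fixed design): all the work the evaluator does between yields terminates, and every contact with the outside world happens in the executor's loop.

    def evaluator(program):
        """Yield commands to the executor; the work between yields always terminates."""
        for expr in program:
            result = expr()                  # total evaluation: always normalizes
            reply = yield ("print", result)  # hand a command to the executor
            # 'reply' is whatever data the executor sent back

    def executor(program):
        """Drive the evaluator and perform all contact with the outside world."""
        gen = evaluator(program)
        try:
            cmd = next(gen)
            while True:
                kind, payload = cmd
                if kind == "print":
                    print(payload)           # the only side effects live here
                cmd = gen.send("ok")         # must eventually hand data back
        except StopIteration:
            pass

    executor([lambda: 1 + 1, lambda: sum(range(10))])

Note that the unbounded while loop lives only in the executor; if the evaluator never stops issuing commands, that's exactly the unbounded back-and-forth described above.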



I'm interested in approaches like this too, but there are some practical considerations that seem to complicate it:

* from a systems point of view, an infinite loop is no worse than a routine that doesn't return within a reasonable amount of time. PR is a huge complexity class (it contains NEXP); just because a function always terminates doesn't mean it's efficient, or as efficient as you would like/need it to be (see the toy example after this list).

* PR functions are somewhat easier to reason about than general recursive functions (no need to mess around with bottom and domain theory) but I haven't seen a lot of evidence that that translates to making them easier to aggressively optimize.

* In fact, I have heard that many PR functions have a more efficient GR equivalent, and I don't know of any way to automatically derive the GR version from the PR one; and I expect (just on a hunch) that no perfectly general PR->GR conversion could exist.

* Granted, a function can be shown to be total without necessarily being PR, but then you have the burden of proving that it is total, and it seems inelegant to move that burden to hardware. Maybe it's not, maybe that's just "normal science" talking.

* In practice, if I want to run an interpreter for an existing TC programming language on this architecture, it has to treat the architecture as TC (i.e. conceptually break its separation between executor and evaluator) anyway.
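
As a toy illustration of the first point above (termination is no guarantee of tractability; the function and its name are just mine):

    def tower(n):
        # Primitive recursive -- a single bounded recursion on n -- and guaranteed
        # to terminate, but tower(5) already has roughly 20,000 digits, and the
        # time and space to compute it grow accordingly.
        return 1 if n == 0 else 2 ** tower(n - 1)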


You make some good points, especially that a GR formulation of a function may be more efficient than a PR one; do you have an example, or can you cite a reference for this?

As for optimization, I believe that it may be possible to more effectively automatically parallelize a PR function than a GR one.


Unfortunately, it's hearsay to me; it came up in conversation (about computability and complexity) with a professor, and I took his word for it at the time; I've been meaning to ask him for a reference ever since, but never got around to it.

But at least one thing I can see is that in PR, you need to fix the upper bounds of the loops (/recursion depth) ahead of time, not "on the fly". If you're doing some kind of iterative approximation, you probably don't know what those bounds "should" be, because you're going to terminate on some other condition, like, the change in the value has become negligible. Your upper bound will be a worst-case estimate -- which you have to do extra work to compute -- and I don't see how it differs much, in practice, from a timeout, which has the advantage (again from a systems perspective) of being measured in clock time rather than cycles.
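
To make that concrete, here's a toy contrast (the epsilon, the bound formula, and the function names are mine): a GR-style Newton iteration for square roots that stops when the change becomes negligible, next to a PR-style version that has to commit to a worst-case iteration count before it starts.

    import math

    def sqrt_gr(x, eps=1e-12):
        """General-recursive style (x > 0): iterate until the change is negligible."""
        y = x
        while True:
            nxt = 0.5 * (y + x / y)     # Newton step for sqrt
            if abs(nxt - y) < eps:
                return nxt
            y = nxt

    def sqrt_pr(x):
        """PR style (x > 0): the loop bound must be fixed before iterating."""
        # Worst-case estimate computed up front, independent of observed progress.
        bound = max(1, int(math.log2(max(x, 2))) + 60)
        y = x
        for _ in range(bound):          # upper bound fixed before the loop runs
            y = 0.5 * (y + x / y)
        return y

The point is the extra up-front work to choose a bound that is safe in the worst case, rather than stopping on the observed change.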

Not sure about parallelization. PR doesn't suggest any advantages to me for that offhand, but then, I haven't thought about it.


W.r.t. parallelism, I'll copy here something I wrote elsewhere in this thread:

"I don't know for sure that this is possible yet, but I believe that the processor would be able to estimate the amount of work required to evaluate an expression. Using this ability, it would be able to automatically parallelize the evaluation of an expression by splitting it up into pieces of approximately equal size and then distributing them to sub processors."


The evaluator is the interesting part, of course. How do you stop it from getting stuck evaluating functions like this?

    def f():
        f()
If you can do general recursion, you can do infinite loops. If you can't do general recursion, I'm not sure the language will be very useful.


No, you can't do general recursion on the evaluator alone. But it would be possible to run any general recursive function across both parts, for example if you wanted to run software not designed for this architecture. The problem with this is that optimizations would only happen on a per-expression basis, so it wouldn't make very good use of the architecture.
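
Here's a toy picture of what running a GR computation across both parts might look like (the Collatz step and the trampoline shape are just my illustration): the evaluator's share reduces to a single terminating step, and the executor keeps feeding the intermediate state back in, so any non-termination lives in the interaction rather than in any one evaluation.

    def step(state):
        """One bounded evaluation step of a GR computation (Collatz, as an example)."""
        n, count = state
        if n == 1:
            return ("done", count)
        nxt = n // 2 if n % 2 == 0 else 3 * n + 1
        return ("more", (nxt, count + 1))

    def trampoline(state):
        """Executor side: keep bouncing the evaluator's output back to it."""
        tag, value = step(state)
        while tag == "more":        # the only source of non-termination
            tag, value = step(value)
        return value

    print(trampoline((27, 0)))      # 111 steps for n = 27

And since each bounce crosses the evaluator/executor boundary, any optimization is confined to the individual steps.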



