If you do not care about speed, you can get results that are not arbitrary-precision (a static number of significant digits, set upfront) but exact (a dynamic number of significant digits, as large as you wish). Under the hood, your expressions will be directed acyclic graphs of lazy generators, each generator corresponding to a mathematical operation or function.
Here is a Python library that employs continued fractions to this end: [0]. You can pull successive partial quotients of the result from the root node of the dag. Whenever any node needs more precise input to deliver more precise output, it pulls a partial quotient from its argument(s).
Full disclosure: I wrote the original version of the library 13.5 years ago. Here are the slides from my talk about it: [1].
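To make the pull model concrete, here is a toy sketch in plain Python (not python-cf's actual API): a leaf generator yields the partial quotients of sqrt(2) = [1; 2, 2, 2, ...], and a downstream node pulls one quotient at a time from it, yielding a better rational approximation on each pull.

from fractions import Fraction

def sqrt2_quotients():
    # Leaf node: the continued fraction of sqrt(2) is [1; 2, 2, 2, ...].
    yield 1
    while True:
        yield 2

def convergents(quotients):
    # Downstream node: pulls one partial quotient per step from its argument
    # and yields the corresponding convergent (a rational approximation).
    h1, h0 = 1, 0   # numerators  h(n-1), h(n-2)
    k1, k0 = 0, 1   # denominators k(n-1), k(n-2)
    for a in quotients:
        h1, h0 = a * h1 + h0, h1
        k1, k0 = a * k1 + k0, k1
        yield Fraction(h1, k1)

approx = convergents(sqrt2_quotients())
for _ in range(5):
    print(next(approx))   # 1, 3/2, 7/5, 17/12, 41/29, ...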
The Android calculator app got some nice usability improvements by using a combination of rationals, as described in this story, and recursive (aka constructive or computable) reals.
A previous post on Microsoft's calculator brought my attention to an old package it uses called Ratpack (aka ratpak), another implementation of rationals:
It has some nice properties for use in a calculator, but it can be quite slow.
When I come across a library with, shall we say, an exotic numeric representation, it's nice to have a library of mathematical functions included. I see that python-cf indeed includes a solid batch.
See slides 43–60 and 72. A node emits ∞ as the partial quotient after 100 resultless iterations. The threshold is configurable. In particular, see lines 184–187 of cf.py:
# Idealistically inclined folks, who would prefer their
# computations to hang rather than to yield a heuristic result,
# can achieve this effect by setting accuracy to a negative
# value.
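To give a feel for how that cutoff works, here is a toy sketch of the behaviour described above (not cf.py's actual code; refine() is a hypothetical helper that tightens an interval around the node's value by pulling more input from its arguments):

import math

ACCURACY = 100   # illustrative default; a negative value would mean "never give up"

def next_partial_quotient(refine, lo, hi, accuracy=ACCURACY):
    # The node can emit its next partial quotient only once the integer parts
    # of its lower and upper bounds agree. If `accuracy` refinement steps pass
    # without that happening, it emits infinity, ending the expansion with a
    # heuristic (exactly rational) result.
    steps = 0
    while math.floor(lo) != math.floor(hi):
        if 0 <= accuracy <= steps:
            return math.inf
        lo, hi = refine(lo, hi)
        steps += 1
    return math.floor(lo)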
Seems reasonable, but then this is not "exact" precision as you claim, but "very large" or "dynamic" precision. Saying that the precision of a real number is exact is very misleading, as it inevitably evokes an automated theorem-proving context.
Thanks, "dynamic" is a good term. "Very large" would not describe it well: for non-contrived irrational numbers, the chance of an improper rounding to a rational is much smaller than the chance of a meteorite strike in your room within the next five minutes (slide 60).
This discussion would really go better if you followed the links. See lines 92–96 of cf.py:
# By the
# Gauss-Kuzmin theorem [5, 6], a partial quotient in the
# continued fraction expansion of almost every number exceeds
# 1e+31 with probability log2(1+1e-31) =~= 1e-31/ln(2) =~=
# 1.5e-31.
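(If anyone wants to sanity-check that figure: under the Gauss-Kuzmin limiting distribution, a partial quotient is at least K with probability log2(1 + 1/K), so for K = 1e31 a couple of lines of Python reproduce it.)

import math

K = 10 ** 31
# math.log1p keeps the tiny 1/K term from being rounded away in 1 + 1/K.
p = math.log1p(1.0 / K) / math.log(2)
print(p)   # ~1.44e-31, i.e. the "=~= 1.5e-31" quoted above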
I'm somewhat familiar with the Gauss-Kuzmin theorem. If I understand it well, it means that large partial quotients are rare in most numbers. This is fairly intuitive, and it conforms to the experience of anybody who has done practical computations with continued fractions, as you certainly have.
Notice that this result does not say that most numbers have bounded partial quotients. In fact, a classical result in Diophantine approximation theory, if I recall correctly, says that the set of numbers whose partial quotients are bounded (called badly approximable numbers) is of Lebesgue measure zero. Thus, if your system represents real numbers by sequences of bounded partial quotients, then it can only represent a negligible (but uncountable) set of real numbers. A "random" number in [0,1] will certainly have an unbounded sequence of partial quotients (i.e., with probability 1).
Aren't both sentences synonymous? The problem is that checking a condition like "x < 0" can take an arbitrarily long time, and may even be undecidable.
More of a comment on the author: I've found his other articles enlightening, useful both as introductions to functional programming ideas and as comprehensive deep dives demonstrating their intuitive nature, in code as well as in context.
A lot of the articles, in conjunction with some other resources [1][2][3], are great for diving into functional programming (I've watched and read these many times).
If you do front-end development with React, I highly recommend the following two videos [4][5] with Ryan Florence and Michael Jackson (the devs behind react-router and Remix) on the compositional nature of React components and hooks.
The last note I'll add is that once you get into functional programming, you start wanting to use `compose` or its counterpart `pipe` everywhere, and coding becomes a lot more fun.
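(For anyone who hasn't met them, here is roughly what those two helpers do, sketched in Python rather than JS; the JS versions behave the same way.)

from functools import reduce

def compose(*fns):
    # compose(f, g, h)(x) == f(g(h(x))): right-to-left application
    return lambda x: reduce(lambda acc, fn: fn(acc), reversed(fns), x)

def pipe(*fns):
    # pipe(f, g, h)(x) == h(g(f(x))): left-to-right application
    return lambda x: reduce(lambda acc, fn: fn(acc), fns, x)

slugify = pipe(str.strip, str.lower, lambda s: s.replace(" ", "-"))
print(slugify("  Hello World  "))   # hello-world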
I'll put a final plug for one of my own posts on how composition manifests in JavaScript/React code [6].
I've always been amazed by arbitrary-precision math libraries like [GMP](https://gmplib.org/); from time to time I take a look at the source code and try to understand what can be built with the functions provided. This article is a nice introduction to how it works, and the author has a great style.
Slight tangent, but I recently realised that the built-in integer types in Python and Erlang support arbitrary-precision arithmetic, while the built-in integer types in Go don't.
In Go, each integer type has a maximum value; e.g. int32 has a max value of 2,147,483,647. You can still perform arbitrary-precision arithmetic in Go, but it requires using the math/big package.
That's the case for almost all programming languages; Python and Erlang (and a few others, like Raku) are the exceptions there. Any language that's made to be at least a little bit performant (C, C++, Rust, Go, Java, C#, and many, many others) works like this.
In Python 2, int and long were separate types, where int was either a 32- or 64-bit signed integer, and long was arbitrary precision. Python 3 unified the types under the name int.
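For instance, the plain built-in int just keeps growing:

x = 2 ** 64            # already past the range of a 64-bit integer
print(x)               # 18446744073709551616
print(x * x)           # 340282366920938463463374607431768211456
print(type(x))         # <class 'int'> -- one type, no overflow, no separate long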
I think this may come down to interpreted vs. compiled. A compiled language assumes some hardware instruction that operates on primitives at some point, whereas interpreted languages use subroutines to deal with every object. So why not check whether the value is in bounds and, if not, fall back to arbitrary precision?
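Something like this toy sketch (purely illustrative; the names are made up, and real runtimes do this at the machine-word level in C):

INT64_MAX = 2 ** 63 - 1
INT64_MIN = -(2 ** 63)

def add(a, b):
    # Fast path: pretend this is a single native 64-bit add.
    result = a + b
    if INT64_MIN <= result <= INT64_MAX:
        return result
    # Out of bounds: fall back to an arbitrary-precision routine
    # (the moral equivalent of reaching for math/big in Go).
    return bignum_add(a, b)

def bignum_add(a, b):
    # Stand-in for a real bignum implementation.
    return a + b

print(add(2 ** 62, 2 ** 62))   # 2**63 doesn't fit in int64, so the slow path runs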
> // Create a new bigint of 10^n. This turns out to be a bit
> // faster than multiplying.
> function exp10(n) {
>   return BigInt(`1${[...new Array(n)].map(() => 0).join("")}`);
> }
Could we replace the 0 generation code with `'0'.repeat(n)`? Or `Array(n).fill(0).join("")` if old JS engine support is required?
I don't get it. They first show how you can make arbitrary-precision rationals. But then they continue with square/cube/etc. roots of these rationals, and then they throw the entire idea of arbitrary precision overboard.
> But alas, we don’t have tail call recursion in V8 or SpiderMonkey yet. (At least, not at the time of writing).
That really is disappointing. BTW, a JavaScript interpreter with proper tail calls is built into every Mac, but it's not on the PATH by default: https://stackoverflow.com/a/61740889/5641202
Just FYI if you want to run recursive JavaScript from the command line on macOS.
[0] https://github.com/AdamPrzybyla/python-cf
[1] https://marcinciura.files.wordpress.com/2019/10/cf-slides.pd...