Four Algorithms for Computing Square Roots

One fine summer day in 1961, when I was just ten years old, I was helping Marvin J. Hurt -- one of the workers at Lazy Mountain Children's Home -- lay vinyl tile in the recently rebuilt main building. I remember that day vividly. The main building had been destroyed in a fire the preceding December. My mother, who was only 44 years old, had died just a few months later, on June 7, 1961. My sister Betty and I spent that summer in the care of our neighbors, the Hurts, while our father was attending summer school at Whitworth College in Spokane, Washington. "Uncle" John, a.k.a. hollow-leg Johnny because of his voracious appetite, was a polymath with a particular interest in arithmetic. He was spreading adhesive on the bare concrete, about 10 or 20 square feet at a time, then fitting white, pink, and black vinyl tiles into a random assortment in the room that was eventually to become the chapel in the rebuilt main building. I was his assistant, fetching more adhesive, rags, solvent, and tiles as he called for them. We labored for hours. Eventually John called a halt. "How would you like to learn something new?" he asked. Sounds good, I replied. "This is a procedure I learned in school many years ago," he continued. "I was in high school when I learned this trick, but you're pretty good with numbers, so I bet you can understand it." And with a carpenter's pencil instead of chalk, and with the bare concrete floor for a blackboard, he proceeded to sketch out a technique for computing the square root of two to an arbitrary number of decimal places. "We start like this."
"We write in a 1 as the first digit for √2 because we know that 1
"Now we have to think a little harder. See the question mark I wrote after the two?" Yes. "Now we have to ask ourselves 'What's the largest digit we can put in there so that when we multiply 2? by ? we get a result that is less than 100?' " [Marked in red above.] "Four," I said, after a moment's thought. So he wrote that in, and proceeded to make another calculation, like this.
"So now," I said, thinking aloud, "we should write in a '1', because 281 is less than 400, but 564 is more than 400. Is that right?" "Absolutely," John replied. "You catch on quickly." And we pressed ahead.
I really don't remember how far we continued to grind out the digits of √2 that afternoon; we probably stopped after six or seven places. And we eventually went back to laying tile, so that John's beautiful illustration of the root extraction algorithm was soon erased from view. But the procedure remains indelibly etched into my memory. Years later, after I had learned more mathematics, I came to understand that this algorithm is based
on the algebraic identity (a + b)² = a² + 2ab + b², which is what justifies the "double the root so far and annex a digit" step at each stage.

* * *

Some ten years later I learned about the Newton-Raphson method for locating the zeroes of a differentiable function which, when applied to the equation x² − 2 = 0, yields the iteration x → (x + 2/x)/2: divide 2 by the current guess, then average the quotient with the guess.
Here's a simple example, to make the Newton-Raphson algorithm clearer. We will solve for √2 using the Newton-Raphson technique.
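Here is a short Python sketch of that computation (my own transcription), using exact rational arithmetic via the standard fractions module so the iterates come out as the same fractions we would grind out by hand:

```python
from fractions import Fraction

def newton_sqrt2(iterations):
    """Newton-Raphson for x**2 - 2 = 0: replace x with the average
    of x and 2/x at every step, starting from the first guess 1."""
    x = Fraction(1)
    iterates = []
    for _ in range(iterations):
        x = (x + 2 / x) / 2   # average the guess with 2 divided by the guess
        iterates.append(x)
    return iterates

for x in newton_sqrt2(4):
    print(x, "=", float(x))
```

The four iterates are 3/2, 17/12, 577/408, and 665,857/470,832.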
In case you're a little rusty with your arithmetic, the easiest way to form the average of two rational fractions a/b and c/d is (ad + bc)/(2bd). Newton-Raphson converges very quickly: 577 divided by 408 is 1.4142156..., 665,857 divided by 470,832 is 1.4142135623746899..., and √2 = 1.414213562373095048801688724210... So after just three iterations we've already got four decimal places right, and after the fourth iteration we're up to eleven-place accuracy. Not bad at all. But can we do any better?

* * *

About twenty years after graduating from college, I got enthusiastic about learning some more math. For many years I was too busy with actuarial work and with computer programming to think about pure math very much. But along about 1990 my professional life began to get less hectic, and I was able to spend a little time just playing around with numbers. I noticed some interesting relationships, as noted below, and started asking myself a few questions. Given an arbitrary integer N, for which values of N can we solve the Diophantine equation x²
− Ny² = 1? (Because of an historical accident, this particular problem is known as "Pell's Equation", after John Pell, an English mathematician of the seventeenth century.) For which values of N can the closely related Diophantine equation x² − Ny² = −1 be solved? Is there an efficient way to find such solutions?

An idea quickly popped into my head. For any non-square natural number N there is a natural number k such that k² < N < (k + 1)², so we
can be certain that |√N − k| = ω < 1. Therefore the powers of ω, such as ω², ω³, etc. must converge more or less rapidly towards 0. This suggests a third algorithm for approximating √N: compute the successive powers ω², ω⁴, ω⁸ by squaring the result again and again. Each iteration will yield an expression of the form ±a√N ± b where a and b are integers. At each iteration, the fraction b/a will approach √N more closely. If a and b have a common factor, we should eliminate it before continuing. If N is closer to (k + 1)² than it is to k², we should start with ω = |k + 1 − √N|; by minimizing the initial ω, we force the algorithm to converge more quickly. Here are some examples.
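Here is a rough Python sketch of this third algorithm (the names are mine). It carries each power of ω exactly as a·√N + b, reports the fraction approaching √N, and also reports p² − Nq² so we can see whether each pair happens to solve Pell's Equation:

```python
from math import gcd, isqrt

def omega_approximations(N, squarings):
    """Square omega = sqrt(N) - k (or (k+1) - sqrt(N), whichever is
    smaller) again and again, carried exactly as a*sqrt(N) + b.
    Returns one (p, q, p*p - N*q*q) triple per squaring: sqrt(N) is
    about p/q, and the last entry is the Pell value of { p, q }."""
    k = isqrt(N)
    if N - k * k <= (k + 1) ** 2 - N:     # N is closer to k^2
        a, b = 1, -k                      # omega = sqrt(N) - k
    else:                                 # N is closer to (k+1)^2
        a, b = -1, k + 1                  # omega = (k+1) - sqrt(N)
    rows = []
    for _ in range(squarings):
        a, b = 2 * a * b, a * a * N + b * b   # (a*sqrt(N) + b)**2
        g = gcd(abs(a), abs(b))
        a, b = a // g, b // g                 # drop any common factor
        p, q = abs(b), abs(a)                 # omega ~ 0, so sqrt(N) ~ -b/a
        rows.append((p, q, p * p - N * q * q))
    return rows

print(omega_approximations(2, 3))   # → [(3, 2, 1), (17, 12, 1), (577, 408, 1)]
```

For N = 13 the first squaring gives (29, 8, 9): a good approximation (29/8 = 3.625), but not a solution of Pell's Equation.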
For the small integers shown above, this algorithm always yields solutions of Pell's Equation. For larger integers this isn't usually the case. Here are a few illustrative examples. Some solve Pell's Equation. Some don't.
Even though the procedure illustrated above involves raising a quadratic integer to higher and higher powers -- an operation conceptually much different from division followed by the computation of an average -- the basic underlying arithmetic comes out exactly the same as the procedure dictated by the Newton-Raphson algorithm outlined above. Here are some examples, presented in the Newton-Raphson format.
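The equivalence is easy to check for √2: starting from the guess 1, the Newton-Raphson iterates are exactly the fractions hidden in the powers ω², ω⁴, ω⁸ of ω = √2 − 1. A quick sketch (exact rational arithmetic via Python's fractions module):

```python
from fractions import Fraction

# Newton-Raphson for sqrt(2), starting from 1
x = Fraction(1)
newton = []
for _ in range(3):
    x = (x + 2 / x) / 2
    newton.append(x)

# Repeated squaring of omega = sqrt(2) - 1, carried as a*sqrt(2) + b
a, b = 1, -1
squared = []
for _ in range(3):
    a, b = 2 * a * b, 2 * a * a + b * b   # (a*sqrt(2) + b)**2
    squared.append(Fraction(-b, a))       # omega ~ 0, so sqrt(2) ~ -b/a

print(newton == squared)   # → True
```

Both lists come out as 3/2, 17/12, 577/408.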
Here's another way to think about Pell's Equation. If N is not a perfect square, √N cannot be expressed as a rational number b/a. But if we can find two integers a and b such that b² − Na² = ±1, we can be certain that b/a is the rational number closest to √N (among all the rational numbers with a denominator ≤ a). And if we can find one such number pair { b, a }, we can find an infinite sequence of such pairs, because (b² + Na²)² − N(2ab)² = (b² − Na²)² = 1. So whenever the pair { b, a } gives b² − Na² = ±1, the pair { b² + Na², 2ab } solves Pell's Equation exactly.
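A small Python illustration of how one Pell pair begets another (the helper name is mine; the doubling rule is { b, a } → { b² + Na², 2ab }):

```python
def double_pair(N, b, a):
    """If b*b - N*a*a = +1 or -1, the returned pair
    { b*b + N*a*a, 2*a*b } satisfies Pell's Equation with +1
    on the right-hand side."""
    return b * b + N * a * a, 2 * a * b

# Start from the smallest solution for N = 2: 3**2 - 2 * 2**2 = 1
b, a = 3, 2
for _ in range(3):
    b, a = double_pair(2, b, a)
    print(b, a, b * b - 2 * a * a)
# prints 17 12 1, then 577 408 1, then 665857 470832 1
```

Note that the same familiar fractions for √2 appear yet again.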
In other words, each solution pair generates another, larger one, without limit.

* * *

Fast forward to the year 2000. By that time I had read just about all the math books available on the shelves of the Denver Public Library. One day, while I was contemplating the odd way regular continued fractions slice up the unit interval [0, 1], I realized there is another way to attack Pell's Equation. I saw how this procedure can be applied to develop an algorithm for computing the square root of any integer. The algorithm works like this. Suppose that N is not a perfect square. As before, let k be the greatest integer such that k² < N, and write ω = √N. We begin with a simple equation:

ω = k + (ω − k) = k + (ω² − k²)/(ω + k)

where, at the last step, we have multiplied the fraction (ω − k)/1 by (ω + k)/(ω + k). Because ω² = N, the numerator (ω² − k²) will always be a positive integer. Now
let's make the substitution (ω² − k²) = N − k² = d, just to simplify the algebra. Since we know that k < ω < k + 1, we can find the largest integer m such that md ≤ 2k, and then proceed as follows:

ω = k + d/(ω + k), and (ω + k)/d = m + (ω − (md − k))/d.

After one stage of the algorithm we have extracted the partial denominator m. Setting k₁ = md − k, the reciprocal of the new remainder is d/(ω − k₁) = (ω + k₁)/((N − k₁²)/d), where (N − k₁²)/d is again a positive integer, and we repeat the same trick. By iterating this procedure, we can grind out the successive
partial denominators one by one. At some point -- a point we cannot always determine except by plowing through the algorithm -- we will reach a
"remainder" value ^{(ω + k)}∕_{1}, which always produces a partial denominator equal to 2k and a new "remainder"
(ω − k), and we're right back at our starting point. The process reminds me of the way one finds the decimal expansion of the reciprocal of
some integer m that is relatively prime to ten via long division: after running through a longer or shorter cycle of possible remainders
(modulo m), one eventually hits a remainder of unity, and the process then repeats itself from the beginning.

I'm afraid this description is a little confusing. So let's work through a few concrete examples. And let's start with the square root of 5:

√5 = 2 + (√5 − 2) = 2 + 1/(√5 + 2) = 2 + 1/(4 + (√5 − 2)) = ...

so the expansion is 2; 4, 4, 4, .... A moment's reflection should convince you that the square root of any natural number k² + 1
will have a continued fraction expansion of the form k; 2k, 2k, .... And it is also obvious that the smallest solution of Pell's equation for such
an integer is { 2k² + 1, 2k }. So let's tackle a tougher number. Let's try to find the square root of 20:

√20 = 4 + (√20 − 4) = 4 + 4/(√20 + 4)

√20 works out with a cycle length, or period, of 2: 4; 2, 8, 2, 8, ... the same two integers { 2, 8 } repeat over and over and over
again. If you think about it, you will realize that the square root of every number of the form k² + k
will also have a canonical continued fraction representation with a period of length two, and that fraction will look like this: k; 2, 2k,
2, 2k, ... Similarly, if N = k² + 2k, the continued fraction for √N will be k;
1, 2k, 1, 2k, ... Here's an example using 35 (35 = 25 + 10 = 5² + 2 · 5):

√35 = 5 + (√35 − 5) = 5 + 10/(√35 + 5)

and the expansion works out to 5; 1, 10, 1, 10, .... OK, one more example and we'll be done. Let's take a bigger number -- √697 (which has a period of 3). I have an idea why it comes out the
way it does, but I don't know how to characterize all the numbers with a period of 3 simply or concisely. Anyway, a period of 3 is fairly short, and easy to
illustrate. Note first that 26² = 676 < 697 < 729 = 27², so the expansion begins with 26:

√697 = 26 + (√697 − 26) = 26 + 21/(√697 + 26)

and the full expansion works out to 26; 2, 2, 52, 2, 2, 52, ....

In case you're interested in investigating some canonical continued fraction expansions of integral square roots other than the 380 instances I have cataloged in tables here and also over here, I'm sharing a copy of the Visual Basic program I used to generate those tables. I ran this program (many times) under Microsoft's Excel spreadsheet software. I have also used some similar VB programs to compute the periods of the square roots of all 1,955 non-square integers less than 2,000. The longest period in that range belongs to √1726, which clocks in at 88 partial denominators. Interestingly, the square roots of integers can also be expressed using −1, instead of +1, as the constant partial numerator. I have also used a VB program to investigate some of those, up to √999. Those "negative" expansions are also periodic and palindromic, and tend to be much longer than the canonical expansions with positive partial numerators. For the first 968 non-square integers (1 - 999, omitting 31 perfect squares), the average period length is 10. OK, enough chit-chat. Here's that program.

Sub Sqrtcf()
  Dim n, p, d, w, m, k, k1, b(200) As Currency
  Rem p is the number whose square root is to be extracted
  Rem n is greatest integer < √p
  Rem w is "previous" integer in continued fraction
  Rem m is "remainder" at each step
  Rem d, k are divisors of m
  Rem k1 is next denominator in continued fraction
  Rem b (a one-dimensional matrix) stores values of k1 for printing
  Dim i, j As Long
  Rem i and j are loop counters
  On Error GoTo 300: Rem Just in case something makes us blow up
  p = 415: Rem Select integer to be processed
  i = 1: Rem Initialize index for b()
  n = Int(Sqr(p)): Rem Extract first c. f. digit
  If n ^ 2 = p Then GoTo 300: Rem Abort if p is a perfect square
  b(i) = n: Rem Store first partial denominator
  d = 1: Rem Initialize "factor" value
  w = n: Rem Initialize "previous" partial denominator
100 i = i + 1: Rem Increment loop counter
  If i > 200 Then GoTo 300: Rem Avoid matrix overflow
  m = p - w ^ 2: Rem Compute next remainder
  k = Int(m / d): Rem Calculate second factor of m
  If k * d <> m Then GoTo 300: Rem This should never ever happen!
  k1 = Int((n + w) / k): Rem Extract next partial denominator
  b(i) = k1: Rem and save it in output vector
  If (k1 = 2 * n) Then GoTo 200: Rem If we're done, go print results
  w = k * k1 - w: Rem Else reset "previous" value
  d = k: Rem and reset factor, then iterate
  GoTo 100
200 For j = 1 To i: Rem Display accumulated data
  Debug.Print p; j; b(j): Rem Processed integer, index, partial denominator
  Next j
  Exit Sub
300 Debug.Print p; i; "Error"; Erl(); Err(); Error()
End Sub
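In case VB isn't handy, here is the same arithmetic as a short Python function (my transcription of the program above, not the original); it returns the leading integer followed by one full cycle of partial denominators:

```python
from math import isqrt

def sqrt_cf(p, max_terms=200):
    """Continued fraction of sqrt(p): the leading integer followed
    by one cycle of partial denominators (the cycle ends at 2*n)."""
    n = isqrt(p)
    if n * n == p:
        raise ValueError("p is a perfect square")
    terms = [n]
    d, w = 1, n            # d is the "factor", w the previous center
    for _ in range(max_terms):
        m = p - w * w      # next remainder
        k = m // d         # this division always comes out exact
        k1 = (n + w) // k  # next partial denominator
        terms.append(k1)
        if k1 == 2 * n:    # the cycle always closes with 2n
            break
        w = k * k1 - w
        d = k
    return terms

print(sqrt_cf(697))   # → [26, 2, 2, 52]
```

Running it on the examples worked above reproduces 2; 4 for √5, 4; 2, 8 for √20, and 5; 1, 10 for √35.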