Ancient Egyptians^{1} used a nonpositional system for writing numbers. It was based on signs which each represented a power of 10, and which were repeated to specify a number:
1  10  100  1,000  10,000  100,000  1,000,000
𓏤  𓎆  𓍢  𓆼  𓂭  𓆐  𓁨
So the number 69,105, for instance, would be represented as 𓂭𓂭𓂭𓂭𓂭𓂭𓆼𓆼𓆼𓆼𓆼𓆼𓆼𓆼𓆼𓍢𓏤𓏤𓏤𓏤𓏤. This is very similar to Roman numerals, or other nonpositional systems.
The modern technique for long multiplication (the one we learn in school) relies on a positional system, such as Arabic numerals. In a positional system, instead of a different symbol for each power of the base (usually 10), thereβs a different symbol (a digit) for each number between 0 and the base, and the power of the base with which a digit is associated depends on its position within the number.
Instead, ancient Egyptians performed multiplication by repeated addition. To multiply two numbers, say 9×13, they would decompose one of the two operands into a sum of powers of two (so, in modern notation, 9=1+8=2^{0}+2^{3}), then compute the product of the other number with each of these powers of two through repeated addition (again using modern notation):

1 → 13
2 → 26
4 → 52
8 → 104
Then they would sum the lines corresponding to the base-2 decomposition of the first operand (9 = 1 + 8), yielding 13+104=117.
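The procedure can be sketched in Python; `egyptian_multiply` is a name of my choosing, for illustration:

```python
def egyptian_multiply(a: int, b: int) -> int:
    """Multiply a and b using only addition and doubling."""
    # Build the doubling table: (power of two, corresponding multiple of b).
    table = []
    power, multiple = 1, b
    while power <= a:
        table.append((power, multiple))
        power += power        # double the power of two
        multiple += multiple  # double the multiple of b
    # Sum the rows whose powers of two appear in a's decomposition,
    # taking the largest remaining power first.
    result = 0
    for power, multiple in reversed(table):
        if power <= a:
            a -= power
            result += multiple
    return result
```

For the running example, `egyptian_multiply(9, 13)` builds the table 13, 26, 52, 104 and sums the rows for 8 and 1, giving 117.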
Compared to naive repeated addition, which requires $O(n)$ additions, where $n$ is the smaller of the two operands, this method only needs $O(\log_2 n)$: $\log_2 n$ additions to compute the products with the powers of 2, and another $\log_2 n$ (at most) to sum the relevant lines.
A somewhat more sophisticated version of Egyptian multiplication is known as peasant multiplication.
Computer multiplication
Computers basically use this method to multiply numbers too. An additional trick is that they work entirely in base 2, so computing $n\times 2^k$ simply requires shifting n to the left^{2} by k bits.
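A minimal sketch of this shift-and-add scheme (the function name is mine; this models the idea, not any particular hardware):

```python
def shift_and_add_multiply(a: int, b: int) -> int:
    """Multiply non-negative integers using only bit shifts and additions."""
    assert a >= 0 and b >= 0
    result = 0
    while b:
        if b & 1:        # current bit of b is set:
            result += a  # add the correspondingly shifted copy of a
        a <<= 1          # a * 2^(k+1), ready for the next bit
        b >>= 1          # move on to the next bit of b
    return result
```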
But we can also see what computers do as a form of long multiplication (the one you learn in school) in base 2:
        1 1 0 1   → 13
×       1 0 0 1   → 9
        1 1 0 1   → 13 × 2^{0} = 13
+ 1 1 0 1 . . .   → 13 × 2^{3} = 104
  1 1 1 0 1 0 1   → 117
So Egyptian multiplication is in reality not far from being a half-base-10, half-base-2 version of long multiplication.
Generalizing
Just like multiplication is repeated addition, exponentiation is repeated multiplication, tetration is repeated exponentiation, etc. In fact, we can use the Egyptian technique to efficiently compute any associative iterated operation, not just those from the hyperoperation sequence. This section proves this generally, but feel free to skip it.
Generally, say we have a set with an associative operation ⊕ between its elements (i.e. a semigroup), and an operation ⊗ which represents repeated application of ⊕ (specifically, $a⊗n = a⊕a⊕ \cdots ⊕a$, where $a$ appears $n$ times).
In our previous example, the set was that of the natural numbers, ⊕ was addition, and ⊗ was multiplication.
The goal is to compute $a⊗n$, where $a$ is an element of our set, and $n$ is a positive integer.
The most straightforward solution is to take the definition of $a⊗n$ above, and compute the $n-1$ applications of ⊕ one by one. But we can do better!
The key idea is to notice that $a⊗(m+n) = (a⊗m) ⊕ (a⊗n)$. This is easy to verify: if we expand both sides of the equation, we end up with ⊕ combining $m+n$ copies of $a$. (This is where our associativity requirement on ⊕ comes in.)
As a particular case of this equality, we have $a⊗2^{k+1} = a⊗(2^k + 2^k) = (a⊗2^k)⊕(a⊗2^k)$. So if $n$ is a power of 2, such that $n = 2^k$ for some $k$, then we can
- compute $a⊗2 = a⊕a$,
- using this result, compute $a⊗4 = (a⊗2)⊕(a⊗2)$,
- using this result, compute $a⊗8 = (a⊗4)⊕(a⊗4)$,
- and so forth until we get to $a⊗2^k = (a⊗2^{k-1})⊕(a⊗2^{k-1})$.
(This looks a lot like Egyptian multiplication!) Obviously, this only involves $k = \log_2 n$ applications of ⊕, as opposed to $2^k-1=n-1$ if we had done it naively by applying ⊕ repeatedly.
More generally, if $n$ is any integer, we can decompose it into a sum of powers of two (equivalently, write it out in binary). Formally, we get a set of numbers $K = \{k_0 < \cdots < k_\mathrm{max}\}$ such that $n = \sum_{i\in K} 2^i$. (In other words, the elements of $K$ are the bit indices that are set to 1 in the binary representation of $n$.)
Then, using the $a⊗(m+n) = (a⊗m) ⊕ (a⊗n)$ identity again, we end up with $a⊗n = a⊗2^{k_0} ⊕ \cdots ⊕ a⊗2^{k_\mathrm{max}}$. Once again, we can compute and store all the $a⊗2^k$ values in $O(\log_2 n)$ operations, and do the final assembly in at most $\log_2 n$ operations as well.
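The whole scheme can be written once and for all over any semigroup. In this sketch (names are mine), `op` plays the role of ⊕ and `iterate(op, a, n)` computes a⊗n:

```python
from typing import Callable, TypeVar

T = TypeVar("T")

def iterate(op: Callable[[T, T], T], a: T, n: int) -> T:
    """Combine n copies of a with the associative op, in O(log n) calls to op."""
    assert n >= 1
    result = None
    square = a  # invariant: holds a iterated 2^k times at step k
    while n:
        if n & 1:  # this power of two appears in n's binary decomposition
            result = square if result is None else op(result, square)
        square = op(square, square)  # a⊗2^k -> a⊗2^(k+1)
        n >>= 1
    return result
```

With `op` as addition this recovers Egyptian multiplication (`iterate(operator.add, 13, 9)` gives 117); with `op` as multiplication it is binary exponentiation; it even works on strings, since concatenation is associative.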
Conclusion
Some modern algorithms, like binary exponentiation, are also basically this technique. This time ⊕ is multiplication, and ⊗ is exponentiation.
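For instance, modular exponentiation by repeated squaring has exactly this shape. Python's built-in `pow(a, n, mod)` already does this; the sketch below (with a hypothetical name `binary_pow`) just shows the structure:

```python
def binary_pow(a: int, n: int, mod: int) -> int:
    """Compute a**n % mod with O(log n) multiplications, by repeated squaring."""
    result = 1
    base = a % mod
    while n:
        if n & 1:  # this bit is set in n's binary representation
            result = result * base % mod
        base = base * base % mod  # square: a^(2^k) -> a^(2^(k+1))
        n >>= 1
    return result
```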
I find it cool that the key idea (even if it's not incredibly complex) has been around for some 5,000 years!
For more, I recommend Mathematical Thought from Ancient to Modern Times by Morris Kline; volume 1 covers ancient civilizations.

I'm talking about Egyptian mathematics from roughly 3,500 BC to 350 BC, at which point it was superseded by more advanced Greek mathematics. ↩

Or to the right, depending on endianness. ↩