
Happy numbers again: spoilers

If you’ve tried the Happy numbers kata you may have noticed a couple of things about the algorithm. Firstly,

13 => 3² + 1² => 10 => 1

is the same as

31 => 1² + 3² => 10 => 1.

That is, happiness and unhappiness are preserved under all permutations of the digits. So if you have calculated the status of 134, you also know the status of 143, 314, 341, 413 and 431!

And secondly, zeroes have no effect on the outcome. So 103 is just as happy as 13; and similarly 1300, 1000300000 and 30010 are all happy.
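As a concrete sketch (the function name is my own choice), a straightforward happiness test with cycle detection makes both invariances easy to check:

```python
def is_happy(n):
    """Repeatedly replace n with the sum of the squares of its digits;
    happy numbers eventually reach 1, unhappy ones fall into a cycle."""
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(d) ** 2 for d in str(n))
    return n == 1
```

With this in hand, `is_happy(13)`, `is_happy(31)` and `is_happy(1000300000)` all return True, while `is_happy(4)` returns False.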

In a sense, that means it is not meaningful to ask how many numbers can be examined in 5 seconds, say — because the answer is always infinite! So let’s redefine the kata:

Write a program to report the number of happy numbers in the range [1, n]. What is the largest n you can deal with in 5 seconds?
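A minimal brute-force baseline for the redefined kata might look like this (function names are my own; any of the optimisations below should beat it):

```python
def is_happy(n):
    # Iterate the digit-squares map until we hit 1 or revisit a value
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(d) ** 2 for d in str(n))
    return n == 1

def count_happy(n):
    # Report the number of happy numbers in the range [1, n]
    return sum(1 for i in range(1, n + 1) if is_happy(i))
```

As a sanity check, `count_happy(100)` returns 20, the well-known count of happy numbers up to 100.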

Some interesting questions now come to mind:

  • Where does the time go? Is it faster to calculate the happiness of a number than it is to create all permutations of its digits?
  • What is the best data structure for holding the results? Can a recursive or functional algorithm compete with a procedural approach?
  • What is the best way to deal with numbers containing zeroes?
  • Are the answers to these questions dependent on programming language?
  • Is this really an algorithm on the integers, or is it an algorithm on lists of digits?
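One possible answer to the data-structure question (a sketch, with names of my own choosing): memoise results under a canonical key of sorted non-zero digits, so that every permutation and every zero-padded variant of a number shares a single cache entry:

```python
def canonical(n):
    # The same key for 13, 31, 103, 30010, ...: sorted non-zero digits
    return tuple(sorted(str(n).replace('0', '')))

happy_cache = {}

def is_happy(n):
    key = canonical(n)
    if key not in happy_cache:
        seen = set()
        while n != 1 and n not in seen:
            seen.add(n)
            n = sum(int(d) ** 2 for d in str(n))
        happy_cache[key] = (n == 1)
    return happy_cache[key]
```

Note that this treats the problem as an algorithm on multisets of digits rather than on integers, which bears directly on the last question above.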

Finally, something quite bizarre. The observations above regarding zeroes and digit permutations mean that we can acquire the answer to the challenge while examining fewer integers. How many fewer?

My rough calculations suggest that we only need to examine 54 of the integers below 100 in order to have complete information about every number in the range 1-100. That's 54%. But we only need to examine 229 of the first 1000, which is 22.9%. To cover 1-10000 we only need to examine 714 numbers, and for 1-100000 we only need to examine 2002 numbers, i.e. just 2% of them! It would appear that this algorithm gets significantly faster as we move to larger and larger data sets. How much difference does this make to what your implementation can achieve?
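To check counts like these for yourself, one approach (helper names are mine) is to count the distinct canonical keys that occur below each limit:

```python
def canonical(n):
    # Sorted non-zero digits: permutations and zeroes collapse to one key
    return tuple(sorted(str(n).replace('0', '')))

def representatives(limit):
    # How many numbers in [1, limit] we actually need to examine
    return len({canonical(i) for i in range(1, limit + 1)})
```

For example, `representatives(100)` returns 54, agreeing with the figure above: 9 single-digit keys plus 45 unordered pairs of non-zero digits.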

