By 2038 I'll probably be dust or gaga, but hopefully some readers will still be around to assess his suggestion. From "So What About the 2038 Problem?":
There is another computer glitch in the works, also based on how computers count time. This one is more limited in distribution than the Y2K bug, but there is still the potential for problems. Computers count time by counting the number of seconds since an arbitrary start date, usually January 1, 1970 (called the "epoch"). Computers that use 32-bit encoding for this count can handle 2,147,483,647 seconds, which gets us to 03:14:07 UTC on Tuesday, 19 January 2038. At that point the counter will wrap around to the most negative value the encoding can hold, which will be interpreted as December 13, 1901.
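To see where those dates come from, here is a small sketch (in Python, just for illustration; the affected systems would be using a C `time_t`) of what a signed 32-bit seconds-since-epoch counter can and cannot represent:

```python
from datetime import datetime, timedelta, timezone

# Unix epoch: seconds are counted from this moment.
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# Largest value a signed 32-bit integer can hold.
INT32_MAX = 2**31 - 1  # 2,147,483,647

# The last representable moment before overflow:
last = epoch + timedelta(seconds=INT32_MAX)
print(last)  # 2038-01-19 03:14:07+00:00

# One second later, the counter wraps to the most negative
# 32-bit value (-2**31), which decodes to a date in 1901:
wrapped = epoch + timedelta(seconds=-(2**31))
print(wrapped)  # 1901-12-13 20:45:52+00:00
```

Python integers don't actually overflow, so the wrap is simulated here by plugging in the most negative 32-bit value directly; on a real 32-bit system the addition itself would silently wrap.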
The type of computer code that uses this time format tends to be embedded in devices, rather than running on desktop computers. Embedded systems are found in cars, transportation infrastructure, communication devices, and other equipment. Even though 2038 is more than 20 years away, it is possible that some of these systems will still be in use at that time.
The fix is to use 64-bit encoding. A 64-bit counter of seconds would not overflow for about 292 billion years. In fact, we could use 64-bit encoding and count milliseconds, or even microseconds, instead of seconds and still have enough range for roughly 300,000 years. This would give higher resolution to computers' time stamps. In any case, we should settle on a standard and use it. It seems to me that 300,000 years is a comfortable margin for any such technology.
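The quoted figures are easy to check with back-of-the-envelope arithmetic (a quick sketch; using the Julian year of 365.25 days as an approximation):

```python
# Approximate seconds in a Julian year (365.25 days).
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31,557,600

# Largest value a signed 64-bit integer can hold.
INT64_MAX = 2**63 - 1

# Counting whole seconds: range in years before overflow.
years_at_seconds = INT64_MAX / SECONDS_PER_YEAR
print(f"{years_at_seconds:.3e} years")  # ~2.92e11, i.e. ~292 billion years

# Counting microseconds (a million ticks per second) instead:
years_at_microseconds = INT64_MAX / 1_000_000 / SECONDS_PER_YEAR
print(f"{years_at_microseconds:,.0f} years")  # ~292,000 years
```

So even at microsecond resolution, a signed 64-bit counter comfortably covers the ~300,000-year margin the article mentions.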
An interesting article, imo. The broader point is that collectively we tend to ignore problems that seem far off, even just a couple of decades away. We will take short-term benefit in trade for long-term problems, and let our future selves, or future generations, deal with the consequences.
http://theness.com/neurologicablog/inde ... #more-9884