The late seventies are considered prehistoric times by most data scientists. Yet it was the beginning of a new era, with people getting their first personal computers, or at least programmable calculators like the one pictured below. The operating system was called DOS, and later became MS-DOS, for Microsoft Disk Operating System. You could use your TV set as a monitor, and tapes and a tape recorder (then later floppy disks) to record data. Memory was limited to 64 KB. Regarding the HP 41 model below, the advertising claimed that with some extra modules, you could write up to 2,000 lines of code and save them permanently. I indeed started my career back in high school with the very model featured below, offered as a birthday present (I even inverted matrices with it). Math teachers were afraid of these machines; I believe they were banned from schools at some point.
One of the interesting aspects of those early times was that there was no real graphics device, not for personal use anyway (publishers did have access to expensive plotting machines back then). So the trick was to produce graphs and images using only ASCII characters. Typical monitors could display 25 lines of 40 characters each, in a fixed-width font (Courier). More advanced systems would let you switch between two virtual screens, extending the length of a line to 80 characters.
Here are some of the marvels that you could produce back then - nowadays this is considered an art form. Has anyone ever made a video using just ASCII characters, like in this picture? If anything, it shows how shallow big data can be: a 1024 x 1024 image (or a video made up of hundreds of such frames) can be compressed by a factor of 2,000 or more, and yet it still conveys pretty much all the useful information available in the big, original version. This raises another question: could this technique be used for face recognition?
This is supposed to be Obama - see details
Click here for details or to download this text file (the image)!
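The downsampling behind such ASCII images is easy to sketch. Below is a minimal, hypothetical example in Python (the character ramp, the 80 x 25 target size, and the synthetic gradient image are my assumptions, not taken from the original picture): it samples the image down to terminal resolution and maps pixel brightness to characters of increasing visual density, which is essentially how these text-mode pictures were produced.

```python
# Characters ordered from dark to light; many such ramps exist.
RAMP = "@%#*+=-:. "

def to_ascii(pixels, width, height, cols=80, rows=25):
    """Downsample a grayscale image (list of 0-255 ints, row-major)
    to an ASCII-art string of cols x rows characters."""
    lines = []
    for r in range(rows):
        line = []
        for c in range(cols):
            # Nearest-neighbor sample from the source image.
            x = c * width // cols
            y = r * height // rows
            level = pixels[y * width + x]
            # Map brightness 0-255 onto the ramp index.
            line.append(RAMP[level * (len(RAMP) - 1) // 255])
        lines.append("".join(line))
    return "\n".join(lines)

# Toy input: a 1024 x 1024 horizontal gradient, dark on the left.
W = H = 1024
img = [(x * 255) // (W - 1) for y in range(H) for x in range(W)]

art = to_ascii(img, W, H)
print(art.split("\n")[0])  # first row: '@' fading to '.'

# At one byte per pixel the original is 1,048,576 bytes, while the
# 80 x 25 ASCII version is about 2,000 characters - a factor of
# roughly 500, and more once multi-byte color pixels are counted.
```

Swapping in a real photograph instead of the gradient (for instance via an image library that yields grayscale pixel values) reproduces the kind of portrait shown above.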