Concomitant with public unease about what might happen to our nation's digitally controlled infrastructure at the dawn of the new millennium was the rise of newly designed 'Y2K' insurance contracts, which firms purchased as part of the prevailing survivalist mentality that took hold as we drew closer to the dreaded stroke of midnight. The rush for Y2K insurance contracts was accompanied by citizens hoarding cash, food, medicine, and gasoline to ensure the continuance of some semblance of normal societal function if the gloomy predictions did, in fact, come to pass. Soon after the clock struck midnight and into the first few minutes of January 1, 2000, it became apparent that there would be no digital meltdown bringing about Armageddon-like consequences.

The reason the calamity was avoided was simple, and it was evident to practically any undergraduate computer science student during the countdown to the event: even though many computer applications accepted and displayed dates in the familiar "mm/dd/yy" format, computers store dates and times far more precisely than what end users see. Computers and their operating systems store date values as numbers: counts of elapsed days or seconds (plus fractions representing clock time) since some arbitrary reference date, or epoch. Mainframe operating systems commonly use midnight, December 30, 1899 as their epoch, while others use January 1, 1970 (Android-based machines), January 1, 1980 (the original IBM PC), or January 1, 1601 (Microsoft Windows).1 In the end, the format used to display date and time values on screens and in printed output was quite different from what was stored in the computer's memory: a report run at 11:59:59 PM on 12/31/99 would, one second later, display the date as "01/01/00," even though internally the value was simply one second greater than the moment before, roughly one hundred years past midnight, December 30, 1899 (with adjustments for leap years and for slight timing errors within the CPU, both beyond the scope of this paper). What mattered most, it turned out, was how the data was stored and not how it was displayed. Crisis averted.

Since that time, however, there has been explosive growth in inexpensive personal computers and related technologies: smartphones, GPS devices, ICS (Industrial Control System) components, and SCADA (Supervisory Control and Data Acquisition) systems for the remote administration of infrastructure, some of it public and some private, some critical and some not. While new computing technologies and their accompanying software have waxed, mainframe computers and their robust hardware and software architectures (which historically were a bulwark against cyber intrusion) have waned in number. New technologies that did not exist, or were still on the drawing board, when the Internet was first made available to the public (in the U.S.) during 1996 include the Android OS (Google), iOS (Apple), and Windows 7 / Windows 8 (Microsoft), all riding on the TCP/IP protocol suite created in the early 1970s that forms the backbone upon which these operating systems communicate, and their use has exploded.
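To make the storage-versus-display distinction above concrete, here is a minimal sketch in Python (not part of the original systems under discussion) using the standard datetime module and the Unix-style epoch of January 1, 1970: the internal value advances by exactly one second across the century boundary, while the two-digit "mm/dd/yy" rendering rolls over from "12/31/99" to "01/01/00".

```python
from datetime import datetime, timezone

# Two instants one second apart, straddling the Y2K rollover (UTC assumed for simplicity).
last_moment_1999 = datetime(1999, 12, 31, 23, 59, 59, tzinfo=timezone.utc)
first_moment_2000 = datetime(2000, 1, 1, 0, 0, 0, tzinfo=timezone.utc)

# Internally, Unix-style systems keep a running count of seconds since their
# epoch (January 1, 1970 UTC); the two internal values differ by exactly one second.
print(last_moment_1999.timestamp())   # 946684799.0
print(first_moment_2000.timestamp())  # 946684800.0

# The two-digit-year display a 1999-era report might have used is only a
# rendering layered on top of that internal count.
print(last_moment_1999.strftime("%m/%d/%y %H:%M:%S"))   # 12/31/99 23:59:59
print(first_moment_2000.strftime("%m/%d/%y %H:%M:%S"))  # 01/01/00 00:00:00
```

The same principle holds for systems that count from December 30, 1899 or January 1, 1601; only the epoch and the units of the count differ.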
The number of vulnerabilities has likewise exploded, both in these new technologies and in popular applications such as Adobe Reader and Flash (Adobe Systems), Microsoft Office, Firefox (Mozilla), Internet Explorer (Microsoft), Chrome (Google), Safari (Apple), iTunes (Apple), Cisco's IOS-based and Linksys networking hardware, and literally thousands of other software platforms. (Note: Interested readers can find these vulnerabilities, and the applications in which they reside, by visiting the National Vulnerability Database, sponsored by DHS/NCCIC/US-CERT and available at https://web.nvd.nist.gov.)
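As a companion to the note above, the following sketch shows one way a reader might query the National Vulnerability Database programmatically rather than through the web interface cited in the text. The endpoint, parameter names, and response layout below reflect NVD's publicly documented REST API (version 2.0) and should be treated as assumptions that may change over time.

```python
import json
import urllib.parse
import urllib.request

# Assumed NVD REST API endpoint (v2.0); distinct from the web interface at
# https://web.nvd.nist.gov cited in the text.
NVD_API_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def search_cves(keyword: str, limit: int = 5) -> list[str]:
    """Return up to `limit` CVE identifiers whose descriptions mention `keyword`."""
    params = urllib.parse.urlencode({"keywordSearch": keyword, "resultsPerPage": limit})
    with urllib.request.urlopen(f"{NVD_API_URL}?{params}") as response:
        data = json.load(response)
    # The v2.0 response nests each record under "vulnerabilities" -> "cve" -> "id".
    return [item["cve"]["id"] for item in data.get("vulnerabilities", [])]

if __name__ == "__main__":
    # Example: list a handful of published vulnerabilities mentioning Adobe Flash Player.
    for cve_id in search_cves("Adobe Flash Player"):
        print(cve_id)
```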