Computer History: Operating Systems


Computer operating systems (OS) provide a set of functions needed and used by most applications, and provide the necessary linkages to control a computer’s hardware. On the first computers, which had no operating system, every program had to include its own drivers for the display, storage devices, and other peripherals. As applications grew in number and complexity, the operating system became a necessity.

Early computers lacked any form of operating system. The user had sole use of the machine; he would arrive armed with his program and data, often on punched paper tape. The program would be loaded into the machine, and the machine set to work until the program stopped, or perhaps more likely, crashed. Programs could generally be debugged via a front panel using switches and lights; it is said that Alan Turing was a master of this on the early Manchester Mark I machine. The seminal concepts of operating systems can be traced back to his Universal Turing Machine.

Later, machines came with libraries of support code which were linked to the user’s program to assist in operations such as input and output. This would become the genesis of the modern-day operating system. However, machines still ran a single job at a time; at Cambridge University in England the job queue was at one time a washing line from which tapes were hung with clothes pegs. The color of the pegs indicated the priority of the job.

As machines became more powerful, the time needed for a run of a program diminished, and the time to hand off the equipment became very large by comparison. Accounting for and paying for machine usage went from checking the wall clock to using the computer itself to do the timing. Run queues evolved from people waiting at the door, to stacks of media waiting on a table, to using the hardware of the machine itself, such as switching which magnetic tape drive was online or stacking a job’s punch cards on top of the previous job’s cards in the reader. Operating the computer went from a task performed by the program developer to a job for full-time, dedicated machine operators.

When commercially available computer centers found they had to deal with accidental or malicious tampering with the accounting information, equipment vendors were encouraged to enhance the runtime libraries to prevent misuse of the system’s resources. Accounting practices were also expanded beyond recording CPU usage to counting pages printed, cards punched, cards read, disk storage used, and even operator actions required by jobs, such as changing magnetic tapes.

Eventually, the runtime libraries became a program that was started before the first customer job; it read in the customer job, controlled its execution, cleaned up after it, recorded its usage, and immediately went on to process the next job. Jobs also evolved from binary images produced by hand encoding to symbolic programs that were translated by the computer. An operating system, or “monitor” as it was sometimes called, permitted jobs to become multistep, with the monitor running several programs in sequence to effect the translation and subsequent run of the user’s program.
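The job cycle of such a resident monitor, reading in a job, running it, cleaning up, recording its usage, and immediately moving on to the next, can be sketched in modern terms. This is a hypothetical illustration in Python, not any vendor’s actual monitor; the job names and the dictionary layout are invented for the example:

```python
import time

def run_monitor(job_queue):
    """Hypothetical resident monitor: run each queued job in turn,
    record its usage for accounting, and survive crashed jobs."""
    usage_log = []
    for job in job_queue:                 # jobs queued on tape or cards
        start = time.time()
        try:
            job["program"]()              # hand the machine over to the job
        except Exception:
            pass                          # a crashed job must not stop the monitor
        elapsed = time.time() - start
        usage_log.append({"job": job["name"], "cpu_seconds": elapsed})
        # cleanup would happen here, so the next job starts fresh
    return usage_log

log = run_monitor([
    {"name": "payroll", "program": lambda: sum(range(1000))},
    {"name": "bad-job", "program": lambda: 1 / 0},  # crashes; monitor continues
])
```

The key property, which the early monitors shared, is that control always returns to the monitor: a misbehaving job is abandoned, its usage is still recorded, and the next job begins without operator intervention.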

The conceptual bridge between the precise definition of an operating system and the colloquial one is the tendency to bundle widely used utilities and applications (such as text editors or file managers) with the basic OS for the sake of convenience. As OSes progressed, a larger selection of this ‘second-class’ OS software came to be included, to the point that an OS without a graphical user interface or various file viewers is now often considered incomplete. To accommodate this shift in meaning, most of what was the original “operating system” is now called the “kernel”, and OS has come to mean the complete package.

The broader categories of systems and application software are discussed in the computer software article.

The mainframe: Early operating systems were very diverse, with each vendor producing one or more operating systems specific to their particular hardware. Every operating system, even from the same vendor, could have radically different models of commands, operating procedures, and such facilities as debugging aids. Typically, each time the manufacturer brought out a new machine, there would be a new operating system. This state of affairs continued until the 1960s, when IBM developed the System/360 series of machines, which all used the same instruction architecture. Because there were enormous performance differences across the range, a single operating system could not be used, and a family of operating systems was developed. (The problems encountered in the development of OS/360 are legendary, and are described by Fred Brooks in The Mythical Man-Month, a book that has become a classic of software engineering.)

OS/360 evolved successively into MFT, MVT, SVS, MVS, MVS/XA, MVS/ESA, OS/390 and z/OS, which includes UNIX support as well as a huge amount of new functionality required by modern mission-critical applications running on the zSeries mainframes. It is worth mentioning that IBM maintained full backward compatibility, so that programs developed in the sixties can still run under z/OS with no change. Although z/OS runs UNIX applications, it is a proprietary OS, as opposed to an open system.

Control Data Corporation developed the Scope operating system in the 1960s, for batch processing. In cooperation with the University of Minnesota, the KRONOS and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and timesharing use. Like many commercial timesharing systems, their interface was an extension of the Dartmouth BASIC operating systems, one of the pioneering efforts in timesharing and programming languages. In the late 1970s, Control Data and the University of Illinois developed the PLATO operating system, which used plasma panel displays and long-distance time sharing networks. Plato was remarkably innovative for its time, featuring real-time chat and multi-user graphical games.

UNIVAC, the first commercial computer manufacturer, produced a series of EXEC operating systems. Like all early main-frame systems, this was a batch-oriented system that managed magnetic drums, disks, card readers and line printers. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.

Digital Equipment Corporation developed many operating systems for its various computer lines, including the simple RT-11 system for its 16-bit PDP-11 class machines, the VMS system for the 32-bit VAX computer, and TOPS-10 and TOPS-20 time sharing systems for the 36-bit PDP-10 class systems. Prior to the widespread use of UNIX, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community.

The UNIX operating system was developed at AT&T Bell Laboratories in the 1970s. Because it was essentially free in early editions, easily obtainable, and easily modified, it achieved wide acceptance. It also became a requirement within the Bell System operating companies. Since it was written in a high-level language, UNIX could be ported whenever that language was ported to a new machine architecture. This portability permitted it to become the choice for a second generation of minicomputers and the first generation of workstations. Through widespread use it exemplified the idea of an operating system that was conceptually the same across various hardware platforms. It was still owned by AT&T, however, which limited its use to those groups or corporations who could afford to license it.

Many early operating systems were collections of utilities that allowed users to run software on their systems. Some companies, such as the early Digital Equipment Corporation, were able to develop better systems, but others never supported features that were useful on other hardware types.

In the late 1960s through the late 1970s, several hardware capabilities evolved that allowed similar or ported software to run on more than one system. Early systems had utilized microprogramming to implement features on their systems in order to permit different underlying architectures to appear the same as others in a series. In fact, most 360s after the 360/40 (except the 360/165 and 360/168) were microprogrammed implementations.

One system which evolved in this time frame was the Pick operating system. The Pick system was developed and sold by Microdata Corporation; Dick Pick created the precursors of the system with an associate, Don Nelson. It is an example of a system which started as a database application support program, graduated to system work, and still exists across a wide variety of platforms, supported on most UNIX systems as an add-on database system.

Other packages such as Oracle are middleware and contain many of the features of operating systems, but are in fact large applications supported on many hardware platforms.

As ever more hardware was packed into ever smaller packages, first came the bit-slice level of integration, and then entire systems came to be present on a single chip. These small 4- and 8-bit processors came to be known as microprocessors. Most were not microprogrammed, but were completely integrated general-purpose processors.

Home computers: Although most of the small 8-bit home computers of the 1980s, such as the Commodore 64, the Amstrad CPC, the ZX Spectrum series and others, could use a “normal” disk-loading operating system such as CP/M or GEOS, they could generally work without one. In fact, most if not all of these computers shipped with a built-in BASIC interpreter in ROM, which also served as a crude operating system, allowing minimal file management operations (such as deletion, copying, etc.) and sometimes disk formatting, along of course with application loading and execution, which sometimes required a non-trivial command sequence, as on the Commodore 64.

The fact that the majority of these machines were bought for entertainment and educational purposes and were seldom used for more “serious” or business/science oriented applications, partly explains why a “true” operating system was not necessary.

Another reason is that they were usually single-tasking, single-user machines that shipped with minimal amounts of RAM, usually between 4 and 256 kilobytes (with 64 and 128 being common figures) and 8-bit processors, so an operating system’s overhead would likely have compromised the performance of the machine without really being necessary.

Even the rare word processor and office suite applications were mostly self-contained programs which took over the machine completely, as did video games.

Finally, most of these machines didn’t even ship with a built-in floppy disk drive, which made a disk-based OS impossible, or at best a luxury option.

Since virtually all video game consoles and arcade cabinets designed and built after 1980 were true digital machines (unlike the analog PONG clones and derivatives), some of them carried a minimal form of BIOS or a built-in game, such as the ColecoVision, the Sega Master System and the SNK Neo Geo. There were, however, successful designs where a BIOS was not necessary, such as the Nintendo NES and its clones.

Modern-day game consoles, starting from the PlayStation, all have a minimal BIOS that also provides interactive utilities such as memory card management, Audio or Video CD playback and copy prevention, and sometimes carries libraries for developers to use. Few of these cases, however, would qualify as a “true” operating system.

The most notable exceptions are probably the Dreamcast game console, which includes a minimal BIOS like the PlayStation but can load the Windows CE operating system from the game disk, allowing easy porting of games from the PC world, and the Xbox game console, which is little more than a disguised Intel-based PC running a secret, modified version of Microsoft Windows in the background.

Furthermore, there are Linux versions that will run on a PlayStation or Xbox, and maybe other game consoles as well, provided they have access to a large mass storage device and a reasonable amount of RAM (the bare minimum for a GUI is around 512 kilobytes, as the Commodore Amiga and early Atari ST show; GEOS, however, ran on a stock C64 with as little as 64 kilobytes).

Long before that, Sony had released a kind of development kit called the Net Yaroze for its first PlayStation platform. It provided a series of programming and development tools to be used with a normal PC and a specially modified “Black PlayStation” that could be interfaced with a PC and download programs from it. Such operations generally require a functional OS on both platforms involved.

In general, it can be said that video game consoles and arcade coin-operated machines used at most a built-in BIOS during the 1970s, 1980s and most of the 1990s, while from the PlayStation era onward they became more and more sophisticated, to the point of requiring a generic or custom-built OS to aid development and expandability.

The personal computer era: Apple, DOS and beyond

The development of microprocessors made inexpensive computing available for the small business and hobbyist, which in turn led to the widespread use of interchangeable hardware components using a common interconnection (such as the S-100, SS-50, Apple II, ISA, and PCI buses), and an increasing need for ‘standard’ operating systems to control them. The most important of the early OSes on these machines was Digital Research’s CP/M-80 for the 8080 / 8085 / Z-80 CPUs. It was based on several Digital Equipment Corporation operating systems, mostly for the PDP-11 architecture. MS-DOS (or PC-DOS when supplied by IBM) was based originally on CP/M-80. Each of these machines had a small boot program in ROM which loaded the OS itself from disk. The BIOS on the IBM-PC class machines was an extension of this idea and has accreted more features and functions in the 20 years since the first IBM-PC was introduced in 1981.

The decreasing cost of display equipment and processors made it practical to provide graphical user interfaces for many operating systems, such as the generic X Window System that is provided with many UNIX systems, or other graphical systems such as Microsoft Windows, the RadioShack Color Computer’s OS-9 Level II, Commodore’s AmigaOS, Apple’s Mac OS, or even IBM’s OS/2. The original GUI was developed at the Xerox Palo Alto Research Center in the early ’70s (on the Alto computer system) and imitated by many vendors.

Birth of the Computer

Who invented the computer that is now part of everyday use?  To answer that, we need to go back to where it all began!

The earliest form of counting device was the tally stick, but the one we all remember from our childhood days is the abacus, first used in Babylonia around 2400 BC.

The astrolabe and the Antikythera mechanism, analog mechanical computers, were used in Ancient Greece around 150-100 BC to perform astronomical calculations.  They were followed by the planisphere of Abu Rayhan al-Biruni around AD 1000, the equatorium of Abu Ishaq Ibrahim al-Zarqali around AD 1015, and the Astronomical Clock Tower of Su Song during the Song Dynasty in AD 1090.

In 1617, John Napier, a Scottish mathematician and physicist, invented Napier’s Bones, a device similar in appearance to an abacus, which could perform multiplication and division calculations.  In the 1620s the slide rule was invented, a device allowing multiplication and division using distances and line intervals to produce the answer.  The slide rule faded from use with the invention of the pocket calculator.

Wilhelm Schickard, a German, designed the Calculating Clock in 1623, but it was destroyed by fire during construction in 1624, and the clock was never rebuilt.

In 1642 Blaise Pascal invented a mechanical calculator, duly named the Pascaline.

The Stepped Reckoner was invented by Gottfried Wilhelm von Leibniz in 1672, and came about through his use of Pascal’s Pascaline machine.

The Frenchman Joseph Marie Jacquard invented a powered loom in 1801.  It used punched wooden cards, which defined the weave’s pattern.  In today’s world of computers, those wooden cards would be the equivalent of a software program.

The Arithmometer, invented by Charles Xavier Thomas de Colmar, became the first mass-produced mechanical calculator in the 1820s, for it could add, subtract, multiply and divide.

In 1837 Charles Babbage designed the first mechanical general-purpose computer, his Analytical Engine.  The device was never finished during his lifetime, and it was left to his son Henry to complete the work in 1888 in a simplified form: the Mill.

Ada Lovelace, daughter of the poet Lord Byron, was an analyst of Babbage’s Analytical Engine, and went on to create the first computer program between 1842 and 1843.  It was her vision that computers would be capable of performing more than basic arithmetic calculations.

Towards the end of the 1880s, Herman Hollerith invented machines to read and record punched cards, the Tabulator and the Keypunch, such as the Hollerith Desk used by the U.S. to carry out the 1890 census.  On the back of their success he founded the Tabulating Machine Company, which eventually became International Business Machines (IBM).

Alan Turing is considered by many to be the father of computer science, for it was in 1936 that he formalised the concepts of algorithm and computation with the Turing Machine, the blueprint for the electronic digital computer.

Just think: when you turn on your computer, you’re actually using a design based on the brainchild of Alan Turing.

In 1947, one Howard Aiken had been commissioned by IBM to determine how many computers it would take to run the United States… His answer was six.  How wrong he was: who would have believed most homes would have at least one computer, some sixty-six years later?

In 1936 Konrad Zuse built the Z1, believed to be the first electro-mechanical binary programmable computer.

In November 1937, whilst working at Bell Labs, George Stibitz invented the Model K, a relay-based calculator which used binary circuits to perform calculations.

John Atanasoff, a physics professor from Iowa, began building the first electronic digital computer in 1937, assisted by graduate student Clifford Berry.  It wasn’t constructed as a programmable machine; its main purpose was to solve linear equations.

Konrad Zuse, who built the Z1 back in 1936, took his invention to the next stage in 1941 by building the first program-controlled electromechanical computing machine, the Z3.

Thomas Flowers joined the Post Office Research Branch in 1930, where he became Head of Switching Research. During the 1930s Flowers pioneered large-scale digital electronics, and in 1943 he designed and constructed the British computer Colossus.

Harry Fensom joined Flowers’ inner circle of engineers at the Research Branch of the British Post Office in 1942. He participated in the construction of the code breaking machine, Colossus, and was responsible for keeping it in continuous operation at Bletchley Park.

Colossus was the world’s first electronic programmable computer, built from a large number of vacuum tubes.  Even though its programmability had its limits, its main use was in breaking German wartime codes.

In 1939 development started on the Harvard Mark I, an Automatic Sequence Controlled Calculator: a general-purpose electro-mechanical computer associated with Howard Aiken and Grace Hopper, and financed by IBM.  It came into use in May 1944.

The ENIAC computer was the brainchild of John Presper Eckert and John W. Mauchly in 1946.  Its architecture required the rewiring of a plug board to change its programming.  It was capable of adding and subtracting five thousand times a second, and could also multiply, divide and take square roots.  It weighed in at thirty tons, used two hundred kilowatts of power, and contained eighteen thousand vacuum tubes, fifteen hundred relays, and hundreds of thousands of resistors, capacitors and inductors.

The Small-Scale Experimental Machine, also known as Baby, was completed in 1948 at England’s University of Manchester, based upon the stored-program architecture.  On 21 June 1948 it made its first successful run of a program, using a 32-bit word length and a memory of 32 words.

The Manchester Mark I, a more powerful machine, was built to supersede the Baby, with expanded size and power and a magnetic drum for auxiliary storage.

Later that year, Cambridge University built the Electronic Delay Storage Automatic Calculator (EDSAC), which was fitted out with a built-in program.

In October 1948 the British government requested Ferranti to build a commercial computer based on the design of the Manchester Mark I.  The Ferranti Mark I included enhancements making it more powerful and faster, and the first machine was rolled out in February 1951.

John Presper Eckert and John W. Mauchly, who designed and built ENIAC, updated their design in 1951 with the release of the UNIVAC, for use by the U.S. Census Bureau.  It used 5,200 vacuum tubes, and consumed some 125 kW of power.  Storage was by way of serial-access mercury delay lines.

In the early 1950s, Sergei Sobolev and Nikolay Brusentsov, two Soviet scientists, designed the Setun, a ternary computer that operated on a base-three numbering system (-1, 0, 1) rather than the conventional binary system.  The computer was used within the Soviet Union, but its life was short-lived, and the architecture was replaced with a binary system.
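The balanced ternary system the Setun used, digits -1, 0 and 1 instead of 0 and 1, can be illustrated with a short conversion routine.  This is a modern Python sketch of the number system itself, not anything resembling Setun’s actual software:

```python
def to_balanced_ternary(n):
    """Represent an integer as balanced ternary trits (-1, 0, 1),
    least significant trit first."""
    if n == 0:
        return [0]
    trits = []
    while n != 0:
        r = n % 3
        if r == 2:            # a digit 2 becomes -1 with a carry into the next trit
            trits.append(-1)
            n = n // 3 + 1
        else:
            trits.append(r)
            n //= 3
    return trits

def from_balanced_ternary(trits):
    """Evaluate trits (least significant first) back to an integer."""
    return sum(t * 3**i for i, t in enumerate(trits))

# 5 = 9 - 3 - 1, i.e. trits [-1, -1, 1]
assert to_balanced_ternary(5) == [-1, -1, 1]
```

One elegant property, which made the design attractive, is that negative numbers need no separate sign: negating a value just flips each trit.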

In 1952, IBM released its first Electronic Data Processing Machine and first mainframe computer, the IBM701.  Then in 1954 the IBM704 came onto the market, using magnetic core memory.  During 1955-1956, IBM developed the Fortran programming language for use with the IBM704, and released it in 1957.

In 1954, IBM produced a smaller computer, the IBM650, weighing in at 900 kg with a 1,350 kg power unit.  At the time of construction it had a drum memory unit which could hold 2,000 words, later increased to 4,000 words, with a maximum of ten digits per word.  The IBM650 used SOAP (Symbolic Optimal Assembly Program).

Microprogramming was invented by Maurice Wilkes in 1951.

Then in 1956 IBM created the disk storage unit, the IBM350 RAMAC (Random Access Method of Accounting and Control).  It used fifty 24-inch metal disks, with one hundred tracks per side, capable of storing five megabytes.

John Presper Eckert and John W. Mauchly recognized the limitations of ENIAC even before its construction was completed.  They began researching how programs and working data could be stored in the same memory, an idea that at the time would have been considered rather radical.

Equipment of the mid-1950s transmitted data by acoustic delay lines using liquid mercury or a wire.  These worked by sending acoustic pulses representing a 1 or 0, with an oscillator re-sending each pulse.  Other systems on the market at the time used cathode-ray tubes, storing and retrieving data on a phosphor screen.

Magnetic core memory, in which each core stores one bit, was created in 1954, replacing many forms of temporary storage, and would go on to dominate the market for many years to come.

The bipolar transistor, invented in 1947, went on to replace vacuum tubes from 1955.  The early versions were germanium point-contact transistors, which consumed less power, but reliability was an issue.

The University of Manchester built the first transistorized computer in 1953, and an updated version was running by 1955.  It used two hundred transistors and thirteen hundred solid-state diodes, with a power consumption of 150 watts.  The Harwell CADET, meanwhile, had no tubes at all; it had a tendency to crash every ninety minutes, but changing to more reliable bipolar junction transistors reduced the crashes.

Comparing vacuum tubes and transistors, the transistors had many advantages: they were smaller, required less power, and gave off less heat.  Transistorized computers could contain tens of thousands of binary logic circuits in a compact space.

With the creation of transistorized electronics came the Central Processing Unit, within which sat the ALU (Arithmetic Logic Unit), performing arithmetic and logic operations; it was the first of many devices which would be steadily improved.

In a sense, the new technology had opened the floodgates to improved parts for the computer: where once they would have taken up a large room, they were reduced in size until they could sit upon a table.  One invention was the data-disk storage unit, capable of storing tens of millions of letters and digits, alongside removable data disk storage units.  Input/output provided the means by which a computer exchanges information.

Telephone connections went on to provide sufficient speeds for early remote terminals like the Teletype or Telex machine.

Who would have believed, that these stand-alone computers, one day would be the basis for the Internet.

Jack Kilby and Robert Noyce independently designed the integrated circuit (microchip) in the late 1950s, which led to the invention of the microprocessor.  Then in 1964 IBM released its Solid Logic Technology modules in hybrid circuits.

Intel opened its doors in 1968, and in its early days produced semiconductor memory, going on to create DRAM and EPROM.  In 1971 Intel developed its first microprocessor, cramming an entire computer processor onto a single chip.

That chip was the Intel 4004, the first microprocessor, consisting of 2,300 transistors and clocked at 108 kHz.  Intel followed up with the 8008 and 8080 models.

The 8080 was used in the MITS Altair computer kit.  This machine attracted one Bill Gates, then a Harvard freshman, who dropped out of college to write programs for the computer.

Alan Shugart and IBM invented the floppy disk in 1971, nicknamed the “Floppy” for its flexibility.

The idea of computers co-ordinating information between one another, using telecommunication technology, had been around for years.  Then in 1973 Robert Metcalfe and Xerox created the Ethernet computer networking system.

Olivetti, a company more associated with typewriters, presented the world with its first personal computer, the P6060, in 1975.  It had a 32-character display, an 80-column thermal printer and 48 Kbytes of RAM, used the BASIC language, and weighed in at 40 kg.

In 1981 Bill Gates and Microsoft supplied the world with MS-DOS, an operating system to run the computer.  That same year IBM released their personal computer, and so the home computer revolution had started.

In 1983 Apple released the Lisa, a home computer with a graphical user interface.  In 1984 Apple released the Macintosh, a more affordable home computer with a graphical user interface.

In 1985 Bill Gates and Microsoft released a new operating system which would revolutionise the computer for decades to come: Microsoft Windows, which has been upgraded over the years.  We have now reached Windows 10.

With the 1990’s came E-mail and the World Wide Web … and computers and the Internet would change our world for ever.
