Calculating Machines: A History

“The First Computer” was not a computer at all, but a mechanical machine of levers and gears called the Analytical Engine.

The counting board is the earliest known device used to solve simple mathematical problems. Though plain in construction, it served as the primary calculating tool for over a millennium. In use by around 300 BC, it looks primitive next to the computers of today, yet it served an almost identical purpose: math. Computers as we know them have a lineage thousands of years in the making, so sit back and buckle up as you are taken on a journey of over 2,000 years.

The earliest surviving counting board, called the Salamis Tablet, was discovered on the Greek island of Salamis. Counting boards themselves may trace back to the Babylonians, and some scholars believe this particular tablet was also used to track board games. Over time, people began using lines and pebbles on a table to work through more complex calculations. Each line represented a different place value: the first line stood for the ones digit of a decimal number, the next line for the tens, and so on. So, to represent the number 32, you would place three stones on the tens line and two stones on the ones line. This is the same principle behind the abacus, which would later build the counters directly into a frame.
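
To make that place-value idea concrete, here is a minimal sketch in Python (the function name is ours, purely for illustration) that splits a number into stones per line, just as a board user would:

```python
# A minimal sketch of the counting-board idea: each "line" holds
# stones for one decimal place, so 32 becomes two stones on the
# ones line and three on the tens line.

def to_counting_board(number: int) -> list[int]:
    """Return stones per line, ones line first."""
    lines = []
    while number > 0:
        lines.append(number % 10)  # stones on this line
        number //= 10              # move up to the next line
    return lines

print(to_counting_board(32))  # [2, 3] -> 2 ones, 3 tens
```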

The hand abacus was a Roman device designed with the counting board in mind. In its heyday it was used to make many groundbreaking calculations, as it added functionality the counting board was not capable of. With all of its pieces built into a single frame, it was also far easier to carry. The abacus remained in use for hundreds of years, but as time passed the technology of the day kept advancing. In 1623 came Wilhelm Schickard's Calculating Clock, often considered the world's first mechanical calculator, and the slide rule, invented in the 1600s, reached its familiar modern form in the mid-1800s. Each of these devices built on its forebears with greatness and innovation in mind. Did the Babylonians know they were designing a device so simple, yet so monumental? I think not. Like previous generations, the designers of these inventions were limited by the technology of their time, but they built and expanded upon what they were given to reach even greater heights.

In the mid-1800s there were remarkable breakthroughs in technology, paving the way for the experience we have today. Charles Babbage designed what many people refer to as “The First Computer.” It was not a computer at all, but a mechanical machine of levers and gears called the Analytical Engine. The device held numbers in its ‘Store,’ while the ‘Mill’ used internal mechanisms to take in that information and run arithmetic, allowing for addition, subtraction, multiplication, and division. This design sparked a drive in many other people to try to do better and create machines with similar capabilities.
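
To make the Store/Mill division of labor concrete, here is a toy model in Python; the class and method names are our own assumptions for illustration, not Babbage's actual notation:

```python
# A toy model of the Analytical Engine's layout: the Store holds
# numbers in columns, and the Mill fetches operands and runs one
# of the four arithmetic operations on them.

class Store:
    def __init__(self, size: int):
        self.columns = [0] * size  # each column holds one number

class Mill:
    def __init__(self, store: Store):
        self.store = store

    def operate(self, op: str, a: int, b: int, out: int) -> None:
        # Fetch two operands from the Store, compute, write back.
        x, y = self.store.columns[a], self.store.columns[b]
        ops = {"add": lambda: x + y, "sub": lambda: x - y,
               "mul": lambda: x * y, "div": lambda: x // y}
        self.store.columns[out] = ops[op]()

store = Store(size=8)
store.columns[0], store.columns[1] = 6, 7
Mill(store).operate("mul", 0, 1, out=2)
print(store.columns[2])  # 42
```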

Alan Turing laid the theoretical groundwork for the digital computer with his concept of the Turing Machine, a hypothetical device said to be able to complete any mathematical computation given the correct algorithm. This was the first step toward the computers we know and rely on today. Turing's work took place in the early 20th century and helped inspire the construction of ENIAC by J. Presper Eckert and John Mauchly. That computer functioned by having vacuum tubes switch on and off, each one acting as a binary switch to represent numbers. The machine was massive, weighing about 30 tons, containing an astonishing 18,000 vacuum tubes, and taking up around 1,800 square feet, yet it had less computing power than an average modern watch. Construction on ENIAC began to help the war effort in WWII, but by the time it was finished the war was over, raising the question: was the roughly $400,000 it cost a waste? ENIAC, often considered the first general-purpose programmable digital computer, was a major accomplishment and laid the foundation for how computers would function for the next 80 years.
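
To give a feel for how such a machine computes, here is a minimal Turing machine simulator in Python; the rule table shown, which adds 1 to a binary number, is our own illustrative example, not one of Turing's:

```python
# A minimal Turing machine: a tape, a read/write head, and a table
# of (state, symbol) -> (write, move, next_state) rules. This sketch
# only ever grows the tape to the left, which suffices here.

def run(tape: list[str], rules: dict, state: str = "start") -> str:
    pos = len(tape) - 1                    # start at the rightmost bit
    while state != "halt":
        symbol = tape[pos] if 0 <= pos < len(tape) else "_"
        write, move, state = rules[(state, symbol)]
        if pos < 0:                        # grow the tape on the left
            tape.insert(0, "_"); pos = 0
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape).strip("_")

increment = {
    ("start", "1"): ("0", "L", "start"),   # carry propagates left
    ("start", "0"): ("1", "L", "halt"),    # absorb the carry
    ("start", "_"): ("1", "L", "halt"),    # number grows a digit
}

print(run(list("1011"), increment))  # 1100  (11 + 1 = 12 in binary)
```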

Flash forward five years, and the Universal Automatic Computer (also called the UNIVAC) was released. This was the first commercially produced computer in the United States, and it was more powerful than ENIAC while occupying far less space: small enough to fit in an office, and similar in size to a desk. The UNIVAC also took input differently from its predecessor. ENIAC used plugboards as its input devices, making updates to the machine a slow, precarious process with no simple way to feed in data. The UNIVAC, on the other hand, had an onboard console typewriter, allowing data to be entered far more easily.

One of the biggest advances in computer technology to date was the creation of the transistor. Invented in the late 1940s, it was an electrical switch that could be turned on and off by electrical signals. Once fully developed, transistors were quieter, more reliable, more energy efficient, and smaller than vacuum tubes, all of which made them a viable replacement in computer construction. The first transistors were, in fact, about the size of the palm of your hand, as large as the vacuum tubes used in ENIAC and the UNIVAC. As time went on, though, transistor technology only got smaller, whereas vacuum tubes were limited in how small they could get. Transistors were a truly ground-breaking invention that we still use today, and they would catapult society even further into the age of computers.
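
To sketch why a simple on/off switch is enough to compute, the toy Python below wires NAND "switches" into familiar logic gates and a one-bit adder; the function names are illustrative, not a real circuit library:

```python
# Transistor-like switches compose into logic gates, and gates
# compose into arithmetic.

def nand(a: int, b: int) -> int:
    """Two switches in series: output drops to 0 only when both
    inputs are on. NAND alone can build every other gate."""
    return 0 if (a and b) else 1

def not_(a: int) -> int:         return nand(a, a)
def and_(a: int, b: int) -> int: return not_(nand(a, b))
def or_(a: int, b: int) -> int:  return nand(not_(a), not_(b))

def xor(a: int, b: int) -> int:
    n = nand(a, b)                       # classic 4-NAND XOR
    return nand(nand(a, n), nand(b, n))

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two one-bit numbers: (sum bit, carry bit)."""
    return xor(a, b), and_(a, b)

print(half_adder(1, 1))  # (0, 1) -> binary 10
```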

The next large advancement was the invention of the integrated circuit by Jack Kilby in 1958. Commonly known now as the microchip, this small device made it possible to build electrical components, such as transistors, much smaller than previously thought possible. The inspiration came while Kilby was studying Micro-Modules, a scheme in which electrical components were made to slot into standardized positions instead of being wired by hand. While effective, this did nothing to simplify the circuits of existing machines. Kilby took the problem into his own hands and used germanium to design the first integrated circuit. It worked by forming all of the desired components from the same piece of material, allowing the electrical circuit to flow through the entire microchip, where previously each component had been yet another degree of separation.

Over the next 30 years computer science advanced at a staggering pace: memory chips, the mouse and keyboard, programming languages, and huge developments all leading to Apple releasing one of the first graphical operating systems for a personal computer. That operating system flopped, but it was still the driving force behind Microsoft releasing its own just two years later. Windows 1.0, as it was called, was a huge success because it was extremely user-friendly, allowing the personal computer to reach the general populace more easily than ever before. It was built on top of MS-DOS, a command-line operating system that is significantly harder to use than a GUI. Being built on MS-DOS made Windows easy to maintain, since its internal structure had been developed over the previous years and only needed to be adapted to an interface. Apple's first graphical OS had no such established foundation, which was probably one of the biggest reasons it failed, and this is what got Microsoft's foot in the door to become one of the largest software companies the world has ever seen.

Another device that computer science had a positive effect on was the telephone. Initially, to communicate long distance you needed a wired connection and phone providers acting as the medium through which you were connected, but cellphones, invented in the early 70s, would go on to displace that system. Thirty years later, cellphones had gone from bulky, briefcase-sized devices to ones that fit in our pockets with more than a thousand times the computing power of ENIAC. This was only a foretaste of what was to come, because just a few years later Apple released the iPhone, the first smartphone to reach a truly mass market. It was also one of the first mobile devices to offer full access to the internet, sweeping the world off its feet with the influx of social media and the potential of apps.

As time went on, mankind was never fully satisfied by the newest developments and always strove to create the next best thing. Apple eventually shipped a fully stable OS of its own, and Microsoft released newer versions of Windows, putting the two in direct competition. Competition is good, though; it feeds innovation as two competing entities try to outdo each other, and that fact is one of the reasons we have the technology we have today. War stimulated the growth of computer technology in the 1940s, and competition did the same in the late 20th century. In the modern age, computer technology is exponential and cumulative, and there are always new players trying to claim a piece of grandeur in the quest for the perfect computer. The best part about new technology is that it can be used to develop even better versions of itself: computers are now used to design better, more robust components, and that is how it has been for many years. As time goes on, new tech will be released, and the future will hold a never-ending influx of amazing devices to use and learn about. The counting board really did start it all. We have gone from counting on a table, with a single calculation every few seconds, to devices that fit in our hands and run millions of calculations in the blink of an eye. We truly are living in the future.

- Crimson Wheeler, Service Technician
