AI, Technology / By Gal Ben-Tovim / July 11, 2019 / AI, OPERATING SYSTEMS

Over time, operating systems have developed to become more user-friendly and accessible. This is because computers have advanced to the point where they play an integral part in our lives and have shifted the paradigm for almost everything we do.

Here are some of the reasons why that transition happened.


The first programming language, Plankalkül, was designed in the early 1940s by Konrad Zuse, the creator of the first computer, the Z3. Later, in the 50s, other languages like Autocode, COBOL, FLOW-MATIC, and LISP were created. Two of these languages are still being used today, in systems at NASA and behind credit cards and ATMs.

Through the 70s and 80s, a number of new languages were created. As the hardware advanced, so did the languages.

  • Pascal was a programming language used to write the operating system of the Apple Lisa.
  • C, which grew out of an earlier language called B, was created as a low-level language for UNIX. It is the base for the whole C family, including C++ and C#.
  • C++ and Objective-C were created later, in the 80s.

The challenge with the above-mentioned languages is that they were very complex and not very user-friendly.

A pinnacle moment for Microsoft came when it created Visual Basic in the early 90s. This placed the company ahead of its competitors, as the language was very easy to understand and code with compared to the other programming languages of the time. Even kids could code with Visual Basic.


When Bell Labs started working on the origins of UNIX back in the late 60s, they had no intention of making the first computers user-friendly. UNIX was essentially a bunch of code on a screen: a command-line user interface. Only in 1985 did it gain a text-based user interface.

After the launch of Apple’s series of home computers, the Apple II, the everyday Joe got to experience the technological marvel of the computer for the first time.

Shortly after the release of the Apple II, other home computer competitors launched their own renditions. Two examples are Atari with the 400/800 and Commodore with the VIC-20. These computers ran their own custom OSs. They had basic features and relied on the user understanding bits of code in order to run a program and control the interface. This style is regarded as a Text User Interface (TUI).

In the 90s, 16-bit and 32-bit processors allowed for the Graphical User Interface (GUI). This was the next level of operating systems. A GUI allowed for icons and images, as well as shortcuts to programs and processes, which drastically simplified ease of use.

To this day we still witness advances in graphics that render cleaner and more visually appealing user interfaces.

Microsoft used this to its advantage as a strategic move to slowly dominate the market: by having developers learn a language of Microsoft's own making, it gained control over the developer world.


A computer or device is structured on the principle of input and output (I/O).

The computer hardware, indicated in the table below, is regarded as either Input or Output.

The computer handles every action in these steps:

Input >> Processor >> Output

where Input = I, Processor = P, and Output = O.

The same principle exists for the interface.

The more intelligent the inputs, the higher the level of AI they require.

The first computers only had a keyboard, which is an example of a lower level of input: for every keyboard key, there is a corresponding visual output from the interface.

This would be considered a text (T) input with a text (T) output, for instance: I/T > P > O/T
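The I/T > P > O/T idea can be sketched in a few lines of code. This is a minimal, illustrative model, not any real OS API: each key press (input) passes through a processing step and yields exactly one piece of text on screen (output). The key names and mapping are assumptions made for the example.

```python
def process(key: str) -> str:
    """The 'P' stage: translate a key press into its on-screen glyph.
    Special keys map to control characters; ordinary keys echo as-is."""
    return {"ENTER": "\n", "SPACE": " "}.get(key, key)

def text_pipeline(keys: list[str]) -> str:
    """Run each key press (I/T) through the processor (P) to the screen (O/T)."""
    return "".join(process(k) for k in keys)

# Typing "h", "i", then Enter produces the text that appears on screen.
print(text_pipeline(["h", "i", "ENTER"]))
```

The point of the sketch is the one-to-one mapping: a low-level text input needs almost no processing between the key and the glyph.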

A mouse is a higher level of input for an interface. Invented in 1968, it shipped with the first Macintosh. The computer needs to register the movement you make with the physical mouse (a combination of up, down, left, and right), process that movement, and translate it to the digital interface, producing the output you see on the screen.

The more you move the mouse the more the computer has to process.

The higher the level of AI the more processing power the computer/device requires.

This would be considered a 2D input and 2D output where: I/2D > P > O/2D
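A minimal sketch of the 2D mouse pipeline, under the same illustrative model as before: the mouse reports movement deltas (combinations of up, down, left, and right), the processor accumulates them into a cursor position, and the output is where the cursor appears on a 2D screen. The screen size and function names are assumptions for the example.

```python
WIDTH, HEIGHT = 1920, 1080  # assumed screen resolution

def process(position: tuple, delta: tuple) -> tuple:
    """The 'P' stage: apply one mouse movement delta (I/2D) and clamp
    the resulting cursor position (O/2D) to the screen bounds."""
    x = min(max(position[0] + delta[0], 0), WIDTH - 1)
    y = min(max(position[1] + delta[1], 0), HEIGHT - 1)
    return (x, y)

cursor = (0, 0)
for delta in [(100, 50), (30, -80), (-500, 10)]:  # raw mouse movements
    cursor = process(cursor, delta)

print(cursor)  # final on-screen cursor position
```

Unlike the keyboard case, every movement requires processing (accumulation and bounds checking), which illustrates why higher-level inputs demand more processing power.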
So far, operating systems have only been established to handle interfaces up to 2D. Inputs and outputs for 3D and Augmented Reality (AR) already exist, but devices do not yet have the processing power available to build these types of operating systems around them.


There is a lot to expect in the future of operating systems.

*Spoiler alert*

In the new Spider-Man movie, Far From Home, Peter Parker inherits Tony Stark’s sunglasses. These sunglasses run an operating system called E.D.I.T.H. (“Even Dead, I’m The Hero”). It drives augmented-reality sunglasses that give detailed information about individuals, their surroundings, and their background. It can also operate Tony Stark’s technology and even order a drone strike. Although the drone strike is a little over the top, technology like Google Glass, which in fact has an augmented display, already exists.

Another movie example where we see advanced technology is ‘Her’, starring Joaquin Phoenix and Scarlett Johansson. In the movie, Theodore, played by Joaquin Phoenix, has an earpiece that gives him instructions on how to approach a woman. The technology comforts him in the form of a human voice, provided by Scarlett Johansson.

Technology like this could provide great benefits for society. For example, it could add an additional layer of support and guidance for blind individuals.

As of now these technologies seem a little far-fetched, but they have already been predicted, and we could find them playing a major part in our lives within the next 20 years.

As we dramatically increase the accessibility and user-friendliness of operating systems, these are only some of the breakthroughs we can expect to see in the future.
