
A brief history of the cloud

28.09.2021 | 18:57
Mogee

From mainframe to apps


Today, mobile applications are so commonplace that we've almost forgotten the days when mainframe computers like the IBM 704 were still among the absolute novelties in computing. That was in 1954: the first mainframe to be produced in small numbers, the IBM 704 mastered floating-point arithmetic, a finite-precision arithmetic developed for numerical calculations on binary computers. In the span of just one human lifetime, the tape-drive mainframe evolved into modern cloud computing. But how exactly did we get here, and what does the future look like?

 

The first mainframe for the mass market

 

The first mainframe system for the mass market was an important milestone for software applications. When IBM introduced the System/360 (S/360) in 1964, the company laid the foundation for porting software to later upgrade versions: applications no longer had to be completely rewritten for each new machine. From then on, mainframes were designed as powerful central computer systems that many users could access simultaneously via terminals, each consisting of a keyboard and monitor, sharing the machine's computing time among them.

 

With the arrival of microprocessor-based home computers in 1977, the era of the personal computer (PC) began. Three microcomputers for the mass market initially fought for first place: the Apple II, the Commodore PET 2001 and the TRS-80 from Tandy. But the market didn't really take off until 1981, when IBM introduced its 5150 PC model. Computing power shifted from shared mainframes, as central processing systems, to distributed processing with workstations and PCs. The advent of PCs also changed software development: multiple operating systems and different high-level languages now began to compete for market share. In the late 1980s, the client-server model also became popular in companies, as it allowed better resource planning and distributed work intelligently across different PCs. This had a further effect on software design: clients forwarded workloads to a central server, where part of the processing was done, requiring the server and client applications to communicate with each other.
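The client-server split described above can be sketched in a few lines. This is a minimal illustration, not any historical system: all names are invented, and the "workload" is just a string that the client forwards to the server for central processing.

```python
# Minimal sketch of the client-server pattern: the server performs the
# processing centrally, and the client forwards its workload to it.
import socket
import threading

def run_server(host="127.0.0.1"):
    """Start a toy server that upper-cases whatever one client sends."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))          # port 0: let the OS pick a free port
    srv.listen(1)

    def handle():
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)      # workload arrives from the client
            conn.sendall(data.upper())  # central processing on the server
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return srv.getsockname()[1]         # actual port chosen by the OS

def client_request(port, payload):
    """Client side: forward a workload and wait for the result."""
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(payload.encode())
        return cli.recv(1024).decode()

port = run_server()
print(client_request(port, "hello mainframe"))  # -> HELLO MAINFRAME
```

The essential point is the division of labor: the client holds the user-facing side, while the server owns the shared computation, which is exactly the communication requirement the paragraph describes.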


On-premise versus hosting


With the increasing popularity of the Internet, application service providers (ASPs) appeared on the scene in the late 1990s and early 2000s. ASPs offer applications for information exchange via a public or private data network and take care of the entire administration (data backup, patching, etc.). Software was thus run centrally in a server environment and accessed via HTTP, so many users could be served simultaneously.

The next evolutionary step was the Software-as-a-Service (SaaS) model: multiple tenants could use the same resources simultaneously and share their costs. For app vendors, this presented a series of decisions that affected how applications were best made available, such as where to host them: with a hosting provider or "on premise"? "On premise," that is, buying software and installing it on your own servers on site, ties up computing power, memory and storage resources. As a result, vendors had to constantly consider whether their infrastructure could keep pace with the demand for their products. They also had to think about all aspects of their web services, especially security, data protection and redundancy.

 

One advantage is that on-premise software is easier to customize to a company's own, often highly specific, requirements. Large companies and corporations in particular attach great importance to keeping all data under their control and integrating software comprehensively into their existing IT landscape, linking it directly with other software. However, this requires the appropriate technical know-how, as monitoring, security, maintenance and backups have to be carried out by the company itself. For smaller companies, this is often a challenge due to high software license costs and the risk of their own server environment failing.

 

Hosting or Software-as-a-Service, on the other hand, enables companies to subscribe to software and data services from an external provider. This reduces the financial and technical outlay at the outset: monthly usage fees replace high acquisition costs for their own servers and for software licenses, and maintenance, monitoring and backup tasks become the responsibility of the hosting provider. Smaller companies in particular, which lack the necessary technical know-how and financial resources, benefit from the lower setup costs and flexible use. Different subscription models offer different scopes of service in terms of storage space and functionality, which can be reduced or expanded as needed. Beyond the lower effort and low-risk deployment, the ability to scale quickly and to integrate SaaS solutions into existing systems without clashing with other software is also important.


The cloud: from generation one to cloud-native

 

Amazon launched its Web Services (AWS) in 2006 to offer developers an on-demand IT infrastructure, aimed primarily at enterprises rather than end users. With services such as the Simple Storage Service (Amazon S3), the company founded by Jeff Bezos reached another milestone: virtualizing computing power and storage. What would later be called cloud computing abstracted the physical world away from the virtual one. Applications could seamlessly allocate and scale resources without the need for dedicated servers; in effect, a pool of connected resources behind intelligent load balancers. A wide range of managed services also allowed resources in use to be scaled as needed, and security and redundancy features to be used on an as-needed basis.

 

To take full advantage of cloud architectures, enterprises rethought their application development and deployment models as the cloud evolved. First-generation cloud providers, such as AWS, Microsoft Azure, Google Cloud Platform and others, primarily supported "lift-and-shift" approaches: applications and their associated data were moved to a cloud platform without changing the application design. On-premise apps and infrastructure were moved to an "unmanaged" standard platform to save on hardware and IT overhead costs. The "cloud-native" approach evolved from this. It takes advantage of the cloud architecture itself, combined with microservice architectures and container technologies, and thus provides the options needed to fully exploit the potential of application architectures in a cloud-based world.

 

Cloud-native app strategies have sparked a revolution in IT, but require a new mindset to make the most of the power of the cloud.

 

Three pillars of cloud-native apps

 

The common definition of a cloud-native app rests on three main pillars: first, the app is packaged in a container technology such as Docker. Second, it consists of a set of microservices that provide data and logic as discrete components. And third, it is dynamically managed by a supervisor that orchestrates and brings together processes for functionality and scaling. Combined, these three pillars solve a whole range of problems for IT organizations and for software and app developers, including ease of scaling, high application security and distributed structures.
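The third pillar, dynamic management by a supervisor, can be illustrated in miniature. The sketch below is a toy, not a real orchestrator: it mimics with threads what Kubernetes and similar supervisors do with containers, namely a control loop that reconciles the actual number of running instances against a desired count. All names here are invented for illustration.

```python
# Toy illustration of orchestration: a supervisor keeps a set of
# "service" workers alive and scales their count to a desired state.
import threading
import time

class ToySupervisor:
    def __init__(self, service, replicas):
        self.service = service      # the (stand-in) microservice entry point
        self.replicas = replicas    # desired number of running instances
        self.workers = []

    def reconcile(self):
        """Compare desired state with actual state and fix the difference,
        the same control loop an orchestrator runs for containers."""
        self.workers = [w for w in self.workers if w.is_alive()]
        while len(self.workers) < self.replicas:
            w = threading.Thread(target=self.service, daemon=True)
            w.start()
            self.workers.append(w)
        return len(self.workers)

def service():
    time.sleep(0.05)   # a stand-in for real request handling

sup = ToySupervisor(service, replicas=3)
print(sup.reconcile())   # -> 3 (scaled up from zero)
time.sleep(0.1)          # workers "crash" (finish)
print(sup.reconcile())   # -> 3 (restarted to match the desired state)
```

Declaring a desired state and letting a control loop converge toward it, rather than starting processes by hand, is the design choice that makes scaling and self-healing almost free in cloud-native systems.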

 

Symbiosis of infrastructure and application design

 

To make the most of these pillars, application designs are needed that are well adapted both to the use cases and to the cloud infrastructure provided. A symbiosis of infrastructure and application design can reduce complexity, reduce dependencies on specialists, eliminate silos and promote collaboration, giving companies and developers new freedom: freedom they can use to focus their creative energy on developing new, innovative software and cloud-native apps.