A journey from the mainframe to the cloud
This is the story of one character's professional journey, as fictional as it is real: from her early days in Madrid wiring landline telephony to her encounter with IBM's first mainframes and their evolution into the cloud.

______

I first arrived in Madrid in 1955 and lived near Amaniel Street. For a variety of reasons I ended up laying cables for Telefónica, work very different from what I had done in my hometown. This physical job paid 60 pesetas a month. It was my first contact with a telecommunications company.

I laid cables for years... then, in the mid-1960s, I saw an IBM computer for the first time: the S/360 model, announced in 1964. It was the era of the "mainframe", when giant computers occupied entire rooms. The earliest of those machines, often seen in archival photos, used vacuum tubes, covered about 140 square meters and weighed some 30 tons!

Photo: NASA.

The IBM S/360 revolutionized the industry by separating hardware from software, allowing programs to run on different machines with the same architecture.

The IBM S/360 revolution

What was so special about the S/360? This model became a milestone in the history of computing: it was the first family of computers to decouple software from the specific machine it ran on. It also represented my first contact with computing. A manifesto that appeared years later summed up that discovery well: "Today I have made a discovery, I have found a computer. Wait, this is incredible."

Until then, programs had to be written specifically for each machine. By separating hardware from software, a program could run on any other machine that shared the same architecture. I switched from running telephone cables to data cables. During the 1960s, transistors replaced vacuum tubes, significantly reducing the size and weight of computers.
In the 1970s, integrated circuits ("chips"), developed by Texas Instruments, increased computation and processing speed and further shrank component sizes, making computers faster and more compact.

The evolution of data processing centers

During the 1980s I had the opportunity to see what we can consider the first data processing centers (DPCs). Servers started to become more compact and manageable, as did data storage cabinets. At that time it became common to "rack" servers, i.e. to mount them in large cabinets (racks) that organize them vertically. This let us house several pieces of equipment in the same space and optimize the use of the available floor area in a room of the building. To give you an idea, the storage cabinets, made up of hundreds of hard disks grouped together, weighed up to 1,000 kilos. Today, with SSD/NVMe disks, it is common for cabinets to weigh no more than 500 kilograms. Whereas a server used to weigh between 50 and 60 kilograms, today it rarely exceeds 35 kilograms.

In 1980 I first worked on a project to integrate data networks with data centers, which let me see firsthand how information technology was beginning to transform the business world. In this new phase, servers are interconnected with storage and the network, allowing remote access from workstations over local networks.

Here is a picture of those server cabinets. I'll be honest with you: it is usually not that tidy. As we would say today, it is a photo staged for Instagram.

IBM server rack. Source: left, right.

A small diagram of a traditional architecture: Source.

The advent of virtualization

In the early 2000s a new concept called "virtualization" emerged. It was said to be a revolution, and it was.
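The core idea, several isolated servers sharing the resources of one physical machine, can be sketched with a toy resource model. This is purely illustrative: the class, names and capacity figures below are assumptions for the sketch, not any real hypervisor API.

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalServer:
    """Toy model of a physical host whose CPU and RAM are carved into VMs."""
    cpu_cores: int
    ram_gb: int
    vms: list = field(default_factory=list)

    def allocate_vm(self, name: str, cores: int, ram_gb: int) -> bool:
        """Place a virtual machine on this host if capacity allows."""
        used_cores = sum(vm["cores"] for vm in self.vms)
        used_ram = sum(vm["ram_gb"] for vm in self.vms)
        if used_cores + cores <= self.cpu_cores and used_ram + ram_gb <= self.ram_gb:
            self.vms.append({"name": name, "cores": cores, "ram_gb": ram_gb})
            return True
        # Host is full; in a real DPC a scheduler would pick another host.
        return False

# Two workloads that once needed two dedicated physical servers
# now share a single host (illustrative sizes).
host = PhysicalServer(cpu_cores=32, ram_gb=128)
host.allocate_vm("web-server", cores=8, ram_gb=32)
host.allocate_vm("database", cores=16, ram_gb=64)
print(f"{len(host.vms)} virtual servers running on one physical host")
```

The point of the sketch is simply the consolidation: each VM is encapsulated with its own slice of CPU and memory, and the host refuses allocations that would exceed its physical capacity.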
When I first saw what virtualization was capable of, I was surprised: it allowed us to run several servers, each with different applications, inside a single physical server. It was a breakthrough, although IBM had already pioneered the idea in the mid-1960s with CP/CMS, which included the first hypervisor capable of creating virtual machines.

Before that, in the traditional DPCs I knew at Telefónica, installing servers in racks and cabling them was physically laborious, and each functionality required its own dedicated physical server. Now everything operates in one place, which has allowed a significant reduction in both hardware and physical effort. For companies this meant a significant cost reduction; for us technicians, it was a step up in level. Virtualization simplified the complexity of architectures and infrastructures, greatly improving project delivery times to customers.

Below is a diagram illustrating the evolution from a traditional architecture to one that incorporates large-scale virtualization. Source.

Towards software-defined data centers

Can we already consider this as Cloud? Not yet; there is still some way to go. Virtualized data centers had to evolve towards software-defined data centers (SDDC). The SDDC concept emerged well into the 2010s, with a first mention in 2012. More than a DPC architecture, the SDDC is a "philosophy", an idea, a concept. In an SDDC the fundamental pillars of the architecture are virtualized, including hardware, network and storage, with a management layer sitting on top. Each of these components brings specific benefits to the infrastructure, which can be summarized as follows:

- Virtualization: the first step toward virtual data centers. CPU and memory are decoupled from the physical hardware, allowing applications and operating systems to run encapsulated in isolated virtual machines.
- Network: the network can also be virtualized, in what is known as SDN (Software-Defined Networking). This decouples network resources from the underlying hardware and makes it easier to emulate network components without having to own them physically.
- Storage: storage virtualization and SDS (Software-Defined Storage) bring savings, simplification, efficiency and greater user control.

Source.

The transformation to the Cloud

And this is what we know as the "Cloud": large physical data processing center (DPC) facilities housing software-defined data centers (SDDC), with hundreds of servers mounted in racks (the cabinets we saw earlier) for virtualization, and storage arrays combined with Software-Defined Storage (SDS) for greater capacity. All of this is tied together by both virtual (SDN) and physical network infrastructure. A Cloud service allows users to connect to their data or servers in data centers via the Internet. Cloud computing has ushered in a new era: technology that once occupied entire rooms is now available to anyone with an Internet connection.

The evolution to AI in data centers

Towards the end of the last decade, around 2018 or 2019, a new concept emerged that introduces Artificial Intelligence (AI): people began to speak of Self-Driving SDDCs, or self-managed data centers. The aim of this evolution is to reduce human intervention in the day-to-day operation of these centers; instead, the tools around the DPC take charge of managing the platform and addressing any problems that arise. The chapter on the integration of AI with data centers and the cloud will be very relevant, although at almost 94 years of age, I may not see its conclusion.

The persistence of the mainframe in the cloud era

Finally, a thought: is the mainframe obsolete? Not completely, since not everything is in the "cloud".
As one major IT company puts it, "with the rise of cloud computing some may think of the mainframe as a technological dinosaur." The reality is that a 2022 report indicates that 45 of the top 50 global banks and 11 of the top 15 airlines use mainframes in their infrastructure. The mainframe has evolved alongside other technologies, incorporating improvements in performance, speed and capacity. Today mainframes are considered high-performance computers, capable of performing billions of operations and transactions in real time. Large companies currently run hybrid environments (distributed architectures) that combine the advantages of different systems, each complementing the others.

Sources:
— What is a mainframe?
— [ES] Historical evolution of the Cloud
March 18, 2025