
Friday, December 10, 2021

Evolution of Data Center/Virtualization/SDDC

During the 1970s, records were stored as physical files placed in cabinets, often referred to as "files in the cabinet".

The problems with this way of storing files were:

- Accessibility: it was difficult to locate a particular file

- Location constraints: people at other locations had no access to the files

- Disaster Recovery (DR): in case of fire or other natural calamities, the files could not be recovered. There was no concept of failover

This led to a basic expectation: anything worth keeping track of and storing should be accessible 100% of the time.

Then came the 1990s: the age of mainframe computers and servers.

Large companies started buying servers (Compaq, Dell) and moved their files from cabinets onto these machines, known as file servers.

Essentially, this converted physical files into electronic files.

Electronic files can be copied, backed up, put on disk, and shared. Now anyone in the world can access these files.

The problem with this model was one server per application. It is really difficult to maintain a separate server for each application (email, web, file, and so on).

Moreover, each of these servers was built differently, with different requirements and a different operating system.

Around 2000, companies started buying standardized, compact servers and mounting them in racks, known as rack-mount servers.

Multiple servers fit in the same rack, giving a smaller footprint and requiring less space.

Companies started building customised rooms with dedicated power and cooling to maintain these servers.

But as companies grew, they had to buy more and more servers to run the same applications (say, email), which was not cost-effective at all.

Another big issue: these expensive servers sitting in labs and data centers were utilising only around 3-4% of their resources.

Here comes VMware, solving this problem with the help of virtualization.

With virtualization, a single server (bare metal) can run multiple VMs. This is done through a hypervisor called ESXi.

This reduces cost and increases efficiency, utilisation, and uptime.

With virtualization, storage itself becomes software: a VM's disk is just a .vmdk file, kept on shared storage. With the help of vMotion, we can move running VMs between different ESXi hosts and clusters.

If a server crashes or needs an upgrade, all VMs on that host are brought up and running on another host using HA (High Availability). This minimises downtime.
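The HA behaviour described above can be sketched as a toy placement loop. This is a conceptual illustration only, not VMware's actual HA algorithm; the host and VM names are made up:

```python
# Conceptual sketch of HA-style failover (not VMware's real implementation):
# when a host fails, its VMs are restarted on the surviving hosts.

def failover(hosts, failed_host):
    """Redistribute the VMs from failed_host onto the remaining hosts."""
    survivors = [h for h in hosts if h["name"] != failed_host]
    orphaned = next(h["vms"] for h in hosts if h["name"] == failed_host)
    for vm in orphaned:
        # Simple placement policy: pick the least-loaded surviving host.
        target = min(survivors, key=lambda h: len(h["vms"]))
        target["vms"].append(vm)
    return survivors

hosts = [
    {"name": "esxi-01", "vms": ["web-01", "db-01"]},
    {"name": "esxi-02", "vms": ["mail-01"]},
    {"name": "esxi-03", "vms": []},
]

# Simulate esxi-01 crashing: its VMs restart on the other hosts.
for host in failover(hosts, "esxi-01"):
    print(host["name"], host["vms"])
```

The key point the sketch captures is that HA needs spare capacity on the surviving hosts and shared storage holding the .vmdk files, so a VM can restart anywhere in the cluster.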

In a similar way, we can have disaster recovery to mitigate risk: in case of natural calamities, we can copy, back up, and replicate the same .vmdk files to another site.

All these components (servers + storage + networking) together make up the cloud.

When all of these are implemented in software, we get an SDDC (Software-Defined Data Center).

VMware's software-defined infrastructure consists of:

vCenter (VC) + ESXi + vSAN + NSX

vCenter Server (VC) is used to manage multiple ESXi hosts.

ESXi virtualizes a bare-metal server into multiple VMs.

vSAN is software-defined storage.

NSX is software-defined networking and security.

VMware Cloud Foundation (VCF) = ESXi + vSAN + NSX

With VMware Cloud Foundation, we can move or migrate VMs between different data centers running the same software stack, including across different VMware Cloud partners.

We can also build machines in hyperscale clouds like AWS, Azure, and GCP. The problem with building them in cloud-native form is that once built there, they cannot be moved back to VCF.

So to deploy, manage, and monitor both cloud-native workloads and VCF, VMware offers a central cloud management platform called vRealize.

There is also another component called vRNI (vRealize Network Insight), used especially to monitor and manage NSX components.

All of these products are also offered as SaaS (hosted on VMware Cloud).

