What are Dedicated AWS EC2 Instances?

EC2 instances are virtual machines provided by AWS that you rent to run your application workloads. They can vary from powerful instances with hundreds of vCPUs, capable of processing vast amounts of data quickly, down to small micro instances suitable for running microservices.

In reality, each instance is a slice of a physical server with enough capacity to support the compute and memory characteristics of the EC2 instance type being offered.

There are different classifications of EC2 instance types.

The M class, for instance, is a middle-of-the-road instance type with an even balance of compute, memory and network throughput, suitable for most business applications.

The C class EC2 instances are optimised for compute capabilities whereas the R class instances are optimised in favour of memory availability.

The physical hardware within the AWS data centres is built specifically for the different instance classes, so that when you select an EC2 instance type, the virtual machine provisioned will be allocated to a server with the appropriate resources. As they say, there is no cloud, only other people’s computers.

So say you select an M4.xlarge instance type: a specific number of vCPUs, a specific amount of network capacity and a specific amount of memory are associated with it. AWS then provisions your virtual machine on an appropriate server and ensures that only you have access to the resources required by your instance, even though other AWS customers' virtual machines may be running on the same piece of multi-tenant equipment.

Dedicated vs shared infrastructure

When you select dedicated hosting, your EC2 instance will be the only virtual machine on the server hardware, and all other instances will be prevented from running on that machine, including other EC2 instances you wish to run. This is the main reason dedicated infrastructure is much more expensive than shared hosting.

EC2 instance types

The general purpose category of EC2 instances provides a good balance of compute, memory and networking resources. The general purpose instances include:

Mac — powered by Mac mini computers using the AWS Nitro System

T4g — ARM-based AWS Graviton2 processors

T3 — Burstable Intel Xeon CPUs

T3a — Burstable 2.5GHz AMD EPYC 7000 series processors

T2 — Burstable 3.0–3.3GHz Intel Xeon CPUs

M6g — ARM-based Graviton2 CPUs with ARM Neoverse cores, plus EBS or SSD storage

M6i — 3rd Gen (Ice Lake) 3.5GHz Intel Xeon processors

M5 — Intel Xeon Platinum 8175M processors with Intel AVX-512, EBS or SSD storage

M5a — AMD EPYC 7000 series 2.5GHz processors with EBS or SSD storage

M5n — Intel Xeon Cascade Lake 3.5GHz CPUs with EBS or SSD storage

M5zn — Intel Xeon Cascade Lake 4.5GHz CPUs

M4 — Intel Xeon 2.3GHz E5-2686 Broadwell or 2.4GHz E5-2676 Haswell CPUs

A1 — Custom AWS Graviton CPUs with Neoverse cores

Each EC2 instance type typically comes in a number of sizes with various CPU, memory and storage options. The T3 type, for example, ranges from t3.nano (2 vCPUs, 0.5 GiB of memory) up to t3.2xlarge (8 vCPUs, 32 GiB of memory).
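As a quick reference, the T3 size options can be captured in a small lookup table. The vCPU and memory figures below are AWS's published T3 specifications:

```python
# Published size options for the T3 instance family:
# each entry maps a size to (vCPUs, memory in GiB).
T3_SIZES = {
    "t3.nano":    (2, 0.5),
    "t3.micro":   (2, 1.0),
    "t3.small":   (2, 2.0),
    "t3.medium":  (2, 4.0),
    "t3.large":   (2, 8.0),
    "t3.xlarge":  (4, 16.0),
    "t3.2xlarge": (8, 32.0),
}

def specs(instance_type: str) -> tuple:
    """Return (vCPUs, memory GiB) for a T3 size."""
    return T3_SIZES[instance_type]
```

For example, `specs("t3.large")` returns `(2, 8.0)`. Note that every size up to t3.large has just 2 vCPUs; what changes is memory and the CPU baseline.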

Compute Optimized EC2 Instances

Compute optimized instances are a good choice for large compute workloads like batch processing, media transcoding, high-volume web servers, scientific modelling, gaming servers and other compute-intensive workloads.

C6g — ARM-based AWS Graviton2 CPUs with up to 25Gbps network bandwidth

C6gn — ARM-based AWS Graviton2 CPUs with up to 100Gbps network bandwidth

C5 — Choice of Intel Xeon Scalable CPUs up to 3.9GHz

C5a — 2nd Gen AMD EPYC 7002 series CPUs up to 3.3GHz

C5n — Intel Xeon Platinum CPUs with Intel AVX-512

C4 — EC2-optimised Intel Xeon E5-2666 v3 Haswell CPUs

Memory Optimized EC2 Instances

These memory optimized instances are designed to deliver fast performance for applications that process large data sets in memory.

R6g — ARM-based Graviton2 CPUs with 8–512GB memory

R5 — Intel Xeon 3.1GHz CPUs with up to 768GB memory per instance

R5a — AMD EPYC 7000 series CPUs with up to 768GB memory

R5b — Up to 96 vCPUs, 2nd Gen Intel Xeon CPUs and 768GB memory

R5n — 2nd Gen Intel Xeon CPUs

R4 — High frequency Intel Xeon E5-2686 CPUs, up to 64 vCPUs and 488GB RAM

X2gd — ARM-based Graviton2 CPUs

X1e — Intel Xeon E7-8880 CPUs with up to 3904GB DRAM instance memory

X1 — Intel Xeon E7-8880 CPUs with up to 1952GB DRAM

High Memory — Intel Xeon CPUs with up to 24TB memory

Z1d — Intel Xeon 4.0GHz CPUs with AVX-512 and up to 384GB RAM

Accelerated Computing EC2 Instances

These instances use hardware accelerators, or co-processors, to perform functions like floating point calculations, graphics processing and data pattern matching more efficiently than standard CPUs, making them well suited to machine learning and high performance computing workloads.

P4 — Up to 8 Nvidia A100 Tensor Core GPUs, Intel Xeon 3.0GHz CPUs

P3 — Up to 8 Nvidia Tesla V100 GPUs, Intel Xeon 2.5GHz CPUs

P2 — Up to 16 Nvidia K80 GPUs & Intel Xeon Broadwell CPUs

Inf1 — Up to 16 AWS Inferentia chips for ML inference applications

G4dn — Up to 8 Nvidia T4 Tensor Core GPUs & 2.5GHz Cascade Lake CPUs

G4ad — Up to 4 Radeon Pro V520 GPUs & AMD EPYC CPUs

G3 — Up to 4 Nvidia Tesla M60 GPUs

F1 — Up to 8 Xilinx Virtex UltraScale+ FPGAs (Field Programmable Gate Arrays)

Storage Optimized EC2 Instances

Storage optimised EC2 instances are designed to handle workloads that require high sequential read/write access to large data sets, delivering tens of thousands of low-latency IOPS.

I3 — Non-Volatile Memory Express (NVMe) SSD-backed storage, Intel Xeon CPUs

I3en — Up to 60TB NVMe SSD storage, 3.1GHz Intel Xeon Skylake CPUs

D2 — Up to 48TB local HDD storage, Intel Xeon Haswell CPUs

D3 — Up to 48TB HDD storage, Intel Xeon Scalable Cascade Lake CPUs

D3en — Up to 336TB of HDD storage, Intel Xeon CPUs, up to 75Gbps network bandwidth

H1 — Up to 16TB HDD storage, Intel Xeon 2.3GHz Broadwell CPUs

How does AWS assign hardware for EC2?

A single server will typically host as many instances as it can without oversubscribing the CPU or memory available to service them, and these instances will usually come from multiple AWS accounts.

For example, one rack-mounted server may have the capacity to host three M4.xlarge instances. With those three M4.xlarge instances deployed on a single physical server, there are not enough resources left to host a fourth M4.xlarge instance on this piece of equipment, but there is still some spare space.

Never one to let an opportunity slip, AWS will find a use for this spare capacity. While another M4 class EC2 instance will not fit, smaller instances such as a t2.micro will, so AWS will deploy whatever virtual machines fit into the remaining space.

In reality, multiple smaller instances might occupy this remaining space alongside the M4.xlarge instances.
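The packing behaviour described above can be sketched as a simple capacity check. The instance specs below are real (an m4.xlarge has 4 vCPUs and 16 GiB of memory, a t2.micro has 1 vCPU and 1 GiB), but the server capacity is an assumption chosen for the example:

```python
# Illustrative sketch of AWS packing instances onto one physical server.
# Instance specs are real (m4.xlarge: 4 vCPU/16 GiB, t2.micro: 1 vCPU/1 GiB);
# the server capacity of 14 vCPUs / 52 GiB is an assumption for this example.
SPECS = {"m4.xlarge": (4, 16), "t2.micro": (1, 1)}

class Server:
    def __init__(self, vcpus, mem_gib):
        self.free_vcpus = vcpus
        self.free_mem = mem_gib
        self.instances = []

    def place(self, instance_type):
        """Place an instance if it fits; return False when it doesn't."""
        vcpus, mem = SPECS[instance_type]
        if vcpus > self.free_vcpus or mem > self.free_mem:
            return False
        self.free_vcpus -= vcpus
        self.free_mem -= mem
        self.instances.append(instance_type)
        return True

server = Server(vcpus=14, mem_gib=52)
for _ in range(3):
    server.place("m4.xlarge")               # three m4.xlarge fit
fourth_fits = server.place("m4.xlarge")     # False: only 2 vCPUs left
micro_fits = server.place("t2.micro")       # True: spare space gets used
```

After three m4.xlarge placements, a fourth won't fit, but a t2.micro slots neatly into the leftover capacity, which is exactly the behaviour described above.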

The shared resources on this server aren't strictly reserved exclusively for each instance, either. Sometimes your instance can consume CPU above and beyond what you have reserved, through a mechanism called CPU credits and bursting.

When your instance is consuming less than you are paying for, AWS issues you CPU credits. These credits are available for you to redeem at times when your compute requirements exceed what you have reserved.

In the example above, if the M4 instances are not using their full allocation of CPU or memory, and your t2.micro instance needs to temporarily exceed its own allocation to meet a spike in traffic or transaction processing, then provided you have CPU credits in hand, AWS will borrow some capacity from the other instances on the server.

If you have no CPU credits, or the other instances on the server are using their maximum allocation, then AWS will throttle your CPU, memory and network traffic back down to the maximum your EC2 instance is entitled to.
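A toy model of this credit mechanism might look like the following. The baseline and burst figures are invented for illustration and don't correspond to any real instance size:

```python
# Toy model of CPU credit accrual, bursting and throttling.
# BASELINE and MAX_BURST are invented for illustration; real T-family
# baselines and credit caps vary by instance size.
class BurstableInstance:
    BASELINE = 10     # baseline CPU units per time slice (assumed)
    MAX_BURST = 100   # physical CPU available when bursting (assumed)

    def __init__(self):
        self.credits = 0.0

    def run_tick(self, demand):
        """Return the CPU actually granted for one time slice."""
        if demand < self.BASELINE:
            # Under baseline: bank the unused allocation as credits.
            self.credits += self.BASELINE - demand
            return demand
        # Over baseline: spend credits to burst, else throttle to baseline.
        wanted_extra = min(demand, self.MAX_BURST) - self.BASELINE
        extra = min(wanted_extra, self.credits)
        self.credits -= extra
        return self.BASELINE + extra

vm = BurstableInstance()
vm.run_tick(4)             # quiet tick: banks 6 credits
granted = vm.run_tick(20)  # spike: baseline 10 plus 6 banked credits = 16
```

Once the banked credits are spent, any further demand above baseline is throttled back to the baseline figure, which mirrors the throttling behaviour described above.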

In reality this system works well, because the instances on a server will typically be running different applications and workloads, and won't all be hammering the processors and available RAM at the same time.

What happens when you select dedicated instances?

When you provision a new EC2 instance and tell AWS that you want it to be a dedicated instance, AWS will find a suitable server in the data centre with no other instances running on it, provision your dedicated instance there, and block the remaining capacity on the server from being used by anyone else, including you.
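In boto3 terms, tenancy is requested through the `Placement` parameter of `run_instances`. The sketch below only builds the request parameters rather than calling AWS, and the AMI ID is a placeholder, not a real image:

```python
# Sketch: request parameters for launching a dedicated instance with boto3.
# Only the parameter dict is built here -- passing it to
# ec2_client.run_instances(**params) would require AWS credentials.
# The ImageId is a placeholder, not a real AMI.
params = {
    "ImageId": "ami-00000000000000000",  # placeholder AMI ID
    "InstanceType": "m4.xlarge",
    "MinCount": 1,
    "MaxCount": 1,
    # Tenancy options: "default" (shared hardware), "dedicated"
    # (dedicated instance), or "host" (a Dedicated Host you allocated).
    "Placement": {"Tenancy": "dedicated"},
}
```

The same `Tenancy` field is how you would target a Dedicated Host (`"host"`) instead of a dedicated instance.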

While bursting will still be possible if you accrue CPU credits, the entire server does not become available to your dedicated EC2 instance.

You do, however, pay for the entire server within the pricing structure of a dedicated instance, which is why dedicated instance pricing is significantly higher than shared workload hosting.

So it's almost never a good idea to request that your EC2 instances be dedicated. While no one else can access the server your instance is running on, you don't get exclusive use of the CPU and RAM on that server either; you only get access to the maximum CPU, memory and network bandwidth defined in your EC2 instance specification.

So why would you use a dedicated instance?

A common driver is compliance around sensitive data such as healthcare patient records. Part of the compliance regime is that no other organisation should have access to the hardware your patient records are stored on. This requirement was defined long before cloud computing, and works on the assumption that anyone with digital access to a piece of hardware, who knows what they are doing, could access any other information held on that hardware, even if it sits in a different virtual network.

What that means in cloud computing terms is that if you are provisioning an EC2 instance that stores or provides access to patient information, then nobody else can have access to the hardware that EC2 instance lives on.

So in this situation a dedicated instance will be the only resource on the server, so no one else will be able to access your server and snoop about checking out your blood sugar levels.

AWS EC2 dedicated hosts

When you need to run multiple EC2 instances on the same server, this service allocates a physical server for your exclusive use, as opposed to the standard multi-tenant server model. Dedicated hosts provide visibility of, and the option to control, how you place instances on a specific physical server, and let you use existing per-socket, per-core or per-VM software licences like Windows Server, SQL Server or Linux subscriptions, which can help reduce costs if you already have licences in place.

Launching virtual machines on dedicated hosts can be managed through AWS License Manager.

So that's a quick run through of AWS EC2 dedicated instances. Once you start to deploy EC2 instances on AWS, it's invaluable to be able to visualize your infrastructure using easy-to-understand, accurate, automatically generated network topology diagrams that keep themselves up to date.

This is what hava.io was built for. You can get up to date interactive diagrams of your AWS, GCP and Microsoft Azure networks by connecting Hava and letting the magic happen.

You can take Hava for a free 14-day trial here: https://www.hava.io

Originally published at https://www.hava.io.
