Microsoft and Google have announced updates to their respective virtual-machine (VM) offerings for processing highly confidential information in Microsoft Azure and Google Compute Engine.
Microsoft has moved its Azure DCsv2-Series VMs to general availability. The VMs feature hardware-based trusted execution environments (TEEs) built on Intel’s Software Guard Extensions (SGX).
TEEs – also known as secure enclaves – are isolated from the host operating system and hypervisor, and are located in a part of the CPU with its own memory.
People with physical access to the cloud servers hosting the hypervisor, such as cloud admins or data-centre workers, can’t access data actively being processed in a TEE. This adds a layer of protection on top of encrypting data at rest and in transit.
While SGX makes it very difficult to run malware in a secure enclave, researchers have found ways a person with physical access can tamper with data stored inside SGX.
The feature is likely to be of interest to private-sector and government organisations that process financial, healthcare, and intelligence data.
“By combining the scalability of the cloud and the ability to encrypt data while in use, new scenarios are now possible in Azure, like confidential multi-party computation, where different organisations combine their datasets for compute-intensive analysis without being able to access each other’s data,” Microsoft said.
Google, meanwhile, this week made Unified Extensible Firmware Interface (UEFI) firmware and its Shielded VM feature the default for all Google Compute Engine users, at no extra cost. The feature helps ensure that VMs boot with a verified bootloader and kernel.
Shielded VM offers protection from malicious guest firmware, UEFI extensions, and drivers; persistent boot- and kernel-level compromise in the guest OS; and VM-based secret exfiltration and replay.
Shielded VM is available for customers using CentOS, Google’s Container-Optimized OS, CoreOS, Debian, RHEL, Ubuntu, SUSE Linux Enterprise Server, Windows Server, and SQL Server on Windows Server images.
As a cloud solution architect, it’s vital to propose a solution that is well-balanced in every respect: from cost to flexibility, from security to availability. Let’s discuss those aspects in this write-up.
With the boost cloud computing has seen in recent years, a cloud war was imminent, especially one anchored around cloud architecture. In the current struggle, many giants have cemented their positions at the top, and others have been taking measures to prepare for the race. However, a lot of these organisations fail to build a good cloud architecture, because they often do not emphasise the best practices that must be followed.
Building a cloud-ready application architecture requires paying attention to many things. Among these are traditional concepts like stable design, testing, and correcting previously committed mistakes. Some of the other vital aspects to consider are discussed below:
Design Components Assuming Failure
This pessimistic approach to designing a cloud architecture often works best. Assuming that things will fail drives one to examine design needs, and to implement and deploy for automated recovery from failure.
This entails designing the architecture with the mindset that its hardware might fail, and preparing for outages or any other disaster. Thinking through every possible recovery strategy at design time can only help the system.
This pessimistic approach should be applied not just to hardware but also to software. One needs to ask what could happen to the application’s dependent services if an interface changes, or what could happen if cache keys grow beyond the memory limit of the instance.
This approach helps one design operation-friendly applications and results in a better cloud architecture.
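As a minimal sketch of automated recovery from transient failure, the snippet below retries a flaky dependency with exponential backoff; the function and service names are hypothetical, not part of any particular cloud SDK.

```python
import random
import time

def call_with_retry(operation, max_attempts=4, base_delay=0.05):
    """Call a flaky operation, retrying with exponential backoff.

    Assumes failures are transient; after max_attempts the error is
    re-raised so the caller can fall back or degrade gracefully.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise
            # Exponential backoff with jitter to avoid thundering herds.
            delay = base_delay * (2 ** (attempt - 1)) * (1 + random.random())
            time.sleep(delay)

# Hypothetical flaky dependency: fails twice, then succeeds.
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient outage")
    return "ok"

result = call_with_retry(flaky_service)
```

The same pattern applies at the architecture level: a retry budget plus a fallback path means a single failed dependency does not take down the whole request.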
Loosely Coupled Components For Better Scalability
Building components that do not have tight dependencies on each other means the overall operation keeps running as it should if a component fails, does not respond, or responds slowly.
When one loosely coupled component fails, the other components of the system are built so that they continue working as if the failure never happened. Each component can be treated as a black box that interacts asynchronously with the others, which also allows for more scalability.
Decoupling components, building asynchronous systems and scaling are three of the most important aspects when it comes to cloud architecture.
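A minimal sketch of this decoupling, using an in-process queue as a stand-in for a message broker (the names and workload here are illustrative): the producer knows nothing about the consumer, only about the message format, so either side can be scaled or replaced independently.

```python
import queue
import threading

# The queue is the asynchronous boundary between components.
jobs = queue.Queue()
results = []

def producer():
    for i in range(5):
        jobs.put(i)
    jobs.put(None)  # Sentinel: no more work.

def consumer():
    while True:
        item = jobs.get()
        if item is None:
            break
        results.append(item * item)  # Stand-in for real processing.

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

In a real cloud deployment the in-process queue would be replaced by a managed messaging service, but the shape of the interaction stays the same.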
Giving Emphasis To Security Within The Application
Security is often mid-level on the priority scale for many when designing a cloud architecture. However, it must be built into the application and always prioritised. One needs to pick a security approach and technology before building the application, chosen according to the type of application being run and able to address compliance and other data-level security requirements.
Generally, cloud-based applications should leverage identity and access management (IAM). Mature IAM capabilities can reduce a business’s security costs and give it the agility to configure security for cloud-based applications.
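The core IAM idea can be sketched as a role check enforced before an operation runs. This is an illustrative in-memory sketch only; the role store, exception, and function names are hypothetical, and a real system would delegate the check to the cloud provider's IAM service.

```python
from functools import wraps

# Hypothetical in-memory role store.
USER_ROLES = {"alice": {"admin"}, "bob": {"reader"}}

class PermissionDenied(Exception):
    pass

def require_role(role):
    """Allow the wrapped function only for users holding `role`."""
    def decorator(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            if role not in USER_ROLES.get(user, set()):
                raise PermissionDenied(f"{user} lacks role {role!r}")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_dataset(user, name):
    # Stand-in for a privileged operation.
    return f"{name} deleted by {user}"
```

Centralising the check in one decorator, rather than scattering it through the code, is what makes security policy cheap to reconfigure as the application evolves.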
Freedom To Migrate
There is no one-size-fits-all choice of cloud. The cloud strategy one adopts must allow the freedom to migrate to other clouds, or to run services balanced between two clouds. Planning a multi-cloud strategy gives one flexibility, along with a balance between the best price and performance.
One thing to keep in mind when choosing a strategy, and for a good cloud architecture generally, is to design a tailored environment from which one can extract the cloud’s maximum potential. This includes the ability to hybridise, freedom in the choice of applications, and a multi-cloud approach, which together result in tailored and cost-effective solutions.
With the rising adoption of the cloud and fierce competition in the market, businesses are always searching for the best option to optimise their spend and increase performance. With a cost-optimisation strategy, one can reduce costs to a minimum and use the savings to improve business strategies or anywhere else they see fit.
Some of the points to be kept in mind:
Remove the operational burden of managing and maintaining infrastructure by using the managed services offered by a cloud service provider. Doing so results in an efficient architecture while lowering cost at the same time.
Consider shifting from CapEx to OpEx. One does not need to invest heavily in hardware one does not need, and the shift from CapEx to OpEx can mean better scalability, redundancy, and reliability.
Price To Performance Ratio – This ratio measures a cloud architecture’s cost per unit of performance delivered. A low price-to-performance ratio – less cost for each unit of performance – is always desirable.
Allocate spend only to the cloud functionality and resources one actually requires, dropping or replacing services that are not needed.
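The price-to-performance comparison above can be sketched as a small calculation. The instance names, prices, and throughput figures below are hypothetical, purely to show the arithmetic:

```python
# Hypothetical instance options: (hourly price in $, requests/sec served).
options = {
    "small": (0.10, 200),
    "large": (0.35, 900),
}

# Price-to-performance: dollars per request/sec; lower is better.
ratios = {name: price / perf for name, (price, perf) in options.items()}
best = min(ratios, key=ratios.get)
```

Here the larger instance wins despite its higher sticker price, because it costs less per unit of throughput – the kind of comparison worth automating across a whole fleet.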
Finally, it’s recommended to configure the cloud at the right scale, with proper security in place.