DataCore Labs
Innovations, Technology, Solutions


DataCore & AWS DR Strategies

Dec 01, 2016 at 10:37 AM

Many are looking to migrate workloads or set up a DR strategy in the cloud, and it soon becomes apparent that doing so can be complex and expensive. My goal in this post is to illustrate a couple of inexpensive and easy ways to move data to the cloud using DataCore software. Today's focus is Amazon AWS and the integration options available, though this is certainly not a limiting factor, as other cloud offerings can also be used with DataCore software.

There are two ways to migrate, replicate, move, or back up data between an on-premises DataCore installation and an Amazon service. The first is the AWS Storage Gateway appliance. This local appliance securely transfers your data to AWS over SSL and stores it in Amazon S3 and/or Amazon Glacier. You can use this service to back up and archive your storage, but the gateway can also be used to migrate workloads to the cloud.

The Amazon gateway can also be used with DataCore's tiering functionality, as mentioned recently in Jeff Slapp's post.

For example, the AWS Storage Gateway can take a snapshot of your on-premises data volumes exposed to DataCore, so that the data can be transparently copied into Amazon S3 for backup. You can then create local volumes or Amazon EBS volumes from these snapshots to run workloads on AWS EC2 instances. Notice in the diagram that replication first moves the data to Amazon S3, where an AWS snapshot can be taken; from the snapshot, an EBS volume can be created and attached to an EC2 instance. Note that this method doesn't require a separate DataCore node to reside on AWS.
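The last two steps of that workflow can be outlined with the AWS CLI. This is an illustrative sketch only, with placeholder resource IDs that you would replace with your own; it is not a tested end-to-end procedure.

```shell
# Create an EBS volume from the snapshot that the Storage Gateway
# pushed into Amazon S3 (snapshot ID is a placeholder).
aws ec2 create-volume \
    --snapshot-id snap-0123456789abcdef0 \
    --availability-zone us-east-1a

# Attach the new volume to an EC2 instance so the workload can run
# against it (volume and instance IDs are placeholders).
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf
```

After attaching, the volume appears as a block device inside the instance, where it can be initialized or mounted like any other disk.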

For more information on the AWS Storage Gateway product, check out this overview, which covers the new interface exposing migration, bursting, and tiering use cases.


The second way to migrate data to AWS is to use DataCore's own replication functionality between an on-premises DataCore node and a DataCore node running on an AWS EC2 Windows instance. Depending on your use-case constraints, this can be done synchronously or asynchronously.


This means one would install DataCore software on an EC2 Windows instance and connect it to an on-premises DataCore node using either a VPN or the AWS Direct Connect service. This allows you to mirror your data synchronously or set up an asynchronous policy for transparent data migration to AWS. Once the data has been migrated to AWS, it becomes possible to take a DataCore snapshot for posterity or for further migrations to another AWS region or availability zone. As an optional step, you could also take an AWS snapshot as in the first example; however, one would need to make sure that all data has been persisted to the EBS volume so that the snapshot is consistent.
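One simple way to sanity-check that migrated data is consistent, before relying on an AWS snapshot, is to compare checksums of the source and target copies after all writes have been flushed. Here is a minimal sketch assuming file-level access to both copies; the function names are illustrative, not part of any DataCore or AWS API.

```python
import hashlib

def checksum(path: str, block_size: int = 1 << 20) -> str:
    """Compute a SHA-256 digest of a file, reading in 1 MiB blocks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            digest.update(block)
    return digest.hexdigest()

def verify_migration(source_path: str, target_path: str) -> bool:
    """Return True only when source and target contents match exactly."""
    return checksum(source_path) == checksum(target_path)
```

In practice, a block-level equivalent of the same idea (hashing device extents rather than files) would apply to raw EBS volumes.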

Another great post that goes into detail regarding DataCore migration strategies can be found here.

As you can see, there are two great ways to migrate your on-premises workloads to AWS, all without the heavy expense of a consultant or of specialized, expensive transcoding software to move data blocks. The next post in this AWS series will look at performance options using DataCore software on Amazon AWS.



Can I Have A Witness?

Nov 08, 2016 at 09:19 AM

There are many storage systems on the market today that require a separate, protected witness construct for highly available and fault-tolerant data access. The quorum mandates that at least one server instance has ownership of, and active access to, the underlying data subsystem. This arrangement is typical of cluster architectures. It's important to recognize that a witness node in a clustered solution is vital for data awareness and data availability. For more information, see here.

As an example, here you will find a rudimentary design that illustrates how a witness makes a decision during a failure event in which Nodes 3 and 4 are isolated from the rest of the cluster. This arbitrated vote by the witness maintains order, allowing quorum to be met and eliminating split-brain scenarios.


However, there is more than one way to meet the demands of data availability and fault tolerance. DataCore is one example of a data-aware system that provides cluster-like availability without the high cost and complexity of a witness architecture. DataCore is a true "active-active" grid architecture, not a cluster architecture. Each node within the grid presents mirrored disks as "active-active" storage devices. This means the backend storage is not presented through only one DataCore node; it can be addressed via both DataCore nodes simultaneously (read and write, R/W).

Unlike a cluster solution, this grid approach is not affected by split-brain scenarios, because every mirrored DataCore virtual disk is kept synchronously in sync. Each DataCore node functions similarly to a witness node in that a decision is made about where active I/O should be acknowledged from during a failure event. This also means one doesn't have to figure out where to place a witness node for optimal design and failure handling.

Another way to think about this architecture is from the witness/quorum perspective. The responsibility of the witness is to ensure that at least two votes exist within a clustered system, so that a majority decision can be reached on which site/node will be active during a failover scenario. For this comparison, one can think of DataCore's grid architecture as having multiple intelligent witness nodes, all actively participating to ensure high availability and data redundancy. With an active-active architecture in which all server nodes act as intelligent witnesses for one another, one no longer needs to design another site/node just for witness protection.
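To make the voting idea concrete, here is a toy sketch of majority-based quorum arbitration, the mechanism a witness provides in a classic cluster. This is illustrative logic only, not DataCore's implementation.

```python
def has_quorum(reachable_votes: int, total_votes: int) -> bool:
    """A partition may stay active only if it holds a strict majority
    of all votes. Because two disjoint partitions can never both hold
    a strict majority, this rule prevents split brain."""
    return reachable_votes > total_votes // 2

# Example: a 5-vote cluster (4 nodes + 1 witness) splits 3/2.
# Only the 3-vote side keeps quorum and remains active; the 2-vote
# side must stand down even though its nodes are still healthy.
```

Note the even-vote case: in a 4-vote split of 2/2, neither side has a strict majority, which is exactly why cluster designs add a witness to break the tie.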

All DataCore nodes can provide individual, autonomous services, and a virtual disk can still be accessed via a single DataCore node. In that case the peer DataCore node constitutes a hot standby for the active node and only becomes active should it be required during a failure event. This means one achieves a modern, cluster-like architecture without the limitations or complexity of dedicated witness nodes spread across geographic sites. There is no need to design a solution using DataCore software around a witness architecture: all DataCore nodes function similarly to a witness construct while providing a better architecture for data availability and redundancy.

So before asking "Can I have a witness?", the better question is: why do you need one? If the objective is to keep your applications running with continuous availability, why add the extra cost and complexity required to manage and support additional witness nodes? DataCore software provides continuous availability, all with fewer nodes to manage and lower costs.



