Data Expedition, Inc.
I have seen many cloud migrations start with the idea of uploading thousands or even millions of files into object storage. Compared to traditional filesystems, object storage is cheaper, easier to access, and has practically unlimited capacity. These gains alone can be a compelling driver for cloud migration. But there are subtle differences in how object storage behaves that can negate all of those benefits and turn what is often seen as the first step toward the cloud into a walk off a cliff.
AWS describes S3 as "primary storage for cloud-native applications; as a bulk repository, or 'data lake,' for analytics; as a target for backup & recovery and disaster recovery; and with serverless computing." In other words, not as a target for traditional file-based applications. Objects work best when each object is large, frequently read, but rarely changed. Traditional applications count on files being quick to access, easy to change, and hierarchically organized. There is some overlap between cloud and file storage, but the devil is in the details.
There are three areas where I have most often seen cloud plans founder because of overlooked storage details: latency, throughput, and organization.
When reading data, the time between when the request is made and when the first byte arrives is the latency. This is different from throughput, which is how fast the data arrives once it starts flowing. With traditional filesystems, latency ranges from about 1 millisecond (direct-attached SSD) to about 100 milliseconds (network-attached hard disk). But with object storage, latency ranges from 100 milliseconds to over 500 milliseconds.
To understand the impact of increasing latency by hundreds of times, imagine an application that needs to search a collection of 10,000 photos for some metadata tags, reading them one at a time. On a desktop with an SSD drive, the cumulative storage latency would be about 10 seconds: 10,000 reads at roughly 1 millisecond each. However, if the same application were simply shifted to cloud storage, an average cloud latency of 250 milliseconds would balloon that job to nearly 42 minutes.
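The arithmetic behind that comparison can be checked with a quick back-of-envelope calculation. This sketch assumes strictly serial access, with each read waiting for the first byte before the next request is issued (no pipelining or caching):

```python
def cumulative_latency(num_files: int, latency_s: float) -> float:
    """Total time spent waiting on first-byte latency alone,
    assuming one request at a time with no overlap."""
    return num_files * latency_s

# Local SSD: ~1 ms per read; object storage: ~250 ms per read.
ssd_total = cumulative_latency(10_000, 0.001)
cloud_total = cumulative_latency(10_000, 0.250)

print(f"SSD:   {ssd_total:.0f} s")          # 10 s
print(f"Cloud: {cloud_total / 60:.1f} min") # ~41.7 min
```

Note that this counts only the waiting time; actual data transfer would add more on top.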
Most object storage can only deliver a single stream of data at 100 to 200 megabits per second, and that only within a single cloud region. To access an individual object faster requires a lot of behind-the-scenes optimization.
For example, CloudDat can read or write an individual object at up to 900 megabits per second by splitting the final leg to S3 into parallel, or "multipart," streams. Getting that speed across the Internet also requires transport acceleration. This is where Data Expedition, Inc. does very well, but the need to take that step beyond the default HTTP object interfaces must be considered early in planning.
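The multipart pattern itself is straightforward to sketch: split the object into byte ranges, fetch the ranges concurrently, and reassemble them in order. The sketch below is illustrative only, not CloudDat's implementation; `fetch_range` is a local stand-in for an HTTP GET with a `Range` header (e.g., `Range: bytes=0-65535` against S3):

```python
from concurrent.futures import ThreadPoolExecutor

OBJECT = bytes(range(256)) * 1024  # fake 256 KiB object
PART_SIZE = 64 * 1024              # 64 KiB parts

def fetch_range(start: int, end: int) -> bytes:
    """Stand-in for a ranged GET; returns bytes [start, end)."""
    return OBJECT[start:end]

def multipart_read(size: int, part_size: int, workers: int = 4) -> bytes:
    # Compute the byte range for each part.
    ranges = [(off, min(off + part_size, size))
              for off in range(0, size, part_size)]
    # Fetch parts in parallel; map() yields results in submission
    # order, so the reassembled object is correct.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda r: fetch_range(*r), ranges)
    return b"".join(parts)

assert multipart_read(len(OBJECT), PART_SIZE) == OBJECT
```

With a real network behind `fetch_range`, the parallel streams let the aggregate throughput exceed what any single stream can deliver.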
Did you know that object storage has no folders? Many interfaces, including our own CloudDat for AWS, filter object listings to create the illusion of folders. But behind the scenes, the entire flat namespace has to be sifted through every time. Operations that must enumerate the contents of object storage should be avoided or shifted to a database.

Another subtle organizational detail is that object names can affect performance. AWS recommends that objects that need to be accessed in parallel be given key names starting with very different strings, because those will be stored on different partitions. Making the start of each name volatile is the opposite of naming conventions in traditional filesystems, but it is essential to rapid access.
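One common way to make key names start with very different strings is to prepend a short hash of the name. This is a hedged sketch of that idea; the 4-character prefix length and the use of MD5 are arbitrary illustrative choices, not an AWS requirement:

```python
import hashlib

def partitioned_key(name: str) -> str:
    """Prefix a key with a short hash so that sequentially named
    objects spread across different key-name partitions."""
    prefix = hashlib.md5(name.encode()).hexdigest()[:4]
    return f"{prefix}/{name}"

# Sequential names now begin with very different strings:
for n in ("photos/2024/img_0001.jpg", "photos/2024/img_0002.jpg"):
    print(partitioned_key(n))
```

The trade-off is that listing "photos/2024/" as a pseudo-folder no longer works, which is one more reason to keep an inventory of object names in a database rather than enumerating the store.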
There are still more differences between object storage and filesystems: metadata, access rights, and contention controls to name a few. But my point is not to say that cloud object storage should be avoided in favor of cloud filesystems. The advantages of object storage are real. They just require careful consideration of how the data will be used before that data is lifted into the cloud. We're happy to guide you around these pitfalls, so let us know how we can help.