Digital transformation has created new product and service capabilities and untold additional yottabytes of data. It has become increasingly clear that data is a key creator of value. Take, for example, the realm of digital entertainment: for proof of its value, just scan your monthly credit card bill for streaming service subscriptions. Also, take a moment to think of truly impactful digital content — the MRI that aids a doctor in early disease detection for a patient, the genome data that helps unlock a cure, and the convenience of planning our daily lives online for work, family, travel and entertainment.
There has been a corresponding change in how data is created, stored and consumed. People generate data in both their business and personal lives, but we now also see machine-generated data being created at a massive pace in manufacturing locations, utilities, vehicles and so on. Data lives in our homes and cars, on cruise ships and airplanes, in hospitals, sports stadiums and many more places.
Consequently, organizations need to create a plan for infrastructure to consume, manage, store and protect data anywhere. This now translates into data everywhere, from the data center to the cloud and to the emerging “edge” — and this edge is a dramatically growing area of technology innovation and consumption.
Data storage’s level playing field
A decade or two ago, the storage administrator was the employee who managed storage within the enterprise data center. These deeply knowledgeable and technical professionals understood that protecting data was key to their business's success and that making it consumable to the right people (and only the right people) was the primary objective of their jobs. Understanding how data is stored, its formats and how it is accessed and consumed gave rise to a specialized world of users who knew the speeds and feeds of storage and fluently spoke the language of technical data storage acronyms.
As change continues at a record pace, it's no longer just the enterprise IT staff who have the responsibility of capturing, protecting and giving access to data storage. That responsibility has become the domain of a broad range of application owners and technical architects, and it has highlighted the role of development operations or “DevOps” teams. This collection of people now makes critical decisions within enterprises for solutions — which encompass applications, people, processes and infrastructure — and all of these decisions are made more independently than before.
Cloud-native shakes things up
Whereas we used to hear about enterprise resource planning (ERP) and business process re-engineering (BPR), we now hear about business applications, data lakes, big data analytics, artificial intelligence and machine learning. These workloads are driving major changes in data, how much of it needs to be stored and how it gets consumed.
Workloads of this type welcome modern design methodologies and principles in application development, design and deployment. This new wave, termed cloud-native, includes the use of distributed software services packaged and deployed as containers and orchestrated on Kubernetes. The promises of this new approach include efficiency, scalability and — very importantly — portability. The latter aspect will allow software applications and infrastructure to support the new dynamic described earlier: data is created and lives everywhere.
That’s the technical aspect of the change. On the storage side, cloud-native applications will also change how storage is accessed, provisioned and managed. This is a world of software services and interactions between services through well-defined interfaces or APIs. Storage has historically been an area where standard interfaces have been adopted; in the realm of file systems, specifically, there are the well-known SMB and NFS protocols.
For cloud-native applications, API-based access to storage is a natural fit, and object storage supports it directly through its RESTful APIs. The popular Amazon S3 API is now fully embraced by independent software vendors (ISVs) and storage vendors alike for the cloud, the data center and the edge. APIs also apply to storage management and monitoring, and API-based automation is another central theme in this cloud-native wave.
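As a minimal sketch of what this API-based access pattern looks like, the snippet below constructs path-style, S3-like REST requests using only Python's standard library. The endpoint, bucket and object key are hypothetical; a real client would also sign each request (e.g. with AWS Signature Version 4) or simply use an SDK such as boto3.

```python
# Sketch of S3-style RESTful object access (hypothetical endpoint/bucket).
# No network calls are made here; we only build the request objects to
# illustrate that one URL scheme covers the whole object lifecycle.
from urllib.request import Request

ENDPOINT = "https://objects.example.com"  # hypothetical S3-compatible endpoint


def object_url(bucket: str, key: str) -> str:
    """Build the path-style URL for an object: <endpoint>/<bucket>/<key>."""
    return f"{ENDPOINT}/{bucket}/{key}"


# The same URL serves every HTTP verb: GET reads, PUT writes, DELETE removes.
url = object_url("telemetry", "2024/device-42.json")
get_req = Request(url, method="GET")
put_req = Request(url, data=b'{"temp": 21.5}', method="PUT")
del_req = Request(url, method="DELETE")
```

Because the interface is plain HTTP, the same code works unchanged against any S3-compatible endpoint in the cloud, the data center or at the edge, which is precisely the portability argument made above.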
Object storage brings all the right ingredients together – offering portability, API-based access, automation and scalability to effectively unbounded levels – to be the optimal storage model for the new cloud-native world. Next-generation object storage solutions can and will go further in providing higher levels of performance for new applications and workloads and will also provide simplicity of operations to ensure that wider ranges of users will be able to fully exploit them.
Data storage and management have become increasingly complex in the age of apps. Demands shift with the technology, mandating a new method of data management and delivery. Lightweight, cloud-native object storage is what’s needed to power this next generation of cloud-native applications throughout their entire lifecycle – no matter where your data resides.