Motivation Behind CohortFS
Large data-center providers like Amazon and Google, as well as federated data centers such as those now being built for U.S. federal systems, are the future of both application hosting and data storage. In these environments, storage utilization grows constantly, and storage units (along with their supporting infrastructure) are continually expanded, retired, and replaced. At the same time, this large and, over time, heterogeneous infrastructure must be shared effectively by a very large number of customers with widely varying requirements. For this reason, data centers need software-defined storage architectures that can partition the available storage infrastructure into arbitrary sub-units, with sophisticated, algorithmic guarantees on size, performance, bandwidth utilization, data availability, and data isolation, along with delegated management and charge-back accounting on these metrics for individual end customers or applications.