When faced with very large amounts of data, how can the size of the data be reduced to enable more cost-effective storage and greater data mobility?
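One common answer is to compress data before it is stored or moved, trading CPU time for a smaller footprint. A minimal sketch using Python's standard gzip codec on a hypothetical block of repetitive log data (real platforms typically use splittable codecs and columnar formats at much larger scale):

```python
import gzip

# Hypothetical repetitive log data standing in for a large dataset.
raw = b"2024-01-01 INFO request served\n" * 10_000

# Compressing before storage or transfer lowers storage cost and
# reduces the number of bytes that must move between systems.
compressed = gzip.compress(raw)

ratio = len(compressed) / len(raw)
print(f"raw: {len(raw)} bytes, compressed: {len(compressed)} bytes, ratio: {ratio:.3f}")
```

Highly repetitive data such as logs compresses especially well, which is why compression is a standard step before archival storage or cross-cluster transfer.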
How can large amounts of processed data be ported from a Big Data platform directly to a relational database?
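At scale this egress is usually handled by a bulk-transfer tool (Apache Sqoop is a common choice on Hadoop), which reads processed output files and loads them into relational tables with parallel batched inserts. A small sketch of the same principle using Python's built-in sqlite3 and a hypothetical CSV result set:

```python
import csv
import io
import sqlite3

# Hypothetical processed output, as it might land in a results file
# on the Big Data platform (CSV is a common egress format).
processed = io.StringIO("user_id,total\n1,42\n2,17\n3,99\n")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE totals (user_id INTEGER PRIMARY KEY, total INTEGER)")

# Batched inserts move rows into the relational store efficiently;
# bulk-transfer tools apply the same idea with parallel workers.
rows = [(int(r["user_id"]), int(r["total"])) for r in csv.DictReader(processed)]
conn.executemany("INSERT INTO totals VALUES (?, ?)", rows)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM totals").fetchone()[0])
```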
How can different distributed processing frameworks be used to process large amounts of data without having to learn the programmatic intricacies of each framework?
The Random Access Storage compound pattern represents a part of a Big Data platform capable of storing high-volume, high-variety data and making it available for random access.
How can processed data be ported from a Big Data platform to systems that use proprietary, non-relational storage technologies?
How can large amounts of unstructured data be imported into a Big Data platform from a variety of different sources in a reliable manner?
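On Hadoop platforms this ingress is often handled by Apache Flume, which gains reliability from a durable channel between source and sink. A sketch of a Flume agent configuration (the directory and HDFS paths are hypothetical) that spools files from a local directory into HDFS, with a file-backed channel so events survive agent restarts:

```
# Hypothetical Flume agent: spool a local directory into HDFS reliably.
agent.sources = src
agent.channels = ch
agent.sinks = sink

agent.sources.src.type = spooldir
agent.sources.src.spoolDir = /var/data/incoming
agent.sources.src.channels = ch

# File channel persists events to disk for at-least-once delivery.
agent.channels.ch.type = file

agent.sinks.sink.type = hdfs
agent.sinks.sink.hdfs.path = /ingest/events
agent.sinks.sink.channel = ch
```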
How can the execution of a series of data processing activities, from data ingress to egress, be automated?
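Such automation is typically delegated to a workflow engine (Apache Oozie is a common choice on Hadoop) that runs a declared chain of stages on a schedule, with retries on failure. A minimal sketch of the idea, with hypothetical stage functions standing in for real ingress, processing, and egress actions:

```python
# Each stage receives the previous stage's output; a workflow engine
# runs such chains on a schedule and retries failed stages.
def ingest():
    return ["  Alpha ", "beta", " GAMMA"]  # stand-in for reading a source

def clean(records):
    return [r.strip().lower() for r in records]

def egress(records):
    return ",".join(records)  # stand-in for writing to a sink

PIPELINE = [clean, egress]

def run(pipeline, data):
    for stage in pipeline:
        data = stage(data)  # a real engine would retry or alert on failure here
    return data

result = run(PIPELINE, ingest())
print(result)  # → alpha,beta,gamma
```

Declaring the pipeline as data (the `PIPELINE` list) rather than hard-coding the call chain is what lets an engine schedule, monitor, and resume it.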
How can large amounts of data be stored in a fault-tolerant manner such that the data remains available in the face of hardware failures?
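The standard approach is replication: each block of data is written to several independent nodes, so any surviving replica can serve reads after a failure (HDFS, for example, defaults to a replication factor of 3). A toy sketch using directories as stand-in storage nodes; all names here are hypothetical:

```python
import os
import tempfile

REPLICATION = 3  # HDFS-style default replication factor

def write_block(nodes, name, data):
    # Write one copy of the block to each "node" (a directory here).
    for node in nodes[:REPLICATION]:
        with open(os.path.join(node, name), "wb") as f:
            f.write(data)

def read_block(nodes, name):
    # Try each replica in turn; any surviving copy serves the read.
    for node in nodes:
        path = os.path.join(node, name)
        if os.path.exists(path):
            with open(path, "rb") as f:
                return f.read()
    raise IOError("all replicas lost")

base = tempfile.mkdtemp()
nodes = [os.path.join(base, f"node{i}") for i in range(3)]
for n in nodes:
    os.mkdir(n)

write_block(nodes, "blk_0001", b"payload")
os.remove(os.path.join(nodes[0], "blk_0001"))  # simulate a disk failure
print(read_block(nodes, "blk_0001"))  # the data survives the failure
```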
How can large amounts of raw data be analyzed in place by contemporary data analytics tools without having to export data?
How can very large amounts of data be processed with maximum throughput?
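Maximum throughput comes from dividing the data into partitions processed in parallel and then merging the partial results, the structure behind MapReduce-style frameworks. A sketch of that split/merge shape for a word count over a hypothetical corpus; a thread pool stands in for the separate worker machines a real framework would use:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Hypothetical corpus split into partitions, as a distributed framework
# splits input data into blocks processed on separate workers.
partitions = [
    "to be or not to be",
    "that is the question",
    "to sleep perchance to dream",
]

def map_count(text):
    # Map phase: each partition is counted independently, so all
    # partitions can be processed at the same time.
    return Counter(text.split())

with ThreadPoolExecutor() as pool:
    partials = list(pool.map(map_count, partitions))

# Reduce phase: merge the per-partition counts into one result.
result = sum(partials, Counter())
print(result["to"])  # → 4
```

Because the map phase has no shared state, adding workers scales throughput almost linearly until the merge (reduce) step becomes the bottleneck.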