How can the size of the data be reduced to enable more cost effective storage and increased data movement mobility when faced with very large amounts of data?
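Compression is the usual answer to this problem. As a minimal sketch, the example below uses Python's standard-library `gzip` module on invented, repetitive log-style records, the kind of data that compresses well:

```python
import gzip

# Hypothetical sample payload: repetitive log-style records compress well.
records = "\n".join(
    f"2024-01-01T00:00:{i % 60:02d} INFO sensor=42 value=7" for i in range(1000)
)
raw = records.encode("utf-8")

# Compress before storing or moving the data.
compressed = gzip.compress(raw)

print(len(raw), len(compressed))  # compressed size is far smaller
```

Real platforms typically apply such codecs (gzip, Snappy, and similar) at the file or block level rather than per record.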
How can large amounts of processed data be ported from a Big Data platform directly to a relational database?
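In practice this porting is often handled by bulk-transfer tools such as Apache Sqoop. As a self-contained stand-in, the sketch below inserts hypothetical processed results (daily aggregates) into a relational table using Python's built-in `sqlite3` module; the table and values are invented for illustration:

```python
import sqlite3

# Hypothetical processed results, e.g. aggregates computed on the platform.
processed = [("2024-01-01", 120), ("2024-01-02", 95)]

conn = sqlite3.connect(":memory:")  # stand-in for the target relational database
conn.execute("CREATE TABLE daily_counts (day TEXT PRIMARY KEY, events INTEGER)")
conn.executemany("INSERT INTO daily_counts VALUES (?, ?)", processed)
conn.commit()

rows = conn.execute("SELECT day, events FROM daily_counts ORDER BY day").fetchall()
print(rows)
```

A production export would batch inserts and run them in parallel, but the shape of the operation is the same.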
The Random Access Storage compound pattern represents a part of a Big Data platform capable of storing high-volume and high-variety data and making it available for random access.
How can processed data be ported from a Big Data platform to systems that use proprietary, non-relational storage technologies?
How can processed data be exported in real time from a Big Data platform to other systems?
Storing large amounts of fast-arriving data as a dataset and processing it in batch mode incurs processing latency, delaying the availability of analysis results.
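The latency trade-off can be illustrated with a toy contrast (values invented): batch processing yields a result only once the full dataset is available, whereas incremental processing keeps an up-to-date result after every arriving record:

```python
# Toy stream of arriving values (invented for illustration).
stream = [5, 3, 8, 1, 9]

# Batch style: the result exists only after all records have arrived.
batch_result = sum(stream)

# Incremental style: a running result is available after every record.
running = 0
partials = []
for value in stream:
    running += value
    partials.append(running)  # intermediate result, usable immediately

print(batch_result, partials)
```

Stream-processing engines generalize the incremental loop above, trading some per-record overhead for near-immediate results.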
How can the execution of a number of data processing activities starting from data ingress to egress be automated?
How can complex processing tasks be carried out in a manageable fashion when using contemporary processing techniques?
How can large amounts of data be stored in a fault tolerant manner such that the data remains available in the face of hardware failures?
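Fault tolerance is commonly achieved through replication. The sketch below is a deliberately simplified, assumed design (node names, replica placement, and the API are invented) in which each block is written to several nodes so that a read can be served from any surviving replica:

```python
# Simplified replication sketch; a real system hashes and balances placement.
REPLICATION_FACTOR = 3
nodes = {f"node{i}": {} for i in range(5)}  # node name -> block store

def put(block_id, data):
    # Write the block to REPLICATION_FACTOR distinct nodes.
    targets = sorted(nodes)[:REPLICATION_FACTOR]
    for name in targets:
        nodes[name][block_id] = data
    return targets

def get(block_id, failed=()):
    # Read from any replica not on a failed node.
    for name, store in nodes.items():
        if name not in failed and block_id in store:
            return store[block_id]
    raise KeyError(block_id)

put("blk-1", b"payload")
del nodes["node0"]["blk-1"]  # simulate losing one replica
print(get("blk-1", failed=("node0",)))
```

With a replication factor of three, the data remains readable even after losing one or two replicas.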
How can large amounts of data be accessed with near-zero latency?
How can high-velocity data be imported reliably into a Big Data platform in real time?
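One common building block for reliable high-velocity ingestion is a bounded buffer that decouples fast producers from the platform's ingestion rate. The sketch below, with assumed names, uses Python's standard-library `queue` module; a blocking put applies backpressure instead of silently dropping data:

```python
import queue

# Bounded buffer between producers and the ingestion engine (size assumed).
buf = queue.Queue(maxsize=1000)

def ingest(event):
    # Blocks briefly when the buffer is full, rather than dropping the event.
    buf.put(event, timeout=1)

# Simulate a burst of arriving events.
for i in range(10):
    ingest({"seq": i})

# The ingestion engine drains the buffer at its own pace.
received = []
while not buf.empty():
    received.append(buf.get())
print(len(received))
```

Production systems replace the in-process queue with a durable log (e.g. a message broker) so buffered events also survive process restarts.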