Big Data Pipeline

--

Version 1.0
Created date 08-05-2016

Poly Storage*

The Poly Storage compound pattern represents a part of a Big Data platform capable of storing high-volume, high-velocity and high-variety data.

Author Bert
Alias --
Stereotypes ApplicationFunction
Details of Poly Storage*
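
A minimal sketch of the idea behind this pattern, assuming a polyglot storage layer that routes each record to a backend suited to its variety; the store classes and the store helper below are illustrative names, not part of the pattern catalogue.

# Hypothetical sketch: route incoming data to a store suited to its variety.
# Class and function names are illustrative, not part of the pattern catalogue.

class FileStore:            # e.g. a distributed file system for unstructured data
    def save(self, key, payload):
        print(f"file store   <- {key}")

class DocumentStore:        # e.g. a document database for semi-structured data
    def save(self, key, payload):
        print(f"document db  <- {key}")

class RelationalStore:      # e.g. a SQL warehouse for structured data
    def save(self, key, payload):
        print(f"relational   <- {key}")

STORES = {
    "unstructured": FileStore(),
    "semi-structured": DocumentStore(),
    "structured": RelationalStore(),
}

def store(key, payload, variety):
    """Persist the payload in the backend registered for its variety."""
    STORES[variety].save(key, payload)

if __name__ == "__main__":
    store("clickstream/2016-05-08.log", b"...", "unstructured")
    store("orders/1001", {"id": 1001, "total": 42.0}, "semi-structured")
    store("customers", [("C1", "Acme")], "structured")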

Poly Sink*

The Poly Sink compound pattern represents a part of a Big Data platform capable of egressing high-volume, high-velocity and high-variety data out to downstream enterprise systems.

Author Bert
Alias --
Stereotypes ApplicationFunction
Details of Poly Sink*
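
A minimal sketch, assuming egress is a fan-out of processed records to whichever downstream systems are registered as sinks; the sink functions and record shape are illustrative assumptions.

# Hypothetical sketch: fan processed records out to downstream enterprise systems.
# The sink callables and the record shape are illustrative assumptions.

def to_warehouse(record):
    print(f"warehouse <- {record}")

def to_crm(record):
    print(f"CRM       <- {record}")

def egress(records, sinks):
    """Push each processed record to every registered downstream sink."""
    for record in records:
        for sink in sinks:
            sink(record)

if __name__ == "__main__":
    processed = [{"customer": "C1", "score": 0.87}, {"customer": "C2", "score": 0.42}]
    egress(processed, sinks=[to_warehouse, to_crm])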

Poly Source*

The Poly Source compound pattern represents a part of a Big Data platform capable of ingesting high-volume and high-velocity data from a range of structured, unstructured and semi-structured data sources.

Author Bert
Alias --
Stereotypes ApplicationFunction
Details of Poly Source*
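
A minimal sketch, assuming ingestion normalises structured, semi-structured and unstructured inputs into one stream of records; the reader functions and sample inputs are illustrative assumptions.

# Hypothetical sketch: normalise heterogeneous sources into one record stream.
# The reader functions and the sample inputs are illustrative assumptions.
import csv, io, json

def read_structured(csv_text):
    """Structured source, e.g. CSV extracted from a relational system."""
    yield from csv.DictReader(io.StringIO(csv_text))

def read_semi_structured(json_lines):
    """Semi-structured source, e.g. newline-delimited JSON events."""
    for line in json_lines.splitlines():
        yield json.loads(line)

def read_unstructured(text):
    """Unstructured source, e.g. raw log lines wrapped in a minimal envelope."""
    for line in text.splitlines():
        yield {"raw": line}

def ingest(*streams):
    """Merge all source streams into a single iterable of records."""
    for stream in streams:
        yield from stream

if __name__ == "__main__":
    records = ingest(
        read_structured("id,name\n1,Acme\n"),
        read_semi_structured('{"event": "click", "user": 1}\n'),
        read_unstructured("GET /index.html 200\n"),
    )
    for record in records:
        print(record)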

Big Data Pipeline*

The Big Data Pipeline compound pattern generally comprises multiple stages whose objectives are to divide complex processing operations into modular steps for easier understanding and debugging, and to remain amenable to future data processing requirements.

Author Bert
Alias --
Stereotypes ApplicationFunction
Details of Big Data Pipeline*
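
A minimal sketch of the staged idea, assuming each stage is a small function applied in order so that individual steps can be understood, tested and replaced independently; the stage names and sample data are illustrative assumptions.

# Hypothetical sketch: a pipeline as an ordered list of small, named stages.
# Stage names and the sample data are illustrative assumptions.

def parse(records):
    """Split each raw comma-separated record into fields."""
    return [r.strip().split(",") for r in records]

def clean(rows):
    """Drop rows with any empty field."""
    return [row for row in rows if all(field for field in row)]

def aggregate(rows):
    """Sum the numeric values per key."""
    totals = {}
    for key, value in rows:
        totals[key] = totals.get(key, 0) + float(value)
    return totals

def run_pipeline(data, stages):
    """Apply each stage in order; each stage is easy to test and swap out."""
    for stage in stages:
        data = stage(data)
    return data

if __name__ == "__main__":
    raw = ["a,1", "b,2", "a,3", ",4"]
    print(run_pipeline(raw, stages=[parse, clean, aggregate]))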

Automated Dataset Execution

How can the execution of a series of data processing activities, from data ingress through to egress, be automated?

Author Bert
Alias --
Stereotypes ApplicationFunction
Details of Automated Dataset Execution
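
A minimal sketch of one way to answer this, assuming the activities from ingress to egress are declared as tasks with dependencies and run in dependency order; the task names are illustrative, and a real platform would typically delegate this to a workflow engine.

# Hypothetical sketch: automate a chain of activities from ingress to egress
# by declaring tasks with dependencies and executing them in dependency order.
# Task names and bodies are illustrative assumptions.

TASKS = {
    "ingest":    (set(),          lambda: print("ingest raw data")),
    "validate":  ({"ingest"},     lambda: print("validate records")),
    "transform": ({"validate"},   lambda: print("transform records")),
    "egress":    ({"transform"},  lambda: print("egress to downstream systems")),
}

def run_all(tasks):
    """Run every task once, after all of its dependencies have completed."""
    done = set()
    while len(done) < len(tasks):
        for name, (deps, action) in tasks.items():
            if name not in done and deps <= done:
                action()
                done.add(name)
    return done

if __name__ == "__main__":
    run_all(TASKS)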

Big Data Processing Environment*

The Big Data Processing Environment represents an environment capable of handling the range of distinct requirements of large-scale dataset processing.

Author Bert
Alias --
Stereotypes ApplicationFunction
Details of Big Data Processing Environment*
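
A minimal sketch, assuming that the distinct requirements include both batch and streaming execution of the same processing logic; the function names and the event source are illustrative assumptions.

# Hypothetical sketch: one processing environment exposing both batch and
# streaming execution of the same logic, to cover distinct processing needs.
# The function names and the event source are illustrative assumptions.
import time

def enrich(event):
    """The shared processing logic applied in either mode."""
    return {**event, "processed_at": time.time()}

def run_batch(events):
    """Batch mode: process a complete, bounded dataset at once."""
    return [enrich(e) for e in events]

def run_streaming(event_source):
    """Streaming mode: process events one at a time as they arrive."""
    for event in event_source:
        yield enrich(event)

if __name__ == "__main__":
    events = [{"id": 1}, {"id": 2}]
    print(run_batch(events))                 # bounded dataset
    for out in run_streaming(iter(events)):  # unbounded-style iteration
        print(out)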