Typical Data Flows

As illustrated in the Simplified Data Diagram at the beginning of this document, data flows from edge data sources (such as historians and PLCs) through Extractors to DataMosaix in the cloud. Once the data is in the cloud, it may need additional processing before it is ready to be used in an application; this is covered further in the Cognite basic training. In summary, for certain types of data, such as SQL database data, the extractor pushes the data into what are called RAW tables. You then write one or more transformations to convert the raw data into assets, events, time series, or properties in an object model. Not all extractors push data to RAW tables first; some, like the Historian extractor, push time series data directly into a time series.
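As a rough illustration of the RAW-to-object-model step, the sketch below uses the Cognite Python SDK to read staged rows from a RAW table and promote them to assets. This is not the production path (transformations are normally authored in Cognite's transformation tooling); the database name, table name, and column names here are assumptions for illustration only.

```python
# Minimal sketch: promote staged RAW rows to assets with the Cognite Python SDK.
# Assumes credentials are already configured (e.g., via environment variables) and
# that an extractor has staged rows in a hypothetical RAW table plant_staging.equipment.
from cognite.client import CogniteClient
from cognite.client.data_classes import Asset

client = CogniteClient()

# Read the rows that the extractor (e.g., the SQL DB extractor) staged in RAW.
rows = client.raw.rows.list(db_name="plant_staging", table_name="equipment", limit=None)

# "Transform" each staged row into an asset in the object model.
assets = [
    Asset(
        external_id=row.key,                    # reuse the RAW row key as a stable ID
        name=row.columns.get("name", row.key),  # hypothetical "name" column
        metadata={"source": "plant_staging.equipment"},
    )
    for row in rows
]

client.assets.create(assets)
```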
Another useful concept you will see in the Cognite training is the data set. Data sets are a way to group data for a particular purpose: grouping related data makes it easier to find, and you can use data sets to restrict access to authorized personnel when needed.
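As a hedged sketch of the same idea (the names and IDs below are made up), this shows how a data set can be created with the Cognite Python SDK and how new resources can be tagged with it so they are easy to find and to scope access to later.

```python
# Minimal sketch: group data with a data set (names/IDs are illustrative).
from cognite.client import CogniteClient
from cognite.client.data_classes import Asset, DataSet

client = CogniteClient()  # assumes credentials are configured elsewhere

# Create a data set for a particular purpose, e.g., one production line.
data_set = client.data_sets.create(DataSet(external_id="line_1", name="Line 1 Equipment"))

# Tag new resources with the data set so they can be found (and access-controlled) together.
client.assets.create(Asset(external_id="pump_42", name="Pump 42", data_set_id=data_set.id))
```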