Create FTService - PIAF Dataset

Perform the following steps to create an FTService PIAF Dataset:
  1. Navigate to the Connections tab and select the folder where the connection was added.
  2. Click the [] icon and click [Add Dataset].
  3. The Add Dataset page displays. Provide the following details:
    • Dataset Name: Provide a unique Dataset name.
      NOTE:
      If a Dataset with the provided name already exists, an error message displays. The field also enforces XSS validation.
    • Connection Name: Displays the selected Connection Name.
    • Direct Query: Use the slider to enable or disable Direct Query. Refer to Appendix A, “Supported Features - Direct Query and Non-Direct Query” for further details.
      • Direct Query is a mechanism to fetch updated data directly from the data source (that is, the connector).
      • When Direct Query is enabled, the Storyboard data is not stored in Elasticsearch.
    Add Data
  4. Verify that the existing tag (created while installing the PIAF server) is present in the PIAF server through the InfoPlatform connector, as shown below.
  5. Drag and drop the PIAF server tag and click [Next].
    Select Dataset
  6. The Customize Data page displays. Configure the start date and time, end date and time, aggregation, sampling, and quality. Click [Next].
    Customize Data Page
  7. Click [Next]. The Modify Entities page displays.
  8. Select the [Unique_ID] checkbox (by default, the date column is selected) and select the aggregation from the drop-down list.
    By default, the Source Time Zone is disabled.
    • Click the [] icon to clear all the Unique ID columns.
    Modified Entities Page
  9. Click [Next]. The Preview Dataset page displays the data.
  10. Hover over the column header and click the [] icon to sort the data, resize the column, apply aggregation, or apply filters.
    NOTE:
    Select [Group by <column_name>] on a numeric field to view the aggregation option.
    Preview Dataset
  11. Click [Next].
  12. The Configure Dataset page displays. Provide the following details:
    • Data Sync Configuration: By default, the Sync configuration is set to Full refresh (all rows will be added).
    • Data Sync Schedule:
      • Default: 30 minutes.
      • Custom: Select the time interval to sync data from the Schedule by drop-down list and select the duration.
      • Custom Cron Expression: Enables users to set up the data synchronization schedule based on a user-defined time. For more information on the Custom Cron Expression, refer to the User Guide Appendix A, “Custom Cron Expression”.
        • Specify a valid Cron expression and click [Validate]. The “Valid Cron Expression” success message displays.
          NOTE:
          For any timezone, the system syncs data from the last date available in Elasticsearch. If, due to an error, the DataSource has a future timestamp, data sync does not happen.
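          As an illustration of the general shape [Validate] checks for, a standard cron expression has five space-separated fields: minute, hour, day of month, month, and day of week. (The exact field count and syntax the product supports may differ; refer to Appendix A, “Custom Cron Expression”.) A minimal sketch of a rough client-side pre-check, where the helper name `looks_like_cron` is hypothetical and not part of the product:

          ```python
          import re

          # One standard cron expression has five fields:
          #   minute  hour  day-of-month  month  day-of-week
          # Each field may contain digits, '*', lists (','), ranges ('-'), and steps ('/').
          FIELD = re.compile(r"^[\d*/,\-]+$")

          def looks_like_cron(expr: str) -> bool:
              """Rough shape check before submitting to [Validate]; not a full parser."""
              fields = expr.split()
              return len(fields) == 5 and all(FIELD.match(f) for f in fields)

          print(looks_like_cron("*/30 * * * *"))  # every 30 minutes -> True
          print(looks_like_cron("every 30 min"))  # not a cron expression -> False
          ```

          A full validator would also range-check each field (for example, minutes 0-59); the product's [Validate] button performs the authoritative check.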
    • Default Display Count: From the drop-down list (100, 1000, 10000, or All Available), select the default number of records to display for Charts using this Dataset. By default, 100 records are selected.
      NOTE:
      Displaying up to 99999999 records on Charts has been tested.
    • Advanced Settings:
      • Shards And Replicas: Provide the required number of shards and replicas.
        NOTE:
        The default number of shards is one and the default number of replicas is zero. The number of shards and replicas is editable only before saving the Storyboard.
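        Shards and replicas correspond to standard Elasticsearch index settings. As a sketch (assuming the product writes these values as ordinary index settings; the actual index names it creates are internal), the defaults above map to a settings body like the following:

        ```python
        import json

        # Assumed illustration: the defaults from the note above expressed as a
        # standard Elasticsearch index-settings body (one shard, zero replicas).
        index_settings = {
            "settings": {
                "index": {
                    "number_of_shards": 1,     # default per the note above
                    "number_of_replicas": 0,   # default per the note above
                }
            }
        }

        # This is the kind of body Elasticsearch accepts when an index is created,
        # which is why the values are fixed once the Storyboard (and its index) is saved.
        print(json.dumps(index_settings, indent=2))
        ```

        Increasing replicas trades storage for read availability; `number_of_shards` cannot be changed on an existing Elasticsearch index, which is consistent with the note that these values are editable only before saving.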
    Configure Dataset
  13. Click [Finish]. The “Dataset Saved Successfully” message displays, and the “Create a Content for this Dataset.” dialog displays.
    NOTE:
    The Dataset and the folders created are arranged in ASCII order.
  14. If you do not want to continue with creating the Content, click [Not Now] to save the Dataset.
    (OR)
    Click [Sure] to continue creating the Content. The Add Content page displays.