Interface DatamartContext


public interface DatamartContext
Provides an API to query and load PA data from a formula context.

Example:

This example demonstrates how to run a query that reads data from a PriceAnalyzer Datamart and returns it in the form of a ResultMatrix:


     def ctx = api.getDatamartContext()
     def dm = ctx.getDatamart("txDM")
     def query = ctx.newQuery(dm)
     query.select("invoiceDateQuarter", "Q")
     query.select("SUM(amount)", "R")
     query.orderBy("Q")
     return ctx.executeQuery(query)?.data?.toResultMatrix().entries
 
  • Method Details

    • calendar

      Calendar calendar()
      Instantiates a PA Calendar utility object.
      Returns:
      Calendar
    • getDataFeed

      DatamartContext.Table getDataFeed(String name)
      Gets a table object representing a DataFeed with the given name. A reference to this table can be used when building a DatamartContext.Query on DataFeedLoad.
      Parameters:
      name - sourceName, uniqueName or label of the DF.
      Returns:
      Table representing the DF in DataContext.
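      Example (a minimal sketch; the feed name "Cost_Feed" and its fields are hypothetical):

           def ctx = api.getDatamartContext()
           def df = ctx.getDataFeed("Cost_Feed")
           def query = ctx.newQuery(df, false)   // row fetch, no rollup
           query.select("sku")
           query.select("cost")
           return ctx.executeQuery(query)?.data?.toResultMatrix()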
    • getDataSource

      DatamartContext.Table getDataSource(String name)
      Gets a table object representing a DataSource with the given name. A reference to this table can be used when building a DatamartContext.Query on that DataSource.
      Parameters:
      name - sourceName, uniqueName or label of the DS.
      Returns:
      Table representing the DS in DataContext.
    • getDatamart

      DatamartContext.Table getDatamart(String name)
      Gets a table object representing a Datamart with the given name. A reference to this table can be used when building a DatamartContext.Query on that Datamart.
      Parameters:
      name - sourceName, uniqueName or label of the DM.
      Returns:
      Table representing the DM in DataContext.
    • getDatamart

      DatamartContext.Table getDatamart(String name, Boolean useRefreshData)
      Gets a table object representing a Datamart with the given name, using either its refresh or published data, depending on the value of the useRefreshData argument. This method is intended to be used in jobs that enrich/transform DM data, requiring access to data that has been loaded (through the DM's Refresh DL), but has not yet been published (by the DM's Publish DL). A reference to this table can be used when building a DatamartContext.Query on that Datamart.
      Parameters:
      name - sourceName, uniqueName or label of the DM.
      useRefreshData - If true, the query reads the DM's refresh (loaded but not yet published) data; if false, its published data.
      Returns:
      Table representing the DM in DataContext.
    • getModel

      DatamartContext.Table getModel(String name)
      Gets a table object representing a Model with the given name. A reference to this table can be used when building a DatamartContext.Query on that Model.
      Parameters:
      name - sourceName, uniqueName or label of the Model.
      Returns:
      Table representing the Model in DataContext.
    • getFieldCollection

      DatamartContext.Table getFieldCollection(String sourceName)
      Gets a table object representing the FC with the given source name. A source name consists of two parts, separated with a '.':

      • 'DMF', 'DMDS', 'DM' or 'DMSIM', i.e. the typeCode of DataFeed, DataSource, Datamart and Sim-Datamart resp.
      • the FC's uniqueName

      A reference to this table can then be used when building a DatamartContext.Query on that FC.
      Parameters:
      sourceName - The sourceName of the FC.
      Returns:
      Table representing the FC in the DataContext.
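      Example (a sketch; assumes a DataSource whose uniqueName is "Customer", so its FC source name is "DMDS.Customer"):

           def ctx = api.getDatamartContext()
           def fc = ctx.getFieldCollection("DMDS.Customer")
           def query = ctx.newQuery(fc, false)
           query.selectAll()
           return ctx.executeQuery(query)?.data?.toResultMatrix()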
    • getRollup

      DatamartContext.Table getRollup(String label)
      Gets a table object representing the FieldCollection that backs a rollup with the given label. If more than one such rollup exists, the first one found is returned, in no particular or predefined order.
      Parameters:
      label - Label of the Rollup.
      Returns:
      Table representing the FC form of the rollup in DataContext.
    • newQuery

      DatamartContext.Query newQuery(DatamartContext.Table table)
      Builds a new query on the given table.

      Note: This method defaults rollup to true, i.e., it is equivalent to newQuery(table, true) and creates a GROUP BY query. Every Dimension field passed to query.select() is grouped on.
      Every Measure field passed to query.select() must be aggregated - "field" defaults to SUM(field); otherwise be explicit, as in "AVG(field)" or "SUM(field)/SUM(quantity)".
      When selecting fields other than Dimensions or Measures, some form of aggregation is needed (e.g., COUNT()). query.selectAll() ignores such fields.
      To fetch individual rows, use the newQuery(Table, boolean) method with rollup set to false - newQuery(table, false).

      Parameters:
      table - Table representing the FC to query.
      Returns:
      DatamartContext.Query builder.
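      Example illustrating the default rollup behaviour (a sketch; the Datamart and field names are hypothetical):

           def ctx = api.getDatamartContext()
           def dm = ctx.getDatamart("txDM")
           def query = ctx.newQuery(dm)              // rollup defaults to true
           query.select("ProductId")                 // Dimension: grouped on
           query.select("amount", "revenue")         // Measure: implicit SUM(amount)
           query.select("AVG(margin)", "avgMargin")  // explicit aggregation
           return ctx.executeQuery(query)?.data?.toResultMatrix()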
    • newQuery

      DatamartContext.Query newQuery(DatamartContext.Table table, boolean rollup)
      Builds a new query on the given table, allowing you to disable rollup aggregation.
      Parameters:
      table - Represents the FC to query.
      rollup - True for a rollup (GROUP BY) query, false for a line-level (row fetch) query.
      Returns:
      DatamartContext.Query builder.
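      Example of a line-level (non-rollup) query (a sketch with hypothetical field names):

           def ctx = api.getDatamartContext()
           def dm = ctx.getDatamart("txDM")
           def query = ctx.newQuery(dm, false)   // rollup disabled: individual rows
           query.select("invoiceDate")
           query.select("amount")                // no aggregation needed
           return ctx.executeQuery(query)?.data?.toResultMatrix()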
    • newQuery

      DatamartContext.Query newQuery(DatamartContext.Query otherQuery)
      Creates a new query from an existing one. This pattern can be used to build a base or template query and then instantiate several variants, for example one per DataSlice (this year's sales data vs. last year's, etc.).
      Parameters:
      otherQuery - Query to use as a basis for a new query.
      Returns:
      DatamartContext.Query builder.
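      Example (a sketch assuming the standard Filter helper and hypothetical field names):

           def ctx = api.getDatamartContext()
           def dm = ctx.getDatamart("txDM")
           def base = ctx.newQuery(dm)
           base.select("ProductId")
           base.select("SUM(amount)", "revenue")

           // two variants of the same base query, each with its own filter
           def thisYear = ctx.newQuery(base)
           thisYear.where(Filter.equal("invoiceDateYear", "2024"))
           def lastYear = ctx.newQuery(base)
           lastYear.where(Filter.equal("invoiceDateYear", "2023"))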
    • newQueriesFromQueryBuilder

      LinkedHashMap<String,DatamartContext.Query> newQueriesFromQueryBuilder(Map<String,Object> builderState)
      Gets all Query Builder source series, ordered as configured in the UI (i.e., ordered as in the provided builderState). The map's keys are the query/series aliases.

      Note: if a join query was configured, you can find out via isQueryBuilderJoin(Map) and then use newJoinFromQueryBuilder(Map) to get the join query.

      Example:

      
           def dmCtx = api.getDatamartContext()
           def param = api.inputBuilderFactory()
                      .createDmQueryBuilder("query")
                      .setLabel("Build your query")
                      .getInput()

           def qbState = api.input("query")
           def queries = dmCtx.newQueriesFromQueryBuilder(qbState)

           if (!queries.isEmpty()) {
               dmCtx.executeQuery(queries.values().first())
           }
       
      Parameters:
      builderState - properties map representing the queries configured in the Query Builder UI
      Returns:
      an ordered map of DatamartContext.Query objects, keyed by query alias.
      Since:
      11.0 Paper Plane
    • newQueryFromQueryFilterBuilder

      DatamartContext.Query newQueryFromQueryFilterBuilder(Map<String,Object> queryBuilderState, Map<String,Object> filterBuilderState)
      When combining a Query Builder and a Query Filter this method returns the selected source query with the filter already applied in the DatamartContext.Query.where(Filter...) clause.

      Note: if a join query was selected, this will return null and you can get the join query via newJoinFromQueryFilterBuilder(Map, Map).

      Parameters:
      queryBuilderState - properties map representing the queries configured in the Query Builder UI
      filterBuilderState - properties map representing the queries configured in the Query Filter Builder UI
      Returns:
      the selected DatamartContext.Query including the configured Filter, or null if the join series (or nothing) was selected.
      Since:
      11.0 Paper Plane
    • isQueryBuilderJoin

      boolean isQueryBuilderJoin(Map<String,Object> queryBuilderState)
      Find out if a join query was configured in a Query Builder input.
      Parameters:
      queryBuilderState - properties map representing the queries configured in the Query Builder UI
      Returns:
      true if the queryBuilderState includes a join query
      Since:
      11.0 Paper Plane
    • newJoinFromQueryBuilder

      DatamartContext.SqlQuery newJoinFromQueryBuilder(Map<String,Object> builderState)
      Gets the join query defined in Query Builder source series.

      Note: The return value is a DatamartContext.SqlQuery and not a DatamartContext.Query.

      Example:

      
           def dmCtx = api.getDatamartContext()
           def param = api.inputBuilderFactory()
                      .createDmQueryBuilder("query")
                      .setLabel("Build your query")
                      .getInput()

           def qbState = api.input("query")

           if (dmCtx.isQueryBuilderJoin(qbState)) {
               def query = dmCtx.newJoinFromQueryBuilder(qbState)
               dmCtx.executeSqlQuery(query)
           }
       
      Returns:
      the join DatamartContext.SqlQuery or null if none was defined in builderState
      Since:
      11.0 Paper Plane
    • newJoinFromQueryFilterBuilder

      DatamartContext.SqlQuery newJoinFromQueryFilterBuilder(Map<String,Object> queryBuilderState, Map<String,Object> filterBuilderState)
      When combining a Query Builder and a Query Filter this method returns the join query with the filter already applied in the SQL query.

      Note: if it's not the join query that was selected, this will return null and you can try to get the selected query via newQueryFromQueryFilterBuilder(Map, Map).

      Example:

      
           def dmCtx = api.getDatamartContext()
           def queryParam = api.inputBuilderFactory()
                      .createDmQueryBuilder("query")
                      .setLabel("Build your query")
                      .getInput()
           def qbState = api.input("query")

           def filterParam = api.inputBuilderFactory()
                      .createDmQueryFilterBuilder("filter", qbState)
                      .setLabel("Build your filter")
                      .getInput()
           def filterState = api.input("filter")

           def data
           def joinSqlQuery = dmCtx.newJoinFromQueryFilterBuilder(qbState, filterState)
           if (joinSqlQuery == null) {
               def query = dmCtx.newQueryFromQueryFilterBuilder(qbState, filterState)
               data = dmCtx.executeQuery(query).data
           }
           else {
               data = dmCtx.executeSqlQuery(joinSqlQuery)
           }
       
      Parameters:
      queryBuilderState - properties map representing the queries configured in the Query Builder UI
      filterBuilderState - properties map representing the queries configured in the Query Filter Builder UI
      Returns:
      the selected join query including the configured Filter, or null if it's not the join series that was selected.
      Since:
      11.0 Paper Plane
    • getLabelsFromQueryBuilder

      Map<String,String> getLabelsFromQueryBuilder(Map<String,Object> queryBuilderState, String queryAlias)
      Convenience method to get the labels defined in a Query Builder input.
      Returns:
      a Map with alias as key and label as value.
      Since:
      11.0 Paper Plane
    • newQuery

      Deprecated.
    • newQuery

      @Deprecated DatamartContext.Query newQuery(DatamartContext.Query query1, DatamartContext.Query query2, LinkedHashMap<String,String> joinFieldsMap, boolean rollup)
      Deprecated.
    • newQuery

      @Deprecated DatamartContext.Query newQuery(DatamartContext.Query query1, DatamartContext.Query query2, LinkedHashMap<String,String> joinFieldsMap, String joinMode, boolean rollup)
      Deprecated.
      The preferred method for running a join query is now executeSqlQuery(java.lang.String, java.lang.Object...).
      Parameters:
      query1 - Sub-source1 represented by the result of query1.
      query2 - Sub-source2 represented by the result of query2.
      joinFieldsMap - Map of fields in source1 to fields in source2, representing the join condition.
      joinMode - How to combine source1 and source2 into one, to be the source to this query. Note that the source field names adhere to different naming schemes depending on the join mode:

      • 'INNER': Join semantics as in the SQL inner join. If source2 has the same named fields as source1, they are not added to the combined source.
      • 'LEFT_OUTER': As in SQL left outer join. If source2 has same named fields as source1, they are not added to the combined source.
      • 'FULL_OUTER': As in SQL full outer join. Fields in the combined source get a '_<query.alias>' postfix (defaults to '_1' for source1 and '_2' for source2).
      • 'INNER_ALL': As in SQL inner join. Fields in the combined source get a postfix as above.
      • 'LEFT_OUTER_ALL': As in SQL left outer join. Fields in the combined source get a postfix as above.

      rollup - True if this is to be a rollup query, i.e. with a Group by clause.
      Returns:
      New query (select) on the combined source.
    • executeQuery

      DatamartQueryResult executeQuery(DatamartContext.Query query) throws InterruptedException
      Executes the given DatamartContext.Query. For a rollup query, if the internal row limit, set by the 'datamart.query.internalRowLimit' Pricefx instance param, is exceeded (default = 1000000), a warning is displayed and DatamartQueryResult.isMaxRowsExceeded() will be true. The rationale for this behaviour is that a rollup (so called analytical) query result is unreliable if not all data in scope could be examined. This is different from a fetch or paging query, which can safely request one page at a time. In syntax check mode, a maximum of 200 rows is returned in all cases.

      Sample code:
      
           def ctx = api.getDatamartContext()
           def dm = ctx.getDatamart("Transaction DM")
           def query = ctx.newQuery(dm)
           query.select("CustomerId")
           query.select("MaterialID")
           query.select("SUM(Sales)", "Revenue")
           query.select("SUM(Quantity)", "Volume")
           def result = ctx.executeQuery(query)
           for (def r=0; r < result.data.getRowCount(); r++){
               def row = result.data.getRowValues(r)    // row #r as map
               api.trace("query", "row $r", row)
           }
       
      Parameters:
      query - DatamartContext.Query to execute.
      Returns:
      DatamartQueryResult providing data in the Matrix2D form and summary information in the map form.
      Throws:
      InterruptedException
    • streamQuery

      StreamResults streamQuery(DatamartContext.Query query) throws InterruptedException
      Executes the given DatamartContext.Query and returns a result that can be examined one row at a time. This is different from executeQuery, which always returns the full data set in the scope of the query. Once a row has been retrieved and the cursor has moved past it, it is no longer available to the client code. The typical usage is to consume the result row by row, processing it into some accumulating data structure.
      Note: After iterating through the data, ensure the iterator is closed by using .withCloseable {}. Using result.close() is not recommended, as it may not execute in case of an exception.

      Important note: streamQuery is not executed in the syntax check mode and returns null.

      Sample code:

      
       def ctx = api.getDatamartContext()
       def dm = ctx.getDatamart("Transaction DM")
       def query = ctx.newQuery(dm)
       query.select("CustomerId")
       query.select("MaterialID")
       query.select("SUM(Sales)", "Revenue")
       query.select("SUM(Quantity)", "Volume")
      
       ctx.streamQuery(query).withCloseable { results ->
           def r = 0
           while (results.next()) {
               def row = results.get() // current row as map
               api.trace("streamQuery", "row $r", row)
               r++
           }
       }
       
      Parameters:
      query - DatamartContext.Query to execute.
      Returns:
      StreamResults Similar to a JDBC ResultSet, but only implementing next(), get() and close().
      Throws:
      InterruptedException
    • streamSqlQuery

      StreamResults streamSqlQuery(DatamartContext.SqlQuery query) throws InterruptedException
      Executes the given DatamartContext.SqlQuery and returns a result that can be examined one row at a time. This is different from executeSqlQuery, which always returns the full data set in the scope of the query. Once a row has been retrieved and the cursor has moved past it, it is no longer available to the client code. The typical usage is to consume the result row by row, processing it into some accumulating data structure.
      Note: After iterating through the data, ensure the iterator is closed by using .withCloseable {}. Using result.close() is not recommended, as it may not execute in case of an exception.

      Important note: streamSqlQuery is not executed in the input generation mode (former syntax check mode) and returns null.

      Sample code:

      
           def ctx = api.getDatamartContext()
           def dm = ctx.getDatamart("Transaction DM")
           def query = ctx.newSqlQuery()
           query.addSource(dm, "TXN")
           query.setQuery("""
               SELECT
                   CustomerId,
                   MaterialID,
                   SUM(Sales) AS Revenue,
                   SUM(Quantity) AS Volume
               FROM TXN
               GROUP BY CustomerId, MaterialID
           """)
          ctx.streamSqlQuery(query).withCloseable { results ->
              def r = 0
              while (results.next()) {
                  def row = results.get()  // current row as map
                  api.trace("streamSqlQuery", "row $r", row)
                  r++
              }
          }
       
      Parameters:
      query - DatamartContext.SqlQuery to execute.
      Returns:
      StreamResults Similar to a JDBC ResultSet, but only implementing next(), get() and close().
      Throws:
      InterruptedException
      Since:
      11.2.0 - Paper Plane
    • executeSqlQuery

      Matrix2D executeSqlQuery(String sql, Object... sources) throws InterruptedException
      Executes an ANSI compliant SQL SELECT statement in the PA DB.
      Important note: A non-compliant statement that does not fail at the present time may well fail in the future releases.
      The DB schema that can be queried is constructed on the fly by means of view definitions. A view is defined using the usual query API. At least one view needs to be defined. The first view gets an alias 'T1', the second 'T2' etc.
      The columns of a view are named from the defining query projections' aliases. Note that in the SQL standard the identifiers need to be double-quoted to preserve the case. Therefore, unless a projection alias is all lowercase, the SQL statement will need to double quote references to a view's column names.

      Example:

      
                      def ctx = api.getDatamartContext()
                      def dm = ctx.getDatamart("TransactionsDM")
                      def ds = ctx.getDataSource("ForecastDS")
                      def t1 = ctx.newQuery(dm)
                      t1.select("ProductID", "product")
                      t1.select("SUM(InvoicePrice)", "revenue")
                      t1.select("SUM(Quantity)", "volume")
      
                def t2 = ctx.newQuery(ds, false)
                t2.select("ProductID", "product")
                t2.select("Revenue", "revenue")
                t2.select("Volume", "volume")

                def sql = """ SELECT T1.product, T1.revenue AS ActualRevenue, T2.revenue AS ForecastRevenue,
                              T1.volume AS ActualVolume, T2.volume AS ForecastVolume
                              FROM T1 LEFT OUTER JOIN T2 USING (product) """
                      return ctx.executeSqlQuery(sql, t1, t2)?.toResultMatrix()
       
      Parameters:
      sources - Views that make up the DB schema that can be queried, in the form of query definitions of type DatamartContext.Query or Strings representing 'SELECT' statements that will be added to the final SQL statement's WITH clause. The sources are assigned the 'Ti' relation alias in the order of appearance in the source Collection (T1 for the first source).
      Returns:
      Query result as a Matrix2D object.
      Throws:
      InterruptedException
    • newSqlQuery

      Instantiates a query object for building SQL statements from source queries, with clauses and parameter bindings. To be executed by executeSqlQuery(java.lang.String, java.lang.Object...).

      Example:

      
           def ctx = api.getDatamartContext()
           def dm = ctx.getDatamart("TransactionsDM")
           def ds = ctx.getDataSource("ForecastDS")
           def t1 = ctx.newQuery(dm)
                       .select("ProductID", "product")
                       .select("ProductGroup", "PG")
                       .select("SUM(InvoicePrice)", "revenue")
                       .select("SUM(Quantity)", "volume")
      
           def t2 = ctx.newQuery(ds, false)
                        .select("ProductID", "product")
                         .select("Revenue", "revenue")
                        .select("Volume", "volume")
      
           def sqlQuery = ctx.newSqlQuery()
                             .addSource(t1)
                             .addSource(t2)
      
          def with = """ SELECT T1.product, T1.revenue AS ActualRevenue, T2.revenue AS ForecastRevenue,
                                T1.volume AS ActualVolume, T2.volume AS ForecastVolume
                                 FROM T1 LEFT OUTER JOIN T2 USING (product)
                                 WHERE T1.PG = ? """
           sqlQuery.addWith(with, "PG-ABC")   // binding some product group value;  with-clause gets assigned the T3 alias
           def sql = " SELECT SUM(ActualRevenue) - SUM(ForecastRevenue) FROM T3 "
           sqlQuery.setQuery(sql)
           return ctx.executeSqlQuery(sqlQuery)?.toResultMatrix()
       
      Returns:
      New DatamartContext.SqlQuery object.
    • executeSqlQuery

      Matrix2D executeSqlQuery(DatamartContext.SqlQuery sqlQuery) throws InterruptedException
      Executes an ANSI compliant SQL SELECT statement in the PA DB.
      Parameters:
      sqlQuery - SQL statement definition.
      Returns:
      Query result as a Matrix2D object.
      Throws:
      InterruptedException
    • consumeData

      @Deprecated void consumeData(DatamartContext.Query query, Closure<?> consumer) throws Exception
      Deprecated.
      Use streamQuery(Query) instead, because of the limitations described below.
      Like streamQuery(Query query), executes the given DatamartContext.Query but consumes the result in a closure, which is faster and allows for more fluent coding. The closure is fed one query result row at a time.

      Important limitations:

      • PA queries within the closure are not allowed.
      • Modifications to persisted objects might succeed even when the logic as a whole fails
      Both are deemed less than intuitive, and hence this method is best used in cases where the consuming logic is fairly simple and speed is of the utmost importance.

      Sample code:

      
           def ctx = api.getDatamartContext()
           def dm = ctx.getDatamart("Transaction DM")
           def query = ctx.newQuery(dm)
           query.select("CustomerId")
           query.select("MaterialID")
           query.select("SUM(Sales)", "Revenue")
           query.select("SUM(Quantity)", "Volume")
           def r=0
          ctx.consumeData(query, { row ->
              api.trace("query", "row $r", row)
              r++
          })
       
      Parameters:
      query - DatamartContext.Query to execute
      consumer - A closure that accepts a row of the query result. The closure is called for each row in the result set. To abort before all rows are consumed, the closure needs to throw an exception.
      Throws:
      Exception - When the closure executes a PA query or throws an exception.
    • buildQuery

      ResultPAQuery buildQuery(DatamartContext.Query query)

      EXPERIMENTAL: Not all aspects and properties of a query are supported. For example, currently not supported are dim filters, row limit, join queries etc.

      Builds and validates a query to be rendered in the client. Technically, a DataTransferObject representing the query is created, compatible with the query format in the REST API.

      Parameters:
      query - Query to be rendered in the client.
      Returns:
      ResultPAQuery (calculation result) embedding a REST API compatible map representation of the query.
    • batchFilters

      List<Filter> batchFilters(DatamartContext.Table table, Filter filter, long batchSize) throws InterruptedException
      Generates Filters that partition the rows in the target in such a way that each batch has the number of rows as specified by the batchSize param. The intended use is in a distributed calculation, where source data is partitioned into multiple batches, which are processed in parallel. Cf. DistCalcFormulaContext.addOrUpdateCalcItem(java.lang.Object, java.lang.Object, java.lang.Object)
      Parameters:
      table - The target to define filters on; e.g. api.datamartContext.getDatamart("TransactionDM")
      filter - Optional filter to apply on the target data before divvying up the rows.
      batchSize - The number of rows to go into each batch. The last batch can obviously have fewer rows.
      Returns:
      List of Filter objects, which can be applied in a query on the target to retrieve the rows in a given batch.
      Throws:
      InterruptedException
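      Example (a sketch; the Datamart name is hypothetical, and the per-batch handling is left to the calculation logic):

           def ctx = api.getDatamartContext()
           def dm = ctx.getDatamart("TransactionDM")
           def batches = ctx.batchFilters(dm, null, 50000)
           batches.eachWithIndex { batchFilter, i ->
               // each batchFilter can be used in a per-batch query, typically
               // registered as one item of a distributed calculation
               api.trace("batchFilters", "batch $i", batchFilter)
           }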
    • profileData

      DataProfilerResult profileData(DatamartContext.Query query)
      Calculates "Min", "Max", "#", "#Nulls", "#Distinct", "Sample" for dimension projections, and "Min", "Max", "Mean", "Std", "#", "#Nulls" for numeric projections.
      Parameters:
      query - Query defining the data to profile.
      Returns:
      DataProfilerResult split up in dimensions and numeric projections results.
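      Example (a sketch with hypothetical Datamart and field names):

           def ctx = api.getDatamartContext()
           def dm = ctx.getDatamart("txDM")
           def query = ctx.newQuery(dm, false)
           query.select("ProductId")   // dimension projection
           query.select("amount")      // numeric projection
           return ctx.profileData(query)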
    • newDatamartSlice

      DatamartContext.DataSlice newDatamartSlice()
      Creates a new DatamartSlice which allows for setting filter criteria along the Time, CustomerGroup, ProductGroup or any other dimension in a Datamart.
      Returns:
      Instantiated, empty DatamartContext.DataSlice object.
    • newDatamartSlice

      Object newDatamartSlice(String dateFieldName, Object... timePeriodsAndProductAndCustomerGroups)
      Creates a new DatamartSlice which allows for setting filter criteria along the Time, CustomerGroup, ProductGroup or any other dimension in a Datamart, initialized with the name of the time dimension field and an optional set of filter criteria.
      Parameters:
      dateFieldName - Name of the time dimension field.
      timePeriodsAndProductAndCustomerGroups - TimePeriod, CustomerGroup, ProductGroup filters.
      Returns:
      Initialized DatamartContext.DataSlice object.
    • sourceSelectionEntry

      Object sourceSelectionEntry(String entryName, String... typeCode)
    • dimFilterEntry

      Object dimFilterEntry(String entryName, DatamartContext.Column column)
      DimFilter input parameter: renders a selection of all possible values for the given dimension field, in the FC which the column's table represents.
      Parameters:
      entryName - Input param name.
      column - Column from the table representing the FC, identifying the dimension field whose values to offer.
      Returns:
      Selected dim field value.
    • dimFilterEntry

      Object dimFilterEntry(String entryName, DatamartContext.Column column, String defaultValue)
      DimFilter input parameter: renders a selection of all possible values for the given dimension field, in the FC which the column's table represents.
      Parameters:
      entryName - Input param name.
      column - Column from the table representing the FC, identifying the dimension field whose values to offer.
      defaultValue - Value to use if no value has been selected yet.
      Returns:
      Selected dim field value or the default value if not yet set.
    • fieldSelectionEntry

      Object fieldSelectionEntry(String entryName, DatamartContext.Table table, String sType)
      FC field selector, optionally limited to fields of a given type.
      Parameters:
      entryName - Input param name.
      table - Table that represents the FC to select a field from.
      sType - Type of the field to allow the user to select:

      • NUMBER
      • QUANTITY
      • TEXT
      • DATE
      • MONEY
      • CURRENCY
      • UOM
      • LOB
      • DATETIME

      Returns:
      Selected FC field.
    • fieldSelectionEntry

      Object fieldSelectionEntry(String entryName, DatamartContext.Table table, String sType, Boolean multiple)
      FC field selector, optionally limited to fields of a given type.
      Parameters:
      entryName - Input param name.
      table - Table that represents the FC to select a field from.
      sType - Type of the field to allow the user to select (see the overload above for the list of types).
      multiple - Allow one field or multiple fields selection.
      Returns:
      Selected FC fields' names.
    • fieldSelectionEntry

      Object fieldSelectionEntry(String entryName, String sourceName)
    • fieldSelectionEntry

      Object fieldSelectionEntry(String entryName, String sourceName, Collection<String> sTypes, Boolean multiple)
    • newDataLoader

      Instantiates a new DatamartContext.DataLoader to load rows of data into a DMDataFeed or DMTable. A DatamartContext.DataLoader works in any logic context, as opposed to DatamartRowSet which is available only in a PA DataLoad context.

      Example:

      
           def ctx = api.getDatamartContext()
           def df = ctx.getDataFeed("Contract_Log")
           def loader = ctx.newDataLoader(df, "ApprovalDate", "ApprovalUser",...)
           try {
               loader.addRow(approvalDate, approvalUser, ...)
               loader.flush() // force to flush the already added data in the data feed
               loader.addRow(approvalDate, approvalUser, ...)
               [...]
           } finally {
               loader.close() // ensure all added rows have been flushed, ensure indexes are up-to-date and release all the resources
           }
       
      Parameters:
      table - Table representing a DMDataFeed or DMTable to load data in.
      Returns:
      DatamartContext.DataLoader instance providing an API to add (buffer) and flush (commit to the DB) rows to a feed/table.
    • newDataLoader

      DatamartContext.DataLoader newDataLoader(DatamartContext.Table table, List<String> headerFieldNames)
      Instantiates a new DatamartContext.DataLoader to load rows of data into a DMDataFeed or DMTable. A DatamartContext.DataLoader works in any logic context, as opposed to DatamartRowSet which is available only in a PA DataLoad context.

      Example:

      
           def ctx = api.getDatamartContext()
           def df = ctx.getDataFeed("Contract_Log")
           def loader = ctx.newDataLoader(df, "ApprovalDate", "ApprovalUser",...)
           try {
               loader.addRow(approvalDate, approvalUser, ...)
               loader.flush() // force to flush the already added data in the data feed
               loader.addRow(approvalDate, approvalUser, ...)
               [...]
           } finally {
               loader.close() // ensure all added rows have been flushed, ensure indexes are up-to-date and release all the resources
           }
       
      Parameters:
      table - Table representing a DMDataFeed or DMTable to load data in.
      headerFieldNames - Fields for which values will be loaded. Defaults to all (persisted) fields in the feed/table if not set.
      Returns:
      DatamartContext.DataLoader instance providing an API to add (buffer) and flush (commit to the DB) rows to a feed/table.