
Data Pipeline Controls

Sleep Time

Description: 

The Sleep Time operation pauses the pipeline for a user-defined number of seconds.

Number of Parameters : 1

 
Parameter : Sleep Time

It specifies the sleep time duration in seconds.

Below is an example where the sleep time duration is set to 60 seconds.

60
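Conceptually, the Sleep Time operation behaves like Python's built-in `time.sleep`. A minimal sketch, assuming a hypothetical helper name `sleep_operation` (not part of the product):

```python
import time

def sleep_operation(seconds):
    """Pause the pipeline for the given number of seconds,
    then return the duration for logging/verification."""
    time.sleep(seconds)
    return seconds

# With the parameter set to 60, the pipeline would pause for one minute:
# sleep_operation(60)
```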


Python Operation

Description: 

The Python operation covers cases where a user has a particular need that is not met by the predefined operations: the user can write Python code to perform the special operation their requirement demands.

Python Operation Parameters to Know When Writing Python Code

When writing Python code for this operation, it is important for users to be aware of the following parameters:

1. Responsedata = []

Description:
Always start your Python script for this operation by initializing Responsedata as an empty list. This serves as a placeholder for appending any results. By keeping Responsedata as an empty array, users can easily append their results to this variable as needed.

2. pycode_data

Description: This variable holds the data that is flowing in the Integration Bridge. If you want to make changes to the existing data using a Python script, you need to use pycode_data and write your script according to this variable.

These parameters are essential for effectively utilizing the Python operation in your scripts, ensuring that your code interacts correctly with the data flow and result handling within the Integration Bridge.

Number of Parameters : 1

 
Parameter : Pycode

Pycode example

Input Data (multiline JSON)

{
                "attributeid": 212077,
                "attributename": "item_no",
                "attributevalue": "999898",
                "attribute_groupid": 24315,
                "attribute_groupname": "Default",
                "Isvariant": "false"
            }
            {
                "attributeid": 212078,
                "attributename": "Product Name",
                "attributevalue": "Product ABC",
                "attribute_groupid": 24315,
                "attribute_groupname": "Default",
                "Isvariant": "false"
            }

Below is the pycode for this operation.

When running a script on the data flowing through the pipeline, the data is stored in the `pycode_data` variable, so use this variable to modify the data in your script.

Responsedata = []
keyname = ["attributename"]      # keys whose values become the new key names
keyvalue = ["attributevalue"]    # keys whose values become the new values
for i in keyname:
    attributename = pycode_data.pop(i, None)   # remove and capture the key name
for j in keyvalue:
    attributevalue = pycode_data.pop(j, None)  # remove and capture the value
pycode_data[attributename] = attributevalue   # add the new key/value pair


In the example above, the script modifies the existing data, so it uses the `pycode_data` variable.

Output data after applying Python Pycode

{
                "attributeid": 212077,
                "item_no": "999898",
                "attribute_groupid": 24315,
                "attribute_groupname": "Default",
                "Isvariant": "false"
            }
            {
                "attributeid": 212078,
                "Product Name": "Product ABC",
                "attribute_groupid": 24315,
                "attribute_groupname": "Default",
                "Isvariant": "false"
            }
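The transformation above can be reproduced as a standalone script, runnable outside the Bridge; each record plays the role of `pycode_data`:

```python
# Standalone reproduction of the pycode example above.
records = [
    {"attributeid": 212077, "attributename": "item_no",
     "attributevalue": "999898", "attribute_groupid": 24315,
     "attribute_groupname": "Default", "Isvariant": "false"},
    {"attributeid": 212078, "attributename": "Product Name",
     "attributevalue": "Product ABC", "attribute_groupid": 24315,
     "attribute_groupname": "Default", "Isvariant": "false"},
]

Responsedata = []
for pycode_data in records:
    name = pycode_data.pop("attributename", None)
    value = pycode_data.pop("attributevalue", None)
    if name is not None:
        pycode_data[name] = value   # replace the pair with a single key/value
    Responsedata.append(pycode_data)
```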


Filter Operation

Description:

The if/else construct is called the Filter operation in eZintegrations. It has one parameter: Conditional Statement.

  1. Source has the JSON data of the previous operation.
  2. Target is the JSON data after performing the filter operation.
  3. Conditional Statement is the parameter where we include logical operators (and, or, not), identity operators (is, is not), membership operators (in, not in), and comparison operators (==, !=, >, <, <=, >=).

Note: 'data' is fixed in the statement; the key names and values can be changed as per requirement.

List of Operators that can be used : [==, !=, <, >, <=, >=, or, and]
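A filter condition is a Python-style boolean expression evaluated against each record, with `data` as the fixed variable name. A minimal sketch of that evaluation, where `apply_filter` is a hypothetical helper and not part of the product API:

```python
def apply_filter(records, condition):
    """Keep only the records for which the condition evaluates to True.
    `condition` is a boolean expression referring to `data`."""
    return [data for data in records if eval(condition, {}, {"data": data})]

orders = [
    {"id": 5, "Order Status": "pending"},
    {"id": 7, "Order Status": "failed"},
]
pending = apply_filter(orders, "data['Order Status'] == 'pending'")
```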

Rules for Using Filter Operations 
 
1. When the Target is an API 
 
Single Filter Condition: 

If a single filter condition is used to filter data and the filtered data is sent to the API as the request body (dropping unfiltered data), the Data Aggregation operation is not required.

Multiple Filter Conditions: 
 
If multiple filter conditions are used, and all the filtered data needs to be sent to the API as the request body: 

  1. After applying the first Filter condition, use the API operation to send the filtered data. 

  2. Use the Filter End operation to close the first filter flow. 

  3. Apply the second Filter condition and send the resulting data to the API operation or Target.

2. When the Target is Datalake

General Rule: 
 
Whether using one or more filter conditions, always: 

  1. After applying the first Filter condition, use the Datalake operation to send the filtered data. 

  2. Use the Filter End operation to close the first filter flow. 

  3. Apply the second Filter condition (if a second filter is required) and send the resulting data to the Datalake operation or Target. 

Dropping Unfiltered Data:

After applying the Filter operation, send the filtered data to the Datalake using the Datalake operation or target. 

Retaining All Data:

Use the first Filter operation to filter the data, then send the filtered data to the Datalake using the Datalake operation. After the Filter End operation, apply another Filter operation using the opposite condition of the first filter and send the resulting data to the Datalake using the Datalake operation or target. This ensures both filtered and unfiltered data are sent to the Datalake.

3. When the Target is a Database

General Rule: 
 
For one or more filter conditions, always: 
   

  1. Use the Append operation to add a dummy key for aggregation. 

  2. Aggregate the data using the Data Aggregation operation. 

  3. Apply the SingleLine to Tuple operation. 

  4. Send the data to the Database. 
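The steps above can be sketched outside the Bridge as plain Python. This is illustrative only, under the assumption that aggregation groups rows by the dummy key and the tuple step prepares rows for a parameterized INSERT:

```python
filtered = [
    {"id": 1, "status": "pending"},
    {"id": 2, "status": "pending"},
]

# Step 1: append a dummy key to serve as the group-by key.
for row in filtered:
    row["dummy"] = "all"

# Step 2: Data Aggregation — group all rows under the dummy key.
aggregated = {}
for row in filtered:
    aggregated.setdefault(row["dummy"], []).append(row)

# Step 3: SingleLine to Tuple — convert each row to a tuple of values.
tuples = [(r["id"], r["status"]) for r in aggregated["all"]]

# Step 4 (not shown): send `tuples` to the Database.
```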

Dropping Unfiltered Data: 

After the filter operation, append a dummy key to serve as a group-by key in the Data Aggregation operation. This ensures only filtered data is sent to the Database. 
 
Retaining All Data:

Apply a Filter operation and send the filtered data to the Database after grouping it using Data Aggregation and converting it into a tuple. After the Filter End operation, apply another Filter using the opposite condition and send the remaining data to the Database in the same way. This ensures both filtered and unfiltered data are sent to the Database. 

When Ending a Filter Condition:

Always use the Filter End operation once data transformation and data transfer are completed.

The Filter End operation finalizes the active filter condition. After Filter End, no further filtering is applied, and the original data flow is restored.

Important:
If more than one filter condition is used within the same flow, applying Filter End after each filter block is mandatory to avoid unintended data filtering and to ensure correct data processing.

Example 1: Filtering data where Order Status is pending

data['Order Status']=='pending'

Example 2: Filtering data where the value of "id" is 5 or 6.

data['id']==5 or data['id']==6

Example 3: Filtering data where order status is not failed and placed before a specific date.

data['Order Status']!= 'failed' and data['Date']<'6/03/2022'

Example 4: When the key ipAddress does not exist in the pipeline. You can copy the snippet below directly into the Filter operation.

'ipAddress' not in data

Example 5: When the key ipAddress exists in the pipeline. You can copy the snippet below directly into the Filter operation.

'ipAddress' in data
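The key-existence conditions in Examples 4 and 5 are ordinary Python membership tests and can be checked directly against each record:

```python
records = [
    {"id": 1, "ipAddress": "10.0.0.1"},
    {"id": 2},
]

# Example 5: keep records where the key exists.
with_ip = [data for data in records if 'ipAddress' in data]

# Example 4: keep records where the key does not exist.
without_ip = [data for data in records if 'ipAddress' not in data]
```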

AI Operation

Description: 

The AI Operation helps you modify and improve your data using simple instructions written in everyday English. 

Parameters  

Number of Parameters: 1  

Parameter: Instructions 

A natural language instruction that describes how the input data should be transformed.  

Below are examples showing how the Instructions parameter is used to transform data.

AI operation behaves as follows: 

  1. Input Data: Data provided to the AI Operation 

  2. Instructions: Natural language instruction describing the required transformation 

  3. Output: JSON object with transformations applied as per the prompt.  

Example 1: Use Case for Grouping Order Data into Relevant Categories

Input Data: 

{ 
      "Row ID": 43, 
      "Order ID": "CA-2016-101343", 
      "Order Date": "17-07-2016", 
      "Ship Date": "22-07-2016", 
      "Ship Mode": "Standard Class", 
      "Customer ID": "RA-19885", 
      "Customer Name": "Ruben Ausman", 
      "SEGMENT": "Corporate", 
      "COUNTRY": "United States", 
      "CITY": "Los Angeles", 
      "STATE": "California", 
      "Postal Code": 90049, 
      "REGION": "West", 
      "Product ID": "OFF-ST-10003479", 
      "CATEGORY": "Office Supplies", 
      "Sub-Category": "Storage", 
      "Product Name": "Eldon Base for stackable storage shelf, platinum", 
      "SALES": 77.88, 
      "QUANTITY": 2, 
      "DISCOUNT": 0, 
      "PROFIT": 3.89, 
      "Order Returned": "Yes" 
} 

Instruction:  

Group the given order data in {%bizdata_dataset_response%} into customer, product, shipping, and financial sections. 

Output:  

{ 
    "customer": { 
        "Customer ID": "RA-19885", 
        "Customer Name": "Ruben Ausman", 
        "SEGMENT": "Corporate", 
        "COUNTRY": "United States", 
        "CITY": "Los Angeles", 
        "STATE": "California", 
        "Postal Code": 90049, 
        "REGION": "West" 
    }, 
    "product": { 
        "Product ID": "OFF-ST-10003479", 
        "CATEGORY": "Office Supplies", 
        "Sub-Category": "Storage", 
        "Product Name": "Eldon Base for stackable storage shelf, platinum", 
        "QUANTITY": 2 
    }, 
    "shipping": { 
        "Order ID": "CA-2016-101343", 
        "Order Date": "17-07-2016", 
        "Ship Date": "22-07-2016", 
        "Ship Mode": "Standard Class", 
        "Order Returned": "Yes" 
    }, 
    "financial": { 
        "SALES": 77.88, 
        "DISCOUNT": 0, 
        "PROFIT": 3.89 
    } 
} 

Example 2: Use Case for Generating a Full Name from First Name and Last Name

Input Data: 

{
    "id": 1,
    "email": "george.bluth@reqres.in",
    "first_name": "George",
    "last_name": "Bluth",
    "avatar": "https://reqres.in/img/faces/1-image.jpg"
}

Instruction:  

Take first name: '{%first_name%}' and last name: '{%last_name%}' and generate the full name. Return a python dict with key 'full_name' and the generated full name as its value.

Output:  

{
    "id": 1,
    "email": "george.bluth@reqres.in",
    "first_name": "George",
    "last_name": "Bluth",
    "avatar": "https://reqres.in/img/faces/1-image.jpg",
    "full_name": "George Bluth"
}

Example 3: Use Case for Generating a Primary Key from Markdown Data

Input Data: 

{
    "markdown_data": "{{markdowndata}}"
}

Instruction:  

Check this markdown data '''{%neural_field%}''' and help me in generating a unique, human-readable webpage name which will be used as a primary key, and store the value in the key 'webpage_name'. The generated name must be between 100 and 120 characters in length. Return exactly one new key called 'webpage_name' with the generated name as its value inside a python dict.

Output:  

{ "markdown_data":"{{markdowndata}}", "webpage_name" : "webpage_name" }

Example 4: Use {%data%} – Format Location String

Input Data (Key: data):

{
  "data": {
    "city": "New York",
    "country": "USA"
  }
}


{%data%} refers to the entire object under the data key.

Instruction:



From the location {%data%}, create a new key 'full_location' by combining city and country as "{city}, {country}".

Output:

{
  "full_location": "New York, USA"
}
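Conceptually, a `{%...%}` placeholder is resolved against the input record before the instruction reaches the model. The substitution logic below is an assumption for illustration only, not the product's implementation; `resolve_placeholders` is a hypothetical helper:

```python
import json

def resolve_placeholders(instruction, record):
    """Replace each {%key%} token with the record's value for that key.
    Dict values are rendered as JSON; everything else as a string."""
    for key, value in record.items():
        token = "{%" + key + "%}"
        if token in instruction:
            rendered = json.dumps(value) if isinstance(value, dict) else str(value)
            instruction = instruction.replace(token, rendered)
    return instruction

record = {"data": {"city": "New York", "country": "USA"}}
prompt = resolve_placeholders(
    "From the location {%data%}, create a new key 'full_location'.", record)
```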


Text to Operations:

Description:

Text to Operations is an enhanced version of the previous Python operation. It allows users to either provide plain English instructions or write Python code directly to perform tasks on data. When using natural language, instructions are automatically transformed into executable Python code, enabling users to focus on describing their desired outcomes without needing to write code manually. Alternatively, users can write their own Python code if preferred. The operation supports natural language input, intelligent code generation, real-time execution, and direct code input, offering flexibility based on user preference.

Text to Operations converts natural language instructions or user-written code into runnable logic, built on Python and designed to support a wide range of operations moving forward.

How Parameters Are Handled in Text to Operations:

When users provide natural language instructions, they do not need to explicitly mention parameters. Instead, they can describe the desired changes in plain English, and the system will generate Python code in the backend that utilizes the following parameters to interact with the data. If users choose to write Python code directly, they can use these parameters as needed:

  1. Responsedata = []

    Description: The generated or user-written Python script typically initializes Responsedata as an empty list. This serves as a placeholder where results from the operation are appended. For natural language instructions, the system handles Responsedata automatically. For user-written code, users can manage Responsedata to collect results as needed.

  2. pycode_data

    Description: This variable holds the data flowing through the Integration Bridge. For natural language instructions, users describe modifications to the data, and the system generates code that uses pycode_data to apply those changes. For user-written code, users can directly reference pycode_data to manipulate the data.

These parameters are managed automatically by the system for natural language instructions, ensuring seamless integration with the data flow and result handling in the Integration Bridge. For user-written code, users have the flexibility to use these parameters as required.

Number of Parameters: 1

Parameter: Instruction (Plain English or Python code)

Users can choose to either provide a plain English instruction describing the desired data transformation or write Python code directly. The system will process the input accordingly, either generating code from the instruction or executing the provided code.

Examples:

Example 1: Converting last_name to Lowercase

Prompt:

From the source data, convert the value of the last_name key to lowercase.

Input Data:

{
    "bizdata_dataset_response": {
        "data": [
            {
                "id": 1,
                "email": "george.bluth@reqres.in",
                "first_name": "George",
                "last_name": "Bluth",
                "avatar": "https://reqres.in/img/faces/1-image.jpg"
            }
        ]
    }
}

Generated Code (From Plain English Instruction):

Responsedata = []
for user in pycode_data['bizdata_dataset_response']['data']:
    user['last_name'] = user['last_name'].lower()
    Responsedata.append(user)

Output Data:

[
    {
        "id": 1,
        "email": "george.bluth@reqres.in",
        "first_name": "George",
        "last_name": "bluth",
        "avatar": "https://reqres.in/img/faces/1-image.jpg",
        "python_code": "Responsedata = []\nfor user in pycode_data['bizdata_dataset_response']['data']:\n    user['last_name'] = user['last_name'].lower()\n    Responsedata.append(user)"
    }
]


Example 2: Converting last_name to Uppercase

Prompt:

From the source data, convert the value of the last_name key to uppercase.

Input Data:

{
    "bizdata_dataset_response": {
        "data": [
            {
                "id": 1,
                "email": "george.bluth@reqres.in",
                "first_name": "George",
                "last_name": "Bluth",
                "avatar": "https://reqres.in/img/faces/1-image.jpg"
            }
        ]
    }
}

Generated Code (From Plain English Instruction):

Responsedata = []
for user in pycode_data['bizdata_dataset_response']['data']:
    user['last_name'] = user['last_name'].upper()
    Responsedata.append(user)

Output Data:

[
    {
        "id": 1,
        "email": "george.bluth@reqres.in",
        "first_name": "George",
        "last_name": "BLUTH",
        "avatar": "https://reqres.in/img/faces/1-image.jpg",
        "python_code": "Responsedata = []\nfor user in pycode_data['bizdata_dataset_response']['data']:\n    user['last_name'] = user['last_name'].upper()\n    Responsedata.append(user)"
    }
]

Example 3: Previous Example from Python Operation

Prompt:

For each item in the source, create a new key using the value of 'attributename', set its value to the value of 'attributevalue', and remove the 'attributename' and 'attributevalue' keys, replacing them with the newly created key and value.

Generated Code (From Plain English Instruction)

Responsedata = []
for item in pycode_data:
    new_item = item.copy()
    new_key = new_item.pop('attributename', None)
    new_value = new_item.pop('attributevalue', None)
    if new_key and new_value:
        new_item[new_key] = new_value
    Responsedata.append(new_item)

Input Data:

[
    {
        "attributeid": 212077,
        "attributename": "item_no",
        "attributevalue": "999898",
        "attribute_groupid": 24315,
        "attribute_groupname": "Default",
        "Isvariant": "false"
    },
    {
        "attributeid": 212078,
        "attributename": "Product Name",
        "attributevalue": "Product ABC",
        "attribute_groupid": 24315,
        "attribute_groupname": "Default",
        "Isvariant": "false"
    }
]

Output Data:

[
    {
        "attributeid": 212077,
        "item_no": "999898",
        "attribute_groupid": 24315,
        "attribute_groupname": "Default",
        "Isvariant": "false"
    },
    {
        "attributeid": 212078,
        "Product Name": "Product ABC",
        "attribute_groupid": 24315,
        "attribute_groupname": "Default",
        "Isvariant": "false"
    }
]

Forward Code is a toggle button in the UI. When enabled, the generated Python code can be carried forward to the next operation.