Once you create the data file required for the current import job, you can import it into the Capillary database by following the steps provided below.

The following are the important points to consider before starting an import.

The mentioned time periods are in:
  • IST for the APAC and APAC2 clusters
  • CST for the China cluster
  • GMT for the EU cluster




  • Duration for data import in the day time (Peak hours): 6 AM - 10 PM
  • Duration for data import in the night time (FTP hours): 10 PM - 6 AM
  • Number of records allowed in the last 24 hours by direct import: 200,000
  • Number of records allowed in the last 24 hours in the day time (Peak hours) by FTP import: 200,000
  • Number of records allowed in the last 24 hours in the night time (FTP hours) by FTP import: 1,000,000
  • Maximum size of a single file: 200 MB
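As a quick pre-flight check, a script along these lines can verify a file against the size and record limits above before you attempt an import (the limit values come from this table; the function name and file path are illustrative, not part of the Capillary tooling):

```python
import os

MAX_FILE_SIZE_MB = 200        # maximum size of a single file
MAX_RECORDS_DIRECT = 200_000  # records allowed in the last 24 hours by direct import

def check_import_limits(path, header_rows=1):
    """Return a list of limit violations for the given CSV file."""
    problems = []
    size_mb = os.path.getsize(path) / (1024 * 1024)
    if size_mb > MAX_FILE_SIZE_MB:
        problems.append(f"file is {size_mb:.1f} MB, limit is {MAX_FILE_SIZE_MB} MB")
    with open(path, encoding="utf-8") as f:
        records = sum(1 for _ in f) - header_rows  # exclude the header row(s)
    if records > MAX_RECORDS_DIRECT:
        problems.append(f"{records} records exceed the direct-import limit of {MAX_RECORDS_DIRECT:,}")
    return problems
```

An empty result means the file fits within both limits; otherwise each violation is listed.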


Step 1: Create data file (.csv)

First, create a CSV file based on the purpose of the import. Each purpose is associated with a different profile, for example, registering customers, adding transactions, issuing points, or adding inventory. Create the CSV file according to the respective profile.

  • Before uploading the file, ensure that it contains no invalid entries. To make this process easy, sort the columns by mobile number and transaction date wherever applicable.


To know more about different fields supported for each profile, see Profiles, Supported Fields, and Template Configuration.
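As an illustration, a transactions file like the one this step describes can be generated and pre-sorted with Python's csv module. The column names below are hypothetical examples, not a Capillary profile specification; use the fields defined for your profile.

```python
import csv

# Hypothetical transaction rows; the real columns depend on the chosen profile.
rows = [
    {"mobile": "919900000002", "transaction_date": "2023-04-02 10:15:00", "amount": "450"},
    {"mobile": "919900000001", "transaction_date": "2023-04-01 09:30:00", "amount": "1200"},
]

# Sort by mobile number and transaction date, as recommended before upload.
rows.sort(key=lambda r: (r["mobile"], r["transaction_date"]))

with open("transactions.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["mobile", "transaction_date", "amount"])
    writer.writeheader()
    writer.writerows(rows)
```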

Step 2: Upload the CSV file

Once you create the file, upload it via the Import Framework.

  1. On the Settings page, click Master Data Management > Data Import.
  2. Click Data Import. You will see the following page.


In the Direct Upload field, do the following:

  1. In Upload data csv, choose the CSV file that you want to import.
  2. In No. of rows to be ignored from top, set the number of rows from the top that you want to exclude from the import.
For example, if 2 is set in this field, the top two rows are not considered while uploading data to the server. Use this when the top rows contain field names, duplicate data, or blank data.
  3. In No. of rows to be ignored from bottom, set the number of rows from the bottom that you want to exclude from the import.
For example, if 15 is set, the last 15 rows are ignored while importing. Use this when the last rows of the CSV file contain duplicate, blank, or invalid entries.
  4. In File Delimiter, type the character used to separate each value in the file.
For example, a file can have comma-, tab-, or semicolon-separated values. If ',' is specified, values are split at every comma found in the file.
  5. In Enclosure, type the special character used in the file to enclose 'string' values.
  6. In Escape Character, type the special character that needs to be ignored in the file.
  7. Click Submit.
You will see the following page. Proceed with the configuration steps provided below.
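To see how these upload options interact, here is a rough local equivalent of the same settings using Python's csv module. This is only a sketch of the described behaviour, not the Import Framework's implementation; the default delimiter, enclosure, and escape character shown are examples.

```python
import csv

def parse_import_file(path, skip_top=0, skip_bottom=0,
                      delimiter=",", enclosure='"', escape_char="\\"):
    """Read a data file the way the upload options describe:
    drop rows from the top and bottom, split values on the delimiter,
    and treat the enclosure as the quote character for strings."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f, delimiter=delimiter,
                            quotechar=enclosure, escapechar=escape_char)
        rows = list(reader)
    end = len(rows) - skip_bottom if skip_bottom else len(rows)
    return rows[skip_top:end]
```

For instance, `skip_top=1` skips a header row, and `skip_bottom=1` drops a trailing junk row, mirroring the two "rows to be ignored" fields.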

Step 3: Choose the respective profile and create template

Based on the type of data, choose the profile name, and then either create a new template for the profile or use an existing one. A profile template stores the configured Template Configurations and lets you reuse them whenever required instead of creating a new template every time.

To choose from the existing templates:

  1. In Choose Profile, select the profile based on the data that you are uploading. For more information on profiles, see Profiles, Supported Fields, and Template Configuration.
  2. In Choose Template, select the template that you want to use for the current import.
  3. Click Submit.

To create a new template:

  1. Select the Create new template checkbox.
  2. In Name of the template, type a name for the new template.
  3. Click Submit.

Step 4: Configure the Profile Template

The options vary based on the profile chosen. Configure the profile template as explained in Profiles, Supported Fields, and Template Configuration and click Submit.


Step 5: Map each field with the respective column of the CSV file (Field Mapping)

Map each field to the respective column name of the CSV file and click Submit. Fields marked with * are mandatory.

  • Not Null

If you do not want to allow saving null values for a field, check this box. Entries with null values for that specific field are then ignored while importing.

  • Data Transformer & Transformer Inputs

This option is used to modify the existing data in a column automatically by making use of the functions and other configurations.

There are five types of data transformers, as described below.

  • None: The field is not modified while importing data.

  • Merge: Merges the values of two or more columns into a single field. Write the merge function in the Transformer Inputs box.
Syntax: Columns=>x,y,z or Columns=>x,y,z;Separator=>[separator], where x, y, and z are column numbers.
For example, if Columns=>1,2,3 is set, the text in columns 1, 2, and 3 is merged into a single field, separating each column with a comma (,), the default separator.
You can set any delimiter for merging two or more columns using the Separator=> function. For example, if Columns=>1,2,3;Separator=>; is set in Transformer Inputs, columns 1, 2, and 3 are merged, separating the value of each column with a semicolon.

  • Default Value: Replaces null values present in the column with the value specified in Transformer Inputs.
Syntax: Value=>[Any Value]. For example, if Value=>store.server is set in Transformer Inputs, all null values identified in the column are replaced with 'store.server'.

  • Convert to Date & Time: Changes the date and time format across the column. This is applicable only to columns with date and time values. Set the required date and time format in the Transformer Inputs box.
Date parameters:
  - d: day of the month (1-31)
  - m: month (1-12)
  - Y: year in four digits (for example, 2013)
  - y: year in two digits (for example, 13)
Time parameters:
  - H: hours (24-hour format)
  - h: hours (12-hour format, 1-12)
  - i: minutes (00-59)
  - s: seconds
The date and time parameters can be used in any order. For example, a date can be set as d/m/Y, m/d/Y, or Y/m/d. However, it is recommended to use the standard date and time format of the organization.

  • Expression: Lets you apply mathematical calculations on two or more columns.
Syntax: Exp=>{column no. of A} {+ or - or * or /} {column no. of B}.
For example, if the expression is set to Exp=>{1}*{2}-10, the value of the field becomes (value of column 1) * (value of column 2) - 10.
You can use this function only on columns with integer values, not on string data types.
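To make the transformer semantics concrete, here is a rough Python sketch of the Merge, Default Value, and Expression behaviours described above. The function names are illustrative only; this is not Capillary's implementation.

```python
def merge(row, columns, separator=","):
    """Merge 1-based column numbers into one field, like Columns=>1,2,3;Separator=>;"""
    return separator.join(row[c - 1] for c in columns)

def default_value(value, default):
    """Replace a null/empty value with a default, like Value=>store.server."""
    return value if value not in (None, "") else default

def expression(row, a, op, b):
    """Apply simple arithmetic on two integer columns, like Exp=>{1}*{2}."""
    ops = {"+": lambda x, y: x + y, "-": lambda x, y: x - y,
           "*": lambda x, y: x * y, "/": lambda x, y: x / y}
    return ops[op](int(row[a - 1]), int(row[b - 1]))

row = ["100", "5", "store1"]
merged = merge(row, [1, 2, 3], ";")         # "100;5;store1"
filled = default_value("", "store.server")  # "store.server"
calc = expression(row, 1, "*", 2) - 10      # (100 * 5) - 10 = 490
```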


Note:
  • If the date format in the file is already in the form YYYY-MM-DD hh:ii:ss, date formatting is not required.
  • Select Not Null only for the fields that are not supposed to be empty. If any null values are identified for those particular fields, an error report is generated with the entire list of null values.
  • The options for each configuration vary based on the org and the profile selected.
  • Apart from the standard fields, you can also import custom fields and map them to the respective columns of the CSV file. However, no validation is performed on custom fields.


Step 6: Import data to the temporary database

Once you submit the configurations and field mappings, you will see a preview of the import job. Any errors identified in the import file are highlighted in red.

To import data to the temporary database, scroll down to the Import to Temporary DB field and click Submit.

Note:
  • The import process takes time depending on the size of the data.
  • To cancel an import job at any point, click Cancel File Import.


Step 7: Import data to the main database

Once the data is imported to the temporary database, you will see an option to import it to the main database.

  1. Select the Import to the Database checkbox.
  2. Click Submit.


Note:
  • The Preview File Details table shows the file name, number of entries, file properties, profile configured, and template assigned.
  • The Error Summary table shows the error type and error count.
  • The Auto Correction Summary table shows the corrections performed and their count.
  • The Download Reports table lets you download various reports such as auto-corrected data, error data, temp table dump, and new customers data. Click the respective Download Report link to download a report.
  • To cancel the data import at any point, check the respective option in the Cancel File Import field and click Submit.