IGinX

IGinX Installation and Use Manual (One-Click Start)

[TOC]

IGinX is an open source polystore system. A polystore system provides an integrated data management service over a set of one or more potentially heterogeneous database/storage engines, serving heterogeneous workloads.

Currently, IGinX directly supports big data services over the relational database PostgreSQL, the time series databases InfluxDB/IoTDB/TimescaleDB/OpenTSDB, and Parquet data files.

Installation

Java Installation

Since ZooKeeper, IGinX, and IoTDB are all developed in Java, Java needs to be installed first. If a JDK >= 1.8 runtime environment is already installed locally, you can skip this step entirely.

  1. First, visit the official Java website to download the JDK package for your current system.

  2. Installation

$ cd ~/Downloads
$ tar -zxf jdk-8u181-linux-x64.tar.gz # unzip files
$ mkdir /opt/jdk
$ mv jdk1.8.0_181 /opt/jdk/
  3. Set the path

Edit the ~/.bashrc file and add the following two lines at the end of the file:

export JAVA_HOME=/opt/jdk/jdk1.8.0_181
export PATH=$PATH:$JAVA_HOME/bin

Reload the updated configuration into the current shell:

$ source ~/.bashrc
  4. Use java -version to check whether the JDK was installed successfully.
$ java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

If output like the above is displayed, the installation was successful.

IGinX Installation

IGinX is the main component of the system; its one-click deployment package can be downloaded and started as follows:

$ cd ~
$ wget https://github.com/IGinX-THU/IGinX/releases/download/v0.7.0/IGinX-FastDeploy-0.7.0.tar.gz
$ tar -xzvf IGinX-FastDeploy-0.7.0.tar.gz

Launch

$ cd ~
$ cd IGinX-FastDeploy-0.7.0
$ chmod +x ./runIginxOn1Host.sh
$ ./runIginxOn1Host.sh

The following output indicates that IGinX was installed and started successfully:

ZooKeeper is started!
IoTDB is started!
IGinX is started!
=========================================
You can now test IGinX. Have fun!~
=========================================

If you encounter a file-not-found error on Windows, try modifying the file ./runIginxOn1Host.bat as follows:

#./runIginxOn1Host.bat
# line 2
start "zookeeper" /d "include/apache-zookeeper-3.7.0-bin/" bin\zkServer.cmd
# ↓ change to
start "zookeeper" /d "include\apache-zookeeper-3.7.0-bin\" bin\zkServer.cmd

# line 5
start "iotdb" /d "include/apache-iotdb-0.12.6/sbin" start-server.bat
# ↓ change to
start "iotdb" /d "include\apache-iotdb-0.12.6\sbin" start-server.bat

# line 9
start "iginx" /d "./sbin" start_iginx.bat
# ↓ change to
start "iginx" /d ".\sbin" start_iginx.bat

Using IGinX

RESTful Interface

After startup is complete, you can use the RESTful interface to write data to and query data from IGinX.

Create the file insert.json and add the following to it:

[
  {
    "name": "archive_file_tracked",
    "datapoints": [
        [1359788400000, 123.3],
        [1359788300000, 13.2],
        [1359788410000, 23.1 ]
    ],
    "tags": {
        "host": "server1",
        "data_center": "DC1"
    }
  },
  {
      "name": "archive_file_search",
      "timestamp": 1359786400000,
      "value": 321,
      "tags": {
          "host": "server2"
      }
  }
]

Use the following command to insert data into the database:

$ curl -XPOST -H'Content-Type: application/json' -d @insert.json http://127.0.0.1:6666/api/v1/datapoints
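
The same request can also be issued programmatically. Below is a minimal, illustrative Java sketch (not part of the official examples; the class name is arbitrary) that POSTs insert.json to the same endpoint using only the standard library:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class RestInsertExample {
    public static void main(String[] args) throws Exception {
        // read the JSON payload prepared above
        byte[] body = Files.readAllBytes(Paths.get("insert.json"));
        URL url = new URL("http://127.0.0.1:6666/api/v1/datapoints");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body); // send the JSON payload
        }
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}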

After inserting data, you can also query the data just written using the RESTful interface.

Create a file query.json and write the following data to it:

{
    "start_absolute": 1,
    "end_relative": {
        "value": "5",
        "unit": "days"
    },
    "time_zone": "Asia/Kabul",
    "metrics": [
        {
            "name": "archive_file_tracked"
        },
        {
            "name": "archive_file_search"
        }
    ]
}

Use the following command to query the data:

$ curl -XPOST -H'Content-Type: application/json' -d @query.json http://127.0.0.1:6666/api/v1/datapoints/query

The command returns information about the data points just inserted:

{
    "queries": [
        {
            "sample_size": 3,
            "results": [
                {
                    "name": "archive_file_tracked",
                    "group_by": [
                        {
                            "name": "type",
                            "type": "number"
                        }
                    ],
                    "tags": {
                        "data_center": [
                            "DC1"
                        ],
                        "host": [
                            "server1"
                        ]
                    },
                    "values": [
                        [
                            1359788300000,
                            13.2
                        ],
                        [
                            1359788400000,
                            123.3
                        ],
                        [
                            1359788410000,
                            23.1
                        ]
                    ]
                }
            ]
        },
        {
            "sample_size": 1,
            "results": [
                {
                    "name": "archive_file_search",
                    "group_by": [
                        {
                            "name": "type",
                            "type": "number"
                        }
                    ],
                    "tags": {
                        "host": [
                            "server2"
                        ]
                    },
                    "values": [
                        [
                            1359786400000,
                            321.0
                        ]
                    ]
                }
            ]
        }
    ]
}

If you see the information above returned, you have successfully used the RESTful interface to write and query data in IGinX.

For more interfaces, please refer to the official IGinX manual.

In addition to the RESTful interface, IGinX also provides an RPC data access interface, which is likewise described in the official IGinX manual. IGinX also ships official examples showing the most common usage of the RPC interface. Below is a short tutorial on how to use it.

RPC Interface

Since the IGinX jars have not been released to the Maven central repository, you need to install them into your local Maven repository manually in order to use them.

The specific installation method is as follows:

# Download the source code package of the newest IGinX release
$ wget https://github.com/IGinX-THU/IGinX/archive/refs/tags/v0.7.0.tar.gz
# Unzip the source package
$ tar -zxvf v0.7.0.tar.gz
# Go to the root directory of the project
$ cd IGinX-0.7.0
# Install to local Maven repository
$ mvn clean install -DskipTests

When using it, you only need to add the following dependency to the pom.xml of the corresponding project:

<dependency>
    <groupId>cn.edu.tsinghua</groupId>
    <artifactId>iginx-core</artifactId>
    <version>0.7.0</version>
</dependency>

Before accessing IGinX, you first need to create a session and connect. The Session constructor takes 4 parameters: the IP and port used to connect to IGinX, and the username and password for IGinX authentication. The authentication system is still under development, so for now both the username and password can simply be set to root:

Session session = new Session("127.0.0.1", 6888, "root", "root");
session.openSession();
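
Since openSession() and the calls that follow can throw exceptions, in your own code you may want to guard the session with try/finally so that the connection is always released. A minimal sketch using only the calls shown in this manual:

Session session = new Session("127.0.0.1", 6888, "root", "root");
session.openSession();
try {
    // write and query data here (see the examples below)
} finally {
    session.closeSession();
}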

You can then try to insert data into IGinX. Since IGinX creates time series automatically the first time data is written to them, there is no need to call a separate series creation interface in advance. IGinX provides row-style and column-style data writing interfaces.

The following is an example of using the column-style data writing interface; a sketch of the row-style interface follows it:

private static void insertColumnRecords(Session session) throws SessionException, ExecutionException {
        List<String> paths = new ArrayList<>();
        paths.add("sg.d1.s1");
        paths.add("sg.d2.s2");
        paths.add("sg.d3.s3");
        paths.add("sg.d4.s4");

        // 1500 consecutive timestamps starting at 0
        int size = 1500;
        long[] timestamps = new long[size];
        for (long i = 0; i < size; i++) {
            timestamps[(int) i] = i;
        }

        // one value array per series: the first two series hold LONG values, the last two hold BINARY values
        Object[] valuesList = new Object[4];
        for (long i = 0; i < 4; i++) {
            Object[] values = new Object[size];
            for (long j = 0; j < size; j++) {
                if (i < 2) {
                  values[(int) j] = i + j;
                } else {
                  values[(int) j] = RandomStringUtils.randomAlphanumeric(10).getBytes();
                }
            }
            valuesList[(int) i] = values;
        }

        // data types must match the value arrays above: two LONG series followed by two BINARY series
        List<DataType> dataTypeList = new ArrayList<>();
        for (int i = 0; i < 2; i++) {
            dataTypeList.add(DataType.LONG);
        }
        for (int i = 0; i < 2; i++) {
            dataTypeList.add(DataType.BINARY);
        }

        session.insertColumnRecords(paths, timestamps, valuesList, dataTypeList, null);
}
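
The row-style writing interface can be used in much the same way. The sketch below is illustrative and assumes that insertRowRecords mirrors the column-style signature above (paths, timestamps, per-row values, data types, tags); consult the official examples for the exact form:

private static void insertRowRecords(Session session) throws SessionException, ExecutionException {
        List<String> paths = new ArrayList<>();
        paths.add("sg.d1.s1");
        paths.add("sg.d2.s2");

        int size = 100;
        long[] timestamps = new long[size];
        // each element of valuesList is one row, holding one value per path
        Object[] valuesList = new Object[size];
        for (int i = 0; i < size; i++) {
            timestamps[i] = i;
            Object[] row = new Object[2];
            row[0] = (long) i;                                            // LONG value for sg.d1.s1
            row[1] = RandomStringUtils.randomAlphanumeric(10).getBytes(); // BINARY value for sg.d2.s2
            valuesList[i] = row;
        }

        List<DataType> dataTypeList = new ArrayList<>();
        dataTypeList.add(DataType.LONG);
        dataTypeList.add(DataType.BINARY);

        session.insertRowRecords(paths, timestamps, valuesList, dataTypeList, null);
}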

After completing the data writing, you can use the data query interface to query the data just written:

private static void queryData(Session session) throws SessionException, ExecutionException {
        List<String> paths = new ArrayList<>();
        paths.add("sg.d1.s1");
        paths.add("sg.d2.s2");
        paths.add("sg.d3.s3");
        paths.add("sg.d4.s4");

        long startTime = 100L;
        long endTime = 200L;

        SessionQueryDataSet dataSet = session.queryData(paths, startTime, endTime);
        dataSet.print();
}

You can also use the downsampling aggregation query interface to query the interval statistics of the data:

private static void downsampleQuery(Session session) throws SessionException, ExecutionException {
        List<String> paths = new ArrayList<>();
        paths.add("sg.d1.s1");
        paths.add("sg.d2.s2");

        long startTime = 100L;
        long endTime = 1101L;

        // length of each downsampling window, in timestamps
        long precision = 100L;

        // MAX
        SessionQueryDataSet dataSet = session.downsampleQuery(paths, startTime, endTime, AggregateType.MAX, precision);
        dataSet.print();

        // MIN
        dataSet = session.downsampleQuery(paths, startTime, endTime, AggregateType.MIN, precision);
        dataSet.print();

        // FIRST
        dataSet = session.downsampleQuery(paths, startTime, endTime, AggregateType.FIRST, precision);
        dataSet.print();

        // LAST
        dataSet = session.downsampleQuery(paths, startTime, endTime, AggregateType.LAST, precision);
        dataSet.print();

        // COUNT
        dataSet = session.downsampleQuery(paths, startTime, endTime, AggregateType.COUNT, precision);
        dataSet.print();

        // SUM
        dataSet = session.downsampleQuery(paths, startTime, endTime, AggregateType.SUM, precision);
        dataSet.print();

        // AVG
        dataSet = session.downsampleQuery(paths, startTime, endTime, AggregateType.AVG, precision);
        dataSet.print();

}

After the session is finished, you need to close it manually to release the connection:

session.closeSession();
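
To tie the snippets together, a minimal driver method could run the whole flow end to end. This is only a sketch and assumes the helper methods shown above are defined in the same class:

public static void main(String[] args) throws SessionException, ExecutionException {
        Session session = new Session("127.0.0.1", 6888, "root", "root");
        session.openSession();
        try {
            insertColumnRecords(session);   // write sample data
            queryData(session);             // read part of it back
            downsampleQuery(session);       // interval statistics
        } finally {
            session.closeSession();
        }
}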

For the full version of the code, please refer to: https://github.com/IGinX-THU/IGinX/blob/main/example/src/main/java/cn/edu/tsinghua/iginx/session/IoTDBSessionExample.java