This page explains how to install the EFAK dependencies, download and install EFAK, and get the service up and running on your Linux or macOS system, and it describes the details of the installation package.
2.2.2 Download and install
You can download the EFAK source code from GitHub and compile and install it yourself, or download a binary .tar.gz package.
We recommend using the official compiled binary installation package.
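For reference, a minimal download sketch is shown below; the URL and file name are placeholders only, so substitute the actual link and version from the official release page:

# placeholder URL and file name -- use the real download link for the release you want
wget https://example.com/efak-web-x.x.x-bin.tar.gz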
2.2.3 Install JDK
If a JDK environment already exists on the Linux server, this step can be skipped and you can move on to the next one. If there is no JDK, first download the JDK from the Oracle official website.
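A quick way to check whether a JDK is already present:

# prints the installed JDK version if one is available on the PATH
java -version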
2.2.4 Extract to the specified directory
Extract the binary installation package to the specified directory:
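A minimal sketch of this step; the install directory, package version, and JDK path below are assumptions, so adjust them to your environment:

# unpack the EFAK binary package (file name and version are placeholders)
cd /usr/local
tar -zxvf efak-web-x.x.x-bin.tar.gz
mv efak-web-x.x.x efak

# append the environment variables to /etc/profile (the JDK path is an assumption)
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.8
export KE_HOME=/usr/local/efak
export PATH=$PATH:$JAVA_HOME/bin:$KE_HOME/bin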
Finally, run source /etc/profile so that the configuration takes effect immediately.
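For example:

source /etc/profile
echo $KE_HOME    # should print the EFAK install directory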
2.2.5 Configure EFAK system file
Configure EFAK according to the actual situation of your own Kafka cluster, for example the ZooKeeper address, the version type of the Kafka cluster (zk for low versions, kafka for high versions), whether security authentication is enabled on the Kafka cluster, and so on.
cd ${KE_HOME}/conf
vi system-config.properties

# Multi zookeeper & kafka cluster list -- The client connection address of the Zookeeper cluster is set here
efak.zk.cluster.alias=cluster1,cluster2
cluster1.zk.list=tdn1:2181,tdn2:2181,tdn3:2181
cluster2.zk.list=xdn1:2181,xdn2:2181,xdn3:2181

# Add zookeeper acl
cluster1.zk.acl.enable=false
cluster1.zk.acl.schema=digest
cluster1.zk.acl.username=test
cluster1.zk.acl.password=test123

# Kafka broker nodes online list
cluster1.efak.broker.size=10
cluster2.efak.broker.size=20

# Zkcli limit -- Zookeeper cluster allows the number of clients to connect to
# If you enable distributed mode, you can set value to 4 or 8
kafka.zk.limit.size=8

# EFAK webui port -- WebConsole port access address
efak.webui.port=8048

######################################
# EFAK enable distributed
######################################
efak.distributed.enable=false
# master worknode set status to master, other node set status to slave
efak.cluster.mode.status=slave
# deploy efak server address
efak.worknode.master.host=localhost
efak.worknode.port=8085

# Kafka offset storage -- Offset stored in a Kafka cluster, if stored in the zookeeper, you can not use this option
cluster1.efak.offset.storage=kafka
cluster2.efak.offset.storage=kafka

# Whether the Kafka performance monitoring diagram is enabled
efak.metrics.charts=false

# EFAK keeps data for 30 days by default
efak.metrics.retain=30

# If offset is out of range occurs, enable this property -- Only suitable for kafka sql
efak.sql.fix.error=false
efak.sql.topic.records.max=5000

# Delete kafka topic token -- Set to delete the topic token, so that administrators can have the right to delete
efak.topic.token=keadmin

# Kafka sasl authenticate
cluster1.efak.sasl.enable=false
cluster1.efak.sasl.protocol=SASL_PLAINTEXT
cluster1.efak.sasl.mechanism=SCRAM-SHA-256
cluster1.efak.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
# If not set, the value can be empty
cluster1.efak.sasl.client.id=
# Add kafka cluster cgroups
cluster1.efak.sasl.cgroup.enable=false
cluster1.efak.sasl.cgroup.topics=kafka_ads01,kafka_ads02

cluster2.efak.sasl.enable=true
cluster2.efak.sasl.protocol=SASL_PLAINTEXT
cluster2.efak.sasl.mechanism=PLAIN
cluster2.efak.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";
cluster2.efak.sasl.client.id=
cluster2.efak.sasl.cgroup.enable=false
cluster2.efak.sasl.cgroup.topics=kafka_ads03,kafka_ads04

# Default use sqlite to store data
efak.driver=org.sqlite.JDBC
# It is important to note that the '/hadoop/kafka-eagle/db' path must exist.
efak.url=jdbc:sqlite:/hadoop/kafka-eagle/db/ke.db
efak.username=root
efak.password=smartloli

# (Optional) set mysql address
#efak.driver=com.mysql.jdbc.Driver
#efak.url=jdbc:mysql://127.0.0.1:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
#efak.username=root
#efak.password=smartloli
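Note the comment above: with the default SQLite storage, the database directory referenced by efak.url must exist before EFAK starts, for example:

# create the SQLite storage directory used by efak.url
mkdir -p /hadoop/kafka-eagle/db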
2.2.6 Start the EFAK server (Standalone)
In the $KE_HOME/bin directory, there is a ke.sh script file. Execute the start command as follows:
cd ${KE_HOME}/bin
chmod +x ke.sh
ke.sh start
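Once the service is up, the web console should answer on the port configured by efak.webui.port (8048 in the example above); a quick check from the EFAK host might look like this:

# reachability check of the EFAK web console (port taken from efak.webui.port)
curl -I http://localhost:8048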
After that, to restart or stop the EFAK server, execute the following commands:
ke.sh restart
ke.sh stop
As shown in the following figure:
2.2.7 Start the EFAK server (Distributed)
In the $KE_HOME/bin directory, there is a ke.sh script file. Sync the EFAK package to the worknodes and execute the cluster start command as follows:
cd ${KE_HOME}/bin

# sync efak package to other worknode node
# if $KE_HOME is /data/soft/new/efak
for i in `cat $KE_HOME/conf/works`; do scp -r $KE_HOME $i:/data/soft/new; done

# sync efak server .bash_profile environment
for i in `cat $KE_HOME/conf/works`; do scp -r ~/.bash_profile $i:~/; done

chmod +x ke.sh
ke.sh cluster start
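The cluster start loop above reads the worknode list from $KE_HOME/conf/works, one host per line; a sketch of that file with hypothetical hostnames:

# $KE_HOME/conf/works -- the hostnames below are examples only
worknode1
worknode2
worknode3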
After that, to restart or stop the EFAK server in distributed mode, execute the following commands:
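Following the pattern of the standalone commands, the distributed equivalents should be:

# cluster-mode counterparts of ke.sh restart / ke.sh stop
ke.sh cluster restart
ke.sh cluster stop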