First, Arkime has a version requirement on its Elasticsearch backend. An existing Docker image covers this, so I won't go into detail; the docker-compose configuration is attached:
Arkime 2.7 requires ES 7.4+
version: '2.2'
services:
  elasticsearch:
    image: elasticsearch:7.8.1
    container_name: elasticsearch_server
    environment:
      TAKE_FILE_OWNERSHIP: 'true'
      TZ: Asia/Shanghai
      bootstrap.memory_lock: 'true'
      discovery.type: single-node
    ulimits:
      memlock:
        hard: -1
        soft: -1
      nofile:
        hard: 262144
        soft: 262144
    volumes:
      - /etc/localtime:/etc/localtime:ro
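As written, the compose file publishes no ports and sets no JVM heap size. A sketch of the keys you would likely add to the elasticsearch service above (the heap value and port mapping are my assumptions, not from the original):

```yaml
    environment:
      ES_JAVA_OPTS: '-Xms2g -Xmx2g'   # assumed heap size, tune to the host
    ports:
      - '9200:9200'                   # expose ES so moloch-capture/viewer can reach it
```

Without a ports mapping (or `network_mode: host`), only other containers on the same Docker network can reach port 9200.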
The official site is currently a bit hard to reach, so here is an alternate download link: https://s3.amazonaws.com/files.molo.ch/builds/centos-7/moloch-2.7.1-1.x86_64.rpm
Instructions for using the prebuilt Arkime packages.
Please report any bugs or feature requests by opening an issue at https://github.com/arkime/arkime/issues
Basic Arkime Installation steps:
1) Download the matching package (CentOS 7 here, so the RPM)
2) Install the package
3) Run the Configure script; it only needs to be run once
     /data/moloch/bin/Configure
4) The Configure script can install Elasticsearch for you, or you can install it yourself
     systemctl start elasticsearch.service
5) Initialize or upgrade the Elasticsearch configuration
   a) For a first-time install, or to wipe the data and start over
      /data/moloch/db/db.pl http://ESHOST:9200 init
   b) If this is an update or upgrade
      /data/moloch/db/db.pl http://ESHOST:9200 upgrade
6) On a fresh install (or after resetting the database), add an admin user
     /data/moloch/bin/moloch_add_user.sh admin "Admin User" THEPASSWORD --admin
7) Start the services
     systemctl start molochcapture.service
     systemctl start molochviewer.service
8) The two log files are
     /data/moloch/logs/viewer.log
     /data/moloch/logs/capture.log
9) Browse to http://MOLOCHHOST:8005
     user: admin
     password: THEPASSWORD from step #6
If you want IP -> Geo/ASN to work, you need to setup a maxmind account and the geoipupdate program.
See https://arkime.com/faq#maxmind
Any configuration changes can be made to /data/moloch/etc/config.ini
See https://arkime.com/faq#moloch-is-not-working for issues
Additional information can be found at:
* https://arkime.com/faq
* https://arkime.com/settings
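Since Arkime 2.7 requires ES 7.4+, it is worth checking the cluster version before running db.pl init. A small sketch; `es_version_ok` is a hypothetical helper of mine, and in practice the version string would come from `curl -s http://ESHOST:9200`:

```shell
# Returns success if the given Elasticsearch version is >= 7.4
# (the minimum for Arkime 2.7). es_version_ok is a hypothetical helper;
# the version string normally comes from `curl -s http://ESHOST:9200`.
es_version_ok() {
    major=${1%%.*}
    rest=${1#*.}
    minor=${rest%%.*}
    [ "$major" -gt 7 ] || { [ "$major" -eq 7 ] && [ "$minor" -ge 4 ]; }
}

es_version_ok 7.8.1 && echo "7.8.1 OK"
es_version_ok 6.8.0 || echo "6.8.0 too old"
```

The 7.8.1 image in the compose file above passes this check.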
Dependencies for the RPM:
yum install -y net-tools perl-libwww-perl perl-JSON ethtool libyaml-devel perl-LWP-Protocol-https
Command to start a throwaway container environment directly:
docker run -it --name=test --network host centos:7
Command to run Python's built-in HTTP server (default port 8000):
python3 -m http.server
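The Dockerfile below fetches everything from $DOWN_URL, so the directory you serve with http.server needs a matching layout. A sketch of that layout, inferred from the curl calls; /tmp/moloch-build is an arbitrary example path:

```shell
# Recreate the layout the Dockerfile's curl calls expect under $DOWN_URL.
# /tmp/moloch-build is an arbitrary example path; the files here are empty
# placeholders standing in for the real RPM, GeoIP databases, and configs.
mkdir -p /tmp/moloch-build/etc /tmp/moloch-build/GeoIP_20200526
cd /tmp/moloch-build
touch moloch-2.7.1-1.x86_64.rpm start.sh
touch etc/config.ini etc/ipv4-address-space.csv etc/oui.txt
touch GeoIP_20200526/GeoLite2-ASN.mmdb \
      GeoIP_20200526/GeoLite2-City.mmdb \
      GeoIP_20200526/GeoLite2-Country.mmdb
# Then serve it from this directory: python3 -m http.server
```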
Image build command:
docker build -f DockerFile . --network host -t moloch:2.7.1
During a manual install the Configure script gets run, but once you have grabbed the config files it generates, the script is no longer needed. The DockerFile:
FROM centos:7 as build
ENV DOWN_URL http://127.0.0.1:8000
ENV MOLOCH_RPM moloch-2.7.1-1.x86_64.rpm
RUN echo "[INFO] Setup Moloch"
RUN yum install -y net-tools perl-libwww-perl perl-JSON ethtool libyaml-devel perl-LWP-Protocol-https
RUN mkdir /tmp/download && \
    curl -o /tmp/$MOLOCH_RPM $DOWN_URL/$MOLOCH_RPM && \
    rpm -ivh /tmp/$MOLOCH_RPM
RUN mkdir /usr/share/GeoIP && \
    curl -o /usr/share/GeoIP/GeoLite2-ASN.mmdb $DOWN_URL/GeoIP_20200526/GeoLite2-ASN.mmdb && \
    curl -o /usr/share/GeoIP/GeoLite2-City.mmdb $DOWN_URL/GeoIP_20200526/GeoLite2-City.mmdb && \
    curl -o /usr/share/GeoIP/GeoLite2-Country.mmdb $DOWN_URL/GeoIP_20200526/GeoLite2-Country.mmdb
RUN curl -o /data/moloch/etc/ipv4-address-space.csv $DOWN_URL/etc/ipv4-address-space.csv && \
    curl -o /data/moloch/etc/oui.txt $DOWN_URL/etc/oui.txt && \
    rm -f /data/moloch/etc/config.ini && \
    curl -o /data/moloch/etc/config.ini $DOWN_URL/etc/config.ini
# Strip the parser modules down to just arp/icmp/tcp/udp
RUN mv /data/moloch/parsers /data/moloch/parsers.bk && mkdir /data/moloch/parsers && \
    cp /data/moloch/parsers.bk/arp.so /data/moloch/parsers.bk/icmp.so /data/moloch/parsers.bk/tcp.so /data/moloch/parsers.bk/udp.so /data/moloch/parsers
RUN curl -o /home/start.sh $DOWN_URL/start.sh
RUN mkdir /data/moloch/logs
CMD ["/bin/bash", "/home/start.sh"]
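The start.sh the image runs is downloaded from $DOWN_URL but never shown. This is a guess at its minimal shape, written to a temp path and syntax-checked; the contents are my assumption: with no systemd inside the container, capture runs in the background and viewer in the foreground to keep PID 1 alive:

```shell
# Hypothetical minimal start.sh (the real file is fetched, not shown).
# No systemd in the container, so capture goes to the background and
# viewer stays in the foreground as the container's main process.
cat <<'EOF' > /tmp/start.sh
#!/bin/bash
/data/moloch/bin/moloch-capture -c /data/moloch/etc/config.ini \
    >> /data/moloch/logs/capture.log 2>&1 &
cd /data/moloch/viewer
exec /data/moloch/bin/node viewer.js -c /data/moloch/etc/config.ini \
    >> /data/moloch/logs/viewer.log 2>&1
EOF
bash -n /tmp/start.sh && echo "start.sh syntax OK"
```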
Appendix: the config.ini file
# Latest settings documentation: https://molo.ch/settings
#
# Moloch uses a tiered system for configuration variables. This allows Moloch
# to share one config file for many machines. The ordering of sections in this
# file doesn't matter.
#
# Order of config variables:
# 1st) [optional] The section titled with the node name is used first.
# 2nd) [optional] If a node has a nodeClass variable, the section titled with
# the nodeClass name is used next. Sessions will be tagged with
# class:<node class name> which may be useful if watching different networks.
# 3rd) The section titled "default" is used last.
[default]
# Comma separated list of elasticsearch host:port combinations. If not using an
# Elasticsearch load balancer, a different elasticsearch node in the cluster can be specified
# for each Moloch node to help spread load on high volume clusters. For user/password
# use http://user:pass@host:port
elasticsearch=http://localhost:9200
# How often to create a new elasticsearch index. hourly,hourly6,daily,weekly,monthly
# Changing the value will cause previous sessions to be unreachable
rotateIndex=daily
# Cert file to use, comment out to use http instead
# certFile=/data/moloch/etc/moloch.cert
# File with trusted roots/certs. WARNING! this replaces default roots
# Useful with self signed certs and can be set per node.
# caTrustFile=/data/moloch/etc/roots.cert
# Private key file to use, comment out to use http instead
# keyFile=/data/moloch/etc/moloch.key
# Password Hash and S2S secret - Must be in default section. Since elasticsearch
# is wide open by default, we encrypt the stored password hashes with this
# so a malicious person can't insert a working new account. It is also used
# for secure S2S communication. Comment out for no user authentication.
# Changing the value will make all previously stored passwords no longer work.
# Make this RANDOM, you never need to type in
passwordSecret = password
# Use a different password for S2S communication than passwordSecret.
# Must be in default section. Make this RANDOM, you never need to type in
#serverSecret=
# HTTP Digest Realm - Must be in default section. Changing the value
# will make all previously stored passwords no longer work
httpRealm = Moloch
# The base path for Moloch web access. Must end with a / or bad things will happen
# Default: "/"
# webBasePath = /moloch/
# Semicolon ';' separated list of interfaces to listen on for traffic
interface=em1
# The bpf filter of traffic to ignore
#bpf=not port 9200
# The yara file name
#yara=
# Host to connect to for wiseService
#wiseHost=127.0.0.1
# Log viewer access requests to a different log file
#accessLogFile = /data/moloch/logs/access.log
# Control the log format for access requests. This uses URI % encoding.
#accessLogFormat = :date :username %1b[1m:method%1b[0m %1b[33m:url%1b[0m :status :res[content-length] bytes :response-time ms
# The directory to save raw pcap files to
pcapDir = /data/moloch/raw
# The max raw pcap file size in gigabytes, with a max value of 36G.
# The disk should have room for at least 10*maxFileSizeG
maxFileSizeG = 12
# The max time in minutes between rotating pcap files. Default is 0, which means
# only rotate based on current file size and the maxFileSizeG variable
#maxFileTimeM = 60
# TCP timeout value. Moloch writes a session record after this many seconds
# of inactivity.
tcpTimeout = 600
# Moloch writes a session record after this many seconds, no matter if
# active or inactive
tcpSaveTimeout = 720
# UDP timeout value. Moloch assumes the UDP session is ended after this
# many seconds of inactivity.
udpTimeout = 30
# ICMP timeout value. Moloch assumes the ICMP session is ended after this
# many seconds of inactivity.
icmpTimeout = 10
# An approximate maximum number of active sessions Moloch/libnids will try
# and monitor
maxStreams = 1000000
# Moloch writes a session record after this many packets
maxPackets = 10000
# Delete pcap files when free space is lower than this in gigabytes OR it can be
# expressed as a percentage (ex: 5%). This does NOT delete the session records in
# the database. It is recommended this value is between 5% and 10% of the disk.
# Database deletes are done by the db.pl expire script
freeSpaceG = 5%
# The port to listen on, by default 8005
viewPort = 8005
# The host/ip to listen on, by default 0.0.0.0 which is ALL
#viewHost = localhost
# By default the viewer process is https://hostname:<viewPort> for each node.
#viewUrl = https://HOSTNAME:8005
# NOTE: A MaxMind account is now required, we will try and use the old files or new files on the system. See
# https://molo.ch/faq#maxmind
geoLite2Country = /usr/share/GeoIP/GeoLite2-Country.mmdb;/data/moloch/etc/GeoLite2-Country.mmdb
geoLite2ASN = /usr/share/GeoIP/GeoLite2-ASN.mmdb;/data/moloch/etc/GeoLite2-ASN.mmdb
# Path of the rir assignments file
# https://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.csv
rirFile = /data/moloch/etc/ipv4-address-space.csv
# Path of the OUI file from wireshark
# https://raw.githubusercontent.com/wireshark/wireshark/master/manuf
ouiFile = /data/moloch/etc/oui.txt
# User to drop privileges to. The pcapDir must be writable by this user or group below
dropUser=nobody
# Group to drop privileges to. The pcapDir must be writable by this group or user above
dropGroup=daemon
# Semicolon ';' separated list of tags; once capture sets one of these for a session,
# the remaining pcap for that session is not saved. It is likely that the initial packets
# WILL be saved for the session since tags usually aren't set until after several packets
# Each tag can optionally be followed by a :<num> which specifies how many total packets to save
#dontSaveTags=
# Header to use for determining the username to check in the database for instead of
# using http digest. Use this if apache or something else is doing the auth.
# Set viewHost to localhost or use iptables
# Might need something like this in the httpd.conf
# RewriteRule .* - [E=ENV_RU:%{REMOTE_USER}]
# RequestHeader set MOLOCH_USER %{ENV_RU}e
#userNameHeader=moloch_user
#
# Headers to use to determine if user from `userNameHeader` is
# authorized to use the system, and if so create a new user
# in the Moloch user database. This implementation expects that
# the users LDAP/AD groups (or similar) are populated into an
# HTTP header by the Apache (or similar) referenced above.
# The JSON in userAutoCreateTmpl is used to insert the new
# user into the moloch database (if not already present)
# and additional HTTP headers can be sourced from the request
# to populate various fields.
#
# The example below verifies that an HTTP header called `UserGroup`
# is present, and contains the value "MOLOCH_ACCESS". If this authorization
# check passes, the user database is inspected for the user in `userNameHeader`
# and if it is not present it is created. The system uses the
# `moloch_user` and `http_auth_mail` headers from the
# request and uses them to populate `userId` and `userName`
# fields for the new user record.
#
# Once the user record is created, this functionality
# neither updates nor deletes the data, though if the user is no longer
# reported to be in the group, access is denied regardless of the status
# in the moloch database.
#
#requiredAuthHeader="UserGroup"
#requiredAuthHeaderVal="MOLOCH_ACCESS"
#userAutoCreateTmpl={"userId": "${this.moloch_user}", "userName": "${this.http_auth_mail}", "enabled": true, "webEnabled": true, "headerAuthEnabled": true, "emailSearch": true, "createEnabled": false, "removeEnabled": false, "packetSearch": true }
# Should we parse extra smtp traffic info
parseSMTP=true
# Should we parse extra smb traffic info
parseSMB=true
# Should we parse HTTP QS Values
parseQSValue=false
# Should we calculate sha256 for bodies
supportSha256=false
# Only index HTTP request bodies less than this number of bytes */
maxReqBody=64
# Only store request bodies that are UTF-8?
reqBodyOnlyUtf8 = true
# Semicolon ';' separated list of SMTP Headers that have ips, need to have the terminating colon ':'
smtpIpHeaders=X-Originating-IP:;X-Barracuda-Apparent-Source-IP:
# Semicolon ';' separated list of directories to load parsers from
parsersDir=/data/moloch/parsers
# Semicolon ';' separated list of directories to load plugins from
pluginsDir=/data/moloch/plugins
# Semicolon ';' separated list of plugins to load and the order to load in
# plugins=tagger.so; netflow.so
# Plugins to load as root, usually just readers
#rootPlugins=reader-pfring; reader-daq.so
# Semicolon ';' separated list of viewer plugins to load and the order to load in
# viewerPlugins=wise.js
# NetFlowPlugin
# Input device id, 0 by default
#netflowSNMPInput=1
# Output device id, 0 by default
#netflowSNMPOutput=2
# Netflow version 1,5,7 supported, 7 by default
#netflowVersion=1
# Semicolon ';' separated list of netflow destinations
#netflowDestinations=localhost:9993
# Specify the max number of indices we calculate spidata for.
# ES will blow up if we allow the spiData to search too many indices.
spiDataMaxIndices=4
# Uncomment the following to allow direct uploads. This is experimental
#uploadCommand=/data/moloch/bin/moloch-capture --copy -n {NODE} -r {TMPFILE} -c {CONFIG} {TAGS}
# Title Template
# _cluster_ = ES cluster name
# _userId_ = logged in User Id
# _userName_ = logged in User Name
# _page_ = internal page name
# _expression_ = current search expression if set, otherwise blank
# _-expression_ = " - " + current search expression if set, otherwise blank, prior spaces removed
# _view_ = current view if set, otherwise blank
# _-view_ = " - " + current view if set, otherwise blank, prior spaces removed
#titleTemplate=_cluster_ - _page_ _-view_ _-expression_
# Number of threads processing packets
packetThreads=2
# HSTS Header
# If set to true, adds a Strict-Transport-Security response header with a max age of a year
# and includes subdomains (the app must be served over https)
#hstsHeader=true
# Business Hours
# If set, displays a colored background on the sessions timeline graph during business hours
# Values are set in hours from midnight UTC (default is off)
#businessDayStart=9
#businessDayEnd=17
# Business Days
# Comma separated list of days
# If set, displays the business hours on only the days provided here
# Business hours must be set for these to be of use
# Values are the days of the week as numbers, the week starts at Sunday = 0 and ends on Saturday = 6
# (default is Monday - Friday 1,2,3,4,5)
#businessDays=1,2,3,4,5
# ADVANCED - Semicolon ';' separated list of files to load for config. Files are loaded
# in order and can replace values set in this file or previous files.
#includes=
# ADVANCED - How is pcap written to disk
# simple = use O_DIRECT if available, writes in pcapWriteSize chunks,
# a file per packet thread.
# simple-nodirect = don't use O_DIRECT. Required for zfs and others
pcapWriteMethod=simple
# ADVANCED - Buffer size when writing pcap files. Should be a multiple of the raid 5 or xfs
# stripe size. Defaults to 256k
pcapWriteSize = 262143
# ADVANCED - Number of bytes to bulk index at a time
dbBulkSize = 300000
# ADVANCED - Compress requests to ES, reduces ES bandwidth by ~80% at the cost
# of increased CPU. MUST have "http.compression: true" in elasticsearch.yml file
compressES = false
# ADVANCED - Max number of connections to elastic search
maxESConns = 30
# ADVANCED - Max number of es requests outstanding in q
maxESRequests = 500
# ADVANCED - Number of packets to ask libnids/libpcap to read per poll/spin
# Increasing may hurt stats and ES performance
# Decreasing may cause more dropped packets
packetsPerPoll = 100000
# ADVANCED - Moloch will try to compensate for SYN packet drops by swapping
# the source and destination addresses when a SYN-ACK packet was captured first.
# Probably useful to set to false when running Moloch in the wild, due to SYN floods.
antiSynDrop = true
# DEBUG - Write to stdout info every X packets.
# Set to -1 to never log status
logEveryXPackets = 100000
# DEBUG - Write to stdout unknown protocols
logUnknownProtocols = false
# DEBUG - Write to stdout elastic search requests
logESRequests = true
# DEBUG - Write to stdout file creation information
logFileCreation = true
### High Performance settings
# https://molo.ch/settings#high-performance-settings
# magicMode=basic
# pcapReadMethod=tpacketv3
# tpacketv3NumThreads=2
# pcapWriteMethod=simple
# pcapWriteSize = 2560000
# packetThreads=5
# maxPacketsInQueue = 200000
### Low Bandwidth settings
# packetThreads=1
# pcapWriteSize = 65536
##############################################################################
# Classes of nodes
# Can override most default values, and creates a tag called node:<classname>
[class1]
freeSpaceG = 10%
##############################################################################
# Nodes
# Usually just use the hostname before the first dot as the node name
# Can override most default values
[node1]
nodeClass = class1
# Might use a different elasticsearch node
elasticsearch=elasticsearchhost1
# Uncomment if this node should process the cron queries and packet search jobs, only ONE node should process cron queries and packet search jobs
# cronQueries = true
[node2]
nodeClass = class2
# Might use a different elasticsearch node
elasticsearch=elasticsearchhost2
# Uses a different interface
interface = eth4
##############################################################################
# override-ips is a special section that overrides the MaxMind databases for
# the fields set, but fields not set will still use MaxMind (example if you set
# tags but not country it will use MaxMind for the country)
# Spaces and capitalization are very important.
# IP Can be a single IP or a CIDR
# Up to 10 tags can be added
#
# ip=tag:TAGNAME1;tag:TAGNAME2;country:3LetterUpperCaseCountry;asn:ASN STRING
#[override-ips]
#10.1.0.0/16=tag:ny-office;country:USA;asn:AS0000 This is an ASN
##############################################################################
# It is possible to define in the config file extra http/email headers
# to index. They are accessed using the expression http.<fieldname> and
# email.<fieldname> with optional .cnt expressions
#
# Possible config attributes for all headers
# type:<string> (string|integer|ip) = data type (default string)
# count:<boolean> = index count of items (default false)
# unique:<boolean> = only record unique items (default true)
# headers-http-request is used to configure request headers to index
[headers-http-request]
referer=type:string;count:true;unique:true
authorization=type:string;count:true
content-type=type:string;count:true
origin=type:string
# headers-http-response is used to configure http response headers to index
[headers-http-response]
location=type:string
server=type:string
content-type=type:string;count:true
# headers-email is used to configure email headers to index
[headers-email]
x-priority=type:integer
authorization=type:string
##############################################################################
# If you have multiple clusters and you want the ability to send sessions
# from one cluster to another either manually or with the cron feature fill out
# this section
#[moloch-clusters]
#forensics=url:https://viewer1.host.domain:8005;passwordSecret:password4moloch;name:Forensics Cluster
#shortname2=url:http://viewer2.host.domain:8123;passwordSecret:password4moloch;name:Testing Cluster
# WARNING: This is an ini file with sections, most likely you don't want to put a setting here.
# New settings usually go near the top in the [default] section, or in [nodename] sections.
Appendix: the Configure script
#!/bin/bash
# Simple capital C Configure script for rpm/deb, like the old days
if [ "$1" == "--help" ]; then
echo "Configure (--wise|--parliament|) = Only 1 option can be used"
echo "--wise = install and start wise"
echo "--parliament = install and start parliament"
echo " = install moloch capture and viewer"
echo "--help = this help"
exit 0
fi
if [ "$(id -u)" != "0" ]; then
echo "This script must be run as root"
exit 1
fi
MOLOCH_NAME=BUILD_MOLOCH_NAME
if [ "$MOLOCH_NAME" == "BUILD_MOLOCH_NAME" ]; then
MOLOCH_NAME=moloch
fi
MOLOCH_INSTALL_DIR=/data/moloch
if [ "$MOLOCH_INSTALL_DIR" == "BUILD_MOLOCH_""INSTALL_DIR" ]; then
MOLOCH_INSTALL_DIR=/data/$MOLOCH_NAME
fi
if [ "$1" == "--wise" ]; then
if [ ! -f "$MOLOCH_INSTALL_DIR/etc/wise.ini" ]; then
sed -e "s,MOLOCH_ELASTICSEARCH,${MOLOCH_ELASTICSEARCH},g" -e "s,MOLOCH_INSTALL_DIR,${MOLOCH_INSTALL_DIR},g" < $MOLOCH_INSTALL_DIR/etc/wise.ini.sample > $MOLOCH_INSTALL_DIR/etc/wise.ini
else
echo "Not overwriting $MOLOCH_INSTALL_DIR/etc/wise.ini, delete and run again if update required (usually not), or edit by hand"
sleep 1
fi
if [ -d "/etc/systemd" ] && [ -x "/bin/systemctl" ]; then
echo "Installing systemd start files, use systemctl"
sed -e "s,MOLOCH_INSTALL_DIR,${MOLOCH_INSTALL_DIR},g" < $MOLOCH_INSTALL_DIR/etc/molochwise.systemd.service > /etc/systemd/system/molochwise.service
systemctl daemon-reload
systemctl enable molochwise
systemctl start molochwise
elif [ -d "/etc/init" ]; then
echo "Installing upstart start files, use start"
sed -e "s,MOLOCH_INSTALL_DIR,${MOLOCH_INSTALL_DIR},g" < $MOLOCH_INSTALL_DIR/etc/molochwise.upstart.conf > /etc/init/molochwise.conf
start molochwise
fi
exit 0;
fi
if [ "$1" == "--parliament" ]; then
if [ -d "/etc/systemd" ] && [ -x "/bin/systemctl" ]; then
echo "Installing systemd start files, use systemctl"
sed -e "s,MOLOCH_INSTALL_DIR,${MOLOCH_INSTALL_DIR},g" < $MOLOCH_INSTALL_DIR/etc/molochparliament.systemd.service > /etc/systemd/system/molochparliament.service
systemctl daemon-reload
systemctl enable molochparliament
systemctl start molochparliament
elif [ -d "/etc/init" ]; then
echo "Installing upstart start files, use 'start molochparliament'"
sed -e "s,MOLOCH_INSTALL_DIR,${MOLOCH_INSTALL_DIR},g" < $MOLOCH_INSTALL_DIR/etc/molochparliament.upstart.conf > /etc/init/molochparliament.conf
start molochparliament
fi
exit 0;
fi
################################################################################
### Ask config questions
if [ -z "$MOLOCH_INTERFACE" ]; then
echo -n "Found interfaces: "
if [ ! -f /sbin/ifconfig ]; then
ip -o link | cut -f2 -d: | tr '\n' ' ' | sed 's/ \+/ /ig' | sed 's/ //1' | sed 's/[[:blank:]]*$//' | tr ' ' ';' | sed 's/$/\n/'
else
/sbin/ifconfig | grep "^[a-z]" | cut -d: -f1 | cut -d" " -f1 | paste -s -d";"
fi
echo -n "Semicolon ';' separated list of interfaces to monitor [eth1] "
read -r MOLOCH_INTERFACE
fi
if [ -z "$MOLOCH_INTERFACE" ]; then MOLOCH_INTERFACE="eth1"; fi
MOLOCH_LOCALELASTICSEARCH=not-set
until [ "$MOLOCH_LOCALELASTICSEARCH" == "yes" ] || [ "$MOLOCH_LOCALELASTICSEARCH" == "no" ] || [ "$MOLOCH_LOCALELASTICSEARCH" == "" ]; do
echo -n "Install Elasticsearch server locally for demo, must have at least 3G of memory, NOT recommended for production use (yes or no) [no] "
read -r MOLOCH_LOCALELASTICSEARCH
done
if [ "$MOLOCH_LOCALELASTICSEARCH" == "yes" ]; then
MOLOCH_ELASTICSEARCH="http://localhost:9200"
which java
JAVA_VAL=$?
if [ $JAVA_VAL -ne 0 ]; then
echo "java command not found, make sure java is installed and in the path and run again"
fi
else
if [ -z "$MOLOCH_ELASTICSEARCH" ]; then
echo -n "Elasticsearch server URL [http://localhost:9200] "
read -r MOLOCH_ELASTICSEARCH
fi
if [ -z "$MOLOCH_ELASTICSEARCH" ]; then MOLOCH_ELASTICSEARCH="http://localhost:9200"; fi
fi
while [ -z "$MOLOCH_PASSWORD" ]; do
echo -n "Password to encrypt S2S and other things [no-default] "
read -r MOLOCH_PASSWORD
done
if [ -z "$MOLOCH_PASSWORD" ]; then echo "Must provide a password"; exit; fi
################################################################################
echo "Moloch - Creating configuration files"
if [ ! -f "$MOLOCH_INSTALL_DIR/etc/config.ini" ]; then
sed -e "s/MOLOCH_INTERFACE/${MOLOCH_INTERFACE}/g" -e "s,MOLOCH_ELASTICSEARCH,${MOLOCH_ELASTICSEARCH},g" -e "s/MOLOCH_PASSWORD/${MOLOCH_PASSWORD}/g" -e "s,MOLOCH_INSTALL_DIR,${MOLOCH_INSTALL_DIR},g" < $MOLOCH_INSTALL_DIR/etc/config.ini.sample > $MOLOCH_INSTALL_DIR/etc/config.ini
else
echo "Not overwriting $MOLOCH_INSTALL_DIR/etc/config.ini, delete and run again if update required (usually not), or edit by hand"
sleep 2
fi
if [ -d "/etc/systemd" ] && [ -x "/bin/systemctl" ]; then
echo "Installing systemd start files, use systemctl"
sed -e "s/MOLOCH_INTERFACE/${MOLOCH_INTERFACE}/g" -e "s,MOLOCH_ELASTICSEARCH,${MOLOCH_ELASTICSEARCH},g" -e "s/MOLOCH_PASSWORD/${MOLOCH_PASSWORD}/g" -e "s,MOLOCH_INSTALL_DIR,${MOLOCH_INSTALL_DIR},g" < $MOLOCH_INSTALL_DIR/etc/molochcapture.systemd.service > /etc/systemd/system/molochcapture.service
sed -e "s/MOLOCH_INTERFACE/${MOLOCH_INTERFACE}/g" -e "s,MOLOCH_ELASTICSEARCH,${MOLOCH_ELASTICSEARCH},g" -e "s/MOLOCH_PASSWORD/${MOLOCH_PASSWORD}/g" -e "s,MOLOCH_INSTALL_DIR,${MOLOCH_INSTALL_DIR},g" < $MOLOCH_INSTALL_DIR/etc/molochviewer.systemd.service > /etc/systemd/system/molochviewer.service
elif [ -d "/etc/init" ]; then
echo "Installing upstart start files, use start"
sed -e "s/MOLOCH_INTERFACE/${MOLOCH_INTERFACE}/g" -e "s,MOLOCH_ELASTICSEARCH,${MOLOCH_ELASTICSEARCH},g" -e "s/MOLOCH_PASSWORD/${MOLOCH_PASSWORD}/g" -e "s,MOLOCH_INSTALL_DIR,${MOLOCH_INSTALL_DIR},g" < $MOLOCH_INSTALL_DIR/etc/molochcapture.upstart.conf > /etc/init/molochcapture.conf
sed -e "s/MOLOCH_INTERFACE/${MOLOCH_INTERFACE}/g" -e "s,MOLOCH_ELASTICSEARCH,${MOLOCH_ELASTICSEARCH},g" -e "s/MOLOCH_PASSWORD/${MOLOCH_PASSWORD}/g" -e "s,MOLOCH_INSTALL_DIR,${MOLOCH_INSTALL_DIR},g" < $MOLOCH_INSTALL_DIR/etc/molochviewer.upstart.conf > /etc/init/molochviewer.conf
else
echo "No startup scripts created for capture and viewer"
fi
################################################################################
# re-create these directories after installation so they are not part of the package manifest
CREATEDIRS="logs raw"
for CREATEDIR in $CREATEDIRS; do
if [ ! -e $MOLOCH_INSTALL_DIR/$CREATEDIR ]; then
mkdir -m 0700 -p $MOLOCH_INSTALL_DIR/$CREATEDIR && \
chown nobody $MOLOCH_INSTALL_DIR/$CREATEDIR
fi
done
################################################################################
ARCHRPM=$(uname -m)
case $ARCHRPM in
"x86_64")
ARCHDEB="amd64"
;;
"aarch64")
ARCHDEB="arm64"
;;
esac
if [ "$MOLOCH_LOCALELASTICSEARCH" == "yes" ]; then
echo "Moloch - Downloading and installing demo OSS version of Elasticsearch"
ES_VERSION=7.7.1
if [ -f "/etc/redhat-release" ] || [ -f "/etc/system-release" ]; then
yum install https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-${ES_VERSION}-${ARCHRPM}.rpm
else
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-${ES_VERSION}-${ARCHDEB}.deb
dpkg -i elasticsearch-oss-${ES_VERSION}-$ARCHDEB.deb
/bin/rm -f elasticsearch-oss-${ES_VERSION}-$ARCHDEB.deb
fi
fi
################################################################################
if [ -d "/etc/logrotate.d" ] && [ ! -f "/etc/logrotate.d/$MOLOCH_NAME" ]; then
echo "Moloch - Installing /etc/logrotate.d/$MOLOCH_NAME to rotate files after 7 days"
cat << EOF > /etc/logrotate.d/$MOLOCH_NAME
$MOLOCH_INSTALL_DIR/logs/capture.log
$MOLOCH_INSTALL_DIR/logs/viewer.log {
daily
rotate 7
notifempty
copytruncate
}
EOF
fi
################################################################################
INTERFACES=${MOLOCH_INTERFACE//;/ }
cat << EOF > $MOLOCH_INSTALL_DIR/bin/moloch_config_interfaces.sh
#!/bin/sh
for interface in $INTERFACES; do
/sbin/ethtool -G \$interface rx 4096 tx 4096 || true
for i in rx tx sg tso ufo gso gro lro; do
/sbin/ethtool -K \$interface \$i off || true
done
done
EOF
chmod a+x $MOLOCH_INSTALL_DIR/bin/moloch_config_interfaces.sh
################################################################################
if [ -d "/etc/security/limits.d" ] && [ ! -f "/etc/security/limits.d/99-moloch.conf" ]; then
echo "Moloch - Installing /etc/security/limits.d/99-moloch.conf to make core and memlock unlimited"
cat << EOF > /etc/security/limits.d/99-moloch.conf
nobody - core unlimited
root - core unlimited
nobody - memlock unlimited
root - memlock unlimited
EOF
fi
################################################################################
MOLOCH_INET=not-set
until [ "$MOLOCH_INET" == "yes" ] || [ "$MOLOCH_INET" == "no" ] || [ "$MOLOCH_INET" == "" ]; do
echo -n "Download GEO files? (yes or no) [yes] "
read -r MOLOCH_INET
done
if [ "$MOLOCH_INET" != "no" ]; then
echo "Moloch - Downloading GEO files"
$MOLOCH_INSTALL_DIR/bin/moloch_update_geo.sh > /dev/null
else
echo "Moloch - NOT downloading GEO files"
fi
################################################################################
echo ""
echo "Moloch - Configured - Now continue with step 4 in $MOLOCH_INSTALL_DIR/README.txt"
echo ""
tail -n +10 $MOLOCH_INSTALL_DIR/README.txt
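The script above renders every *.sample template the same way: plain sed substitution of MOLOCH_* placeholders. A standalone reproduction of that step; the sample content and /tmp paths are examples of mine:

```shell
# Reproduces the template rendering Configure uses: plain sed substitution
# of MOLOCH_* placeholders. Sample content and /tmp paths are examples.
MOLOCH_INTERFACE="eth0"
MOLOCH_ELASTICSEARCH="http://localhost:9200"
printf 'interface=MOLOCH_INTERFACE\nelasticsearch=MOLOCH_ELASTICSEARCH\n' \
    > /tmp/config.ini.sample
# ',' as the sed delimiter lets the '/' characters in the URL pass through unescaped
sed -e "s/MOLOCH_INTERFACE/${MOLOCH_INTERFACE}/g" \
    -e "s,MOLOCH_ELASTICSEARCH,${MOLOCH_ELASTICSEARCH},g" \
    < /tmp/config.ini.sample > /tmp/config.ini
cat /tmp/config.ini
```

This is also why you can skip Configure in the Docker build: render (or hand-write) config.ini once and ship it, as the DockerFile above does.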