Elasticsearch and Kibana are easy to set up on any OS platform: Linux, macOS (Darwin), or Windows. Our project uses v8.4 of both Elasticsearch and Kibana. We set up both Elasticsearch and Kibana on an Ubuntu 22.04 LTS VM with the following specifications:
- CPU: 2vCPU
- RAM: 2GB
- Disk: 60GB
If you want to skip the official documentation and quickly get hands-on, simply follow these steps for a quick setup!

- Access Kibana from your browser at https://ip-address:port-number and log in with the built-in account `elastic`. Navigate to Management and then Dev Tools.
- Create the index `amatsa`:

PUT /amatsa

- Create roles to access the index:
a. admin: This role has full access to index `amatsa` and Kibana
b. analyst: This role has read access to index `amatsa` and Kibana
c. agent: This role has write access to index `amatsa`
POST _security/role/admin
{
"indices": [
{
"names": [
"amatsa"
],
"privileges": [
"*"
],
"allow_restricted_indices": false
}
],
"applications": [
{
"application": "kibana-.kibana",
"privileges": [
"*"
],
"resources": [
"*"
]
}
]
}
POST _security/role/analyst
{
"indices": [
{
"names": [
"amatsa"
],
"privileges": [
"read",
"monitor"
],
"allow_restricted_indices": false
}
],
"applications": [
{
"application": "kibana-.kibana",
"privileges": [
"read"
],
"resources": [
"*"
]
}
]
}
POST _security/role/agent
{
"indices": [
{
"names": [
"amatsa"
],
"privileges": [
"write"
]
}
]
}
- Create one user account for each of the roles created
POST _security/user/<username>
{
"roles": [
"<one-of-the-created-roles>"
],
"full_name": "Account Name",
"email": "mail@example.com",
"password": "password"
}
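As the request bodies above show, the three accounts differ only in the role they carry. As a small sketch (the helper name and example values are hypothetical, not part of the project), the body for `POST _security/user/<username>` could be built like this:

```python
def build_user_payload(role, full_name, email, password):
    """Build the JSON body for POST _security/user/<username>.

    `role` must be one of the three roles created above.
    """
    assert role in {"admin", "analyst", "agent"}, "unknown role"
    return {
        "roles": [role],  # one of the created roles
        "full_name": full_name,
        "email": email,
        "password": password,
    }

# Example: an account for the write-only "agent" role
payload = build_user_payload("agent", "Metrics Agent", "agent@example.com", "changeme")
```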
- Set up runtime fields for your index after installing at least one client.
Setting up the client is very easy! 😉 Yes, we took care of that!!! The client is a Python script that sends updates about system metrics to your Elasticsearch every X minutes. We don't tamper with your client machine's Python installation; we set up our own virtual environment and execute inside it. We use the Task Scheduler (or a cron job) to schedule the client script.
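To give a feel for the kind of document such a client might push on each run (the field names here are illustrative, not the actual schema), a minimal metrics snapshot using only the standard library could look like:

```python
import os
import platform
import time

def collect_metrics():
    """Gather a minimal system snapshot (illustrative fields only)."""
    return {
        "@timestamp": time.time(),
        "host": platform.node(),
        "os": platform.system(),
        "cpu_count": os.cpu_count(),
        # getloadavg() is Unix-only, so guard for Windows
        "load_avg_1m": os.getloadavg()[0] if hasattr(os, "getloadavg") else None,
    }

doc = collect_metrics()  # this dict would be indexed into `amatsa`
```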
- Python 3.10+ (symlinked to `python`)
- Download or clone this repository and `cd` to the cloned directory.
- Clients read their configuration from the YAML file `src/config/amatsa-client.yml`. You should change these parameters depending on how you are setting up the Elasticsearch server. Configuration includes:
- version: For client version tracking. You can leave this as it is!
- endpoint: The Elasticsearch endpoint the client pushes collected data to. Give the HTTPS endpoint where your Elasticsearch is running. Elasticsearch v8.4 uses TLS by default, and it is recommended not to downgrade to HTTP.
- tls-fingerprint: Verifies the authenticity of the Elasticsearch server. The fingerprint can be retrieved on the Elasticsearch server using this command:
openssl x509 -fingerprint -sha256 -noout -in /etc/elasticsearch/certs/http_ca.crt
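openssl prints the fingerprint as colon-separated hex (a line like `SHA256 Fingerprint=AA:1B:...`). If your client expects a bare hex string in `tls-fingerprint` (an assumption about the expected format), a small normalization sketch:

```python
def normalize_fingerprint(openssl_line):
    """Turn 'SHA256 Fingerprint=AA:1B:...' into bare lowercase hex."""
    _, _, value = openssl_line.partition("=")
    return value.strip().replace(":", "").lower()

fp = normalize_fingerprint("SHA256 Fingerprint=AA:1B:2C")
# → "aa1b2c"
```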
- username, password: Authentication to write to Elasticsearch. This is the client username and password you configured for the `agent` role.
- index: Documents will be written to this index in Elasticsearch.

Tip: To skip this step altogether, you can edit this config file before deploying the repository to clients!
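As a sketch of what the client presumably checks at startup (the key names come from the list above; the validation helper itself is hypothetical), a startup check could catch a missing key or a plain-HTTP endpoint early:

```python
REQUIRED_KEYS = {"version", "endpoint", "tls-fingerprint", "username", "password", "index"}

def validate_config(config):
    """Return the set of missing keys; reject non-HTTPS endpoints."""
    endpoint = config.get("endpoint", "")
    if endpoint and not endpoint.startswith("https://"):
        # v8.4 uses TLS by default; downgrading to HTTP is discouraged
        raise ValueError("endpoint should be HTTPS")
    return REQUIRED_KEYS - config.keys()

# Example: an incomplete config is reported rather than failing mid-run
missing = validate_config({"endpoint": "https://localhost:9200", "index": "amatsa"})
```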
- On Linux/macOS, make the script deploy.sh executable using `chmod +x deploy.sh`. If you want the metric collection to happen every 30 minutes, run `./deploy.sh 30`. On Windows, simply run `deploy-windows.bat 30`.
- Your client should be up and running by now! To uninstall, simply delete the scheduled task/cron job and remove the repository.
- On macOS, if the cron job is not running at the interval you have set up, make sure `cron` has access to the location where your repository sits on the filesystem.
- On Windows, make sure the correct version of Python (3.10+) is installed and `python` is available in `PATH`.