Merged
14 changes: 7 additions & 7 deletions Technical/Logging/Logging-Data-View.md
@@ -10,35 +10,35 @@

To let applications in the Kubernetes cluster log to Logstash via Pino, the Elasticsearch master certificates must be duplicated across namespaces. The screenshot below shows two Elasticsearch master certificate secrets, one in the '**processor**' namespace and one in '**development**'. Because the applications that need logging do not run in the 'development' namespace, the certificates there must be copied and their namespace attribute changed to 'processor'. Without this duplicate, those applications cannot establish a TLS connection to Logstash, since the required certificates would not exist in their namespace. Duplicating the secret with the updated namespace enables cross-namespace communication, allowing the applications to ship their logs.
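A minimal sketch of the duplication step, assuming the secret is exported as a manifest (e.g. via `kubectl get secret -o yaml`): the copy must point at the target namespace and drop server-assigned metadata before it can be re-created. The secret name and fields below are illustrative, following the standard Kubernetes Secret schema.

```python
import copy

def retarget_secret(secret: dict, target_namespace: str) -> dict:
    """Return a copy of a Kubernetes Secret manifest re-homed to another namespace."""
    dup = copy.deepcopy(secret)
    meta = dup["metadata"]
    meta["namespace"] = target_namespace
    # Strip server-assigned fields so the copy can be applied cleanly.
    for field in ("uid", "resourceVersion", "creationTimestamp", "ownerReferences"):
        meta.pop(field, None)
    return dup

# Example: duplicate a master cert secret from 'development' into 'processor'.
source = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "elasticsearch-master-certs", "namespace": "development", "uid": "abc-123"},
    "type": "kubernetes.io/tls",
    "data": {"tls.crt": "<base64 cert>", "tls.key": "<base64 key>"},
}
duplicate = retarget_secret(source, "processor")
```

The original secret is left untouched; only the copy is re-homed, so both namespaces end up with identical certificate material.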

-![image-20240212-045445.png](../../images/image-20240212-045445.png)
+![image-20240212-045445.png](../../images/log-dataview-0.png)

## Indices

Indices are the heart of the configuration: at a bare minimum, this is what we need to get the setup running. Fortunately, our logging structure creates one for us. By default it is named `pino`, after the logging framework at the heart of it all. To verify that you have an index, navigate to `Stack Management > Index Management` (Index Management is under the `Data` heading in the navigation drawer on the left side of the screen).

-![](../../images/image-20231208-084112.png)
+![](../../images/log-dataview-1.png)

On Index Management, you should see your auto-created `pino` index. Scroll down the navigation drawer and find the `Data Views` section:

-![](../../images/image-20231208-084349.png)
+![](../../images/log-dataview-2.png)

Here we need to create a `Data View` based on our index (`pino`); this data view is ultimately what we use to display our logs. Give the data view a name. While the name itself is not important, pay attention to the `Index pattern` field: we want to use the `pino` index, though wildcards are also accepted. After saving your changes, navigate to `Observability > Logs > Settings`.

-![](../../images/image-20231208-085220.png)
+![](../../images/log-dataview-3.png)

Select `pino*` as your log data view, as shown above.
In the `Log sources` section below, you can then pick the columns to display in your data view.
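Index patterns are matched with simple glob semantics, so `pino*` covers the bare `pino` index as well as any suffixed variants. A quick sketch of that matching rule, using Python's `fnmatch` purely as an illustration (the index names besides `pino` are hypothetical):

```python
from fnmatch import fnmatch

pattern = "pino*"
indices = ["pino", "pino-2023.12.08", "apm-server", "logstash"]

# Which indices would a data view with this pattern pick up?
matched = [name for name in indices if fnmatch(name, pattern)]
print(matched)  # → ['pino', 'pino-2023.12.08']
```

This is why `pino*` is a safe choice for the data view even when only the single `pino` index exists today.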

-![](../../images/image-20231208-085401.png)
+![](../../images/log-dataview-4.png)

## **Final Result**

The final objective is a user interface like the one shown in the image, reflecting Logstash logging output. It captures and displays log records in a structured format (timestamps, channels, messages, IDs, and operation labels), giving a clear and concise view of the system's activities.

Beyond the current functionality, the aim is to make this interface more configurable: filtering logs by parameters such as date, channel, message content, or specific IDs; sorting options; real-time updates; customizable alert thresholds for monitoring specific events; and integrations with incident management systems. Controls for managing log verbosity, defining which level of information is captured from general info down to debug-level detail, may also be part of this expansion. The expanded interface would then serve not just as a passive log display but as a robust tool for active system monitoring and incident response.
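A sketch of the kind of filtering described above, over structured records like those Pino emits. The field names (`level`, `channel`, `msg`) and sample records are illustrative, not an exact Pino schema; Pino's numeric levels (30 = info, 50 = error) are used for the verbosity threshold.

```python
def filter_logs(records, channel=None, min_level=30, contains=None):
    """Filter structured log records by channel, numeric level, and message substring."""
    out = []
    for rec in records:
        if rec.get("level", 0) < min_level:
            continue  # below the requested verbosity threshold
        if channel is not None and rec.get("channel") != channel:
            continue
        if contains is not None and contains not in rec.get("msg", ""):
            continue
        out.append(rec)
    return out

logs = [
    {"level": 30, "channel": "transaction", "msg": "evaluated rule 901"},
    {"level": 20, "channel": "transaction", "msg": "debug detail"},
    {"level": 50, "channel": "auth", "msg": "token rejected"},
]
print(filter_logs(logs, channel="transaction"))
```

In the real interface, Elastic's query bar performs this work server-side; the sketch only shows the filtering semantics being asked for.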

-![](../../images/image-20231208-083424.png)
+![](../../images/log-dataview-5.png)

## **Additional Information**

-To read more about the [architecture](../../../frms-platform-developers-documentation/the-tazama-logging-framework/archived-information/logging-framework-architecture.md)
+To read more about the [architecture](./Logging-Framework-Architecture.md)
14 changes: 7 additions & 7 deletions Technical/Logging/Setting-Up-Elastic-APM.md
@@ -5,11 +5,11 @@
1. Open your Elastic instance in your browser; it should display the initial welcome screen.
2. Click on **"Explore on my own"** for a manual setup process.

-![image-20240215-075201.png](../../images/image-20240215-075201.png)
+![image-20240215-075201.png](../../images/elastic-1.png)

3. Navigate to the **APM** section on your Elastic dashboard and select **"Add APM integration"**.

-![image-20240215-075235.png](../../images/image-20240215-075235.png)
+![image-20240215-075235.png](../../images/elastic-2.png)

4. In the **Integration name** field, you can input a recognizable name for your APM setup, like "apm-1".
5. Under the **Server configuration** section:
@@ -18,7 +18,7 @@

- For the **URL**, use `http://apm-server.development.svc.cluster.local:8200`. This URL is the endpoint for the APM server within your cluster.

-![image-20240215-075312.png](../../images/image-20240215-075312.png)
+![image-20240215-075312.png](../../images/elastic-3.png)
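The URL in step 5 follows the standard in-cluster Kubernetes service DNS scheme, `<service>.<namespace>.svc.cluster.local`. A small helper showing how the pieces compose (the function is hypothetical, just for illustration):

```python
def cluster_service_url(service: str, namespace: str, port: int, scheme: str = "http") -> str:
    """Build an in-cluster Kubernetes service URL from its components."""
    return f"{scheme}://{service}.{namespace}.svc.cluster.local:{port}"

print(cluster_service_url("apm-server", "development", 8200))
# → http://apm-server.development.svc.cluster.local:8200
```

If the APM server is ever deployed to a different namespace or port, only those two components of the URL change.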

6. Click **"Save and continue"** to proceed to the next steps.
7. Now, move on to **"Create configuration"** for your APM agents:
@@ -29,14 +29,14 @@

- The **Transaction sample rate** is set to 1.0 by default, indicating that every transaction will be sampled.

-![image-20240215-075352.png](../../images/image-20240215-075352.png)
+![image-20240215-075352.png](../../images/elastic-4.png)
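The transaction sample rate is interpreted as the probability that any given transaction is recorded, so 1.0 means every transaction is sampled and 0.0 means none are. A sketch of that decision (illustrative only, not the APM agent's actual implementation):

```python
import random

def should_sample(rate: float, rng=random.random) -> bool:
    """Decide whether to record a transaction given a sample rate in [0.0, 1.0]."""
    return rng() < rate

# rate 1.0 records every transaction; rate 0.0 records none
decisions = [should_sample(1.0) for _ in range(100)]
print(all(decisions))  # → True
```

Lowering the rate below 1.0 trades observability for reduced storage and overhead once transaction volume grows.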

8. After setting up these values, click on **"Save configuration"** to finalize the setup.

-![image-20240215-075424.png](../../images/image-20240215-075424.png)
-![image-20240215-075442.png](../../images/image-20240215-075442.png)
+![image-20240215-075424.png](../../images/elastic-5.png)
+![image-20240215-075442.png](../../images/elastic-6.png)

9. Your APM agents should now be configured to send performance data to the Elastic APM server.
10. To ensure that the setup is correct, you can create a test transaction and check if it appears in the Elastic APM dashboard.

-![image-20240215-075504.png](../../images/image-20240215-075504.png)
+![image-20240215-075504.png](../../images/elastic-7.png)