25 changes: 15 additions & 10 deletions SUMMARY.md
@@ -92,16 +92,21 @@
* [Certificates for sidecar injection](setup/agent/k8sTs-agent-request-tracing-certificates.md)

## 🔭 Open Telemetry
* [Getting started](setup/otel/getting-started.md)
* [Overview](setup/otel/overview.md)
* [Getting started](setup/otel/getting-started/README.md)
* [Concepts](setup/otel/concepts.md)
* [Rancher & Kubernetes](setup/otel/getting-started/getting-started-k8s.md)
* [Linux](setup/otel/getting-started/getting-started-linux.md)
* [AWS Lambda](setup/otel/getting-started/getting-started-lambda.md)
* [Open telemetry collector](setup/otel/collector.md)
* [Collector as a proxy](setup/otel/proxy-collector.md)
* [Languages](setup/otel/languages/README.md)
* [Generic Exporter configuration](setup/otel/languages/sdk-exporter-config.md)
* [Java](setup/otel/languages/java.md)
* [Node.js](setup/otel/languages/node.js.md)
* [Auto-instrumentation of Lambdas](setup/otel/languages/node.js/auto-instrumentation-of-lambdas.md)
* [.NET](setup/otel/languages/dot-net.md)
* [Verify the results](setup/otel/languages/verify.md)
* [Sampling](setup/otel/sampling.md)
* [SUSE Observability OTLP APIs](setup/otel/otlp-apis.md)
* [Instrumentation](setup/otel/instrumentation/README.md)
* [Java](setup/otel/instrumentation/java.md)
* [Node.js](setup/otel/instrumentation/node.js.md)
* [Auto-instrumentation of Lambdas](setup/otel/instrumentation/node.js/auto-instrumentation-of-lambdas.md)
* [.NET](setup/otel/instrumentation/dot-net.md)
* [SDK Exporter configuration](setup/otel/instrumentation/sdk-exporter-config.md)
* [Troubleshooting](setup/otel/troubleshooting.md)

## CLI
@@ -171,7 +176,7 @@
## 🔐 Security

* [Service Tokens](use/security/k8s-service-tokens.md)
* [Ingestion API Keys](use/security/k8s-ingestion-api-keys.md)
* [API Keys](use/security/k8s-ingestion-api-keys.md)

## ☁️ SaaS
* [User Management](saas/user-management.md)
2 changes: 1 addition & 1 deletion setup/install-stackstate/kubernetes_openshift/ingress.md
@@ -55,7 +55,7 @@ This step assumes that [Generate `baseConfig_values.yaml` and `sizing_values.ya
{% endhint %}


## Configure Ingress Rule for Open Telemetry Traces via the SUSE Observability Helm chart
## Configure Ingress Rule for Open Telemetry

The SUSE Observability Helm chart exposes an `opentelemetry-collector` service in its values, for which a dedicated `ingress` can be created. This is disabled by default. The ingress for the `opentelemetry-collector` needs to support the gRPC protocol. The example below shows how to use the Helm chart to configure an nginx-ingress controller with gRPC and TLS encryption enabled. Note that setting up the controller itself and the certificates is beyond the scope of this document.

384 changes: 185 additions & 199 deletions setup/otel/collector.md


53 changes: 53 additions & 0 deletions setup/otel/concepts.md
@@ -0,0 +1,53 @@
---
description: SUSE Observability
---

# Open Telemetry concepts

This is a summary of the most important concepts in Open Telemetry and should be sufficient to get started. For a more detailed introduction, see the [Open Telemetry documentation](https://opentelemetry.io/docs/concepts/).

## Signals

Open Telemetry recognizes three telemetry signals:

* Traces
* Metrics
* Logs

At the moment SUSE Observability supports traces and metrics; logs will be supported in a future version. For Kubernetes logs it is possible to use the [SUSE Observability agent](/k8s-quick-start-guide.md) instead.

### Traces

Traces allow us to visualize the path of a request through your application. A trace consists of one or more spans that together form a tree; a single trace can be contained entirely within a single service, but it can also cross many services. Each span represents an operation in the processing of the request and has:
* a name
* a start and end time, from which a duration can be calculated
* a status
* attributes
* resource attributes (see [resources](#resources))
* events

Span attributes are used to provide metadata for the span. For example, a span for an operation that places an order can have the `orderId` as an attribute, and a span for an HTTP operation can have the HTTP method and URL as attributes.

Span events can be used to represent a point in time where something important happened within the operation of the span. For example, if the span failed there can be an `exception` or an `error` event that captures the error message, a stack trace, and the exact point in time the error occurred.
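
As an illustration, a single span could carry the following information. This is a schematic sketch for this page, not the exact OTLP wire format; the `orderId` attribute is the hypothetical example from above:

```yaml
# Schematic view of one span (illustration, not the OTLP wire format)
span:
  name: POST /orders
  trace_id: 4bf92f3577b34da6a3ce929d0e0e4736    # shared by all spans in the same trace
  parent_span_id: 00f067aa0ba902b7              # links this span into the trace tree
  start_time: 2025-01-01T10:00:00.000Z
  end_time: 2025-01-01T10:00:00.250Z            # duration: 250ms
  status: Error
  attributes:
    http.request.method: POST                   # semantic-convention attribute names
    url.path: /orders
    orderId: "1234"                             # hypothetical application-specific attribute
  events:
    - name: exception
      timestamp: 2025-01-01T10:00:00.240Z
      attributes:
        exception.message: "payment service unavailable"
        exception.stacktrace: "..."
```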

### Metrics

Metrics are measurements captured at runtime; each measurement results in a metric event. Metrics are important indicators for application performance and availability and are often used to alert on an outage or performance problem. Metrics have:
* a name
* a timestamp
* a kind (counter, gauge, histogram, etc.)
* attributes
* resource attributes (see [resources](#resources))

Attributes provide the metadata for a metric.
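
Schematically, a single metric event could look like this. Again an illustrative sketch, not a wire format; `http.server.request.duration` is a semantic-convention metric name:

```yaml
# Schematic view of one metric event (illustration only)
metric:
  name: http.server.request.duration
  kind: histogram
  timestamp: 2025-01-01T10:00:00.000Z
  value: 0.250                                  # one recorded measurement, in seconds
  attributes:
    http.request.method: POST
    http.response.status_code: 200
```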

## Resources

A resource is the entity that produces the telemetry data, and its resource attributes provide the metadata for that entity. For example, a process running in a container, in a pod, in a namespace, in a Kubernetes cluster can have resource attributes for all of these entities.

Resource attributes are often assigned automatically by the SDKs. However, it is recommended to always set the `service.name` and `service.namespace` attributes explicitly. The first is the logical name for the service; if it is not set, the SDK will use an `unknown_service` value, making it very hard to use the data later in SUSE Observability. The namespace is a convenient way to organize your services, which is especially useful if you have the same services running in multiple locations.
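
As an illustration, the resource attributes for a process running in a Kubernetes pod could look like this. The attribute names follow the semantic conventions; the values are hypothetical:

```yaml
# Illustrative resource attributes for a process running in a Kubernetes pod
service.name: checkout                    # set this explicitly, otherwise unknown_service
service.namespace: webshop                # recommended: logical grouping of your services
k8s.cluster.name: prod-cluster
k8s.namespace.name: webshop
k8s.pod.name: checkout-5d4f8b7c9-x2k4j
k8s.container.name: checkout
```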

## Semantic conventions

Open Telemetry defines common names for operations and data, called the semantic conventions. Semantic conventions follow a naming scheme that allows data to be processed in a standardized way across languages, libraries, and code bases. There are semantic conventions for all signals and for resource attributes, defined for many different platforms and operations on the [Open Telemetry website](https://opentelemetry.io/docs/specs/semconv/attributes-registry/). SDKs use the semantic conventions to assign these attributes, and SUSE Observability also respects and relies on them, for example to recognize Kubernetes resources.

22 changes: 0 additions & 22 deletions setup/otel/getting-started.md

This file was deleted.

12 changes: 12 additions & 0 deletions setup/otel/getting-started/README.md
@@ -0,0 +1,12 @@
---
description: SUSE Observability
---

# Getting started

You might first want to familiarize yourself with the Open Telemetry [terminology and concepts](../concepts.md), like signals, resources, etc.

To get started monitoring one of your own applications, follow the getting started guide that best matches your deployment setup:
* [Kubernetes or Rancher](./getting-started-k8s.md)
* [Linux host](./getting-started-linux.md)
* [AWS Lambda functions](./getting-started-lambda.md)
173 changes: 173 additions & 0 deletions setup/otel/getting-started/getting-started-k8s.md
@@ -0,0 +1,173 @@
---
description: SUSE Observability
---

# Getting Started with Open Telemetry on Rancher / Kubernetes

Here is the setup we'll be creating for an application that needs to be monitored:

* The monitored application / workload running in cluster A
* The Open Telemetry collector running near the observed application(s), so in cluster A, and sending the data to SUSE Observability
* SUSE Observability running in cluster B, or SUSE Cloud Observability

![Container instrumentation with Open Telemetry via a collector running as a Kubernetes deployment](/.gitbook/assets/otel/open-telemetry-collector-kubernetes.png)


## The Open Telemetry collector

{% hint style="info" %}
For a production setup it is strongly recommended to install the collector, since it allows your service to offload data quickly and the collector can take care of additional handling like retries, batching, encryption or even sensitive data filtering.
{% endhint %}

First we'll install the OTel (Open Telemetry) collector in cluster A. We configure it to:

* Receive data from, potentially many, instrumented applications
* Enrich collected data with Kubernetes attributes
* Generate metrics for traces
* Forward the data to SUSE Observability, including authentication using the API key

In addition, the collector will retry sending data when there are connection problems.

### Create the namespace and a secret for the API key

We'll install the collector in the `open-telemetry` namespace and use the receiver API key generated during installation (see [here](/use/security/k8s-ingestion-api-keys.md#api-keys) for where to find it):

```bash
kubectl create namespace open-telemetry
kubectl create secret generic open-telemetry-collector \
  --namespace open-telemetry \
  --from-literal=API_KEY='<suse-observability-api-key>'
```

### Configure and install the collector

We install the collector with a Helm chart provided by the Open Telemetry project. Make sure you have the Open Telemetry Helm charts repository configured:

```bash
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
```

Create an `otel-collector.yaml` values file for the Helm chart. The configuration below is a good starting point for use with SUSE Observability. Replace `<otlp-suse-observability-endpoint>` with your OTLP endpoint (see [OTLP API](../otlp-apis.md) to find your endpoint) and insert the name of your Kubernetes cluster instead of `<your-cluster-name>`:

{% code title="otel-collector.yaml" lineNumbers="true" %}
```yaml
# Set the API key from the secret as an env var:
extraEnvsFrom:
  - secretRef:
      name: open-telemetry-collector
mode: deployment
image:
  # Use the collector container image that has all components important for k8s. In case of
  # missing components the otel/opentelemetry-collector-contrib image can be used, which has
  # all components in the contrib repository: https://github.com/open-telemetry/opentelemetry-collector-contrib
  repository: "otel/opentelemetry-collector-k8s"
ports:
  metrics:
    enabled: true
presets:
  kubernetesAttributes:
    enabled: true
    extractAllPodLabels: true
# This is the config file for the collector:
config:
  receivers:
    nop: {}
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
  extensions:
    # Use the API key from the env for authentication
    bearertokenauth:
      scheme: SUSEObservability
      token: "${env:API_KEY}"
  exporters:
    nop: {}
    otlp/suse-observability:
      auth:
        authenticator: bearertokenauth
      # Put in your own otlp endpoint
      endpoint: <otlp-suse-observability-endpoint>
      compression: snappy
  processors:
    memory_limiter:
      check_interval: 5s
      limit_percentage: 80
      spike_limit_percentage: 25
    batch: {}
    resource:
      attributes:
        - key: k8s.cluster.name
          action: upsert
          # Insert your own cluster name
          value: <your-cluster-name>
        - key: service.instance.id
          from_attribute: k8s.pod.uid
          action: insert
        # Use the k8s namespace also as the open telemetry namespace
        - key: service.namespace
          from_attribute: k8s.namespace.name
          action: insert
  connectors:
    # Generate metrics for spans
    spanmetrics:
      metrics_expiration: 5m
      namespace: otel_span
  service:
    extensions: [ health_check, bearertokenauth ]
    pipelines:
      traces:
        receivers: [otlp]
        processors: [memory_limiter, resource, batch]
        exporters: [debug, spanmetrics, otlp/suse-observability]
      metrics:
        receivers: [otlp, spanmetrics, prometheus]
        processors: [memory_limiter, resource, batch]
        exporters: [debug, otlp/suse-observability]
      logs:
        receivers: [nop]
        processors: []
        exporters: [nop]
```
{% endcode %}

{% hint style="warning" %}
**Use the same cluster name as used for installing the SUSE Observability agent** if you also use the SUSE Observability agent with the Kubernetes stackpack. Using a different cluster name will result in an empty traces perspective for Kubernetes components and will make correlating information much harder for SUSE Observability and your users.
{% endhint %}

Now install the collector, using the configuration file:

```bash
helm upgrade --install opentelemetry-collector open-telemetry/opentelemetry-collector \
  --values otel-collector.yaml \
  --namespace open-telemetry
```

The collector offers many more receivers, processors, and exporters; for more details see our [collector page](../collector.md). Production usage often generates large amounts of spans, so you will want to set up [sampling](../sampling.md).
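
As a taste of what such extra configuration looks like, here is a minimal, hypothetical values-file fragment for head-based sampling that would keep roughly 25% of traces. It assumes the `probabilistic_sampler` processor is available in your collector image; treat it as a sketch, not part of the base setup:

```yaml
# Hypothetical addition to otel-collector.yaml: keep ~25% of traces (head-based sampling)
config:
  processors:
    probabilistic_sampler:
      sampling_percentage: 25
  service:
    pipelines:
      traces:
        # Helm replaces lists instead of merging them, so restate the full processor chain
        processors: [memory_limiter, resource, probabilistic_sampler, batch]
```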

## Collect telemetry data from your application

The common way to collect telemetry data is to instrument your application using the Open Telemetry SDKs. We've documented quick start guides for a few languages, but there are many more:
* [Java](../instrumentation/java.md)
* [.NET](../instrumentation/dot-net.md)
* [Node.js](../instrumentation/node.js.md)

For other languages, follow the documentation on [opentelemetry.io](https://opentelemetry.io/docs/languages/) and make sure to configure the SDK exporter to ship data to the collector you just installed by following [these instructions](../instrumentation/sdk-exporter-config.md).
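
On Kubernetes, the exporter is typically configured through the standard `OTEL_*` environment variables on the instrumented workload. A minimal sketch, assuming the collector install from this guide (which exposes an `opentelemetry-collector` service in the `open-telemetry` namespace) and a hypothetical `checkout` deployment:

```yaml
# Pod template fragment for a hypothetical instrumented deployment
spec:
  containers:
    - name: checkout
      env:
        # Logical service name and namespace, picked up by the SDK as resource attributes
        - name: OTEL_SERVICE_NAME
          value: checkout
        - name: OTEL_RESOURCE_ATTRIBUTES
          value: service.namespace=webshop
        # Send OTLP over gRPC to the collector service installed earlier
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: http://opentelemetry-collector.open-telemetry.svc.cluster.local:4317
        - name: OTEL_EXPORTER_OTLP_PROTOCOL
          value: grpc
```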

## View the results
Go to SUSE Observability and make sure the Open Telemetry Stackpack is installed (via the main menu -> Stackpacks).

After a short while, and provided your pods are getting some traffic, you should be able to find them under their service name in the Open Telemetry -> services and service instances overviews. Traces will appear in the [trace explorer](/use/traces/k8sTs-explore-traces.md) and in the [trace perspective](/use/views/k8s-traces-perspective.md) for the service and service instance components. Span metrics and language-specific metrics (if available) will become available in the [metrics perspective](/use/views/k8s-metrics-perspective.md) for the components.

If you also have the Kubernetes stackpack installed the instrumented pods will also have the traces available in the [trace perspective](/use/views/k8s-traces-perspective.md).

## Next steps
You can add new charts for your application to components, for example the service or service instance, by following [our guide](/use/metrics/k8s-add-charts.md). It is also possible to create [new monitors](/use/alerting/k8s-monitors.md) using the metrics, and to set up [notifications](/use/alerting/notifications/configure.md) to get notified when your application is unavailable or has performance issues.

## More info

* [API keys](/use/security/k8s-ingestion-api-keys.md)
* [Open Telemetry API](../otlp-apis.md)
* [Customizing Open Telemetry Collector configuration](../collector.md)
* [Open Telemetry SDKs](../instrumentation/README.md)