
OTEL agent template requires update for hostname env lookup #904

Closed
alee2233 opened this issue Sep 1, 2023 · 2 comments
Labels
bug Something isn't working

Comments


alee2233 commented Sep 1, 2023

What happened?

Description

When using logsEngine = "otel" and enabling journald in logsCollection, the agent pod logs contain error messages such as:

"error": "evaluate value_expr: reflect: call of reflect.Value.Call on map Value (1:1)\n | env(\"K8S_NODE_NAME\")\n | ^"

The expression EXPR(env("K8S_NODE_NAME")) is used in the chart's _otel-agent.tpl template. However, a recent change in the opentelemetry-collector-contrib project (linked commit) renamed this expression function from env to os_env_func.

I've manually modified the agent config map to use EXPR(os_env_func("K8S_NODE_NAME")), and the errors no longer appear.
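For reference, the relevant operator in the rendered agent config looks roughly like the sketch below (the operator's target field, id, and the surrounding journald receiver settings are illustrative, not copied from the template); the commented-out line is the manual workaround:

```yaml
receivers:
  journald/kubelet:
    units: [kubelet.service]
    operators:
      # stanza "add" operator that stamps the node name onto each log entry;
      # the target field shown here is an assumption for illustration.
      - type: add
        field: resource["host.name"]
        # As rendered by _otel-agent.tpl; rejected by collector builds after the linked contrib change:
        value: EXPR(env("K8S_NODE_NAME"))
        # Manual workaround applied to the config map:
        # value: EXPR(os_env_func("K8S_NODE_NAME"))
```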

Steps to Reproduce

Use logsEngine = "otel" and enable journald logging.
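For example, a minimal values.yaml along these lines reproduces it (a sketch; the journald unit list is an example, and other required chart settings such as the Splunk endpoint and token are omitted):

```yaml
# Minimal sketch of the chart values used to hit the error.
logsEngine: otel
logsCollection:
  journald:
    enabled: true
    units:
      - name: kubelet
```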

Expected Result

No errors in agent logs.

Actual Result

Error entries appear in the agent logs (reformatted for easier viewing; sensitive information redacted):

2023-09-01T18:51:29.452Z	error	helper/transformer.go:98	Failed to process entry	
{
    "kind": "receiver",
    "name": "journald/kubelet",
    "data_type": "logs",
    "operator_id": "add3",
    "operator_type": "add",
    "error": "evaluate value_expr: reflect: call of reflect.Value.Call on map Value (1:1)\n | env(\"K8S_NODE_NAME\")\n | ^",
    "action": "send",
    "entry": {
        "observed_timestamp": "2023-09-01T18:51:29.452050257Z",
        "timestamp": "2023-09-01T18:51:29.451617Z",
        "body": {
            "MESSAGE": "I0901 18:51:29.451401    3120 logs.go:323] \"Finished parsing log file\" path=\"/var/log/pods/monitoring_splunk-otel-collector-agent-xxxxx/otel-collector/0.log\"",
            "PRIORITY": "6",
            "SYSLOG_FACILITY": "3",
            "SYSLOG_IDENTIFIER": "kubelet",
            "_BOOT_ID": "832e7debce4c4d33a6314fc9e7528515",
            "_CAP_EFFECTIVE": "1ffffffffff",
            "_CMDLINE": "/usr/bin/kubelet --config /etc/kubernetes/kubelet/kubelet-config.json --kubeconfig /var/lib/kubelet/kubeconfig --container-runtime-endpoint unix:///run/containerd/containerd.sock --image-credential-provider-config /etc/eks/image-credential-provider/config.json --image-credential-provider-bin-dir /etc/eks/image-credential-provider --node-ip=xxxxx --pod-infra-container-image=xxxxx.dkr.ecr.us-east-1.amazonaws.com/eks/pause:3.5 --v=2 --hostname-override=xxxxx --cloud-provider=external --node-labels=eks.amazonaws.com/nodegroup-image=ami-xxxxx,eks.amazonaws.com/capacityType=ON_DEMAND,eks.amazonaws.com/sourceLaunchTemplateVersion=1,eks.amazonaws.com/nodegroup=default-node-group-xxxxx,node-group=default-node-group,eks.amazonaws.com/sourceLaunchTemplateId=lt-xxxxx --max-pods=58",
            "_COMM": "kubelet",
            "_EXE": "/usr/bin/kubelet",
            "_GID": "0",
            "_HOSTNAME": "xxxxx",
            "_MACHINE_ID": "ec2163b0002c21cfffbe405765926ae4",
            "_PID": "3120",
            "_STREAM_ID": "548f9ecb1fa74a87b0437aaac4f02333",
            "_SYSTEMD_CGROUP": "/runtime.slice/kubelet.service",
            "_SYSTEMD_SLICE": "runtime.slice",
            "_SYSTEMD_UNIT": "kubelet.service",
            "_TRANSPORT": "stdout",
            "_UID": "0",
            "__CURSOR": "s=5887aa5815ff4ec395a2b456e861e40d;i=1ecff4d;b=832e7debce4c4d33a6314fc9e7528515;m=64ac48f74a8;t=60450a6043261;x=51588681d8cd8fdd",
            "__MONOTONIC_TIMESTAMP": "6918195082408"
        },
        "resource": {
            "com.splunk.index": "eks",
            "com.splunk.source": "/var/log/journal",
            "com.splunk.sourcetype": "kube:journald:kubelet.service"
        },
        "severity": 0,
        "scope_name": ""
    }
}

Chart version

0.83.0

Environment information

Environment

Cloud: EKS
k8s version: 1.27
OS: Amazon Linux 2

Chart configuration

logsEngine = "otel" with journald enabled in logsCollection

Log output

No response

Additional context

No response

alee2233 added the bug label on Sep 1, 2023
atoulme (Contributor) commented Sep 1, 2023

This doesn't look like an intentional change; it's a bug in the filelog receiver introduced by the commit you pointed out. The workaround you found helps for now. I will also offer a fix upstream, so note that your workaround will stop working once that fix lands. I'll link the upstream issue to this one.

alee2233 (Author)

Fixed in 0.85.0
