
google-cloud-logging: namespace-id and container_name not registered in the default GKE installation #6239

Closed
mgenov opened this issue Sep 9, 2019 · 7 comments
Comments

mgenov commented Sep 9, 2019

I'm not sure whether this is the responsibility of the stackdriver-metadata-agent, but this information is required by the Java logging client and it's not available in my cluster.

The k8s cluster is configured with Stackdriver Kubernetes Engine Monitoring and Workload Identity enabled. All agents are running in the kube-system namespace:

stackdriver-metadata-agent-cluster-level-74785fffdd-79b6v        1/1     Running   0          3h46m

The logs show no errors, but the information regarding container_name and namespace_id is not available inside the containers:

root@workload-identity-test:/# curl "http://metadata.google.internal/computeMetadata/v1/instance/attributes/"  -H "Metadata-Flavor: Google"
cluster-name
root@workload-identity-test:/#

i.e. only cluster-name is available, but the google-cloud-logging library is also looking for namespace-id and container_name.
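
For reference, here is a minimal sketch (not part of the original report, plain Java only) of checking the same metadata attributes from inside the container. The attribute names are the ones mentioned above; on this cluster only cluster-name would be expected to return 200:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical check, not from the original report: query the metadata-server
// attributes the google-cloud-logging client looks for. On the cluster
// described above only "cluster-name" is registered, so the other lookups are
// expected to return 404.
public class MetadataAttributeCheck {
  private static final String BASE =
      "http://metadata.google.internal/computeMetadata/v1/instance/attributes/";

  static int status(String attribute) throws IOException {
    HttpURLConnection conn =
        (HttpURLConnection) new URL(BASE + attribute).openConnection();
    conn.setRequestProperty("Metadata-Flavor", "Google");
    return conn.getResponseCode(); // 200 if the attribute exists, 404 otherwise
  }

  public static void main(String[] args) throws IOException {
    for (String attr : new String[] {"cluster-name", "namespace-id", "container_name"}) {
      System.out.println(attr + " -> HTTP " + status(attr));
    }
  }
}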

@pmakani added the api: container and type: question labels on Sep 9, 2019
@chingor13 added the api: logging label on Sep 9, 2019
@igorpeshansky

@chingor13, see Stackdriver/kubernetes-configs#47 (comment). TL;DR: this was actually a bug introduced in #3887.

mgenov commented Sep 12, 2019

@chingor13 can you give any information on this?

@pmakani self-assigned this on Sep 13, 2019
@pmakani added the status: investigating label on Sep 13, 2019

pmakani commented Sep 16, 2019

@mgenov @igorpeshansky I looked at different scenarios. After running both types of cluster configuration with Stackdriver logging, I am able to get container_name and namespace_id/namespace_name; I also verified this using the google-cloud-logging library.
Attaching my findings for both types of configuration.

{
  resource: {
    labels: {
      cluster_name: "springboot-stackdriver"
      container_name: "springboot"
      instance_id: "6864435408643901574"
      namespace_id: "default"
      pod_id: "springboot-599fd7db48-27z8v"
      project_id: "google-issue-6239"
      zone: "us-central1-a"
    }
    type: "container"
  }
  severity: "INFO"
  textPayload: "12:23:17.592 [http-nio-8080-exec-9] INFO  - Hello World"
  timestamp: "2019-09-12T12:23:17.592806398Z"
}

{
  resource: {
    labels: {
      cluster_name: "stackdriver-kubernetes"
      container_name: "kube-log"
      location: "us-central1-a"
      namespace_name: "default"
      pod_name: "kube-log-585c5cb974-xc7hl"
      project_id: "google-issue-6239"
    }
    type: "k8s_container"
  }
  severity: "INFO"
  textPayload: "10:42:21.792 [http-nio-8080-exec-1] INFO  - Hello World! Kubernetes Stackdriver"
  timestamp: "2019-09-13T10:42:21.796188304Z"
}

Please let me know if there is something that I am missing here.

@igorpeshansky

@pmakani Were those ingested via the google-cloud-logging library, or printed to stdout and ingested by the logging agent? If the latter, it's expected, because the agent does have access to the namespace name and the container name. This issue is about that information not being exposed to code running inside the container.
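
For concreteness, a minimal sketch of the direct-write path this question refers to, assuming the google-cloud-logging Java client (log name and payload are placeholders, not taken from the issue):

import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Payload.StringPayload;
import com.google.cloud.logging.Severity;
import java.util.Collections;

// Sketch of a direct write through the google-cloud-logging Java client,
// i.e. the ingestion path this issue is about, as opposed to printing to
// stdout and letting the logging agent pick it up.
public class DirectWriteSketch {
  public static void main(String[] args) {
    Logging logging = LoggingOptions.getDefaultInstance().getService();
    LogEntry entry =
        LogEntry.newBuilder(StringPayload.of("Hello World"))
            .setSeverity(Severity.INFO)
            .setLogName("direct-write-sketch")
            .build();
    // On this path, resource labels such as namespace_id/container_name have
    // to be resolved by the client library inside the container, which is why
    // the missing metadata attributes matter here.
    logging.write(Collections.singleton(entry));
  }
}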

codyoss commented Oct 4, 2019

@mgenov @igorpeshansky I would consider this a dupe of #5765. I would like to close this issue and move the conversation over there if so. Let me know if I am mistaken.

@igorpeshansky

It does look like a dupe, but see #6239 (comment) above.

codyoss commented Oct 4, 2019

Noted, thank you. Like I said in #5765 (comment), I think we might need to open an issue with GKE to expose this data. It is unfortunate, though: it looks like this data should be obtainable from the pod, but I don't think we can get it today. Closing this; if we have future discussion, let's do it in #5765.
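
For illustration only, a hypothetical sketch (not discussed in this thread) of what getting this data "from the pod" could look like today: the namespace, pod, and container names are assumed to be injected into the container as environment variables (for example via the Kubernetes Downward API) and attached as an explicit MonitoredResource on each entry. The environment variable names and log name are placeholders:

import com.google.cloud.MonitoredResource;
import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Payload.StringPayload;
import java.util.Collections;

// Hypothetical workaround sketch, not from this thread: the resource labels
// are read from environment variables assumed to be set by the pod spec,
// instead of being resolved from instance metadata attributes.
public class ExplicitResourceSketch {
  public static void main(String[] args) {
    MonitoredResource resource =
        MonitoredResource.newBuilder("k8s_container")
            .addLabel("project_id", LoggingOptions.getDefaultInstance().getProjectId())
            .addLabel("location", System.getenv("CLUSTER_LOCATION"))
            .addLabel("cluster_name", System.getenv("CLUSTER_NAME"))
            .addLabel("namespace_name", System.getenv("POD_NAMESPACE"))
            .addLabel("pod_name", System.getenv("POD_NAME"))
            .addLabel("container_name", System.getenv("CONTAINER_NAME"))
            .build();

    Logging logging = LoggingOptions.getDefaultInstance().getService();
    LogEntry entry =
        LogEntry.newBuilder(StringPayload.of("Hello World"))
            .setLogName("explicit-resource-sketch")
            .setResource(resource)
            .build();
    logging.write(Collections.singleton(entry));
  }
}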

@codyoss closed this as completed on Oct 4, 2019