pubsub: weird panic (pubsub.NewClient) #229
Makes sense that that would fix it. It's trying to find the home directory. Can you reproduce this panic consistently? If so, can you tell us more?
|
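The exact call is elided above, but here is a minimal sketch (not from the issue) of how a home-directory lookup can reach into glibc, assuming the golang.org/x/oauth2/google behavior of that era, which tried os/user first:

```go
// Minimal sketch: how a Go program of that era could find the home
// directory. With cgo enabled, user.Current calls into glibc (getpwuid_r),
// which matters when glibc is statically linked — see further down the thread.
package main

import (
	"fmt"
	"os"
	"os/user"
)

func homeDir() string {
	if u, err := user.Current(); err == nil { // cgo/glibc NSS lookup
		return u.HomeDir
	}
	return os.Getenv("HOME") // pure-Go fallback
}

func main() {
	fmt.Println(homeDir())
}
```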
The service is continuously built and deployed to a Kubernetes cluster using wercker. The Dockerfile for the build is az3r/golang-rocksdb:1.6-4.4; the runtime Dockerfile and the build command are shown further down. rocksdb is embedded in the binary, I believe. The panic at startup happens a lot:
And then it suddenly starts working like a charm. PS: the sha is 335f474 |
That's a segfault. I think it's because there's no home directory set. If you want to use the default service account from the metadata server, you can pass a token source for it explicitly. |
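The specific option is elided above; the following is a minimal sketch of one way to do it, assuming the google.golang.org/cloud client options of that era (cloud.WithTokenSource plus oauth2/google's ComputeTokenSource) and a made-up project ID:

```go
package main

import (
	"log"

	"golang.org/x/net/context"
	"golang.org/x/oauth2/google"
	"google.golang.org/cloud"
	"google.golang.org/cloud/pubsub"
)

func main() {
	ctx := context.Background()
	// Fetch tokens straight from the GCE metadata server, so no credential
	// file search (and no home-directory lookup) happens at client creation.
	ts := google.ComputeTokenSource("")
	client, err := pubsub.NewClient(ctx, "my-project", cloud.WithTokenSource(ts))
	if err != nil {
		log.Fatal(err)
	}
	_ = client // use the client as usual
}
```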
Very strange. I can't reproduce this.
|
Are you running on Google Container Engine? |
Okay, I did a little manual bisection, and this is caused by adding rocksdb, replacing boltdb with rocksdb:
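The actual diff is not preserved here; this is a hypothetical sketch of the swap, assuming the boltdb and tecbot/gorocksdb APIs, with made-up paths and helper names:

```go
package storage

import (
	"github.com/boltdb/bolt"
	"github.com/tecbot/gorocksdb"
)

// openBolt is the old code path: pure Go, no cgo involved.
func openBolt(path string) (*bolt.DB, error) {
	return bolt.Open(path, 0600, nil)
}

// openRocks is the replacement; gorocksdb links the C++ rocksdb library
// via cgo, which is what pulled glibc into the binary.
func openRocks(path string) (*gorocksdb.DB, error) {
	opts := gorocksdb.NewDefaultOptions()
	opts.SetCreateIfMissing(true)
	return gorocksdb.OpenDb(opts, path)
}
```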
Now if I build the binary inside my local Docker using the same commands as my builder:
It works just fine.
The weird thing is it can randomly work:

$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
hotbase-paris-6b507d4-7fupd   1/1     Running   0          35m   # boltDB, just built
hotbase-paris-db746c9-5g3r0   1/1     Running   172        16h   # rocksDB build from yesterday

Now the panic is always the same:
At the time of the panic, no rocksdb storage is instantiated yet. Most of the time the container status is:
|
I'll try running my demo on Container Engine. |
Can't reproduce this. What exactly is in your Dockerfile? Is it running your app's binary as well as rocksdb? Is it running just rocksdb? |
Ah, bummer. I will try to pull together something reproducible! |
Okay, it took looooong enough, but I nailed it down to this code:
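The original snippet is elided; here is a hypothetical reconstruction based on the three calls named below, with a made-up project ID and path:

```go
package main

import (
	"log"

	"github.com/tecbot/gorocksdb"
	"golang.org/x/net/context"
	"google.golang.org/cloud/pubsub"
	"google.golang.org/cloud/storage"
)

func main() {
	ctx := context.Background()

	// Open a rocksdb database (cgo / C++ code path).
	opts := gorocksdb.NewDefaultOptions()
	opts.SetCreateIfMissing(true)
	db, err := gorocksdb.OpenDb(opts, "/tmp/repro-db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Create the cloud clients (credential lookup code path).
	if _, err := pubsub.NewClient(ctx, "my-project"); err != nil {
		log.Fatal(err)
	}
	if _, err := storage.NewClient(ctx); err != nil {
		log.Fatal(err)
	}
}
```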
Remove any one of gorocksdb.OpenDb / pubsub.NewClient / storage.NewClient and it works fine. The binary is built inside a Docker container, as above. This randomly panics. |
@azr can you show me the Dockerfiles and commands you're running, too? I don't fully understand the setup you have going on here. |
Okay. For building, I run:
(Normally I use godep instead of go get.) The Dockerfile I just used is in azr/golang-rocksdb. The storage/env-rocksdb.sh file is:
Now that I have the binary, I run:
That second Dockerfile is:
And the test.yaml is a replication controller:
|
I will also try pinging @tecbot, as he might have a clue. |
Could you reproduce it, @broady? |
I think this is glibc bug 19341, which is caused by statically linking glibc. See golang/go#13470 for background. A workaround is to link glibc dynamically, i.e., remove the static-linking flag from the build command. |
@azr, did the workaround fix the problem for you? Since this has been attributed to a bug elsewhere, and there is no contradicting evidence, I'm going to close. |
Hey people, thanks for your time. I'll forward your hotfix to them, as they can probably do something with it! |
Oh, actually, reading back through my comments, I already did that. The whole build command was:
The netgo tag fixed another C problem. |
Hello there,
I have a service running on GCP Kubernetes, and right after calling the latest
pubsub.NewClient(ctx, *projID)
it panics:
It works locally (on Docker) if I use a JSON key file.
Cheers! :)
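For reference, a minimal sketch of the JSON-key-file setup described above, assuming the oauth2/google JWT config API, with a hypothetical key path and project ID:

```go
package main

import (
	"io/ioutil"
	"log"

	"golang.org/x/net/context"
	"golang.org/x/oauth2/google"
	"google.golang.org/cloud"
	"google.golang.org/cloud/pubsub"
)

func main() {
	ctx := context.Background()
	data, err := ioutil.ReadFile("/path/to/key.json") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	conf, err := google.JWTConfigFromJSON(data, "https://www.googleapis.com/auth/pubsub")
	if err != nil {
		log.Fatal(err)
	}
	// An explicit token source skips the default credential discovery
	// (and its home-directory lookup) entirely.
	client, err := pubsub.NewClient(ctx, "my-project",
		cloud.WithTokenSource(conf.TokenSource(ctx)))
	if err != nil {
		log.Fatal(err)
	}
	_ = client
}
```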