Fix for data race in generating request ids #12

Merged
merged 1 commit on Dec 7, 2016

Conversation

@ascotan commented on Dec 6, 2016

This fixes a bug where calling rand.Int31() on a shared *rand.Rand from multiple goroutines causes a data race.

The problem is:

==================
WARNING: DATA RACE
Read at 0x00c420027460 by goroutine 281:
math/rand.(*rngSource).Int63()
/usr/local/Cellar/go/1.7.3/libexec/src/math/rand/rng.go:243 +0x156
math/rand.(*Rand).Int63()
/usr/local/Cellar/go/1.7.3/libexec/src/math/rand/rand.go:64 +0x51
math/rand.(*Rand).Int31()
/usr/local/Cellar/go/1.7.3/libexec/src/math/rand/rand.go:70 +0x38
github.com/k-sone/snmpgo.genRequestId()
/Users/whoami/Desktop/code/golang/src/github.com/k-sone/snmpgo/util.go:26 +0x66
github.com/k-sone/snmpgo.(*messageProcessingV1).PrepareOutgoingMessage()
/Users/whoami/Desktop/code/golang/src/github.com/k-sone/snmpgo/mprocessing.go:33 +0x87
github.com/k-sone/snmpgo.(*snmpEngine).SendPdu()
/Users/whoami/Desktop/code/golang/src/github.com/k-sone/snmpgo/engine.go:21 +0xfe
github.com/k-sone/snmpgo.(*SNMP).sendPdu.func1()
/Users/whoami/Desktop/code/golang/src/github.com/k-sone/snmpgo/client.go:320 +0xf2
github.com/k-sone/snmpgo.retry()
/Users/whoami/Desktop/code/golang/src/github.com/k-sone/snmpgo/util.go:56 +0x6d
github.com/k-sone/snmpgo.(*SNMP).sendPdu()
/Users/whoami/Desktop/code/golang/src/github.com/k-sone/snmpgo/client.go:322 +0x19d
github.com/k-sone/snmpgo.(*SNMP).GetRequest()
/Users/whoami/Desktop/code/golang/src/github.com/k-sone/snmpgo/client.go:162 +0xb3
...

Previous write at 0x00c420027460 by goroutine 13:
math/rand.(*rngSource).Int63()
/usr/local/Cellar/go/1.7.3/libexec/src/math/rand/rng.go:244 +0x1be
math/rand.(*Rand).Int63()
/usr/local/Cellar/go/1.7.3/libexec/src/math/rand/rand.go:64 +0x51
math/rand.(*Rand).Int31()
/usr/local/Cellar/go/1.7.3/libexec/src/math/rand/rand.go:70 +0x38
github.com/k-sone/snmpgo.genRequestId()
/Users/whoami/Desktop/code/golang/src/github.com/k-sone/snmpgo/util.go:26 +0x66
github.com/k-sone/snmpgo.(*messageProcessingV1).PrepareOutgoingMessage()
/Users/whoami/Desktop/code/golang/src/github.com/k-sone/snmpgo/mprocessing.go:33 +0x87
github.com/k-sone/snmpgo.(*snmpEngine).SendPdu()
/Users/whoami/Desktop/code/golang/src/github.com/k-sone/snmpgo/engine.go:21 +0xfe
github.com/k-sone/snmpgo.(*SNMP).sendPdu.func1()
/Users/whoami/Desktop/code/golang/src/github.com/k-sone/snmpgo/client.go:320 +0xf2
github.com/k-sone/snmpgo.retry()
/Users/whoami/Desktop/code/golang/src/github.com/k-sone/snmpgo/util.go:56 +0x6d
github.com/k-sone/snmpgo.(*SNMP).sendPdu()
/Users/whoami/Desktop/code/golang/src/github.com/k-sone/snmpgo/client.go:322 +0x19d
github.com/k-sone/snmpgo.(*SNMP).GetRequest()
/Users/whoami/Desktop/code/golang/src/github.com/k-sone/snmpgo/client.go:162 +0xb3
...

Goroutine 281 (running) created at:
...

Goroutine 13 (running) created at:
...
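
(For reference, the report above is output from Go's race detector, which is enabled with the -race flag, e.g. go test -race or go run -race.)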

It is triggered by doing something similar to the following:

for ip := network.CIDR.IP.Mask(network.CIDR.Net.Mask); network.CIDR.Net.Contains(ip); IncrementIP(ip) {
    // Pass each goroutine its own copy of ip: net.IP is a slice,
    // and IncrementIP mutates it in place.
    go func(dupip net.IP) {
        QuerySNMP(dupip)
    }(append(net.IP(nil), ip...))
}

Putting a mutex around calls to rand.Int31() solves this issue.
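
For illustration, a minimal sketch of that approach, assuming genRequestId draws from a package-level *rand.Rand as in util.go (the names and seeding here are illustrative, not the exact code merged in 4efb4b6):

package snmpgo

import (
    "math/rand"
    "sync"
    "time"
)

// A *rand.Rand is not safe for concurrent use, so all access to it
// is serialized through a mutex.
var (
    randMu  sync.Mutex
    randGen = rand.New(rand.NewSource(time.Now().UnixNano()))
)

func genRequestId() int {
    randMu.Lock()
    defer randMu.Unlock()
    return int(randGen.Int31())
}

Note that the package-level rand.Int31() function is already safe for concurrent use (the global source is locked internally); the race only arises with a private *rand.Rand instance like the one above.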

@k-sone (Owner) commented on Dec 7, 2016

Thank you very much! I had overlooked it 😳

@k-sone merged commit 4efb4b6 into k-sone:master on Dec 7, 2016
@k-sone (Owner) commented on Dec 7, 2016

I added a commit: 22cd474
