Refactor bucket configuration flags for Object Store #500

Merged
merged 43 commits into from
Sep 19, 2018
43 commits
6e403c7
Add unified flags for bucket configuration
jojohappy Aug 31, 2018
5bfbd28
Use the unified bucket config for components
jojohappy Aug 31, 2018
51801e6
Use double quotes instead of single quotes
jojohappy Aug 31, 2018
c8e311b
Fixed missing flags in man page
jojohappy Aug 31, 2018
f5fda0c
To use value of bucket config instead of pointer to get value from co…
jojohappy Aug 31, 2018
af99b66
Remove useless code
jojohappy Aug 31, 2018
75111fc
Update documents
jojohappy Aug 31, 2018
04c94bd
Update
jojohappy Sep 3, 2018
1e40dab
Change errors package
jojohappy Sep 3, 2018
d340fba
To validate the configuration in each provider client
jojohappy Sep 3, 2018
c132691
To support to make bucket configuration flag using suffix
jojohappy Sep 3, 2018
2e999c2
Update documents
jojohappy Sep 3, 2018
c8694e9
Refactor:
jojohappy Sep 7, 2018
173a2f1
Update documents
jojohappy Sep 7, 2018
49e66f9
Fixed missing method for inmem bucket
jojohappy Sep 7, 2018
8758b9c
Update the describe for objstore.config
jojohappy Sep 8, 2018
9f16668
To setup bucket flags required for component bucket and downsample
jojohappy Sep 8, 2018
a8cf8d6
Rename Content to Config for Bucket.Config
jojohappy Sep 8, 2018
e787a31
To change error handler idiom
jojohappy Sep 8, 2018
ff2cbf5
Update describe for component store
jojohappy Sep 8, 2018
4b51cec
To setup secret-key just use envvar
jojohappy Sep 8, 2018
62e0c89
Update the placeholder of flags and documents
jojohappy Sep 8, 2018
7666ac6
Update example for bucket
jojohappy Sep 8, 2018
61c25aa
Update documents
jojohappy Sep 8, 2018
27f9fc2
Update CHANGELOG
jojohappy Sep 8, 2018
b5943ba
Fixed something nits
jojohappy Sep 10, 2018
845148f
To distinguish no bucket is configured or not supported
jojohappy Sep 10, 2018
0e2f7ae
Update CHANGELOG
jojohappy Sep 10, 2018
7411fa7
Remove unrequired unit test
jojohappy Sep 10, 2018
8580d0c
To set bucket flag required for component store and compact
jojohappy Sep 11, 2018
c61b82e
Rename GetBucket to Name & Set context as first argument
jojohappy Sep 11, 2018
c7aa4a9
Wrap error to give more information
jojohappy Sep 11, 2018
31828d7
Rename field bucket to name for the struct of Bucket
jojohappy Sep 11, 2018
6a81755
Update documents
jojohappy Sep 11, 2018
b3f8eb1
Change the bucket configuration flags to reference to YAML file
jojohappy Sep 17, 2018
cb5f75d
Do not get the secret-key from envvar
jojohappy Sep 17, 2018
8c2dedd
Update documents
jojohappy Sep 17, 2018
9ecc7f9
Update CHANGELOG
jojohappy Sep 17, 2018
2b64ce0
Update test case for package objstore client
jojohappy Sep 17, 2018
248034e
Rename flag objstore.config.file to objstore.config-file
jojohappy Sep 18, 2018
4d665f1
Update CHANGELOG
jojohappy Sep 18, 2018
33d1559
To wrap errors for loading and parsing bucket configuration file
jojohappy Sep 19, 2018
5bbb652
Rename objstore flag for consistency
jojohappy Sep 19, 2018
19 changes: 19 additions & 0 deletions CHANGELOG.md
Original file line number Diff line number Diff line change
@@ -8,6 +8,25 @@ NOTE: As semantic versioning states all 0.y.z releases can contain breaking chan

## Unreleased

- Remove support for the following bucket configuration flags:
  - --gcs-bucket=\<bucket\>
  - --s3.bucket=\<bucket\>
  - --s3.endpoint=\<api-url\>
  - --s3.access-key=\<key\>
  - --s3.insecure
  - --s3.signature-version2
  - --s3.encrypt-sse
  - --gcs-backup-bucket=\<bucket\>
  - --s3-backup-bucket=\<bucket\>
- Remove support for the following bucket environment variables:
  - S3_BUCKET
  - S3_ENDPOINT
  - S3_ACCESS_KEY
  - S3_INSECURE
  - S3_SIGNATURE_VERSION2
  - S3_SECRET_KEY
- Add the flag `--objstore.config-file` to reference a bucket configuration file in YAML format. See the [storage](docs/storage.md) documentation for details.
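For context, the file referenced by `--objstore.config-file` selects the provider and carries its options in one place, replacing the per-provider flags above. A sketch of its assumed shape follows; the field names under `config` are illustrative, not authoritative — see docs/storage.md for the real schema of each provider.

```yaml
# Hypothetical bucket configuration file passed via --objstore.config-file.
type: S3            # provider type, e.g. GCS or S3
config:
  bucket: my-bucket
  endpoint: s3.us-east-1.amazonaws.com
  access_key: ACCESS_KEY
  secret_key: SECRET_KEY
  insecure: false
```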

## [v0.1.0](https://github.com/improbable-eng/thanos/releases/tag/v0.1.0) - 2018.09.14

Initial version to have a stable reference before [gossip protocol removal](https://github.com/improbable-eng/thanos/blob/master/docs/proposals/gossip-removal.md).
2 changes: 1 addition & 1 deletion Makefile
@@ -139,7 +139,7 @@ tarballs-release: $(PROMU)
# test runs all Thanos golang tests against each supported version of Prometheus.
.PHONY: test
test: test-deps
@echo ">> running all tests. Do export THANOS_SKIP_GCS_TESTS="true" or/and export THANOS_SKIP_S3_AWS_TESTS="true" if you want to skip e2e tests against real store buckets"
@echo ">> running all tests. Do export THANOS_SKIP_GCS_TESTS='true' or/and export THANOS_SKIP_S3_AWS_TESTS='true' if you want to skip e2e tests against real store buckets"
@for ver in $(SUPPORTED_PROM_VERSIONS); do \
THANOS_TEST_PROMETHEUS_PATH="prometheus-$$ver" THANOS_TEST_ALERTMANAGER_PATH="alertmanager-$(ALERTMANAGER_VERSION)" go test $(shell go list ./... | grep -v /vendor/ | grep -v /benchmark/); \
done
22 changes: 7 additions & 15 deletions cmd/thanos/bucket.go
@@ -12,7 +12,6 @@ import (
"github.com/go-kit/kit/log"
"github.com/improbable-eng/thanos/pkg/block"
"github.com/improbable-eng/thanos/pkg/objstore/client"
"github.com/improbable-eng/thanos/pkg/objstore/s3"
"github.com/improbable-eng/thanos/pkg/runutil"
"github.com/improbable-eng/thanos/pkg/verifier"
"github.com/oklog/run"
@@ -42,34 +41,27 @@
func registerBucket(m map[string]setupFunc, app *kingpin.Application, name string) {
cmd := app.Command(name, "inspect metric data in an object storage bucket")

gcsBucket := cmd.Flag("gcs-bucket", "Google Cloud Storage bucket name for stored blocks.").
PlaceHolder("<bucket>").String()

s3Config := s3.RegisterS3Params(cmd)
bucketConfFile := cmd.Flag("objstore.config-file", "The object store configuration file path.").
PlaceHolder("<bucket.config.path>").Required().String()

// Verify command.
verify := cmd.Command("verify", "verify all blocks in the bucket against specified issues")
verifyRepair := verify.Flag("repair", "attempt to repair blocks for which issues were detected").
Short('r').Default("false").Bool()
// NOTE(bplotka): Currently we support backup buckets only in the same project.
verifyBackupGCSBucket := cmd.Flag("gcs-backup-bucket", "Google Cloud Storage bucket name to backup blocks on repair operations.").
PlaceHolder("<bucket>").String()
verifyBackupS3Bucket := cmd.Flag("s3-backup-bucket", "S3 bucket name to backup blocks on repair operations.").
PlaceHolder("<bucket>").String()
backupBucketConfFile := verify.Flag("objstore-backup.config-file", "The backup object store configuration file path.").
PlaceHolder("<bucket-backup.config.path>").String()
verifyIssues := verify.Flag("issues", fmt.Sprintf("Issues to verify (and optionally repair). Possible values: %v", allIssues())).
Short('i').Default(verifier.IndexIssueID, verifier.OverlappedBlocksIssueID).Strings()
verifyIDWhitelist := verify.Flag("id-whitelist", "Block IDs to verify (and optionally repair) only. "+
"If none is specified, all blocks will be verified. Repeated field").Strings()
m[name+" verify"] = func(g *run.Group, logger log.Logger, reg *prometheus.Registry, _ opentracing.Tracer, _ bool) error {
bkt, err := client.NewBucket(logger, gcsBucket, *s3Config, reg, name)
bkt, err := client.NewBucket(logger, *bucketConfFile, reg, name)
if err != nil {
return err
}
defer runutil.CloseWithLogOnErr(logger, bkt, "bucket client")

backupS3Config := *s3Config
backupS3Config.Bucket = *verifyBackupS3Bucket
backupBkt, err := client.NewBucket(logger, verifyBackupGCSBucket, backupS3Config, reg, name)
backupBkt, err := client.NewBucket(logger, *backupBucketConfFile, reg, name)
if err == client.ErrNotFound {
if *verifyRepair {
return errors.Wrap(err, "repair is specified, so backup client is required")
@@ -129,7 +121,7 @@ func registerBucket(m map[string]setupFunc, app *kingpin.Application, name strin
lsOutput := ls.Flag("output", "Format in which to print each block's information. May be 'json' or custom template.").
Short('o').Default("").String()
m[name+" ls"] = func(g *run.Group, logger log.Logger, reg *prometheus.Registry, _ opentracing.Tracer, _ bool) error {
bkt, err := client.NewBucket(logger, gcsBucket, *s3Config, reg, name)
bkt, err := client.NewBucket(logger, *bucketConfFile, reg, name)
if err != nil {
return err
}
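With the per-provider flags collapsed into one file, `client.NewBucket` can pick the provider client from the parsed configuration instead of inspecting GCS/S3 flags separately. Below is a minimal, self-contained sketch of that dispatch; the parsing is a toy stand-in (the real code unmarshals YAML) and the clients are stubs, so every name here is illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// BucketConfig mirrors the assumed shape of the YAML file: a provider
// type plus a provider-specific config section parsed later by the
// provider client. Field names are illustrative.
type BucketConfig struct {
	Type string
}

// parseBucketConf is a toy stand-in for the real YAML unmarshal step:
// it only extracts the top-level "type:" line.
func parseBucketConf(content []byte) (BucketConfig, error) {
	for _, line := range strings.Split(string(content), "\n") {
		if strings.HasPrefix(line, "type:") {
			typ := strings.TrimSpace(strings.TrimPrefix(line, "type:"))
			return BucketConfig{Type: typ}, nil
		}
	}
	return BucketConfig{}, fmt.Errorf("no type field in bucket config")
}

// newBucket sketches the dispatch NewBucket now performs: choose the
// provider client from the parsed type rather than from per-provider flags.
func newBucket(conf BucketConfig) (string, error) {
	switch strings.ToUpper(conf.Type) {
	case "GCS":
		return "gcs client", nil
	case "S3":
		return "s3 client", nil
	case "":
		return "", fmt.Errorf("no bucket configured") // analogous to client.ErrNotFound
	default:
		return "", fmt.Errorf("unsupported provider: %s", conf.Type)
	}
}

func main() {
	conf, _ := parseBucketConf([]byte("type: S3\nconfig:\n  bucket: demo\n"))
	client, err := newBucket(conf)
	fmt.Println(client, err)
}
```

This also shows why the PR can distinguish "no bucket configured" from "unsupported provider": both are visible after parsing, where the old flag scheme only saw empty strings.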
15 changes: 5 additions & 10 deletions cmd/thanos/compact.go
@@ -11,7 +11,6 @@ import (
"github.com/improbable-eng/thanos/pkg/compact"
"github.com/improbable-eng/thanos/pkg/compact/downsample"
"github.com/improbable-eng/thanos/pkg/objstore/client"
"github.com/improbable-eng/thanos/pkg/objstore/s3"
"github.com/improbable-eng/thanos/pkg/runutil"
"github.com/oklog/run"
"github.com/opentracing/opentracing-go"
@@ -32,10 +31,8 @@ func registerCompact(m map[string]setupFunc, app *kingpin.Application, name stri
dataDir := cmd.Flag("data-dir", "Data directory in which to cache blocks and process compactions.").
Default("./data").String()

gcsBucket := cmd.Flag("gcs.bucket", "Google Cloud Storage bucket name for stored blocks.").
PlaceHolder("<bucket>").String()

s3config := s3.RegisterS3Params(cmd)
bucketConfFile := cmd.Flag("objstore.config-file", "The object store configuration file path.").
PlaceHolder("<bucket.config.path>").Required().String()

syncDelay := modelDuration(cmd.Flag("sync-delay", "Minimum age of fresh (non-compacted) blocks before they are being processed.").
Default("30m"))
@@ -56,8 +53,7 @@ func registerCompact(m map[string]setupFunc, app *kingpin.Application, name stri
return runCompact(g, logger, reg,
*httpAddr,
*dataDir,
*gcsBucket,
s3config,
*bucketConfFile,
time.Duration(*syncDelay),
*haltOnError,
*wait,
@@ -78,8 +74,7 @@ func runCompact(
reg *prometheus.Registry,
httpBindAddr string,
dataDir string,
gcsBucket string,
s3Config *s3.Config,
bucketConfFile string,
syncDelay time.Duration,
haltOnError bool,
wait bool,
@@ -99,7 +94,7 @@

reg.MustRegister(halted)

bkt, err := client.NewBucket(logger, &gcsBucket, *s3Config, reg, component)
bkt, err := client.NewBucket(logger, bucketConfFile, reg, component)
if err != nil {
return err
}
14 changes: 5 additions & 9 deletions cmd/thanos/downsample.go
@@ -16,7 +16,6 @@ import (
"github.com/improbable-eng/thanos/pkg/compact/downsample"
"github.com/improbable-eng/thanos/pkg/objstore"
"github.com/improbable-eng/thanos/pkg/objstore/client"
"github.com/improbable-eng/thanos/pkg/objstore/s3"
"github.com/improbable-eng/thanos/pkg/runutil"
"github.com/oklog/run"
"github.com/oklog/ulid"
@@ -33,13 +32,11 @@ func registerDownsample(m map[string]setupFunc, app *kingpin.Application, name s
dataDir := cmd.Flag("data-dir", "Data directory in which to cache blocks and process downsamplings.").
Default("./data").String()

gcsBucket := cmd.Flag("gcs.bucket", "Google Cloud Storage bucket name for stored blocks.").
PlaceHolder("<bucket>").Required().String()

s3Config := s3.RegisterS3Params(cmd)
bucketConfFile := cmd.Flag("objstore.config-file", "The object store configuration file path.").
PlaceHolder("<bucket.config.path>").Required().String()

m[name] = func(g *run.Group, logger log.Logger, reg *prometheus.Registry, tracer opentracing.Tracer, _ bool) error {
return runDownsample(g, logger, reg, *dataDir, *gcsBucket, s3Config, name)
return runDownsample(g, logger, reg, *dataDir, *bucketConfFile, name)
}
}

@@ -48,12 +45,11 @@ func runDownsample(
logger log.Logger,
reg *prometheus.Registry,
dataDir string,
gcsBucket string,
s3Config *s3.Config,
bucketConfFile string,
component string,
) error {

bkt, err := client.NewBucket(logger, &gcsBucket, *s3Config, reg, component)
bkt, err := client.NewBucket(logger, bucketConfFile, reg, component)
if err != nil {
return err
}
17 changes: 6 additions & 11 deletions cmd/thanos/rule.go
@@ -25,7 +25,6 @@ import (
"github.com/improbable-eng/thanos/pkg/block"
"github.com/improbable-eng/thanos/pkg/cluster"
"github.com/improbable-eng/thanos/pkg/objstore/client"
"github.com/improbable-eng/thanos/pkg/objstore/s3"
"github.com/improbable-eng/thanos/pkg/runutil"
"github.com/improbable-eng/thanos/pkg/shipper"
"github.com/improbable-eng/thanos/pkg/store"
@@ -72,12 +71,10 @@ func registerRule(m map[string]setupFunc, app *kingpin.Application, name string)
alertmgrs := cmd.Flag("alertmanagers.url", "Alertmanager URLs to push firing alerts to. The scheme may be prefixed with 'dns+' or 'dnssrv+' to detect Alertmanager IPs through respective DNS lookups. The port defaults to 9093 or the SRV record's value. The URL path is used as a prefix for the regular Alertmanager API path.").
Strings()

gcsBucket := cmd.Flag("gcs.bucket", "Google Cloud Storage bucket name for stored blocks. If empty, ruler won't store any block inside Google Cloud Storage.").
PlaceHolder("<bucket>").String()

alertQueryURL := cmd.Flag("alert.query-url", "The external Thanos Query URL that would be set in all alerts 'Source' field").String()

s3Config := s3.RegisterS3Params(cmd)
bucketConfFile := cmd.Flag("objstore.config-file", "The object store configuration file path.").
PlaceHolder("<bucket.config.path>").String()

m[name] = func(g *run.Group, logger log.Logger, reg *prometheus.Registry, tracer opentracing.Tracer, _ bool) error {
lset, err := parseFlagLabels(*labelStrs)
@@ -112,8 +109,7 @@ func registerRule(m map[string]setupFunc, app *kingpin.Application, name string)
*dataDir,
*ruleFiles,
peer,
*gcsBucket,
s3Config,
*bucketConfFile,
tsdbOpts,
name,
alertQueryURL,
@@ -136,8 +132,7 @@ func runRule(
dataDir string,
ruleFiles []string,
peer *cluster.Peer,
gcsBucket string,
s3Config *s3.Config,
bucketConfFile string,
tsdbOpts *tsdb.Options,
component string,
alertQueryURL *url.URL,
@@ -415,13 +410,13 @@ func runRule(

// The background shipper continuously scans the data directory and uploads
// new blocks to Google Cloud Storage or an S3-compatible storage service.
bkt, err := client.NewBucket(logger, &gcsBucket, *s3Config, reg, component)
bkt, err := client.NewBucket(logger, bucketConfFile, reg, component)
if err != nil && err != client.ErrNotFound {
return err
}

if err == client.ErrNotFound {
level.Info(logger).Log("msg", "No GCS or S3 bucket was configured, uploads will be disabled")
level.Info(logger).Log("msg", "No supported bucket was configured, uploads will be disabled")
uploads = false
}

19 changes: 7 additions & 12 deletions cmd/thanos/sidecar.go
@@ -16,7 +16,6 @@ import (
"github.com/improbable-eng/thanos/pkg/block"
"github.com/improbable-eng/thanos/pkg/cluster"
"github.com/improbable-eng/thanos/pkg/objstore/client"
"github.com/improbable-eng/thanos/pkg/objstore/s3"
"github.com/improbable-eng/thanos/pkg/reloader"
"github.com/improbable-eng/thanos/pkg/runutil"
"github.com/improbable-eng/thanos/pkg/shipper"
@@ -43,11 +42,6 @@ func registerSidecar(m map[string]setupFunc, app *kingpin.Application, name stri
dataDir := cmd.Flag("tsdb.path", "Data directory of TSDB.").
Default("./data").String()

gcsBucket := cmd.Flag("gcs.bucket", "Google Cloud Storage bucket name for stored blocks. If empty, sidecar won't store any block inside Google Cloud Storage.").
PlaceHolder("<bucket>").String()

s3Config := s3.RegisterS3Params(cmd)

reloaderCfgFile := cmd.Flag("reloader.config-file", "Config file watched by the reloader.").
Default("").String()

@@ -56,6 +50,9 @@ func registerSidecar(m map[string]setupFunc, app *kingpin.Application, name stri

reloaderRuleDirs := cmd.Flag("reloader.rule-dir", "Rule directories for the reloader to refresh (repeated field).").Strings()

bucketConfFile := cmd.Flag("objstore.config-file", "The object store configuration file path.").
PlaceHolder("<bucket.config.path>").String()

m[name] = func(g *run.Group, logger log.Logger, reg *prometheus.Registry, tracer opentracing.Tracer, _ bool) error {
rl := reloader.New(
log.With(logger, "component", "reloader"),
@@ -77,8 +74,7 @@ func registerSidecar(m map[string]setupFunc, app *kingpin.Application, name stri
*httpBindAddr,
*promURL,
*dataDir,
*gcsBucket,
s3Config,
*bucketConfFile,
peer,
rl,
name,
@@ -95,8 +91,7 @@ func runSidecar(
httpBindAddr string,
promURL *url.URL,
dataDir string,
gcsBucket string,
s3Config *s3.Config,
bucketConfFile string,
peer *cluster.Peer,
reloader *reloader.Reloader,
component string,
@@ -224,13 +219,13 @@ func runSidecar(

// The background shipper continuously scans the data directory and uploads
// new blocks to Google Cloud Storage or an S3-compatible storage service.
bkt, err := client.NewBucket(logger, &gcsBucket, *s3Config, reg, component)
bkt, err := client.NewBucket(logger, bucketConfFile, reg, component)
if err != nil && err != client.ErrNotFound {
return err
}

if err == client.ErrNotFound {
level.Info(logger).Log("msg", "No GCS or S3 bucket was configured, uploads will be disabled")
level.Info(logger).Log("msg", "No supported bucket was configured, uploads will be disabled")
uploads = false
}

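The `ErrNotFound` branch in the sidecar and rule diffs above is what lets those components treat the bucket as optional (uploads disabled) while store, compact, and bucket mark `--objstore.config-file` as `Required()`. A stub sketch of that optional-bucket pattern; the function and error names are illustrative, not the real API.

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for client.ErrNotFound: the config file flag
// was left empty, i.e. no bucket was configured at all.
var errNotFound = errors.New("no bucket config provided")

// newBucketClient is a stub for client.NewBucket keyed on the flag value.
func newBucketClient(confFile string) (string, error) {
	if confFile == "" {
		return "", errNotFound
	}
	return "bucket client", nil
}

// setupUploads mirrors the sidecar/rule behaviour: a missing bucket is
// tolerated and merely disables uploads; any other error is fatal.
func setupUploads(confFile string) (uploads bool, err error) {
	_, err = newBucketClient(confFile)
	if err != nil && !errors.Is(err, errNotFound) {
		return false, err
	}
	if errors.Is(err, errNotFound) {
		fmt.Println("msg", "No supported bucket was configured, uploads will be disabled")
		return false, nil
	}
	return true, nil
}

func main() {
	uploads, _ := setupUploads("")
	fmt.Println("uploads:", uploads)
}
```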
19 changes: 7 additions & 12 deletions cmd/thanos/store.go
@@ -10,7 +10,6 @@ import (
"github.com/go-kit/kit/log/level"
"github.com/improbable-eng/thanos/pkg/cluster"
"github.com/improbable-eng/thanos/pkg/objstore/client"
"github.com/improbable-eng/thanos/pkg/objstore/s3"
"github.com/improbable-eng/thanos/pkg/runutil"
"github.com/improbable-eng/thanos/pkg/store"
"github.com/improbable-eng/thanos/pkg/store/storepb"
@@ -24,17 +23,15 @@

// registerStore registers a store command.
func registerStore(m map[string]setupFunc, app *kingpin.Application, name string) {
cmd := app.Command(name, "store node giving access to blocks in a GCS bucket")
cmd := app.Command(name, "store node giving access to blocks in a supported bucket provider (currently GCS and S3)")

grpcBindAddr, httpBindAddr, newPeerFn := regCommonServerFlags(cmd)

dataDir := cmd.Flag("data-dir", "Data directory in which to cache remote blocks.").
Default("./data").String()

gcsBucket := cmd.Flag("gcs.bucket", "Google Cloud Storage bucket name for stored blocks. If empty sidecar won't store any block inside Google Cloud Storage.").
PlaceHolder("<bucket>").String()

s3Config := s3.RegisterS3Params(cmd)
bucketConfFile := cmd.Flag("objstore.config-file", "The object store configuration file path.").
PlaceHolder("<bucket.config.path>").Required().String()

indexCacheSize := cmd.Flag("index-cache-size", "Maximum size of items held in the index cache.").
Default("250MB").Bytes()
@@ -51,8 +48,7 @@ func registerStore(m map[string]setupFunc, app *kingpin.Application, name string
logger,
reg,
tracer,
*gcsBucket,
s3Config,
*bucketConfFile,
*dataDir,
*grpcBindAddr,
*httpBindAddr,
@@ -71,8 +67,7 @@ func runStore(
logger log.Logger,
reg *prometheus.Registry,
tracer opentracing.Tracer,
gcsBucket string,
s3Config *s3.Config,
bucketConfFile string,
dataDir string,
grpcBindAddr string,
httpBindAddr string,
@@ -83,9 +78,9 @@
verbose bool,
) error {
{
bkt, err := client.NewBucket(logger, &gcsBucket, *s3Config, reg, component)
bkt, err := client.NewBucket(logger, bucketConfFile, reg, component)
if err != nil {
return err
return errors.Wrap(err, "create bucket client")
}

// Ensure we close up everything properly.
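The `errors.Wrap(err, "create bucket client")` change in store.go annotates where a bad configuration file surfaced, instead of returning the bare error. A sketch of the same idiom using only the standard library's `fmt.Errorf` with `%w` (the PR itself uses `github.com/pkg/errors`); the stub names are illustrative.

```go
package main

import (
	"errors"
	"fmt"
)

// newBucketFromFile is a stub standing in for client.NewBucket.
func newBucketFromFile(path string) (string, error) {
	if path == "" {
		return "", errors.New("no bucket config provided")
	}
	return "bucket client", nil
}

// runStore sketches the wrapping idiom: annotate the failure site so a
// caller immediately sees that a bad --objstore.config-file value was
// the cause, while %w keeps the original error unwrappable.
func runStore(bucketConfFile string) error {
	if _, err := newBucketFromFile(bucketConfFile); err != nil {
		return fmt.Errorf("create bucket client: %w", err)
	}
	return nil
}

func main() {
	fmt.Println(runStore(""))
}
```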