add objects list caching for boltdb-shipper index store to reduce object storage list api calls #5160

Merged

Changes from 1 commit
150 changes: 150 additions & 0 deletions pkg/storage/stores/shipper/storage/cached_client.go
@@ -0,0 +1,150 @@
package storage

import (
	"context"
	"fmt"
	"path"
	"strings"
	"sync"
	"time"

	"github.com/go-kit/log/level"

	util_log "github.com/cortexproject/cortex/pkg/util/log"

	"github.com/grafana/loki/pkg/storage/chunk"
)

const (
	cacheTimeout = 1 * time.Minute
)

type table struct {
	commonObjects []chunk.StorageObject
	userIDs       []chunk.StorageCommonPrefix
	userObjects   map[string][]chunk.StorageObject
}

// cachedObjectClient wraps an ObjectClient and caches the decomposed result of
// listing all the objects in storage, so that the frequent List calls from the
// index-gateway and the compactor do not each hit the object storage list API.
type cachedObjectClient struct {
	chunk.ObjectClient

	tables       map[string]*table
	tableNames   []chunk.StorageCommonPrefix
	tablesMtx    sync.RWMutex
	cacheBuiltAt time.Time

	rebuildCacheChan chan struct{}
	err              error
}

func newCachedObjectClient(downstreamClient chunk.ObjectClient) *cachedObjectClient {
	return &cachedObjectClient{
		ObjectClient:     downstreamClient,
		tables:           map[string]*table{},
		rebuildCacheChan: make(chan struct{}, 1),
	}
}

// List serves requests for table names (prefix ""), objects in a table
// ("<table>/"), and a user's objects within a table ("<table>/<user>/") from
// the cache, rebuilding it when it is older than cacheTimeout.
func (c *cachedObjectClient) List(ctx context.Context, prefix, _ string) ([]chunk.StorageObject, []chunk.StorageCommonPrefix, error) {
	prefix = strings.TrimSuffix(prefix, delimiter)
	ss := strings.Split(prefix, delimiter)
	if len(ss) > 2 {
		return nil, nil, fmt.Errorf("invalid prefix %s", prefix)
	}

	if !c.cacheBuiltAt.Add(cacheTimeout).After(time.Now()) {
Review comment (Contributor):

Could also be written as:

Suggested change
-	if !c.cacheBuiltAt.Add(cacheTimeout).After(time.Now()) {
+	if time.Since(c.cacheBuiltAt) > cacheTimeout {

which I find easier to read.
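
(For reference: time.Since(t) is defined as time.Now().Sub(t), so the two conditions agree everywhere except at the exact cacheTimeout boundary, where the original treats an exactly-expired cache as stale and the suggested form does not.)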

		// Only one goroutine gets to rebuild the cache: a successful send on the
		// buffered channel acts as a try-lock, while everyone else waits in the
		// default branch until the rebuild finishes or fails.
		select {
		case c.rebuildCacheChan <- struct{}{}:
			c.err = nil
			c.err = c.buildCache(ctx)
			<-c.rebuildCacheChan
			if c.err != nil {
				level.Error(util_log.Logger).Log("msg", "failed to build cache", "err", c.err)
			}
		default:
			for !c.cacheBuiltAt.Add(cacheTimeout).After(time.Now()) && c.err == nil {
				time.Sleep(time.Millisecond)
Review comment (Contributor @cyriltovena, Jan 19, 2022):

A for loop with time.Sleep is a no-no!

You want to use the promise pattern instead. Not sure if we can avoid a lock / RW lock.

Reply (Contributor Author):

I added a sync.WaitGroup to make all the goroutines attempting to build the cache wait until the operation is over. Can you please check whether it looks good now?

Reply (Contributor @cyriltovena):

Yep, looks good.
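
A minimal sketch of the promise-style pattern the thread converges on, assuming a sync.WaitGroup gates the concurrent rebuilds; cachePromise and its method are illustrative names, not the code that was eventually committed:

package storage

import "sync"

// cachePromise lets exactly one caller run the expensive rebuild while all
// concurrent callers wait on a WaitGroup and share the same result.
type cachePromise struct {
	mtx sync.Mutex
	wg  *sync.WaitGroup // non-nil while a rebuild is in flight
	err error           // outcome of the most recent rebuild
}

func (p *cachePromise) do(build func() error) error {
	p.mtx.Lock()
	if wg := p.wg; wg != nil {
		// A rebuild is already running: wait for it instead of polling.
		p.mtx.Unlock()
		wg.Wait()
		p.mtx.Lock()
		defer p.mtx.Unlock()
		return p.err
	}
	p.wg = &sync.WaitGroup{}
	p.wg.Add(1)
	p.mtx.Unlock()

	err := build()

	p.mtx.Lock()
	p.err = err
	p.wg.Done() // wake every waiter
	p.wg = nil
	p.mtx.Unlock()
	return err
}

The difference from the sleep loop is that waiters are woken exactly when the build completes and observe its error directly, instead of polling cacheBuiltAt every millisecond.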

			}
		}
Review comment (Contributor):

I have a hard time understanding why you chose to use a channel here. I assume it is to block concurrent access in List(): the first call builds the cache while all the others wait until it is built?

Reply (Contributor Author @sandeepsukhani, Jan 19, 2022):

Yeah, just the first (or one) of the concurrent calls to List should get to build the cache, while the others wait for it to finish, successfully or with an error. I will add a comment to make it clearer.
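
For context, a buffered channel of capacity 1 is a common Go try-lock: a non-blocking send succeeds for exactly one goroutine. A standalone sketch of the idiom, independent of the PR code:

package main

import "fmt"

func main() {
	tryLock := make(chan struct{}, 1) // capacity 1: only one send can ever be pending

	select {
	case tryLock <- struct{}{}:
		// This goroutine won the slot and performs the rebuild.
		fmt.Println("rebuilding cache")
		<-tryLock // release the slot when done
	default:
		// Another goroutine is already rebuilding; wait for its result instead.
		fmt.Println("rebuild already in progress")
	}
}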

	}

	if c.err != nil {
		return nil, nil, c.err
	}

	c.tablesMtx.RLock()
	defer c.tablesMtx.RUnlock()

	if prefix == "" {
		return []chunk.StorageObject{}, c.tableNames, nil
	} else if len(ss) == 1 {
		tableName := ss[0]
		table, ok := c.tables[tableName]
		if !ok {
			return []chunk.StorageObject{}, []chunk.StorageCommonPrefix{}, nil
		}

		return table.commonObjects, table.userIDs, nil
	} else {
		tableName := ss[0]
		table, ok := c.tables[tableName]
		if !ok {
			return []chunk.StorageObject{}, []chunk.StorageCommonPrefix{}, nil
		}

		userID := ss[1]
		objects, ok := table.userObjects[userID]
		if !ok {
			return []chunk.StorageObject{}, []chunk.StorageCommonPrefix{}, nil
		}

		return objects, []chunk.StorageCommonPrefix{}, nil
	}
}

func (c *cachedObjectClient) buildCache(ctx context.Context) error {
	if c.cacheBuiltAt.Add(cacheTimeout).After(time.Now()) {
		return nil
	}

	objects, _, err := c.ObjectClient.List(ctx, "", "")
	if err != nil {
		return err
	}

	c.tablesMtx.Lock()
	defer c.tablesMtx.Unlock()

	c.tables = map[string]*table{}
	c.tableNames = []chunk.StorageCommonPrefix{} // reset alongside tables so a rebuild does not accumulate duplicate names
Review comment (Contributor, on lines +133 to +136):

Could we decrease the lock time by assigning c.tables at the very end?

Suggested change
-	c.tablesMtx.Lock()
-	defer c.tablesMtx.Unlock()
-	c.tables = map[string]*table{}
+	new_tables := map[string]*table{}
+	...
+	c.tablesMtx.Lock()
+	defer c.tablesMtx.Unlock()
+	c.tables = new_tables
+	c.cacheBuiltAt = time.Now()
+	return nil

Reply (Contributor Author):

We want to keep it locked until we build the cache, to avoid returning stale results. Most of these list calls happen asynchronously, so I am refreshing the cache on demand instead of running a goroutine that refreshes it every minute, since by default we do these operations every 5 minutes in the index-gateway and every 10 minutes in the compactor.
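
For illustration, the reviewer's build-then-swap variant would look roughly like this (a sketch only: buildCacheSwap is a hypothetical name, and the populate step stands in for the loop shown below):

// Hypothetical variant that narrows the critical section: build into a local
// map first, then swap it in under the write lock.
func (c *cachedObjectClient) buildCacheSwap(ctx context.Context) error {
	objects, _, err := c.ObjectClient.List(ctx, "", "")
	if err != nil {
		return err
	}

	newTables := map[string]*table{}
	// ... populate newTables from objects, mirroring the loop below ...

	c.tablesMtx.Lock()
	defer c.tablesMtx.Unlock()
	c.tables = newTables
	c.cacheBuiltAt = time.Now()
	return nil
}

As the author notes, this would let readers that arrive mid-rebuild see the old, stale tables, which is exactly the trade-off the PR avoids by holding the write lock for the whole rebuild.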


	for _, object := range objects {
		ss := strings.Split(object.Key, delimiter)
		if len(ss) < 2 || len(ss) > 3 {
			return fmt.Errorf("invalid key: %s", object.Key)
		}
		tableName := ss[0]
		tbl, ok := c.tables[tableName]
		if !ok {
			tbl = &table{
				commonObjects: []chunk.StorageObject{},
				userObjects:   map[string][]chunk.StorageObject{},
				userIDs:       []chunk.StorageCommonPrefix{},
			}
			c.tables[tableName] = tbl
			c.tableNames = append(c.tableNames, chunk.StorageCommonPrefix(tableName))
		}

		if len(ss) == 2 {
			tbl.commonObjects = append(tbl.commonObjects, object)
		} else {
			userID := ss[1]
			if len(tbl.userObjects[userID]) == 0 {
				tbl.userIDs = append(tbl.userIDs, chunk.StorageCommonPrefix(path.Join(tableName, userID)))
			}
			tbl.userObjects[userID] = append(tbl.userObjects[userID], object)
		}
	}

	c.cacheBuiltAt = time.Now()
	return nil
}
183 changes: 183 additions & 0 deletions pkg/storage/stores/shipper/storage/cached_client_test.go
@@ -0,0 +1,183 @@
package storage

import (
	"context"
	"errors"
	"sync"
	"testing"
	"time"

	"github.com/stretchr/testify/require"

	"github.com/grafana/loki/pkg/storage/chunk"
)

type mockObjectClient struct {
	chunk.ObjectClient
	storageObjects []chunk.StorageObject
	errResp        error
	listCallsCount int
	listDelay      time.Duration
}

func newMockObjectClient(objects []string) *mockObjectClient {
	storageObjects := make([]chunk.StorageObject, 0, len(objects))
	for _, objectName := range objects {
		storageObjects = append(storageObjects, chunk.StorageObject{
			Key: objectName,
		})
	}

	return &mockObjectClient{
		storageObjects: storageObjects,
	}
}

func (m *mockObjectClient) List(_ context.Context, _, _ string) ([]chunk.StorageObject, []chunk.StorageCommonPrefix, error) {
	defer func() {
		time.Sleep(m.listDelay)
		m.listCallsCount++
	}()

	if m.errResp != nil {
		return nil, nil, m.errResp
	}

	return m.storageObjects, []chunk.StorageCommonPrefix{}, nil
}

func TestCachedObjectClient(t *testing.T) {
	objectsInStorage := []string{
		// table with just common dbs
		"table1/db1.gz",
		"table1/db2.gz",

		// table with both common and user dbs
		"table2/db1.gz",
		"table2/user1/db1.gz",

		// table with just user dbs
		"table3/user1/db1.gz",
		"table3/user1/db2.gz",
	}

	objectClient := newMockObjectClient(objectsInStorage)
	cachedObjectClient := newCachedObjectClient(objectClient)

	// list tables
	objects, commonPrefixes, err := cachedObjectClient.List(context.Background(), "", "")
	require.NoError(t, err)
	require.Equal(t, 1, objectClient.listCallsCount)
	require.Equal(t, objects, []chunk.StorageObject{})
Review comment (Contributor):

Arguments of the Equal function are in the "incorrect" order:

Suggested change
-	require.Equal(t, objects, []chunk.StorageObject{})
+	require.Equal(t, []chunk.StorageObject{}, objects)

The function interface is

func Equal(t TestingT, expected interface{}, actual interface{}, msgAndArgs ...interface{})

This isn't a problem as long as expected and actual are equal, but the test error message is misleading in case they aren't.

Reply (Contributor):

Guess this is not only a problem in your test; we have that all over the place.

Reply (Contributor Author):

Yeah, sorry, I messed up the order. Fixed it.
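
A small self-contained illustration of why the order matters, using hypothetical values and testify's assert package (whose TestingT interface only requires Errorf):

package main

import (
	"fmt"

	"github.com/stretchr/testify/assert"
)

// recorder satisfies assert.TestingT so failure messages can be observed.
type recorder struct{ failures []string }

func (r *recorder) Errorf(format string, args ...interface{}) {
	r.failures = append(r.failures, fmt.Sprintf(format, args...))
}

func main() {
	got, want := 2, 3
	r := &recorder{}
	assert.Equal(r, want, got) // correct order: failure message reads expected: 3, actual: 2
	assert.Equal(r, got, want) // swapped order: failure message reads expected: 2, actual: 3
	fmt.Println(len(r.failures), "failure messages recorded")
}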

	require.Equal(t, commonPrefixes, []chunk.StorageCommonPrefix{"table1", "table2", "table3"})

	// list objects in all 3 tables
	objects, commonPrefixes, err = cachedObjectClient.List(context.Background(), "table1/", "")
	require.NoError(t, err)
	require.Equal(t, 1, objectClient.listCallsCount)
	require.Equal(t, objects, []chunk.StorageObject{
		{Key: "table1/db1.gz"},
		{Key: "table1/db2.gz"},
	})
	require.Equal(t, []chunk.StorageCommonPrefix{}, commonPrefixes)

	objects, commonPrefixes, err = cachedObjectClient.List(context.Background(), "table2/", "")
	require.NoError(t, err)
	require.Equal(t, 1, objectClient.listCallsCount)
	require.Equal(t, objects, []chunk.StorageObject{
		{Key: "table2/db1.gz"},
	})
	require.Equal(t, []chunk.StorageCommonPrefix{"table2/user1"}, commonPrefixes)

	objects, commonPrefixes, err = cachedObjectClient.List(context.Background(), "table3/", "")
	require.NoError(t, err)
	require.Equal(t, 1, objectClient.listCallsCount)
	require.Equal(t, []chunk.StorageObject{}, objects)
	require.Equal(t, []chunk.StorageCommonPrefix{"table3/user1"}, commonPrefixes)

	// list user objects from table2 and table3
	objects, commonPrefixes, err = cachedObjectClient.List(context.Background(), "table2/user1/", "")
	require.NoError(t, err)
	require.Equal(t, 1, objectClient.listCallsCount)
	require.Equal(t, []chunk.StorageObject{
		{
			Key: "table2/user1/db1.gz",
		},
	}, objects)
	require.Equal(t, []chunk.StorageCommonPrefix{}, commonPrefixes)

	objects, commonPrefixes, err = cachedObjectClient.List(context.Background(), "table3/user1/", "")
	require.NoError(t, err)
	require.Equal(t, 1, objectClient.listCallsCount)
	require.Equal(t, []chunk.StorageObject{
		{Key: "table3/user1/db1.gz"},
		{Key: "table3/user1/db2.gz"},
	}, objects)
	require.Equal(t, []chunk.StorageCommonPrefix{}, commonPrefixes)

	// list non-existent table
	objects, commonPrefixes, err = cachedObjectClient.List(context.Background(), "table4/", "")
	require.NoError(t, err)
	require.Equal(t, 1, objectClient.listCallsCount)
	require.Equal(t, []chunk.StorageObject{}, objects)
	require.Equal(t, []chunk.StorageCommonPrefix{}, commonPrefixes)

	// list non-existent user
	objects, commonPrefixes, err = cachedObjectClient.List(context.Background(), "table3/user2/", "")
	require.NoError(t, err)
	require.Equal(t, 1, objectClient.listCallsCount)
	require.Equal(t, []chunk.StorageObject{}, objects)
	require.Equal(t, []chunk.StorageCommonPrefix{}, commonPrefixes)
}

func TestCachedObjectClient_errors(t *testing.T) {
	objectsInStorage := []string{
		// table with just common dbs
		"table1/db1.gz",
		"table1/db2.gz",
	}

	objectClient := newMockObjectClient(objectsInStorage)
	cachedObjectClient := newCachedObjectClient(objectClient)

	// do the initial listing
	objects, commonPrefixes, err := cachedObjectClient.List(context.Background(), "", "")
	require.NoError(t, err)
	require.Equal(t, 1, objectClient.listCallsCount)
	require.Equal(t, objects, []chunk.StorageObject{})
	require.Equal(t, commonPrefixes, []chunk.StorageCommonPrefix{"table1"})

	// time out the cache and call List concurrently with objectClient throwing an error.
	// objectClient must receive just one request, and all the cachedObjectClient.List calls should get an error
	wg := sync.WaitGroup{}
	cachedObjectClient.cacheBuiltAt = time.Now().Add(-(cacheTimeout + time.Second))
	objectClient.listDelay = time.Millisecond * 100
	objectClient.errResp = errors.New("fake error")
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_, _, err := cachedObjectClient.List(context.Background(), "", "")
			require.Error(t, err)
			require.Equal(t, 2, objectClient.listCallsCount)
		}()
	}

	wg.Wait()

	// clear the error and call List concurrently again.
	// objectClient must receive just one request, and none of the calls should get an error
	objectClient.errResp = nil
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			objects, commonPrefixes, err = cachedObjectClient.List(context.Background(), "", "")
			require.NoError(t, err)
			require.Equal(t, 3, objectClient.listCallsCount)
			require.Equal(t, objects, []chunk.StorageObject{})
			require.Equal(t, commonPrefixes, []chunk.StorageCommonPrefix{"table1"})
		}()
	}
	wg.Wait()
}