Performance issues when getting all records. #491
I would say there is only one valid case when this situation can happen: the query result is paginated with a 1 MB size limit (see the documentation). So please check the result of each call and compare the number of returned records. If they are equal, we have a performance issue in Dynamoid. Otherwise it's the expected result.
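The pagination behavior described above can be sketched in plain Ruby. This is a minimal stand-in, not the AWS SDK: `FakeClient` and its page contents are hypothetical, but the loop mirrors how a client follows `last_evaluated_key` until it is `nil`:

```ruby
# Minimal sketch of DynamoDB-style pagination. FakeClient is a hypothetical
# stand-in whose #scan returns { items:, last_evaluated_key: } in the shape
# of an AWS SDK response; DynamoDB caps each real page at 1 MB.
class FakeClient
  PAGES = [
    { items: [1, 2, 3], last_evaluated_key: "page2" },
    { items: [4, 5],    last_evaluated_key: nil }, # nil => no more pages
  ].freeze

  def scan(exclusive_start_key: nil)
    exclusive_start_key.nil? ? PAGES[0] : PAGES[1]
  end
end

# Follow last_evaluated_key until the data set is exhausted.
def scan_all(client)
  items = []
  key = nil
  loop do
    page = client.scan(exclusive_start_key: key)
    items.concat(page[:items])
    key = page[:last_evaluated_key]
    break if key.nil?
  end
  items
end
```

Running `scan_all(FakeClient.new)` returns `[1, 2, 3, 4, 5]`, the union of both pages; comparing the per-page item counts between two clients is the check suggested above.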
The number of returned records is exactly the same, a total of …
Well, I would like to check the logs and compare the API requests in both cases, to ensure the performance issue is related to Dynamoid internals and not to the generated API requests to DynamoDB. Could you please provide the logged requests to DynamoDB for both cases? For Dynamoid you can enable logging this way:

```ruby
Dynamoid::Config.logger.level = :debug
```
I have another idea about what could happen. If a hash key name in a table is not the default, e.g.

```ruby
table key: :the_hash_key
```

Dynamoid doesn't analyze the DynamoDB table schema (maybe we have to change this) and relies on the schema declared explicitly in code. It will be clear when we have the logged API requests to DynamoDB.
I'm not seeing any log output even after adding that configuration. But maybe it's helpful to note that I have a range key as well as a hash key, and my hash key is also declared as a field:

```ruby
table capacity_mode: :provisioned,
      key: :the_hash_key,
      read_capacity: 350,
      write_capacity: 100
range :the_sort_key, :string
field :the_hash_key, :integer
```
Confirming that removing that unneeded `field` declaration … Actually, without that I'm getting this error:

```ruby
@attributes=
  {:created_at=>Mon, 18 Jan 2021 09:12:30 +0000,
   :updated_at=>Mon, 18 Jan 2021 09:12:30 +0000,
   :the_hash_key=>"209963.0",
  },
@attributes_before_type_cast=
  {:created_at=>Mon, 18 Jan 2021 09:12:30 +0000,
   :updated_at=>Mon, 18 Jan 2021 09:12:30 +0000,
   :the_hash_key=>0.209963e6,
  }
```

Is there any way I can drop the …
Yeah, so removing the field just means that Dynamoid now assumes the key is of the "string" type, which causes the mismatch with DynamoDB. My initial tests were flawed, so I think this long query lag is still a problem. I'll keep trying to get log output.
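The `"209963.0"` value in the dump above is consistent with that default: with no declared field type, the numeric value returned by the datastore gets cast to a string. A minimal illustration in plain Ruby (this is the shape of the mismatch, not Dynamoid's actual casting code):

```ruby
# A numeric value (as in @attributes_before_type_cast) stringified under a
# default :string field type (as in @attributes). Plain-Ruby sketch only.
before_type_cast = 0.209963e6        # Float literal, i.e. 209963.0
after_type_cast  = before_type_cast.to_s

puts after_type_cast                 # => "209963.0" -- a String, not an Integer
```

Declaring `field :the_hash_key, :integer` is what tells Dynamoid to keep the value numeric instead.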
Regarding logging. This code snippet:

```ruby
class MyDynamoidModel
  include Dynamoid::Document

  table capacity_mode: :provisioned,
        key: :the_hash_key,
        read_capacity: 350,
        write_capacity: 100
  range :the_sort_key, :string
  field :the_hash_key, :integer
end

Dynamoid.logger.level = :debug

MyDynamoidModel.create_table
MyDynamoidModel.where(the_hash_key: 1).all.to_a
# => []
```

produces the following logs:
I assume that the AWS client log formatter could not be set, so please try to set any from the list described in #459. Anyway, you can just provide complete code snippets to reproduce the issue, both for Dynamoid (with the model class declaration) and for the AWS SDK.
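For reference, wiring up a debug-level logger with an explicit formatter uses only the standard `::Logger` API. This sketch assumes the library accepts any standard Ruby logger (e.g. assigning it via `Dynamoid.logger = logger`); the buffer and message are illustrative:

```ruby
require "logger"
require "stringio"

# Build a debug-level Logger with a custom formatter, captured into a
# StringIO buffer here so the output is easy to inspect.
buffer = StringIO.new
logger = Logger.new(buffer)
logger.level = Logger::DEBUG
logger.formatter = proc do |severity, _time, _progname, msg|
  "#{severity}: #{msg}\n"
end

logger.debug("Query table=my_table key=1")
puts buffer.string # => "DEBUG: Query table=my_table key=1\n"
```

In a real session you would log to `$stdout` instead of a `StringIO`, and hand the logger to the library under test.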
Hello, we're new to using Dynamoid, and DynamoDB in general. I'm hoping someone can explain why there is a large difference in latency between these calls:

-VS-

The first version runs in about 28 s. The second one in about 1.5 s. Having to call `to_a` is a little weird; maybe there's a preferred way?
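A plausible explanation for the `to_a` behavior is lazy evaluation: building a query chain is cheap, and records are only fetched when the result is materialized. A minimal sketch of that pattern in plain Ruby (using `Enumerator::Lazy` as a stand-in, not Dynamoid's actual internals):

```ruby
# Lazy chains defer work: the (hypothetical) per-record fetch only runs
# when the result is forced, e.g. with #first or #to_a.
calls = 0
records = (1..1_000).lazy.map do |id|
  calls += 1          # stands in for a per-record fetch
  { id: id }
end

chain = records.select { |r| r[:id] <= 3 }  # still no work done
puts calls            # => 0

result = chain.first(3)                     # forcing the chain does the work
puts calls            # => 3
puts result.size      # => 3
```

Under this reading, the slow call eagerly pulls every record while the fast one defers the fetch, which would explain why adding `to_a` changes where the time is spent.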