Consumer performance benchmarking #1853
Comments
The performance is pretty bad. Can you check which part of the code is taking the most time? Are you performing some IO operation between consuming messages? Are you sure that you are using Apache Kafka version 0.1.0? 0.7.0 was released in 2012.
Hi Pranav, Thanks
Can you log the time for each step? Getting a message from the Python client should not be taking a lot of time.
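As a minimal sketch of the kind of per-step timing being asked for (not code from this thread; the broker address, group id, topic, and `process()` body are placeholders), instrumenting the consume loop separates time spent inside `poll()` from time spent in your own processing:

```python
import logging
import time

from confluent_kafka import Consumer

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("consumer-bench")

def process(msg):
    # Stand-in for your per-message work (parsing, IO, DB writes, ...).
    pass

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder for the MSK brokers
    "group.id": "bench-group",              # placeholder
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["my-topic"])            # placeholder topic

try:
    while True:
        t0 = time.monotonic()
        msg = consumer.poll(1.0)   # time spent waiting on the client/broker
        t1 = time.monotonic()
        if msg is None or msg.error():
            continue
        process(msg)               # time spent in your own code
        t2 = time.monotonic()
        log.info("poll=%.4fs process=%.4fs", t1 - t0, t2 - t1)
finally:
    consumer.close()
```

If `process` dominates, the bottleneck is the application's IO rather than the client; if `poll` dominates, fetch tuning and prefetching are the place to look.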
Sure Pranav. Will do and revert, but it might take some time. Thanks
Hi, Can you share if any benchmarking was done earlier? Thanks
Hi @pranavrth - By default, do we need to add any threading to improve performance? I run a single-threaded application in both cases (kafka-python and confluent-kafka), and the number of partitions is more than 1. Ideally, the number of threads could equal the number of partitions when ordering needs to be maintained. Any suggestions to further improve the performance of the confluent-kafka based consumer? Also, how do we overcome the consumer poll taking more time in confluent-kafka?
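One common shape for the "threads equal to the number of partitions" idea mentioned above is a consumer per partition, each in its own thread, using `assign()` so ordering is kept within each partition. This is a hedged sketch of that pattern, not a pattern prescribed by the library; the broker address, group id, topic, and partition count are placeholders:

```python
import threading

from confluent_kafka import Consumer, TopicPartition

NUM_PARTITIONS = 4  # placeholder: match your topic's partition count

def run_partition_consumer(partition):
    # One Consumer instance per thread; Consumer objects should not be
    # shared across threads.
    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",  # placeholder for the MSK brokers
        "group.id": "bench-group",              # placeholder
        "auto.offset.reset": "earliest",
    })
    # assign() pins this consumer to a single partition, so per-partition
    # ordering is preserved while partitions run in parallel.
    consumer.assign([TopicPartition("my-topic", partition)])
    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None or msg.error():
                continue
            # per-message work goes here
    finally:
        consumer.close()

threads = [
    threading.Thread(target=run_partition_consumer, args=(p,), daemon=True)
    for p in range(NUM_PARTITIONS)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```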
Description
We are using a basic consumer built on this library to consume messages from MSK. Performance-wise, it does not meet our defined requirements. Can anyone suggest pointers on how to optimise or tune the parameters to get the best performance? To process 5k messages it currently takes around 24-45 minutes, but our requirement is less than 5 minutes. Properties used are as below.
How to reproduce
NA
Checklist
Please provide the following information:
- confluent-kafka-python and librdkafka version (confluent_kafka.version() and confluent_kafka.libversion()): 2.6.0
- Apache Kafka broker version: MSK
- Client configuration: {...}
- Operating system: ECS Fargate service
- Provide client logs (with 'debug': '..' as necessary): NA