Append callback error: The memory mapped buffer for log of test-topic-0000000-0150-106 is already closed #534

Closed
Chillax-0v0 opened this issue Dec 11, 2023 · 1 comment · Fixed by #536

@Chillax-0v0 (Contributor)

server.log:

[ec2-user@i-03a72abb12e71394a automq]$ less logs/server.log | grep test-topic-0000000-0150-106
[2023-12-11 08:32:13,806] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test-topic-0000000-0150-106) (kafka.server.ReplicaFetcherManager)
[2023-12-11 08:32:17,349] INFO [ElasticLog partition=test-topic-0000000-0150-106 epoch=2] opened existing meta stream: stream_id=353 (kafka.log.streamaspect.ElasticLog$)
[2023-12-11 08:32:17,523] INFO [ElasticLog partition=test-topic-0000000-0150-106 epoch=2] loaded partition meta: ElasticPartitionMeta{startOffset=0, cleanerOffset=0, recoverOffset=7244, cleanedShutdown=true} (kafka.log.streamaspect.ElasticLog$)
[2023-12-11 08:32:17,523] INFO [ElasticLog partition=test-topic-0000000-0150-106 epoch=2] loaded 1 producer snapshots, offsets(filenames) are Set(7244)  (kafka.log.streamaspect.ElasticLog$)
[2023-12-11 08:32:17,560] INFO [ElasticProducerStateManager partition=test-topic-0000000-0150-106] Loading producer state from snapshot file 'SnapshotFile(7244.snapshot,7244)' (kafka.log.streamaspect.ElasticProducerStateManager)
[2023-12-11 08:32:17,581] INFO [ElasticLogLoader partition=test-topic-0000000-0150-106, dir=/data/kafka/kraft-broker-logs] Producer state recovery took 22ms for snapshot load and 0ms for segment recovery from offset 7244 (kafka.log.streamaspect.ElasticUnifiedLog$)
[2023-12-11 08:32:17,581] INFO [ElasticLog partition=test-topic-0000000-0150-106 epoch=2] loaded log meta: ElasticLogMeta{streamMap={log=414, tim=461, txn=-1}, lastNthSegmentMetas=[ElasticStreamSegmentMeta{baseOffset=0, createTimestamp=1702283488521, lastModifiedTimestamp=1702283527281, streamSuffix='', logSize=29781394, log=[0, -1], time=[0, -1], txn=[0, -1], firstBatchTimestamp=1702283494681, timeIndexLastEntry=TimestampOffsetData{timestamp=1702283517186, offset=5102}}]} (kafka.log.streamaspect.ElasticLog$)
[2023-12-11 08:32:17,581] INFO [ElasticLog partition=test-topic-0000000-0150-106 epoch=2] loaded leader epoch checkpoint with 1 entries (kafka.log.streamaspect.ElasticLog$)
[2023-12-11 08:32:17,581] INFO [ElasticLog partition=test-topic-0000000-0150-106 epoch=2] last leaderEpoch entry is: EpochEntry(epoch=0, startOffset=0) (kafka.log.streamaspect.ElasticLog$)
[2023-12-11 08:32:17,645] INFO [ElasticLog partition=test-topic-0000000-0150-106 epoch=2] [ElasticLog partition=test-topic-0000000-0150-106 epoch=2] save partition meta ElasticPartitionMeta{startOffset=0, cleanerOffset=0, recoverOffset=7244, cleanedShutdown=false} (kafka.log.streamaspect.ElasticLog)
[2023-12-11 08:32:17,647] INFO ElasticUnifiedLog test-topic-0000000-0150-106 opened time cost: 3821 ms (kafka.log.UnifiedLog$)
[2023-12-11 08:32:17,647] INFO Created log for partition test-topic-0000000-0150-106 in /data/kafka/kraft-broker-logs/test-topic-0000000-0150-106 with properties {elasticstream.replication.factor=3, flush.messages=1, min.insync.replicas=1, retention.ms=86400000}, cost=3828ms (kafka.log.LogManager)
[2023-12-11 08:32:17,647] INFO [Partition test-topic-0000000-0150-106 broker=2] Log loaded for partition test-topic-0000000-0150-106 with initial high watermark 7244 (kafka.cluster.Partition)
[2023-12-11 08:32:17,672] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test-topic-0000000-0150-106) (kafka.server.ReplicaFetcherManager)
[2023-12-11 08:32:17,672] INFO [ReplicaAlterLogDirsManager on broker 2] Removed fetcher for partitions Set(test-topic-0000000-0150-106) (kafka.server.ReplicaAlterLogDirsManager)
[2023-12-11 08:32:17,684] INFO [Partition test-topic-0000000-0150-106 broker=2] Closing partition (kafka.cluster.Partition)
[2023-12-11 08:32:17,710] INFO [ElasticProducerStateManager partition=test-topic-0000000-0150-106] Wrote producer snapshot at offset 7294 with 4 producer ids in 9 ms. (kafka.log.streamaspect.ElasticProducerStateManager)
[2023-12-11 08:32:17,737] INFO [ElasticLog partition=test-topic-0000000-0150-106 epoch=2] save log meta ElasticLogMeta{streamMap={log=414, tim=461, txn=-1}, lastNthSegmentMetas=[ElasticStreamSegmentMeta{baseOffset=0, createTimestamp=1702283488521, lastModifiedTimestamp=1702283537693, streamSuffix='', logSize=29986888, log=[0, -1], time=[0, -1], txn=[0, -1], firstBatchTimestamp=1702283494681, timeIndexLastEntry=TimestampOffsetData{timestamp=1702283517186, offset=5102}}]} (kafka.log.streamaspect.ElasticLog$)
[2023-12-11 08:32:17,745] INFO [ElasticLog partition=test-topic-0000000-0150-106 epoch=2] [ElasticLog partition=test-topic-0000000-0150-106 epoch=2] save partition meta ElasticPartitionMeta{startOffset=0, cleanerOffset=0, recoverOffset=7294, cleanedShutdown=true} (kafka.log.streamaspect.ElasticLog)
[2023-12-11 08:32:17,746] INFO [ElasticLog partition=test-topic-0000000-0150-106 epoch=2] log(except for streams) closed (kafka.log.streamaspect.ElasticLog)
[2023-12-11 08:32:17,767] ERROR [ElasticLog partition=test-topic-0000000-0150-106 epoch=2] append callback error (kafka.log.streamaspect.ElasticLog)
org.apache.kafka.common.errors.KafkaStorageException: The memory mapped buffer for log of test-topic-0000000-0150-106 is already closed
[2023-12-11 08:32:23,061] INFO [ReplicaManager broker=2] partition test-topic-0000000-0150-106 is closed, trigger leader election (kafka.server.ReplicaManager)
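One possible reading of the timestamps above (a guess, not a confirmed root cause): the log was closed at 08:32:17,746, but the callback of an append that was still in flight ran at 08:32:17,767 and touched the already-released memory-mapped buffer, producing the KafkaStorageException. Below is a minimal, self-contained Java sketch of that ordering, using hypothetical class and method names rather than AutoMQ's actual ones:

```java
// Hypothetical sketch: an async append whose completion callback runs after
// close() has already released the memory-mapped buffer, reproducing a
// "buffer already closed" style failure. Not AutoMQ's real classes.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class AppendAfterCloseSketch {
    static class MappedLog {
        private volatile boolean closed = false;

        void updateIndexOnAppendComplete(long offset) {
            // Stand-in for touching the mmapped index; the real code surfaces
            // KafkaStorageException("The memory mapped buffer ... is already closed").
            if (closed) {
                throw new IllegalStateException("memory mapped buffer is already closed");
            }
            System.out.println("index updated up to offset " + offset);
        }

        void close() {
            closed = true; // the real implementation unmaps/releases the buffer here
        }
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService callbackPool = Executors.newSingleThreadScheduledExecutor();
        MappedLog log = new MappedLog();

        // Async append: storage acks later, then the callback updates the index.
        CompletableFuture<Long> ack = new CompletableFuture<>();
        ack.whenComplete((offset, err) -> {
            try {
                log.updateIndexOnAppendComplete(offset);
            } catch (Exception e) {
                // Mirrors the "append callback error" line in server.log.
                System.err.println("append callback error: " + e.getMessage());
            }
        });

        // Partition close races with the pending ack: close() wins, the callback loses.
        log.close();
        callbackPool.schedule(() -> { ack.complete(7294L); }, 50, TimeUnit.MILLISECONDS);

        callbackPool.shutdown();
        callbackPool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```

The sketch only illustrates the race between partition close and a pending append callback; whatever guard or ordering #536 actually introduces is the authoritative fix.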
Chillax-0v0 added the bug (Something isn't working) label on Dec 11, 2023
@Chillax-0v0 (Contributor, Author)

[ec2-user@i-03a72abb12e71394a automq]$ less logs/server.log | grep -E "WARN|ERROR"
[2023-12-11 08:31:37,487] WARN These configurations '[network.out.capacity, metrics.reporting.interval.ms, network.in.capacity]' were supplied but are not used yet. (org.apache.kafka.clients.producer.ProducerConfig)
[2023-12-11 08:32:13,843] ERROR Error while opening stream: 353, epoch 2, code: STREAM_NOT_CLOSED, retry later (kafka.log.stream.s3.streams.ControllerStreamManager)
[2023-12-11 08:32:13,843] WARN Retry task: kafka.log.stream.s3.network.ControllerRequestSender$RequestTask@4fc4ffef, delay : 500 ms (kafka.log.stream.s3.network.ControllerRequestSender)
[2023-12-11 08:32:14,344] ERROR Error while opening stream: 353, epoch 2, code: STREAM_NOT_CLOSED, retry later (kafka.log.stream.s3.streams.ControllerStreamManager)
[2023-12-11 08:32:14,344] WARN Retry task: kafka.log.stream.s3.network.ControllerRequestSender$RequestTask@4fc4ffef, delay : 1000 ms (kafka.log.stream.s3.network.ControllerRequestSender)
[2023-12-11 08:32:15,345] ERROR Error while opening stream: 353, epoch 2, code: STREAM_NOT_CLOSED, retry later (kafka.log.stream.s3.streams.ControllerStreamManager)
[2023-12-11 08:32:15,345] WARN Retry task: kafka.log.stream.s3.network.ControllerRequestSender$RequestTask@4fc4ffef, delay : 2000 ms (kafka.log.stream.s3.network.ControllerRequestSender)
[2023-12-11 08:32:17,586] WARN [Producer clientId=AutoBalancerMetricsReporterProducer] Got error produce response with correlation id 13 on topic-partition __auto_balancer_metrics-0, retrying (4 attempts left). Error: NOT_LEADER_OR_FOLLOWER (org.apache.kafka.clients.producer.internals.Sender)
[2023-12-11 08:32:17,587] WARN [Producer clientId=AutoBalancerMetricsReporterProducer] Received invalid metadata error in produce request on partition __auto_balancer_metrics-0 due to org.apache.kafka.common.errors.NotLeaderOrFollowerException: For requests intended only for the leader, this error indicates that the broker is not the current leader. For requests intended for any replica, this error indicates that the broker is not a replica of the topic partition.. Going to request metadata update now (org.apache.kafka.clients.producer.internals.Sender)
[2023-12-11 08:32:17,767] ERROR [ElasticLog partition=test-topic-0000000-0150-106 epoch=2] append callback error (kafka.log.streamaspect.ElasticLog)
[2023-12-11 08:32:48,759] WARN [Producer clientId=AutoBalancerMetricsReporterProducer] Got error produce response with correlation id 27 on topic-partition __auto_balancer_metrics-0, retrying (3 attempts left). Error: NETWORK_EXCEPTION. Error Message: Disconnected from node 2 (org.apache.kafka.clients.producer.internals.Sender)
[2023-12-11 08:32:48,759] WARN [Producer clientId=AutoBalancerMetricsReporterProducer] Received invalid metadata error in produce request on partition __auto_balancer_metrics-0 due to org.apache.kafka.common.errors.NetworkException: Disconnected from node 2. Going to request metadata update now (org.apache.kafka.clients.producer.internals.Sender)
[2023-12-11 08:34:17,606] WARN Failed to send 2 metrics for time 1702283537578 (kafka.autobalancer.metricsreporter.AutoBalancerMetricsReporter)
[2023-12-11 08:36:22,634] WARN Failed to send 11 metrics for time 1702283662606 (kafka.autobalancer.metricsreporter.AutoBalancerMetricsReporter)
[2023-12-11 08:38:27,661] WARN Failed to send 11 metrics for time 1702283787635 (kafka.autobalancer.metricsreporter.AutoBalancerMetricsReporter)
[2023-12-11 08:40:32,689] WARN Failed to send 11 metrics for time 1702283912661 (kafka.autobalancer.metricsreporter.AutoBalancerMetricsReporter)
[2023-12-11 08:42:37,716] WARN Failed to send 11 metrics for time 1702284037690 (kafka.autobalancer.metricsreporter.AutoBalancerMetricsReporter)
[2023-12-11 08:44:42,744] WARN Failed to send 11 metrics for time 1702284162717 (kafka.autobalancer.metricsreporter.AutoBalancerMetricsReporter)
[2023-12-11 08:46:47,772] WARN Failed to send 11 metrics for time 1702284287745 (kafka.autobalancer.metricsreporter.AutoBalancerMetricsReporter)
