OrientDB Version: 2.2.25
Java Version: Java(TM) SE Runtime Environment (build 1.8.0_144-b01) Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
OS: Windows 10 x64
HARDWARE: i5 (4 core), 8GB MEMORY, SSD
Actual behavior
Inserting slows down after about 2,000 records have been inserted in batches. For the first 2,000 records each batch completes within 0.5 seconds; after that, a batch takes 3~4 seconds. If I kill the process and restart it, the first 2,000 records are again as fast as the first time, and the slowdown starts right after.
Steps to reproduce
Sample script for one batch:
BEGIN;
LET vol=CREATE VERTEX v_volume SET f_year=2017,f_quantity=59,f_recorded='2017-08-08 15:58:44';
LET province=UPDATE v_province SET f_name='Guangdong', f_recorded='2017-08-08 15:58:44' UPSERT RETURN AFTER @Rid WHERE f_name='Guangdong';
LET city=UPDATE v_city SET f_name='Guangzhou', f_recorded='2017-08-08 15:58:44' UPSERT RETURN AFTER @Rid WHERE f_name='Guangzhou';
LET pr=UPDATE v_plan_round SET f_name='PR67', f_number=67, f_recorded='2017-08-08 15:58:44', f_updated='2017-08-08 15:58:44' UPSERT RETURN AFTER @Rid WHERE f_number=67;
LET brand=UPDATE v_brand SET f_name='Bentley', f_recorded='2017-08-08 15:58:44' UPSERT RETURN AFTER @Rid WHERE f_name='Bentley';
LET model=UPDATE v_model SET f_name='Continental GT/GTC/SSp', f_recorded='2017-08-08 15:58:44' UPSERT RETURN AFTER @Rid WHERE f_name='Continental GT/GTC/SSp';
CREATE EDGE e_in_pr FROM $vol TO $pr SET f_recorded='2017-08-08 15:58:44';
CREATE EDGE e_has_type FROM $vol TO $brand SET f_recorded='2017-08-08 15:58:44';
CREATE EDGE e_has_type FROM $vol TO $model SET f_recorded='2017-08-08 15:58:44';
CREATE EDGE e_has_type FROM $vol TO [#41:1,#47:0,#18:0,#22:0,#38:0,#29:0,#25:0] SET f_recorded='2017-08-08 15:58:44';
COMMIT RETRY 100;
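For reference, a minimal Java sketch of one way such a batch could be submitted as a single server-side SQL script (the database URL, credentials, and class name are placeholders, and the batch text is abbreviated to a few statements):

// Sketch only: submit one BEGIN/COMMIT batch as a single SQL script so the
// whole transaction runs server-side in one round trip.
import com.orientechnologies.orient.core.command.script.OCommandScript;
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;

public class BatchInsertSketch {
    public static void main(String[] args) {
        // Assumed URL and credentials; replace with the real ones.
        ODatabaseDocumentTx db = new ODatabaseDocumentTx("remote:localhost/mydb")
                .open("admin", "admin");
        try {
            String batch =
                "BEGIN;\n"
              + "LET vol = CREATE VERTEX v_volume SET f_year=2017, f_quantity=59, f_recorded='2017-08-08 15:58:44';\n"
              + "LET brand = UPDATE v_brand SET f_name='Bentley', f_recorded='2017-08-08 15:58:44' UPSERT RETURN AFTER @rid WHERE f_name='Bentley';\n"
              + "CREATE EDGE e_has_type FROM $vol TO $brand SET f_recorded='2017-08-08 15:58:44';\n"
              + "COMMIT RETRY 100;";
            // Executes the whole script as one transaction on the server.
            db.command(new OCommandScript("sql", batch)).execute();
        } finally {
            db.close();
        }
    }
}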
Is there any suggestion for a workaround? The process does not consume much memory (Java SE, about 1 GB of memory and nearly 40% CPU time). There are 370,000 records in total.
I also tried using multiple processes, but that did not help; the second and third processes were even slower.
I also changed some settings in the configuration file; the full server configuration is posted further down.
The slowdown is probably due to the lookups performed by the UPSERT statements. Please make sure you have an index on the property used in the WHERE condition of each UPSERT statement.
CREATE INDEX i_segment_name ON v_segment (f_name) UNIQUE_HASH_INDEX STRING;
CREATE INDEX i_body_type_name ON v_body_type (f_name) UNIQUE_HASH_INDEX STRING;
CREATE INDEX i_fuel_type_name ON v_fuel_type (f_name) UNIQUE_HASH_INDEX STRING;
CREATE INDEX i_brand_indicator_name ON v_brand_indicator (f_name) UNIQUE_HASH_INDEX STRING;
CREATE INDEX i_ev_type_name ON v_ev_type (f_name) UNIQUE_HASH_INDEX STRING;
CREATE INDEX i_product_type_name ON v_product_type (f_name) UNIQUE_HASH_INDEX STRING;
CREATE INDEX i_city_level_name ON v_city_level (f_name) UNIQUE_HASH_INDEX STRING;
CREATE INDEX i_city_cluster_name ON v_city_cluster (f_name) UNIQUE_HASH_INDEX STRING;
CREATE INDEX i_sss_type_name ON v_sss_type (f_name) UNIQUE_HASH_INDEX STRING;
CREATE INDEX i_content_name ON v_content (f_name) UNIQUE_HASH_INDEX STRING;
CREATE INDEX i_model_name ON v_model (f_name) UNIQUE_HASH_INDEX STRING;
CREATE INDEX i_volume_row ON v_volume (f_row) NOTUNIQUE_HASH_INDEX Decimal;
CREATE INDEX i_plan_round_number ON v_plan_round (f_number) UNIQUE_HASH_INDEX Integer;
CREATE INDEX i_province_name ON v_province (f_name) NOTUNIQUE_HASH_INDEX STRING;
CREATE INDEX i_city_name ON v_city (f_name) NOTUNIQUE_HASH_INDEX STRING;
CREATE INDEX i_brand_name ON v_brand (f_name) UNIQUE_HASH_INDEX STRING;
A total of 96 records were affected. The problem remains. Does it have something to do with the transaction?
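One way to double-check that the UPSERT lookups actually use these indexes is to EXPLAIN the same WHERE condition through the Java API and inspect the reported plan. A sketch with placeholder connection details (the exact fields of the EXPLAIN output can vary between versions):

// Sketch only: EXPLAIN the lookup that an UPSERT performs and print the plan;
// the involved index (e.g. i_brand_name) should appear in the output.
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.sql.OCommandSQL;

public class ExplainUpsertLookup {
    public static void main(String[] args) {
        // Assumed URL and credentials; replace with the real ones.
        ODatabaseDocumentTx db = new ODatabaseDocumentTx("remote:localhost/mydb")
                .open("admin", "admin");
        try {
            Object plan = db.command(new OCommandSQL(
                    "EXPLAIN SELECT FROM v_brand WHERE f_name = 'Bentley'")).execute();
            System.out.println(plan); // look for the involved indexes in the plan
        } finally {
            db.close();
        }
    }
}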
Here is the local server configuration:
LOCAL SERVER CONFIGURATION
+----+---------------------------------------------------+-------------------------------------------------------------------------+
|# |NAME |VALUE |
+----+---------------------------------------------------+-------------------------------------------------------------------------+
|0 |environment.dumpCfgAtStartup |false |
|1 |environment.concurrent |true |
|2 |environment.lockManager.concurrency.level |32 |
|3 |environment.allowJVMShutdown |true |
|4 |script.pool.maxSize |20 |
|5 |memory.useUnsafe |true |
|6 |memory.chunk.size |2147483647 |
|7 |memory.directMemory.safeMode |true |
|8 |memory.directMemory.trackMode |false |
|9 |memory.directMemory.onlyAlignedMemoryAccess |true |
|10 |jvm.gc.delayForOptimize |600 |
|11 |storage.openFiles.limit |512 |
|12 |storage.componentsLock.cache |10000 |
|13 |storage.diskCache.pinnedPages |20 |
|14 |storage.diskCache.bufferSize |4096 |
|15 |storage.diskCache.writeCachePart |15 |
|16 |storage.diskCache.writeCachePageTTL |86400 |
|17 |storage.diskCache.writeCachePageFlushInterval |25 |
|18 |storage.diskCache.writeCacheFlushInactivityInterval|60000 |
|19 |storage.diskCache.writeCacheFlushLockTimeout |-1 |
|20 |storage.diskCache.diskFreeSpaceCheckInterval |5 |
|21 |storage.diskCache.diskFreeSpaceCheckIntervalInPages|2048 |
|22 |storage.diskCache.keepState |true |
|23 |storage.diskCache.checksumMode |StoreAndVerify |
|24 |storage.configuration.syncOnUpdate |true |
|25 |storage.compressionMethod |nothing |
|26 |storage.encryptionMethod |nothing |
|27 |storage.encryptionKey | |
|28 |storage.makeFullCheckpointAfterCreate |false |
|29 |storage.makeFullCheckpointAfterOpen |true |
|30 |storage.makeFullCheckpointAfterClusterCreate |true |
|31 |storage.trackChangedRecordsInWAL |false |
|32 |storage.useWAL |true |
|33 |storage.wal.syncOnPageFlush |true |
|34 |storage.wal.cacheSize |3000 |
|35 |storage.wal.fileAutoCloseInterval |10 |
|36 |storage.wal.maxSegmentSize |128 |
|37 |storage.wal.maxSize |4096 |
|38 |storage.wal.commitTimeout |1000 |
|39 |storage.wal.shutdownTimeout |10000 |
|40 |storage.wal.fuzzyCheckpointInterval |300 |
|41 |storage.wal.reportAfterOperationsDuringRestore |10000 |
|42 |storage.wal.restore.batchSize |1000 |
|43 |storage.wal.readCacheSize |1000 |
|44 |storage.wal.fuzzyCheckpointShutdownWait |600 |
|45 |storage.wal.fullCheckpointShutdownTimeout |600 |
|46 |storage.wal.path | |
|47 |storage.diskCache.pageSize |64 |
|48 |storage.diskCache.diskFreeSpaceLimit |256 |
|49 |storage.lowestFreeListBound |16 |
|50 |storage.lockTimeout |0 |
|51 |storage.record.lockTimeout |2000 |
|52 |storage.useTombstones |false |
|53 |record.downsizing.enabled |true |
|54 |object.saveOnlyDirty |false |
|55 |db.pool.min |1 |
|56 |db.pool.max |100 |
|57 |db.pool.idleTimeout |0 |
|58 |db.pool.idleCheckDelay |0 |
|59 |db.mvcc.throwfast |false |
|60 |db.validation |true |
|61 |nonTX.recordUpdate.synch |false |
|62 |nonTX.clusters.sync.immediately |manindex |
|63 |tx.trackAtomicOperations |false |
|64 |tx.pageCacheSize |12 |
|65 |index.embeddedToSbtreeBonsaiThreshold |40 |
|66 |index.sbtreeBonsaiToEmbeddedThreshold |-1 |
|67 |hashTable.slitBucketsBuffer.length |1500 |
|68 |index.auto.synchronousAutoRebuild |true |
|69 |index.auto.lazyUpdates |10000 |
|70 |index.flushAfterCreate |true |
|71 |index.manual.lazyUpdates |1 |
|72 |index.durableInNonTxMode |true |
|73 |index.ignoreNullValuesDefault |false |
|74 |index.txMode |FULL |
|75 |index.cursor.prefetchSize |500000 |
|76 |sbtree.maxDepth |64 |
|77 |sbtree.maxKeySize |10240 |
|78 |sbtree.maxEmbeddedValueSize |40960 |
|79 |sbtreebonsai.bucketSize |2 |
|80 |sbtreebonsai.linkBagCache.size |100000 |
|81 |sbtreebonsai.linkBagCache.evictionSize |1000 |
|82 |sbtreebonsai.freeSpaceReuseTrigger |0.5 |
|83 |ridBag.embeddedDefaultSize |4 |
|84 |ridBag.embeddedToSbtreeBonsaiThreshold |40 |
|85 |ridBag.sbtreeBonsaiToEmbeddedToThreshold |-1 |
|86 |collections.preferSBTreeSet |false |
|87 |file.trackFileClose |false |
|88 |file.lock |true |
|89 |file.deleteDelay |10 |
|90 |file.deleteRetry |50 |
|91 |security.userPasswordSaltIterations |65536 |
|92 |security.userPasswordSaltCacheSize |500 |
|93 |security.userPasswordDefaultAlgorithm |PBKDF2WithHmacSHA256 |
|94 |network.maxConcurrentSessions |1000 |
|95 |network.socketBufferSize |0 |
|96 |network.lockTimeout |15000 |
|97 |network.socketTimeout |15000 |
|98 |network.requestTimeout |3600000 |
|99 |network.retry.strategy |auto |
|100 |network.retry |5 |
|101 |network.retryDelay |500 |
|102 |network.binary.loadBalancing.enabled |false |
|103 |network.binary.loadBalancing.timeout |2000 |
|104 |network.binary.maxLength |16384 |
|105 |network.binary.readResponse.maxTimes |20 |
|106 |network.binary.debug |false |
|107 |network.http.installDefaultCommands |true |
|108 |network.http.serverInfo |OrientDB Server v.2.2.25 (build 72a35a4)|
|109 |network.http.maxLength |1000000 |
|110 |network.http.streaming |false |
|111 |network.http.charset |utf-8 |
|112 |network.http.jsonResponseError |true |
|113 |network.http.jsonp |false |
|114 |network.http.sessionExpireTimeout |300 |
|115 |network.http.useToken |false |
|116 |network.token.secretKey | |
|117 |network.token.encryptionAlgorithm |HmacSHA256 |
|118 |network.token.expireTimeout |60 |
|119 |profiler.enabled |false |
|120 |profiler.config | |
|121 |profiler.autoDump.interval |0 |
|122 |profiler.maxValues |200 |
|123 |profiler.memoryCheckInterval |120000 |
|124 |sequence.maxRetry |100 |
|125 |sequence.retryDelay |200 |
|126 |storageProfiler.intervalBetweenSnapshots |100 |
|127 |storageProfiler.cleanUpInterval |5000 |
|128 |log.console.level |info |
|129 |log.file.level |info |
|130 |class.minimumClusters |0 |
|131 |log.console.ansi |auto |
|132 |cache.local.impl |com.orientechnologies.orient.core.cache.ORecordCacheWeakRefs |
|133 |command.timeout |0 |
|134 |command.cache.enabled |false |
|135 |command.cache.evictStrategy |PER_CLUSTER |
|136 |command.cache.minExecutionTime |10 |
|137 |command.cache.maxResultsetSize |500 |
|138 |query.parallelAuto |false |
|139 |query.parallelMinimumRecords |300000 |
|140 |query.parallelResultQueueSize |20000 |
|141 |query.scanPrefetchPages |20 |
|142 |query.scanBatchSize |1000 |
|143 |query.scanThresholdTip |50000 |
|144 |query.limitThresholdTip |10000 |
|145 |query.live.support |true |
|146 |query.timeout.defaultStrategy |EXCEPTION |
|147 |lucene.query.pageSize |10000 |
|148 |statement.cacheSize |100 |
|149 |sql.graphConsistencyMode |tx |
|150 |client.channel.maxPool |100 |
|151 |client.connectionPool.waitTimeout |5000 |
|152 |client.channel.dbReleaseWaitTimeout |10000 |
|153 |client.ssl.enabled |false |
|154 |client.ssl.keyStore | |
|155 |client.ssl.keyStorePass | |
|156 |client.ssl.trustStore | |
|157 |client.ssl.trustStorePass | |
|158 |server.openAllDatabasesAtStartup |false |
|159 |server.channel.cleanDelay |5000 |
|160 |server.cache.staticFile |false |
|161 |server.log.dumpClientExceptionLevel |FINE |
|162 |server.log.dumpClientExceptionFullStackTrace |false |
|163 |distributed.dumpStatsEvery |0 |
|164 |distributed.crudTaskTimeout |10000 |
|165 |distributed.maxStartupDelay |10000 |
|166 |distributed.commandTaskTimeout |120000 |
|167 |distributed.commandQuickTaskTimeout |5000 |
|168 |distributed.commandLongTaskTimeout |86400000 |
|169 |distributed.deployDbTaskTimeout |1200000 |
|170 |distributed.deployChunkTaskTimeout |60000 |
|171 |distributed.deployDbTaskCompression |7 |
|172 |distributed.asynchQueueSize |0 |
|173 |distributed.asynchResponsesTimeout |15000 |
|174 |distributed.purgeResponsesTimerDelay |15000 |
|175 |distributed.conflictResolverRepairerChain |quorum,content,majority,version |
|176 |distributed.conflictResolverRepairerCheckEvery |5000 |
|177 |distributed.conflictResolverRepairerBatch |50 |
|178 |distributed.txAliveTimeout |30000 |
|179 |distributed.requestChannels |1 |
|180 |distributed.responseChannels |1 |
|181 |distributed.heartbeatTimeout |10000 |
|182 |distributed.checkHealthCanOfflineServer |false |
|183 |distributed.checkHealthEvery |10000 |
|184 |distributed.autoRemoveOfflineServers |0 |
|185 |distributed.publishNodeStatusEvery |10000 |
|186 |distributed.localQueueSize |10000 |
|187 |distributed.dbWorkerThreads |8 |
|188 |distributed.queueMaxSize |10000 |
|189 |distributed.backupDirectory |../backup/databases |
|190 |distributed.backupTryIncrementalFirst |true |
|191 |distributed.concurrentTxMaxAutoRetry |10 |
|192 |distributed.atomicLockTimeout |50 |
|193 |distributed.concurrentTxAutoRetryDelay |10 |
|194 |db.document.serializer |ORecordSerializerBinary |
|195 |client.krb5.config | |
|196 |client.krb5.ccname | |
|197 |client.krb5.ktname | |
|198 |client.credentialinterceptor | |
|199 |client.ci.keyalgorithm |AES |
|200 |client.ci.ciphertransform |AES/CBC/PKCS5Padding |
|201 |client.ci.keystore.file | |
|202 |client.ci.keystore.password | |
|203 |security.createDefaultUsers |true |
|204 |server.security.file | |
|205 |jna.disable.system.library |true |
|206 |distributed.queueTimeout |500000 |
|207 |db.makeFullCheckpointOnIndexChange |true |
|208 |db.makeFullCheckpointOnSchemaChange |true |
|209 |client.session.tokenBased |true |
|210 |oauth2.secretkey | |
|211 |storage.cluster.usecrc32 |false |
|212 |lazyset.workOnStream |true |
|213 |db.mvcc |true |
|214 |db.use.distributedVersion |false |
|215 |mvrbtree.timeout |0 |
|216 |mvrbtree.nodePageSize |256 |
|217 |mvrbtree.loadFactor |0.7 |
|218 |mvrbtree.optimizeThreshold |100000 |
|219 |mvrbtree.entryPoints |64 |
|220 |mvrbtree.optimizeEntryPointsFactor |1.0 |
|221 |mvrbtree.entryKeysInMemory |false |
|222 |mvrbtree.entryValuesInMemory |false |
|223 |mvrbtree.ridBinaryThreshold |-1 |
|224 |mvrbtree.ridNodePageSize |64 |
|225 |mvrbtree.ridNodeSaveMemory |false |
|226 |tx.commit.synch |false |
|227 |tx.autoRetry |1 |
|228 |tx.log.fileType |classic |
|229 |tx.log.synch |false |
|230 |tx.useLog |true |
|231 |index.auto.rebuildAfterNotSoftClose |true |
|232 |client.channel.minPool |1 |
|233 |storage.keepOpen |true |
|234 |cache.local.enabled |true |
+----+---------------------------------------------------+-------------------------------------------------------------------------+
Regarding the configuration file changes: after restarting the server or the Java SE process, those settings did take effect. Did I miss anything?
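For reference, a hedged sketch of how OrientDB global settings can be overridden programmatically (or as -D JVM options) before any database is opened; the keys and values below are illustrative only, not tuning advice:

// Illustrative only: global settings can be passed to the JVM as
// -Dname=value (e.g. -Dstorage.diskCache.bufferSize=8192) or set in code via
// OGlobalConfiguration before any database/storage is opened. Verify the
// constant names against the installed 2.2.x jars.
import com.orientechnologies.orient.core.config.OGlobalConfiguration;

public class TuneGlobalSettings {
    public static void main(String[] args) {
        // Raise the disk cache size (storage.diskCache.bufferSize, in MB).
        OGlobalConfiguration.DISK_CACHE_SIZE.setValue(8192);
        // Settings can also be looked up by the textual key from the table above.
        OGlobalConfiguration wal = OGlobalConfiguration.findByKey("storage.wal.cacheSize");
        if (wal != null) {
            wal.setValue(6000);
        }
        System.out.println("diskCache.bufferSize = "
                + OGlobalConfiguration.DISK_CACHE_SIZE.getValue());
    }
}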