
Create and Delete Events Generate Lots of Detached Nodes #2795

Closed
BeksOmega opened this issue Aug 9, 2019 · 8 comments
Labels
component: events · issue: bug · type: question

Comments

@BeksOmega
Collaborator

Describe

Create and delete events generate lots of detached nodes.

This is because those events store the xml of the created/deleted blocks.
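For context, here is a rough sketch of where those detached nodes come from. This is not the actual Blockly source, just the shape of the 2019-era create/delete events; the Blockly.Xml.blockToDomWithXY and Blockly.Xml.domToBlock helpers are real, the event class itself is illustrative.

// Sketch of a delete event: it keeps a DOM tree (oldXml) so the block can be
// rebuilt on undo. That tree is never attached to the document, so once the
// block is gone it shows up as "Detached" in a heap snapshot.
function BlockDeleteEventSketch(block) {
  this.blockId = block.id;
  this.oldXml = Blockly.Xml.blockToDomWithXY(block);  // detached Element
}

BlockDeleteEventSketch.prototype.undo = function(workspace) {
  // Undo rebuilds the block from the retained DOM, which is why it is kept alive.
  Blockly.Xml.domToBlock(this.oldXml, workspace);
};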

To Observe

Steps to observe the behavior:

  1. Go to the playground.
  2. Run the spaghetti stress test.
  3. Delete all of the blocks.
  4. Create a heap snapshot.
    A) Go to the Memory tab of the Chrome DevTools.
    B) Select the "Heap snapshot" option.
    C) Press the "Take snapshot" button.
    D) In the "Class filter", type in "Detached".
  5. Observe the large number of detached nodes:
    [screenshot: Detached_1]
  6. Expand the detached node, and select the first entry.
  7. Observe how it is part of a delete event:
    [screenshot: Detached_2]

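A quicker console-level check, assuming the 2019-era internals where undo events live on the workspace's undoStack_ and hold the DOM in oldXml (delete) or xml (create), is to count how many retained events still reference a detached tree:

// Run in the playground's DevTools console after deleting all of the blocks.
// undoStack_, oldXml, and xml are internal names and may differ between versions.
var stack = Blockly.mainWorkspace.undoStack_;
var retained = stack.filter(function(event) {
  return event.oldXml || event.xml;
});
console.log(stack.length + ' undo events, ' + retained.length +
    ' of them holding a detached XML tree');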
Describe the solution you'd like

Honestly, I know that detached nodes are bad, but I don't know what you're supposed to do instead, since these are being used for storage purposes.

Maybe storing the block information as JSON would be better? I have designed a system for adding JSON serialization to the Blockly serializer.

Describe alternatives you've considered

Maybe converting the DOM to text would be better? But again, I really have no knowledge in this area.
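If the text route were tried, a minimal sketch using the existing Blockly.Xml.domToText / Blockly.Xml.textToDom helpers would serialize at event-creation time and only re-parse on undo, so no live DOM nodes are retained (at the cost of a larger string, per Neil's numbers later in this thread):

// Sketch only: store the block as a string instead of a DOM tree.
function captureBlockAsText(block) {
  var dom = Blockly.Xml.blockToDomWithXY(block);
  return Blockly.Xml.domToText(dom);  // plain string; the temporary dom can be collected
}

function restoreBlockFromText(text, workspace) {
  var dom = Blockly.Xml.textToDom(text);
  return Blockly.Xml.domToBlock(dom, workspace);
}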

Additional context

Tracking issue requested by Rachel.

@BeksOmega added the "issue: triage" and "issue: bug" labels on Aug 9, 2019
@samelhusseini added this to the "Needs discussion" milestone on Aug 9, 2019
@samelhusseini added the "type: question" and "component: events" labels and removed the "issue: triage" label on Aug 9, 2019
@rachel-fenichel
Collaborator

These are being kept alive by the undo stack on the workspace; calling Blockly.mainWorkspace.clearUndo() in the playground clears them out. A reasonable solution is to set a depth on the undo stack. 64 undos ought to be enough for anybody.
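A sketch of both workarounds; clearUndo() is the existing API, and MAX_UNDO is the workspace property mentioned at the end of this thread, so treat the exact names as version-dependent:

// One-off: drop everything currently held by the undo/redo stacks.
Blockly.mainWorkspace.clearUndo();

// Ongoing: cap the undo depth so old events (and their retained XML) are released.
Blockly.mainWorkspace.MAX_UNDO = 64;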

@rachel-fenichel
Collaborator

@picklesrus @ewpatton Do you folks already have a depth on the undo queue to solve this?

@ewpatton
Contributor

ewpatton commented Sep 10, 2019 via email

@rachel-fenichel
Collaborator

I think this would matter if you have a long-lived workspace, like a tab you've had open for a long time and have been poking at. It may also matter when opening a giant workspace, since that would create these events all at once; that could be a good way to check.

@ewpatton
Contributor

ewpatton commented Sep 10, 2019 via email

@NeilFraser
Contributor

I ran some tests in Chrome. My results were fairly consistent between runs, so I have confidence in the results.

  • Load playground, memory snapshot: 5.3 MB.
  • Run Spaghetti test, memory snapshot: 21.2 MB.
  • Delete all the blocks in the playground, wait a minute for garbage collection, memory snapshot: 5.7 MB.

Now comment out the two lines where oldXml is set and repeat:

  • Delete all the blocks in the playground, wait a minute for garbage collection, memory snapshot: 5.6 MB.

This indicates that the XML (the detached nodes) takes about 100 KB. By comparison, converting this XML to text:

Blockly.Xml.domToText(workspace.undoStack_[2].oldXml).length/1024 -> 326

My conclusions are:

  1. XML nodes are being stored three times more efficiently than the easiest text-based representation.
  2. The leak of about 33 bytes per block deletion (100 KB for 3,000 blocks) seems low enough not to worry about, since it serves a genuine purpose (undo).
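For reference, the per-entry measurement above can be totalled across the whole stack with a console snippet along these lines (again assuming the undoStack_ / oldXml / xml internals):

// Sum the serialized size of every XML tree retained by the undo stack, in KB.
var totalKB = Blockly.mainWorkspace.undoStack_.reduce(function(sum, event) {
  var dom = event.oldXml || event.xml;
  return dom ? sum + Blockly.Xml.domToText(dom).length : sum;
}, 0) / 1024;
console.log(totalKB.toFixed(1) + ' KB of block XML (as text) on the undo stack');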

@NeilFraser
Contributor

I should follow up and say that this is absolutely a topic that should be explored. There may be monsters lurking in our memory management that we aren't aware of. Like that time when we discovered that the connection database was O(n^2) and was using the majority of our CPU time due to a trivial inefficiency.

There are not a lot of eyes on our memory heap, so it's a promising thing to poke at. Thank you @BeksOmega!

@rachel-fenichel
Collaborator

Turns out we have a MAX_UNDO, so this is bounded (as well as being small per-block). Closing.
