
Debug node #3051

Closed · wants to merge 2 commits

Conversation

@vampcat (Contributor) commented Aug 5, 2017

Adds a new DebugNode that can be used to display the color attachments of FBOs. It also adds associated console commands to control the debug node, such as dagDebug engine:fbo.ssao (which will become something like dagDebugger setFbo engine:fbo.ssao after some polish).
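For illustration, a minimal sketch of roughly what such a command can look like when registered through Terasology's @Command annotation. The class name, the setFbo(...) setter and the injection of WorldRenderer are assumptions for the sake of the example, not the code in this PR; imports are approximate to the engine layout at the time.

```java
import org.terasology.entitySystem.systems.BaseComponentSystem;
import org.terasology.entitySystem.systems.RegisterSystem;
import org.terasology.logic.console.commandSystem.annotations.Command;
import org.terasology.logic.console.commandSystem.annotations.CommandParam;
import org.terasology.registry.In;
import org.terasology.rendering.world.WorldRenderer;

@RegisterSystem
public class DagDebugCommands extends BaseComponentSystem {

    @In
    private WorldRenderer worldRenderer;

    // Hypothetical command: points the debug node at the color attachment of the given FBO,
    // e.g. "dagDebug engine:fbo.ssao" from the in-game console.
    @Command(shortDescription = "Displays the color attachment of the given FBO on screen")
    public String dagDebug(@CommandParam("fboUri") String fboUri) {
        // getDebugNode() is the accessor discussed below; setFbo(...) is a hypothetical setter.
        worldRenderer.getDebugNode().setFbo(fboUri);
        return "Debug node now displaying: " + fboUri;
    }
}
```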

Vision for the future: ideally, we will have an abstract DebugNode class that gets extended by ColorAttachmentDebugNode (currently the DebugNode), DepthAttachmentDebugNode and StencilAttachmentDebugNode.

@emanuele3d Ping! :D

@GooeyHub (Member) commented Aug 5, 2017

Hooray Jenkins reported success with all tests good!

@emanuele3d self-requested a review August 6, 2017 00:53
@emanuele3d added the Topic: Architecture and Topic: Rendering labels Aug 6, 2017
@emanuele3d (Contributor) commented:

Ok, had a look and I have no idea where you are going with this PR.

Right now the use of the debug node seems convoluted and superfluous.

I like the idea of having a console command that changes the fbo of the node helping with the debugging. But I'd tie that to the CopyFboColorAttachmentToScreenNode instead, as it does the job already.

Also, do we need to add the command to the CoreCommands file? Can't we register the command directly in the node?

@vampcat (Contributor, Author) commented Aug 6, 2017

"Right now the use of the debug node seems convoluted and superfluous.
...
But I'd tie that to the CopyFboColorAttachmentToScreenNode instead, as it does the job already."

You are correct: right now the job of the DebugNode is fairly trivial. However, the main reason I made a new node, rather than cramming it all into CopyFboColorAttachmentToScreenNode, was separation of concerns. This comes down to three things:

  • Storing data: as things stand, the job of CopyImageToScreenNode is simple - it copies an image directly to screen. It does not need to store any other data, or know anything else. The DebugNode, on the other hand, has to obtain the lastUpdated / stale gBuffers right in the constructor, so that any external calls to .swap() don't provide us with the wrong buffer. This might include other things in the future, though not necessarily.
  • Interfacing with other classes: I've had to add the method WorldRenderer.getDebugNode, which allows the console command, stored in an external file (at least for now), to interface with the DAG. Rather than having external code deal with CopyImageToScreen directly, I'd rather have it deal with DebugNode (and, in the future, DebugDepth and DebugStencil); see the sketch after this list.
  • Lastly, the addition of new types of debug nodes to the system. Right now, we just add the new debug nodes and link their output to the CopyImage node. If we combine the debug node and CopyImageToScreenNode, we'll have to break it into two parts again to allow new nodes. Consider a small example: let's say we want to draw the scene with the contribution of a specific FBO highlighted (tinted). Right now, we can simply rework DebugNode to draw the desired image to an intermediate FBO and pass that to CopyImageToScreenNode. If we combine these two nodes now, we'll have to either a) embed this piece of debug code in the same node that is responsible for drawing things to screen, which will also be used by other nodes like DepthDebug, making it less than optimal, or b) split CopyFboColorAttachment into two again: DebugNode and CopyImageToScreenNode.
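To make the second point concrete, here is a sketch of the accessor and of the data the node holds. These are fragments with illustrative signatures, not the exact code in this PR.

```java
// DebugNode.java (fragment) -- in the PR this class extends AbstractNode;
// the base class and the node's rendering plumbing are omitted here.
public class DebugNode {

    // Captured once, in the constructor, so that external calls to .swap()
    // elsewhere in the renderer cannot hand this node the wrong buffer.
    private final FBO lastUpdatedGBuffer;
    private final FBO staleGBuffer;

    public DebugNode(FBO lastUpdatedGBuffer, FBO staleGBuffer) {
        this.lastUpdatedGBuffer = lastUpdatedGBuffer;
        this.staleGBuffer = staleGBuffer;
    }
}

// WorldRenderer.java (fragment) -- the accessor console commands use to reach
// the debug node without knowing about CopyImageToScreenNode.
public interface WorldRenderer {
    DebugNode getDebugNode();
}
```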

That being said, if you still want, I can merge these two together - I'm just not sold on the idea of combining them now, and splitting them again a week from now.

"Also, do we need to add the command to the CoreCommands file? Can't we register the command directly in the node?"

I tried adding the command directly to the node, but it didn't work. I guess I'm missing something; I definitely agree with you, though, and will get this done.

@emanuele3d (Contributor) commented Aug 7, 2017

As I looked at the broader picture, something occurred to me. We have been discussing what these nodes do in the context of debugging, but we should probably start from an earlier question.

What do these nodes do, period?

First, we need a node that displays an arbitrary color attachment of an arbitrary FBO. Second, we need a number of converter nodes that kick in when we need to deal with a non-color attachment, converting the attachment data into color data. This way we don't need a hierarchy of specialized debug nodes; we only need one OutputToScreenNode (currently known as CopyFBOblablabla - I know, my fault) and a few converter nodes such as DepthToColorNode or StencilToColorNode. In fact, we might also need a Color64toColor32Node, as the gbuffer has 64-bit color attachments.
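A rough sketch of the shape this proposal implies: one output node whose FBO can be switched at runtime, plus small converter nodes feeding it. All names and signatures below are illustrative, not actual engine code.

```java
// OutputToScreenNode.java (sketch): displays an arbitrary color attachment of an
// arbitrary FBO. A console command only needs to change which FBO it points at.
public class OutputToScreenNode {
    private String fboUri; // e.g. "engine:fbo.ssao"

    public void setFbo(String fboUri) {
        this.fboUri = fboUri;
    }
    // process() would bind the FBO's color attachment and render it as a full-screen quad.
}

// Converter nodes (sketches): each turns attachment data that isn't directly
// displayable into plain color data for OutputToScreenNode to show.
class DepthToColorNode { /* remaps depth values into a visible color range */ }
class StencilToColorNode { /* remaps stencil values into a visible color range */ }
class Color64toColor32Node { /* downconverts the gbuffer's 64-bit color attachment */ }
```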

The fact that we might use these nodes for debugging purposes is completely irrelevant. Other people might use them in different ways. What we do need are the commands to interact with these nodes: that's where the debugging capability kicks in. And for that we need, most of all, to figure out how to declare commands right in the node class code, so that we don't need to give unrelated parts of the engine access to the bowels of the renderer.

Does this make sense @vampcat?

@vampcat (Contributor, Author) commented Aug 7, 2017

Yes it does, and I completely agree with you.

However, I do have one slight problem with all this, something I keep coming back to: let's discuss a practical debugger. We want either a fullscreen regular image with the contribution of a node highlighted (ideally, feeding the given FBO into a tint node and then compositing the tinted FBO with the gbuffer), or a picture-in-picture view of the depth attachment (ideally, depthToColor along with a PiP composition of that image and the gbuffer). What I'm trying to highlight is that it won't be one single node doing this work; it will be a series of nodes.
So, how about we go with your (new) vision, and think of a slightly better place to house the debugging code, apart from the nodes themselves? (Something like a RenderGraphDebugger class, not sure tbh.)

@emanuele3d (Contributor) commented:

There are a couple of issues here.

  1. I fail to understand what functionality you'd like in a RenderGraphDebugger at this stage. From my perspective, debugging will be a combination of nodes and placement of nodes. Highlighting the contribution of a node under inspection will require either rendering to a separate FBO (if it doesn't already), using a TintNode on the result and then recombining everything with a CompositingNode, or, more simply, adding a TintNode after the previous node and then rendering the inspected node as usual (reverse highlighting). PiP is fundamentally just a specialized, non-blending CompositingNode that takes the color attachment of one FBO (the background) and the color attachment of another FBO (the foreground) and combines the two, using a viewport to write the content of the foreground on top of a portion of the background (see the sketch after this list). Depending on how complex you want it to be, we could even have CropNodes and Transform2dNodes that, alone or combined, would give you anything from PiP to side-by-side views.

  2. I fail to see why we have to discuss this debugging functionality in such depth at this stage, when we have no experience with an actual DAG in Terasology. The idea here is to quickly have in place something minimally functional that allows us to develop the DAG and debug it as necessary. It is premature to develop DAG-based debugging functionality when the DAG itself is not in place yet.
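For reference, a sketch of the PiP compositing described in point 1: a non-blending composite in which the foreground FBO's color attachment is drawn into a small viewport on top of the background. Only glViewport is a real OpenGL call here; renderFullScreenQuad(), bindColorAttachment() and the FBO handles are stand-ins for whatever the engine actually uses.

```java
bindColorAttachment(backgroundFbo);
glViewport(0, 0, fullWidth, fullHeight);
renderFullScreenQuad();                          // background fills the screen

bindColorAttachment(foregroundFbo);
glViewport(0, 0, fullWidth / 4, fullHeight / 4); // bottom-left quarter of the screen
renderFullScreenQuad();                          // foreground drawn on top, no blending

glViewport(0, 0, fullWidth, fullHeight);         // restore the full viewport
```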

Once we have a DAG in place we'll certainly have to deal with how, for debugging purposes, we can add or move nodes used for debugging, so that they get into the right places - not to mention removing them from those places once they are no longer needed. But right now, for this purpose, I'm thinking commands, not fancy classes somewhat replacing the whole graph and peppering it with debug nodes. Also keep in mind that once we have a visual DAG, anything automatic goes out of the window: it will be much easier to write a tutorial showing which node to put where, so that users can do it on their own, than to write fancy functionality to do it automagically. I suspect that if we get all the way to a VDAG, even module developers won't bother with functionality to insert nodes in the right place: they'll just write a tutorial or a set of instructions, leaving the user with the relatively simple task of adding/replacing the appropriate nodes and doing the rewiring.

Does this make sense?

Really, for the time being, focus on the OutputToScreenNode and a handful of ConverterNodes that we can (currently) add to the initRenderGraph and then interact with via console commands. That's a good, solid, no-frills base. Then we get on with the DAG development and, as we refine our understanding of it, I'm sure we'll get a much better sense of what we need debug-wise.

@vampcat (Contributor, Author) commented Aug 9, 2017

Okay, so I don't think I'm getting my point across.

"Once we have a DAG in place we'll certainly have to deal with how, for debugging purposes, we can add or move nodes used for debugging - not to mention removing them from those places once they are no longer needed. But right now for this purpose I'm thinking commands, not fancy classes somewhat replacing the whole graph and peppering it with debug nodes."
No, I don't want to replace the graph or pepper it with debug nodes. I agree with you completely on the vision right now:
"focus on the OutputToScreenNode and a handful of ConverterNodes that we can (currently) add to the initRenderGraph and then interact with via console commands."

That being said, I just don't believe OutputToScreenNode is the best place to put in commands that manipulate the rest of the graph. Shouldn't a related - but separate - class take care of adding/removing the Nodes, based on which attachment type (or later, in which view mode) we want to view?

@emanuele3d (Contributor) commented:

That being said, I just don't believe OutputToScreenNode is the best place to put in commands that manipulate the rest of the graph. Shouldn't a related - but separate - class take care of adding/removing the Nodes, based on which attachment type (or later, in which view mode) we want to view?

That's not what I'm asking. The commands interacting with the OutputToScreenNode should only change the FBO. The converter nodes would only need commands setting min/max values, i.e. to remap depth and stencil to a visible range. The wiring, at this stage, we'd do right in the source code, as needed. We don't need fancier functionality right now.
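As a sketch of the kind of commands meant here, assuming Terasology's @Command/@CommandParam annotations: the node fields (outputToScreenNode, depthToColorNode) and their setters are hypothetical, and the methods would live in whichever class ends up hosting console commands (see the discussion below).

```java
// Hypothetical command: only changes which FBO the output node displays.
@Command(shortDescription = "Sets which FBO's color attachment is displayed on screen")
public String dagSetOutputFbo(@CommandParam("fboUri") String fboUri) {
    outputToScreenNode.setFbo(fboUri);
    return "Now displaying " + fboUri;
}

// Hypothetical command: remaps depth values to a visible range in the converter node.
@Command(shortDescription = "Sets the min/max range used to remap depth to color")
public String dagSetDepthRange(@CommandParam("min") float min, @CommandParam("max") float max) {
    depthToColorNode.setRange(min, max);
    return "Depth remapped to [" + min + ", " + max + "]";
}
```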

@vampcat (Contributor, Author) commented Aug 10, 2017

Okay, this makes sense.

I started on it and ran into a slight problem: a class needs to extend the BaseSystem class (or something similar) to enable registering commands within it. The OutputImageToScreen node already extends AbstractNode, so either we make a new class, or we make AbstractNode extend that class. I really don't like the second option.

What do you recommend?

@emanuele3d (Contributor) commented:

That's annoying. 😝

I'd say neither possibility is ideal given the circumstances. What I'm thinking right now is to have another object extend BaseComponentSystem, with the two obvious candidates being WorldRendererImpl and RenderGraph. But that requires looking into the lifecycle of a system and avoiding confusion between engine subsystems and component systems; I'm noticing that in the code both tend to be called just "systems".

Let's have a chat in the rendering channel tonight, also to clarify my own understanding, with @Cervator at least, but ideally also with @Josharias and @flo. This is a bigger architectural issue that requires a bit more knowledge of the bigger picture. Would 20:00 CEST work for everybody?

I'm actually quite flexible about the time, as my wife and kids have gone away for a long weekend (my first multi-day holiday in 6.5 years!), so I could do earlier in the afternoon or later in the evening, but we have to keep in mind that @vampcat is based in India, 3.5 hours ahead of Europe.

@emanuele3d (Contributor) commented:

@vampcat: I just had a chat with @Cervator on the subject.

It seems to me that making all nodes ComponentSystems (even by adding empty methods in AbstractNode) is not ideal as it would have some performance impact on the objects iterating over legitimate ComponentSystems.

In this context I'm thinking that the WorldRendererImpl class could be a good candidate to become a system, perhaps doing initialization work in postBegin() and disposal in shutdown(). In that scenario we'd be able to have commands in the WorldRendererImpl class.

That being said, it would still be nice if nodes could somehow register with the renderer the commands they accept. I.e. we could have a package-private WorldRenderer.registerCommand(uri nodeId, string commandName). Then, in the node, we'd have a handleCommand(string commandName, string arguments).

In this context, typing "dagNodeCommand myNodeUri myCommand myArg1 myArg2" would fetch the appropriate node via RenderGraph.findNode(myNodeUri) and then call myNode.handleCommand(myCommand, "myArg1 myArg2"), completely delegating the handling of the arguments string to the node.
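A minimal sketch of that dispatch, using only the names proposed above: findNode and handleCommand are the hypothetical methods described in this comment, and the hosting class is assumed to be whichever one ends up extending BaseComponentSystem (e.g. WorldRendererImpl).

```java
// Hypothetical console command forwarding a node-specific command into the render graph.
@Command(shortDescription = "Forwards a command, with its arguments, to a node in the render graph")
public String dagNodeCommand(@CommandParam("nodeUri") String nodeUri,
                             @CommandParam("command") String command,
                             @CommandParam("arguments") String arguments) {
    Node node = renderGraph.findNode(nodeUri);  // hypothetical RenderGraph lookup
    if (node == null) {
        return "No node found for uri: " + nodeUri;
    }
    node.handleCommand(command, arguments);     // the node fully owns parsing of the argument string
    return "Sent '" + command + "' to " + nodeUri;
}
```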

What do you think?

@vampcat (Contributor, Author) commented Aug 12, 2017

Closing in favour of #3058

@vampcat mentioned this pull request Aug 12, 2017
@emanuele3d closed this Aug 12, 2017
@vampcat deleted the debugNode branch February 18, 2018 13:45