A camera system that is more tied to entities and their relative location #2115
Conversation
Ooooh! Aaaah! ... that counts as feedback, right? Looks good to me. I like "Gaze" :-) Does this help us move toward a better camera system (like multiple camera locations / closed circuit tv) or is it more about offsetting the eyes of a creature from the body base and allowing sensible turning of a head? Bonus points for "We have a pony, with a monkey head!" |
It doesn't do anything about multiple active camera renderings at one time. But you could mount the client's camera to any entity that has a location, allowing one to look through a CCTV as first person by separating the character from the camera. |
Righto. Pinging @emanuele3d for possible interest / feedback. Maybe @flo would have an idea or two as well. @MarcinSc perhaps? |
Did some more work around seeing what it would be like to then use the camera entity to "mount" held items to for the first person view. I have used a similar pattern of having a mount point entity, and then location linking the local player's held item to this mount point with an offset in front of the camera. It seems to be going well. As an added bonus it allows a custom hand mesh to be used. Here is a quick video snippet of progress: https://youtu.be/F74LJ4NhEI4 |
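In sketch form, that mount-point pattern might look something like the following. This is only an illustration of the idea described above; the method, variable names, and offsets are assumptions, not the actual commit.

```java
// Sketch of the mount-point idea (illustrative only; engine imports omitted, packages vary by version).
// camera entity
//   └─ held-item mount point entity (fixed offset in front of the camera)
//        └─ local player's currently held item entity
void mountHeldItem(EntityManager entityManager, EntityRef cameraEntity, EntityRef heldItem) {
    EntityRef mountPoint = entityManager.create();
    mountPoint.addComponent(new LocationComponent());

    // Park the mount point slightly in front of and below the camera (values are made up).
    Location.attachChild(cameraEntity, mountPoint,
            new Vector3f(0.4f, -0.3f, -0.6f), new Quat4f(0, 0, 0, 1));

    // Link the held item to the mount point; with a mesh on the item entity it then renders
    // through the normal entity rendering path (and a custom hand mesh can be used the same way).
    Location.attachChild(mountPoint, heldItem, new Vector3f(), new Quat4f(0, 0, 0, 1));
}
```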
Nice! Good progress is good :-) I take it either something got missed in a commit, or it relies on an unreleased or not-yet-applied lib tweak or something? Or maybe Jenkins is just being goofy. |
I'll have to double check tomorrow. It seemed like it was just a Jenkins oddity. But even after clearing the workspace it still is weird. "It's working on my station"... |
…eld items. Use animation is still not present, but the mount point at least moves on use so that something happens. Meshes are always added to items. Still quirks with getting the material to work on blocks all the time. Fixes for multiplayer Gaze issues.
(force-pushed from bc4a301 to 6b0a34e)
Refer to this link for build results (access rights to CI server needed): |
I haven't looked at this in detail - apologies for that - but I wanted to give some preliminary feedback: An important aspect is that the camera should be detachable from the local player character at some point. For example, the camera could follow the player from behind like in Tomb Raider. Would that still work? Maybe the HUD should look completely different in that case? |
@msteiger: on separating the camera from the player's character, this will do that. Check. You are likely thinking about being able to turn the first person rendering stuff off when you are in 3rd person mode, right? In that case, I have not built in a switch for that yet. But that is definitely what I had in mind: having some sort of component that helps determine whether a first person view should be shown. (Thanks for the feedback. If you only get around to a conceptual review, I would be pleased.) |
One last tinker at this before polishing it up a bit more. I tried out moving the camera entity's position (while it is still linked to the main character's gaze mount point) to prove that one could indeed use this as a 3rd person view. I had to enable the character's mesh to be rendered for the owner in order for it to work. |
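In sketch form, that third-person experiment amounts to offsetting the camera entity's local position while it stays location-linked to the gaze mount point. The offset values and the setter used are assumptions for illustration.

```java
// Sketch: the camera entity stays linked to the gaze mount point, but its local offset is
// pushed back and up for a third-person style view. Values and setter name are assumptions.
void switchToThirdPerson(EntityRef cameraEntity) {
    LocationComponent camLoc = cameraEntity.getComponent(LocationComponent.class);
    camLoc.setLocalPosition(new Vector3f(0, 0.5f, 3.0f)); // behind and above the gaze point
    cameraEntity.saveComponent(camLoc);
}
```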
Very cool :-) Curiously does this change anything fundamental that would break modules? Thinking about the v1.0.0 release here. |
It does change modules that do any additional ray tests for collision as the CharacterComponent's pitch and yaw fields have been removed. The next couple steps I want to take will change things up a bit more by making progress in moving the inventory systems to Core. So far in the omega modules, I have IRLCorp, Miniion, Minimap, and Pathfinding that need tweaks. |
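For module authors, the practical change is roughly this: instead of reading pitch/yaw off CharacterComponent, derive the ray origin and direction from the gaze (or camera) entity's derived location. This is a hedged sketch; how the gaze entity is obtained and the getWorldDirection() convenience are assumptions (equivalently one can rotate the forward axis by getWorldRotation()).

```java
// Sketch: a module ray test without CharacterComponent.pitch/yaw.
// gazeEntity lookup and getWorldDirection() are assumptions here.
void castFromGaze(EntityRef gazeEntity) {
    LocationComponent gazeLoc = gazeEntity.getComponent(LocationComponent.class);
    Vector3f origin = gazeLoc.getWorldPosition();
    Vector3f direction = gazeLoc.getWorldDirection();
    // ...feed origin/direction into the usual physics raycast.
}
```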
Alrighty. It sounds like it would be worth getting that done before v1.0.0 then. Does that seem reasonable? And not that we are generally good with estimating things or picking dates but how long do you think it would take to be stable (just speaking API, not bugs) with the changes in? Couple weeks? Keep in mind we can move things first then get fancier / finish later. Just so long as modules get updated to point at the right spot :-) At this time I'm thinking the full Iota split-out (getting rid of Core and having the base game depend on Iota instead) would be one of the main v2.0.0 items. |
My personal plan is to get the camera stuff sent for real PR tonight. Then, get a PR for the first person rendering stuff early next week. Then do a once over the logic systems still in engine and move them to Core, where applicable, the week after. Hopefully at that point we could have a more decoupled engine that would allow for even more variety of gameplay modules that dont all smell like MC. [cheers on Light and Shadow and Master of Oreon] |
I just wanted to compliment @Josharias for this. Looks like a very good step forward. I also like the clarity of your initial explanation with the diagram. I have not looked into how the first person stuff is done, I only know at what point it happens in the WorldRenderer. But I did notice that there were some issues with intersections of objects held in hand, and I can see this set of changes will go a long way toward addressing that. Question: are the objects held in hand now actually in the world? I guess there is still some special handling, given that they go black when they are inside a block? It would be nice if they become part of the 3d scene being rendered so that their look, i.e. lighting, would be consistent with everything else. I also just wrote a paragraph about a potential gotcha with the renderer triggering the "underwater look" when the head of the player is underwater, potentially leading to situations where the look is applied but the camera is still -above- the water. It turns out I had in mind the method WorldRendererImpl.isHeadUnderwater(). The implementation however seems to do the right thing, checking against the camera position rather than the character's head. I will rename the method when I document the class and the interface it implements. Again, good stuff Josharias! |
Ah, one more comment directed to @Cervator regarding the multiple cameras issue. I think that has at least a couple of elements to it, both of which are worth their own PR: cameras as entities/components, and a renderer aware of multiple cameras.
Cameras as components is a bit daunting for me because I am so unfamiliar with the ES and the various camera-related entities. But it might not be overly difficult. A multi-camera-aware renderer is something that my DAG-based Rendering Pipeline proposal would address. That's a pretty big thing though. Perhaps there could be simpler, interim solutions to get some of the way there, e.g. allowing multiple "secondary renderers" rendering to a texture located on in-world geometry. The WorldRenderer would remain untouched and the secondary renderers would probably have to do everything on their own, which is less efficient than sharing acceleration structures, but it would probably be much quicker than a full DAG-based solution. |
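Purely as a strawman for the "cameras as components" half, something as small as this data-only component might be a starting point. It is entirely hypothetical, not an existing engine class.

```java
// Entirely hypothetical strawman: a camera described as plain data on an entity, so a
// future multi-camera-aware renderer could discover and drive any number of these
// (e.g. for CCTV-style secondary renders to a texture).
public class CameraComponent implements Component {
    public float fieldOfView = 90f;
    public float zNear = 0.1f;
    public float zFar = 500f;
    public boolean active = true; // whether a renderer should currently use this camera
}
```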
Thanks @emanuele3d! And yes, mspaint skills to the rescue with the diagram. :) Answer: As far as I could tell, technically the first person rendered stuff was/is in the world scene already (which is why it could get shadows and such). These changes let the first person rendered stuff use the same mechanisms as dropped objects to get meshed and displayed. It also manhandles the location of the held item so that it is always in front of the camera (only for the local client, and only for the local client's held item). So these items do indeed have a location that is part of the world. |
So in other words a held item like the chest in one video is fully visible to the holding player and affected by the world, but a second player on a server wouldn't see it, right? Or would it be visible? That could be a kind of cool way to distinguish the engine. Neat to be able to see people trying to place a chest just right vs just seeing them holding a tiny chest in their hand (if even that). Would probably be tricky to get the representation of a held item right from a third person perspective, but maybe we could sprinkle some of @glasz's style magic from the past on it and do the disconnected look for held stuff. Although I guess for tools and such you could have a snap point of sorts on compatible models where it would attach and be part of the model if it is the active item, differentiating between tools that can be held directly in a hand and items that just float in front of the player (magic!) |
@Cervator: At the moment, the held item only gets a location on the client side. So no other players would be able to see it. You could give it a different location for clients not tied to that character, and then it would be visible to them as well (going by the rule: entities render in the world if they have a LocationComponent and a MeshComponent). You would preferably attach the held item to a hand or arm of the character's model (snap points). |
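That rule, in sketch form (illustrative only; this is not the engine's actual render loop):

```java
// Sketch of the rendering rule described above: an entity shows up in the world scene when it
// has both a LocationComponent and a MeshComponent, so giving the held item those two
// components on a client is what makes it visible there.
void renderMeshEntities(EntityManager entityManager) {
    for (EntityRef entity : entityManager.getEntitiesWith(MeshComponent.class, LocationComponent.class)) {
        // ...the mesh rendering system draws each one at its derived (Location-linked) world position.
    }
}
```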
Thank you all for the thoughts and questions. First polished PR has been created. |
Not for Committing, for feedback only
Could I get some feedback on this concept? This started with a goal of making the first person renderer (the one that shows the held block and such) more extensible, so that one could customize and improve the first person renderer by attaching entities with meshes to the camera's location and having them fixed in view. This way we could get rid of a bunch of the OpenGL code that currently renders the first person held items.
The main part of this is the Gaze components (please suggest a better name if there is one). Gaze is specific to a character (whether AI controlled or player controlled) and not to be confused with the camera. The gaze entity needs its origin to be at a different location than the character location (aka the foot location) so that it can rotate in place at whatever location the "eyes" of the character are at. In order to do this, one must nest this entity in a few layers of components and entities, which then all get linked together by location linking (see Location.attachChild(...)). The client then gets an entity created (the camera entity) that can be location linked to the gaze entity. The local player rendering then uses the derived location of the camera entity to find out where in the world the player's view currently is.
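A rough sketch of that nesting, using the Location.attachChild linking mentioned above. The helper method, eye-height offset, and math types are assumptions for illustration, not the exact engine code.

```java
// Sketch of the nesting described above (engine imports omitted; packages vary by version).
// character entity (origin at the feet)
//   └─ gaze entity (offset up to eye height, rotates freely in place)
//        └─ camera entity (client side, location-linked to the gaze)
void setUpGazeAndCamera(EntityManager entityManager, EntityRef character) {
    EntityRef gaze = entityManager.create();
    gaze.addComponent(new LocationComponent());
    // Link the gaze to the character with a vertical offset so it can rotate at eye height.
    Location.attachChild(character, gaze, new Vector3f(0, 1.7f, 0), new Quat4f(0, 0, 0, 1));

    // The client creates its camera entity and location-links it to the gaze (no offset).
    EntityRef camera = entityManager.create();
    camera.addComponent(new LocationComponent());
    Location.attachChild(gaze, camera, new Vector3f(), new Quat4f(0, 0, 0, 1));

    // Local player rendering reads the derived world-space location of the camera entity.
    LocationComponent camLoc = camera.getComponent(LocationComponent.class);
    Vector3f viewPosition = camLoc.getWorldPosition();
    Quat4f viewRotation = camLoc.getWorldRotation();
}
```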
I made a picture...
Before I get to polishing this further, is it even on the right track? Any gotchas I haven't found yet? I also made a quick video snippet of this system working in multiplayer with a movable head apart from the body.
https://www.youtube.com/watch?v=_q0dEKTIxuo