
Add initial support for per-tile nonlinear warping and edge blending #179

Merged
9 commits merged into uic-evl:master on May 4, 2016

Conversation

derek-gerstmann
Contributor

Please review these changes as an initial working solution for per-tile warping and edge blending, to support projector-based tiled displays.

The implementation resides mostly within DisplayContext, via the new RenderCorrection class interface, and applies warping and/or edge blending globally to the final frame buffers without affecting any camera or renderer instances.
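
Conceptually, the hook behaves roughly like the sketch below; the method names and signatures here are just illustrative placeholders, not the exact interface in this pull request:

    #include <string>

    // Illustrative sketch only: method names/signatures are placeholders,
    // not the exact RenderCorrection interface added in this PR.
    class RenderCorrection
    {
    public:
        virtual ~RenderCorrection() {}

        // Load the per-tile correction data: a quad warp mesh (CSV) and an
        // edge-blend alpha mask (image file).
        virtual bool initialize(const std::string& warpMeshFile,
                                const std::string& edgeBlendFile) = 0;

        // Invoked by DisplayContext after all cameras/renderers have written
        // the final frame buffer for a tile: re-samples that buffer through
        // the warp mesh and attenuates it with the edge-blend mask.
        virtual void apply(unsigned int finalFrameBufferTexture,
                           int tileWidth, int tileHeight) = 0;
    };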

This is not as clean as I would like, but it seemed the safest path rather than attempting to override the CameraOutput or Renderer listener interfaces, which seem reserved for applications.

Also, the GpuBuffer class interfaces aren't used anywhere within the codebase, so I couldn't tell whether they are legacy or stubs, since all proper applications seem to use OSG for geometry, etc. There are widespread raw OpenGL calls throughout the codebase, even though some level of abstraction appears designed to modularise the graphics API calls. So, I have to cringe, but practically I found it necessary to call OpenGL directly within the RenderCorrection interfaces.
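
For reference, the raw-GL portion amounts to the general two-pass technique below. This is a simplified fixed-function sketch rather than the code in the diff; WarpQuad and the texture handles are illustrative placeholders, and it assumes an identity modelview/projection so positions are in normalized device coordinates:

    #include <GL/gl.h>
    #include <vector>

    // Illustrative placeholder: one quad of the warp mesh, with source texture
    // coordinates (uv) and warped output positions (pos) for its four corners.
    struct WarpQuad { struct Vec2 { float x, y; } uv[4], pos[4]; };

    // Simplified fixed-function sketch of the technique (not the PR's actual code).
    void applyWarpAndBlend(GLuint frameBufferTexture, GLuint edgeBlendMaskTexture,
                           const std::vector<WarpQuad>& warpMesh)
    {
        glEnable(GL_TEXTURE_2D);

        // Pass 1: re-sample the finished frame buffer through the warp mesh,
        // so the image lands pre-distorted for the projector.
        glBindTexture(GL_TEXTURE_2D, frameBufferTexture);
        glBegin(GL_QUADS);
        for(size_t q = 0; q < warpMesh.size(); q++)
        {
            for(int i = 0; i < 4; i++)
            {
                glTexCoord2f(warpMesh[q].uv[i].x, warpMesh[q].uv[i].y);
                glVertex2f(warpMesh[q].pos[i].x, warpMesh[q].pos[i].y);
            }
        }
        glEnd();

        // Pass 2: multiply the result by the edge-blend mask so overlapping
        // projector regions fade smoothly into their neighbours.
        glEnable(GL_BLEND);
        glBlendFunc(GL_ZERO, GL_SRC_COLOR);   // destination *= mask colour
        glBindTexture(GL_TEXTURE_2D, edgeBlendMaskTexture);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();
        glDisable(GL_BLEND);
        glDisable(GL_TEXTURE_2D);
    }

The correctionMode config option (see below) controls which of these passes run and in what order.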

I haven't been able to do extensive testing on the various stereoscopic 3D modes, so apologies if the interlaced modes don't work as expected. Please take a close look at these.

To test, please try the included test-warpblend.cfg config file, which shows the newly introduced config options:

    tiles:
    {
        local:
        {
            t0x0:
            {
                flipWarpMesh = true;                 // inverts the vertical axis of the warp mesh (to match OpenGL)
                correctionMode = "PreWarpEdgeBlend"; // specifies the order and combination of warping + blending
                warpMesh = "test-warpmesh.csv";      // basic CSV file describing the quad mesh the frame buffer is projected onto for nonlinear warping
                edgeBlend = "test-edgeblend.png";    // alpha mask applied for edge blending
            };
        };
    };
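
The warpMesh CSV above describes the quad mesh the frame buffer is projected onto. A loader roughly like the following sketch could consume such a file; the column order shown here is an assumption for illustration, not the format this pull request actually defines:

    #include <algorithm>
    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    // Illustrative only: assumes one vertex per row as "x,y,u,v", where (x,y)
    // is the warped output position and (u,v) the frame-buffer texture
    // coordinate. The actual column layout is defined by the PR, not here.
    struct WarpVertex { float x, y, u, v; };

    std::vector<WarpVertex> loadWarpMesh(const std::string& path, bool flipWarpMesh)
    {
        std::vector<WarpVertex> vertices;
        std::ifstream in(path.c_str());
        std::string line;
        while(std::getline(in, line))
        {
            if(line.empty() || line[0] == '#') continue;        // skip blanks/comments
            std::replace(line.begin(), line.end(), ',', ' ');   // commas -> whitespace
            std::istringstream row(line);
            WarpVertex v;
            if(row >> v.x >> v.y >> v.u >> v.v)
            {
                if(flipWarpMesh) v.v = 1.0f - v.v;              // invert vertical axis to match OpenGL
                vertices.push_back(v);
            }
        }
        return vertices;
    }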

To use, try:

    ./omegalib/build/bin/orun -L v -c system/test-warpblend.cfg ./sandbox/modules/sprite/demo.py
    ./omegalib/build/bin/oimg -c system/test-warpblend.cfg -L v -M stretch -f testimage.png

@febret
Member

febret commented May 4, 2016

Thanks for the contribution! Looks great!

RE the GpuBuffer & related classes, they are not legacy and have actually been introduced recently to address the issue you mentioned (widespread use of raw OpenGL calls).

Most of the examples use OSG because it provides some simple model loading / animation / physics support and is integrated through the cyclops module. But the omegalib core does not depend on OSG, and in fact, given how big a dependency it is, I'm trying to move away from it for new applications unless I absolutely need some of its features.

@febret merged commit 8d8a4d7 into uic-evl:master on May 4, 2016
@derek-gerstmann
Contributor Author

Excellent! Thanks for including this in master!
