Use CPU pixel format conversion for AMD/ATI platform only #6039
Conversation
@solarmystic, can you help test some of the games that require Read Framebuffer to Memory mode, to see whether they still work okay with this commit on the AMD platform?
@raven02 Danganronpa, Ys Seven and Trails in the Sky still work as expected on my AMD graphics card with this commit when using FB to Memory mode. I'm not sure whether it is a good idea to make every other platform use Read Framebuffers to Memory (GPU) by default (if I understand your pull request correctly) without giving them the option to switch to the CPU though. We know for a fact from testing that AMD/ATI cards need the CPU mode to attain accurate functionality in games like Danganronpa (correct object targeting/interaction), but other mobile platforms may need CPU mode as well.
CPU pixel conversion is only used in asynchronous PBO mode, and the current code always forces synchronous mode on mobile. Therefore, mobile should be unaffected.
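A minimal sketch of the branching described above, using hypothetical names (`ReadbackMode`, `ChooseReadbackMode`, `useCPUConversion`) rather than PPSSPP's actual code paths:

```cpp
// Illustrative only: the mode names and functions below are assumptions,
// not PPSSPP's real API.
enum class ReadbackMode { Synchronous, AsynchronousPBO };

ReadbackMode ChooseReadbackMode(bool isMobile) {
	// The current code always forces the synchronous path on mobile,
	// so the CPU-conversion branch below is never reached there.
	return isMobile ? ReadbackMode::Synchronous : ReadbackMode::AsynchronousPBO;
}

void ReadFramebufferToMemory(bool isMobile, bool useCPUConversion) {
	if (ChooseReadbackMode(isMobile) == ReadbackMode::AsynchronousPBO && useCPUConversion) {
		// CPU pixel format conversion: only reachable on the async PBO path.
	} else {
		// Synchronous readback / GPU-side conversion.
	}
}
```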
What about Intel, PowerVR on desktop? I guess we probably don't care about Matrox/S3... -[Unknown]
Hm, if we actually do this, we should change the rendering mode setting to have only three options instead of four, since this locks in the CPU/GPU choice without telling the user. Also, we should remove that glGetError call or only call it the first time, as glGetError is very expensive (it forces the CPU to wait for the GPU).
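One way to realize the "only call it the first time" idea is to cache the result of the first check behind a static flag; the function name and placement here are made up for illustration:

```cpp
#include <GL/gl.h>

// Cache the result of the first glGetError() check instead of querying it
// on every readback, since glGetError() can force the CPU to wait for the
// GPU. Hypothetical sketch, not actual PPSSPP code.
static bool PackSucceededOnce() {
	static bool checked = false;
	static bool ok = true;
	if (!checked) {
		checked = true;
		ok = (glGetError() == GL_NO_ERROR);  // expensive call, done only once
	}
	return ok;
}
```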
I still think that the choice should be given to the end-user for those options, but that's just me. And like @unknownbrackets mentioned, we still need input from users with other desktop GPU vendors. |
Yeah, I'm undecided. Maybe we should simply somehow "default" or make more prominent the option that's likely to be more suitable for the user's GPU. |
Yep, probably keep it for the sake of our users. I will remove the glGetError call, by the way.
The NVIDIA platform shouldn't need it and would only be slowed down a bit by it.
I remember this was suggested previously in #1686: detect and set it simply using the GPU vendor (see the sketch below).
(There is some underlying code that needs to be removed, but I will clean that up later.)
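For reference, a rough sketch of the vendor-based detection suggested in #1686, assuming GL_VENDOR substring matching is acceptable; the exact substrings vary by driver and are an assumption here:

```cpp
#include <GL/gl.h>
#include <cstring>

// Enable CPU pixel format conversion only for AMD/ATI GPUs, based on the
// GL_VENDOR string. The substrings below are assumptions; real drivers
// report strings such as "ATI Technologies Inc." or
// "Advanced Micro Devices, Inc.".
static bool IsAMDVendor() {
	const char *vendor = reinterpret_cast<const char *>(glGetString(GL_VENDOR));
	if (!vendor)
		return false;
	return strstr(vendor, "ATI") || strstr(vendor, "AMD") ||
	       strstr(vendor, "Advanced Micro Devices");
}
```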