Rolling Scanline Simulation (future improvements) #16373
The current problem is that we don't -know- a good way to improve it that doesn't have fairly bad artifacting or other major issues of its own. I personally think the rolling scan feature, as it is now, will scare people off BFI, making them think it's an entirely useless/broken feature. But I didn't want to stand in the way of merging either, as it isn't my place, and as this code should not inhibit the existing full-frame BFI/shader sub-frame code paths from working as intended. The best mitigations we know of for this feature's issues are hiding the joint lines behind scanlines in CRT filters, and overlapping the rolling scan sections with brightness adjustment, which replaces some of the tearing problem with horizontal strips of reduced motion-blur reduction -- itself a pretty apparent visual artifact. Also, a front-end solution like this won't be aware of what shaders are in use, and the screen resolution and Hz in use will also change where those rolling scan joint lines land in the image. So any front-end code, or any shader specifically meant to be used in conjunction with this feature, would need to account for a LOT of different joint-line possibilities. If anyone can provide a solution where the artifacting is minimal enough to compete with the existing full-frame BFI, which has zero inherent artifacting (other than strobing itself being a little annoying, obviously), I am all for it. There are a few side benefits to the rolling scan method over full-frame BFI when/if it works well. This is where @mdrejhon would be very handy. :)
For the record, I find a double ON to be much less obtrusive than a double OFF flicker.
Did you mean this response for my last reply on the previous PR regarding the 120Hz BFI workaround?
Yeah, I just put it here instead of there so we could close the lid on that one and continue discussion of improvements here.
A sub-frame shader solution (to that 120Hz workaround) wouldn't be able to inject an 'extra' sub-frame like a driver solution could. But I still think it might be better to 'hide' a feature that purposefully injects noticeable, annoying artifacting in a shader rather than as a front-end option. So you'd maybe do something like a (100-0)-(100-0)-(50-50)-(0-100)-(0-100) style phase shift, triggered on framecount % (an adjustable number of frames between phase shifts). And keep in mind framecount intentionally doesn't increment on sub-frames, or sub-frames would mess with anything older that looks at framecount but isn't sub-frame aware. The 50-50 transition frame might be a less noticeable/annoying transition than a straight flip like 100-0-0-100: it trades some of the very noticeable change in instantaneous average brightness for some transient motion blur, still annoying but maybe a -little- less distracting.
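For illustration, here is a minimal, hypothetical GLSL sketch of that phase-shifted cadence; the function and parameter names are made up for this example and are not RetroArch's actual sub-frame API:

```glsl
// Hypothetical sketch of the phase-shifted 120Hz BFI cadence described above.
// Assumes 2 sub-frames per 60fps frame; all names are illustrative only.
const float FRAMES_PER_SHIFT = 600.0; // adjustable: ~10 s at 60 fps

// frame = emulator frame counter (does NOT increment on sub-frames)
// sub   = sub-frame index within the frame: 0.0 or 1.0
float bfiBrightness(float frame, float sub)
{
    // Which cadence are we in: 100-0 (phase 0) or 0-100 (phase 1)?
    float phase = mod(floor(frame / FRAMES_PER_SHIFT), 2.0);
    // The last frame before each flip is shown 50-50 to soften the transition.
    bool shiftFrame = mod(frame, FRAMES_PER_SHIFT) == FRAMES_PER_SHIFT - 1.0;
    if (shiftFrame) return 0.5;
    bool brightFirst = (phase == 0.0);
    return ((sub == 0.0) == brightFirst) ? 1.0 : 0.0;
}
```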
Hi Roc-Y, I presume this only happens when the rolling scanline feature is turned on?
…On Fri, 16 Aug 2024, 17:42, Roc-Y wrote:
> I don't know why this causes wide black bands in the shader I developed, but I think if the Rolling Scanline Simulation feature only handled the native resolution (e.g. 256*244), then my shader would behave normally. 20240817_003458.jpg: https://github.com/user-attachments/assets/546e0f9c-5d53-4801-a4f1-ca496e18e89b
There are no black lines after turning off the shader. It seems that as long as the resolution is enlarged in the shader, there will be black lines. It has nothing to do with whether it is a CRT shader.
BTW, in fast horizontal scrolling, there can be tearing artifacts with rolling-scan. You need sufficiently fast motion (about 8 retro-pixels/frame or faster, which produces 2-pixel offsets for a 4-segment sharp-boundary rolling scan). This is fixed by using alphablend overlaps. However, gamma-correcting the overlaps so that all pixels emit the same number of photons is challenging. So is adding fadebehind effects (so that a short-shutter photo of the rolling scan looks more similar to a short-shutter photo of a CRT). And even LCD GtG distorts the alphablend overlaps. So alphablend overlaps work best on OLEDs of a known gamma (doing gamma correction, and disabling ABL). For LCD, a sharp-boundary rolling scan is better (tolerating the tearing artifacts during fast platformers). Then again, using HDR ABL is wonderful, because you can convert SDR into HDR and use the 25% window size to make the rolling-scan strobe much brighter. This improves a lot if you use an 8-segment rolling scan (60fps at 480Hz OLED) to reduce HDR window size per refresh cycle, allowing HDR OLED to have a much brighter picture during rolling BFI! Also, I have a TestUFO version of rolling-scan BFI under development that simulates the behavior of a CRT beam more accurately (including the phosphor fadebehind effect). Related: #10757
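As a rough illustration of why the overlaps must be gamma-corrected, here is a hedged GLSL sketch, assuming a plain 2.2 display gamma; the names are illustrative, not any shipping shader's code:

```glsl
// Gamma-correct alphablending for rolling-scan segment overlaps: blend in
// linear light so overlapping band edges emit the same photon count as a
// fully lit pixel (Talbot-Plateau), instead of blending gamma-encoded values.
vec3 toLinear(vec3 c) { return pow(c, vec3(2.2)); }
vec3 toGamma(vec3 c)  { return pow(c, vec3(1.0 / 2.2)); }

// w = weight of the lit rolling-scan band at this pixel (0..1, soft edges)
vec3 rollingScanPixel(vec3 frameColor, float w)
{
    return toGamma(toLinear(frameColor) * w);
}
```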
@songokukakaroto I've been working on subframe BFI shaders that you can try. There's a 240 Hz rolling scan one, but it's not great. As mdrejhon mentioned, the gamma issue means the fades aren't perfect.
I have a surprise coming December 24, 2024 -- the world's most accurate CRT electron beam simulator shader, the "Temporal HLSL", complete enough to close BFIv3 (except for the beamrace part) if integrated into RetroArch. It looks better than this simulation. MIT license. @hizzlekizzle, you could probably port it quickly into RetroArch. You're welcome.
@mdrejhon sounds exciting. I can't wait to check it out :D
Even MAME could port it in, if they wish -- @cuavas put a spike on it mamedev/mame#6762 because they said it was a pipe dream. Ah well, RetroArch code contribution it is!
It's a reality today. I just finished working on a new GLSL shader for the most accurate CRT electron beam simulator I've ever seen, especially when run on a 240Hz OLED. I'll publicize it with an MIT source license via a GitHub repo you can fork or port on 24th December 2024 -- my release date. When run on a 240Hz-480Hz OLED to emulate a 60Hz tube, it looks good enough that my shader might become the genesis of CRT-replacement OLEDs for arcade machines of the 2030s, when it will be cheaper to buy these soon-cheaper OLEDs than to source nearly-extinct tubes 10 years from now.
Sounds great!
Sneak preview! Slow-mo version of the realtime shader. These are genuine screenshots, played back in slow motion, of it running in real time doing 60fps at 240Hz. Phosphor trailing is adjustable, for brightness-vs-motion-clarity. Still works at 2x input Hz (min ~100Hz to emulate a PAL CRT, min ~120Hz to emulate an NTSC CRT) and up, and scales infinitely (1000Hz+). Yes, it looks like a slow-motion video of a CRT tube. But these are PrintScreen screenshots!
It's out now! I've published the article and released an MIT-licensed open source shader, with a Shadertoy animation demo (for 240Hz). Settings can easily be adjusted for 120Hz, 144Hz, 165Hz, 360Hz, 480Hz, or 540Hz! Please implement it into RetroArch. Pretty please.
Discussion also at #10757 |
Porting to RetroArch was a breeze. It's available here now: https://github.com/libretro/slang-shaders/blob/master/subframe-bfi/shaders/crt-beam-simulator.slang and will show up in the online updater in a few minutes/hours. I replaced the edge-blended version I had made with it, since this one is superior in every way lol.
That was damn fast! Nice Christmas surprise. And I can combine CRT filters simultaneously too? Neat! You should rename the menu entry in RetroArch if possible, to at least catch attention that this is a new, better shader. Also, eventually I'll add a phase offset, since I can reduce the latency of this CRT simulator by probably 1 frameslice (1 native refresh cycle) simply by adding +time as a constant. I need to experiment with my changes in Shadertoy in the coming week (it's Christmas). But it's absurdly fantastic to see an actual deployment the same day I released my simulation shader! Which releases will have it? PC, Mac, Linux? Can it also be ported to the mobile app for 120Hz OLED iPhone/iPad too? I notice the Shadertoy works great on those, even if not as good as 240Hz. EDIT for TechSpot readers: TechSpot posted some publicity that contained a permalink to this comment. If you're looking for the original main copy of the shader, which will get an improved version in January 2025, please go to my repository: www.github.com/blurbusters/crt-beam-simulator
I just asked our Apple guy and he says the subframe stuff is available on nightly builds for iOS but will be included in the upcoming release. It doesn't persist, so you have to re-enable it on each launch, which is a drag, but nothing worth doing is ever easy :) But yeah, Mac/Win/Lin should be covered. Thanks for working on this and for designing (and licensing) it around sharing and easy integration into other projects. It was a breeze to port thanks to that foresight and generosity.
Tim made one of the most important contributions to keeping it bright and seam-free (the variable-MPRT algorithm). Niche algorithms tend to be ignored by the display industry, so it's nice we could BYOA (Bring Your Own Algorithm) straight into RetroArch, just supply a generic Hz, and the software can do the rest. And nice that you kept the LCD Saver mode (maybe add a boolean toggle for it). OLEDs do not require it, and I kind of prefer it be done at the application level to avoid the slewing-latency effect [0...1 native Hz]. Not a biggie for 240-480Hz, but turning it off will create constant latency for evenly-divisible refresh rates.
Done! libretro/slang-shaders#668 I'm having fun running my subframes up higher than my monitor can push and setting the "FPS Divisor" up accordingly. It looks just like slow-motion camera footage of CRTs. You can get some pretty believable slo-mo captures by pairing it with P_Malin's metaCRT:
We'd need to see a log of it failing to load to even guess, I'm afraid. This sort of issue is usually handled more effectively via forum/discord/subreddit, though, if you can pop over to one of those.
How do you load this in RetroArch? When I load the presets, nothing happens. I have a 240Hz LCD monitor; what other options must I change to make it work?
@Tasosgemah Enable shader sub-frames in the settings.
Thanks, it works now. But I assume my monitor isn't good enough for it, because even though the motion blur is reduced, it looks really bad. All the colors are very dark, there's some minor ghosting, some noticeable transparent horizontal stripes, and random flickering that comes and goes.
@Tasosgemah Something else must be wrong; I have a 160Hz monitor set to 120Hz for this and it looks super clean, and I experience none of this. How many shader sub-frames did you enable?
Hmm, any idea why loading the CRT-Beam-Simulation shader causes N64 games to go black/have no video output? I'm running ParaLLEl with Angrylion for the GFX plugin.
@mdrejhon libretro/slang-shaders@dde0a17 For instance, here are some presets I made using RetroArch's built-in, in-menu shader stacking system. It's very easy to do this. So I'd suggest just running through nearly all of slang-shaders' shaders (there are a lot of them, each with their own purpose and intent), seeing which work best in conjunction with crt-beam, and then working out some kind of numbering scheme or whatever you had in mind. Not sure if I'm making sense, but I think that could lead us to the best and most immediate results.
Exactly why I made my suggestion -- we'd crowdsource the data.
I don't have time; this was an unfunded project. Even though it has taken me hundreds of hours of spare-time work since 2022, re-testing with tons of shaders gets challenging. Can't we just crowdsource it using my suggestion? We can simply keep these new properties unset/undefined (most of them) and gradually fill in the .SEQUENCE property values as time passes, over the next several months. I will contribute some of it. But could you at least pave the road to make testing easier? It's just a single property value (.sequence) and an automatic flag (indicator) for sorted true/false (telling me it's not in the recommended sequence). You wouldn't enforce it. Maybe you're asking me to make a pull request for the .SEQUENCE value processor and the warning-message displayer (shown if the SEQUENCE numbers are not in sorted order). It would be a very minor modification. It would not affect the workflow, just a notification indicator to help quickly test sequences. That's exactly why I made my suggestion: to help me do it, and to help crowdsource the data.
Then, at the beginning: this will help me research sort order more easily and help crowdsource the data. It's always more efficient to process shaders while the buffer is small (CRT-simulate the original 320x240 emulator framebuffer, as one example) rather than when it's big (CRT-simulate a 3840x2160 framebuffer post-CRT-filter). You wouldn't disallow that, but you'd have a warning indicator light, a changed background color on an out-of-sequence shader, or some warning message appearing somewhere, to guide the tinkerer and experimenter (and save a lot of time). Basically, don't interfere with users, but notify users of a potentially suboptimal sequence.
Are you available on Discord, by the way? Or do you hang out on our Discord servers? I have a relatively high-end TV; I'd be quite willing to discuss with you specific things to set up to see how much we can push this feature on this device. I think some of that entails setting up the right settings for the TV. I don't have a lot of that knowledge, so it'd be best if I could contact you directly.
[Exchanged handles privately; removed contact info as my notifications are overflowing from the concurrent viral social media on Bluesky and X -- it seems people are amazed at the Blur Busters achievement.] Chatting on Discord, it appears it's a question best for @Themaister, because he's the GPU expert; so I'll defer to him on what he thinks of my timesaver idea (the 2 hours of work to reduce the sequence-experimentation workload by 90%, since one can simply add the sequence numbers piecemeal, one shader at a time, over months -- and it's permanently documented). I helped @LibretroAdmin improve colors on OLED; using SDR apparently produces better colors than HDR on his specific model -- this is just a side effect of the math being optimized for Adobe sRGB, and some TVs really do a bad job of compressing SDR inside HDR. I wish TVs would give me better APIs to access linearized HDR for the bottom end of the HDR curve (up to the specified window size, etc.). Now, a new problem: black clipping and white clipping at the display level or GPU control panel level creates banding. So, calibrate your brightness/contrast first! (RetroArch brightness/contrast doesn't cause banding problems, as long as it's done before CRT simulation -- so color-process your emulator framebuffer before piping it through the CRT simulator.) Now resuming an idea I got:
I have an idea. Even the 16 colors of an 8-bit machine theoretically turn into 1 billion colors when you have the phosphor-fade + brightness-cascade algorithm. Therefore, you get banding if you get any clipping in your Adobe sRGB colorspace.
<Technical> Big wall of text warning
Select Simulation: [Plasma | DLP | CRT | CRT-VRR | Fast LCD | Slow LCD | OLED | 35mm projector]

Yep. I have all the algorithms. It can be done. Just supply me with 1000Hz + direct access to the nits value per pixel. I already have examples of primitive Wright-brothers tests limited by limited Hz: there's TestUFO Interlacing, TestUFO DLP Colorwheel (run at 360Hz or it's annoying; wave your hand in front of it), TestUFO Variable-Blur BFI (run at 240Hz), and I can even do CRT-style 30fps-at-60Hz double images via pure software BFI on any LCD/OLED. Obviously, I will port the CRT simulator to TestUFO soon. However, once we hit ~1000Hz, the number of algorithms I can do literally expands geometrically. This fully functioning CRT beam simulator is my micdrop here, after all... Now click ">details" to open the 15-page wall of text.

Recalibrate to avoid black/white clipping! Every single Adobe sRGB color from RGB(0,0,0) through RGB(255,255,255) must not be clipped, so you may be able to eliminate the banding by making sure you've precalibrated your display. It may not solve 100% of banding, but give it a try: raise Brightness and reduce Contrast.

10-bit helps 8-bit in the CRT simulator. If you can do a 10-bit pipeline, yes, use it where possible, even if only at the display end. This will slightly reduce math rounding errors inside the CRT simulator; the extra precision reduces banding issues because of how the CRT simulator upconverts the limited retro palettes to almost infinite possibilities, due to the phosphor fades and per-channel cascades that can blend differently for R versus G versus B (e.g. the brightness-cascade algorithm will ghost G only into the next Hz if R and B are dim... behaving kind of like the original tube). My CRT shader seems to look better inside a 10-bit SDR processing pipeline (in an offline DirectX app), which reduces the quantization errors caused by the gamma2linear() and linear2gamma() that are done twice per subpixel per native Hz. Those rounding errors skew the Talbot-Plateau theorem very slightly (e.g. viewing test patterns through the CRT simulator will appear to be roughly 7-bit color instead of 8-bit color as the math rounding errors build up).

Understanding gamma vs linear. Suppose you successfully calibrate all the bands out, and then use an exact integer native:emulated divisor. Unit-testing by summing the pixel values of the emulated CRT frames, dividing by the native:emulated ratio, then stretching to 0..255, should yield the pixel values of the original emulator framebuffer -- but it won't, due to the 8-bit math rounding errors, you see... Ah well, one cannot win universally.

RGB(64,64,64) is not HALF the photons of RGB(128,128,128), due to gamma curves. For example, you can never get a perfect 25%-linear grey with 8-bit. RGB(127,127,127) is not half the brightness of RGB(254,254,254) in photon count, due to the gamma curve. So you necessarily have quantization errors, especially when doing two gamma-curve computes per subpixel per refresh cycle, as I need linear values for the Talbot-Plateau theorem -- it can't be done in gamma space -- and there's no clean math formula to successfully linearize HDR on displays that apply their own tonemapping... But upconverting Adobe sRGB to 10-bit, then piping it through a 10-bit CRT simulator, outputting over 10-bit HDMI to a 10-bit OLED, can look really clean, since the 10-bit stays really close to the original 8-bit values despite the quantization errors building up in the CRT simulator...
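To make the gamma-vs-linear point concrete, a hedged sketch (assuming a plain 2.2 power curve rather than the exact piecewise sRGB function):

```glsl
// All Talbot-Plateau photon averaging must happen in linear light, because
// gamma-encoded values are not proportional to photon counts.
vec3 gamma2linear(vec3 c) { return pow(c, vec3(2.2)); }
vec3 linear2gamma(vec3 c) { return pow(c, vec3(1.0 / 2.2)); }

// Examples (pure 2.2 gamma, approximate):
//   gamma2linear(vec3(128.0/255.0)) ≈ 0.22  -> RGB(128) emits ~22% of the
//                                              photons of RGB(255), not 50%
//   gamma2linear(vec3(174.0/255.0)) ≈ 0.43  -> the ~40% figure cited later
```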
However, despite the quantization errors, the CRT simulator still looks kickass. Let me tell you more...

HDR math is horribly opaque. I wish 10-bit HDR were easy enough to do math on; I want display manufacturers to improve their ability to communicate HDR metadata to me, so I can properly apply the Talbot-Plateau theorem across refresh cycles in a temporally predictable way. This is still a math miracle nonetheless, almost an E=mc^2 simplification (look at how tiny the shader is).

The bedrock of the CRT algorithm: the easy Talbot-Plateau theorem. The Talbot-Plateau theorem is beautifully simple (this theorem is the bedrock of the CRT algorithm, study up, buddy!): flash something twice as bright for half as long, and it's the same average brightness. And that's why I need the CRT simulator to run in linear colorspace, properly subdivide brightness over multiple Hz in a "photon-count management" style system, and go variable-MPRT. Easy peasy once you understand it -- it's only high-school math -- but most don't get it until they realize how simple the Talbot-Plateau theorem is. That's why www.blurbusters.com/area51 is now textbook reading at display manufacturers.

The photon-budgeting algorithm. Credit: Timothy Lottes for this sheer brilliance; he calls it "brightness redistribution". I call it "photon budgeting". Imagine you have 100 mL of water you need to pour into four separate 25 mL glasses, served once a minute. The restaurant server can only serve one 25 mL glass per minute. But somebody ordered 60 mL of water fast. You pour 25 mL into each of the first two glasses, followed by 10 mL into the third glass.
Exactly! Use photons instead of water. So if I want to emit a 60%-linear white in one refresh cycle, but convert it to lower motion blur over four refresh cycles, I have to compress by **serving the photons quickly**. That's how a 60%-linear white gets photon-budgeted into four refresh cycles for 60fps at 240Hz.
Of course, this is per-pixel (time-correct relative to the raster, which sweeps top to bottom). Remember, RGB(174,174,174) has almost exactly 40% of the photons of RGB(255,255,255), because of the gamma curve formula, and I do 2 gamma-curve computes per subpixel per refresh cycle. Shaders are math performance miracles. Now you understand the brightness-cascade algorithm: it reduces motion blur at the same brightness level. Now that said, GAIN_VS_BLUR is often lower -- often you have a value of 0.5 -- so you can compress even more. Instead of a 60% linear white, you're serving only a 30% linear white, which means you finish serving all the photons in even fewer refresh cycles (see the sketch below):
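A minimal, hedged sketch of this per-pixel photon-budgeting idea (illustrative names, not the actual shader code):

```glsl
// Pour the pixel's linear-light "photon budget" into successive native
// refreshes, front-loaded: full glasses first, one partial glass, then black.
// ratio = native:emulated Hz ratio (e.g. 4.0 for 60fps at 240Hz).
float subframeBrightness(float linearWhite, float gainVsBlur,
                         float ratio, float subframe)
{
    float budget = linearWhite * gainVsBlur * ratio; // total photons to serve
    float poured = budget - subframe;                // what's left for this glass
    return clamp(poured, 0.0, 1.0);                  // 1 = full, fraction, or 0
}
// Example: 60%-linear white, gainVsBlur = 1.0, ratio = 4 -> budget 2.4 ->
// subframes emit 1.0, 1.0, 0.4, 0.0 (same total photons, shorter pulse).
// With gainVsBlur = 0.5 -> budget 1.2 -> 1.0, 0.2, 0.0, 0.0: half as bright,
// but far more than half of the motion blur removed.
```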
Voila -- exactly half as bright, BUT more than half the blur removed! It works even better at larger native:emulated Hz ratios, especially 3-4, than at 1, but you can see the magic going on: the CRT simulator is brighter than classic BFI because of this math cleverness. Almost as if you're breaking the laws of physics (violating the Talbot-Plateau theorem), but not really -- you are selectively giving dimmer pixels a lower-motion-blur advantage, simply because you can brighten them (even clip to 255!) to serve the photons faster, shortening pulsewidths for lower motion blur. It averages out per the Talbot-Plateau theorem. Sorry, nothing faster than light (it seems like quantum teleportation, but not quite), but it is a very clever cheat that still complies with the laws of physics, getting MORE than 50% average blur reduction while only losing half the brightness. In some cases, I'm even seeing 75% blur reduction with only a 50% reduction in brightness, simply because a lot of pixels in a dark dungeon game are easy to reduce the blur of: they can be made 4x brighter briefly without clipping the brights out (which would simply cascade into extra Hz, meaning more blur for brighter pixels). So, understanding the Talbot-Plateau theorem combined with understanding gamma curves lets you do this CRT emulation magic.

Plasma simulator shader possible / DLP simulator shader possible. I'm not bothering with this because CRT is the gold standard; I'm just saying plasma subfield shaders are definitely possible, complete with christmas-dots, noise, and plasma-contouring effects, all temporally correct. The identical math magic can massively brighten early experiments in plasma subfield emulators on a 600Hz OLED, and even DLP subfield emulators on a 1440Hz OLED, and so on. Optional noisy dither and optional rainbow artifacts too, if one wished (ugh!) -- piece of cake for me. But the CRT tube is still the gold standard, so I targeted that first, obviously. Generic Hz + shader for the win, in BYOA (Bring Your Own Algorithm) approaches! I am in 30 research papers, www.blurbusters.com/area51, which you may want to watch as I have more skunkworks projects coming out in the new year. I was not paid to do the shader; this is a hobby part of my biz. But I do work with some display manufacturers on contract from time to time... I gained lots of skills that led to this CRT electron beam flying-spot simulation breakthrough! Generic brute Hz is the dawn of the BYOA (Bring Your Own Algorithm) age.

Open source display or television next, maybe? Give me 1000Hz-ish, plus direct access to the HDR-to-nits lookup table, and I can have a menu: Select Simulation: [Plasma | DLP | CRT | CRT-VRR | Fast LCD | Slow LCD | OLED | 35mm projector]. I already have temporal formulas for ALL of them. Yes: GtG simulators, subfield simulators, interlace simulators, strobe sequences, etc. Just add Hz. Watch your sports broadcast in OLED mode, but watch your movies in the 35mm projector simulator. Play your video games in the 60Hz or 120Hz or 85Hz or whatever-Hz CRT simulator. It's all just simple Blur Busters math trickery. User choice for the win? (Manufacturers, please make it happen. Christmas gift 2026 for me, pretty please: give me 1000Hz + direct access to your brightness lookup tables. We are already at 480Hz, and 1000Hz OLEDs are already in the lab, hitting market in 2027. I now have enough networking contacts to provide the (display+temporal) genius to pull this off. I don't want to be a display manufacturer; I just want to do algorithms for an open source TV.)
The same shaders running in an open source TV could be ported anywhere else, like a RetroTink. It's just shader compute. I could make all sorts of algorithms happen if I only had the Hz and direct access to nits-to-HDR-value lookup tables for the Talbot-Plateau theorem (see how surprisingly simple the theorem is from the restaurant-server metaphor above?). Next best thing: we can just do it in our own shaders, BYOA (Bring Your Own Algorithm).

Brainstorm: per-channel phosphor fade. (Brainstorming... Long term, we should have separately adjustable phosphor speeds for the different channels, but for now we optimized the brightness-cascade trick (the variable-MPRT trick). Per-channel decay would create green ghosting for moving whites if I made green a slower phosphor than red/blue. Also, maybe in the future, a y = ax^2 + bx + c "S-shaped curve", to allow the combination of prioritizing bright-first while still having a phosphor decay in the 2nd Hz onwards, etc.) </Technical>
Is anybody else experiencing the same colour separation I attempt to demonstrate in this video (filmed at 60fps on an iPhone)? Most visible around the 17-second mark. To my eyes in person, there is no black banding or visible banding at all; everything looks solid and as expected, and all colours look accurate when static (albeit the whole screen has that almost sub-perceptual hint of flicker that you always get with BFI). But during moments of fast lateral motion, there is a kind of separation of the colour channels. It's very noticeable in this clip and reflects pretty well what I see with my own eyes (except the colours are pure, not overblown or saturated as they appear in this video). Setup is a Dell AW2725DF. Tried at 360Hz and 240Hz. HDR off. sRGB colour space selected in the monitor. Using Vulkan in RA with just the shader on its own; it's not masked by putting a CRT shader over the top (in fact it becomes even more obvious). I tried messing about with gamma settings, LCD saver on/off, different SNES cores, different video modes on my monitor, different video processors in RA -- pretty much everything I could find -- and I'm finding that if I make the image look stable and crisp when static (e.g. no artifacts), then it exhibits this behaviour when moving quickly. It also almost looks like a rolling-shutter kind of jelly artifact at times; even when there's no colour separation, there's line doubling on vertical lines, seemingly right at the currently ongoing "roll" point. It's also not always present in the same part of the image; it feels like a rolling band that travels vertically through the height of the screen over the course of maybe 45-60 seconds. So it's only present when running along the bottom of this level in SMW for 20 seconds or so, then it's totally gone for another cycle, so to speak. Is this kind of an expected side effect? I mean, we are doing rolling frame refreshes, so rolling frame artifacts kind of seem like an expected outcome, almost. Can somebody who has this working as well as they'd imagine on an OLED try running left and right on the first level of SMW (USA) for a minute or two, and see if the same kind of artifacts pop up? (Or if the Chief knows what this is being caused by, feel free to inform me!)
That's normal; original CRT tubes flickered. It should flicker less than BFI or strobe backlights, and if your display is not doing any weirdness (e.g. PWM), then it should flicker approximately similarly to an original tube (for the same viewing distance and same-brightness tube).
A minor jelly artifact is unavoidable when slowly scanning out granularized refresh cycles; you need a large native:simulated Hz ratio to make it look like normal CRT scan skew. I will add a scan-velocity adjustment to reduce the jelly artifact -- give me about one week.
It will support infinite scan velocity, like a global-refresh CRT tube (...inventing nonexistent displays for the win, in the BYOA -- Bring Your Own Algorithm -- approach...). You will just have to hang tight until I add these improvements to my shader, and until RetroArch adds a "Scan Mode: Normal/Global" setting to the CRT simulator. The global mode will have zero jelly effect, at the compromise of slightly more flicker (because it behaves like a phosphor-style, variable-MPRT black frame insertion that phosphor-fades), ala Timothy Lottes' math brilliance.
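As a rough, hedged sketch of what such a scan-velocity control could look like (illustrative names only, not the actual shader):

```glsl
// phase: time within the simulated refresh (0..1). y: pixel row (0..1).
// The beam finishes its top-to-bottom sweep in 1/scanVelocity of the refresh:
// scanVelocity = 1.0 behaves like a real CRT raster; very large values light
// all rows nearly simultaneously (a global phosphor flash, zero jelly effect).
// A real implementation would also renormalize photon output (Talbot-Plateau).
float beamWeight(float y, float phase, float scanVelocity)
{
    float age = phase - y / scanVelocity; // time since the beam passed row y
    return (age >= 0.0) ? exp(-age * 30.0) : 0.0; // phosphor-style fadebehind
}
```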
It travels vertically simply because it's trying to avoid LCD image retention. Turn off the LCD Saver setting (coming in the next version of RetroArch) if you want to make it stationary, or use an odd-numbered native:simulated Hz ratio. It's the algorithm preventing LCD image retention (that's the scientific explanation).
TL;DR:
How it works: LCD Saver intentionally kicks the CRT Hz off by adding 0.001 to the native:emulated ratio, creating ~59.985Hz for 60.000fps on 120.00Hz/240.00Hz/360.00Hz LCDs (yes, you heard me right), to slowly slew the black frames out of phase with the LCD's voltage-polarity inversion algorithm in a clever way. This works because my CRT algorithm does not require native:emulated ratios to be an integer, as long as the ratio is at least (2.0f) -- yes, it's a float, even though RetroArch was designed only for integer ratios (understandable; it's not an easy workflow). If you're using an OLED, turn the LCD Saver setting off; it's coming in the next version of RetroArch. If you're using an LCD, use odd-numbered native:emulated ratios like 180Hz or 300Hz (this will stop the drift, since LCD Saver automatically turns off internally). As a rule of thumb, the jelly effect starts to appear at scrolling/panning/turning speeds that are faster, in pixels per frame, than the native:emulated ratio. If your native:emulated ratio is only 4 (60fps at 240Hz) and your motion is going more than 4 pixels per frame, that's when the jelly effect appears (divergence effects). That's to be expected, unless I add motion compensation or AI algorithms or interpolate-within-scanout, which is kind of a no-no for most retro purists (though some might ask for it). It's funny -- I fully understand the display science & physics of what's going on. All I can say is: throw as much native:emulated Hz at it as you can, and that will massively help (at least until RetroArch chokes trying to keep up; 2ms time budgets at 480Hz are hard, and we've already got 1000Hz OLEDs in the lab).

Upcoming solution for the jelly effect: The good news is that I do have a solution for the jelly effect, as a tradeoff against flicker. It will be a "GLOBALFLICKER_VERSUS_SCANJELLY" slider of sorts -- a scan-velocity adjustment, like a faster vertical deflector on a CRT tube, adjustable to infinity (global-refresh CRT). It's a workaround for excessively low native:emulated Hz ratios until everybody has 1000Hz displays. The jelly exists because your analog moving eyes are in a different position during each of these emulated CRT frameslices. As you move your eyes from one edge of the screen to the other, 960 pixels/sec at "60Hz" is 16 pixels per simulated Hz, but we only have 4 slices, so we've got double-imaging with a 4-pixel separation artifact (like an advanced multilayered meta-version of the old CRT 30fps-at-60Hz effect, simply because the native:emulated Hz ratio isn't high enough, quite yet).

The color separation problem: I believe Timothy Lottes has a solution where color separation during the jelly effect can be reduced, without eliminating the jelly effect fully. I might be able to port his changes too, as a configurable option. Normally it just looks like phosphor ghosting for bright color channels if you blur it enough (no square pixels), but with square pixels, the color separation artifact DOES get ugly. (So you may wish to blur your pixels slightly when using my CRT simulator, using the various filter shaders.) Fix ETA: early 2025 (January 2025, I hope).
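Going back to the LCD Saver mechanism described above, a hedged sketch (illustrative names, not RetroArch's actual implementation):

```glsl
// Detune the native:emulated ratio by 0.001 so the simulated CRT refresh
// (e.g. 240 / 4.001 ≈ 59.985Hz on a 240Hz panel) slowly slews the black
// bands out of phase with the LCD's voltage-polarity inversion cadence,
// preventing DC charge buildup (image retention).
float crtScanPhase(float timeSec, float nativeHz, float ratio, bool lcdSaver)
{
    float r = lcdSaver ? ratio + 0.001 : ratio;
    float simulatedHz = nativeHz / r;
    return fract(timeSec * simulatedHz); // 0..1 vertical position of the sweep
}
```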
TestUFO Jelly Effect Test: Want to test the jelly effect on your old 60Hz real CRT tubes? See for yourself: www.testufo.com/scanskew It was not very noticeable because every scanline imperceptibly shifted horizontally relative to your gaze, as the display scanned downwards while you tracked your eyes perpendicular to the scanout. But with low native:emulated ratios, the jelly effect is slightly stairstepped/quantized (although softened by the soft overlaps, and softened even further by non-square pixels).
BTW, after CES I will release an improved shader that includes:
A lot of these are to help you get around various itty-bitty display-specific limitations. So much craziness trying to stomp all of these out; I now understand all the mechanisms creating these problems. Such fun thinking in Talbot-Plateau law wrapped into 3D matrix math (width x height x time)... To borrow an oft-used retro pop phrase: "All your blurs are belong to us" 👽
@pxdl, I'm having the exact same problem, but I cannot find a place to enable sub-frames -- where is that option? Nevermind, fixed it. For anyone else in the future: if you've had RetroArch a while, do a fresh install. I was missing the newer menu options.
I have a new troubleshooting HOWTO: For CRT Simulator Artifacts -- Fix banding / flicker / chroma ghosting
Can this be ported to the Android build too, please? New Android handhelds are coming very soon with 120Hz OLED displays, and this is simply the best thing to happen to emulation since forever!
We had previously disabled all BFI-type effects on Android simply because we didn't want to deal with people freaking out about how we ruined their $1k phone with "burn-in" and most weren't capable of 120+ Hz anyway, but now that most are AM/OLED and support high refresh rates, we're looking into re-enabling those features.
Please, yes. Plus, why not do my best practices?
I have also long liked the idea of a duplicate frame every X seconds as a safety measure for traditional software BFI in RetroArch in general. It should be hardly noticeable, and it's always better to be safe than sorry when working with hardware that can cost anywhere from a couple hundred to over a thousand dollars, and with a lot of potentially uneducated people.
While it's not part of the BFI options in settings > video, I've incorporated cadence-shifting every X seconds in several of the BFI shaders.
The only problem is the slewing-latency effect, so automatically disable this when it's not necessary -- like on OLEDs, to lower BFI latency, or when native:emulated is odd or is not an exact even integer ratio (an exact even integer ratio being the ONLY time on an LCD that you need to actively flip the inversion phase of the pixel-inversion AC voltage, to prevent BFI static-electricity buildup and its resulting image retention).
Yeah, please -don't- make the cadence shifting universal. I heavily disagree that it is hardly noticeable. Without BFI, sure, a single dropped or extra frame is pretty hard to pick out, but our eyes are much, much more sensitive to a very short uneven brightness flicker than to a very short temporal stutter with even brightness. Also, for regular BFI, 120Hz on an LCD is the only time it is truly necessary currently. Not even the other even multiples, since the number of bright vs dark frames is adjustable. I.e., 240Hz at ON(+)-ON(-)-OFF(+)-OFF(-) is still perfectly safe on an LCD.
> Yeah, please -don't- make the cadence shifting universal. I heavily disagree that it is hardly noticeable. Without BFI, sure, a single dropped or extra frame is pretty hard to pick out, but our eyes are much, much more sensitive to a very short uneven brightness flicker than to a very short temporal stutter with even brightness.
Not necessarily. That pattern will create image retention on some of MY own LCDs. Not all of them, but some of them. Sometimes. But I wouldn't trust it. I've been working with LCD inversion for 10 years and I work with display manufacturers: https://www.blurbusters.com/area51 -- I am in over 30 research papers. EDIT: On the display side, LCD inversion algorithms can go to a 4-frame sequence instead of a 2-frame sequence, as some LCDs automatically do when displaying stereoscopic shutter-glasses content. And sometimes BFI content "looks" like stereoscopic shutter-glasses content to these LCDs by accident, too.
From a display industry veteran:
Always safe
Risky
Potentially Risky
Mitigation
Not necessarily what? That 240Hz at ON(+)-ON(-)-OFF(+)-OFF(-) isn't guaranteed safe on an LCD? I only have ~4 240Hz-capable LCD screens, which granted is probably much, much less than you, hah. But I was the primary author of the current RA BFI & sub-frame back-end stuff, so I did test quite thoroughly. At least out of that sample size, all 4 LCDs that did have issues at 120Hz were 100% retention-free at 240Hz with ON(+)-ON(-)-OFF(+)-OFF(-), and on my one 360Hz-capable screen with ON(+)-ON(-)-OFF(+)-OFF(-)-OFF(+)-OFF(-). If you did manage to get one that had problems with a well-paired +/- strobe length at 240Hz or any other even multiple (not including 120Hz), you could still adjust the strobe length away from (what should be) that safe value, and then one of the other lengths should be safe for your strange screen. It shouldn't be possible to unevenly build up charge at -all- the different strobe lengths, when some of them will have an even number of on-off frames and some will have odd; the hardware screen-inversion algorithm would have to dynamically change to match the output for it to interfere with both, wouldn't it? Anyway, I'd take adjusting the strobe length (or more accurately, just using an odd Hz multiple, but that's not the point of this conversation), and maybe having to sacrifice a bit of clarity, over inserted or dropped frames any day. Others might feel differently, but that's why I'd certainly prefer it to stay a choice and not be forced either way. Also, what I was talking about was not forcing cadence shifting for the full-frame BFI implementation only, of course. As I understand it (though I haven't tested it myself yet), your rolling-scan algorithm just shifts the scan-out line slowly to avoid the voltage retention, without the flicker of the full-screen BFI solution.
Due to the 3D-glasses era (frame-sequential stereoscopic glasses), some LCD vendors sometimes had to switch to an EVEN:EVEN:ODD:ODD algorithm rather than an EVEN:ODD:EVEN:ODD algorithm. LCD inversion algorithms can switch away from their standards. So there's still a risk. However, yes, a lot of LCDs are safe with the 1:1:0:0 sequence; I just can't guarantee it's the bog-standard inversion algorithm. We're lucky they've never used odd-number cadences (it's technically possible, but unlikely, due to the fact that there are only two voltage polarities, which lends itself well to even cadences like x2 or x4).
The "smart" and "safe" 120 Hz BFI shaders swap cadence on a timer, and the "smart" one will also swap whenever the screen transitions to black, so you don't see the stutter (uses shader feedback to store and check the cadence over time): https://github.com/libretro/slang-shaders/blob/master/subframe-bfi/shaders/120hz-smart-BFI/calculations.slang |
That's actually pretty neat too. Remember, it doesn't have to be a black frame -- any imbalance of any kind will do it, e.g. the average brightness of even frames diverging from the average brightness of odd frames. Remember, this is a problem with rolling BFI too! Top half black / bottom half not-black, alternating, will create image retention too. So in theory, a watchdog shader could track even/odd balance on a per-pixel basis... but that's a lot of compute wasted, heh. All that just to dodge image retention from LCD inversion, when we can just go with simpler algorithms. Also, you can use temporal scaling to remove the native:simulated integer-divisor ratio requirement; e.g. 60Hz BFI at 280Hz is possible using temporal scaling tricks. Think of bilinear interpolation in the spatial dimensions, except applied temporally instead: basically, alphablend between the black frame and the visible frames using a linear-correct gamma blend. (It has to be a linear-space alphablend -- not the regular blending built into GPUs, but a custom shader fragment instead -- to comply with the Talbot-Plateau law.) The problem with temporal scaling algorithms like the one I did for CRT is slow LCD GtG, so use an OLED, and also do the gamma2linear / linear2gamma for the Talbot-Plateau-law corrections, to prevent objectionable flicker.
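A hedged sketch of that temporal-bilinear idea (illustrative names; assumes a plain 2.2 gamma):

```glsl
// Temporal scaling for non-integer native:simulated ratios, e.g. 60fps BFI
// at 280Hz (ratio ≈ 4.667). Boundary sub-frames get a fractional weight,
// alphablended in LINEAR light to satisfy the Talbot-Plateau law.
vec3 toLinear(vec3 c) { return pow(c, vec3(2.2)); }
vec3 toGamma(vec3 c)  { return pow(c, vec3(1.0 / 2.2)); }

// pos = this native refresh's position within the emulated frame, in
// sub-frame units [0, ratio). lit = how many sub-frames should stay visible.
vec3 temporalScaledBFI(vec3 frameColor, float pos, float lit)
{
    float w = clamp(lit - pos, 0.0, 1.0); // 1 = visible, 0 = black, else blend
    return toGamma(toLinear(frameColor) * w);
}
```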
That, I am agreed on. I want to be able to force "LCD Saver" off, even for LCDs. Some cadences are safe on many LCDs, like the 1:1:0:0 cadence for BFI (even if 1:0:0:0 or 1:1:1:0 isn't safe). Okay, maybe call it the "Let my LCDs burn, baby" mode -- for testing whether the LCD has a better inversion algorithm that takes 2 minutes instead of 2 seconds to show retention, etc.
Any news on that front, by chance? Ayn is waiting for the Android implementation to try this new feature on their upcoming 120Hz OLED handheld, if you need any feedback.
At first, I thought this project was meant to reduce the macroscopic brightness flicker, so as to protect the eyes, because in my opinion the biggest problem of 60Hz BFI is that the flicker hurts the eyes.
Thank you for flagging that it's an opinion; kudos. But to mic-drop the other Armchair Artifactsplainers (1000x, been there, done that) who unceremoniously put "You Prefer This" in other people's mouths -- I referee, mythbust & scientifically yank that out -- and I remind people that people have preferences, you know. Some of us are unusually sensitive to tearing. Some of us are unusually sensitive to flicker. Some of us are unusually sensitive to motion sickness. Not all of us see the same way you do. I've seen <1% of people get motion sick from tearing (e.g. vertigo triggered by tearing artifacts). Sometimes it's a "I don't care" <-> "I notice" <-> "It bothers me" <-> "It makes me motion sick" <-> "It gives me migraines" continuum. You categorize blurs as one trigger and tearing as another, but for a different person, it's different. Everybody has different triggers; you may not care, but others may. Everybody wears different eyeglasses. 12% are color blind. Different people have different motion sensitivities. Not everybody sees identically. You know -- my namesake, Blur Busters & its science -- means I am a beacon for people who get headaches from motion blur. I've got a supertanker full of anecdotes in my mailbox, buddies... As the (display+temporal) entity specializing in blurs, GtG, ghosting, tearing, original frames, fake frames, display simulators, input latency, BFI, inversion algorithms, framerate, Hz, and anything that involves a (screen)+(time dimension), with cites in over 30 research papers, I've become the known Hz Einstein authority in this matter. No worries, I know it's a user preference -- tearing can be preferred! -- but not by all. 👽
Depends on the content. Actually, the CRT simulator is preferable to BFI according to hundreds of people telling me so. My algorithm combined with Timothy Lottes' algorithm was a marriage made in heaven, and it lowered the "looks better than BFI" bar all the way down to mere 120Hz LCDs (as long as they're reasonably fast IPS). It may not happen on your LCD (especially if you're using a 6-bit TN LCD), but there are LCDs and OLEDs where the CRT simulator looks better than BFI as a total package deal (comfort / flicker / motion blur reduction). The seams are tiny enough, apparently! The jello effect, while extant, is very minor on some content. It's a problem at Sonic the Hedgehog speeds, but not all games are as demanding as Sonic, requiring a 240-480Hz OLED to fix the jello effect. There's been a big surge of RetroTink 4K sales (my logo is on the bottom) thanks to the CRT simulator, now released in an early beta on the box. Many 120Hz users are raving that my CRT simulator is now better than BFI on average, even if some turn it off and use BFI instead for faster content. Also, I have a global phosphor-BFI mode coming to the CRT simulator (infinite-velocity scanout) to solve the jello effect problem; the scan-velocity adjustment (able to go to infinity) will let you zero out the jello effect. The neat thing about shaders is that I can invent nonexistent displays, in addition to standard CRT display simulators and plasma display simulators. I can invent a FED-SED-DreamDisplay global-phosphor BFI without a native:simulated integer Hz requirement, thanks to temporal scaling. For 120Hz global-phosphor variable-MPRT BFI (where 75% of pixels are blur-busted and 25% of pixels have a very slight dim frame), to get brighter than regular BFI, I will have to use the alternate LCD Saver algorithm of a sudden extra frame once every ~15 seconds; I'm still deciding how to implement it (perhaps as a gamma2linear-balanced alphablend). As the (display+temporal) genius, my plan is to follow the Blur Busters Open Source Display Shaders Initiative, releasing more shaders over 2025-2030. A single 1000Hz OLED can do all of this eventually. As you already know, I already have a VRR simulator on TestUFO, etc. (if not, then read the article). All of that can go into a shader and simulate VRR on a fixed-Hz display too, via temporal scaling algorithms that look good up to approximately 1/2.5th of the native Hz. Yes, CRT-VRR too: a software-based G-SYNC Pulsar, temporally scaled, perhaps a ~48-400Hz VRR range remapped onto a 1000Hz fixed-Hz OLED. As you can see from my Shadertoy, my CRT simulator looks smooth doing 60fps at 280Hz -- just play with the shader variables. My CRT simulator has no integer-divisor native:simulated Hz requirement! My goal is open-sourcing all my display algorithms. You can see the reasons why in version 1.01 of the Open Source Display Shaders spec (scroll down to the bottom half of the Blur Busters OSD Initiative page) -- about the cesspool state of the display industry, and how it's time to shake things up a bit with Bring Your Own Algorithm approaches.
We can re-enable it, but people will have to use it entirely at their own risk, and they'll have to know what they are doing. So if someone turns BFI on with some 60Hz LCD phone and their screen keeps being weird for a while after using the BFI feature, that's not our fault. There should maybe be an extra warning added to the sublabel so the user is aware of this.
Inviting all stakeholders @MajorPainTheCactus @Ophidon @mdrejhon and others.
This is to discuss further improving the initial groundwork done in this PR - #16282