Composed shadow DOM #531
I've been told that this may not work for a couple of reasons:

I'd like to understand more about these issues. I can see the first one being a blocker if it would require engines to decouple the two, causing a large amount of work. For the second, it's actually quite different. Thanks again for your patience with my possible lack of understanding of the intricacies around this. cc @robdodson, as we've been messaging about this.
Thank you for the proposal. Let me try to share a couple of my thoughts, from a spec editor's and implementor's perspective. As you are already aware, I think there is a kind of misunderstanding:

I think that is a wrong assumption. As you know, CSS works on a DOM basis; it would be more difficult to separate the two from an engine's perspective. For example, I am sure that most engines have an { #id -> element } mapping for fast lookup. If we are to have only CSS encapsulation, it is unclear where and how we can maintain this mapping. Eventually, we will want to have a shadow-tree-like concept internally, so that we can keep track of each "unit of CSS scoping" easily. That is Shadow DOM.
I didn't actually say that, I don't think; I certainly didn't mean it. I can respect that it'd be difficult to fundamentally separate the two. I assumed that engines had some sort of data structure, such as a hash map between elements and selectors. However, I was also making the assumption that the public interfaces could be selectively patched depending on which method (declarative or imperative) was used. I would think that if you separated the two, you'd have two mappings:

When DOM encapsulation is switched on, you already have the first tree that operates on the scoped tree and can throw out the old one.
Another major problem with the OP is that rehydration, as defined there, requires changes to the HTML parser, which seems like a non-starter.
@annevk that's fair. I've spoken to Rob Dodson in depth about some of the coupling in Chromium, and splitting the two doesn't seem like the pragmatic choice, especially given the parser changes. It seems like the parser is a possible blocker for some of the recent discussion items, or at least for doing them in a more ideal fashion. Are there any issues related to changing to the XML parser, as discussed? Thanks again for discussing this; I'll close now.
I don't think we're tracking XML5 work anywhere, really (which I think we need to do if we really want to make XML a feasible choice for developers), other than in https://github.com/Ygg01/xml5_draft, which hasn't been updated for about two years. We should probably maintain a list somewhere of proposals that got rejected because they required HTML parser changes that would either just work (e.g., self-closing elements or intermixing
This is similar in nature to #510, but comes at it from a slightly different approach, very much similar to that of `<style scoped>`, but using existing shadow DOM primitives. I've written up my proposal here, but will copy it below for convenience.
Declarative / composed Shadow DOM
This is a light proposal for Shadow DOM (no pun intended). The purpose of this proposal is to flesh out how we can take the current state of Shadow DOM and allow certain aspects of it to be used separately and composed together. The goals are:
CSS-only encapsulation
The idea of CSS-only encapsulation has been previously tried and abandoned with `<style scoped />`. I've been told that it was abandoned because it was slow. I'm only speculating, but it seems to me that this could work because:

- `<style scoped />` worked for the parent and the entire tree below it. This works up to `<slot />` elements.
- This would work exactly the same way shadow DOM CSS encapsulation works now; it's just pulling it apart from the DOM aspect.
- `<style scoped>` only scoped that stylesheet, whereas the `composed` attribute is on the host, encapsulating all `<style />` tags inside of it up to the `<slot />` elements.

The way I'm proposing that CSS encapsulation works is the same way it does now; it can simply be used without the DOM encapsulation aspect. For this to work, you need to know the outer boundary (host) and the inner boundary (slot). Given that there's already a way to flag the inner aspect (via `<slot />`), we only need a way to signify the outer boundary. I propose using a `composed` attribute on the host.

There are probably similar ways to do this, but the important point is that you can declaratively enable encapsulation for CSS.
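A minimal sketch of what this might look like in markup (the `x-card` tag name and the styles are invented for illustration; only the `composed` attribute and `<slot />` boundary come from the proposal):

```html
<!-- Hypothetical example: CSS-only encapsulation via a `composed` host.
     The <style> inside the host is scoped from the host down to the
     slot boundary; slotted (light DOM) content keeps the page's CSS. -->
<x-card composed>
  <style>
    /* Would apply only within <x-card>, up to the slot. */
    h2 { color: rebeccapurple; }
  </style>
  <h2>Scoped heading</h2>
  <slot>
    <!-- Content distributed here is styled by the outer page. -->
  </slot>
</x-card>
```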
Server-side rendering
If you could enable CSS-only encapsulation, it's pretty trivial to serialise a DOM / shadow tree on the server. This has two benefits.
Deferred / selective upgrades
You might be using custom elements / shadow DOM to template out your layout, but there may be no need to actually upgrade it if it's static and all it does is render once. This means that you don't need to deliver the custom element definitions, the template engine, and your templates for a subset of your components because they're display-only.
If you're upgrading components, you may want to defer their upgrades, or optimise them. CSS-only encapsulation would enable you to deliver HTML that looks like it would on initial upgrade so there's no jank.
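As a sketch of the deferred-upgrade idea (the `x-widget` element and the module path are hypothetical; the pattern just pairs static, CSS-only encapsulated markup with a lazily loaded definition):

```html
<!-- Hypothetical: serve display-ready markup first, and only fetch
     the custom element definition once the element scrolls into view. -->
<x-widget composed>
  <style>p { color: gray; }</style>
  <p>Static content, already styled; no upgrade needed to render.</p>
</x-widget>
<script type="module">
  const el = document.querySelector('x-widget');
  new IntersectionObserver((entries, observer) => {
    if (entries[0].isIntersecting) {
      import('/components/x-widget.js'); // hypothetical module path
      observer.disconnect();
    }
  }).observe(el);
</script>
```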
Bots
Many bots that don't execute JavaScript, or that may parse content differently, can still have access to the content because it's all accessible via the HTML string.
Currently, if you have an `<x-app />` component that renders your page and you want it scraped, bots other than GoogleBot won't read the content. To me, this is unacceptable because it breaks the web. Sure, some bots might catch up, but not all, and should they have to? This also makes shadow DOM not viable until they do. Do we want to hamstring web components in such a way?
This is what it'd look like with CSS-only encapsulation.
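The original post's example did not survive extraction; a plausible sketch of such serialised output, with invented names, might be:

```html
<!-- Server-rendered: everything is plain HTML, so bots that don't
     execute JavaScript can read it, yet the styles remain scoped. -->
<x-app composed>
  <style>p { margin: 0; }</style>
  <p>Rendered article text, visible to any scraper.</p>
  <slot></slot>
</x-app>
```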
Once your custom element is delivered to the page, it can be upgraded. The next section describes how this occurs.
Enabling DOM encapsulation
Given a CSS-only encapsulated element, we can quite easily apply DOM encapsulation. Let's take the following example.
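A stand-in example, consistent with the surrounding description (element and content names invented), would be a CSS-only encapsulated host whose slot carries default content:

```html
<x-app composed>
  <style>h1 { font-size: 2em; }</style>
  <h1>Title</h1>
  <slot default>
    <p>Fallback content; not re-parented during rehydration.</p>
  </slot>
</x-app>
```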
To enable DOM encapsulation, we could follow the current model and use `attachShadow()`. When this is called, the rehydration steps take place, re-parenting content except where a slot carries the `default` attribute.

The `<slot default />` attribute is a way to tell the rehydration algorithm that it should not re-parent its content, because it represents the default content of the slot.

The above tree would end up looking something like:
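The resulting tree shown in the original post was also lost; a sketch under invented names (`x-app`, the heading, the fallback paragraph), using the `#shadow-root` notation familiar from browser devtools, might be:

```html
<x-app>
  #shadow-root
    <style>h1 { font-size: 2em; }</style>
    <h1>Title</h1>
    <slot>
      <p>Fallback content, left in place as the slot's default.</p>
    </slot>
</x-app>
```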
Caveats
Backward compatibility
Since `attachShadow()` already exists, and couples both DOM and CSS encapsulation, nothing changes here. This is also why there's no separate way to do DOM-only (without CSS) encapsulation. While it makes sense to have CSS-only encapsulation, I don't think it makes sense to have DOM-only, because it would be confusing to have something hidden (in the shadow) in a node tree that is still affected by global CSS.