
Composed shadow DOM #531

Closed · treshugart opened this issue Nov 2, 2017 · 6 comments

treshugart commented Nov 2, 2017

This is similar in nature to #510, but approaches it from a slightly different angle, much like <style scoped> but built on existing shadow DOM primitives.

I've written up my proposal here but will copy it below for convenience.

Declarative / composed Shadow DOM

This is a light proposal for Shadow DOM (no pun intended). Its purpose is to flesh out how certain aspects of the current Shadow DOM could be used separately and composed together. The goals are:

  1. Being able to use CSS encapsulation separately from DOM encapsulation (SSR is one beneficiary of this)
  2. Being able to enable DOM encapsulation on a previously CSS-only encapsulated node (rehydration)
  3. Maintaining backward compatibility: it doesn't change the existing API, and it allows existing technologies to work with un-encapsulated DOM.

CSS-only encapsulation

The idea of CSS-only encapsulation has previously been tried, and abandoned, with <style scoped />. I've been told that it was abandoned because it was slow. I'm only speculating, but it seems to me that this approach could work because:

  1. <style scoped /> applied to the parent and the entire tree below it; this proposal only scopes down to the <slot /> boundary.
  2. It had to factor in descendant <style scoped /> elements. This proposal works exactly the same way shadow DOM CSS encapsulation works now; it just decouples it from the DOM aspect.
  3. <style scoped> only scoped that one stylesheet, whereas the composed attribute sits on the host and encapsulates all <style /> tags inside it, down to the <slot /> elements.

I'm proposing that CSS encapsulation works the same way it does now; it can simply be used without the DOM encapsulation aspect. For this to work, you need to know the outer boundary (the host) and the inner boundary (the slot). Given that there's already a way to flag the inner boundary (via <slot />), we only need a way to signify the outer boundary. I propose a composed attribute on the host.

<div composed>
  <style>
    p { border: 1px solid blue; }
  </style>
  <p>
    <slot></slot>
  </p>
</div>

There are probably similar ways to do this, but the important point is that you can declaratively enable encapsulation for CSS.

Server-side rendering

If you could enable CSS-only encapsulation, it becomes pretty trivial to serialise a DOM / shadow tree on the server. This has two benefits, described in the next two sections.
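
As a rough sketch of what that serialisation might look like (the x-greeting element and the renderGreeting helper are made up for illustration; nothing here is part of the proposal), a server could emit the CSS-encapsulated form directly:

// Hypothetical SSR helper emitting markup that relies on the proposed
// `composed` attribute for CSS-only encapsulation.
function renderGreeting(name: string): string {
  // Real code would HTML-escape `name` before interpolating it.
  return `
<x-greeting composed>
  <style>
    p { border: 1px solid blue; }
  </style>
  <p>
    <slot>Hello, ${name}!</slot>
  </p>
</x-greeting>`;
}

The slotted content sits inside the <slot> in the serialised form and would only be re-parented if the element is later upgraded and rehydrated, as described further below.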

Deferred / selective upgrades

You might be using custom elements / shadow DOM to template out your layout, but there may be no need to actually upgrade it if it's static and only renders once. This means you don't need to deliver the custom element definitions, the template engine, or the templates for the subset of your components that are display-only.

If you are upgrading components, you may want to defer or optimise their upgrades. CSS-only encapsulation would let you deliver HTML that looks the same as it would after the initial upgrade, so there's no jank. A sketch of a deferred upgrade follows.
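
A minimal sketch of deferring an upgrade, assuming the server-rendered markup above; the element name and module path are hypothetical, and the module is assumed to call customElements.define() itself:

// Defer loading a custom element definition until the browser is idle.
// "x-greeting" and "/elements/x-greeting.js" are made-up names.
function deferUpgrade(tagName: string, moduleUrl: string): void {
  // Skip the download entirely if the page doesn't use the element.
  if (!document.querySelector(tagName)) {
    return;
  }

  const schedule = (callback: () => void): void => {
    if (typeof window.requestIdleCallback === "function") {
      window.requestIdleCallback(() => callback());
    } else {
      window.setTimeout(callback, 0);
    }
  };

  schedule(() => {
    // The module registers the element; markup served with CSS-only
    // encapsulation is then upgraded (and rehydrated) in place.
    import(moduleUrl).catch(error => console.error("Deferred upgrade failed", error));
  });
}

deferUpgrade("x-greeting", "/elements/x-greeting.js");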

Bots

Bots that don't execute JavaScript, or that parse content differently, can still access the content because it's all present in the HTML string.

Currently if you have an <x-app /> component that renders your page, and you want it scraped, bots other than GoogleBot won't read the content.

<x-app>
  #shadow-root
    can't see this
</x-app>

To me, this is unacceptable because it breaks the web. Sure, some bots might catch up, but not all of them will, and should they have to? It also makes shadow DOM not viable until they do. Do we want to hamstring web components in this way?

This is what it'd look like with CSS-only encapsulation.

<x-app composed>
  can see this
</x-app>

Once your custom element is delivered to the page, it can be upgraded. The next section describes how this occurs.

Enabling DOM encapsulation

Given a CSS-only encapsulated element, we can quite easily apply DOM encapsulation. Let's take the following example.

<div composed>
  <style></style>
  <p><slot>slotted content</slot></p>
</div>

To enable DOM encapsulation, we could follow the current model and use attachShadow(). When this is called, the following steps take place to perform what we're calling rehydration (a sketch of these steps follows the example below).

  1. Remove content.
  2. Attach shadow root.
  3. Add previous light DOM as the shadow root content.
  4. For each slot, append its content as light DOM to the host if it doesn't have a default attribute.

The <slot default /> attribute tells the rehydration algorithm that it should not re-parent the slot's content, because that content represents the default content of the slot.

The above tree would end up looking something like:

<div composed>
  #shadow-root
    <style></style>
    <p><slot></slot></p>
  slotted content
</div>
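
A minimal sketch of those rehydration steps against today's DOM APIs; the rehydrate helper is hypothetical, ignores named slots (see the caveat below), and assumes the proposed default attribute semantics:

// Sketch of the proposed rehydration steps; not part of any spec.
function rehydrate(host: Element): ShadowRoot {
  // 1. Remove the existing content from the host, keeping a reference to it.
  const previousContent = Array.from(host.childNodes);
  host.textContent = "";

  // 2. Attach a shadow root.
  const shadowRoot = host.attachShadow({ mode: "open" });

  // 3. Add the previous light DOM as the shadow root content.
  for (const node of previousContent) {
    shadowRoot.appendChild(node);
  }

  // 4. For each slot without the proposed `default` attribute, re-parent its
  //    content onto the host as light DOM so it is slotted normally.
  for (const slot of Array.from(shadowRoot.querySelectorAll("slot"))) {
    if (!slot.hasAttribute("default")) {
      while (slot.firstChild) {
        host.appendChild(slot.firstChild);
      }
    }
  }

  return shadowRoot;
}

Calling rehydrate() on the <div composed> example above would produce the tree just shown.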

Caveats

  1. Unslotted content isn't taken into account yet.

Backward compatibility

Since attachShadow() already exists, and couples DOM and CSS encapsulation, nothing changes here. This is also why there's no separate way to do DOM-only (without CSS) encapsulation. While it makes sense to have CSS-only encapsulation, I don't think it makes sense to have DOM-only encapsulation, because it would be confusing to have something hidden (in the shadow) in a node tree that is still affected by global CSS.

treshugart (Author) commented:

I've been told that this may not work for a couple of reasons:

  1. Styling is very much tied to the DOM, so scoping is tied to shadow DOM being enabled at the same time.
  2. It may suffer from the same performance caveats that <style scoped> did.

I'd like to understand more about these issues. I can see the first one being a blocker if it would require engines to decouple the two, causing a large amount of work. As for the second, this is actually quite different from <style scoped>: it reuses the <slot> element as an inner boundary, which makes the tree that requires scoping much smaller, and it limits scoping to a single tree, as opposed to having multiple <style scoped> elements down the tree (which is how I understand it could be used).

Thanks again for your patience with my possible lack of understanding of the intricacies around this.

cc @robdodson as we've been messaging about this.

hayatoito (Member) commented Nov 3, 2017

Thank you for the proposal. Let me try to share a couple of thoughts from a spec editor's and implementor's perspective:

As you are already aware, I think there is a kind of misunderstanding here:

  • Having only CSS encapsulation is much easier than having both CSS encapsulation and DOM encapsulation

I think that is a wrong assumption. As you know, CSS works on a DOM basis. It would be more difficult to separate the two, from an engine's perspective.

For example, I am sure that most engines have a { #id -> element } mapping for fast lookups in querySelector('#id') and in CSS selector matching (#id { color: xxxx } or similar).
We need such a hashmap per DOM tree, because a DOM tree is the scope of CSS selector matching.

If we were to have only CSS encapsulation, it is unclear where and how we could maintain this mapping.
We would have to maintain this hashmap dynamically, reacting to each DOM mutation and inspecting tree structures. That is a non-trivial task for an engine: it would have a real DOM tree and a pseudo-tree used for CSS encapsulation, with the two interleaving.
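
Purely to illustrate the point (this is not how any engine is actually written), a per-scope id index might look like the sketch below; the open question is what the scope key would be if CSS scoping existed without a corresponding shadow tree:

// Illustration only: a per-scope { id -> element } index of the kind engines
// keep for fast #id lookups. With shadow DOM, the scope is a concrete node
// (a Document or ShadowRoot); with CSS-only encapsulation there is no such
// node to key the index on.
type IdScope = Document | ShadowRoot;

class IdIndex {
  private readonly byScope = new Map<IdScope, Map<string, Element>>();

  add(scope: IdScope, element: Element): void {
    if (!element.id) {
      return;
    }
    let ids = this.byScope.get(scope);
    if (!ids) {
      ids = new Map<string, Element>();
      this.byScope.set(scope, ids);
    }
    ids.set(element.id, element);
  }

  lookup(scope: IdScope, id: string): Element | undefined {
    return this.byScope.get(scope)?.get(id);
  }
}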

Eventually, we will want to have a shadow-tree-like concept, internally, so that we can keep track of each "unit of CSS scoping" easily.

That is Shadow DOM.

treshugart (Author) commented Nov 3, 2017

Having only CSS encapsulation is much easier than having both CSS encapsulation and DOM encapsulation

I didn't actually say that, I don't think, and I certainly didn't mean it. I can appreciate that it would be difficult to fundamentally separate the two. I assumed that engines had some sort of data structure, such as a hash map between elements and selectors. However, I was also assuming that the public interfaces could be selectively patched depending on which method is used (the declarative composed attribute or the imperative attachShadow() method). I know little about the implementation details and wish I had more time to understand them so that I could craft a proper proposal with them in mind. I don't mean to waste anyone's time here.

I would think that if you separated the two, you'd have two mappings:

  1. one for the DOM tree being scoped for CSS
  2. one for the unscoped DOM tree that the public interfaces operate on

When DOM encapsulation is switched on, you already have the first mapping operating on the scoped tree, and you can throw out the other one (a rough sketch of this idea follows).
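
As a sketch of that idea, reusing the IdIndex illustration from the earlier comment (again, this reflects nothing about real engine internals):

// Two hypothetical mappings: one keyed by the pseudo-scope implied by the
// composed/<slot> boundaries, one keyed by the real, un-encapsulated tree.
// Note the pseudo-scope has no concrete node to key on, which is the
// difficulty raised above.
const cssScopeIndex = new IdIndex(); // drives style / selector scoping
const domTreeIndex = new IdIndex();  // drives the public DOM interfaces

// Once attachShadow() is called on a composed host, the CSS-scoped mapping
// already matches the new shadow tree, so only the un-scoped mapping would
// need to be discarded and rebuilt.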

annevk (Member) commented Mar 13, 2018

Another major problem with the OP is that rehydration, as defined there, requires changes to the HTML parser, which seems like a non-starter.

treshugart (Author) commented Mar 13, 2018

@annevk that's fair. I've spoken to Rob Dodson in depth about some of the coupling in Chromium and splitting the two doesn't seem like the pragmatic choice, especially given the parser changes.

It seems like the parser is a possible blocker for some of the recent discussion items, or at least to doing them in a more ideal fashion. Are there any issues related to changing to the XML parser, as discussed? Thanks again for discussing this; I'll close this now.

annevk (Member) commented Mar 13, 2018

I don't think we're really tracking XML5 work anywhere (which I think we need to do if we really want to make XML a feasible choice for developers), other than in https://github.com/Ygg01/xml5_draft, which hasn't been updated for about two years. We should probably maintain a list somewhere of proposals that were rejected because they required HTML parser changes that would either just work (e.g., self-closing elements or intermixing <table> and custom elements) or require less effort (<shadowroot>) in an XML environment. Perhaps as part of the declarative shadow DOM effort? That list could then be used to persuade implementers at some point to do the work.
