A. M. Douglas

Abstracting the DOM

Summary: neither of these solutions can be considered performant. On the one hand, you're loading dozens of resources for shadow DOM trees; on the other, you've got an in-memory copy of the entire DOM. Both methods presume some arguably unfair minimum requirements.

As Google continues to push for the Web Components standard—even roping in its puppet-competitor Mozilla—and while Facebook's React.js continues to grow in its influence and maturity, an important question arises which will be significant in determining the likely victor between these related technologies: which is better, the shadow DOM or the virtual DOM?

It's necessary for us to first establish what each of these actually is in order to make an informed judgement on which is better. The shadow DOM is one of the concepts driving the push for the Web Components standard. Strictly speaking, a shadow DOM isn't required for a Web Component, except that what you have without one is not really a Web Component, but a custom element.

A Web Component is a set of DOM elements packaged as an isolated and portable block, with its own style/script context, separate from the “true” DOM. We call this isolation of styling/scripting encapsulation, and it's very attractive compared to the incumbent iframe, a noted performance hog. Take out the shadow DOM, aka the diet iframe, and you have a custom element: literally an HTML element for which you can provide a custom tag name, a means by which to take the notion of semantic DOM nodes to the extreme.
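To make the distinction concrete, here's a minimal browser-only sketch using the now-standard attachShadow/customElements.define APIs; the element name and styling are invented for illustration:

```html
<my-card>Hello</my-card>

<script>
  class MyCard extends HTMLElement {
    constructor() {
      super();
      // Attaching a shadow root gives the element its own isolated
      // style/script context — this is what makes it a "Web Component".
      const shadow = this.attachShadow({ mode: 'open' });
      shadow.innerHTML = '<style>p { color: teal; }</style><p><slot></slot></p>';
      // Skip attachShadow and render into this.innerHTML instead, and you
      // have a plain custom element: a semantic tag name, no encapsulation.
    }
  }
  customElements.define('my-card', MyCard);
</script>
```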

Sure, it's nice to have semantically-named UI components as clean blocks that you can essentially copy and paste into the document, but the performance cost is astronomical in Safari and Firefox with all of the polyfills, and it's no spring chicken in Chrome compared to regular HTML.


Facebook's React.js promises similar modularity of components, but without the expense of performance. In fact, performance is alleged to see an improvement thanks to its 'virtual DOM', literally a JavaScript object which serves to centralise UI state.

The performance improvement over other frameworks comes from the fact that the virtual DOM isn't a rendering engine; it's essentially version control for the native DOM. Change something and, rather than rewriting the DOM wholesale, it tracks the changes and applies only the updates (hence the name “React” rather than “jQuery 3”), which should, in theory, keep costly DOM manipulation to a regulated minimum.

And it's true, React is a nimble alternative to frameworks like Angular, but treating them as like-for-like rivals is a common misinterpretation of what React is and what other frameworks are. Angular, and Polymer to an extent, are ASP.NET WebForms reimagined, while React is a diffing algorithm and an abstracted handler for UI state. You could even implement React's benefits in jQuery if you don't like how React does things, or roll your own framework adopting React's approach rather than getting tangled up in the React fad of using JSX, attribute styles, and so on. The core ideal of delta copying, of tracking revisions to the DOM, is (or should be) the (only) reason behind React's prevalence in recent months.
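The delta-copying idea is small enough to sketch in plain JavaScript. The node shape, function names, and patch format below are invented for illustration; this is the diffing concept, not React's actual implementation:

```javascript
// A "virtual node" is just a plain object: { tag, props, children }.
// Text nodes are plain strings.
const h = (tag, props = {}, ...children) => ({ tag, props, children });

// Diff two virtual trees and return a flat list of patch descriptions.
// A real library would apply these to the native DOM in one pass.
function diff(oldNode, newNode, path = 'root') {
  if (oldNode === undefined) return [{ path, type: 'CREATE', node: newNode }];
  if (newNode === undefined) return [{ path, type: 'REMOVE' }];
  if (typeof oldNode === 'string' || typeof newNode === 'string') {
    return oldNode === newNode ? [] : [{ path, type: 'TEXT', text: newNode }];
  }
  if (oldNode.tag !== newNode.tag) return [{ path, type: 'REPLACE', node: newNode }];
  const patches = [];
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(oldNode.children[i], newNode.children[i], `${path}/${i}`));
  }
  return patches;
}

const before = h('ul', {}, h('li', {}, 'one'), h('li', {}, 'two'));
const after  = h('ul', {}, h('li', {}, 'one'), h('li', {}, 'three'));
// Only the changed text node produces a patch; the rest of the tree is untouched.
console.log(diff(before, after));
```

Note how the revision tracking, not any clever rendering, is doing all the work: everything that didn't change costs nothing at the DOM.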

Apples and Oranges

You should see by now that the shadow DOM and the virtual DOM are basically incomparable, or at least that to compare them is to take them out of the context of what they are intended to achieve. They are nonetheless often mentioned in the same sentence when developers compare the latest JavaScript framework flavours of the month that they want to evangelize.

Minimising/abstracting DOM revisions and separating the DOM into lots of sub-DOMs are different solutions to different problems. Eventually, Web Components/Polymer will probably take some concepts from React on board and do away with DOMs within DOMs (seriously, they removed support for <style scoped> from Chrome due to complexity, but nesting a different DOM context for every major UI component is supposed to be simpler?).

Should I be interested in any of these?

I mean, it depends on your requirements, how good you are at writing portable vanilla JavaScript, and whether you're part of a vanilla-compatible team. I'm with Google's AMP-HTML team when it comes to the vast majority of websites on the internet: you do not need a web application to serve articles to readers.

JavaScript has been heavily abused to serve people obnoxious-yet-profitable advertisements for years, starting with low-level, native functions like alert, and these days creating modals and overlays that block users from consuming content without first doing something profitable.

JavaScript should, for most of these websites, be reserved for basic or necessary functionality. What do I mean by necessary? I mean not even form validation. Don't even talk to me about form validation. For starters, validation only makes sense server-side, where the user cannot switch the validation off.

Even if, for the sake of good UX, you want to give some immediate (i.e. not post-POST or post-reload) feedback regarding the validity of user input, you can—and should—validate on the client side using HTML5 attributes like pattern and required, along with the :valid and :invalid CSS3 selectors to indicate state.

The most you could possibly need on the client side for polished UX would be an onblur handler to throw into the mix.
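Put together, that looks something like the fragment below; the action URL and the pattern are placeholder examples, not robust rules:

```html
<style>
  /* Only flag fields the user has already visited. */
  input.touched:invalid { border-color: crimson; }
  input.touched:valid   { border-color: seagreen; }
</style>

<form method="post" action="/subscribe">
  <!-- required + pattern drive the :valid/:invalid pseudo-classes. -->
  <input type="text" name="username" required pattern="[A-Za-z0-9_]{3,20}">
  <button>Sign up</button>
</form>

<script>
  // The one onblur handler: mark the field once the user leaves it,
  // so it isn't flagged invalid before they've even touched it.
  document.querySelector('input[name="username"]')
    .addEventListener('blur', e => e.target.classList.add('touched'));
</script>
```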

Why Web Components are crazy

Scope is nice and everything, but just watch the network timeline for a Polymer site. HTTP/2 is coming soon, but not that soon. Sure, you can hit up the Polymer Project website and not find yourself waiting around too long, but that's probably thanks to much of the website's resources being served up by Google's CDN.

The reality is that serving up a Polymer app will incur real performance penalties over HTTP/1.x, and they're just not worth it. Encapsulation basically enables lazier development with little to no gain for the user. There is enough front-end tooling available to trivialise the production of optimised, clean, modular CSS. Write your CSS properly and you shouldn't need encapsulation.

Additional DOMs aren't necessarily horrendous for performance in and of themselves, and they might simplify tracking your app's behaviour, but if you're aiming for isomorphism you can kiss these shadow DOMs goodbye. Do without isomorphism and clients with JavaScript switched off or with no JavaScript at all won't actually see a website.

It is true that the number of clients lacking JavaScript is, well, pretty low these days—chances are the users who switch JavaScript off or who use Lynx probably won't be your target demographic/niche anyway. However, you still have the FOUC to consider if you opt out of isomorphism. And then there's stale-copying/caching/offline support to think about. Seems like a helluva lot to cache.

In my opinion, the whole point of the web and the reason for its use as an app vector orthogonal to native apps is:

  • The web should be light
  • The web should be trustworthy
  • The web should be universal/cross-platform

When you start thinking about Web Components and offline web apps, you're really not far from just installing a native app. It'll be heavier than a website should be, it'll need to essentially install itself or a copy in order for it to work offline, and the complexity of the web app itself and the development time will likely skyrocket. Plus, Web Components are only natively supported in Chrome, which means the Web just stopped being cross-platform, which is its singular financial/business-translatable advantage over native apps.

And then there is the issue of trust: where are you getting your components from? JavaScript/jQuery plugins are one thing, but now entire components and eventually page models? And what APIs will these web apps start to access? How much privacy will they start to invade?

Why you still shouldn't consider React

Thanks to its render method and virtual DOM, React.js is pretty much the only sane approach to reusable HTML components. It's also the only sane approach to nesting said components. However, unless you really need a JavaScript MVC framework and thence a React view layer, my suggestion is to leave it the hell alone.

Write lean, clean HTML5, write elegant CSS3, and add a dash of vanilla JavaScript if/when it is necessary. Recognise your use-case and don't waste time thinking about some fancy thing you heard about. All JavaScript use comes with a cost to performance on mobile. I write browser-native JS as much as I can but DOM manipulation is always costly.

I mean, you do get something of a budget on modern smartphones, and having Chrome available on iPhone and Android is a big help, but you have to remember that just because Chrome is the rational browser choice, that does not mean that it will be the browser of choice for the majority of users. It won't be.

Oh, and don't trust React's virtual DOM to save you from the performance cost of DOM manipulation. The virtual DOM isn't an infinitesimal or a nearly massless neutrino; it is part of a UI library, so add an HTTP request. Then do some intensive DOM manipulation with the virtual DOM.

Sure, it might seem more performant in the beginning, but as time and manipulation go up, the vDOM's powers go down. Don't forget, it works by bundling manipulations and dumping them on the DOM in one go, which means at some point it's guaranteed to bottleneck where native JavaScript won't.
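The bundling behaviour is easy to model: queue writes, then land them all in one flush. The batcher below is a toy with invented names, not how any real library schedules work, but it shows why one big burst can stall where incremental native updates would not:

```javascript
// Toy write-batcher: collect mutations, apply them all in one flush.
function createBatcher(applyFn) {
  const queue = [];
  return {
    write(mutation) { queue.push(mutation); },   // cheap: just enqueue
    flush() {                                    // expensive: one big burst
      const work = queue.splice(0);
      work.forEach(applyFn);
      return work.length;                        // how much landed at once
    },
  };
}

// Stand-in for the real DOM: applying a mutation just records it.
const applied = [];
const batcher = createBatcher(m => applied.push(m));

for (let i = 0; i < 1000; i++) batcher.write({ set: 'textContent', value: i });
// Nothing has touched the "DOM" yet...
console.log(applied.length);   // 0
// ...until the flush, when all 1000 writes land in a single burst.
console.log(batcher.flush());  // 1000
```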

(In all fairness, you can hardly blame iOS users for sticking with Safari since you cannot change the default browser, just install additional ones, making Chrome a bit of a hassle to use if you need to open a link from within an app or email.)

If somebody comes to you with some serious money, a sane deadline that permits extensive testing and a desire for a performant web app, then you can think about React.


NB: if you must opt for an SPA, please use React and render it on the server. Angular 1.* is dead and Angular 2.* is still going to be Google WebForms. Sure, it works, but if the alternatives are lighter, cleaner, and quicker to develop with (once you achieve a reasonable level of proficiency), things like Angular really start to make less sense.

The Future

Lots of developers should be thinking about these technologies and approaches to the web. Native apps are a serious threat to the future of the web and web applications need to be demonstrably equal to or better than native applications in more ways than relative cheapness. Web applications need offline support, high performance, high framerate, real-time notifications and so on. All of these are possible, but with varying degrees of success/stability.

With businesses developing mobile apps just for user engagement, without offering any real function or service in the app, it's clear that users simply like apps and like to interact. Suffice it to say that, if performant native apps were written in a common language and thus cost less to develop, the web would not have time to respond. We web developers ought to actually thank Apple, Google and Microsoft for adopting Objective-C, Java and C#.NET respectively.
