Updating a DOM tree with 110k nodes while scrolling with animated SVGs

Momtchil Momtchev
10 min read · Sep 28, 2020

They told me it couldn’t be done, but I refused to listen

With well-designed JS, there is always room for one more DOM element

About a year ago I set out to remake the profile page on my soaring weather website. That page is a very good exercise in data visualization and web page performance, so I decided to share some of the valuable lessons I learned from that single page.

If you go to that page and click the GFS model in the models menu on the top left, your browser will have to deal with a monstrosity. A monstrosity containing 110,283 elements, including 42,068 divs, 7,319 embedded SVGs and 4,444 animated spinners that keep moving while the data is loading.

Browser engineers recommend pages contain fewer than ~1,500 DOM elements.

I am not a front-end expert, so when I started researching the subject, I was initially somewhat discouraged by that guideline.

But then, after some testing, I decided that it didn’t look that bad and that it was a very exciting opportunity to learn more about front-end development.

My data, weather data, is naturally well suited to being represented in a table. And per-day pagination — what most other weather websites do — seemed too old school to me. Infinite scrolling was the way to go. And dynamic loading and updating was the only choice — the data comes from a 200GB compressed data set which doesn’t fit in memory. Loading the whole page at once would take up to a minute. And if the user changes any settings, well, that will be another minute.

I made the bold decision to support nothing but Chrome, Firefox, Safari and the new Edge, and only in their latest versions, in order to give the best possible experience to the 95% of users on those browsers without doubling the development cost to support the remaining 5%. And very early on I decided not to use any framework. The main page map, which represents a very large portion of the front-end code, would not benefit from a standard framework anyway. All the HTML is either static partials or Handlebars templates. I could have used D3.js, but I didn’t, and that is a decision I came to regret at some stages. Today, I think that while D3.js is still a very good solution, it offers almost no benefit for a static table that is perfectly well served by a standard templating engine like Handlebars.

That page was made possible by a number of optimizations and shortcuts, most of which would not have been possible if I had used a framework.

Avoid layout reflowing like the plague

The first, and by far the most important, optimization is using a fixed table — table { table-layout: fixed }. This is an absolute requirement. When rendering a page, the most expensive part is the layout reflow. Every time a div is resized somewhere, the browser must reflow all the position: static, position: relative and some of the position: absolute elements to accommodate its new size. The key to avoiding this is to keep the layout of the page as fixed as possible. The browser can skip the expensive reflow when the new content doesn’t affect the size of its parent div.
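
To illustrate the idea, here is a minimal sketch of such a skeleton built directly with DOM calls; the real page generates it from Handlebars templates, and the column times and widths below are made up.

// Hypothetical sample columns
const forecastTimes = ['09:00', '12:00', '15:00'];

const table = document.createElement('table');
table.style.tableLayout = 'fixed';   // same as table { table-layout: fixed }
table.style.width = '100%';

const headerRow = table.createTHead().insertRow();
for (const time of forecastTimes) {
  const th = document.createElement('th');
  th.style.width = '6em';            // the cell size is dictated by the header
  th.textContent = time;
  headerRow.appendChild(th);
}
document.body.appendChild(table);
// Every later update happens inside these fixed-width columns, so replacing a
// cell's content never forces the browser to reflow the rest of the table.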

A fixed table, in which the size of every cell is determined by the size of the header, is the single most important optimization: it allows modifying the DOM without reflowing. The first pie chart reflects the initial loading time of the empty skeleton table. The second pie chart shows the initial loading time for the AROME weather model, whose page contains “only” 15,240 DOM elements. As you can see, rendering scales more or less linearly with the number of DOM elements and must be redone every time an element is resized.

If the table were not fixed, every single DOM change would have resulted in another layout reflow — taking up to a second on a table of these proportions. For 110k elements, updated one by one, this adds up to… about 30 hours.

The third pie chart represents a typical second of scrolling while loading — which confirms that optimizing the rendering time is critical. Animation (a sizeable portion of the painting) is not a big problem by itself. The problem is resizing divs. In this particular example all the resizing is limited to divs inside tds, and the tds themselves stay fixed because of the fixed table layout. Scripting is barely a factor at all, and the data received from the server amounts to a few bytes per cell — values like wind speed, wind direction, relative humidity or pressure.

If you are not using a table, try to at least divide the space into as many divs as you can and then try to avoid resizing them.

Keep the number of DOM elements low

They said

Another very particular design decision I made was to have a completely static table with all the elements present, even those the user has chosen not to display. They stay there, in empty divs inside the table cells, with display: none. I don’t know which era the old saying “keep the number of DOM elements low” comes from, but after some testing, I can guarantee you that in 2020 every major modern browser is remarkably efficient at handling display: none elements. The performance difference between having 10,000 visible elements and 10,000 visible plus 100,000 invisible ones is next to none. display: none elements do not affect the layout reflow. So don’t be too worried about invisible elements. The complexity of a modern browser has surpassed that of a compiler or an OS kernel, and it is currently the most evolved and intelligent software we have.
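
As an illustration, hiding or showing a whole weather variable then becomes a matter of flipping display on cells that already exist, rather than creating or destroying them. This is a minimal sketch; the data-variable attribute and the selector are my own assumptions.

// Toggle a pre-rendered weather variable on and off instead of rebuilding it
function setVariableVisible(name, visible) {
  const cells = document.querySelectorAll(`td > div[data-variable="${name}"]`);
  for (const cell of cells)
    cell.style.display = visible ? '' : 'none';
}

// Hiding thousands of cells is almost free: the fixed cells around them do not move
setVariableVisible('relative-humidity', false);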

The DOM is slow

They said (and they meant it)

In an early version of that page, I used to save the data loaded from the server in the attributes of the DOM elements before rendering it. If you are doing this, go write 1,000 times “I will never keep any data in the DOM tree again”. It is an incredibly bad decision which will hurt performance big time. All data belongs in JS variables. And if you use a framework that saves any data in the attributes of the DOM elements, seriously reconsider using it.
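
A minimal sketch of the idea: the data lives in an ordinary JS structure and the DOM is write-only. The key scheme and the cell ids here are made up.

// All loaded values live here, not in the DOM
const weatherData = new Map();

function storeValue(time, level, variable, value) {
  weatherData.set(`${time}:${level}:${variable}`, value);
}

function renderValue(time, level, variable) {
  const value = weatherData.get(`${time}:${level}:${variable}`);
  const cell = document.getElementById(`cell-${time}-${level}-${variable}`);
  cell.textContent = value;
  // Never do cell.dataset.value = value and then read it back later:
  // the DOM is for displaying data, not for storing it.
}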

The IntersectionObserver

The mechanism behind the infinite scrolling (well, in fact it is quite finite, as weather forecasts have a limited range — but the principle is the same) is that the generated HTML table is a skeleton in which every element initially contains a pacman spinner. The data is populated as it is loaded, the trigger being an IntersectionObserver. As the page can be scrolled only horizontally, the observers are attached to the time cells, one per column. As they gradually enter the viewport, the elements to be loaded are added to a queue, and once loaded, the data replaces the spinners. The IntersectionObserver handler is as light as it can be: it only schedules the work to be done in a queue. Do not do heavy lifting in an IntersectionObserver callback. Also, be smart: do not add more elements than you need to your IntersectionObserver, and remember to remove them once they have been triggered. The 110k-element page never has more than 200 observed elements.
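
Here is roughly what that looks like, as a sketch: one observer, attached to the column time cells, doing nothing but queuing work and unobserving the cells that have triggered. The selector, the data-time attribute, the loadQueue name and the rootMargin value are my own assumptions.

const loadQueue = []; // consumed later, during idle time

const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    // Keep the callback light: only enqueue, never load or render here
    loadQueue.push(entry.target.dataset.time);
    // Each column triggers only once, so stop observing it right away
    observer.unobserve(entry.target);
  }
}, { rootMargin: '200px' });

for (const cell of document.querySelectorAll('th.time-cell'))
  observer.observe(cell);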

Use requestIdleCallback for the heavy-lifting and manage the number of parallel API requests

No matter how complex a page is, the browser will always be idling at some moment — for the simple reason that the weak link in that chain is always the interface between the screen and the mouse. No human can keep up with the browser, so sooner or later, it will be idling. requestIdleCallback is there to signal you at that very moment, so use it.

The loader queue is processed by a handler attached to requestIdleCallback. It aggregates the elements from the queue — if there is a request for the wind at 3000m at 4pm and another one for the wind at 2500m, still at 4pm, they will be combined into a single API call — then schedules them using async-await-queue. Parallelizing download loops is beyond the scope of this story; feel free to check my separate story on the subject.

The requestIdleCallback handler processes the queue, which is a cache of Promises (a pattern I have also described in a separate story).

How many elements shall we process at each tick? The correct answer is not measured in number of queue elements; it is measured in milliseconds, and according to the Chrome dev team it is 100ms — the limit of human perception. When doing the heavy lifting in your requestIdleCallback handler, check the elapsed time at every iteration of your processing loop. This should be the exit condition of your loop.
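
Continuing the earlier sketch, the hypothetical loadQueue gets drained in a requestIdleCallback handler whose exit condition is the elapsed time; processOne() stands in for whatever actually issues the API calls.

function onIdle(deadline) {
  const start = performance.now();
  while (loadQueue.length > 0) {
    if (performance.now() - start > 100) break;      // the 100ms budget
    if (deadline.timeRemaining() <= 0 && !deadline.didTimeout) break;
    processOne(loadQueue.shift());                   // hypothetical worker
  }
  // If there is work left, ask to be called again on the next idle period
  if (loadQueue.length > 0) requestIdleCallback(onIdle, { timeout: 1000 });
}

requestIdleCallback(onIdle, { timeout: 1000 });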

Scheduling

I always use the same simple mechanism to schedule tasks for later execution. It is analogous to the Linux kernel tasklets, and its advantage over simply queuing callbacks is that it allows for a single flow of control in the main function.

The main context is the wind() function called from the IntersectionObserver. It is very fast and lightweight, as it doesn’t do any work by itself.

scheduleLoad() schedules API calls for loading the data. It creates a Promise saving the resolve() and reject() function references on the queue and returns that Promise so wind() can await it.

Then, at some point, process() is called by the browser when it is idling. It starts unloading Promises from the queue, processes them and then calls their resolve function references to unlock the tasks awaiting them. When its maximum allotted time is over, it ends the processing loop.
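
Put together, the mechanism is only a few lines. This is a condensed sketch: loadFromAPI() and the request shape are placeholders, and the aggregation step is omitted.

const pending = []; // parked Promises: { request, resolve, reject }

// Called from wind(): create a Promise and park its resolve/reject on the queue
function scheduleLoad(request) {
  return new Promise((resolve, reject) =>
    pending.push({ request, resolve, reject }));
}

// Called by the browser when it is idling
function process() {
  const start = performance.now();
  while (pending.length > 0 && performance.now() - start < 100) {
    const task = pending.shift();
    // Fire the API call and unlock the wind() that is awaiting this task
    loadFromAPI(task.request).then(task.resolve, task.reject);
  }
  if (pending.length > 0) requestIdleCallback(process);
}

// Inside wind(), the flow of control stays linear:
// const data = await scheduleLoad({ variable: 'wind', level: 3000, time: '16:00' });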

Use requestAnimationFrame when updating the DOM tree

I guess you already knew about this one. It is important. Like, very important. It is what allows the animation to run. The same rule as with requestIdleCallback applies — do not do more than 100ms of work. Schedule all the DOM modifications in a queue, then process them when called by requestAnimationFrame until you reach 100ms. Do not modify the DOM outside requestAnimationFrame. If you have never used requestAnimationFrame, it is basically a way to make the browser call you before rendering another frame. It is the perfect moment for modifying the DOM. The time you spend in requestAnimationFrame will (be one of the factors that) determine the frame rate your users will get. Don’t go for 24 frames per second; this is not a Hollywood movie. Be realistic: when updating a 110k-node DOM, 10 fps is not that bad.
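
A minimal sketch of such a DOM update queue; the scheduleDOMUpdate() name is mine, and the 100ms budget follows the figure above.

const domQueue = []; // each entry is a small closure that touches the DOM

function scheduleDOMUpdate(fn) {
  domQueue.push(fn);
  // Only request a frame when the queue goes from empty to non-empty
  if (domQueue.length === 1) requestAnimationFrame(paint);
}

function paint() {
  const start = performance.now();
  while (domQueue.length > 0 && performance.now() - start < 100)
    domQueue.shift()();
  if (domQueue.length > 0) requestAnimationFrame(paint);
}

// Usage: never write to the DOM directly from the data loader
// scheduleDOMUpdate(() => { cell.innerHTML = renderWind(data); });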

Optimize your CSS

When reflowing the layout, the browser must match every element against every CSS selector. Even if the matching uses a very efficient algorithm with a radix tree, it still helps to reduce the number of selectors as much as possible.

I use Bootstrap, which is very good and allows artistically handicapped engineers like me to create beautiful pages. It has only one drawback — unless you compile it yourself, it comes with a ton of CSS classes. Do not include the CDN version. Either generate your own light version, or use an optimizing plugin for your bundler. I use PurgeCSS for Webpack. It has its limits, especially when you have dynamic CSS classes, but with some manual fiddling it allows you to ship a much lighter CSS.
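
For reference, a Webpack configuration along these lines; treat it as a sketch, since the PurgeCSS plugin's export name and options have changed between major versions, and the paths glob and the safelist entries below are examples, not my actual setup.

// webpack.config.js
const path = require('path');
const glob = require('glob');
const { PurgeCSSPlugin } = require('purgecss-webpack-plugin'); // export name varies by version

module.exports = {
  // ...entry, output, loaders...
  plugins: [
    new PurgeCSSPlugin({
      // Scan the templates and the JS for class names that are actually used
      paths: glob.sync(path.join(__dirname, 'src/**/*.{js,hbs,html}'), { nodir: true }),
      // Classes generated at runtime need the "manual fiddling": list them here
      safelist: ['show', 'collapsing', /^tooltip/],
    }),
  ],
};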

Reuse the SVG elements

When you have over 7,000 SVGs, you are bound to have some repeating elements. SVG allows you to have reusable defs, which are like macros and can be shared across a page. The web is full of SVG tutorials; if you have lots of similar SVGs, go check one, it will be worth your time. If an SVG is repeated in a completely identical way, leave the reuse to the browser and reference it with an xlink:href attribute (or simply href if you don’t care about older browsers, which won’t run your monstrous page anyway).
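
As a sketch of that second case, the shared definition is declared once with an id and every spinner is just a use element pointing at it; the '#pacman' id is made up, and the defs block is assumed to exist in the static HTML.

const SVG_NS = 'http://www.w3.org/2000/svg';
const XLINK_NS = 'http://www.w3.org/1999/xlink';

// Builds one spinner that merely references a shared definition, assumed to be
// declared once in the static HTML as <svg><defs><g id="pacman">…</g></defs></svg>
function makeSpinner() {
  const svg = document.createElementNS(SVG_NS, 'svg');
  svg.setAttribute('viewBox', '0 0 32 32');
  const use = document.createElementNS(SVG_NS, 'use');
  use.setAttribute('href', '#pacman');                    // modern browsers
  use.setAttributeNS(XLINK_NS, 'xlink:href', '#pacman');  // older ones
  svg.appendChild(use);
  return svg;
}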

You don’t need a framework

Now, don’t get me wrong. A framework is still very important, as it helps keep structure in a large project. And keeping structure is probably the single most important task in a complex long-term project. It matters even more when you work in a team. But sometimes, skipping the framework for one performance-critical page, or a single component, can be the right solution. Today there is no framework that can handle 110k nodes.

…and remember to use the profiler

As a closing remark, and in case you have never used it, I would just like to remind you that Chrome DevTools comes with a very powerful and user-friendly profiler. Firefox has a profiler too, and some seem to like it, but I haven’t used it enough to recommend it.

When implementing a complex page where performance could be a problem, try to profile after every sprint.

And always take all the advice and the old sayings with a grain of salt. The web is a very fast-moving target. What was good practice in 2010 was obsolete by 2015, and it probably changed again in 2020.
