I always knew that Node.js I/O performance was unbeatable compared to other scripting languages. Then, a year ago, I gave up on Python entirely after I was unable to beat my own single-threaded JS implementation of a linear optimization algorithm with multi-threaded Python.
Still, being used to the node-gdal-async…
Recently I had to make some examples for using my new cartography library for React. For every example, I wanted to be able to display both the component output and the code itself, like this:
(the code loading the examples discussed here is in https://github.com/mmomtchev/rlayers/blob/master/examples/App.tsx)
As some of the examples were complex and constantly evolving, it was only natural to assume that everything should be fully automatic, with no maintenance required after the initial setup.
My first reflex was to use React itself, trying to leverage the JSX and transform it to HTML. …
They told me it couldn’t be done, but I refused to listen
About a year ago I set out to remake the profile page on my soaring weather website. That page is a very good exercise in data visualization and web page performance, so I decided to share some of the valuable lessons I learned from that single page.
If you go to that page and click the GFS model from the models menu on the top left, your browser will have to deal with a monstrosity. …
Some common pitfalls and design patterns when using Promises and async functions
Single-threaded event-driven asynchronous I/O is one of the most interesting features of the JS language. It is not a new concept: more than 20 years ago, a very influential paper in the world of network programming called “The C10K problem” was an eye-opening experience for a whole generation of system developers, and it spawned arguably the fastest and one of the most widely used low-level I/O frameworks in the world, libevent. …
Reading and parsing a large CSV file in Node.js doesn’t have to be slower than the equivalent compiled C code… that is, if you are willing to give up the comfort of automatic memory management
Recently I had to process a very large (1.6G) CSV-like file in Node.js. After going through several iterations, some reading, and quite a few a-ha moments, I decided to write yet another story on this subject.
Reading large files in Node.js is a subject that comes back very often. In fact, there are already a few Medium and dev.to stories and some very informative…
How many times have you been confronted with the classical problem of parallelizing download loops in crawlers and scrapers
Consider the following typical crawler or scraper code:
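A minimal sketch of what such code typically looks like (the `download()` helper here is hypothetical, simulated with a timer so the snippet is self-contained; in a real crawler it would be an HTTP client call):

```javascript
const urls = [
  'https://example.com/page1',
  'https://example.com/page2',
  'https://example.com/page3',
];

// Stand-in for a real HTTP client: resolves with the page body
// after a short delay
function download(url) {
  return new Promise((resolve) =>
    setTimeout(() => resolve(`<html>${url}</html>`), 10));
}

async function crawl() {
  const pages = [];
  for (const url of urls) {
    // await suspends the loop until this download completes,
    // so the pages are fetched strictly one after another
    pages.push(await download(url));
  }
  return pages;
}
```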
Thanks to async / await this is as simple and readable as it can get.
But what if you have thousands of URLs? This would take ages as each iteration waits for the previous one.
Luckily, JS has a solution:
Here we are launching all the downloads without await. The beauty of promise chaining allows us to create new promises out of a succession of asynchronous operations. We push all those promises…
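A sketch of the parallel version, reusing the same hypothetical timer-simulated `download()`: each `.then()` chains a processing step onto its download promise without blocking the loop, and a single `Promise.all` awaits the whole batch.

```javascript
const urls = [
  'https://example.com/page1',
  'https://example.com/page2',
  'https://example.com/page3',
];

// Stand-in for a real HTTP client: resolves with the page body
// after a short delay
function download(url) {
  return new Promise((resolve) =>
    setTimeout(() => resolve(`<html>${url}</html>`), 10));
}

async function crawlParallel() {
  const queue = [];
  for (const url of urls)
    // No await here: every download starts immediately and the
    // processing step is chained onto its promise
    queue.push(download(url).then((body) => body.length));
  // A single await point: wait for all downloads to complete
  return Promise.all(queue);
}
```

All three downloads now run concurrently, so the total time is roughly that of the slowest one instead of the sum of all of them.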