• "Use the platform" is not always the best advice

    Swizec Teller wrote an interesting blog post about how different DOM diffing libraries perform under pressure.

    It’s interesting to see how Preact and Vue perform better than React in that demo. In my opinion the biggest performance bottleneck is “the platform”, not the libraries. I rewrote the demo to render everything in a canvas element, which resulted in much smoother user interactions and animations.

    You can see the demo here [Source]

    It’s important to point out that the canvas demo does not throttle mousemove events. It’s also not using requestAnimationFrame.

    It makes sense to use SVG for programs like this. It’s more declarative and easier to work with, but as you can see in the demo, it’s not fast enough.

    React and other DOM diffing libraries try to make it easier for the browser to render the demo by reducing the number of updates. The React implementation also throttles mousemove events and uses requestAnimationFrame to reduce the number of updates. But browsers have to consider so many different aspects of the web platform while applying an update that it’s very hard for them to be spec compliant and perform well at the same time.
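    As an aside, the requestAnimationFrame-based throttling mentioned above can be sketched in a few lines. This is a generic sketch, not the demo’s actual code; the `schedule` parameter is my own addition so the helper can be exercised outside a browser:

```javascript
// Coalesce a stream of events (e.g. mousemove) into at most one
// handler call per animation frame, keeping only the latest payload.
function frameThrottle(fn, schedule) {
  // Fall back to setTimeout outside the browser (assumption for testability)
  schedule = schedule || (typeof requestAnimationFrame !== 'undefined'
    ? requestAnimationFrame
    : function (cb) { setTimeout(cb, 16); });
  var scheduled = false;
  var lastArg;
  return function (arg) {
    lastArg = arg;      // always remember the newest event
    if (!scheduled) {   // but schedule at most one flush per frame
      scheduled = true;
      schedule(function () {
        scheduled = false;
        fn(lastArg);
      });
    }
  };
}
```

    In the browser you would wire it up as `element.addEventListener('mousemove', frameThrottle(render))`.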

    The web platform is full of amazing features. It’s very easy to make good-looking apps using CSS and HTML. Simple things like text-shadow would be really hard to implement if the only API for rendering apps was the canvas API.

    I’m not a browser engineer and don’t know the internals of how browsers work. My assumption is that when the browser receives a DOM update from JavaScript, it has to jump through many hoops to make sure everything renders per spec. Canvas, on the other hand, is a very low-level API that lets you draw whatever you want on the screen. That low-level nature is what makes canvas fast.

    Sometimes “use the platform” is not good advice, because the platform is full of features that you might not know about but that browsers have to take into account.

    Canvas and DOM vs. iOS Core Graphics and UIKit

    The HTML canvas element API is similar to Apple’s Core Graphics library, and the DOM is similar to Apple’s UIKit. I’ve done a little bit of iOS development and learned that every UIKit component uses the Core Graphics library to draw pixels. iOS developers can make custom components that use Core Graphics, or override parts of UIKit components using the Core Graphics API. It’s very powerful that developers can mix and match low-level and high-level APIs to achieve their goals. The web as a platform does not allow easy access to pixel drawing and layout computation. The default layout system is always enabled: it’s not possible to tell browsers how to lay things out on the page programmatically. It’s also not possible to have web components that render their own pixels without breaking web accessibility.

    This is a bigger problem than just performance. Web APIs are usually very limited when it comes to custom behaviors. A lot of smart people have tried to solve this problem; React-Canvas, for example, was one attempt. But it can’t be done without changing the platform itself.

    The good news is that the CSS Paint API spec is under development. It will make it possible for web components to draw their pixels directly. More good news is the CSS Layout API spec, which allows developers to override the default layout algorithm. These APIs are part of Project Houdini, an effort to bring low-level APIs to web developers. With low-level APIs, developers don’t have to choose between rendering everything in canvas and relying on a web platform that’s not completely under their control.

    I’m very excited about Project Houdini and can’t wait to write apps that utilize those APIs!

  • Setting up test coverage using Mocha, Istanbul, NYC with TypeScript

    It’s a pleasure to work with a project that uses TypeScript for both the source code and the tests, although setting up test coverage can be a bit tricky. I recently started a project that does exactly that.

    I used Mocha to run the tests and nyc to generate test coverage.

    It took many hours to figure out a solution that works end-to-end, so I wanted to share the end result.

    Here is what the npm scripts section looks like:

      "scripts": {
        "test": "nyc --require ts-node/register ./node_modules/.bin/_mocha"
      },
      "nyc": {
        "include": [
          "src/**/*.ts",
          "src/**/*.tsx"
        ],
        "exclude": [
          "**/*.d.ts"
        ],
        "extension": [
          ".ts",
          ".tsx"
        ],
        "require": [
          "ts-node/register"
        ],
        "reporter": [
          "text-summary",
          "html"
        ],
        "sourceMap": true,
        "instrument": true
      }

    (The globs and reporters shown here are typical values; adjust them to your project layout.)
    Mocha configuration is located in test/mocha.opts:

    --compilers ts-node/register
    src/**/*.test.ts src/**/*.test.tsx

    That’s it! Now you can run npm test to run your tests and get coverage report.

    Now all test coverage reports are mapped using sourcemaps.

  • Understanding JavaScript Backward Compatibility And The Strict Mode

    When I started reading about the new version of JavaScript (ES6 or ES2015) I was conflicted about how adding new keywords to the language would not break JavaScript backward compatibility. As the only runtime language of the Web, JavaScript has to be backward compatible. Any code that is valid and running today should keep working in future JavaScript engines.

    Here is the list of all JavaScript reserved keywords in ES5:

    break       do        instanceof      typeof       case         else
    new         var       catch           finally      return       void
    continue    for       switch          while        debugger     function
    this        with      default         if           throw        delete
    in          try

    It’s illegal to use a reserved keyword as a variable or function name in JavaScript. For example, the following code throws a SyntaxError which says “Cannot use the keyword ‘delete’ as a variable name”.

    var delete = document.getElementById('delete');
    // Throws SyntaxError

    But it’s perfectly fine to use ES6 reserved words like let as a variable name in ES5.

    var let = 1;
    // No error

    One might wonder how this code should work in an ES6 runtime, since let is a reserved word and cannot be used as a variable name.

    ES5 defines two “modes” for the language. The regular JavaScript that existed before the introduction of ES5 is considered “sloppy mode”, and since ES5, JavaScript programmers can choose to write their programs in “strict mode”. Strict mode introduces a set of new rules to JavaScript, including additional reserved words. This set of keywords is called “FutureReservedWord”. Here is the list:

    implements     interface   let       package    private
    protected      public      static    yield

    The FutureReservedWord keywords are not enforced in non-strict JavaScript. But in strict mode they are considered reserved words and it’s illegal to use them as variable names.

    // sloppy
    var let = 1;
    // No error

    // strict
    "use strict";
    var let = 1;
    // Throws SyntaxError: Cannot use the reserved word 'let' as a variable name in strict mode.

    The strict mode helps JavaScript engines determine which set of reserved words to enforce without breaking backward compatibility.
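    You can observe this from code by feeding source text to the Function constructor, which parses the body immediately and throws a SyntaxError on invalid programs. A small sketch (the helper name compiles is mine):

```javascript
// Returns true if `src` parses as a function body, false on a SyntaxError.
function compiles(src) {
  try {
    new Function(src); // parses immediately; throws on syntax errors
    return true;
  } catch (e) {
    return false;
  }
}

compiles('var let = 1;');               // → true: sloppy mode, `let` is fine
compiles('"use strict"; var let = 1;'); // → false: strict mode reserves `let`
```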

    Sloppy ES6

    Even when using new ES6 features like arrow functions or rest/spread, it’s legal to use FutureReservedWord keywords as variable names in sloppy mode:

    let arr = ['one', 'two', 'three'];
    var [, ...let] = arr;
    // let is ['two', 'three']
    const f = () => { var private = true; return private; };
    f(); // no error
  • YAML manipulation library that keeps comments and styling

    The usual way of manipulating a YAML file in JavaScript is to parse it into a JavaScript object, modify that object, and then serialize it back to YAML. Unfortunately this process removes all comments and can mess up the styling of the document.

    const yamlString = `
    # my value
    value: 100
    `;
    const json = YAML.load(yamlString);

    // updating the value
    json.value = 200;

    // getting back the YAML. Comments are gone...
    console.log(YAML.dump(json)); // => `value: 200`

    This was a problem in a number of projects where we were using YAML. We wanted to update some values in YAML files without losing the formatting and comments.

    I wrote the YAWN YAML library to solve this problem. YAWN YAML uses the Abstract Syntax Tree (AST) of a YAML document to rebuild the file structure upon changes. This makes it possible to change a value in the YAML document without losing the structure of the document.
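    To illustrate the idea with a toy sketch (this is not YAWN’s implementation; a real AST-based approach handles nesting, quoting and multi-line values), one can patch a top-level scalar in the original text so everything else, including comments, survives:

```javascript
// Toy sketch: replace the scalar for a top-level `key: value` line in place,
// leaving comments and the rest of the document untouched.
function setTopLevelValue(yamlText, key, newValue) {
  var re = new RegExp('^(' + key + ':\\s*)([^#\\n]*)', 'm');
  return yamlText.replace(re, function (match, prefix, oldValue) {
    // keep a trailing space before an inline comment, if there was one
    var pad = /\s$/.test(oldValue) ? ' ' : '';
    return prefix + String(newValue) + pad;
  });
}
```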

    Here is an example of how it works:

    import YAWN from 'yawn-yaml';

    let str = `
    # my comment
    value: 1 # the value is here!
    `;

    let yawn = new YAWN(str);

    yawn.json = {value: 2};

    console.log(yawn.yaml); // =>
    // # my comment
    // value: 2 # the value is here!

    Please note that you need to replace the .json value of the yawn instance as a whole. This is because the setter function looks at the new JSON and reconstructs the YAML structure.

    This library is heavily tested and can be used in browser and Node.js environments. Please file a bug if you find one.

  • Custom Errors in ES6 (ES2015)

    A very quick note. With the new class and extends keywords it’s now much easier to subclass the Error constructor:

    class MyError extends Error {
      constructor(message) {
        super(message);
        this.message = message;
        this.name = 'MyError';
      }
    }
    There is no need for the this.stack = (new Error()).stack; trick, thanks to the super() call.
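    A quick usage sketch showing that instances behave like regular errors: instanceof works across the hierarchy and a stack trace is captured automatically.

```javascript
class MyError extends Error {
  constructor(message) {
    super(message);
    this.message = message;
    this.name = 'MyError';
  }
}

try {
  throw new MyError('something failed');
} catch (e) {
  e instanceof MyError; // true
  e instanceof Error;   // true — the whole chain works
  e.name;               // 'MyError'
  typeof e.stack;       // 'string' — captured by super(), no tricks needed
}
```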

  • High Performance Recursive HTML/JavaScript Components

    I developed the json-formatter directive to use in Swagger Editor. It’s a simple component for rendering JSON nicely in HTML, inspired by how WebKit renders JSON in its developer tools.

        <!-- This will result in an infinite loop -->
        <json-formatter ng-repeat="key in keys"></json-formatter>

    It’s an AngularJS directive, so you can’t use simple recursion by just repeating the directive in the directive’s template; instead you have to work around it. This StackOverflow answer elegantly defines a factory that overrides AngularJS’s compile method to allow recursive directives.

    When rendering large JSON objects, this directive responded very slowly. For example, a large JSON file that resulted in 24,453 HTML nodes took 3.34 seconds to render. That’s a lot of time for ~25K nodes. Take a look at the HAR file¹.

    AngularJS recursive $digest calls

    AngularJS groups $digest calls to minimize DOM manipulation and change-event triggering, but with our recursive helper factory we’re bypassing that optimization and running a lot of $digest cycles. That’s why we end up with such a slow component.

    Since AngularJS has no good way of building recursive components, I went ahead and rebuilt json-formatter in pure JavaScript. It’s available here. This component uses no framework; everything is plain JavaScript and recursion happens in a simple for loop. It’s much faster than the AngularJS directive. Look at the HAR file: the same JSON renders in 981 milliseconds.


    Further optimization

    Our component appends new children to the parent DOM node and installs separate event listeners for each child. This is an artifact of porting the code from AngularJS to pure JavaScript. We should really do the iteration at the template level without any DOM manipulation, and use event delegation so there is only one click event listener for the entire component.

    Even though we’re getting a 3x performance boost without those optimizations, I will refactor this component with the above ideas to perform even better.

    1. Open HAR files in the Timeline tab of Chrome Developer Tools by right-clicking and selecting “Load Timeline Data”.

  • Non-blocking Asynchronous JSON.parse Using The Fetch API

    Update (June 2016)

    Right after I published this blog post, I received a response from the amazing Node.js developer Vladimir Kurchatkin: JSON parsing is not happening in a different thread and in fact it blocks the main thread. In this tweet I admitted I was wrong and needed to update my post.

    Nolan Lawson made the following video to demonstrate the effect in multiple browsers:

    The problem

    I am working on Swagger Editor performance. One of the solutions for speeding things up was moving process-intensive tasks to Web Workers. Web Workers do a great job of moving process-heavy tasks out of the main thread, but the way we communicate with them is very slow. Each message sent to or received from a worker needs to be converted to a string. This means that to transfer objects between the main thread and worker threads, we have to JSON.parse and JSON.stringify our objects back and forth.

    For larger objects, this can lead to large blocking JSON.parse calls. For example, when transferring back the AST from our AST-composer worker I saw a 50ms pause. A 50 millisecond pause can easily drop 4 frames.

    The solution

    It’s 2015, but neither JavaScript nor the web has a non-blocking JSON API! So there is no native, out-of-the-box solution to this. Because communication with a worker is via strings, doing JSON.parse in a worker is also pointless.

    When I was exploring the Fetch API (window.fetch) I noticed the Response object has an asynchronous .json method. This is how it’s used:

    fetch('foo.json')
      .then(function(response) {
        response.json().then(function(result) {
          // result is the parsed body of foo.json
        });
      });

    We can use (abuse?) this API to move all of our JSON-parsing business out of the main thread. It can be done as simply as:

    function asyncParse(string) {
      return (new Response(string)).json();
    }

    It works as expected:

    asyncParse('{"foo": 1}').then(function (result) {
      // result is {foo: 1}
    });

    Moving JSON.parse out of the main thread makes the actual parsing time less important, but let’s see how it differs from native JSON.parse:

    // jsonStr is 65,183 characters
    console.time('sync: total time (blocking)');
    JSON.parse(jsonStr);
    console.timeEnd('sync: total time (blocking)');

    console.time('async: blocking time');
    console.time('async: total time');
    asyncParse(jsonStr).then(function(result) {
      console.timeEnd('async: total time');
    });
    console.timeEnd('async: blocking time');


    sync: total time (blocking): 1.149ms
    async: blocking time: 0.745ms
    async: total time: 3.232ms

    The async method is about 3x slower in total, but hey, it’s async, and using it blocked the UI for less than a millisecond!


    I’ll experiment with this, and if it makes sense I’ll make a package and publish it. I hope JavaScript or the DOM gets native non-blocking JSON APIs so we don’t have to resort to hacks like this. With async/await in ES7 (ES2016), working with async methods is much easier, so we should have async JSON APIs as well.

  • How to split a Swagger spec into smaller files

    If you’re writing a Swagger API spec and it’s becoming too large, you can split it into multiple files. Swagger supports JSON Reference (draft) for using remote and local pieces of JSON to build up a Swagger document.

    JSON Reference Overview

    JSON Reference uses the special key $ref to define a “reference” to a piece of JSON. For example, the following JSON has a reference to http://example.com/foo.json:

    {
      "foo": {
        "$ref": "http://example.com/foo.json"
      }
    }

    Imagine the JSON file at http://example.com/foo.json contains the following JSON:

    {
      "bar": 1
    }

    If we “resolve” the JSON object that had a $ref, we get this:

    {
      "foo": {
        "bar": 1
      }
    }

    As you can see, the object containing the $ref is replaced with the object the reference was pointing to. Objects containing $ref cannot have any other properties; if they did, those properties would be lost during resolution.

    Remote And Local References

    JSON References can be remote or local. A local reference, just like a local link in an HTML file, starts with #. A local reference uses a JSON Pointer (RFC 6901) to point to a piece of JSON inside the current document. Consider the following example:

    {
      "info": {
        "version": "1.0.0"
      },
      "item": {
        "information": {
          "$ref": "#/info"
        }
      }
    }

    After reference resolution, that JSON will be transformed into this:

    {
      "info": {
        "version": "1.0.0"
      },
      "item": {
        "information": {
          "version": "1.0.0"
        }
      }
    }

    Note that "info" was not removed from our JSON, and the key "info" was not added to the "information" object. We simply replaced the object containing the $ref with the contents of the "info" object. Using local $refs is a great way to avoid repeating yourself when writing a JSON object.
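    A minimal sketch of how a resolver follows such a local reference (this ignores the ~0/~1 escaping rules of full JSON Pointer, and the function name is mine):

```javascript
// Follow a local reference like '#/info' or '#/definitions/User'
// by walking the document one key at a time.
function resolveLocalRef(doc, ref) {
  return ref
    .slice(2)         // drop the leading '#/'
    .split('/')
    .filter(Boolean)  // tolerate empty segments
    .reduce(function (obj, key) { return obj[key]; }, doc);
}

var doc = {
  info: { version: '1.0.0' },
  item: { information: { $ref: '#/info' } }
};
resolveLocalRef(doc, '#/info'); // → { version: '1.0.0' }
```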

    As a convention, when defining a JSON Schema (RFC draft), all objects that get repeated go into the "definitions" object. Swagger embraces this and uses the "definitions" object as the place to hold your API models. The API models are used in parameters, responses and other places in a Swagger spec.

    Using JSON References to split up a Swagger spec

    A Swagger spec can use $refs anywhere: you can put a reference in place of any object. By default Swagger encourages spec developers to put their models in the "definitions" object, but you can go further and use $refs to split parts of your API spec into different files. For simplicity I am going to use YAML for the rest of the examples.

    Imagine you have a Swagger spec like this:

    swagger: '2.0'
    info:
      version: 0.0.0
      title: Simple API
    paths:
      /foo:
        get:
          responses:
            '200':
              description: OK
      /bar:
        get:
          responses:
            '200':
              description: OK
              schema:
                $ref: '#/definitions/User'
    definitions:
      User:
        type: object
        properties:
          name:
            type: string

    This Swagger spec is very simple but we can still split it into smaller files.

    First we need to define our folder structure. Here is our desired folder structure:

    ├── index.yaml
    ├── info
    │   └── index.yaml
    ├── definitions
    │   ├── index.yaml
    │   └── User.yaml
    └── paths
        ├── index.yaml
        ├── bar.yaml
        └── foo.yaml

    We keep root items in index.yaml and put everything else in other files. Using index.yaml as file name for your root file is a convention. In folders that hold only one file we also use index.yaml as their file name.

    Here is the list of files with their contents:

    index.yaml

    swagger: '2.0'
    info:
      $ref: ./info/index.yaml
    paths:
      $ref: ./paths/index.yaml
    definitions:
      $ref: ./definitions/index.yaml

    info/index.yaml

    version: 0.0.0
    title: Simple API

    definitions/index.yaml

    User:
      $ref: ./User.yaml

    definitions/User.yaml

    type: object
    properties:
      name:
        type: string

    paths/index.yaml

    /foo:
      $ref: ./foo.yaml
    /bar:
      $ref: ./bar.yaml

    paths/foo.yaml

    get:
      responses:
        '200':
          description: OK

    paths/bar.yaml

    get:
      responses:
        '200':
          description: OK
          schema:
            $ref: '#/definitions/User'

    Note that in paths/bar.yaml we are using a local reference, even though inside that file alone the local reference cannot be resolved. Most resolvers resolve remote references first and then resolve local references. With that order, $ref: '#/definitions/User' is resolved inside index.yaml after definitions/User.yaml has been populated into it.


    json-refs is a tool for resolving a set of partial JSON files into a single file. Apigee’s Jeremy Whitlock did the hard work of writing this library.

    Here is an example of how to use json-refs and js-yaml to resolve our multi-file Swagger spec:

    var resolve = require('json-refs').resolveRefs;
    var YAML = require('js-yaml');
    var fs = require('fs');

    var root = YAML.load(fs.readFileSync('index.yaml').toString());
    var options = {
      processContent: function (content) {
        return YAML.load(content);
      }
    };

    resolve(root, options).then(function (results) {
      console.log(YAML.dump(results.resolved));
    });


    You can find the example in this blog post in this GitHub repository.

  • Slides from my talk: An Overview of Map and Set in JavaScript

    Here are the slides from my talk at the SF Node.js Club Meetup about the Set and Map objects in JavaScript.

    Go to slides

  • Slides from my talk: My Experience in Building Swagger Editor

    Here are the slides from my talk at the SouthBay JavaScript Meetup about my experience building Swagger Editor.

    If you’re not familiar with Swagger, check it out. It’s very useful for starting microservices quickly or generating documentation for your API automatically.

    Go to slides

  • Slides from my talk: Working with Arrays and Objects in modern JavaScript

    Here are the slides from my talk at the BayNode Meetup about the Object and Array objects in ES5 and ES6.

    Go to slides

  • Compare NSDate instance with ease in Swift

    To make comparing two NSDate instances in Swift easy, we can overload the <=, >=, >, < and == operators with NSDate types on the left- and right-hand sides of the overloading functions. timeIntervalSince1970 is a safe measure for comparing most dates, so I used it to decide whether two dates are equal, less, or greater.

    func <=(lhs: NSDate, rhs: NSDate) -> Bool {
        return lhs.timeIntervalSince1970 <= rhs.timeIntervalSince1970
    }

    func >=(lhs: NSDate, rhs: NSDate) -> Bool {
        return lhs.timeIntervalSince1970 >= rhs.timeIntervalSince1970
    }

    func >(lhs: NSDate, rhs: NSDate) -> Bool {
        return lhs.timeIntervalSince1970 > rhs.timeIntervalSince1970
    }

    func <(lhs: NSDate, rhs: NSDate) -> Bool {
        return lhs.timeIntervalSince1970 < rhs.timeIntervalSince1970
    }

    func ==(lhs: NSDate, rhs: NSDate) -> Bool {
        return lhs.timeIntervalSince1970 == rhs.timeIntervalSince1970
    }

    Note that operator overloading declarations should be placed in the global scope. I highly recommend documenting this behavior in your developer guide.

    With those operator overloading declarations in place, now we can compare dates with ease:

    let date0 = NSDate(timeIntervalSince1970: 0)
    let date1 = NSDate(timeIntervalSince1970: 0)
    let date2 = NSDate(timeIntervalSince1970: 1839203982)
    let date3 = NSDate(timeIntervalSince1970: 1339203982)
    date1 < date2 // true
    date0 == date1 // true
    date3 > date2 // false
  • Get QuickLook Preview of Swift objects in Xcode

    When hitting breakpoints in Xcode, it’s quite hard to see what exactly is inside an object: all Xcode gives you is the memory address of that object. In Xcode 6 it’s possible to overcome this by implementing the debugQuickLookObject method in your class. This method is called when the program is stopped at a breakpoint and you hover over the object and select the little eye icon.

    For example, I’ve implemented this method in my File class. As you can see, the output is very useful and handy for debugging. It works great for NSManagedObjects too!

    Quick look of an object in Xcode

    class File: NSManagedObject {
        @NSManaged var id: NSNumber
        @NSManaged var parent_id: NSNumber
        @NSManaged var name: String!
        @NSManaged var content_type: String!

        init(json: NSDictionary) { /* ... */ }

        func debugQuickLookObject() -> AnyObject? {
            return "\(name)\ntype:\(content_type)"
        }
    }

    debugQuickLookObject can return almost anything, from a string to an image to a sound. It should return one of the cases of QuickLookObject, which are listed here:

    enum QuickLookObject {
        case Text(String)
        case Int(Int64)
        case UInt(UInt64)
        case Float(Double)
        case Image(Any)
        case Sound(Any)
        case Color(Any)
        case BezierPath(Any)
        case AttributedString(Any)
        case Rectangle(Double, Double, Double, Double)
        case Point(Double, Double)
        case Size(Double, Double)
        case Logical(Bool)
        case Range(UInt64, UInt64)
        case View(Any)
        case Sprite(Any)
        case URL(String)
    }
  • How to kill child processes that spawn their own child processes in Node.js

    If a child process in Node.js spawns its own child processes, the kill() method will not kill those grandchild processes. For example, if I start a process that starts its own child processes via the child_process module, killing that child process will not make my program quit.

    var spawn = require('child_process').spawn;
    var child = spawn('my-command');

    // kills `my-command` but not the processes it spawned
    child.kill();

    The program above will not quit if my-command spins up some more processes.

    PID range hack

    We can start child processes with the {detached: true} option, so those processes will not be attached to the main process; instead they go into a new process group. Then we can use the process.kill(-pid) method to kill all processes in the same group as the child with that pid. In my case, there is only one process in this group.

    var spawn = require('child_process').spawn;
    var child = spawn('my-command', {detached: true});

    // kills `my-command` and everything it spawned (note the minus sign)
    process.kill(-child.pid);

    Please note the - before pid. This turns the pid into a process-group id, telling the kill() method to signal the entire group.

  • You don't have to ask developers to install Bower and Grunt to start your app

    It’s very common for front-end applications to have Bower dependencies and use Grunt (or Gulp) as their build system. Usually the README states that you need to install Bower and Grunt globally before you can start the project:

    npm install -g bower grunt-cli
    git clone git://myapp.git
    cd myapp
    npm install
    bower install
    grunt serve

    In the Swagger Editor project, I made this process as simple as git clone and then npm start:

    git clone git@github.com:swagger-api/swagger-editor.git
    cd swagger-editor
    npm start

    No need to install Bower or Grunt. It also works on Windows.

    npm scripts to the rescue!

    With npm scripts we can remove the dependency on global installations of Bower and Grunt, and also install npm and Bower packages automatically. First we need to install Bower and Grunt as development dependencies of our app:

    npm install --save-dev bower grunt-cli

    Now, using npm scripts fields we can define start script to trigger npm install, bower install and then grunt serve.

    "scripts": {
      "start": "npm install; bower install; grunt; grunt serve"
    }

    Now git clone and then npm start will do everything you need. This works in Linux, OS X and Windows, because "start" is simply a shell script that has ./node_modules/.bin in its $PATH. So even if users don’t have Grunt installed globally, the grunt command in your npm script will execute the grunt binary in ./node_modules/.bin.

    Pro tip: if you name your script server.js, you don’t even need a start field in the scripts section of your package.json; npm will automatically run node server.js for the npm start command.
