Migrating from create-react-app-typescript to Create React App

Create React App 2.1.0 just arrived with TypeScript support! While Will Monk’s fork create-react-app-typescript has served us well, being able to use the official version has a number of advantages. Most importantly: it is supported by the full weight of the Create React App team, and will therefore stay closely aligned with React proper and will always have up-to-date documentation. Furthermore, you are able to use everything that is supported by Create React App, like Sass.

The implementation also deviates a bit from create-react-app-typescript’s. Most importantly, TypeScript is only used for type checking, whereas transpilation is done by Babel. The disadvantage of this is that you are subject to the caveats of Babel’s TypeScript support, most notably the lack of namespaces and having to use the x as Foo type casting syntax. In practice, however, it is unlikely that these caveats will affect a React app, and the upside is that you are now able to tap into Babel’s extensive ecosystem.

So great: we can now use TypeScript for new apps created with Create React App. However, many of us already have apps written using create-react-app-typescript. How much work is it to port those to Create React App proper?

As it turns out: not that much. Let’s get to it.

Step 1: remove react-scripts-ts, add react-scripts

Create React App is a command line application that generates a basic React application for you, but instead of adding all commonly used dependencies directly, it adds a single dependency that bundles them and is maintained by the CRA team itself: react-scripts. Likewise, create-react-app-typescript has its own fork of this: react-scripts-ts. Thus, the main thing to do when migrating is to switch over this dependency:

$ npm uninstall react-scripts-ts
$ npm install react-scripts

Then, we have to make sure that the new scripts are the ones that are actually called when we run npm start, npm test, etc. Thus, in your package.json, replace:

  "scripts": {
    "watch": "react-scripts-ts start",
    "build": "react-scripts-ts build",
    "test": "react-scripts-ts test --env=jsdom",
    "eject": "react-scripts-ts eject",

with:

  "scripts": {
    "watch": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject",

Step 2: Activate TypeScript support

Although CRA now supports TypeScript, you do have to explicitly enable it. This can be done by simply installing a few packages:

$ npm install --save typescript @types/node @types/react @types/react-dom @types/jest

Step 3: Clean up the remnants of create-react-app-typescript

create-react-app-typescript did a few things differently from how Create React App proper does it, and therefore added a few files that are either no longer needed or are now named differently. Don’t worry, though; we will recreate the relevant, properly named files in a moment.

The outdated files are tsconfig.json, tsconfig.prod.json, tsconfig.test.json and images.d.ts. To remove them with a single command:

$ rm tsconfig.json tsconfig.prod.json tsconfig.test.json images.d.ts

(Note that, apart from a few options, you will still be able to customise tsconfig.json to your likings.)

Step 4: Run it!

You should now be set! If you now run your app for the first time, Create React App will create relevant files, such as a new tsconfig.json, for you:

$ npm start

If everything went well, you should now have a running app. Not so bad, was it?

Depending on your setup, though, there are a few additional problems you might run into.


I will try to keep the following list up-to-date with problems that people run into, and how to solve them. Ran into any yourself that was not documented here? Let me know on Twitter or by email.

Absolute imports

create-react-app-typescript allowed specifying your imports relative to your root directory. In other words, no matter which file you were editing, you could do

// In src/components/Bar/Bar.tsx
import { Foo } from 'src/components/Foo/Foo';

This is useful because it allows you to move your files around as you please, but also makes it less transparent where your imports are coming from.

Create React App does not support this. To fix this, make your imports relative:

// In src/components/Bar/Bar.tsx
import { Foo } from '../Foo/Foo';

Importing CSS files from node_modules

In create-react-app-typescript, you could directly import CSS files that were located in node_modules from within a React component:

// In Foo.tsx
import 'node_modules/bulma/css/bulma.min.css';

With Create React App, you can still import from node_modules but, as above, cannot rely on absolute imports. However, you can import directly from subfolders in node_modules:

// In Foo.tsx
import 'bulma/css/bulma.min.css';

Value not found, property does not exist on type, etc.

If you were using modern JavaScript APIs in your code, you have to tell TypeScript to include the relevant type definitions. To do so, add the lib property to the compilerOptions in your tsconfig.json, listing the type definitions you use, e.g.:

    "lib": ["esnext", "dom"]

Also make sure that, if you want to support older browsers like Internet Explorer 11, you include the relevant polyfills.

Other type checking errors

Create React App enables strict mode for TypeScript. This can help you catch many errors, and I would suggest trying to fix the errors you encounter. That said, create-react-app-typescript only enabled a subset of the strict type checking options, and moving to a stricter mode now might be too much of a hassle. To loosen these restrictions, you can manually disable the checks you are violating in the compilerOptions in your tsconfig.json, e.g.:

    "alwaysStrict": false,
    "strictFunctionTypes": false,
    "strictPropertyInitialization": false,

Allocation failed - JavaScript heap out of memory

If you use Yarn, adding or removing dependencies might start to fail. This is likely due to Yarn not being able to process the untold masses of dependencies. One factor that can strongly bloat your number of dependencies is if you are using Storybook 3, because it includes other versions of Webpack, Babel, etc. than Create React App 2 is using. Luckily, Storybook also just released a new version that should be compatible with the package versions used by Create React App, so following the Storybook v4 upgrade instructions should solve this issue. Be sure to also read the paragraph below to make Storybook work.


Storybook uses its own Webpack configuration to load your stories. It uses babel-loader and, if you followed the official docs on using TypeScript with Storybook, awesome-typescript-loader to do so, which are not included with CRA. Thus, you will have to install those manually:

$ npm install babel-loader awesome-typescript-loader

Additionally, while Create React App uses Babel to parse JSX, Storybook expects TypeScript to do so. Thus, create a Storybook-specific TypeScript configuration in .storybook/tsconfig.json that extends the one you already have with that setting, as follows:

  "extends": "../tsconfig",
  "compilerOptions": {
    "jsx": "react",

Then, in .storybook/webpack.config.js, tell awesome-typescript-loader to use that configuration file:

    loader: require.resolve('awesome-typescript-loader'),
    options: { configFileName: path.resolve(__dirname, './tsconfig.json') }
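
Putting those two lines in context, the whole .storybook/webpack.config.js could look roughly like this. This is a sketch assuming Storybook v4’s full-control signature, where Storybook passes in its default config; check the Storybook docs for the exact shape in your version:

```javascript
const path = require('path');

// Sketch of .storybook/webpack.config.js for Storybook v4:
// add a rule for TypeScript files to the default config and return it.
module.exports = (baseConfig, env, defaultConfig) => {
  defaultConfig.module.rules.push({
    test: /\.tsx?$/,
    loader: require.resolve('awesome-typescript-loader'),
    options: { configFileName: path.resolve(__dirname, './tsconfig.json') },
  });
  defaultConfig.resolve.extensions.push('.ts', '.tsx');
  return defaultConfig;
};
```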

You should now be good to go!


The main driver behind adding TypeScript support to Create React App was Bruno Lemos, who worked very hard on his pull request. Of course, he could not have done that without the support of the Create React App team, and in particular Joe Haddad, who spent a lot of time reviewing the pull request - and also reviewed this blog post. And of course, we should be grateful to Will Monk, who maintained (and is still maintaining) his excellent fork of create-react-app that allowed us to use TypeScript when it was not officially supported yet.


This work by Vincent Tunru is licensed under a Creative Commons Attribution 4.0 International License.

Towards better bug reports

I just implemented a little change that I think will greatly improve the quality of bug reports I receive: I created a minimal boilerplate repo for my project.

The best bug reports are minimal, complete, and verifiable. In other words:

  • The maintainer should not be distracted by things not relevant to the bug.
  • All info relevant to the bug should be included.
  • The maintainer should be able to observe the buggy behaviour by themselves.

Submitting a high-quality bug report is not always easy, though, which often makes reporting a bug feel like more effort than it’s worth. Consider, for example, a package I maintain: wdio-webpack-dev-server-service. To create a minimal, complete and verifiable example for a bug report, you would have to:

  • create a barebones Webpack project
  • set up WebdriverIO and add a minimal test
  • set up wdio-webpack-dev-server-service
  • and of course: reproduce your bug

That’s three tools to configure! And since it’s probably been a while since you’ve set up those tools for your own project, that’s three tools for which you’ll have to dig up the documentation again.

To save my users the effort and thus hopefully encourage high-quality bug reports, I created a repository with a barebones project. The repository contains all the boilerplate and instructions needed to get wdio-webpack-dev-server-service up and running. I then updated the issue template to refer to that repository and to encourage bug reporters to use it to create a minimal, complete, and verifiable example.

Now bring on those bug reports.


This work by Vincent Tunru is licensed under a Creative Commons Attribution 4.0 International License.

If you measure test coverage, aim for 100%

When it comes to unit testing, there are three schools of thought that I’m aware of:

  1. We don’t do unit testing, since they are overrated/too hard/take too much time/whatever.
  2. We only test critical/error-prone parts of our code, and bugs that have been fixed.
  3. We want a significant part of our code to be covered by tests.

I’m usually in the latter camp: I think that automated tests are one of the most effective ways to ensure code quality on a long-running project.

This strategy, however, is often accompanied by a policy of maintaining at least a certain level of code coverage. And since there are usually some parts of the codebase where the effort required to bring them under test is considered not to outweigh the benefits, unit test policies usually stipulate a minimum code coverage of 80%, 90% or 95%.

I’d like to argue that best practice is to require code coverage analysis to report a coverage of 100%.

Why not <100%

When a measure becomes a target, it ceases to be a good measure.

Goodhart’s Law

The risk of requiring a certain level of coverage is that achieving that level becomes the goal, instead of the goal being to write proper tests. And what is the easiest way to achieve 80% code coverage? Simple: write tests for the 80% of your code that is easiest to test.

However, you do not want to test the code that is easiest to test: you want to test the code that is most important to test. In other words: whether a certain line of code is worth the effort of writing a test should be consciously decided, rather than certain parts not being tested simply because the other parts were enough to achieve the required coverage metrics.

Reaching 100% without losing your mind

Code coverage reports should offer guidance, not a goal to meet. More specifically, they can help you find out which parts of your code you actually forgot to test. It regularly happens that there are some unhappy paths that I did not think of when writing tests, but of which it was obvious that they would need to be dealt with when writing code. When anything below 100% coverage is configured to be insufficient, my testing tools will then point out to me that those parts still need a test.

Of course, it is still likely that there are parts of your code for which the benefits of testing do not outweigh the costs. However, rather than lowering my coverage target by some arbitrary number, I mark those parts as irrelevant for the coverage report, with a comment explaining the reasoning behind not testing it.
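
With Istanbul (the coverage tool used by Jest and many other test runners), such an exclusion can look like this. The function names here are made up for illustration:

```javascript
// Hypothetical example: a thin DOM wrapper that is impractical to unit
// test. Rather than lowering the overall coverage target, exclude it
// explicitly, and record the reasoning in the ignore comment.

/* istanbul ignore next: thin DOM wrapper, exercised by end-to-end tests */
function focusSearchField() {
  document.getElementById('search').focus();
}

// The logic around it remains subject to the 100% requirement:
function normaliseQuery(query) {
  return query.trim().toLowerCase();
}
```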

In other words: I don’t want 100% of my code to be covered by unit tests, but I at least want 100% of my code to have been considered for unit tests. That way, what to test is still left to the programmer’s best judgment, and code coverage analysis becomes helpful rather than yet another tool to satisfy.

Now, I haven’t seen the above mentioned as best practice before, so I’m very much interested in what people have to say about it. That said, regardless of the testing strategy you use, I think developer buy-in is always a good starting point. The tooling we use should help us achieve our goals, rather than set them for us.


This work by Vincent Tunru is licensed under a Creative Commons Attribution 4.0 International License.

Spend effort on your Git commits

Version control systems like Git are widely appreciated for their ability to provide a centralised location for source code, for helping people work together on the same code base, and for allowing you to scrap the crap you just wrote and get your code back to the state it was in before you started your misguided refactoring.

However, there’s one thing it can do for you that is overlooked too often: help you document your code. In fact, it can prove to be at least as useful as comments.

Let me show you why.

What commits can do for you

An important tool in Git is git blame. Apart from telling you which blockhead introduced an especially ill-conceived piece of code (you, a month ago), it also allows your editor to provide a lot more context to your code. For each line of code, it not only lists the author, but also the commit that last touched it.

Thus, your editor can show the commit messages associated with each line, providing more guidance on why it was introduced. It can tell you why a configuration option was set to a specific value. And in contrast to regular comments, commit messages are directly linked to the version of the code they apply to, so they will never become outdated.

You can also inspect each commit to find out which other lines were changed, in which files, at the same time. Since most behaviour can only be implemented by modifying your code in multiple places, it often helps to see those changes together to understand why they were needed. And if you see a stray comment that no longer makes sense, its commit will often tell you what was actually meant.

A commit can also serve as documentation on how to achieve a certain task. For example, I recently converted this blog from Hexo to Jekyll (my apologies if you are subscribed to my feed and saw duplicate posts). By referring to a single post’s migration commit, I could see exactly what steps were needed to migrate a post — and which step I forgot when one of them didn’t show up properly. Apart from being useful as a reference for yourself, this can also be helpful in instructing other contributors: “to add a new endpoint to the API, take a look at this commit for inspiration”.

Crafting your commits

The above is very useful, but it works best if you invest at least some effort into your commits.

This means that you put some thought into your commit messages and how they document your changes. Make sure they are descriptive enough, that they still make sense a week from now, and that you list any trade-offs you might have had to make — although you might also want to accompany this with a comment to that effect.

It also means that you think about what to include in a commit: does it make sense to see these changes together? There’s a reason Git has the concept of a staging area: it is not necessarily the case that all changes in your working directory make sense in the same commit. As an added bonus, this also makes it more feasible to revert faulty work.

So basically, try to avoid git commit -am "asdf", and commit consciously. Unless, of course, there’s an emergency.

In case of fire: `git commit`, `git push`, then leave the building.


This work by Vincent Tunru is licensed under a Creative Commons Attribution 4.0 International License.

The case for Functional Reactive Programming

This post is a summary of a talk I gave today, in which I made the case for Functional Reactive Programming in Javascript using libraries such as RxJS.

Bugs, bugs, bugs

If debugging is the process of removing bugs, then programming must be the process of putting them in. —Edsger W. Dijkstra

All else being equal, software with fewer bugs is better than software with more bugs. To minimise the number of bugs we ship, we spend a considerable amount of effort on debugging. A more time-efficient way of achieving the same goal, of course, is to prevent ourselves from writing these bugs in the first place.

To find out how we can avoid writing bugs, let’s consider what a bug is: software doing things the programmer did not expect it to do. In other words: the harder it is for us to understand the code we work on, the more likely it is to contain bugs.

So what makes our code hard to follow? Consider the following example code:

let counter = 0;

function increment(){
  counter = counter + 1;
}

function decrement(){
  counter = counter - 1;
}

It looks rather innocent: we have a counter with an initial value of 0, and the functions increment and decrement to respectively increase and decrease its value by 1. It exhibits a problematic property described by three words: shared mutable state.

In short, the state of the application is located in counter, which holds the current value of the counter at any point during the lifetime of the application. Both functions share access to counter, and both can modify (“mutate”) its value.

Looking at the code above, my earlier claim that this is hard to follow seems absurd. Consider, however, any non-trivial codebase, where functions consist of more than one line, and increment, decrement and counter might all be located in different files. In that case, anyone who wants to edit either of those will have to keep the rest of the codebase in mind, because bugs might be lurking there when you modify the code here.

Our saviour: pure functions

The way around this is by using pure functions: functions that do not modify anything outside of themselves (in other words: they don’t rely on side effects). Instead, they receive everything they need as input, apply some transformations, and provide the result as output. With this in mind, the increment function might receive a number representing the current counter value, and return a new number that is 1 higher than that:

function increment(counter){
  return counter + 1;
}

And of course, likewise for decrement.

So how would we use functions written in this way? Well, rather than simply calling the function and hoping for something to happen, we have to pass it the correct input and then use the output to update the counter:

let counter = 0;
counter = increment(counter);

But of course, this is not what we usually do. Rather, we increment the counter in response to e.g. the user clicking a button. So we’d actually write a handler for the click event:

let counter = 0;

function onClick(){
  counter = increment(counter);
}

…but now we’re back in the land of shared mutable state: onClick changes a value outside of its own scope! Does this mean we’re stuck now? Of course not!

Functional Reactive Programming to the rescue!

Using the principles of Functional Reactive Programming, we can write the largest part of our application using nothing but pure functions. We do so by pushing side effects to the “edge” of our application, where it interacts with the “outside world”, such as responding to user input or receiving HTTP responses. We convert those interactions into Observables as soon as possible, which we can then manipulate using pure functions.

For our counter example, this would look somewhat like this:

// Convert user input to Observables
const incrementButton = document.getElementById('incrementButton');
const decrementButton = document.getElementById('decrementButton');

const incrementClick$ =
  Observable.fromEvent(incrementButton, 'click');
const decrementClick$ =
  Observable.fromEvent(decrementButton, 'click');

// Convert this input to an Observable producing counter values,
// using nothing but pure functions
const counter$ =
  Observable.merge(
    incrementClick$.map(() => 1),
    decrementClick$.map(() => -1)
  )
  .scan((accumulator, value) => accumulator + value);

// Subscribe to counter values, and display them to the user
counter$.subscribe(counter => {
  document.getElementById('counter').innerHTML = counter.toString();
});

As you can see, we need a few lines of side effects at the top and bottom where we deal with the outside world. The meat of our application, however, consists only of pure functions that do not share state. The challenge here is to understand Observables, but once you’ve got that down FRP code is relatively easy to follow. Which means we’ll write fewer bugs!

(And that’s even without considering how easy it is to write unit tests for pure functions.)

Summing up

As we saw, code that is hard to follow has more bugs. Code becomes hard to follow when it contains shared mutable state. Pure functions avoid this, but it can be a challenge to use them extensively.

That is where Functional Reactive Programming comes in. FRP allows us to represent changing state, while allowing us to keep (most of) our application logic pure.

If this captured your interest, I would encourage you to read more about Functional Reactive Programming.


This work by Vincent Tunru is licensed under a Creative Commons Attribution 4.0 International License.

3 metaphors that show the power of the Observable

There’s a lot of excitement about reactive programming in the front-end community.

Reactive programming is everywhere, so it’s time we took a look at its primary concept: the Observable. What problems does it solve for us?

To answer this question, I will present three different metaphors that highlight situations that Observables can deal with elegantly. After this article, you should have a good idea of when to use Observables, and for what.

Metaphor 1: Observables as special Promises

A Promise is a data structure that represents a value that might not be immediately available. This is very useful when you want to perform some action as soon as that value does become available. An example use case is when you have made an HTTP request: at a certain moment, the response is going to come in, at which point you will want to perform some action. A Promise can then represent that eventual response.

A Promise could be described as such:

A long arrow pointing to the right. At the far end at the right-hand side, a circle labeled ‘3’.

The arrow represents the passage of time; at a certain point, the promise is resolved with the value 3.

What can Observables do for me?

A Promise is useful if you’re waiting for a single value, and want to perform an action when that value comes in. But what if you’re waiting for multiple values? For example, what if you’re waiting for user input, where you’re waiting for multiple keypresses that drip in one by one?

This is where Observables come in. They can be seen as Promises that can return multiple values:

A long arrow pointing to the right. On top of the arrow are, from left to right, three circles labeled ‘1’, ‘2’ and ‘3’, followed by a vertical line.

The above is an Observable that, at different moments in time, delivers the values 1, 2 and then 3. Some time after that, it “completes”: no more values are to be expected. This is indicated by the vertical line.

Metaphor 2: Observables as a Design Pattern

Design Patterns are best practice approaches to common problems in software design. For example, the Iterator pattern is a common approach to dealing with containers with multiple elements, e.g. lists of users. Your code can use an Iterator to get access to the container’s elements, without needing to care about the container’s implementation.

Schematic representation of an Iterator: a block of code with three arrows pointing to an Iterator. The arrows represent method calls to access elements from the container.

Likewise, the Observer pattern is a common approach when you have code that needs to become active when something happens elsewhere in the code. In this pattern, your code (the Observer) can register itself with a Subject, which will then notify it when the change has occurred.

Schematic representation of an Observer: a block of code (the Observer) with an arrow pointing to it from a Subject. The arrow represents a method call on the Observer to notify it of an update.

What can Observables do for me?

Whereas an Iterator is useful when you have a container with multiple elements readily available, an Observable shines when these elements might not yet be available. For example, when you need to call several REST APIs to collect all profile data related to a user, the different elements of the profile might arrive at different moments. Using an Observable, you can construct the full profile as the data rolls in.

Observables can be seen as a combination of the Iterator pattern and the Observer pattern. Whereas the Iterator is pull-based (your code “pulls” values out of the Iterator), an Observable is push-based: the Observable “pushes” values to your code as they arrive.

Schematic representation of an Observable: a block of code with three arrows pointing to it from an Observable. The arrow represents method calls on your code to provide it with new elements as they arrive.
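
The pull/push difference can be sketched without any library. The `fromArray` helper below is a made-up, minimal stand-in for a real Observable:

```javascript
// Pull (Iterator): your code asks the iterator for each next value.
function* numbers() {
  yield 1;
  yield 2;
  yield 3;
}
const iterator = numbers();
const pulled = [
  iterator.next().value,
  iterator.next().value,
  iterator.next().value,
];

// Push (Observable): the producer calls your code when a value is ready.
// `fromArray` is a toy stand-in for a real Observable implementation.
function fromArray(values) {
  return {
    subscribe(observer) {
      values.forEach(value => observer.next(value));
      observer.complete();
    },
  };
}

const pushed = [];
fromArray([1, 2, 3]).subscribe({
  next: value => pushed.push(value),
  complete: () => pushed.push('complete'),
});
```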

Metaphor 3: Observables as a way around shared mutable state

Consider two functions that increment and decrement a counter, respectively:

let counter = 0;

function increment(){
  counter = counter + 1;
}

function decrement(){
  counter = counter - 1;
}

In this example, both functions (shared) can change (mutable) the value of counter (state). This setup has some disadvantages, the most important of which is that it can make your code hard to follow. For example, you cannot simply look at the increment function in isolation: you have to keep track of every line of code that refers to counter and keep in mind how they can affect the function’s code and vice versa. In the toy example above this is not a problem, but as your codebase grows, your code will become harder and harder to maintain.

A common approach to deal with this is to write your functions in a way that they do not depend on anything outside that function — so-called pure functions. Rather than relying on the application being in a certain state, everything the function needs will have to be passed in as an argument. This results in functions that look like this:

function increment(previousValue){
  return previousValue + 1;
}

function decrement(previousValue){
  return previousValue - 1;
}

The question then is: how can we know what the present value of the counter is, e.g. to show it to the user? You could store it in a variable which could then be updated like this:

let counter = 0;
counter = increment(counter);

…but then we’re back in the land of mutating state! If we want to show this value to the user, we still have to be aware of every line of code that references counter and might require updating the view.

What can Observables do for me?

Observables provide a way to deal with changing values without having to resort to shared mutable state. Instead, you explicitly describe the way the value can change using pure functions. So to take our use case of showing the counter value to the user, we could simply subscribe to an Observable that delivers the new value of the counter as it changes, and use that to update the view.

An application window showing a 3, the current value of the counter. An arrow pointing to the window is delivering the new values 2, 1 and 2, respectively.

Let’s say we have two buttons for incrementing and decrementing our counter, respectively:

<button type="button" id="increment">Increment</button>
<button type="button" id="decrement">Decrement</button>

An Observable delivering the value 1 every time the Increment button is clicked would look like this:

const incrementClick$ =
  Rx.Observable.fromEvent(document.getElementById('increment'), 'click')
  .map(ev => 1);

Likewise for decrementing, but with the value -1. An Observable delivering counter values would then be constructed as such:

const counter$ =
  // Create an Observable delivering 1 and -1 on
  // their respective button clicks:
  Rx.Observable.merge(incrementClick$, decrementClick$)
  // Emit a 0 before any button is clicked:
  .startWith(0)
  // Then add whatever is emitted (1, -1 or 0) to
  // whatever we had before:
  .scan((accumulator, value) => accumulator + value);

You can then subscribe to that Observable to update the view with the latest counter value:

counter$.subscribe(counter => {
  document.getElementById('counter').innerHTML = counter.toString();
});

(You can view the full working example at JSFiddle.)

It should be noted that the above example contains nothing but pure functions. There is no explicit variable holding the state, yet we are still able to display a dynamic counter value.

What’s next?

We’ve seen three different metaphors that help us think about Observables:

  • As Promises that can return multiple values
  • As a Design pattern for containers that will have multiple elements, eventually
  • As a way to represent mutable values without requiring shared mutable state

These metaphors provide insight into how Observables can be useful. As a next step, I’d recommend reading up on what you can do with Observables, and how they can help you structure your application. Coincidentally, I’ve written another article that deals with exactly that!


This work by Vincent Tunru is licensed under a Creative Commons Attribution 4.0 International License.

toBeTruthy() will bite you — use toBe(true)

Here’s a short public service announcement for those who write their tests using Jasmine or Protractor: avoid the toBeTruthy() and toBeFalsy() matchers and use toBe(true) and toBe(false) instead.

(Off topic: I’ve also seen people confuse Jasmine and Karma. Karma fires up a browser to run tests in, but the actual tests are written using a testing library such as Jasmine.)

What’s the problem?

I’ve seen people using these matchers assuming they’re just a weird spelling of toBeTrue() and toBeFalse(). But they’re not: truthy and falsy refer to values that evaluate to true and false after being coerced to a boolean!

So yes, true is truthy, but 42 is also truthy, and even [] and 'false' are truthy! In fact, everything that is not 0, "", null, undefined, NaN or false is truthy.

This means that if you’re testing a function that erroneously returns 'false' (a string) instead of false (a boolean), toBeTruthy() will match and toBeFalsy() will not.

So just like you should never use ==, try to avoid toBeTruthy() and toBeFalsy().

So then what should I use?

Just use the plain old toBe() matcher to check for a value toBe(true) or toBe(false), respectively. Unless you actually want your function to return different types of values; but even then, I’d recommend changing your code: otherwise, everywhere you call that function, you have to check for all possible return types.


This work by Vincent Tunru is licensed under a Creative Commons Attribution 4.0 International License.

Up-and-coming: Reactive programming in Javascript

Previously, I took a look at the current state of Javascript frameworks. Today, I will take a look at what I think will be the next frontier of front-end programming: functional reactive programming.

(Functional) Reactive Programming

Reactive Programming revolves around the use of a data structure representing asynchronous data, called observables or streams. You could compare them to Promises.

A promise returning the value 3

(Diagrams are courtesy of RxMarbles.com. The arrows represent time.)

Roughly speaking, promises represent a value that might not be known when they are first used, but might arrive any moment. Observables are similar, except that they represent the potential arrival of multiple values.

An observable returning the values 1, 2 and 3, in that order

Observables are great for representing related values arriving asynchronously, such as characters entered into an input field.
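Observables need not be magic, either. A minimal hand-rolled version (purely for illustration; real libraries like RxJS add operators, laziness, unsubscription and much more) could look like this:

```javascript
// A minimal observable: subscribers are notified of each value as it "arrives"
function createObservable(producer) {
  return {
    subscribe: observer => producer(observer),
  };
}

// An observable that emits 1, 2 and 3, then completes
const numbers$ = createObservable(observer => {
  [1, 2, 3].forEach(value => observer.next(value));
  observer.complete();
});

const received = [];
numbers$.subscribe({
  next: value => received.push(value),
  complete: () => console.log('received:', received), // received: [ 1, 2, 3 ]
});
```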

When we use functions that take observables as input and produce a new observable as output (i.e. pure functions), we are doing what is commonly referred to as Functional Reactive Programming. There are many Javascript libraries providing commonly used operators, such as RxJS, Bacon.js and Kefir.

These can be simple operators like delay, which returns a new stream that produces the same values as the input stream, only slightly later.

Example of `delay`

Operators might also use the actual values produced, like map, which returns a new stream that produces the values of the input stream after they have been passed through a given function.

Example of `map`

Previous values could also be used, like in scan, which produces an accumulator using a given function (similar to the reduce function you might know already).

Example of `scan`

They might even combine multiple streams, like merge, which returns a stream that produces all the values produced by its input streams.

Example of `merge`
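For intuition, Javascript’s built-in array methods behave like these operators applied to a stream that has already completed (a rough analogy, not real stream code):

```javascript
const values = [1, 2, 3];

// map: pass every value through a function
const mapped = values.map(x => x * 10); // [10, 20, 30]

// scan: like reduce, but emitting every intermediate accumulator
const scanned = [];
values.reduce((acc, x) => {
  const next = acc + x;
  scanned.push(next);
  return next;
}, 0); // scanned is now [1, 3, 6]

// merge: all values of several sources end up in one stream
// (with arrays, concatenation is the closest analogue)
const merged = values.concat([4, 5]); // [1, 2, 3, 4, 5]
```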

Once you get used to them, observables can greatly simplify dealing with asynchronous data. For example, consider a situation where you have two input fields, and you want to check whether both of them have a value. When you have observables field1$ and field2$ (the convention is to denote observables by appending a $), you can use RxJS to create a stream that emits a new boolean every time the form’s validity changes:

const isValid$ =
  Rx.Observable.combineLatest(field1$, field2$)
  .map(fields => fields.every(field => field.length > 0));

(You can see it in action in this fiddle.)

This might be difficult to parse the first time you read it, but once it clicks, working with asynchronous data becomes far easier with observables in our toolbox. Note that the above approach easily scales to however many form fields you have.

An additional benefit is that it steers you away from a whole class of bugs. To illustrate: consider trying to do the above using a more traditional approach. We would likely create an event listener to check the validity of the form when the user enters input:

function checkValidity(){
  return document.getElementById('field1').value.length > 0 &&
         document.getElementById('field2').value.length > 0;
}

We wouldn’t be the first to attach the event listener to the final form field and be done with it. We fill in the form, see the validity update, and conclude that it works. After deployment, however, some users will fill out the form, and then go back to the first field and clear its contents. Surprise: our form will still claim that it’s valid. We forgot that the event handler also had to be attached to the other form field.

Obviously, what with you being the perfect programmer, this wouldn’t happen to you. But by using observables, such errors become really easy to spot. When a less gifted colleague is tasked with adding a form field, there’s only one place it needs to be added: the combined form stream. It will then just work, validation included, as the data flows directly from the form fields into the validation.

There is much more to Functional Reactive Programming than this, and fully wrapping your head around it might take some time. If you’re interested, you might want to try Getting Started with RxJS.

FRP today

If you’re as excited about FRP as I am, you’re in luck: it’s already making a lot of headway.

Consider, for example, the combination of React and Redux I touched upon in my previous post. With Redux, you describe how your application’s state evolves in response to user actions with a function of the form (currentState, action) => newState. You then use React to describe what view that state should result in with a function of the form state => view.

Wait a minute… If you squint hard enough, that looks just like manipulating an observable with the scan() and map() methods!

React/Redux's architecture modelled
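To make the analogy concrete, here is the Redux loop sketched with plain array methods, treating an array of actions as the action stream (the reducer and render functions are illustrative stand-ins):

```javascript
// (currentState, action) => newState
const reducer = (state, action) =>
  action.type === 'ADD_TODO'
    ? { todos: state.todos.concat(action.text) }
    : state;

// state => view
const render = state => `TODOS (${state.todos.length})`;

const actions = [
  { type: 'ADD_TODO', text: 'Write post' },
  { type: 'ADD_TODO', text: 'Publish' },
];

// scan: every intermediate state the app passes through
const states = [];
actions.reduce((state, action) => {
  const next = reducer(state, action);
  states.push(next);
  return next;
}, { todos: [] });

// map: one view per state
const views = states.map(render);
console.log(views); // [ 'TODOS (1)', 'TODOS (2)' ]
```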

This is a very limited reactive system. In essence, all asynchronous input to your application will have to be converted to that one observable for actions. This can get quite awkward for asynchronous input that isn’t exactly a user action, such as HTTP requests. This often leads to workarounds such as emitting multiple actions, e.g. one when a request is sent, and another when the response comes in. Several libraries have arisen that claim to make it easier, but all of them are bound to that single stream.

A Redux alternative called MobX is also gaining ground, providing an implementation of observables to model component state. I haven’t looked into it too deeply yet, but it appears to distinguish itself from Redux mainly by enabling components’ local state to be modelled as observables as well, whereas Redux only supports a single place to hold all your application state.

On the Angular side, the developers have decided to endorse and use RxJS for the major rewrite that they are currently completing. They’re using it for HTTP requests, the planned router is using it as well, and you can pass observables to your view templates for more efficient rendering. That said, user input is still handled through callbacks, and in practice, many Angular apps will likely apply few or no transformations on their observables before passing their values back to regular callback- and promise-based code.

A glimpse of the future: Cycle.js

The first steps toward wider adoption of FRP have been taken, and the trend doesn’t seem to be stopping any time soon. So where will this bring us?

One framework that is worth looking into in this regard is Cycle.js. Cycle applications are conceptually simple: all they do is convert a series of input streams (sources, in Cycle terminology) to a series of output streams (sinks). These input streams are provided by what Cycle dubs drivers, and the output streams are fed back into those drivers. The values emitted to the output streams are descriptions of the side effects you want to happen, and the drivers actually execute them, looping potential results back into the input streams. These drivers are usually small, separate libraries tailor-made to perform a specific side-effect in response to incoming observables, and to deliver any potential outside values through observables as well. A consequence is that the app itself is side-effect free, making it easy to reason about and easy to test.

Let’s illustrate this with a very simple example application.

To be able to show something to the user, we output a description of what we want the DOM to look like onto the DOM driver’s output stream. The DOM driver will then apply those changes to the DOM.

Likewise, the DOM driver also provides input streams that contain user input. These can, in turn, be a trigger to push new DOM descriptions onto the output stream and thus update the view.

User interaction is only one type of side effect though. Another commonly used driver is the HTTP driver. By pushing values onto the output stream to the HTTP driver, apps can send HTTP requests, and the accompanying responses will be pushed onto the corresponding input stream.

The DOM and HTTP drivers are just the tip of the iceberg. There are plenty more drivers imaginable, such as the history driver for the browser’s History API, a storage driver for interacting with localStorage, and more. Of course, you can also write your own.

Wrapping up

As we saw, the combination of observables and functional programming principles can greatly simplify working with the asynchronicity that is so abundant in web applications, and that is still difficult to deal with in the current crop of Javascript frameworks. This style has been getting more and more uptake, with React developers slowly growing familiar with reactive programming, and Angular developers set to have the full power of RxJS available in version 2. Cycle.js shows, however, that observables can be used far more extensively, for all that is asynchronous, in an elegant application architecture.



A short history of Javascript frameworks: a comparison of JQuery, AngularJS and React

Javascript frameworks come and go – by the time you have finished reading this post, three new frameworks will have been released. While it may sometimes look like they’re just introducing more syntax to learn, the ones that actually get popular often introduce new paradigms that allow us to build features more quickly and with fewer bugs. We can learn a lot by taking a step back: where did we come from, and where are we now, in the year 2016? Which problems do the big frameworks solve, and which are left as an exercise for the programmer?

At first, there was JQuery

With an easy-to-learn syntax, JQuery led to the first bits of interactivity being added to our webpages. If you knew CSS, you knew enough to manipulate your webpage on the client side. Making a button click trigger the addition of a list item would typically look something like this:

    $('ol.todos').append('<li>New item!</li>');

No longer were full page refreshes necessary for every small change, and users rejoiced. That said, this style of programming can quickly become unwieldy. As an example, let’s consider that we do not only want to show the new list item to the user, but also to save it somewhere. This requires that we keep an internal model of the list. In turn, whenever the list is modified, both the view and the model will have to be updated:

let todos = [];
$('ol.todos').append('<li>New item!</li>');
todos.push('New item!');

It’s easy to see how quickly this will lead to the introduction of bugs: as soon as the developer forgets to update the view when the model is updated, or vice versa, the user will be looking at incorrect data.

Enter AngularJS

Then, Angular came onto the stage, ushering in the era of actual web applications. It came with strong opinions on how to structure your projects, removing a lot of the error-prone paperwork that we now associate with JQuery. In Angular, your app’s components consist of controllers and templates, with the former responsible for manipulating the state of your model and the latter for rendering it to the user. Thus, our little example now looks like this:

<ol ng-controller="TodoListController as vm" class="todos">
    <li ng-repeat="todo in vm.todos">{{todo}}</li>
</ol>

function TodoListController(){
  let vm = this;

  vm.todos = [];

  vm.addTodo = function(){
    vm.todos.push('New item!');
  };
}

Angular eliminated a whole class of bugs: you can safely use vm.todos, and it will contain exactly the same list as the one the user is looking at. It quickly grew to be extremely popular, and continues to be widely used to this day.

That said, Angular isn’t perfect either. Consider the case where we have two components: a list component, that renders a list of items, and a menu component, that allows the user to navigate through your app.

How can we make sure the menu component knows about the number of list items? We can e.g. use the Angular concept of services: a single place to store your models, accessible by all your controllers. Our list controller might now look something like this:

function TodoListController(TodoListService){
    let vm = this;

    vm.todos = TodoListService.getTodos();

    vm.addTodo = function(){
        vm.todos.push('New item!');
    };
}

However, this looks suspiciously similar to the problem we had with JQuery: we are now responsible for keeping things in sync again, this time the TodoList component’s internal model and the service’s model. This makes us vulnerable to bugs in which one part of the application displays data incongruous with that in other parts.

An alternative approach would be to have your controllers manipulate the service’s model directly. This tight coupling, however, comes with its own set of problems in which changes in one part of your application might have unintended consequences in others.

React to the rescue

While Angular remains a popular choice for web applications, an alternative solution has quickly been embraced by a significant number of developers: the combination of React and its trusty sidekick Redux.

At the core of the style of programming that characterises this combination is the embrace of immutable data. Typically, a React component is merely a function that describes what, given the state of your application, your view should look like:

state => view

To emphasize: the component does not alter your models (state), but merely defines what should be shown to the user (view). When our TODO list component is given an array of four items, its render function will return an <ol> containing four <li>s. When the same list is provided to our menu component, it can return the text TODOS (4).

The advantage of not manipulating your models in your components, is that there is no risk in sharing the models between components. The menu component can safely access the model of the list, since it cannot manipulate it and thus cannot cause inadvertent effects in the list component. When the model is changed, the rendering function is simply called again with the updated model, and React makes sure that the new view will be displayed to the user.
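In sketch form, with plain strings standing in for React’s JSX, the two components from this example are just functions of the model (the names are illustrative):

```javascript
// Both components read the same model, but neither can modify it
const TodoList = todos =>
  `<ol>${todos.map(todo => `<li>${todo}</li>`).join('')}</ol>`;

const Menu = todos => `TODOS (${todos.length})`;

const todos = ['Buy milk', 'Write post', 'Publish', 'Relax'];
console.log(Menu(todos)); // TODOS (4)
console.log(TodoList(todos));
```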

Every component now always has access to an up-to-date version of our models, and so does our view. Redux helps you manage updates to your models. You define the possible actions a user can perform, and a function that describes how those actions affect the state:

(currentState, action) => newState

Again, note that this function uses immutable data: it does not modify currentState or action, but simply returns a new object newState that describes what the state should look like given the previous state and the action that was performed. (Apart from safe state sharing among different components, this also enables other cool features.)
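A sketch of what such a function can look like for our TODO list (the action shape here is illustrative):

```javascript
function todosReducer(currentState, action) {
  if (action.type === 'ADD_TODO') {
    // concat returns a new array; currentState itself is never modified
    return { todos: currentState.todos.concat(action.text) };
  }
  return currentState;
}

const before = { todos: [] };
const after = todosReducer(before, { type: 'ADD_TODO', text: 'New item!' });
console.log(after.todos);  // [ 'New item!' ]
console.log(before.todos); // []: the previous state is untouched
```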

What’s next?

By taking a bird’s-eye view of the history of Javascript frameworks, we’ve seen what each brings to the table, and what problems they solve. Angular largely removed the need to keep your views and models in sync, and React/Redux gives us the ability to safely share our models between our components, while maintaining a proper separation of concerns.

While the idea of learning so many frameworks can be a bit daunting, simply being aware of the new paradigms they bring to the table can go a long way. While React has seen rapid adoption, Angular is still going strong; thus, you can simply stick with Angular if you’re comfortable with it. Nonetheless, the lessons taught by React can be carried over to Angular, and are thus still useful to study.

No framework is perfect, and neither is React. In my next post, I will look at a remaining pain point, and what trend is emerging to deal with it.



TypeScript is just Javascript

André Staltz recently argued that all Javascript libraries should be authored in TypeScript – which turned out to be rather controversial on reddit. I think that a lot of the resistance originates in the expectation that TypeScript is a completely new language that compiles to Javascript, comparable to the likes of CoffeeScript. In this post I’ll try to clear up this misconception.

It’s not a different language?

Of course it does differ from Javascript - not much room for nuance in short and catchy headlines. That said, you could consider the TypeScript compiler to be somewhat like an advanced Javascript linter. You can add it to your build process, and it will tell you when you’re doing something in your code you probably did not mean to do. It differs from a normal linter, however, in that you add instructions to your code that help the compiler help you.

Helping TypeScript help you

Let’s say we’re writing a Javascript utility library that ensures strings are at least of a certain length by padding them with spaces on the left if necessary. Let’s call this utility function leftpad. The code might look something like this:

function leftpad (str, len) {
  str = String(str);

  var i = -1;

  len = len - str.length;

  while (++i < len) {
    str = ' ' + str;
  }

  return str;
}

Typically when publishing a library, you include some documentation on how to use it. In this case, it should mention that the second argument (len) should be a number for this function to work.

But while properly documented code is good, self-documenting code is better. It’s less work to maintain, and cannot get outdated. Since all valid Javascript is also valid TypeScript, the above code already is valid TypeScript. However, we can also extend it as follows:

function leftpad (str, len: number) {
  str = String(str);

  var i = -1;

  len = len - str.length;

  while (++i < len) {
    str = ' ' + str;
  }

  return str;
}

(In case it’s hard to spot the difference: I added :number after the len argument.)

When you feed this code to the TypeScript compiler, it will simply strip away :number. Before doing that, however, it will check the rest of your code, and warn you when you call leftpad with something other than a number for len.

How does TypeScript help your library’s users?

If your library’s users also use TypeScript, your type annotations enable the compiler to warn them when they pass arguments of the wrong type. Note that the user can still write plain Javascript: simply feeding it to the TypeScript compiler will generate the warnings.

Furthermore, if their editor supports reading your library’s type annotations, it can use those to provide better autocompletion features.

Even without requirements on the tools used by your users, TypeScript libraries have an advantage of being able to generate better documentation. In the past, people tried to add type annotations using JSDoc:

/**
 * …
 * @param {number} len
 * …
 */
function leftpad (str, len) {
  // …
}

A significant disadvantage of comments is that they can easily become outdated. With TypeScript, however, your code will fail to compile if the type annotation becomes incorrect. Thus, if you generate your documentation using something like TypeDoc, your documentation will always include up-to-date type information.

So: should all Javascript libraries be authored in TypeScript?

Of course not. André, too, simply needed a catchy headline. However, if you’re writing a Javascript library, already have a build pipeline, and want to provide a great experience to your library’s users, seriously consider adding type annotations in your code. And since TypeScript currently has the largest following, you might as well use that to add them.



Undo/redo actions by composing Redux Reducers (or: how do the Redux DevTools work?)

One of the reasons I created A Grip on Git was that there were some things with which I wanted to play. One of those things was Redux, a library that greatly simplifies state management in your Javascript applications. It helps you to be more explicit about possible changes in your application state by defining all possible state transformations as (pure) functions (referred to as reducers in the Redux documentation).

This has many benefits for you as a developer. One such benefit is that it enables a very cool project called Redux DevTools. DevTools allows you to undo and redo actions you performed earlier on demand, bringing a running app into the state it would have been in had those actions never happened.

A showcase of Redux DevTools.

In A Grip on Git, I wanted to do something similar. As you scroll down the tutorial, Git commands are executed as appropriate to that point in the tutorial. When you scroll back up, however, the visualisation should transform back to the previous state, as if the later commands had never happened. To explain how this works, let’s first look at reducers.

How did reducers work, again?

Reducers are simple functions that take the current state and an action, and return the new state:

(state, action) => newState

To keep your app maintainable, it is often advisable to split up your reducers to only concern themselves with parts of the state. For example, you can have a gitCommands reducer that can only manipulate the repository part of the application state, and a view reducer that can only manipulate the view part of the state:

import gitCommandReducer from 'gitCommandReducer.js';
import viewReducer from 'viewReducer.js';

(state, action) => {
  return {
    repository: gitCommandReducer(state.repository, action),
    view: viewReducer(state.view, action),
  };
};

Since this pattern is so common, Redux includes a utility called combineReducers with which you can remove the boilerplate. We won’t be using that where we’re going, though.
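For the curious, here is a minimal sketch of what combineReducers roughly does under the hood (Redux’s real implementation adds more checks; the leaf reducers below are hypothetical, for illustration only):

```javascript
// Each reducer only ever sees (and produces) its own slice of the state
function combineReducers(reducers) {
  return (state = {}, action) => {
    const nextState = {};
    Object.keys(reducers).forEach(key => {
      nextState[key] = reducers[key](state[key], action);
    });
    return nextState;
  };
}

// Hypothetical leaf reducers
const countReducer = (state = 0, action) =>
  action.type === 'INCREMENT' ? state + 1 : state;
const logReducer = (state = [], action) => state.concat(action.type);

const rootReducer = combineReducers({ count: countReducer, log: logReducer });
console.log(rootReducer(undefined, { type: 'INCREMENT' }));
// { count: 1, log: [ 'INCREMENT' ] }
```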


The reducers demonstrated above stand on equal footing. To enable undoing and redoing, however, we’re going to create a reducer that wraps the other reducers, applying to the application state as a whole. This snapshotReducer will be active in these three cases:

  1. When an action is a Git command, it will keep track of it.
  2. When an action is of type VISIT_SECTION, and that section hasn’t been visited before, it will save a snapshot of the commands resulting in the current state.
  3. When an action is of type VISIT_SECTION, and that section has been visited before, it will restore the state the way it was when that section was first visited. It will do so by replaying all the actions kept track of in step 1, repeatedly applying the gitCommandReducer starting from an empty repository.

The code looks something like this:

// Reducers
import gitCommandReducer from 'gitCommandReducer.js';

// Possible action types
import { GIT_COMMIT, GIT_PUSH } from 'gitCommandActions.js';
import { VISIT_SECTION } from 'viewActions.js';

const snapshotReducer = (state, action) => {
  // Case 1: keep track of Git commands
  if (
    action.type === GIT_COMMIT ||
    action.type === GIT_PUSH
    // etc.
  ) {
    const history = state.history || [];
    return Object.assign({}, state, {
      history: history.concat(action),
    });
  }

  // Case 2: save a snapshot
  if (
    action.type === VISIT_SECTION &&
    !(state.snapshots || {})[action.sectionName]
  ) {
    return Object.assign({}, state, {
      snapshots: Object.assign({}, state.snapshots, {
        [action.sectionName]: state.history,
      }),
    });
  }

  // Case 3: restore the previously seen state
  if (action.type === VISIT_SECTION) {
    return Object.assign({}, state, {
      // This is why they're called reducers:
      repository: state.snapshots[action.sectionName]
                  .reduce(gitCommandReducer, {}),
    });
  }

  return state;
};

Now all that’s left to do is wrapping the state produced by our original reducers:

import gitCommandReducer from 'gitCommandReducer.js';
import viewReducer from 'viewReducer.js';

(state, action) => {
  return {
    repository: gitCommandReducer(state.repository, action),
    view: viewReducer(state.view, action),
  };
};

…with the state as produced by our new snapshotReducer:

import gitCommandReducer from 'gitCommandReducer.js';
import viewReducer from 'viewReducer.js';
import snapshotReducer from 'snapshotReducer.js';

(state, action) => {
  return snapshotReducer(
    {
      repository: gitCommandReducer(state.repository, action),
      view: viewReducer(state.view, action),
    },
    action
  );
};

The bottom line

Redux’s elegant uncoupling of state management has many benefits. By composing our reducers, we can implement undo/redo functionality without needing to alter the original reducers. Likewise, Redux DevTools work on any Redux app regardless of what its reducers do. By wrapping it around your app’s reducers, it can keep track of all actions coming in, and replay them using your reducers when necessary.

This is just one of the many ways Redux can improve your life as a developer, so if you haven’t tried it yet, I highly encourage you to do so.



A Grip on Git: An interactive, visual Git tutorial

Git is one of the most important tools in software development right now. It doesn’t matter what sector your company is active in, what programming language you use or whether you do waterfall, scrum, kanban or what not; the most common denominator is going to be version control. And most of the time, you’ll be using Git to do it.

Unfortunately, knowledge of Git is often limited to memorising a few commands like git add and git commit, sometimes copy-pasting a git clone or a git pull, and every now and then finding a StackOverflow answer that spells out the exact command needed to merge, delete a branch, or perhaps even to rebase. And sure, this will get you by most of the time.

Every now and then, however, it will blow up in your face: you might have gotten stuck in a detached HEAD state, perhaps you thought a force push was a good idea, or you might mess up some other way. Not pleasant. But what’s even worse: you’re likely giving up on Git’s best features! Proper understanding of Git, and the resulting proper workflow, can prevent many programming errors and thus improve the quality of your code. And when problems do slip through the cracks, a clean commit history is excellent documentation for your code, allowing you to more easily spot potential fixes.

Introducing: A grip on Git

Thus, I wrote A grip on Git, a tutorial aimed at those who use Git at a basic level but don’t really know what the commands they copy-paste actually do. It’s a short read (about eleven minutes) that quickly describes a typical Git workflow, accompanied by a visualisation that displays what happens behind the scenes as you read/scroll. My hope is that this will help people get a better grasp of Git: being able to visualise what is happening should help in remembering what commands you need when working with Git.

Of course, my main goal working on this project was to learn a few new things – about which I should be able to blog shortly. Nevertheless, I do hope it will be useful to some, as I sincerely believe Git is a very important tool. So go check it out!



Error: Error while waiting for Protractor to sync with the page: {}

This post is for people running into the following error when running a Protractor test for their Angular apps:

Error: Error while waiting for Protractor to sync with the page: {}

Many search results on the internet tell you that you can solve it by adding browser.ignoreSynchronization = true; to your code, with a remark like:

the key is to browser.sleep(3000) to have each page wait until Protractor is in sync with the project

Unfortunately, this usually is not what you actually want.

How to solve it

Most of the time, the issue will be that you’ve added ng-app to a different element than the <body>, which is where Protractor guesses that your Angular app is located in the page.

You can use a different element, but you will have to tell Protractor which one. You can do so by specifying a CSS selector referring to the element to the rootElement property of your Protractor configuration file. A selector that will work most of the time is

  rootElement: '*[ng-app]',

In other words, this tells Protractor to find Angular on the element with the attribute ng-app.
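In context, the property goes in your Protractor configuration file (the file name and specs path below are illustrative):

```javascript
// protractor.conf.js: a minimal sketch
exports.config = {
  specs: ['e2e/**/*.spec.js'],
  // Tell Protractor which element hosts your Angular app:
  rootElement: '*[ng-app]',
};
```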

Why does ignore synchronization work?

Protractor runs on top of WebDriverJS. WebDriverJS is a Javascript interface that lets you control browsers programmatically, which is useful for e.g. automated tests.

So then… What does Protractor add? The problem in testing Angular apps using WebDriverJS is that Angular has its own event loop separate from the browser’s. This means that when you execute WebDriverJS commands, Angular might still be doing its thing.

One could work around this by telling WebDriverJS to wait for an arbitrary amount of time (i.e. 3000ms in the example above) and hope that Angular has settled down during that time. Of course, that wasn’t pretty. Thus, Protractor was created to synchronize your tests with Angular’s event loop, by deferring running your next command until after Angular has finished processing the previous one.

This is very nice and all, but becomes problematic when you’re testing a website of which some pages are written in Angular, and some pages aren’t. In the latter case, no matter how long Protractor waits, there is no Angular to complete its cycle – in which case it will terminate with the error above.

Thus, for non-Angular pages, you can tell Protractor not to look for Angular by setting browser.ignoreSynchronization = true – which in practical terms will mean that you’re just using WebDriverJS.

So by adding that to your configuration when Protractor cannot find Angular on your page, you’re giving up on all that makes testing Angular apps easier than plain WebDriverJS. And yes, adding browser.sleep after all your commands will likely work, but it’s cumbersome, will break as soon as Angular takes longer than the pause you set, and makes your tests take excessively long.

The bottom line

  • Use the rootElement property in your Protractor configuration.
  • Only use browser.ignoreSynchronization = true when testing a page that does not use Angular.


