
Intro to Decentralized Databases with GUN.js

July 5th, 2017

Distributed Systems and Decentralization

Whether we realize it or not, the software systems we use day to day are amalgamations of many services, distributed across computers that are often in different geographical locations. Most distributed systems rely on a design that centralizes ownership and/or authority in a single organization. Lately, there has been a rise in technologies, such as cryptocurrencies and the blockchain, that implement distributed systems in a decentralized manner; participation is open to anyone who abides by the agreed-upon and implemented protocols.
Databases are an important component of software systems and are often themselves distributed for high availability. In these scenarios, the data is still ultimately under a centralized model: the abstracted database architecture is the authority and maintains the current state of the data. In a decentralized system, by contrast, each participant or node owns its data and shares it with nodes across the network as needed. This style of architecture is referred to as peer-to-peer, or p2p, and is used in file sharing protocols like BitTorrent.

Decentralized Databases with GUN.js

GUN.js is a real-time, decentralized, offline-first graph database. Sure is a mouthful. Knowing a little bit about decentralized architecture, we can understand the other features. Under the hood, GUN synchronizes data seamlessly between all connected nodes by default. Its offline-first capabilities mean that if connectivity to other nodes is lost due to a network error or unavailability, the application stores all changes locally and automatically synchronizes them as soon as a connection is restored. Finally, the flexible data storage model supports tables with relations (as in MSSQL or MySQL), tree-structured documents (MongoDB), or a graph with circular references (Neo4j).

A Working Example

In this article, we are going to build a simple note taking application in React using GUN that will update real time between two different clients. This will help to illustrate how the key features of GUN look in action.

Getting Started

I have a simple React boilerplate project that I use to avoid recreating an empty project each time. I recommend beginners do the same in order to become familiar with the libraries and build tools involved. There are plenty of starter projects and command line tools for generating React projects, but those are best used after you’ve gained a proficient understanding.
Let’s begin by adding the Gun.js library:

$ yarn add gun

The project comes with a server.js file that creates an HTTP server using Express.js, configured to run in both development and production environments. Here we will add a GUN datastore for clients to connect to; for our purposes, the data will persist in a JSON file on our server. I’ve created a directory /db and the file data.json for GUN to write to:

├── LICENSE.MD
├── README.md
├── db
│ └── data.json
├── images
│ └── favicon.ico
├── package.json
├── server.js
├── src
│ ├── components
│ │ ├── App.js
│ │ ├── Auth.js
│ │ ├── Home.js
│ │ └── NoteForm.js
│ ├── index.html
│ └── index.js
├── test
│ └── App.test.js
├── webpack.config.js
└── yarn.lock

In server.js, add the following:

const Gun = require('gun');
...
app.use(Gun.serve);                          // mount GUN's middleware on the Express app
const server = app.listen(port);
Gun({ file: 'db/data.json', web: server }); // persist to a JSON file and attach to the HTTP server

Keep in mind that this setup is not meant for production; refer to the documentation for configuring Amazon S3 or a module that utilizes LevelDB instead.
Earlier I explained how GUN is a distributed database that can have many nodes connecting to each other, so we will add the library to the front-end as well. In the previous step we added the server node for all client nodes to connect to; we’ll pass its URL as configuration so the client stores know what to synchronize with. For now, we’ll keep the database operations in src/components/App.js:

import React, { Component } from 'react';
import Gun from 'gun';
import Home from './Home';

class App extends Component {
  constructor() {
    super();
    this.gun = Gun(location.origin + '/gun');
    window.gun = this.gun; // expose the gun object in the browser console
  }

  render() {
    return <Home gun={this.gun} />;
  }
}

export default App;

To test that this works we’ll run the application. In the package.json file there is a section for scripts that can be run and the start command will run the development server:

$ yarn start

Open up a browser window and navigate to http://localhost:8080 to see the home page. In that window open the developer tools. In the console, we will run some commands to interact with the database to see that it works on the client and that it synchronizes with the server peer node.

var note = {title: 'first item', text: 'from command line'};
gun.put(note);

Inspecting db/data.json in our project, we can see data similar to this:
{
  "graph": {
    "EVz9V7xwmMW2MZBGHkwAntex": {
      "_": {
        "#": "EVz9V7xwmMW2MZBGHkwAntex",
        ">": {
          "title": 1498156296164.74,
          "text": 1498156296164.74
        }
      },
      "title": "first item",
      "text": "from command line"
    }
  }
}

This can be verified in the browser’s localStorage as well by finding a similar key/value pair:

gun/g0ZMK77W4wwVEHuyXzlPdVgc
{
  "_": {
    "#": "EVz9V7xwmMW2MZBGHkwAntex",
    ">": {
      "title": 1498156296164.74,
      "text": 1498156296164.74
    }
  },
  "title": "first item",
  "text": "from command line"
}

So what exactly happened? We’ve just stored a note in the GUN database: the JSON data from the variable note, plus some extra data. Comparing against the original data, we can deduce that “_” is a metadata object created by GUN. The document is assigned a unique id ‘#’, or “soul” in GUN parlance, and another child object ‘>’ containing the timestamp at which each field was last updated.
For good measure, let’s open a new incognito window to the localhost URL and verify that we can access this data. When you inspect localStorage, you will notice that it is empty. This is because this node has not yet retrieved or subscribed to any data. We fetch the record with .get(), chaining the .on() function to subscribe to it:

gun.get('g0ZMK77W4wwVEHuyXzlPdVgc').on(function(data, key){ console.log(data, key); })

And now the same data shows up in this window’s localStorage, just as in our first instance. In the documentation, you will see that there are two methods for reading the data: chaining .on() or .val(). In the example above, we’ve subscribed to the data, which gives us real-time updates to the object; .val() only reads the data at the time of the call, with no future updates. What is important to note here is that if we don’t explicitly make that .get() call in the new node of the application, that node will never learn of that key/value pair.

Data Modeling

Let’s take a step back and consider how to model the data for our sample application. Since we would like to work with a list of notes, we need to consider how that can be stored in GUN. Looking at the documentation we can see that .put() only accepts objects, strings, numbers, booleans, and null. By default, a stored object is named after its soul, the unique id. A deeper read of the .get() call shows that it can be chained with .put() to set the name of the key:

gun.get('key').put({property: 'value'})
gun.get('key').on(function(data, key){
  // {property: 'value'}, 'key'
})

GUN does not use arrays; instead it uses the mathematical concept of a set, in which each element is a unique object. With this understanding, we will store each note at the top level of the database and add a reference to each document in a set called ‘notes’. This way each node only needs knowledge of the notes object in order to subscribe to data synchronization and gain access to the individual notes.
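To make that shape concrete, here is a hypothetical plain-JavaScript sketch of the resulting graph (the souls soulA/soulB and the note contents are made up for illustration): the ‘notes’ set holds references to souls, not the notes themselves.

```javascript
// Hypothetical sketch of the graph shape: 'notes' is a set whose values
// are soul references ({'#': ...}), while the notes themselves live at
// the top level of the graph under their souls.
const graph = {
  notes: {
    soulA: { '#': 'soulA' }, // a reference, not the note itself
    soulB: { '#': 'soulB' }
  },
  soulA: { title: 'first note', text: 'hello' },
  soulB: { title: 'second note', text: 'world' }
};

// A peer that knows only the 'notes' key can follow each reference
// to reach every individual note.
const titles = Object.keys(graph.notes).map(
  soul => graph[graph.notes[soul]['#']].title
);
console.log(titles); // ['first note', 'second note']
```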
In the next step, we’ll create the components to create, list, and view notes. First, in each window clear out the data by calling localStorage.clear(). Stop the running server process and delete the data in db/data.json, then restart the server. The server needs to be stopped because GUN also maintains the data in memory; this adds fault tolerance, as the file can be deleted and will be recreated again.

User Interface

For the UI framework we will go with the React version of Bootstrap:

$ yarn add react-bootstrap

I’ve gone ahead and created another component called NoteForm to keep the Home component from being too cluttered. Though this isn’t the cleanest design practice, it serves to help teach us how to use GUN with React.
NoteForm.js abstracts the UI for the form:

import React, { Component } from 'react';
import { Panel, ButtonToolbar, Button, FormGroup, ControlLabel, FormControl } from 'react-bootstrap';

class NoteForm extends Component {
  componentWillMount() {
    this.resetState = this.resetState.bind(this);
    this.resetState();
  }

  componentWillReceiveProps(nextProps) {
    const { id, title, text } = nextProps.note;
    this.setState({ id, title, text });
  }

  resetState() {
    const { id, title, text } = this.props.note;
    this.setState({ id, title, text });
  }

  onInputChange(event) {
    const obj = {};
    obj[event.target.id] = event.target.value;
    this.setState(obj);
  }

  saveBtnClick() {
    this.props.onSaveClick(this.state);
  }

  render() {
    return (
      <Panel bsStyle="primary">
        <form>
          <FormGroup>
            <ControlLabel>Title</ControlLabel>
            <FormControl
              id="title"
              type="text"
              placeholder="Enter a title"
              value={this.state.title}
              onChange={this.onInputChange.bind(this)}
            />
          </FormGroup>
          <FormGroup>
            <ControlLabel>Note text:</ControlLabel>
            <FormControl
              id="text"
              componentClass="textarea"
              placeholder="..."
              value={this.state.text}
              onChange={this.onInputChange.bind(this)}
            />
          </FormGroup>
          <ButtonToolbar>
            <Button bsStyle="primary" onClick={this.saveBtnClick.bind(this)}>Save</Button>
            <Button onClick={this.resetState}>Cancel</Button>
          </ButtonToolbar>
        </form>
      </Panel>
    );
  }
}

export default NoteForm;

Home.js subscribes to the data and updates it. The list rendering is managed here as well:

import React, { Component } from 'react';
import { Panel, Button, Col, ListGroup, ListGroupItem } from 'react-bootstrap';
import _ from 'lodash';
import NoteForm from './NoteForm';

const newNote = { id: '', title: '', text: '' };

class Home extends Component {
  constructor({ gun }) {
    super();
    this.gun = gun;
    this.notesRef = gun.get('notes');
    this.state = { notes: [], currentId: '' };
  }

  componentWillMount() {
    const notes = this.state.notes;
    const self = this;
    // Subscribe to the 'notes' set; the callback fires initially and
    // whenever a new note reference is added to the set.
    this.gun.get('notes').on((n) => {
      _.reduce(n['_']['>'], function (result, value, key) {
        if (self.state.currentId === '') {
          self.setState({ currentId: key });
        }
        const data = { id: key, date: value };
        // Subscribe to each individual note referenced by the set.
        self.gun.get(key).on((note, key) => {
          const merged = _.merge(data, _.pick(note, ['title', 'text']));
          const index = _.findIndex(notes, (o) => o.id === key);
          if (index >= 0) {
            notes[index] = merged;
          } else {
            notes.push(merged);
          }
          self.setState({ notes });
        });
        return result;
      }, []);
    });
  }

  newNoteBtnClick() {
    this.setState({ currentId: '' });
  }

  itemClick(event) {
    this.setState({ currentId: event.target.id });
  }

  getCurrentNote() {
    const index = _.findIndex(this.state.notes, (o) => o.id === this.state.currentId);
    return this.state.notes[index] || newNote;
  }

  getNoteItem(note) {
    return (
      <ListGroupItem key={note.id} id={note.id} onClick={this.itemClick.bind(this)}>
        {note.title}
      </ListGroupItem>
    );
  }

  onSaveClick(data) {
    const note = _.pick(data, ['title', 'text']);
    if (data.id !== '') {
      // Update an existing note in place.
      this.gun.get(data.id).put(note);
    } else {
      // Store a new note and add its reference to the 'notes' set.
      this.notesRef.set(this.gun.put(note));
    }
  }

  render() {
    return (
      <div>
        <Col xs={4}>
          <Panel defaultExpanded header="Notes">
            <Button bsStyle="primary" block onClick={this.newNoteBtnClick.bind(this)}>New Note</Button>
            <ListGroup fill>{this.state.notes.map(this.getNoteItem.bind(this))}</ListGroup>
          </Panel>
        </Col>
        <Col xs={8}>
          <NoteForm note={this.getCurrentNote()} onSaveClick={this.onSaveClick.bind(this)} />
        </Col>
      </div>
    );
  }
}

export default Home;

The two important functions to observe in Home.js are componentWillMount() and onSaveClick(). When the component mounts, the .on() call subscribes the component to ‘notes’. The callback is triggered once when initially called and then each subsequent time a new note is added. Since notes is a list of references to actual note objects, additions are the only changes that will happen to it. Inside the callback, the _.reduce() call goes through each note reference and creates an individual subscription for each note. The callback inside self.gun.get(key).on((note, key) => { ... }) is triggered whenever that specific note is updated.
onSaveClick() saves a new note or changes to an existing one. When a new note is created, this.gun.put(note) returns a reference to the note, which is then added to the existing set inside ‘notes’.
Open an incognito tab, as we did in the beginning, to see the real-time updates show up in the UI as you add and edit notes.

Conclusion

We’ve created a very simple note-taking application that synchronizes data across all connected peers. From here it would be useful to create a user authentication component and expand the data model to include security and ownership for lists and individual notes. GUN is an extremely powerful yet minimal database system that can be utilized in a wide variety of scenarios for web and mobile. I highly recommend digging deeper into the documentation to learn about more features and the computer science theory behind its design. GUN also has a very active and friendly community on Gitter that you can reach out to. For access to the completed project code, go to this repo.


Where Did Async/Await Come from and Why Use It?

June 14th, 2017

Like any astute JavaScript developer, you’ve been keeping your eye on the onslaught of new language additions that the TC39 Gods have bestowed upon us humble users. The impact of these ranges from fundamentally game-changing constructs like block scoping, arrow functions, and native promises to minor conveniences like the exponentiation operator.

Yet, there have been few proposals that have caused as much simultaneous excitement and confusion as async functions. To those that understand them, they represent the introduction of truly readable asynchronous code to the JavaScript language. To those that don’t – and I counted myself among them not long ago – the previous sentence reads as Klingon and they revert to the comfort of callbacks and/or promises.

The goal of this blog is to present a practical case for async functions. I will set the historical context for the relevance of these new function types, explain their implicit advantages, and give you my take on their place in the ECMAScript landscape. If you just want to learn about async functions, jump to the good stuff. For a more technical look at async/await and its inner workings, check out some of the resources down below. For those that want to take in the full prix fixe menu, let’s dive in.

Asynchronous Swimming

 


Asynchronicity

Hearkening back to the early days of the web, JavaScript was born out of the growing necessity to make web pages that were more than just static displays of text and images. From its origin, JS has had first-class functions, meaning that functions can themselves be passed to other functions like any other object. Functions in JavaScript, after all, are really just objects under the covers. This concept would become crucial in later advancements of the language.

One of these major advances was the introduction of Asynchronous JavaScript and XML, or AJAX, requests. These enabled browsers to make requests to the server without reloading the page, in turn receiving the data back at a later time and using it to update the web page. With this addition, JavaScript evolved into a language that masterfully handled asynchronous operations. Personally, I think we owe this to two important constructs of the JavaScript language:

  • The Event Loop: JavaScript is (somewhat) unique in that it is single-threaded and non-blocking. This means that only one block of code is executed at a time, with asynchronous operations queued, managed, and executed at a later time by the event loop. That is a topic for its own blog post, but in my opinion, Philip Roberts’ event loop talk at JSConf EU 2014 is the holy grail of explainers.

  • Callbacks: Although not unique to JavaScript, these were crucial to working with asynchronous code, and they are where having first-class functions became key in JavaScript.

Let’s take a closer look at callbacks and the evolving manner in which we’ve handled asynchronicity in JavaScript. To do this, we will use the Chuck Norris API to demonstrate how each pattern helps us complete an asynchronous task.

Callbacks

Remember when we said functions were first-class objects in JavaScript? Here is an example of that functionality in the wild:

function conditionalCall(bool, arg, funcA, funcB) {
    return bool ? funcA(arg) : funcB(arg)
}

In this instance, we are passing four arguments to the conditionalCall function. A boolean value, an arbitrary argument, and two functions. Based on the truthiness of the boolean value, either funcA or funcB is called with arg as the input. We are only able to do this based on the fact that conditionalCall can accept functions as arguments just like any other data type.
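As a quick usage sketch (repeating the function so the example is self-contained; the double and negate helpers are made up for illustration):

```javascript
function conditionalCall(bool, arg, funcA, funcB) {
  return bool ? funcA(arg) : funcB(arg)
}

// Two trivial functions to pass in as arguments
const double = x => x * 2;
const negate = x => -x;

console.log(conditionalCall(true, 5, double, negate));  // 10
console.log(conditionalCall(false, 5, double, negate)); // -5
```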

Building on this pattern, callbacks were conceived as an elegant way of handling asynchronous operations. Functions that contain asynchronous behavior can leverage first-class functions by taking a callback as an argument, invoking it upon completion (or error) of their asynchronous operation. Using our Chuck Norris API and callbacks, it would look something like this:

const request = require('request')
request('https://api.chucknorris.io/jokes/random', (err, res, body) => {
    if (err) {
        console.error(err)
    } else {
        console.log(JSON.parse(body).value)
    }
    console.log('RESPONSE RECEIVED')
})
 
console.log('REQUEST SENT')

Here we fire off an AJAX request to chucknorris.io, passing in the callback as the second argument to the request function. This callback function is only invoked when a response has been received. If you note the logged output, the synchronous code is executed well before the callback’s function block.

This pattern was immensely useful in providing a way to interact with functions like request that operated asynchronously. As its usage evolved, however, weaknesses of the pattern came to the forefront. The following is a non-exhaustive list of some of these shortcomings.

  1. Callback Hell: The callback pattern is nice, but what happens when you have to make subsequent asynchronous calls that rely on the previous async response? You end up with a clunky pyramid of a codebase that is not only hard to parse but just plain ugly. Or in other words, it’s callbacks all the way down.
    firstFunc(1, (err, res1) => {
        secondFunc(res1.value, (err, res2) => {
            thirdFunc(res2.value, (err, res3) => {
                console.log(`Answer: ${res3.value}`)
            })
        })
    })


Welcome to callback hell

  2. Error Handling: Callback best practices say to denote an error in an async operation with an error variable as the first parameter of the callback. The caller should first check this parameter to see if something went wrong, proceeding as normal only if it is null. Although this works, it departs from the normal try...catch error handling mechanism and generally makes code unnecessarily verbose.
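As a concrete sketch of the error-first convention, here is a hypothetical function (synchronous under the hood, purely for illustration):

```javascript
// Error-first convention: the callback's first parameter carries the
// error (or null), and the result follows it.
function parseJsonAsync(text, callback) {
  try {
    callback(null, JSON.parse(text)); // success: err is null
  } catch (err) {
    callback(err);                    // failure: err comes first
  }
}

parseJsonAsync('{"ok": true}', (err, data) => {
  if (err) return console.error(err);
  console.log(data.ok); // true
});

parseJsonAsync('not json', (err) => {
  if (err) console.log('something went wrong'); // this branch runs
});
```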

In summation, callbacks were instrumental in JavaScript but introduced syntactical madness. Enter the next stage of the async revolution: the Promise.

Promises

Promises are a topic in their own right and have their own origin story. They took quite a while to make their way through the ECMAScript proposal stages, which led to implementations in third-party libraries like bluebird.js well before they were native to the language. In order to remain focused, this section will simply cover using (and not creating) native ES6 promises to handle asynchronous functions.

You can think of a promise as an object that is always in one of three states: Pending, Resolved, or Rejected. A promise exposes two methods, then and catch, used respectively to handle results and errors. Using this knowledge, let’s walk through how this works:

  1. A promise is invoked, causing all of its synchronous code to be run
  2. Based on the success of its contained asynchronous operation, it is either resolved or rejected
     – Resolved: The then method is invoked, passing the result in as the argument
     – Rejected: The catch method is invoked, passing the error in as the argument
  3. These results can be chained to handle subsequent async requests in an orderly manner.
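A minimal hand-rolled promise makes those states concrete (delayedValue is a made-up helper, not part of any library):

```javascript
// Pending until the timer fires, then either resolved or rejected.
function delayedValue(value, shouldFail) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      shouldFail ? reject(new Error('rejected!')) : resolve(value);
    }, 10);
  });
}

delayedValue(42, false)
  .then(v => console.log('resolved with', v))  // resolved with 42
  .catch(err => console.log(err.message));

delayedValue(42, true)
  .then(v => console.log('resolved with', v))
  .catch(err => console.log(err.message));     // rejected!
```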

Here is how our Chuck Norris joke-producing code would look with promises, this time using axios to make the HTTP request:

const axios = require('axios')
axios('https://api.chucknorris.io/jokes/random')
    .then(res => console.log(res.data.value))
    .catch(err => console.log(err))
    .then(() => console.log('RESPONSE RECEIVED'))
 
console.log('REQUEST SENT')

This code should demonstrate that we’ve solved a few of our callback issues. First, error handling is done much more elegantly, as we now have an explicit control flow for handling an error case. It is not perfect, however, as we are still unable to use our beloved try...catch statement. Perhaps even more important, one might imagine how this solves what we’ve affectionately come to know as callback hell. Let’s take our example from before and reimplement it using promises to demonstrate the improvement:

firstPromise(1)
    .then(res1 => secondPromise(res1.value))
    .then(res2 => thirdPromise(res2.value))
    .then(res3 => console.log(`Answer: ${res3.value}`))

Not only can we use promises to chain sequential code together, promises returned within a resolved promise’s then method can themselves be resolved by a subsequent then method. Easy peasy, right?
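To see the chain actually run, here are hypothetical stand-ins for the three promise-returning functions (their behavior is invented purely so the example is executable):

```javascript
// Made-up implementations so the chain is runnable end to end.
const firstPromise  = n => Promise.resolve({ value: n + 1 });
const secondPromise = n => Promise.resolve({ value: n * 10 });
const thirdPromise  = n => Promise.resolve({ value: n - 2 });

firstPromise(1)
  .then(res1 => secondPromise(res1.value)) // res1.value === 2
  .then(res2 => thirdPromise(res2.value))  // res2.value === 20
  .then(res3 => console.log(`Answer: ${res3.value}`)) // Answer: 18
```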

And then...

Sort of. Once you wrap your head around this pattern and use it in practice, you start to create a lot of boilerplate code simply to enable sequential, asynchronous operations.

Yea verily, we finally have a solution to all this madness: Async functions.

Async Functions

Async functions have come at a time when native promises have become widely adopted by developers. They do not seek to replace promises, but instead improve the language-level model for writing asynchronous code. If promises were our savior from logistical nightmares, the async/await pattern solves our syntactical woes.

One last time, let’s see what our Chuck Norris example looks like with async functions:

const axios = require('axios');
const getJoke = async () => {
  try {
    const res = await axios('https://api.chucknorris.io/jokes/random')
    console.log(res.data.value)
  } catch (err) {
    console.log(err)
  }
  console.log('RESPONSE RECEIVED')
}
 
getJoke()
console.log('REQUEST SENT')

By simply wrapping our code in an async-style function, we can utilize asynchronous operations in a naturally synchronous manner. Also, we’ve finally been able to reincorporate our normal JavaScript error handling flow!

Once more, let’s return to our complex example of handling sequential async calls:

(async () => {
    const res1 = await firstPromise(1)
    const res2 = await secondPromise(res1.value)
    const res3 = await thirdPromise(res2.value)
    console.log(`Answer: ${res3.value}`)
})()

Although we have what looks at first glance like a simple series of assignments, we actually have three sequential asynchronous operations, the latter two relying on the previous one’s response. This new syntax is extremely useful for many use cases, but it does not come without its potential pitfalls. We’ll explore these in the final section, but first, let’s check out all of our async/await plunders!

Why should I use the Async/Await Pattern?

Hopefully, the main benefit of async functions is clear, but there are a few more gains to be had from their usage. Let’s walk through the main ones.

Synchronous-Looking Code

Async functions take the promises that many of us have come to know and love and give us a synchronous-looking manner in which to use them. When used effectively it creates cleaner code which, in turn, leads to more maintainable code. In the rapidly evolving JS landscape, this notion is evermore important.

This is particularly useful when leveraging sequential operations that rely on intermediate results. Let’s use a more relevant (if not contrived) example to demonstrate this point.

getUser('/api/users/123')
    .then(user => {
        getUserPassport(`/api/passports/${user.passportId}`)
            .then(passport => runBackgroundCheck(user, passport))
            .then(pass => console.log('check passed:', pass))
    })

In the above code, we leverage promises to asynchronously retrieve a user, subsequently retrieving their passport information, as well. Only then can we run their background check using the previous two results as arguments to runBackgroundCheck. Due to scoping constraints, this prevents us from simply chaining the function calls and forces us into a similar pattern to callback hell. Sure, we could create temp variables, or do some trickery with Promise.all to avoid this, but those are really just band-aids on a lesion. What we really want is a way to store all of our results in the same scope, which async functions allow.

(async () => {
    const user = await getUser('/api/users/123')
    const passport = await getUserPassport(`/api/passports/${user.passportId}`)
    const pass = await runBackgroundCheck(user, passport)
    console.log('check passed:', pass)
})()

Much better!

Promises All the Way Down


Turtles all the way down

In addition to leveraging promises in their composition, async functions return a promise as well. This allows us to do a few neat things:

  1. We can chain off of an async function…
  2. Which allows us to mix async functions and promises…
  3. So that we can refactor existing promise-based functions as async functions, without the need to change how that function was utilized.
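A tiny demonstration of point 1: calling an async function yields a promise you can chain from, even when it returns a plain value (fortyTwo is a made-up example function):

```javascript
// Async functions always wrap their return value in a promise.
async function fortyTwo() {
  return 42;
}

const result = fortyTwo();
console.log(result instanceof Promise); // true
result.then(v => console.log(v));       // 42
```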

Let’s reintroduce the background check example to support this claim:

const axios = require('axios');
function runBackgroundCheck(user, passport) {
    return axios(`https://us.gov/background?ssn=${user.ssn}&pid=${passport.number}`)
        .then(res => res.data.result)
}

If we were to refactor this promise-based function using an async function, it would look something like this:

const axios = require('axios');
async function runBackgroundCheck(user, passport) {
    const res = await axios(`https://us.gov/background?ssn=${user.ssn}&pid=${passport.number}`)
    return res.data.result
}

In my opinion, this makes the return value of the function much more obvious. This example is trivial, of course, but hopefully, this concept makes you think about potential code refactoring gains that this pattern allows.

Proper Error Handling

One of the downsides of promises is that they forced us to use a unique convention to handle errors instead of leveraging the traditional try...catch syntax. Async functions give us back the ability to utilize that pattern, while still leveraging promises if we wish.

Using the background check example one more time, let’s handle any errors that may arise during execution:

async () => {
    try {
        const user = await getUser('/api/users/123')
        const passport = await getUserPassport(`/api/passports/${user.passportId}`)
        const pass = await runBackgroundCheck(user, passport)
        console.log('check passed:', pass)
    } catch (err) {
        // Handle failure accordingly
    }
}

No matter how those functions (getUser et al.) are implemented, either with promises or async/await, runtime and thrown errors will be caught by the wrapping try...catch block. This is useful as we are no longer required to have a special syntax for rejected promises within an async function.

This pattern also improves error messages and debugging by leveraging the sequential nature of the resultant code. This means that error messages are more reflective of where the error occurred and stepping through code with await statements becomes possible. I won’t go over these improvements in depth, but this post does a nice job explaining why.

Considerations

You might be asking yourself, should I start using this pattern in my JavaScript development today? The truth is, that depends…

Support

Node.js now supports async/await by default, as of Node v7.6. That means that async/await is supported in the current branch, but it will not fall under LTS (currently at v6.x) until Node 8 gets LTS in October 2017.

As far as browsers go, async functions are now supported by all major vendors (sans IE). It must be stated that this support was only added across browsers this year, so you may be limiting your audience by including the pattern in your client code just yet. If you insist on using it, I would recommend working something like Babel’s async-to-generator transform into your transpilation process before you ship the code. Be wary, though, as I have heard the resultant code is quite bulky compared to the source. And no one likes a fat bundle.

If you think those risks are worth the upgrade, then go for it brave warrior!


I like you, but you're crazy

Silent but Deadly Errors

Like promises, errors in async functions go silently into the night if they are not caught. When utilizing this pattern you must be careful to use try...catch blocks where errors are likely to appear. This was one of my key oversights when debugging issues involving promises, and I expect it to be a recurring theme as I continue to use async functions.
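A sketch of the failure mode: the rejection from risky() only surfaces where a try...catch (or a .catch) wraps the await (all function names here are made up):

```javascript
async function risky() {
  throw new Error('boom');
}

// Without try...catch, the rejection propagates silently out of the
// async function as a rejected promise.
async function unsafe() {
  const value = await risky(); // execution never gets past this line
  return value;
}

// With try...catch, the error surfaces where we can handle it.
async function safe() {
  try {
    await risky();
  } catch (err) {
    return 'caught: ' + err.message;
  }
}

safe().then(msg => console.log(msg)); // caught: boom
unsafe().catch(() => {});             // swallow so the demo itself doesn't crash
```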

Sequential Code Trip-Ups

Although async functions give your code the appearance of synchronicity, you want to avoid actual synchronous (i.e. blocking) behavior where possible. Unfortunately, it is easy for async functions to lull you into this behavior by mistake. Take the following example:

async () => {
    const res1 = await firstPromise()
    const res2 = await secondPromise()
    console.log(res1 + res2)
}

At first glance, this seems fine. We are making two asynchronous calls and using the results of both to compute our logged output. However, if we run through the code, you’ll notice that we are blocking the function’s execution until the first promise returns. This is inefficient as there is no reason these calls can’t be made in parallel. To solve this issue, we just need to get creative and reach into our Promise toolbelt:

async () => {
    const [res1, res2] = await Promise.all([firstPromise(), secondPromise()]);
    console.log(res1 + res2)
}

By using Promise.all, we are able to regain concurrency while continuing to leverage our new async/await pattern. Blocking be gone!

TL;DR

This was a long one. In short:

  • Async/await improves code readability
  • Async/await gives us synchronous-like syntax for asynchronous behavior
  • Async/await can be used with and in place of promises
  • Async/await enables try...catch error handling for asynchronous operations
  • Async/await is supported by Node.js and all major browser vendors
  • Async/await officially arrives in the ES2017 language spec


Creating Network Diagrams With Vis.js

May 26th, 2017

Network diagrams are a staple in the data visualization world when you want to show how one thing relates to another. They are indispensable in the world of data modeling where you need to show class relationships. They are also used extensively in the realm of semantics and linked open data as they can also show how data is clustered thematically.
There are a number of network diagram visualization tools, including those that are part of the d3.js suite. But perhaps the easiest toolset to get up and running with is the vis.js package from visJS.org. The vis.js library makes use of the Canvas API and is optimized for both speed of rendering and level of interactivity.
The core network library can be downloaded from the visJS.org site along with a default CSS file that controls element styles.

<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/vis/4.19.1/vis.min.css"/>
<script src="https://cdnjs.cloudflare.com/ajax/libs/vis/4.19.1/vis.min.js"></script>

This in turn exposes the base vis object, along with its dependent vis.Network class. Typically, data is supplied via arrays of objects, along with a configuration object (options) that controls the output. The network constructor also takes, as an argument, a div element that can be resized to specify the display area of the network graph itself.

<style type="text/css">
.network {
  display:block;
  width:800px;
  height:800px;
  border:solid;
  background-color:white;
}
</style>
<div class="network">Network Graph</div>

A network at its core consists of two entities – a node, which represents a thing, and an edge or vector, which represents a relationship between two nodes (or things). Nodes are represented as arrays of node objects, edges as arrays of edge objects, with an edge only showing up if it connects two nodes. This means that the most trivial network graph would look something like the following:

<script>
    var nodes = [
       {
         id: "a",
         label: "A"
       },
       {
         id: "b",
         label: "B"
       }
    ];
    var edges = [
       {
         from: "a",
         to: "b"
       }
    ];
    // The constructor expects the nodes and edges bundled into one object.
    var data = {nodes: nodes, edges: edges};
    var options = {};
    var container = document.querySelector('.network');
    var network = new vis.Network(container, data, options);
</script>

This produces two nodes, A and B, and a line connecting them. Out of the box the user can drag both nodes and edges around, and clicking on a node will change its shape (and initiate a click event on the node which can be captured).
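
Since nodes and edges are just plain arrays of objects, they can be generated from simpler data. A sketch (the `toNetworkData` helper is hypothetical, not part of vis.js) that builds vis-style arrays from a list of [from, to] ID pairs:

```javascript
// Build vis.js-style {nodes, edges} data from a list of [from, to] pairs.
function toNetworkData(pairs) {
  const ids = new Set();
  pairs.forEach(([from, to]) => { ids.add(from); ids.add(to); });
  // One node object per distinct id; label is just the uppercased id here.
  const nodes = [...ids].map(id => ({ id, label: id.toUpperCase() }));
  // One edge object per pair.
  const edges = pairs.map(([from, to]) => ({ from, to }));
  return { nodes, edges };
}
```

The returned object can be passed directly as the `data` argument of `new vis.Network(container, data, options)`.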
Graph 1. A simple graph
A slightly more complex graph can incorporate arrows, add labels to edges, and change foreground and background colors and shapes.

<script>
    var nodes = [
       {
         id: "a",
         label: "A",
         shape: "diamond",
         color: {background: "red", border: "maroon"},
         font: {color: "white"}
       },
       {
         id: "b",
         label: "B",
         shape: "ellipse",
         color: {background: "green"},
         font: {color: "white"}
       },
       {
         id: "c",
         label: "C",
         shape: "square",
         color: {background: "blue"},
         font: {color: "white"}
       }
    ];
    var edges = [
       {
         from: "a",
         to: "b",
         label: "1",
         arrows: "to"
       },
       {
         from: "b",
         to: "c",
         label: "2",
         arrows: "to"
       },
       {
         from: "c",
         to: "a",
         label: "3",
         arrows: "to"
       }
    ];
    var data = {nodes: nodes, edges: edges};
    var options = {};
    var container = document.querySelector('.network');
    var network = new vis.Network(container, data, options);
</script>

Graph 2. A directed cyclic graph
In this case, the arrows attribute on an edge can take the values "to", "from", or a combination such as "to, from", determining the direction of the vector, which indicates the location(s) of the arrowhead.
This example also indicates how colors are set: the color property either takes a single color string or an object of the form color:{background:"red",border:"maroon"} which specifies the component colors. In general, the border color is also the color of the corresponding vector away from the object. The font is handled as a separate object, and can be set for both color and size.
The vis.js library provides a number of standard shapes (along with means to create custom shapes). Some of these (like ellipse or box) place the text on the inside of the shape, while others (such as square or star) place it on the outside. The inner shapes adjust to handle text within the shape, and multiline content can be managed by using the '\n' sequence to indicate a line break.
The options object lets the designer specify different groups that a given node or edge can be a part of, which makes it possible to create something analogous to CSS classes, where several common traits are specified at once. This can also be used to set geometry, physics, and layout characteristics that apply globally. These can be seen in Graph 3.

    var nodes = [
      {
         id: "source1:sales-row1", 
         label: "<Sales>\nRow 1", 
         group: "source1"
      },
      {
         id:"source1:sales-row1-revenue",
         shape:"box",
         label:"$20,125,776.00",
         group:"source1"
      },
      {
         id:"source1:sales-row1-region",
         label:"Northwest",
         group:"source1"
      },
      {
         id:"source1:sales-row1-period",
         label:"2017Q1",
         group:"source1"
      },
      {
         id: "source2:sales-row26", 
         label: "<SalesReport>\nRow 26", 
         group: "source2"
      },
      {
         id:"source2:sales-row26-revenue",
         shape:"box",
         label:"$20,125,784.00",
         group:"source2"
      },
      {
         id:"source2:sales-row26-region",
         label:"NW",
         group:"source2"
      },
      {
         id:"source2:sales-row26-period",
         label:"2017Q1",
         group:"source2"
      },
      {
         id: "source3:sales-row68", 
         label: "<Sales>\nRow 68", 
         group: "source3"
      },
      {
         id:"source3:sales-row68-revenue",
         shape:"box",
         label:"$20,125,794.00",
         group:"source3"
      },
      {
         id:"source3:sales-row68-region",
         label:"Northwest",
         group:"source3"
      },
      {
         id:"source3:sales-row68-period",
         label:"2017Q1",
         group:"source3"
      },
      {
         id:"canon:Sales_Report5",
         label:"<Sales_Report>\nSales Report 5",
         group:"canon"
      },
      {
         id:"canon:Sales_Report5-revenue",
         shape:"box",
         label:"$20,125,785.00",
         group:"canon"
      },
      {
         id:"canon:Sales_Report5-region",
         label:"Northwest",
         group:"canon"
      },
      {
         id:"canon:Sales_Report5-period",
         label:"2017Q1",
         group:"canon"
      }
    ];
    var edges = [
      {
         from: "source1:sales-row1", 
          to: "source1:sales-row1-revenue",
          label:"revenue",
          arrows:"to"
      },
      {
         from: "source1:sales-row1", 
          to: "source1:sales-row1-region",
          label:"region",
          arrows:"to"
      },
      {
         from: "source1:sales-row1", 
          to: "source1:sales-row1-period",
          label:"period",
          arrows:"to"
      },
      {
         from: "source2:sales-row26", 
          to: "source2:sales-row26-revenue",
          label:"netRevenue",
          arrows:"to"
      },
      {
         from: "source2:sales-row26", 
          to: "source2:sales-row26-region",
          label:"salesRegion",
          arrows:"to"
      },
      {
         from: "source2:sales-row26", 
          to: "source2:sales-row26-period",
          label:"period",
          arrows:"to"
      },
      {
         from: "source3:sales-row68", 
          to: "source3:sales-row68-revenue",
          label:"netRevenue",
          arrows:"to"
      },
      {
         from: "source3:sales-row68", 
          to: "source3:sales-row68-region",
          label:"salesRegion",
          arrows:"to"
      },
      {
         from: "source3:sales-row68", 
          to: "source3:sales-row68-period",
          label:"period",
          arrows:"to"
      },
      {
        from: "source1:sales-row1",
        to: "canon:Sales_Report5",
        label: "same as",
        arrows: "to"
      },
      {
        from: "source2:sales-row26",
        to: "canon:Sales_Report5",
        label: "same as",
        arrows: "to"
      }, 
      {
        from: "source3:sales-row68",
        to: "canon:Sales_Report5",
        label: "same as",
        arrows: "to"
      },
      {
         from: "canon:Sales_Report5", 
          to: "canon:Sales_Report5-revenue",
          label:"revenue",
          arrows:"to"
      },
      {
         from: "canon:Sales_Report5", 
          to: "canon:Sales_Report5-region",
          label:"region",
          arrows:"to"
      }, 
      {
         from: "canon:Sales_Report5", 
          to: "canon:Sales_Report5-period",
          label:"period",
          arrows:"to"
      }
    ]
    // create a network
   var container = document.querySelector('.network');
   var data = {
        nodes: nodes,
        edges: edges
    };
     var options = {
        nodes: {
            shape: 'ellipse',
            size: 30,
            mass: 3,
            font: {
                size: 12,
                color: '#ffffff'
            },
            borderWidth: 2
        },
        edges: {
            width: 2
        },
        groups:{"source1":{
          color:{
             background:'red',
             border:'maroon'
                },
          shadow:{enabled:true,
                  color:'rgba(0,0,0,0.5)',
                  x:6,
                  y:6 
                 }
           },
           "source2":{
          color:{background:'blue',
             border:'navy'},
          shadow:{enabled:true,
                  color:'rgba(0,0,0,0.5)',
                  x:6,
                  y:6 
                 }
           },
           "source3":{
          color:{background:'green',
             border:'darkGreen'},
          shadow:{enabled:true,
                  color:'rgba(0,0,0,0.5)',
                  x:6,
                  y:6 
                 }
           }, 
          "canon":{
          color:{background:'gold',
             border:'brown'},
          font:{color:'black',size:14},
           shadow:{enabled:true,
                  color:'rgba(0,0,0,0.5)',
                  x:6,
                  y:6 
                 }
           }     
        }
    }; 
    network = new vis.Network(container, data, options);
    network.on('click',(obj)=>{document.querySelector('.report').innerHTML =obj.nodes[0]})

Graph 3. A Complex, interactive graph
The groups section is an object whose keys are group names, each of which can use most of the properties of a given node. In the case of this graph, each core object has an associated set of displayed properties but belongs to a different group.
This options object also includes the mass property, which sets the inverse-gravity value that determines how much space exists between nodes. The physics and layout options available can be quite sophisticated, including determining how quickly the display settles into its final state once rendered (or modified), as well as whether the positions are based upon physics or a formal hierarchy (left to right, up to down).
In addition to the options, this illustrates how events are managed. There is a global event handler on the network itself which can be used to capture events of a certain type (such as the “click” event). When a resource is clicked on, a summary object is sent to the designated handler:

{
  nodes: [Array of selected nodeIds],
  edges: [Array of selected edgeIds],
  event: [Object] original click event,
  pointer: {
    DOM: {x:pointer_x, y:pointer_y},
    canvas: {x:canvas_x, y:canvas_y}
  }
}

This can then be used to determine both what was clicked on and where. In this case, when a node is clicked, the corresponding id for that node is displayed.
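
The handler logic can be kept testable by reducing it to a pure function that consumes the summary object shown above (the `describeClick` name is hypothetical):

```javascript
// Inspect the summary object vis.js passes to a click handler and
// report what was hit: a node, an edge, or the empty background.
function describeClick(params) {
  if (params.nodes.length > 0) return `node: ${params.nodes[0]}`;
  if (params.edges.length > 0) return `edge: ${params.edges[0]}`;
  return "background";
}
```

Wired in, this would be `network.on('click', params => { report.innerHTML = describeClick(params); })`.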
Network diagrams are powerful tools for visualizing relationships and are absolutely required when dealing with highly referential data, especially in the semantic world. The vis.js library (http://visjs.org) is a fantastic library for building such graphs, and can be configured to handle a wide variety of storytelling and dashboard needs.

See the Pen Network Graph3 by Kurt Cagle (@kurt_cagle) on CodePen.


How WebRTC Has Changed Web Communication

May 11th, 2017

This is a guest post from our friends at XBSoftware.

From time to time, a new web technology that promises a brave new world appears on the horizon. The list of innovations such a technology offers may range from a revolution in communication to a simpler cross-browser development process. The reaction of developers and users ranges from cautious interest to explosive enthusiasm. However, only time can tell whether any broad future awaits another invention of human genius. In this article, we'll take a look at a technology that lets developers and users look at online chatting applications from a new angle: WebRTC.

WebRTC stands for Web Real-Time Communication. This technology, supported by companies such as Google, Mozilla, and Opera, was designed for creating real-time communication apps for browsers, mobile devices, and the Internet of Things (IoT). Its first implementation appeared in 2011. To understand the path traveled since then, you can check the following article, which contains stats collected by Google on the current state of WebRTC: WebRTC: One of 2016's Biggest Technologies No One Has Heard Of. To save you some time, here are some excerpts:

  • Two billion Chrome browsers with WebRTC
  • One billion WebRTC audio/video minutes per week on Chrome
  • One Petabyte of DataChannel traffic per week on Chrome (0.1 percent of all web traffic)
  • 1200 WebRTC-based companies and projects (up from 950 in June)
  • Five billion mobile app downloads that include WebRTC

Everything looks impressive indeed. According to the Google Trends service, WebRTC is particularly popular in countries such as China, South Korea, Israel, and Taiwan. The rapid growth and keen interest in this technology gives cause for reflection. Let's take a look at WebRTC in more detail to better understand its distinctive features.

Real-Time Communications and the Price That We Have to Pay for It

So, what's the main issue that WebRTC helps to solve? The abundance of existing communication protocols has led to a diversity of chatting software. It's always good to have plenty to choose from, but the inability to exchange texts and make video calls between different apps can become a little annoying. You have to be sure that all the participants, whether friends, relatives, or colleagues, have the same communication app as you. And you have to download and install a new version of that app every time a developer changes the communication protocols. The WebRTC technology is the recipe that can save you from this headache.

The primary goal of its developers was to enable real-time voice and video communication without extra plugins and add-ons. All you need is your web browser. You open a web app that works as a calling point and initiate the connection with your interlocutor. The recipient, in turn, only needs access to a website that works as an endpoint and accepts the call. No more downloading, installing, and upgrading third-party plugins, an annoying practice that can nullify the pleasure of communicating.

How WebRTC Works. The Basics

To provide users with rich, high-quality real-time communication apps, WebRTC needs to do the following work:

  • Get access to a media stream (e.g. audio from your mic, or video from a webcam);
  • Gather network information such as ports and IP addresses, and exchange this info with other apps;
  • Use signaling communication for error reporting, as well as for starting and finishing calls;
  • Provide users with the ability to exchange info about video resolution, codecs, etc.;
  • Transfer audio, video, or any other data.

A developer can access WebRTC capabilities through simple Application Programming Interfaces (APIs). If you're curious whether such apps can replace "traditional" solutions such as Skype, you can check this YouTube video that describes the functionality of a WebRTC application.

At first sight, it may seem like WebRTC is a toy for geeks that cannot be used for creating business apps. But we can assure you that WebRTC developers are extremely serious about security. Depending on the data type, WebRTC applications use the SRTP (Secure Real-time Transport Protocol) protocol for media streams and the DTLS (Datagram Transport Layer Security) protocol for other kinds of data. When you call someone and send a request, SRTP's job is to guarantee that the media channels are secured with encryption keys; it also confirms the authenticity of messages and protects their integrity. DTLS was built upon the stream-oriented TLS protocol. It provides full encryption with asymmetric cryptography methods, data authentication, and message authentication. These protocols are enabled by default, and all your data will be secured.

All your data can be transmitted via a secured HTTPS connection, and encryption between the peers works by default in all browsers that support WebRTC, so you can be sure that your peer-to-peer connection is safe.

WebRTC Future Perspectives

The most exciting part about WebRTC is the possibility of implementing it in the world of the Internet of Things (IoT). A future where almost every thing around you can be embedded with electronics, software, sensors, actuators, and a network connection is already at your doorstep. Since any surface, such as your kitchen table, can be equipped with a display and sensors and can run a web browser, why not turn it into a chatting device?

Imagine an app that lets you subscribe to the services of a chef who shares cooking advice through your fridge in real time. Looks pretty neat, huh? Such apps can also be helpful for elderly people living alone, or for people with some form of dementia, for whom constant conversation is important but using a smartphone can be complicated. In any case, the possible uses of WebRTC are limited only by the imagination of web developers.

Conclusions

WebRTC is probably not one of those technologies that everybody's talking about. Despite this, a number of distinctive features make it worthy of attention. The relatively low cost of development and the attention to security might be interesting for business. Modern WebRTC development companies can already build secure communication apps whose functionality can replace existing solutions. The average user will be pleased to use a video chat application without installing additional plugins or applications. The future of technology in the form of the Internet of Things will bring us new and unpredictable forms of communication, and WebRTC has every chance to play a leading role in this new world.


What Does ES2017 Bring to JavaScript?

April 28th, 2017

JavaScript is undergoing a massive evolution and increasingly taking on characteristics that make it attractive as a full stack environment.

The ES2017 (ES8) stack is already being implemented in both Node and several modern browsers, addressing many of the more complex issues of web development, including better ways of dealing with asynchronous code, ways of decorating content to make it work better with tools such as React, and ways of dealing efficiently with word-level manipulations used for vector processing.

In this article, I will be exploring a few of these new features, and will use them as springboards for future articles about how such features as @observables play into the mobx and React environments.

## Async and Await

More than almost any other language, JavaScript has struggled with a seemingly simple problem: how do you keep interfaces responsive when you have to fetch (or send) content over a data socket across the web? The first solution was to write polling routines around a setTimeout or setInterval call, returning a flag state that indicated a given transfer was complete. With the advent of AJAX calls in the late 1990s, this functionality was relegated to a specific object, the XMLHttpRequest object, and later subsumed into the jQuery-oriented `ajax()`, `get()` and `post()` functions.

This, in turn, introduced the notion of asynchronous callbacks into JavaScript: passing a function as an argument to another asynchronous function with a predetermined set of parameters. Such callback functions would then be invoked once either the data had finished transferring or an error had occurred (in which case a different function would be passed in for cleaning up the action).

One problem quickly became evident with this approach. The callback functions themselves often needed to push the resulting data to another function, which would require another callback, until the resulting code became hideously deep and complex.

The first solution to this problem was a construct called a promise: a deferred callback object standardized in ES2015. A promise is a wrapper object that holds the callback function(s). When the invoked asynchronous function completed, it would return a resulting object that could then be passed into a new promise, resulting in a promise chain.
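
A small runnable sketch of such a chain (`fetchNumber` is a hypothetical stand-in for an asynchronous data source):

```javascript
// Hypothetical async source that resolves to a number.
function fetchNumber() {
  return Promise.resolve(4);
}

// Each .then returns a value that becomes the input to the next link,
// forming a promise chain.
const chained = fetchNumber()
  .then(n => n * n)
  .then(sq => `result: ${sq}`);
```

Each link runs only after the previous one resolves, which is exactly the sequencing that async/await makes easier to read.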

This construct was better, but could still end up being too verbose, especially when you needed data from multiple sources independently. The `async` keyword, along with `await`, is the ES2017 solution to that particular problem. As a simple example, first create a promise, in this case one that resolves a value after a specified number of seconds:

function resolveAfterInterval(x, t) {
  return new Promise(resolve => {
    setTimeout(() => {
      resolve(x * x);
    }, t * 1000);
  });
}

Here, the function resolveAfterInterval takes two parameters: a value that will be squared, and the time in seconds. The function creates a new promise around a function (internally called `resolve()`) that in turn calls setTimeout. The `resolve` function is itself just a placeholder that returns whatever is passed into it, here the square of the number. The `t` parameter in turn sets the number of seconds before the function returns.

The `await` keyword is applied to a function call or variable to indicate that execution should wait for the completion of the promise before using its result. If await is applied to the variables, then the return statement is evaluated once the last of the variables is known, here at three seconds.

async function distance1(x) {
  var a = resolveAfterInterval(20, 2);
  var b = resolveAfterInterval(30, 3);
  return Math.sqrt(Math.pow(x, 2) + Math.pow(await a, 2) + Math.pow(await b, 2));
}

distance1(10).then((v) => console.log(v))

Note that await serves much the same purpose in an asynchronous function that yield does in a generator (and they use many of the same mechanics under the hood). Here, the output will return only once the longest promise's interval completes, at three seconds.

This is a little different from the situation where await is applied to the functions themselves:

async function distance2(x) {
  var a = await resolveAfterInterval(20, 2);
  var b = await resolveAfterInterval(30, 3);
  return Math.sqrt(Math.pow(x, 2) + Math.pow(a, 2) + Math.pow(b, 2));
}

distance2(10).then((v) => console.log(v))

In this example, the first `await` won’t return until after two seconds, while the second `await` won’t return until three seconds after the first one is satisfied (or five seconds after the code starts). This occurs because the await acts like an asynchronous block – in the second example, the following statement won’t occur until after the initial function’s promise is returned, but in the first example, the return statement executes once the variables have been assigned.

Using await is actually quite valuable in situations where you want an action to occur once all of the data is available from all sources, but not a moment after. Ordinary chaining of promises is almost as bad as synchronous processing (since you're dependent upon one promise completing before the next can start, as the second example shows), but with the `async` and `await` keywords you can reduce this wait to only that of the longest single process.
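
The same parallel behavior can also be written with Promise.all. A sketch, redefining resolveAfterInterval from above with short intervals so it runs quickly:

```javascript
// Same shape as the article's resolveAfterInterval: resolves x*x after t seconds.
function resolveAfterInterval(x, t) {
  return new Promise(resolve => setTimeout(() => resolve(x * x), t * 1000));
}

async function distance3(x) {
  // Both promises are created up front and awaited together, so the
  // total wait is the longest interval, not the sum of the two.
  const [a, b] = await Promise.all([
    resolveAfterInterval(20, 0.01),
    resolveAfterInterval(30, 0.02)
  ]);
  return Math.sqrt(x ** 2 + a ** 2 + b ** 2);
}
```

This is equivalent in timing to the distance1 version, just with the "wait for everything" step made explicit.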

## Exponentiation and the Rest Operator

While on the topic of squaring values, another recent addition is the exponentiation operator "\*\*" (standardized in ES2016). Exponentiation can normally be accomplished via the Math.pow() function, but for complex expressions this notation can be cumbersome. By using the operator, you can replace the expression

var distance = (x)=>Math.sqrt(Math.pow(x,2)+Math.pow(x,2))

with

var distance = (x)=>(x**2 + x**2)**(1/2)

This can both reduce typing and make the code clearer. For instance, the first function in the previous section can be rewritten as:

async function distance1(x) {
  var a = resolveAfterInterval(20, 2);
  var b = resolveAfterInterval(30, 3);
  return (x**2 + (await a)**2 + (await b)**2)**(1/2);
}

distance1(10).then((v) => console.log(v))

It's worth noting that the ** operator is not quite the same as the Math.pow() function; it has its own implementation. This means that if you overload Math.pow() (which might happen if you're trying to extend Math to complex numbers), the ** operator will not also be overloaded. For instance,

Math.pow = (a, b) => a + b;

console.log(Math.pow(3, 2))
// => 5

console.log(3**2)
// => 9

A much richer math package is available at https://mathjs.org. This will let you do things like mathematical operations on vectors and matrices, complex number mathematics, evaluation and solving of algebraic and linear equations, derivative calculus and big number calculations.

While on the topic of operators, one operator added back in ES2015 is worth bringing up as well: the rest operator ("..."). This bit of syntactic magic binds a sequence of parameters to an array of a given name. For example, consider a function that takes a set of items and returns them as an HTML list structure.

function list(listWrapper, itemWrapper, ...items) {
  return `<${listWrapper}>${items.map((item) =>
    `<${itemWrapper}>${
      (typeof item) == "string" ? item :
      Array.isArray(item) ? `${item[0]}` : `${item.label}`
    }</${itemWrapper}>`).join('')}</${listWrapper}>`;
}

document.write(list("ol", "li",
  {label: "Pinterest", link: "https://www.pinterest.com/"},
  ["Facebook", "https://www.facebook.com/"],
  {label: "LinkedIn", link: "https://www.linkedin.com/"},
  "Twitter"))

The list() function takes the name of a list-wrapping element, an element to wrap each item in the list, and then the items themselves, each of which can be a string, an array, or an object, and generates the corresponding output.
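
A smaller, self-contained illustration of the rest operator (the `tag` function is a hypothetical example):

```javascript
// The rest parameter gathers all trailing arguments into a real array,
// so array methods such as reduce work on it directly.
function tag(name, ...values) {
  return `${name}: ${values.reduce((a, b) => a + b, 0)}`;
}
```

Calling `tag("total", 1, 2, 3)` binds `values` to `[1, 2, 3]` and yields the string `total: 6`.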

A CodePen of this can be seen at https://codepen.io/kurt_cagle/pen/bgZXwb

## Decorators

*Decorators are familiar to people who work with Babel, but these (and the associated code constructs that these enable, such as those used by mobx or similar libraries) have yet to make their way into broad implementation natively in most browser engines. As such, you will need to use Babel as a preprocessor for any of the following (or use the ES2015 implementation, discussed below).*

Functions are objects. This single fact opens up an entire world in which functions can be "decorated" in various ways in order to expose certain functionality. A decorator, in this context, is a function that wraps around another function in order to provide information to some other process. Decorators differ from ordinary functions in that they do not ordinarily change the result of the function, but rather invoke some additional action when the function is called, such as adding an entry to a log or indicating that a given parametric class property is observable or not.

The use of such decorators has been around for a while, and is collectively known as aspect-oriented programming (or, sometimes, metaprogramming). They are, however, increasingly showing up in JavaScript, typically at the point where classes and associated methods are defined. Starting with ES2016, transpilers such as Babel used the @ symbol to indicate such a decorator.

A (relatively) simple example of a decorator might be something like a @log decorator, which is used to identify when a given method is called in a class, along with the arguments applied to that method.

class Mat {
  @log
  add(a, b) {
    return a + b;
  }

  @log
  subtract(a, b) {
    return a - b;
  }
}

var m = new Mat();
m.add(2, 3)
m.subtract(2, 3)

The two functions `add` and `subtract` do exactly what you would expect. However, both methods carry the @log decorator, which adds a log entry every time each of these methods is invoked:

> Calling "add" at Thu Dec 22 2016 19:02:16 GMT-0800 (Pacific Standard Time) with [2, 3]
> Result = 5
> Calling "subtract" at Thu Dec 22 2016 19:02:16 GMT-0800 (Pacific Standard Time) with [2, 3]
> Result = -1

The log file gives the name of the method and the parameters being passed, along with the time stamp for when the method was called.

In order to create this particular bit of magic, it's necessary to define the log decorator beforehand. This would likely be loaded in via an import of some sort from an established library module. The @log decorator itself is defined as follows:

function log(target, name, descriptor) {
  var oldValue = descriptor.value;
  descriptor.value = function() {
    console.log(`Calling "${name}" at ${new Date()} with arguments `, arguments);
    var output = oldValue.apply(null, arguments);
    console.log(`Result = `, output);
    return output;
  };
  return descriptor;
}

Here, the log function is passed a target (the object the method is defined on), the name of that method, and a descriptor that provides relevant information about it, including its value, the underlying function. The old value of the descriptor (which is the original function) is temporarily cached in a variable, a log description is sent to the console, and the function is then invoked with the `arguments` metavariable passed in from the original call (arguments is an array-like object that holds the arguments of the initial calling function).
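
Since the @ syntax still needs a transpiler, the same wrapping can be exercised in plain ES2015+ by applying the decorator to a property descriptor by hand, which mirrors what the transpiler emits. A simplified sketch (logging omitted, names hypothetical):

```javascript
// A log-style decorator: wraps the method stored in descriptor.value.
function log(target, name, descriptor) {
  const oldValue = descriptor.value;
  descriptor.value = function () {
    // A full version would console.log name, arguments, and the result here.
    return oldValue.apply(this, arguments);
  };
  return descriptor;
}

class Mat2 {
  add(a, b) { return a + b; }
}

// Manual application: fetch the descriptor, decorate it, redefine the method.
const desc = log(Mat2.prototype, "add",
  Object.getOwnPropertyDescriptor(Mat2.prototype, "add"));
Object.defineProperty(Mat2.prototype, "add", desc);
```

After the `defineProperty` call, `new Mat2().add(2, 3)` runs through the wrapper yet still returns 5, just as the decorated version would.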

Given that you have the function and its associated arguments (and with some work the binding class or prototype) this can not only get information but can also be used to populate other control structures. As an example, certain libraries such as mobx make use of decorators to designate @observable variables. When the value of these change, notifications can be passed back to a reference broker object which will then update items that subscribe to that observable “variable”, without having to write code into the setter/getter directly.

This has incredible utility for React and similar libraries, as these will change the UI only when observed variables change. Indeed, this is where the true power of decorators comes in: the act of invoking methods can be passed on to specialized objects without the original author of those methods needing to know the internal mechanisms involved.

## Object Entries and Values

One JSON design pattern that can be frustrating with current functionality is when you have an object database that you want to use map() and related functions over. For instance, consider the following “database” with associated keys:

var employees = {

emp1:{

  name:"Jennifer Jones",

  department:"Analysis",

  role:"Detective",

  manager:"emp3"

  },

emp2:{

  name:"Tony Stark",

  department:"R&D",

  role:"Project Manager",

  manager:null

  },

emp3:{

  name:"Matthew Murdoch",

  department:"Legal",

  role:"Counsel",

  manager:"emp2"

  },

};

Currently, in order to use the mapping functions `forEach()`, `map()` and `filter()` (which are exceptionally handy for writing code) you have to convert the object map into an array using the `Array.from()` function, or you have to iterate with a `for ([key, value] of ...)` style expression over something iterable. ES2017 introduces the functions Object.values() and Object.entries() to retrieve the object values as a flat array, or to retrieve all the key/value pairs as an array of two-element arrays.
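Here is the basic shape of the two new functions on a small throwaway object:

```javascript
const obj = { a: 1, b: 2, c: 3 };

console.log(Object.values(obj));   // [1, 2, 3]
console.log(Object.entries(obj));  // [["a", 1], ["b", 2], ["c", 3]]

// The pre-ES2017 workaround for the same result:
console.log(Object.keys(obj).map(k => obj[k])); // [1, 2, 3]
```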

For instance, the following shows how you can use the `map()` function with the employees object database given above:

var list = Object.entries(employees).map(([key, value]) =>
  `<li>#${key} ${value.name} works for ${employees[value.manager] ? employees[value.manager].name : 'Company'}</li>`
).join("\n");

document.querySelector("#employees").innerHTML = `<ul>${list}</ul>`;

Here each entry is destructured into a key and value, then passed via an arrow function into a string template that indicates who works for whom. By the way, the expression:

${employees[value.manager] ? employees[value.manager].name : 'Company'}

is a conditional (ternary) expression that tests whether employees[value.manager] resolves to another employee record (when manager holds a key string) or to undefined (when manager is null), in which case a default value (“Company”) is provided.

The output can be seen in the following codePen:

See the Pen Using entries() to read a local data store. by Kurt Cagle (@kurt_cagle) on CodePen.

With these and the `Object.keys()` function, arrays and object maps are now on equal footing for map/reduce type operations.
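As a quick sketch of that equal footing, `filter()` and `reduce()` also work naturally over the entries and values (the employees object below is a trimmed copy of the database above):

```javascript
const employees = {
  emp1: { name: "Jennifer Jones", manager: "emp3" },
  emp2: { name: "Tony Stark", manager: null },
  emp3: { name: "Matthew Murdoch", manager: "emp2" }
};

// filter(): find employees with no manager
const topLevel = Object.entries(employees)
  .filter(([key, value]) => value.manager === null)
  .map(([key, value]) => value.name);
console.log(topLevel); // ["Tony Stark"]

// reduce(): count direct reports per manager key
const reportCounts = Object.values(employees).reduce((counts, emp) => {
  if (emp.manager) counts[emp.manager] = (counts[emp.manager] || 0) + 1;
  return counts;
}, {});
console.log(reportCounts); // { emp3: 1, emp2: 1 }
```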

## Trailing Commas

Veteran programmers might have noticed what looks like a typo in the database given here: there is a comma following the final entry. In strict JSON (and in some older parsers), this would generate an error, because the conventional interpretation is that a null value is now being passed as an entry.

As of ES2017, however, that’s no longer a concern anywhere in the language: trailing commas were already tolerated in array and object literals, and ES2017 extends the same tolerance to function parameter lists and argument lists. This might seem like a trivial change, but one of the most common syntax errors that programmers run into is copying a block of code from one part of the code to another and grabbing the trailing comma along with it. Now, the JavaScript processor simply ignores it.

**Note**: Forgetting a comma within a sequence or having two commas enclosing an empty sequence will still generate an error, however.
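A compact illustration (the parameter-list and argument-list cases are the part standardized in ES2017; array and object literals tolerated trailing commas in earlier engines too):

```javascript
const colors = [
  "red",
  "green",
  "blue",      // trailing comma in an array literal: ignored
];

function sum(a, b,) {   // ES2017: trailing comma in a parameter list
  return a + b;
}

console.log(colors.length); // 3
console.log(sum(2, 3,));    // 5 (trailing comma in the call, too)
```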

## Padding Strings

Part of ES2015 was the introduction of the `repeat()` function, which made it possible to repeat a particular string of characters a given number of times. While this has proven handy, two other string functions also provide significant utility: `String.prototype.padStart()` and `String.prototype.padEnd()`.

These functions are used to add filler characters to the beginning or end of a given string. As an example, suppose that you wanted to have a list of files of the form filename001, filename002, …, filename010, filename011, etc. You would use padStart() to create the indexed items:

for (var index = 0; index <= 20; index++) {
  console.log("filename" + String(index).padStart(3, '0'));
}

The first argument gives the total length of the string after padding is applied, while the second argument gives the padding character, by default a ‘ ‘ space.

A similar approach can be used to create inline comments of the form:

/******************* Here is a comment *******************/

The following extends the String prototype with makeComment, which uses both padStart and padEnd:

String.prototype.makeComment = function (len, char = "*", startChar = "/", endChar = "/") {
  var str = ` ${this.substr(0, len - 2 * char.length - startChar.length - endChar.length)} `;
  return startChar +
    str.padStart(parseInt(len / 2) + parseInt(str.length / 2), char).padEnd(len, char) +
    endChar;
}

To create a comment of 60 characters, the following would be invoked:

"This is a comment".makeComment(60);

with the result being:

/******************** This is a comment *********************/

An example illustrating this can be seen in the following Codepen.


The `padStart()` and `padEnd()` functions are just beginning to make their way into browsers now, but should be available in the most recent versions of node and related Javascript engines. Check out the Codepen example for appropriate polyfills.

## Summary

This article covered most of the recent additions, with two significant exceptions – the pair Object.defineProperties() and Object.getOwnPropertyDescriptors(), and the emergence of shared memory capabilities. Each of these is significant enough in its own right to be covered by a separate article, and won’t be covered here.

ES2017 is the culmination of an upgrade cycle that has significantly changed the flavor of the Javascript language, bringing it more in line with contemporary functional languages such as Haskell or Scala. The language that has emerged is becoming quite powerful and expressive. At the same time, there is a huge amount of innovation occurring in libraries such as MobX and React, and these in turn are becoming grist for strengthening the core language to better reflect those innovations.


The Angular CLI: A Simple Way to Fire up an Angular 2 Project

April 18th, 2017

The Angular CLI is one of the easiest ways to begin a web development project in Angular 2. The CLI automates most of the typical project startup tasks and reduces bugs and delays by doing so. This makes for faster development, better products, and happier clients and users.

Let’s imagine a scenario where this will be useful. It’s Monday morning at 9:30AM. The Widget Company (your client) wants your agency to create a teaser website for their latest app. The client wants a mockup ready by noon and they want it done in Angular. This mockup needs a splash screen displaying their company logo and a brief “coming soon” blurb. Angular is a great tool to spin up pages like this. But the process of setting up all the dependencies and tooling can be a major pain. Let’s consider two different scenarios of how we’d start this project:

Scenario 1 – Manually Generating Everything

You first create a folder on your dev drive (or network share) to house the work for the mockup. You then create the package.json file with its settings and dependencies:

//package.json:
{

  "name": "TheWidgetCompany",

  "description": "Splash preview of new product for The Widget Company",

  "private": true,
  
  "scripts": {
    
    "start": "live-server"
  
  },
  
  "dependencies": {
    
    "@angular/common": "2.0.0",
    
    "@angular/compiler": "2.0.0",
    
    "@angular/core": "2.0.0",
    
    "@angular/forms": "2.0.0",
    
    "@angular/http": "2.0.0",
    
    "@angular/platform-browser": "2.0.0",
    
    "@angular/platform-browser-dynamic": "2.0.0",
    
    "@angular/router": "^3.0.0",
    
    "core-js": "^2.4.0",
    
    "rxjs": "5.0.0-beta.12",
    
    "systemjs": "0.19.37",
    
    "zone.js": "0.6.21"
  
  },
  
  "devDependencies": {
    
    "live-server": "0.8.2",
    
    "typescript": "^2.0.0"
  
  }

}

Then you have to create a tsconfig.json file and a systemjs.config.js file to handle the commonjs module loading.

//tsconfig.json:
{
  "compilerOptions": {
    "target": "ES5",
    "module": "commonjs",
    "experimentalDecorators": true,
    "noImplicitAny": true
  }
}
//systemjs.config.js:
System.config({
  transpiler: 'typescript',
  typescriptOptions: {
    emitDecoratorMetadata: true
  },
  map: {
    '@angular': 'node_modules/@angular',
    'rxjs'    : 'node_modules/rxjs'
  },
  paths: {
    'node_modules/@angular/*': 'node_modules/@angular/*/bundles'
  },
  meta: {
    '@angular/*': {'format': 'cjs'}
  },
  packages: {
    'app'                              : {main: 'main', defaultExtension: 'ts'},
    'rxjs'                             : {main: 'Rx'},
    '@angular/core'                    : {main: 'core.umd.min.js'},
    '@angular/common'                  : {main: 'common.umd.min.js'},
    '@angular/compiler'                : {main: 'compiler.umd.min.js'},
    '@angular/platform-browser'        : {main: 'platform-browser.umd.min.js'},
    '@angular/platform-browser-dynamic': {main: 'platform-browser-dynamic.umd.min.js'} 
  }
});

Running npm install in your terminal will install all necessary packages (assuming you’re running in a NodeJS environment). Then, you’ll use client-provided assets to generate a splash screen and wrap the design in a Bootstrap jumbotron class. Here’s what some of your components might look like.

//app.component.ts:
import {Component} from '@angular/core';

@Component({
    selector: 'app',
    template: `
        <div class="jumbotron">
            <img class="centeredImg" src="img/thewidgetcompany.png">
            <p class="text-center">{{blurb}}<span class="trademark">{{blurbTm}}</span>{{blurb2}}</p>
            <br/>
            <img class="centeredImg" src="img/system_log_out_T.png">
            <p class="text-center">{{product}}<span class="trademark">{{productTm}}</span>{{teaserDate}}</p>
        </div>
    `,
    styles: [`
        .jumbotron {
            background: #FFF;
        }
        .trademark {
            font-size: .83em; 
            vertical-align: super;
        }
        .centeredImg {
            display: block;
            margin-left: auto;
            margin-right: auto;
        }
    `]
})

export class AppComponent {
    name: string;
    blurb: string;
    blurbTm: string;
    blurb2: string;
    product: string;
    productTm: string;
    teaserDate: string;

    constructor() {
        this.name = 'Angular 2';
        this.blurb = 'A brand new addition to the On The Go';
        this.blurbTm = 'TM';
        this.blurb2 = ' suite of available products is coming soon:';
        this.product = 'Profile Control';
        this.productTm = 'TM';
        this.teaserDate = " - Coming Fall 2017";
    }
}

=========================================================================

app.module.ts:
import {NgModule} from '@angular/core';
import {BrowserModule} from '@angular/platform-browser';
import {AppComponent} from './app.component';

@NgModule({
    imports: [BrowserModule],
    declarations: [AppComponent],
    bootstrap: [AppComponent]
})

export class AppModule { }

=========================================================================

main.ts:
import {platformBrowserDynamic} from '@angular/platform-browser-dynamic';
import {AppModule} from './app.module';

platformBrowserDynamic().bootstrapModule(AppModule);

=========================================================================

<!DOCTYPE html>
<html>
    <head>
        <title>The Widget Company</title>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1">
        <link href="css/bootstrap.min.css" rel="stylesheet">
        <script src="node_modules/typescript/lib/typescript.js"></script>
        <script src="node_modules/core-js/client/shim.min.js"></script>
        <script src="node_modules/zone.js/dist/zone.js"></script>
        <script src="node_modules/systemjs/dist/system.src.js"></script>
        <script src="systemjs.config.js"></script>
        <script>
            System.import('app').catch(function(err){ console.error(err); });
        </script>
    </head>
    <body>
        <app>Loading...</app>
    </body>
</html>

You wrap up finessing CSS to center the text and play with spacing, and finish cranking out a mockup in time to make the business lunch. The clients make their suggestions, and work on the rest of their site begins in earnest. But why did it take so long for the mockup to be done? It sure seems like a lot of work setting this up manually.

  • You created the package.json file manually.
You typed in dependencies and other settings native to the package.json file. Any misspellings could cause problems at compile-time. Quite frustrating…
  • You “npm install”ed manually.
    If the package file has bad data (missing dependencies, spelling/syntax errors), this will cause trouble. But even if you typed perfectly, you can still have problems. Case in point: while writing this article (and creating the mockup myself), I actually had to delete and reinstall my node_modules folder due to failures while trying to npm install and compile. I didn’t want to spend much time troubleshooting bugs failing the build, so I grabbed a working package.json file and re-ran npm install. (The errors went away.)
  • You manually created application component templates.
    You will always fill in details on website components (unless you are creating a vanilla website, template, or mockup). But components have lots of areas for developer input, such as: import statements, component decorators, any component-specific styles, directive inclusions, and the actual component class itself (its variables and business logic). If you created all these on your own, all these will be places you’ll check for errors when your app doesn’t build (or generates exceptions while running).

Now let’s entertain a much easier development/setup scenario using the Angular CLI.

Scenario 2 – Using the Angular CLI

  • Type npm install -g @angular/cli if you haven’t already installed the CLI
  • Type ng new [name_of_project_folder] to create a project
  • cd into your project directory and type ng serve to start up your project

In 4 commands, you’ve gone from not having a project space to having a running template set up for your website, accessible on a localhost port. (My local build time was 3 minutes, not counting the time it might take to install the CLI client.) You’ll probably be ready for the lunch meeting by 11AM, if not sooner.

What did you not do?

  • You didn’t spend time manually building package.json or other usual supports
  • You didn’t have to manually start the npm install process for dependencies
  • You didn’t have to manually install a testing system as Karma is included
  • You didn’t have to manually create basic Angular 2 components

As an Angular 2 developer, having the structure already set for a project (and being given the basic components to get started) means I can get on with prototyping. I can get started as fast as I can type those commands and wait for the site to spin up. Not only that, but the CLI also provides tools to generate additional components. If you’re in a real hurry to mock something up and don’t want to think about creating components yourself, try ng generate.

The CLI takes the process of creating a new project and reduces it to an IDE-esque level of simplicity. Detractors may complain that CLI “dumbs down” the prep work required to get a build going, but I would invite them to look at the cost savings involved. Even if only used for mockup projects, the CLI can severely cut down time and costs.


What’s Driving the Need to Learn JavaScript?

March 28th, 2017

The 2017 DevelopIntelligence Developer Learning Survey Report uncovered a number of insights and trends on how, what, and why software developers want to learn and be trained on in 2017. One of the major discoveries that stands out from the report is the enormous demand for training on JavaScript. 55% of survey respondents say they want more training on JavaScript, and 27% want training on ES6, the newest version of JavaScript. Our survey also uncovered a strong training demand for JavaScript frameworks, libraries, and related tooling – e.g., 42% desired Angular 2 training and 38% of developers reported a desire to learn React (a Facebook-driven UI library). What accounts for this robust training demand? Why is everyone either on or jumping on the JS train?

It’s worth considering a bit of history on the language. Brendan Eich invented JavaScript in 10 days, around May 6-15, 1995. The language, originally called Mocha, started as a simple client-side scripting language. It allowed developers to create simple interactivity in the browser. This was when the web was used by tens of millions of people vs. the billions who use it now. The web was originally invented to share simple documents over the wire. Consider what ESPN looked like in 1999:

And here’s what ESPN looks like in 2017:

We ask a lot more of the web than we used to. It needs to be able to do animations, video, social interactivity, eCommerce, and a multitude of other things that we do in our browsers in 2017. JavaScript (and JavaScript developers) had to evolve a lot to keep up with this. Here are a couple of the major milestones of that evolution:

Ajax – 2004

Ajax stands for asynchronous JavaScript and XML. Ajax allows developers to fetch data from a server without doing complete page refreshes. Ajax made things like Gmail and Kayak.com possible. It allowed browsers to feel much more like a desktop application instead of jumping from page to page. They could stay on the same screen but receive new data from a data source somewhere. This allowed for a much smoother and more interactive user experience.

Source: By DanielSHaischt, via Wikimedia Commons

Get JavaScript Training for Teams

jQuery – 2006

jQuery is a cross-platform JavaScript library designed to simplify the client-side scripting of HTML. jQuery made it much simpler for web developers to build applications that worked equally well on all browsers. This was the era when Internet Explorer had annoying quirks that other browsers didn’t. jQuery also made web development easier, in general, as it would give developers shorter/simpler ways of doing common tasks. They could get the same thing done with much less and much more readable code.

Here’s how to do a relatively simple Ajax request in jQuery:

$.getJSON('/my/url', function(data) {
  // use the returned data here
});

Here’s how you would have to write it in plain JavaScript:

var request = new XMLHttpRequest();
request.open('GET', '/my/url', true);

request.onload = function() {
  if (request.status >= 200 && request.status < 400) {
    // Success!
    var data = JSON.parse(request.responseText);
  } else {
    // We reached our target server, but it returned an error
  }
};

request.onerror = function() {
  // There was a connection error of some sort
};

request.send();

The source of this code is a site called You Might Not Need jQuery, which argues that jQuery is often overused. That may be true, but it’s worth considering how many lines of code jQuery saves developers.

MVC Frameworks (Backbone, Angular, etc.) – ~2009-2013

Web applications continued to get larger and more complex. jQuery works well for simple applications but quickly turns into spaghetti code as an app grows. The tooling needed to evolve to accommodate this.

This is where MVC (Model-View-Controller) frameworks like Backbone and Angular came in. MVC was already a popular way to organize code projects in other languages and to adhere to what’s known as the separation of concerns: one area of the code contains the templates that users see, one contains the logic for which functions are called when users take different actions, and another contains the data that ultimately drives the app. This worked, and worked really well, for a variety of web applications.

This short video shows the power of Angular 1. It allows for accomplishing many common tasks with ease:

While Angular 1 shines in smaller, simpler apps, it struggles to perform well and stay maintainable as an app grows. This created the need for a better paradigm.

Component Frameworks (React, Vue) – 2013-Now

Facebook released the React.js UI library in 2013. Facebook was struggling to make existing frameworks/libraries support their complex app, so they rolled out their own. React (and similar libraries) break a web application down into components. Every little widget or separate area of an app/page is a component and, thus, developed independently. React handles the task of keeping the data in sync with the user interface (e.g., the like button shows the correct number of likes).

Component-driven development greatly simplifies the application development and makes it easier to avoid common types of bugs that plagued the MVC frameworks.

Here’s a great clip, from 2013, that explains the React paradigm from a high-level:

No wonder developers want to learn more JavaScript!

As you can see, JavaScript has come a long way. There are a variety of powerful tools for building simple and complex applications. But the tools keep changing and evolving, forcing developers to constantly learn new things. Developers need regular training in multiple formats to keep up with this landscape.

DevelopIntelligence offers expert-led, hands-on courses on all of the latest JavaScript libraries and frameworks. Our Developer Survey Report goes even more in depth on the front-end training landscape, and you can download it here.


Basics to Reading/Writing Cookies with JavaScript

January 17th, 2017

Cookies are relatively small text files that a web browser embeds on a user’s computer. Cookies allow otherwise stateless HTTP communications to emulate state (i.e., memory). Cookies are being replaced by somewhat newer technologies such as local storage and session storage; however, cookies are still widely used by many major websites today. For that reason alone, it’s a good idea to familiarize yourself with how cookies work. Additionally, it’s fun to see how you can use JavaScript to read from and write to your browser’s cookies. In the following tutorial I’ll show you how to do precisely that. Let’s get cookie-ing!

First, let’s look at how to use JavaScript to read cookies: To achieve this, simply write document.cookie in your JS file or in your browser’s JS console. You can output the value to HTML or simply log it to the console. Here’s an example of what you might see if writing cookie data to HTML (Note: Your browser needs to have cookies enabled and values present for the demonstration to work):

See the Pen rjMjgp by Develop Intelligence (@DevelopIntelligenceBoulder) on CodePen.

Yeesh.. what a mess! No worries, we can do some formatting. You’ll notice that there are = and ; interspersed throughout the cookie text. The = denotes key=value pairs while the ; delimits the individual pairs. So in order to clean things up a bit, you might write something like this:

See the Pen zNKZOj by Develop Intelligence (@DevelopIntelligenceBoulder) on CodePen.

Much better!
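For readers who want a stand-alone version, here is a minimal sketch of that cleanup; cookieString stands in for document.cookie so the snippet runs outside a browser:

```javascript
// In the browser this would be: const cookieString = document.cookie;
const cookieString = "name=Alice; theme=dark; session=abc123";

// Split on "; " to get the individual pairs, then on "=" for key and value.
const pairs = cookieString.split("; ").map(pair => {
  const [key, ...rest] = pair.split("=");
  return { key, value: rest.join("=") }; // rest.join handles "=" inside values
});

console.log(pairs);
// [ { key: 'name', value: 'Alice' },
//   { key: 'theme', value: 'dark' },
//   { key: 'session', value: 'abc123' } ]
```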

Now, how to write cookies: To write cookies using JavaScript, simply use the same document.cookie property we looked at before, but this time set it equal to a key=value pair (as a string) using the assignment (=) operator. Like this:

See the Pen LxRWKV by Develop Intelligence (@DevelopIntelligenceBoulder) on CodePen.

See it? In green? I just added my email address as a cookie to your machine. Feel free to drop me a line! :) Of course you can delete it if you want…
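Outside of a CodePen, the write itself is just an assignment of a key=value string. The makeCookie helper below is my own sketch for composing that string (including the optional max-age attribute), not part of the article’s CodePen:

```javascript
// Compose the key=value string that gets assigned to document.cookie.
function makeCookie(key, value, maxAgeSeconds) {
  let cookie = `${encodeURIComponent(key)}=${encodeURIComponent(value)}`;
  if (maxAgeSeconds !== undefined) cookie += `; max-age=${maxAgeSeconds}`;
  return cookie;
}

// In the browser you would then write:
//   document.cookie = makeCookie("user", "alice", 86400);
console.log(makeCookie("user", "alice", 86400)); // "user=alice; max-age=86400"
```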

Moving on, let’s say you want the value of just 1 key=value pair within the cookie. What to do..? Well, you could write a custom function to do something like this:

See the Pen zNKZdm by Develop Intelligence (@DevelopIntelligenceBoulder) on CodePen.

And from there, you can do all kinds of things depending on whether or not a certain key=value pair exists as a cookie on a user’s machine. One of the most common uses for cookies is keeping users logged in to a credentialed website (such as Facebook, Twitter, or YouTube) by placing a cookie on the user’s machine once they’re successfully logged in. The logic goes something like:

    if (key=value exists) → show logged in content, if (key=value doesn’t exist) → prompt user to log in

that kind of thing.
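A hedged sketch of such a custom lookup function (the name getCookie is my own; the CodePen version may differ, and cookieString again stands in for document.cookie):

```javascript
// Return the value for a given cookie key, or null if it isn't set.
function getCookie(cookieString, key) {
  for (const pair of cookieString.split("; ")) {
    const [k, ...rest] = pair.split("=");
    if (k === key) return rest.join("=");
  }
  return null;
}

const cookies = "user=alice; theme=dark";
console.log(getCookie(cookies, "theme"));   // "dark"
console.log(getCookie(cookies, "session")); // null -> prompt the user to log in
```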

And there you have it, a basic introduction to accessing and manipulating cookies using JavaScript. For more detailed information on cookies, you can always check out sites like MDN.

Thanks for reading!


EASY Speech Recognition and Speech Synthesis in JavaScript

December 19th, 2016

As a society, we’ve become increasingly intrigued by the concept of machines that can talk and listen. From fictional AI systems like HAL 9000 in 2001: A Space Odyssey (“I’m sorry, Dave. I’m afraid I can’t do that.”), to Apple’s Siri, and Google’s new Assistant, our culture seems inexorably drawn to the idea of digital beings with ears and a voice. Implementing such sophisticated technology may seem far beyond the grasp of a beginner or even more experienced programmer; however, that assumption couldn’t be further from the truth. Thanks to user friendly APIs found in modern browsers, creating simple speech recognition and speech synthesis programs using JavaScript is actually pretty straightforward. In the following tutorial, I’ll show you how to use JavaScript to access your browser’s speechRecognition and speechSynthesis APIs so that you too can create programs you control with your voice; ones that not only can hear you, but ones that can speak to you as well. Come, let’s have a listen…

Explore JavaScript Courses

To start, it’s typically a good idea to explore which browsers best support the technologies we’re going to be working with. Here’s MDN’s spec sheet for the speechRecognition API: https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition#Browser_compatibility. As you can see, it’s pretty much Chrome leading the way; however, Firefox has some capability as well. The same holds true for the speechSynthesis API: https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesis#Browser_compatibility. Do note that Microsoft’s Edge browser enjoys some speech synthesis capabilities. David Walsh wrote a nice article on setting up the speechSynthesis API for cross-browser functionality; but, for simplicity’s sake and for the remainder of this tutorial, I’m going to assume the use of Chrome. 

So what does the code look like? A simple example of speech recognition code (written in JavaScript) looks like this (NOTE: You’ll probably have to open a new tab/window by clicking the “Edit in JSFiddle” link AND allow the browser access to your computer’s microphone; it should prompt you to do so):

Try it out. Give the browser access to your computer’s mic and then try saying a few different words or phrases. If all goes according to plan, you should see what you say being written to the body of the HTML. If you’re having trouble getting it to work, try:

1) Making sure your computer has a mic and that your browser has access to it.

2) Making sure only one application/tab/window is using the microphone.

3) Making sure to use Chrome as your browser and loading up the JSFiddle example in a new tab/window.

Okay, so the above example simply writes what the computer heard to the body of the HTML. That’s pretty interesting, but now let’s do something a bit fancier; let’s tell the computer to do something for us! In the following example, try opening up the Fiddle and telling the browser to change the background color of the HTML. The way I programmed it, you’ll have to say this exact phrase:

“Change background color to…” and then say any of the many colors recognized by CSS (e.g., “red,” “blue,” “green,” “yellow,” etc.).

Pretty cool, right? And really not all that difficult to pull off!

Now let’s look at giving the program a voice. I’m going to use Chrome’s default voice (yes, it sounds pretty robotic); but once you get the hang of it, feel free to read up on how to get and use different voices. Here we go; let’s see what it’s got to say:

Hear that? This time, the program audibly confirms that it’s changing the color of the background! Fantastic.

To recap all of this…

Speech Recognition

– The browser’s speechRecognition object has start and stop methods you can use to start and stop listening for audio input.

– The speechRecognition object can react to onend and onresult events.

– To get a string/text of what the computer heard, you can handle the onresult event with a function and then reference the event.results[0][0].transcript property.
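The transcript extraction can be sketched as a plain function; the commented lines show how it would be wired up in Chrome (which prefixes the API as webkitSpeechRecognition). This is a sketch under those assumptions, not the JSFiddle code verbatim:

```javascript
// Pull the transcript out of the event shape described above.
function extractTranscript(event) {
  return event.results[0][0].transcript;
}

// In the browser (Chrome), the wiring looks roughly like:
//   const recognition = new webkitSpeechRecognition();
//   recognition.onresult = e => { document.body.textContent = extractTranscript(e); };
//   recognition.start();  // ...and recognition.stop() to stop listening

// A mock event with the same shape, so the helper can be exercised anywhere:
console.log(extractTranscript({ results: [[{ transcript: "hello" }]] })); // "hello"
```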

Speech Synthesis

– The speechSynthesis object has a speak method that you can use to utter new SpeechSynthesisUtterances.

– You can pass a string (or number) value to the SpeechSynthesisUtterance constructor to create words or phrases.

    – pass that whole thing to the speak method and you’ve got a talking computer!

And there you have it. In this tutorial I’ve shown that it’s relatively simple to employ speech recognition and speech synthesis technology in your browser through the use of JavaScript. With this new tool set, my hope is that you’ll explore the wide array of possibilities that now exist. In theory, you can program your browser to execute most if not all of its functions at the sound of your voice. And you can make the browser say pretty much anything you want it to say. As usual, the limit is your imagination; so get out there and say something interesting! Better yet, have your browser say it for you. ;)

Explore JavaScript Courses


Functions as First-Class Objects in JavaScript: Why Does This Matter?

October 28th, 2016

Functions in JavaScript are first-class objects (or “first-class citizens”). Fascinating, but… what does that mean? Why does it matter? Read on and we’ll have a look!

We’ll start with the basics: What does first-class citizenship mean in general? First-class citizenship, within the world of programming, means that a given entity (such as a function) supports all the operational properties inherent to other entities; properties such as being able to be assigned to a variable, passed around as a function argument, returned from a function, etc. Basically, first-class citizenship simply means “being able to do what everyone else can do.”

In JavaScript, functions are objects (hence the designation of first-class object). They inherit from the Object prototype and they can be assigned key: value pairs. These pairs are referred to as properties and can themselves be functions (i.e., methods). And as mentioned, function objects can be assigned to variables, they can be passed around as arguments, and they can even be assigned as the return values of other functions. Demonstrably, functions in JavaScript are first-class objects.
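A quick inline illustration of a function carrying its own properties (the names here are my own, not from the CodePens below):

```javascript
function greet(name) { return `Hello, ${name}!`; }

greet.callCount = 0;             // assign a plain property to the function object
greet.describe = function () {   // even a method, using `this` as the function itself
  return `greet has been called ${this.callCount} times`;
};

console.log(greet instanceof Object); // true: functions inherit from Object
console.log(greet("World"));          // "Hello, World!"
console.log(greet.describe());        // "greet has been called 0 times"
```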

Let’s look at some examples/tests:

Can we assign a function to a variable?

See the Pen JRVgLw by Develop Intelligence (@DevelopIntelligenceBoulder) on CodePen.

Yes we can!

Can we pass a function as an argument to another function?

See the Pen XjQvqY by Develop Intelligence (@DevelopIntelligenceBoulder) on CodePen.

Sure enough!

But can we return a function… from a function?? (Hint: We already did, but… let’s see it again!)

See the Pen LRvwro by Develop Intelligence (@DevelopIntelligenceBoulder) on CodePen.

Yep, piece of cake!

One can get pretty creative with assigning functions to variables and passing them around to other functions from which they can be returned. If you’re not careful (or maybe if you just want to have a bit of fun!), the rabbit hole can get pretty deep, pretty quickly! Consider this… a function can be passed to itself and even returned from itself!
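A minimal sketch of that rabbit hole, with a hypothetical helper of my own naming:

```javascript
// A function that simply returns whatever function it is given...
function boomerang(fn) {
  return fn;
}

// ...can be passed to itself, and returned from itself:
console.log(boomerang(boomerang) === boomerang);            // true
console.log(boomerang(boomerang)(boomerang) === boomerang); // still true
```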

Excellent! But okay… who cares if JavaScript functions are first-class objects? What does it matter?

The beauty of JavaScript functions enjoying first-class citizenship is the flexibility it allows. Functions as first-class objects opens the doors to all kinds of programmatic paradigms and techniques that wouldn’t otherwise be possible. Functional programming is one of the paradigms that first-class functions allow. Additionally, listening for and handling multiple events by passing callback functions is a useful feature within JavaScript and is achieved by passing a function as an argument to the document object’s addEventListener method. The process would not be nearly as elegant if functions were not granted first-class citizenship within the language. Furthermore, the practices of closure and partial-application/currying would not be possible within JavaScript if functions didn’t enjoy the status of first-class.
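As one small example of what first-class functions enable, here is partial application in miniature (a sketch; the names are my own):

```javascript
// multiply() returns a new function that closes over `a`:
function multiply(a) {
  return function (b) { return a * b; };
}

// Partially apply the first argument to get a specialized function:
const double = multiply(2);
console.log(double(21)); // 42
```

Without functions being returnable values, this closure-based specialization would be impossible.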

In summation, with functions being first-class objects within JavaScript, developers are able to do all kinds of interesting things and explore all sorts of programming paradigms that wouldn’t otherwise be possible. It is in part due to this functional first-classness that JavaScript has become the powerful and prolific language that it is today.

Thanks for reading!