The Hero’s Journey for UX Design and Product Adoption

December 4th, 2017

What makes user experience (UX) design successful? Google it. Read books. Enroll in boot camps or formal educational programs. In the end, it all comes down to one word.

Adoption.

You’re not creating a product to sit on the shelf and look pretty. It has a purpose, but it can’t execute on that purpose without someone at the controls. Of course, this means that design should center on the user, correct?

Yes!

And no.

“It’s not the customer’s job to know what they want,” Steve Jobs once said. If the customer doesn’t know what they want, then how are you going to design something that they will use?

You know who else doesn’t know what they want? Heroes. Heroes just want to go about their business doing what they love to do. Then someone comes along and pushes them out of their comfort zones and before they know it, they’re on some epic journey. The hero’s journey.

The Hero’s Journey is a template for almost any story involving a hero who must embark on an epic journey to win an important victory and return transformed as an improved…human? alien? user! Yes, user!

The users who don’t know they need your product must embark on a journey of adoption much like the hero in an epic adventure story, only without all the dragons, orcs, and treacherous terrain. We can use the Hero’s Journey as a checklist for successful UX design.

The structure of what was originally coined as the monomyth has been described in different ways with as many as 17 phases. I bring it down to just five.

In most epic adventure stories, the hero:

  • is called to leave “home,”
  • expresses reluctance but goes anyway,
  • achieves success,
  • returns home, and
  • tells the story.

In most user adoption stories, the user:

  • must leave the “comfort zone,”
  • is reluctant but keeps trying,
  • achieves success,
  • returns to routine,
  • shares the solution.

Leave Their Comfort Zone

Pushing users out of their comfort zones involves enticing them. Your design strategy should find a way to make your product inviting. Consider how your product solves a real problem. The product was created for a reason. What is that reason and does it address some pain point for your target users?

Sharing design efforts with marketing can help reduce any miscommunication and align product design with market messaging. Release plans for target users will also tell you something about what users are expecting. Your focus here is to use whatever you know about the product, the market, and the user to design an inviting experience.

Reluctance

Even if you are successful at inviting users to use your product, they might still be reluctant. The more difficult the product is to use, the more reluctant your users will be.

Making it easy means an interface design that is clean and simple, and it has to work. Failure at any point is a good reason to set the product down and do something else. Try as we might to make things error-free, something will go wrong. Communicate with tech support or customer service to find out how they plan to help users through any issues. In this collaborative effort, you can decide how to build tech support into the product.

Success

You need your users to experience success as soon as possible. How can you design that into the product? What small win can your product offer in the first few minutes of use? How can your design highlight this? What groups should you work with to capture and leverage this quick win to make it more appealing to new users?

Return to Routine

If your design has all the components to get a user this far, then it solves a problem. Now you need them to keep using it. How does it work with what they already use? Is it flexible to work with different brands of the same technology? Is integration simple and flexible? Everything in modern technology is a conversation. Design your product to fit into that conversation.

Share the Solution

If your design gets your users interested in the product and using it every day, the final challenge is to design a way for them to share the solution. Your user experience design isn’t about focusing on one user; ultimately you want the largest user base you can get. How will your users spread the experience from one user to another and from those users to non-users?

First, you have to make it easy to share, but what about non-tech solutions that carry the story? What about the “backstory?” What is the history of the product and its developers? What values surround the use of your product? What is the big “why” behind the reason your product exists? Your product impacts the world at some level. Make that connection obvious to users and make it a story that they want to share.

Closing Thoughts

Successful UX design isn’t about a sleek user interface. UX design has three legs: function, form, and user. A sleek interface (form) is useless if it doesn’t work (function). And great form and function are useless if the user is not interested. Each of these legs is a set of moving parts that must all work smoothly together for the best adoption strategy. A well-oiled design machine involves people who communicate across different functional groups, even if those groups exist in silos. Who needs to be trained? What meetings need to be scheduled? What team-building events need to happen?

Frodo didn’t save Middle Earth all by himself. Just as users are on a journey to work your product into their lives, you too are on a journey to bring people together and build the best solution you can for your users. In a way, you’re not really building a product—you’re building heroes. Are you ready for the journey?

7 Questions to Ask Before Choosing Your Technology Stack

April 7th, 2017

This is a guest post by appendTo founder Mike Hostetler. He is a technologist based in Chicago and works as Entrepreneur in Residence at Table XI.

Choosing the right technology stack is half art and half science. Getting that balance wrong can have a significant impact on your project, so it’s crucial you assess all the possible risks. At Table XI, we thoroughly vet each new technology before we decide whether it’s right for our clients. Here are the seven big questions we ask before adding a new technology to our mix …

What does the talent landscape for the technology look like?

The most common question we get from clients is whether they’re going to be able to easily find another developer who knows the technology stack. We always suggest they start by looking at how common the skills are — a tip is to use sites like Indeed’s Job Trends to see how many job postings list those skills.

Then look at what transferable skills would allow developers to adopt the technology easily. When we started working with React Native, a new JavaScript framework developed by Facebook for building Android and iOS apps, we were able to quickly train three of our developers who had experience with JavaScript, which we wrote about here. Looking for transferable skills can open up the talent pool significantly.

What is the culture around the technology?

Different developers are drawn to different technologies, and each stack has a unique community. .NET developers tend to work in more corporate environments, whereas Node.js developers tend to want cutting-edge workplaces. Developers joke that .NET conferences all start at 8:00 a.m., but JavaScript conferences typically don’t start until 10:00 a.m. so everyone can recover from going out the night before. Understanding the culture around each technology will help you choose one with developers who fit at your company.

Who’s backing the technology, and why?

One of the best ways to determine a technology’s risk is looking at who’s responsible for managing and developing the technology. Microsoft backs .NET, Apple backs Swift and Facebook backs React. And then there’s the best-case scenario, technologies like Node.js that are backed by nonprofit organizations.

A corporate backer isn’t necessarily a bad thing, just make sure it’s in the backer’s best interest to do what you want. Because Microsoft makes money selling Visual Studio to .NET developers, it’s likely to keep .NET an attractive option so it can sell more tools. Then there’s Facebook. It doesn’t sell tools around React Native, so if React Native stops being valuable for recruiting developers, Facebook could easily cut its support. Minimize risk by choosing a technology with either a nonprofit backer or a corporation that has a financial stake in the technology’s success.

Whatever you do, make sure your technology stack is open-source. With a closed-source platform you can’t audit the code, patch it yourself, or take it with you if the vendor loses interest – which is why closed-source platforms just aren’t taken seriously.

How mature is the developer tooling?

Think of technology the way you think of the iPhone. It took about three versions to really get it right. Software stacks work the same way. The earlier a technology is in its lifecycle, the less reliable it’s going to be. Ruby on Rails has gone through several years of iterations, so it’s low-risk. Rust, however, might be right for certain tasks, but it’s still maturing. Thoughtworks’ Technology Radar is a great tool for exploring where different technologies are on this curve.

It’s not just about making sure the technology is built to last. Mature technologies also have a full ecosystem of tools. Continuous integration, code analysis, bug tracking, all these tools make developing easier — and they only exist for technology stacks with a full market of developers.

The proliferation of those tools indicates how safe a bet the technology is. .NET is the Cadillac of development languages. It has leather seats, all the options, everything. Then there’s Node.js, which is a bare-bones Jeep — it doesn’t even come with doors. One’s not inherently better than the other, it’s all about the experience that best suits your business.

How easy is it to build and share solutions?

Maturity can also mean a great suite of third-party packages, or community-generated code that handles certain tasks. This makes development quicker, because programmers can find a ready-made solution. With almost 300,000 third-party packages in the NPM ecosystem, Node.js has the biggest package ecosystem of any technology, because it’s so easy to build and share things. It makes Node.js more likely to last, since developers will be drawn to a technology that has a full suite of easy-to-use solutions.

What are the maintenance needs?

Every technology requires basic hygiene. When you’re thinking about the costs, don’t think just about the build, but all the costs that will go into managing a solution over time. Technologies like WordPress do a lot of work to make updating easy. Others, not so much, especially when a significant amount of customization has been done.

Find out how the backer of the technology handles security issues and how often it pushes updates to get a sense of the resources it’ll take to keep things up-to-date. The last thing you want is an outdated version that leaves you vulnerable to security threats, so pick a technology with a maintenance process your business can handle.

What are the technology dependencies?

Most technologies build upon one another. Take Ruby on Rails. Rails is a framework that relies on Ruby, making it a secondary technology. To know the risk of Rails, you have to also know the risk of Ruby. And primary technologies like Ruby have dependencies too. You want to make sure each link in the chain is strong, and that links can be replaced should something go wrong.

The Heartbleed bug is a perfect example of how one weak link can take everything down. It was caused by a flaw in OpenSSL – which happened to be one of the most widely used cryptographic libraries. From the moment the bug was introduced, every technology that relied on OpenSSL was vulnerable.

To choose your technology stack wisely, ask these seven questions not just of your main technology, but of every technology it depends on, until you’re sure you have a chain that can support your business.

CSS: Simple Sticky Footer

September 22nd, 2016

The sticky footer…

The oh-so-sought-after expanse at the bottom of the page that contains contact information, site navigation, a call to action, or whatever else you might want to chuck in there. It’s the element that knows its place in the world (wide web) and embraces it by staying put. A well-executed sticky footer encourages your site’s visitors to further engage, interacting with your page’s content in a familiar and enjoyable fashion.

Fortunately, creating a sticky footer isn’t really all that difficult. In the following tutorial, I’ll show you a couple of quick and simple methods for making a slick-looking sticky footer, one that plays well with the modern web… one that deftly displays valuable information across varying screens and multiple devices. Best of all, I’ll show you how to create your footer in a relatively simple fashion in which you won’t need to mess with unnecessary libraries, fancy plugins, or less-than-desirable “hacks.”

Step 1: Behavior and Positioning

First things first, you’ll want to decide how your sticky footer will be displayed; i.e., how it will behave on your page. For the purposes of this tutorial, I’ll assume there are generally two choices with respect to your footer’s behavior: 1) your footer can stick to the bottom of the body of your page, changing position according to the body’s height, or 2) it can stick to the bottom of your browser’s window, effectively rendering the footer as an omnipresent entity on the page. Both are relatively easy to achieve, and each has specific advantages and disadvantages compared to the other. Ultimately, the choice is yours.

Here’s how to stick the footer to the bottom of the body:

See the Pen EgvKKR by Develop Intelligence (@DevelopIntelligenceBoulder) on CodePen.

Things to note: 1) the body has relative positioning 2) the sticky footer has absolute positioning and a defined width 3) the bottom property for the sticky footer is set to 0.
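The embedded pen isn’t reproduced here, but the essential CSS looks roughly like this (the class name is my own, for illustration):

```css
body {
  position: relative;   /* establishes the positioning context for the footer */
  min-height: 100vh;    /* keeps short pages from leaving the footer mid-screen */
}

.sticky-footer {
  position: absolute;   /* positioned relative to the body */
  bottom: 0;            /* pinned to the bottom edge of the body */
  width: 100%;          /* a defined width */
}
```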

Here’s how to stick the footer to the bottom of your browser window:

See the Pen QKrNEd by Develop Intelligence (@DevelopIntelligenceBoulder) on CodePen.

Things to note: 1) the body still has relative positioning 2) the sticky footer now has fixed positioning and a defined width 3) the bottom property for the sticky footer is still set to 0.
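Again as a rough sketch of what such a pen contains (class name and padding value invented for illustration):

```css
body {
  position: relative;
  padding-bottom: 80px; /* keeps page content from hiding behind the fixed footer */
}

.sticky-footer {
  position: fixed;      /* positioned relative to the browser window */
  bottom: 0;            /* still pinned to the bottom */
  width: 100%;          /* still a defined width */
}
```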

With both approaches, using relative units (i.e., percentage or viewport units) will allow for your footer to respond to various screen widths.

Like I said, pretty simple! It really doesn’t take much more than that to create a pure-CSS, simple sticky footer. But of course, we want more… Let’s take a look at getting some content in there.

Step 2: Add Some Content

So you’ve got your footer… stuck to the bottom of your page or your browser’s window… Let’s flesh it out some more with some useful information. For the purposes of this demo, I’m going to go with a mock site navigation block, a contact information block, and a call to action button.

Here’s the populated sticky footer:

See the Pen pEAyEw by Develop Intelligence (@DevelopIntelligenceBoulder) on CodePen.

Things to note here: 1) the sticky footer’s text-align property is set to center 2) div elements within the footer with display:inline-block are being used as containers 3) relative units and min- and max-widths are being used.

Aligning the text to the center and giving the containers a display of inline-block just creates a nice alignment for the content. The min and max widths further control spacing and wrap behavior; the relative units assist here as well. Next, let’s look at classing things up a bit.
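In CSS terms, the layout described above amounts to something like this (selectors and values are illustrative, not necessarily those used in the pen):

```css
.sticky-footer {
  text-align: center;      /* centers the inline-block containers */
}

.sticky-footer div {
  display: inline-block;   /* containers sit side by side and wrap as needed */
  vertical-align: top;
  width: 30%;              /* relative unit responds to various screen widths */
  min-width: 150px;        /* min- and max-widths control spacing and wrap behavior */
  max-width: 400px;
  text-align: left;
}
```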

Step 3: Add Some Effects

Things are looking good! You’ve got a well populated sticky footer with information balanced throughout. It’s easy to read and easy to access and it looks good across multiple browsers and multiple devices. Great! But let’s say we really want to draw in some visitors. What can we do? We can add some fancy schmancy effects to our footer, that’s what we can do! :) Let’s take a look at a few relatively simple effects that you can add to your footer in order to really draw attention to it.

If you want a relatively modest approach, there are shadow effects.

Shadow:

See the Pen bwApBw by Develop Intelligence (@DevelopIntelligenceBoulder) on CodePen.

A more modern approach may be to let the background shine through.

Transparency:

See the Pen RGkaKb by Develop Intelligence (@DevelopIntelligenceBoulder) on CodePen.

How about some interactivity?

Hover Transition:

See the Pen Xjadpq by Develop Intelligence (@DevelopIntelligenceBoulder) on CodePen.

Let’s really get their attention!

Animation:

See the Pen ozZxkO by Develop Intelligence (@DevelopIntelligenceBoulder) on CodePen.

And just to take it back to a world wide web of yesteryear…

Animated Background:

See the Pen jrLqmw by Develop Intelligence (@DevelopIntelligenceBoulder) on CodePen.
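The pens above aren’t reproduced here, but effects along these lines take only a few declarations each (class names and values are my own, for illustration):

```css
/* Shadow */
.footer-shadow {
  box-shadow: 0 -4px 8px rgba(0, 0, 0, 0.3);
}

/* Transparency: let the background shine through */
.footer-transparent {
  background-color: rgba(40, 40, 40, 0.6);
}

/* Hover transition */
.footer-hover {
  transition: background-color 0.4s ease;
}
.footer-hover:hover {
  background-color: #444;
}

/* Animation: slide the footer in on page load */
@keyframes slide-up {
  from { transform: translateY(100%); }
  to   { transform: translateY(0); }
}
.footer-animated {
  animation: slide-up 0.8s ease-out;
}
```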

With effects, your own creativity is the limit!

Conclusion

In this tutorial, I showed you how to make a relatively simple sticky footer without needing to rely on any external libraries or unsavory methods. I showed you how to stick your footer to the bottom of your page or to the bottom of your browser’s window. I showed you how to get basic responsiveness going and how to fill your footer with content. We then looked at some neat effects that hopefully gave you some inspiration of your own. That concludes this tutorial on creating a simple sticky footer. Thanks for sticking with it!

Learn Smart, Not Hard: Applying Learning Research to Learning Programming

August 4th, 2016

Learning programming is a challenge. It’s difficult whether you’re just starting or simply picking up a new language or framework. What is the best way to take on this challenge? With such a variety of learning tools, techniques, and methods to use, which ones are the best?

This article will summarize a study on learning and studying techniques and give ideas for how to apply them to your programming studies.

The Study in Question

In 2013, a half dozen education researchers and cognitive scientists from different universities co-authored a paper called Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology. As of August 2016, it’s been cited 472 times by other papers (according to Google Scholar).

This ‘meta-study’ explored several hundred education and psychology studies in order to review the effectiveness of common learning techniques. The learning techniques included things like highlighting, re-reading, self-explanation, and seven other common techniques that people use when learning (see page 3 of the study for the full list). The authors’ objective was to determine which learning techniques had the most scientific support. The authors found that 2 techniques have a ‘high utility’ across multiple learning contexts and 3 techniques have a ‘moderate utility’:

Results

High Utility – Most Helpful

  • Distributed practice
  • Quizzing (Practice Testing)

Moderate Utility – Fairly Helpful

  • Interleaving (Interleaved practice)
  • Self-explanation
  • Elaborative interrogation


The High Utility Learning Techniques

Learning Technique #1: Distributed practice

Distributed practice is the learning technique of spacing out study sessions over a longer period of time (months/weeks vs. days). All things equal, people retain information better when they’ve been learning it for a longer duration (calendar time not # of hours). Put simply, cramming doesn’t work as well as starting early.

Apply this to learning programming: If there’s a library or language you’ve been wanting to pick up for a while, start early. You’re better off spacing out the exposure to the new technology vs. trying to learn it all at once. If you’re just getting into programming, you’re better off doing little bits over a longer period of time vs. putting it off to cram and catch up later.

Learning Technique #2: Quizzing (Practice Testing)

Quizzing is the learning technique of testing one’s knowledge regularly. Quizzes tell you how much you know or don’t. This is obvious to anyone who has gone to school. It’s much more intriguing to know that getting the answers wrong on a quiz changes how you retain and retrieve that knowledge. To reiterate, a failed “retrieval attempt” changes the way the knowledge is stored in your brain. Researchers are currently not sure why this is. It may be that the retrieval process forces people to dig deep into their memory and scan through related information to find a piece of knowledge. This scanning through existing adjacent knowledge (aka ‘elaborative retrieval processes’) links your existing knowledge to the new piece of knowledge. Trying to remember something (even if you fail to) changes the way that memory is stored in your brain (see page 30 of the study if you want to explore this more).

Applying this to learning programming: Write code! Write code even if you fail at it. Try out that new method or library that you’ve been reading about. Coding is the equivalent of quizzing yourself on the knowledge. Don’t just passively consume articles or docs. Test if you can use it.  You’re better off reading something 5 times and testing your knowledge on it 5 times, than spending the same amount of time reading it 10 times. Testing your knowledge changes your grasp of that knowledge.


Moderate Utility Learning Techniques

Learning Technique #3: Interleaving

Interleaving is the learning technique of studying multiple things in a study session vs. studying one thing at a time. Multiple studies have shown that people retain more when the study session has a mix of subjects vs. rote drilling on one thing at a time. This effect is also observable in learning physical athletic skills.

Imagine that the letters in the following sentence represent different subjects. Studying ‘ABCBCACAB’ is better than studying ‘AAABBBCCC’.

How to apply this to learning programming: Mix it up when you’re studying. Mixed practice will feel harder than drilling on one thing, but you’ll retain more.

Learning Technique #4: Elaborative interrogation

Elaborative interrogation is the learning technique of thoroughly explaining why something is how it is. Exploring the why helps you learn things better. The following question prompts can be helpful for practicing elaborative interrogation:

“Why does it make sense that…?”

“Why is this true?”

“Why?” (beware of the existential rabbit hole with this one)

Answering these questions when learning something new has been shown to cause better retrieval. The current theory is that elaborative interrogation links new knowledge with existing knowledge.

How to apply this to learning programming:

When learning something new, ask yourself ‘why’. Why does that exist the way it does? To borrow from the language of the study, ask yourself “Why would this fact be true of this [X] and not some other [X]?” Exploring and understanding why something is allows you to retrieve it more easily than simply knowing what it is.

Learning Technique #5: Self-explanation

Self-explanation is a learning technique that works much like elaborative interrogation. Self-explanation is the process of “explaining how new information is related to known information or explaining steps taken during problem solving”. Helpful prompts for self-explanation include:

“What is the main idea of _____ ?”

“How does _____ relate to _____ ?”

“What conclusions can I draw about _____ ?”

Self-explanation differs from elaborative interrogation in that the focus is not so much on the ‘why’. With self-explanation, the focus is on explaining what the facts mean to you and how they relate to your existing knowledge. However, both of these learning techniques work by connecting new knowledge to existing knowledge.

How to apply this to learning programming: When reading about something new, stop to think through what it means to you. How does the new thing, idea, or concept relate to what you already know? How does something from one language or framework relate to that of another? Leverage your existing knowledge when learning a new technology.

Caveats and Conclusion

This piece is meant to give you a quick review of learning techniques to help you study programming more effectively. These techniques are tools that have been tested in numerous experiments and learning contexts. However, they aren’t the only techniques and your mileage may vary with each. Furthermore, the authors of the study only explored 10 commonly used learning techniques. You may know some that work better for you.

To leave you with some easy takeaways, here are the 5 techniques briefly recapped:

Distributed practice – Start early and space out studying sessions.

Quizzing – Test your knowledge often.

Interleaved practice – Study a mix of content vs. drilling on one at a time.

Elaborative interrogation – Ask ‘why’ a new piece of knowledge is how it is.

Self-explanation – Explore how new knowledge is connected to existing knowledge.

History and Background of JavaScript Module Loaders

June 13th, 2016

Application logic for web apps continues to move from the back end to the browser. But as rich client-side JavaScript apps get larger, they encounter challenges similar to those that old-school apps have faced for years: sharing code for reuse, while keeping the architecture separated into concerns, and flexible enough to be easily extended.
One solution to these challenges has been the development of JavaScript modules and module loader systems. This post will focus on comparing and contrasting the JavaScript module loading systems of the last 5-10 years.
It’s a comprehensive subject, since it spans the intersection between development and deployment. Here’s how we’ll cover it:

  1. A description of the problems that prompted module loader development
  2. A quick recap on module definition formats
  3. JavaScript module loader roundup – compare and contrast
    1. Tiny Loaders (curl, LABjs, almond)
    2. RequireJS
    3. Browserify
    4. Webpack
    5. SystemJS
  4. Conclusion

The problems

If you only have a few JavaScript modules, simply loading them via <script> tags in the page can be an excellent solution.

<head>
 <title>Wagon</title>
 <!-- cart requires axle -->
 <script src="connectors/axle.js"></script>
 <script src="frames/cart.js"></script>
 <!-- wagon-wheel depends on abstract-rolling-thing -->
 <script src="rolling-things/abstract-rolling-thing.js"></script>
 <script src="rolling-things/wheels/wagon-wheel.js"></script>
 <!-- our-wagon-init hooks up completed wheels to axle -->
 <script src="vehicles/wagon/our-wagon-init.js"></script>
</head>

However, each <script> tag establishes a new HTTP connection, and for small files – which are a goal of modularity – the time to set up the connection can take significantly longer than transferring the data itself. While the scripts are downloading, no content can be changed on the page (sometimes leading to the Flash Of Unstyled Content). And until IE8/FF3, browsers enforced an arbitrary limit of two simultaneous downloads per host.
The problem of download time can largely be solved by concatenating a group of simple modules into a single file and minifying (aka uglifying) it.

<head>
<title>Wagon</title>
<script src="build/wagon-bundle.js"></script>
</head>

The performance comes at the expense of flexibility though. If your modules have inter-dependency, this lack of flexibility may be a showstopper. Imagine you add a new vehicle type:

<head>
<title>Skateboard</title>
<script src="connectors/axle.js"></script>
<script src="frames/board.js"></script>
<!-- skateboard-wheel and ball-bearing both depend on abstract-rolling-thing -->
<script src="rolling-things/abstract-rolling-thing.js"></script>
<script src="rolling-things/wheels/skateboard-wheel.js"></script>
<!-- but if skateboard-wheel also depends on ball-bearing -->
<!-- then having this script tag here could cause a problem -->
<script src="rolling-things/ball-bearing.js"></script>
<!-- connect wheels to axle and axle to frame -->
<script src="vehicles/skateboard/our-sk8bd-init.js"></script>
</head>

Depending on the design of the initialization function in skateboard-wheel.js, this code could fail because the script tag for ball-bearing wasn’t listed between abstract-rolling-thing and skateboard-wheel. Managing script ordering for mid-size projects got tedious, and in a large enough project (50+ files), it became possible to have a dependency relationship (a circular dependency) for which no possible ordering would satisfy all the dependencies.

Modular programming

Modular programming, which we explored in a previous post, satisfies those management requirements nicely. Don’t head out to celebrate just yet though – while we have a beautifully organized and decoupled codebase, we still have to deliver it to the user.
The stateless and asynchronous environment of the web favors user experience over programmer convenience. For example, users like it when they can start reading a web page before all of its images have finished downloading. But bootstrapping a modular program can’t be so tolerant: a module’s dependencies must be available before it can load. Since http won’t guarantee how long that fetch will take, waiting for dependencies to become available gets tricky.

Javascript Module formats, and their loaders

The Asynchronous Module Definition (AMD) API arrived to solve the asynchronous problem. AMD takes advantage of the fact that JavaScript is processed in two phases: parsing (interpretation), when the code is checked for syntax, and execution, when the interpreted code is run.

At parse time, dependencies are simply declared in an array of strings. The module loader checks whether it has each dependency loaded already, and performs a fetch if not. Only once all the dependencies are available (recursively including all dependencies’ dependencies) does the loader execute the function portion of the payload, passing the now-initialized dependency objects as arguments to the payload function.

// myAMDModule.js
define(['myDependencyStringName', 'jQuery'], function (myDepObj, $) {
  // ...module code...
});

This technique solved the file ordering problem. You could again bundle all your module files into one big fella – in any order – and the loader would sort them all out. But it presented other advantages too: there suddenly became a network effect to using publicly hosted versions of common libraries like jQuery and Bootstrap. For example, if the user already had Google’s CDN version of jQuery on their machine, then the loader wouldn’t have to fetch it at all.
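That registration/execution split is what makes any-order bundling possible. As a toy sketch – not a real AMD loader, with invented module names – the core idea looks like this:

```javascript
// Toy AMD-style registry: an illustration of the two-phase idea --
// register at parse time, execute factories only once deps are resolved.
const registry = {};   // name -> { deps, factory }
const cache = {};      // name -> initialized module object

// "define" records a module's dependencies without executing its factory.
function define(name, deps, factory) {
  registry[name] = { deps, factory };
}

// "load" recursively initializes dependencies, then runs the factory.
function load(name) {
  if (cache[name]) return cache[name];
  const { deps, factory } = registry[name];
  const resolved = deps.map(load);   // depth-first dependency resolution
  cache[name] = factory(...resolved);
  return cache[name];
}

// Modules can be defined in any order; the loader sorts them out.
define('wheel', ['bearing'], (bearing) => ({ spins: bearing.smooth }));
define('bearing', [], () => ({ smooth: true }));

console.log(load('wheel').spins); // true
```

A real loader adds fetching over the network, error handling, and plugin hooks, but the dependency resolution at the heart of it is this simple.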
Ultimately, AMD’s ‘killer app’ wasn’t even a production feature. The best and least expected advantage of using a module loader came from referencing individual module files during development, and then seamlessly transitioning to a single, concatenated and minified file in production.
On the server side, HTTP fetches are rare, as most files already exist on the local machine. The CommonJS (CJS) format uses this assumption to drive a synchronous model. A CJS-compatible loader makes available a function named require(), which can be called from within any ordinary JavaScript to load a module.

// myCommonJSModule.js
var myDepObj = require('myDependencyStringName');
var $ = require('jQuery');
if ($.version <= 1.6) alert('old JQ!');

CJS eliminates the cumbersome boilerplate syntax of AMD’s define() signature. Back-end pros who use languages like Python often find CJS a more familiar pattern than AMD. CJS can also be statically analyzed more easily – for instance, in the example above, an analyzer could infer that the object being returned from require(‘jQuery’) should have a property named ‘version’. IDEs can use this analysis for useful features like refactoring and autocomplete.
Since require() is a blocking function, it causes the Javascript interpreter to pause the current code and switch execution context to require’s target. In the example above, the alert won’t execute until the code from myDependencyStringName.js has loaded and finished.
In the browser, downloading each dependency serially as the file is processed would result in even a small app having unacceptable load times. This doesn’t mean no one can use CJS in the browser though. The trick comes from doing recursive analysis during build time – when the file has to get minified and concatenated anyway, the analyzer can traverse the Abstract Syntax Tree for all the dependencies and ensure everything gets bundled in the final file.
Finally, ES6, the most significant update to Javascript in many years, added built-in support for modules in the form of the new import and export keywords. ES6 modules incorporate many of the lessons learned from both AMD and CJS, but resemble CJS more strongly, especially in regards to loading.
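As a rough sketch of the syntax, here is the earlier AMD/CJS example rewritten as an ES6 module (the file and dependency names are illustrative, carried over from the examples above):

```javascript
// myES6Module.js — ES6 module syntax: dependencies declared up front like AMD,
// but written in a flat, synchronous-looking style closer to CJS
import myDepObj from 'myDependencyStringName';
import $ from 'jQuery';

export function doSomething() {
  // ...module code using myDepObj and $...
}
```

Like CJS, the static import declarations are easy for tools to analyze; like AMD, the loader is free to fetch the dependencies asynchronously before executing the module body.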

Reasons not to use a module loader

These days, modular programming and module loaders have become synonymous with rich web apps. But using modular programming does not necessarily require using a module loader. In my experience, only complicated module interdependence absolutely requires a module loader, yet many projects carry complicated loading infrastructure they just don’t need.
Adding any technology to your stack has a cost: it increases both the number of things that can go wrong and the number of things you need to understand. Many of the benefits of loaders are just that – benefits – and not requirements. Beware the benefits that sound like no-brainers – You Ain’t Gonna Need It – as chasing them is a subtle form of premature optimization.
Try running your project without a loader at first. You’ll have greater control over and insight into your code. If you find you never need one, you’re ahead, and adding one later is not hard.
The same YAGNI logic applies to the features of whatever module loader you choose. I’ve seen many projects use AMD named modules for no benefit whatsoever (and there’s a substantial cost to it as well). KISS.

Tiny loaders

Early on, as AMD emerged as a leading client-side format, the module loading ecosystem exploded to support it. Libraries from this explosion include LAB.js, curl.js, and Almond. Each had a different approach, but they had much in common: they were tiny (1-4kb), and followed the Unix philosophy of doing one thing and doing it well.

The thing they did was to load files, in order, and afterwards call back a provided function. Here’s an example from the LAB.js GitHub repo:

<script src="LAB.js"></script>
<script>
$LAB
  .script("https://remote.tld/jquery.js").wait()
  .script("/local/plugin1.jquery.js")
  .script("/local/plugin2.jquery.js").wait()
  .script("/local/init.js").wait(function(){
    initMyPage();
  });
</script>

In this example, LAB starts fetching jQuery and waits until it has finished executing before loading plugin1 and plugin2, then waits for those to finish before loading init.js. Finally, when init.js finishes, the callback function invokes initMyPage.
All these loaders use the same technical mechanism to fetch content: they write a <script> tag into the page’s DOM with the src attribute filled in dynamically. When the script fires a load (or, in older browsers, onreadystatechange) event, the loader knows the content is ready to execute.
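A stripped-down sketch of that mechanism (browser-only; the function name is illustrative, and the modern load event is used rather than the older onreadystatechange):

```javascript
// Dynamically inject a <script> tag and run a callback once it has executed
function loadScript(src, callback) {
  var script = document.createElement('script');
  script.src = src;          // the browser starts fetching once the tag is attached
  script.onload = callback;  // fires after the fetched script has executed
  document.head.appendChild(script);
}

loadScript('https://remote.tld/jquery.js', function () {
  // jQuery has loaded and executed; it is now safe to use
});
```

Everything a tiny loader adds beyond this – ordering, chaining, callbacks – is bookkeeping layered on top of the same trick.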
LAB and curl aren’t actively maintained anymore, but they were so simple they probably still work in today’s browsers. Almond is still maintained as the minimalistic version of Require.

RequireJS

Require appeared in 2009, a latecomer among the tiny loaders, but went on to gain the greatest traction due to its advanced features.

At its core, Require is not fundamentally different from the tiny loaders. It writes script tags to the DOM, listens for the finishing event, and then recursively loads dependencies from the result. What made Require different was its extensive – some might say baffling – set of configuration options and operating sugar. For example, there are two documented ways to kick off the loading process: either pointing a data-main attribute, on the script tag that loads RequireJS, at an init file…

<script src="tools/require.js" data-main="myAppInit.js"></script>
...or invoking a function named require() in an inline script...
<script src="tools/require.js"></script>
<script>
require(['myAppInit', 'libs/jQuery'], function (myApp, $) { ...
</script>

…but the documentation recommends not using both, without giving a reason. Later, it’s revealed that the reason is that neither data-main nor require() guarantees that require.config will have finished before they execute. At this point, inline require calls are further recommended to be nested inside a configuration call:

<script src="tools/require.js"></script>
<script>
require(['scripts/config'], function() {
  require(['myAppInit', 'libs/jQuery'], function (myApp, $) { ...
});
</script>

Require is a Swiss Army knife of configuration options, but an air of automagical uncertainty hangs over the multitude of ways in which they affect each other. For example, if the baseUrl config option is set, it provides a prefix for the location to search for files. This is sensible, but if no baseUrl is specified, then the default value will be the location of the HTML page that loads require.js – unless you used data-main, in which case that path becomes baseUrl! Maps, shims, paths, and path fallback configs provide more opportunities to solve complex problems while simultaneously introducing unrelated ones.
Worth mentioning is possibly the most “gotcha” of its conventions, the concept of “module ID”. Following a Node convention, Require expects you to leave the ‘.js’ extension off the dependency declaration. If Require sees a module ID that ends in ‘.js’, or starts with a slash or an http protocol, it switches out of module ID mode and treats the string value as a literal path.
If we changed our example above like so:

require(['myAppInit.js', 'libs/jQuery'], function (myApp, $) { ...

Require is almost certain to fail to find myAppInit, unless it happens to be in the directory the baseUrl/data-main algorithm returns. As close to muscle memory as typing the ‘.js’ extension is, this error can be annoying until you get in the habit of avoiding it.
Despite all its idiosyncrasy, the power and flexibility of Require won it wide support, and it’s still one of the most popular loaders on the front end today.

Browserify

Browserify set out to allow use of CommonJS formatted modules in the browser. Consequently, Browserify isn’t as much a module loader as a module bundler: Browserify is entirely a build-time tool, producing a bundle of code which can then be loaded client-side.
Start with a build machine that has node & npm installed, and get the package:
npm install -g --save-dev browserify
Write your modules in CommonJS format, and when happy, issue the command to bundle:
browserify entry-point.js -o bundle-name.js
Browserify recursively finds all dependencies of entry-point and assembles them into a single file:
<script src="bundle-name.js"></script>
Adapted from server-side patterns, Browserify does demand some changes in approach. With AMD, you might minify and concat “core” code, and then allow optional modules to be loaded a la carte. With Browserify, all modules have to be bundled; but specifying an entry point allows bundles to be organized based on related chunks of functionality, which makes sense both for bandwidth concerns and modular programming.
Launched in 2011, Browserify is going strong.

Webpack

Webpack follows Browserify’s lead as a module bundler, but adds enough functionality to replace your build system. Expanding beyond CJS, Webpack supports not only AMD and ES6 formats, but non-script assets such as stylesheets and HTML templates.
Webpack runs on a concept called ‘loaders’, which are plugins registered to handle a file type. For example, a loader can handle ES6 transpilation (Webpack 2.0 handles ES6 natively), or SCSS compilation.
Loaders feed data into a “chunk”, which starts from an entry point – conceptually similar to a Browserify bundle. Once Webpack is set up, chunks are regenerated automatically as assets change. This can be very powerful, as you don’t have to remember to edit chunks.
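As a sketch, a Webpack 1 configuration registering loaders for SCSS might look like the following (the loader package names are assumptions; each would need to be installed separately):

```javascript
// webpack.config.js — minimal sketch using Webpack 1 syntax
module.exports = {
  entry: './entry-point.js',
  output: { filename: 'bundle-name.js' },
  module: {
    loaders: [
      // applied right-to-left: compile SCSS, resolve CSS, inject a <style> tag
      { test: /\.scss$/, loader: 'style!css!sass' }
    ]
  }
};
```

With this in place, a plain require('./styles.scss') from application code is enough for the stylesheet to be compiled and included in the chunk.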
The feature that has everybody really excited is hot module replacement. Once Webpack is in charge of your chunks, while running webpack-dev-server, it knows enough to modify code in the browser as you change the source. While similar to other source watchers, webpack-dev-server doesn’t require a browser reload, so it falls into the category of productivity tools that shave milliseconds off your dev process.
Basic usage is beyond simple. Install Webpack like Browserify:
npm install -g --save-dev webpack
And pass the command an entry point and an output file:
webpack ./entry-point.js bundle-name.js
Even if you limit yourself to Webpack’s impressive set of defaults, that power comes at a cost. On one project, our team had several difficult problems – transpiled ES6 didn’t work after Webpack chunked it, and then SCSS worked locally but failed to compile in the cloud. In addition, Webpack’s loader plugin syntax overloads the argument to require(), so it won’t work outside of Webpack without modification (meaning you won’t be able to share code between client and server side).
Webpack has its sights set on being the next-generation compiler for the web, but maybe wait for the next version.

Google Trends’ Take

[Figure: Google Trends chart comparing search interest in the module loaders discussed above]

SystemJS

Wikipedia defines a polyfill as “additional code which provides facilities that are not built into a web browser”, but the ES6 Module Loader Polyfill which SystemJS extends goes beyond the browser. An excellent example of how agnostic modern Javascript has become about the environment it runs in, the ES6 Module Loader Polyfill can also be used via npm in a Node environment.

SystemJS can be thought of as the browser interface to the ES6 Module Loader Polyfill. Its implementation is similar to RequireJS: include SystemJS on the page via a script tag, set options on a configuration object, and then call System.import() to load modules:

<script src="system.js"></script>
<script>
  // set our baseURL reference path
  System.config({
    baseURL: '/app'
  });
  // loads /app/main.js
  System.import('main.js');
</script>

SystemJS is the recommended loader of Angular 2, so it already has community support. Like Webpack, it supports non-JS file types with loader plugins. Like Require, SystemJS also ships with a simple tool, systemjs-builder, for bundling and optimizing your files.
However, the most powerful component associated with SystemJS is JSPM, or JavaScript Package Manager. Built on top of the ES6 Module Loader Polyfill, and npm, the Node package manager, JSPM promises to make isomorphic Javascript a reality. A full description of JSPM is beyond the scope of this article, but there’s great documentation at jspm.io, and many how-to articles available.
Comparison Table

Loader       | Local module format            | Server files                                      | Server module format         | Loader code
Tiny loaders | Vanilla JS                     | Same structure as local files                     | Same format as local files   | curl('entryPoint.js')
RequireJS    | AMD                            | Concatenated and minified                         | AMD                          | requirejs('entryPoint.js', function (eP) { /* startup code */ });
Browserify   | CommonJS                       | Concatenated and minified                         | CommonJS inside AMD wrapper  | <script src="browserifyBundle.js"></script>
Webpack      | AMD and/or CommonJS (mixed OK) | “Chunked” – concat and minify into feature groups | Webpack proprietary wrapper  | <script src="webpackChunk.js"></script>
SystemJS     | Vanilla, AMD, CommonJS, or ES6 | Same as local                                     | SystemJS proprietary wrapper | System.import('entryPoint.js').then(function (eP) { /* startup code */ });

Conclusion

Today’s plethora of module loaders constitutes an embarrassment of riches compared to just a few years ago. Hopefully this post helped you understand why module loaders exist and how the major ones differ.
When choosing a module loader for your next project, be careful of falling prey to analysis paralysis. Try the simplest possible solution first: there’s nothing wrong with skipping a loader entirely and sticking with plain old script tags. If you really do need a loader, RequireJS+Almond is a solid, performant, well-supported choice. Browserify leads if you need CommonJS support. Only upgrade to a bleeding-edge entry like SystemJS or Webpack if there’s a problem you absolutely can’t solve with one of the others; the documentation for these systems is arguably still lacking. Then use all the time you save by picking a loader appropriate to your needs to deliver some cool features instead.

About the Author:

== vs. === in Javascript (Abstract vs Strict equality in js)

February 2nd, 2016

Testing for equality is fundamental in computer science. And while it seems like a straightforward concept (Thing A is the same as Thing B, or it isn’t), there are some subtleties that can, at first, seem strange. For example, a common source of confusion for those new to JavaScript is whether to use == or === when making an equality comparison. This post will explain the difference between these two operators and how to decide when to use one over the other.

First, some terminology: double equals is officially known as the abstract equality comparison operator, while triple equals is termed the strict equality comparison operator. The difference between them can be summed up as follows: abstract equality will attempt to resolve the data types via type coercion before making a comparison, while strict equality will return false if the types are different. Consider the following example:

console.log(3 == "3"); // true
console.log(3 === "3"); // false.

Using two equal signs returns true because the string “3” is converted to the number 3 before the comparison is made. Three equal signs sees that the types are different and returns false. Here’s another:

console.log(true == '1'); // true
console.log(true === '1'); // false

Again, the abstract equality comparison performs a type conversion. In this case both the boolean true and the string ‘1’ are converted to the number 1 and the result is true. Strict equality returns false.
If you understand that, you are well on your way to distinguishing between == and ===. However, there are some scenarios where the behavior of these operators is non-intuitive. Let’s take a look at some more examples:

console.log(undefined == null); // true
console.log(undefined === null); // false. Undefined and null are distinct types and are not interchangeable.
console.log(true == 'true'); // false. A string will not be converted to a boolean and vice versa.
console.log(true === 'true'); // false

The example below is interesting because it illustrates that string literals are different from string objects.

console.log("This is a string." == new String("This is a string.")); // true
console.log("This is a string." === new String("This is a string.")); // false

To see why strict equality returned false, take a look at this:

console.log(typeof "This is a string."); // string
console.log(typeof new String("This is a string.")); //object

The new operator will always return an object and you will get the same results when comparing primitive numbers and booleans to their respective object wrappers.
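For instance, the Number and Boolean wrappers behave just like the String example above:

```javascript
// Wrapper objects coerce to their primitive under ==, but fail ===
console.log(5 == new Number(5));         // true
console.log(5 === new Number(5));        // false
console.log(true == new Boolean(true));  // true
console.log(true === new Boolean(true)); // false
```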

Reference types

Speaking of objects, what happens if we want to compare reference types? Do abstract and strict comparison behave any differently when we are dealing with objects? Yes! There is another rule you need to keep in mind. When comparing reference types both abstract and strict comparisons will return false unless both operands refer to the exact same object. Consider the following:

var a = [];
var b = [];
var c = a;
console.log(a == b); // false
console.log(a === b); // false
console.log(a == c); // true
console.log(a === c); // true

Even though a and b are of the same type and have the same value, both abstract and strict equality return false.

So which one should I use?

Keep it strict. Using the strict equality operator by default will increase the clarity of your code and prevent the false positives caused by abstract equality comparison. When you need to compare values of different types, do the conversions yourself. The more explicit your code, the better.
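For example, rather than letting == coerce a string from a form field, convert it yourself and compare strictly:

```javascript
var input = "3"; // values read from form fields or the DOM are always strings
// Rather than relying on coercion (input == 3), convert explicitly
// and compare strictly:
console.log(Number(input) === 3); // true
console.log(input === String(3)); // true
```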

Digging deeper

For more on this topic, take a look at the ECMAScript Language Specification. Also check out this nifty table showing all possible type comparisons. Finally, see this post by Mozilla for a comprehensive discussion of equality and sameness.

About the Author:

How to Clear an Array in JavaScript

February 2nd, 2016

Arrays are awesome! They allow you to store multiple values in a convenient, indexed set. In JavaScript, arrays can be declared literally or they can be initialized using the Array constructor function. But wait… What if you want to empty an array instead of creating one? Hmm… perhaps not as straightforward. Have no fear, there are some relatively easy ways to go about emptying an array in JavaScript. In the following examples, I’ll examine three different ways in which you can clear an array. I’ll then present some quirkiness related to one of the methods, as well as how to address it.
Here’s our example array:

var arr = [1, 2, 3, 4, 5];
console.log(arr); //[1, 2, 3, 4, 5]

https://codepen.io/anon/pen/adjwde?editors=0012
*Note that our array could be populated with any type of data, not just numbers.
Now, to clear it
Method 1
This method couldn’t be simpler; just set the value of the array in question to an empty array.

var arr = [1, 2, 3, 4, 5];
arr = [];
console.log(arr); //[]

https://codepen.io/anon/pen/ZQjyWw?editors=0012
Method 2
This method for emptying an array uses a for loop (could be any kind of loop) and the Array.prototype.pop() method.

var arr = [1, 2, 3, 4, 5];
for (var i = arr.length; i > 0; i--) {
 arr.pop();
}
console.log(arr); //[]

https://codepen.io/anon/pen/VeBWjj?editors=0012
Method 3
This third method is perhaps less intuitive; however, it’s as simple as the first and actually produces a more robust result. Simply set the length of the array to 0.

var arr = [1, 2, 3, 4, 5];
arr.length = 0;
console.log(arr) //[]

https://codepen.io/anon/pen/GoBENm?editors=0012
Piece of cake! Three methods for clearing/emptying an array in JavaScript. Hold on… What was that quirkiness I was talking about earlier? Let’s have a look…
Method 1 is easy; just set the array to an empty array. However, you should be aware that setting an array equal to an empty one doesn’t affect all of the references to that array. Here’s what I mean…

var arr = [1, 2, 3, 4, 5];
var arr2 = arr;

Empty out the original array and…

arr = [];
console.log(arr); //[]
console.log(arr2); //[1, 2, 3, 4, 5]

https://codepen.io/anon/pen/adjwpb?editors=0012
Whoops! Setting the variable to a new empty array didn’t actually clear out the original array; it just rebound arr, leaving other references like arr2 pointing at the old, still-populated array. This could cause some issues! Methods 2 and 3 take care of the problem:

var arr = [1, 2, 3, 4, 5];
var arr2 = arr;
for (var i = arr.length; i > 0; i--) {
 arr.pop();
}
console.log(arr); //[]
console.log(arr2); //[]

https://codepen.io/anon/pen/WrKORy?editors=0012
and

var arr = [1, 2, 3, 4, 5];
var arr2 = arr;
arr.length = 0;
console.log(arr); //[]
console.log(arr2); //[]

https://codepen.io/anon/pen/ZQjyLP?editors=0012
As you can see, Methods 2 and 3 address the matter, emptying out the array and all references to it. As to the question of which is the preferred method, there is a debate. Some claim performance is worse with Method 3 (discussion here). And some claim that readability is lost with Method 3. Still others don’t see these issues as drawbacks and prefer the simple elegance of writing length = 0.
Regardless of the pros and cons associated with each of the aforementioned methods, you now know three distinct ways in which you can empty an array in JavaScript. Additionally, you are now aware of a quirk associated with one of the methods.
Use what you’ve learned. Go forth and clear those arrays!

About the Author:

Brainstorming a Creative Project Part 4: Conclusion

October 16th, 2014

So far in this series on creative project brainstorming, we have looked at the types of questions and exploration techniques you could use when you’re in the challenge phase of your project. In this last part, though short, we’ll cover how to close the project discovery so you can get on with the actual work.

Closing an exploration phase doesn’t mean the close of the entire project discovery. In fact, until you and your team have come to a final decision on how your initial goal will be reached, you’ll probably be closing many different exploration phases. To help with semantics, think of the discovery phase of your project as one big (opening –> exploration –> closing) group with many, many smaller, similar groups inside of it. It’s not uncommon to have an ‘open, explore, and close’ group for every single topic in your project.

Closing a topic is meant to be the end of the topic at this stage. Your best solution right now may need some modifications later, and that’s okay. It’s helpful to keep this fact in mind when you are brainstorming. It’s fairly common to revisit old ideas to make sure your new ideas aren’t conflicting with them. Even if they do conflict, it’s fair to start a new round of questioning and exploration on that topic. You may do this without even realizing it.

Closing Questions

You close a topic by asking closing questions. Remember how we talked about opening questions that were intentionally vague and used to create discussion? Of course you do. Closing questions are the opposite of that. Questions like, “Which of these options is the best solution for this problem?”, are designed to get people thinking about a single solution, in this case from a list of choices, that they can agree on.


You and your team have already had many discussions on whether your ideas are viable and consistent with your goal. Closing questions should also create discussion, but of a different sort. Up until now, you’ve been asking questions to figure out aspects of the project. Now, you’re trying to figure out which of those aspect ideas are the best, or at least the best right now.

That’s really all there is to it, actually. Make sure you and your team are aware of when you have ‘enough’ ideas and are ready to close a topic. Otherwise you may have some confused people.

Series Conclusion

In this article series, we didn’t talk about anything technical or code related, but rather techniques that can be used to help a team come together and brainstorm on a creative project.

Vague opening questions can help you determine where you want to go with your initial idea and help you define your goal. Keep in mind that you need to allow yourself and your group to have discussions to open up the exploration phase of the topic and get all ideas on the table, and I mean ALL. Make sure your team members’ voices are heard and draw out anyone who seems to be holding back. There are (almost) no bad ideas.

If you follow these guidelines, you are going to have an innovative, user-friendly, robust application. I hope you take some ideas from this series on Brainstorming a Creative Project to apply to your own endeavor.

About the Author:

A Really Good Second Pair of Eyes

October 7th, 2014

This month (October) is my two-year anniversary at appendTo. Two years may not sound long to you but in the tech world it is a bit of an eternity. My wife Brittany still can’t believe I’ve stayed that long–most of my other jobs have been project-based and have lasted a year or less. When I started with appendTo my desire was to find a place to hang my hat for a while. And here I am, two years later, and the hat is still on the hook.

I flew to Tacoma with a co-worker, Matt, a few weeks ago, to help a large, enterprise company conduct an internal hackathon for their IT support team. We functioned as technical coaches, helping each of the five teams participating to make decisions about technical trade-offs (they had two days to finish their projects), and to help mitigate any technical obstacles that arose. It was a marvelous, albeit exhausting experience. When we met with this company’s leadership for the first time I was asked to describe what appendTo is all about, and this is what I said:

appendTo’s name is a play on the jQuery function of the same name. In JavaScript we often want to take an element, say a hyperlink or some emboldened text, and append it to some other element, like a paragraph. Our company has a similar relationship to the clients it engages: we append to that client, becoming a part of its team, to help accomplish its goals with our own expertise and values. We are very fast, and very effective.

I was then asked about my personal qualifications as a technical coach. Each team, in theory, could choose any technology stack for their project (though each project had to function on mobile devices), but in reality the company had a significant Microsoft infrastructure. Matt and I had spent a good deal of time discussing the potential technical avenues each team might take, and we had concluded that, given the time constraints and culture of the company, Microsoft-based, cloud hosted solutions were likely candidates.

Fortunately I have experience with the .NET stack, which put the client at ease. But I also stressed that my career has spanned many stacks, languages, frameworks and paradigms, and that I had sought from the beginning to be a generalist, not a specialist. To me, understanding problems is more important than particular solutions, which may be myriad. Problems are always contextual.

The leadership team then asked what I thought about my role in the hackathon. I replied: “I am going to be a really good second pair of eyes.”

It was a decent elevator pitch.

Each team ended the week with polished projects, and though we were all tired and giddy from the process, an intense camaraderie emerged. Both Matt and I lived up to the promises we made and the client was very pleased.

I’ve since reflected on the claims I made in our initial meetings (all of which I believe), and projected them onto the backdrop of my experiences at appendTo. I’ve worked with a lot of clients, on a lot of projects, with a lot of different technologies. I’ve produced code, written articles, coached a hackathon, given presentations, migrated a mountain of WordPress data (a task I would not wish on my enemies), created and contributed to open source projects, and so on. For each client engagement I straddled the line of being uniquely me, and being that second pair of eyes; of maintaining my own personal boundaries and standards, while appending to an existing team or management structure with their own goals and values. I’ve seen my co-workers do the same, time and again, in development, design, and management capacities.

There is a time for modesty, but this is not it.

appendTo harbors amazing people with amazing talents. Moreover, it treats its employees like entrepreneurs, free to grow their ideas and visions and passions while serving clients. It’s a petri dish for excellence. And this characteristic is intentional.

When I was first hired, I was told that appendTo was a place for recovering developers who had been abused by the corporate world. And though the journey is not always smooth (no good journey is!), the work I do here and the relationships I make are, and will continue to be, some of the most significant in my life. And for that I am both grateful and humbled.

*This blog post was previously featured on nicholascloud.com.

 

About the Author:

Brainstorming a Creative Project Part 3: The Exploration Phase

October 2nd, 2014

In Part 2 of this series, you learned what being in the challenge space means and about some of the overall techniques used while there. Now begins Part 3, which will take you deeper into the exploration phase of brainstorming your creative project by investigating on a more granular level.

Types of Questions

We spent some time over the last two parts of this series talking about the questions that you and your team ask when trying to determine the goal of your project and when you are exploring the challenge space. It’s important to have a plan of attack when you are preparing your questions for the team, even if the plan is spontaneous. Knowing what kinds of questions you are asking can get you to results faster. Let’s take a look at some common types of questions that relate to brainstorming a creative project.

Instigating Questions

Instigating questions, or opening questions as some call them, serve as that spark when you and your team are firestarting a topic. They are meant to be very open-ended to create a divergence in thought; to force you to think of all the ways you can accomplish the same thing. In web development, the number of ways to do the same task seems infinite and opening questions help you see that.

Directional Questions

For obvious reasons, questions that try to find a path or put you on a certain path are called directional (or navigational) questions. Starting with a vague question like, “What pages does our web application have?”, can get your team off track due to all the possibilities. It’s important for whoever the team leader is to interject with a question that takes the currently discussed idea and finds out whether or not it’s viable. “Is this discussion going to move us along?” is an example of a directional question. It may seem simple, but it makes people think about whether or not the topic is helpful. The answer could be yes or no, and if it’s ‘no’, the team needs to either start a new topic or ask another directional question like, “Is the next item on our list still a possible solution?”

Exploring Questions

It’s time to fully explore any and all ideas during the challenge phase. Exploring is about following a line or flow of ideas to a solution, or at least a common endpoint. It’s also about testing and examining whether or not an idea is viable, kind of like the scientific method.

Let’s say you want to come up with a way to get a user from a landing page to the checkout process. That’s pretty open-ended, so you could either take a few suggestions about the first step of the user interaction, or you could take suggestions on entire paths and examine them. You can do this by diving deeper into a subject and looking closer at how something works; not how something might work, but how it does work. You may simply ask, “How does it work?”, when a team member suggests an image carousel. If someone wants all the buttons in the application to have a hover effect, “What is the purpose of the effect?”, is a good way to examine the idea.

Another subtype of exploring questions is testing questions (sometimes referred to as experimental questions). In contrast to examining questions, we use these to define how something might work. Oftentimes you experiment with something directly after examining it, but sometimes the idea is vague enough that you can go right into the experimental step. For example, if you want to experiment with the button hover effect in different ways, you might ask, “What types of styles make sense for this application?”, for a design experiment, and, “What else can this button do when a user hovers over it?”, for a functionality experiment.

After experimenting a little with each idea, it may be helpful to go back and examine them again depending on the time and budget for the project. Re-examining may prove beneficial.

Closing Questions

These types of questions are used to bring all the explorative ideas to a ‘close’, meaning, they should finalize a topic to a solution. You may have more than one solution if you need to have redundancy, but your team should still be converging on those solutions without any remaining ideas on how to solve the problem.

More on Exploration

Now that you’ve seen a few examples of the types of questions that are useful when brainstorming a creative project, let’s take a closer look at exploring your ideas as a team.

One thing you should always keep in mind is topic scope; it’s important not to venture outside the idea zone. If your team is on the topic of what options a product has, don’t offer ideas on how to display those options. That is a separate issue with its own scope; write it down and deal with it later. You may be thinking that the content of a dropdown is just as important as the way it’s displayed. I’m not saying it isn’t, and there are plenty of cases where the two go hand-in-hand, but functionality, design, and content are three distinct things and should be separated accordingly. True, there may be a web of dependencies, but the topics are different and can be explored separately without affecting each other.

Relative Space

Sometimes when scientists or psychologists design research, they categorize the flow of their methods along two lines: precision and ordering. This helps them decide what matters more: the order in which the steps happened, or the details of what happened within each step. Sometimes it’s more important to know that something happened before or after something else than to know exactly what went on during the task; in other cases the opposite is true, and in rare cases both matter. When you are exploring a creative idea with your team, you may find yourself developing a flow for a user to follow. In the case of a web application, this might be the path a consumer follows from the home page to the checkout page.

Think about what’s more important for your application. Do you care more about the number of steps it took the user to reach the checkout page, or about which products ended up in the cart?

Below, we see a flow of generic steps that you want a user to take to reach your goal. In this case, the larger picture is more of a concern than what the user was doing in between the steps.

Ordering Focus

[Image: order_focus]

In a precision-focused approach, the basic steps are the same, but there are minute steps within those larger steps that are important to us. In this case, the focus is on which options are chosen rather than on speed.

Precision Focus

[Image: precision_focus]

To be clear, you do not need to pick just one of these approaches for your entire project. Use them both! You may find that one topic needs to be thought of in an ordered sense, while another could use some precision thinking. Heck, use the ordered approach to get a grasp of the general idea, then break into precision mode if you still have unanswered questions.
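To make the distinction concrete, here is a rough sketch (every name and structure is illustrative, not part of any real analytics API) of how the two focuses could map to logging a checkout flow in a web app: an ordering-focused log records only the sequence of major steps, while a precision-focused log attaches the details chosen at each step.

```typescript
// Illustrative sketch of ordering vs. precision focus in a checkout flow.
// All names here are hypothetical, not from any real library.

type Step = "home" | "product" | "cart" | "checkout";

// Ordering focus: we only care about the sequence of major steps.
const orderingLog: Step[] = [];
function reachStep(step: Step): void {
  orderingLog.push(step);
}

// Precision focus: the same steps, but each one carries the details
// (options chosen, products added) that matter more than speed.
interface PreciseEvent {
  step: Step;
  details: Record<string, string>;
}
const precisionLog: PreciseEvent[] = [];
function reachStepWithDetails(step: Step, details: Record<string, string>): void {
  precisionLog.push({ step, details });
}

reachStep("home");
reachStep("product");
reachStepWithDetails("cart", { product: "blue-shirt", size: "M" });

console.log(orderingLog);             // the sequence of steps taken
console.log(precisionLog[0].details); // what was chosen at that step
```

Nothing stops you from keeping both logs at once, which mirrors the advice above: use the ordered view for the big picture and drop into the precise view where questions remain.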

Part 3 Wrap Up

When you’re in the challenge space and the project needs to be explored, it’s important to understand the different types of questions to ask as you examine and experiment with your ideas, all while keeping yourself and your team in line with the goals you’ve set.