Coding katas are a way of developing your skills as a programmer. I thought it might be informative to tackle one of the classics as a blog post. Depending on how this works out, I may or may not do another one quite so publicly. Here are the rules I'm going to try to adhere to:
I will document what I am doing as I go.
This is not a pre-coded blog post. You’ll get to “see” me code as I go.
I will write all tests first.
I will only write enough code to make the current tests succeed.
Today’s problem:
This is a pretty classic coding problem that shows up in interviews, homework assignments, and code katas.
As an interview problem, I find it lacking because it rarely represents the kind of work you will actually be doing; it mostly proves that you can solve problems in your chosen language. It is also a standard coding problem, meaning the interviewer is using the most obvious problem to see whether you can code at all, much like asking "What is your greatest strength?" and "What is your greatest weakness?" Finally, it takes longer to complete than I believe an interview coding problem should. That is not to say I don't jump through these hoops myself.
As a kata exercise, it is good because there are several ways you might solve the problem. And for our purposes, it also demonstrates what Test Driven Development might look like using JavaScript.
So, what is the problem? Write a function that converts Roman numbers into Arabic numbers and throws an error if the Roman number is in an invalid form.
That sounds pretty easy. But the first question we need to ask is, "What, exactly, are the rules for converting Roman numbers into Arabic numbers?"
The values for Roman numbers are as follows:
I = 1
V = 5
X = 10
L = 50
C = 100
D = 500
M = 1000
Repeating a numeral up to three times adds its value that many times (for example, XXX = 30).
A Roman 'digit' can't repeat more than three times. Instead, the previous 1, 10, or 100 value (I, X, or C) is placed before the next 'digit' and subtracted from it. That is,
IV = 4
IX = 9
XL = 40
XC = 90
CD = 400
CM = 900
With the exception of the subtraction rule above, all values must decrease in scale from left to right and are added together.
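For example, MCMXCIV breaks down as M (1000) + CM (900) + XC (90) + IV (4) = 1994.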
Test 1
Since we will be using JavaScript, the testing framework we will be using is Jasmine.
Our first test will simply test that when we pass in “I” we get back 1.
describe('tests/roman-to-arabic/RomanToArabic.spec.js', function(){
    var returnValue;

    describe('When I is passed in', function(){
        beforeEach(function(){
            returnValue = romanToArabic("I");
        });
        it('should return 1', function(){
            expect(returnValue).toBe(1);
        });
    });
});
And the code that passes this test is:
function romanToArabic(romanNumber){
    return 1;
}
You might think, what’s the point? Why just return 1 when you know you are going to have to do more? Well, when you are doing TDD, you have to work off of what you are testing for now, not what you might test for later. So, we return 1.
Test 2
From here, we can go a couple of different directions. How about testing for rule 3 next? How would we do that? To start with, we might just make sure that if we pass in IV, we get back 4. Now our code is getting a bit more complicated. But sticking with our TDD principles, we'll just test for IV. Another happy-path test.
describe('tests/roman-to-arabic/RomanToArabic.spec.js', function(){
    var returnValue;

    describe('When I is passed in', function(){
        beforeEach(function(){
            returnValue = romanToArabic("I");
        });
        it('should return 1', function(){
            expect(returnValue).toBe(1);
        });
    });

    describe('When IV is passed in', function(){
        beforeEach(function(){
            returnValue = romanToArabic("IV");
        });
        it('should return 4', function(){
            expect(returnValue).toBe(4);
        });
    });
});
OK. What happens if IV shows up twice? That should be a failure. IV should never show up more than once. In fact, none of the codes that show up in rule three should show up more than once. Let’s make sure they don’t.
The tests for this will be pretty simple, and to save time, we will code for all of them at once. In fact, we are going to eventually need the conversion tables above, so let’s go ahead and put them in now.
Here is what our test file looks like now. Notice that we've put several tests in at once.

describe('tests/roman-to-arabic/RomanToArabic.spec.js', function(){
    var returnValue;
    var minusOneTable = { IV: 4, IX: 9, XL: 40, XC: 90, CD: 400, CM: 900 };

    describe('When I is passed in', function(){
        beforeEach(function(){
            returnValue = romanToArabic('I');
        });
        it('should return 1', function(){
            expect(returnValue).toBe(1);
        });
    });

    describe('When IV is passed in', function(){
        beforeEach(function(){
            returnValue = romanToArabic('IV');
        });
        it('should return 4', function(){
            expect(returnValue).toBe(4);
        });
    });

    var prop;
    for(prop in minusOneTable){
        if(!minusOneTable.hasOwnProperty(prop)){
            continue;
        }
        describe('When ' + prop + prop + ' is added', (function(propCopy){
            it('should throw an exception', function(){
                expect(function(){ romanToArabic(propCopy + propCopy); }).toThrow();
            });
        }).bind(this, prop));
    }

    for(prop in minusOneTable){
        if(!minusOneTable.hasOwnProperty(prop)){
            continue;
        }
        describe('When ' + prop + 'I' + prop + ' is added', (function(propCopy){
            it('should throw an exception', function(){
                expect(function(){ romanToArabic(propCopy + 'I' + propCopy); }).toThrow();
            });
        }).bind(this, prop));
    }
});

Notice that I didn't have to write multiple tests to do this. I just created one test and iterated over it.

And here is the validation code, added to the top of romanToArabic(), that makes the new tests pass:

// This block is probably inefficient, but it is
// easy to reason about.
for(var prop in minusOneTable){
    if(!minusOneTable.hasOwnProperty(prop)){
        continue;
    }
    var rx = new RegExp(prop, 'g');
    if((romanNumber.match(rx) || []).length > 1){
        throw 'Poorly formed Roman number!';
    }
}
Test 5
Next, we need to make sure that none of the items in our base table shows up more than three times. We'll write a test similar to what we've already written, with a twist. If any of those numerals shows up four or more times, we know we have a problem: either they are out of order, or they show up all in a row. So we only need to check the count. If there are four of any of them, we have a problem.
describe('tests/roman-to-arabic/RomanToArabic.spec.js', function(){
    var returnValue;
    var minusOneTable = { IV: 4, IX: 9, XL: 40, XC: 90, CD: 400, CM: 900 };
    var baseTable = { I: 1, V: 5, X: 10, L: 50, C: 100, D: 500, M: 1000 };

    describe('When I is passed in', function(){
        beforeEach(function(){
            returnValue = romanToArabic('I');
        });
        it('should return 1', function(){
            expect(returnValue).toBe(1);
        });
    });

    describe('When IV is passed in', function(){
        beforeEach(function(){
            returnValue = romanToArabic('IV');
        });
        it('should return 4', function(){
            expect(returnValue).toBe(4);
        });
    });

    var prop;
    for(prop in minusOneTable){
        if(!minusOneTable.hasOwnProperty(prop)){
            continue;
        }
        describe('When ' + prop + prop + ' is added', (function(propCopy){
            it('should throw an exception', function(){
                expect(function(){ romanToArabic(propCopy + propCopy); }).toThrow();
            });
        }).bind(this, prop));
    }

    for(prop in minusOneTable){
        if(!minusOneTable.hasOwnProperty(prop)){
            continue;
        }
        describe('When ' + prop + 'I' + prop + ' is added', (function(propCopy){
            it('should throw an exception', function(){
                expect(function(){ romanToArabic(propCopy + 'I' + propCopy); }).toThrow();
            });
        }).bind(this, prop));
    }

    for(prop in baseTable){
        if(!baseTable.hasOwnProperty(prop)){
            continue;
        }
        describe('When ' + prop + prop + prop + prop + ' is added', (function(propCopy){
            it('should throw an exception', function(){
                expect(function(){ romanToArabic(propCopy + propCopy + propCopy + propCopy); }).toThrow();
            });
        }).bind(this, prop));
    }
});
And the code to implement it:

function romanToArabic(romanNumber){
    var minusOneTable = { IV: 4, IX: 9, XL: 40, XC: 90, CD: 400, CM: 900 };
    var baseTable = { I: 1, V: 5, X: 10, L: 50, C: 100, D: 500, M: 1000 };

    // This block is probably inefficient, but it is
    // easy to reason about.
    for(var prop in minusOneTable){
        if(!minusOneTable.hasOwnProperty(prop)){
            continue;
        }
        var rx = new RegExp(prop, 'g');
        if((romanNumber.match(rx) || []).length > 1){
            throw 'Poorly formed Roman number!';
        }
    }
    for(var prop in baseTable){
        if(!baseTable.hasOwnProperty(prop)){
            continue;
        }
        var rx = new RegExp(prop, 'g');
        if((romanNumber.match(rx) || []).length > 3){
            throw 'Poorly formed Roman number!';
        }
    }
    switch(romanNumber){
        case 'I':
            return 1;
        case 'IV':
            return 4;
    }
}
Test 6
So, before we move on to actually computing the value of the Roman number, we should ask ourselves if there are any other ways a Roman number could be passed in incorrectly.
The next one that occurs to me is this. If you have a number that contains a ‘digit’ from the minusOneTable, the character that is used to subtract should never follow the digit. For example, if “IV” shows up, there should not be an “I” immediately after it. That is, we shouldn’t see “IVI” anywhere in our string. So let’s add that to our tests.
// This block is probably inefficient, but it is
// easy to reason about.
var prop;
var rx;
for(prop in minusOneTable){
    if(!minusOneTable.hasOwnProperty(prop)){
        continue;
    }
    rx = new RegExp(prop, 'g');
    if((romanNumber.match(rx) || []).length > 1){
        throw 'Poorly formed Roman number!';
    }
    rx = new RegExp(prop + prop.substr(0,1), 'g');
    if((romanNumber.match(rx) || []).length > 0){
        throw 'Poorly formed Roman number';
    }
}
for(prop in baseTable){
    if(!baseTable.hasOwnProperty(prop)){
        continue;
    }
    rx = new RegExp(prop, 'g');
    if((romanNumber.match(rx) || []).length > 3){
        throw 'Poorly formed Roman number!';
    }
}
There is one more possible problem we could encounter. What if someone passes in a character that isn't a valid Roman number character? We need to make sure that the only characters that show up are Roman number characters.
Since testing for all of the characters isn’t practical, we are just going to test for a few and assume that if this were a real business problem we’d go to the trouble of testing more. But, the basic test is going to look the same. Toss in bad characters in an otherwise valid string and make sure we throw an error.
// This block is probably inefficient, but it is
// easy to reason about.
var prop;
var rx;

// make sure each minusOne pair only shows up once
// and its first character doesn't immediately follow it (IVI, for example)
for(prop in minusOneTable){
    if(!minusOneTable.hasOwnProperty(prop)){
        continue;
    }
    rx = new RegExp(prop, 'g');
    if((romanNumber.match(rx) || []).length > 1){
        throw 'Poorly formed Roman number!';
    }
    rx = new RegExp(prop + prop.substr(0,1), 'g');
    if((romanNumber.match(rx) || []).length > 0){
        throw 'Poorly formed Roman number';
    }
}

var included = '';

// make sure digits only show up 3 times
for(prop in baseTable){
    if(!baseTable.hasOwnProperty(prop)){
        continue;
    }
    included += prop;
    rx = new RegExp(prop, 'g');
    if((romanNumber.match(rx) || []).length > 3){
        throw 'Poorly formed Roman number!';
    }
}

// make sure only I, V, X, L, C, D and M show up
rx = new RegExp('[^' + included + ']', 'g');
if((romanNumber.match(rx) || []).length > 0){
    throw 'Poorly formed Roman number';
}
OK. I think that gets all of the validations except for ensuring that the values appear in decreasing order. To make that work, we are going to need to work through the number one 'digit' at a time and assign each 'digit' a value. The test we need to write will put values in the wrong order: a larger value immediately after a smaller one.
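The listing below also relies on a minusOneSub table (mapping each subtractive pair to a single-character token) and a compositeValueTable (mapping both the base numerals and those tokens to values). Their definitions aren't reproduced here, but a hypothetical sketch of the shape they would need (the token characters are just placeholders I picked) might be:

// Hypothetical shapes for the two tables the listing below references.
var minusOneSub = {
    IV: 'a', IX: 'b', XL: 'c', XC: 'd', CD: 'e', CM: 'f'
};
var compositeValueTable = {
    I: 1, V: 5, X: 10, L: 50, C: 100, D: 500, M: 1000,
    a: 4, b: 9, c: 40, d: 90, e: 400, f: 900
};

With that in mind, here is the body of romanToArabic() as it now stands: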
// This block is probably inefficient, but it is
// easy to reason about.
var prop;
var rx;

// make sure each minusOne pair only shows up once
// and its first character doesn't immediately follow it (IVI, for example)
for(prop in minusOneTable){
    if(!minusOneTable.hasOwnProperty(prop)){
        continue;
    }
    rx = new RegExp(prop, 'g');
    if((romanNumber.match(rx) || []).length > 1){
        throw 'Poorly formed Roman number!';
    }
    rx = new RegExp(prop + prop.substr(0,1), 'g');
    if((romanNumber.match(rx) || []).length > 0){
        throw 'Poorly formed Roman number';
    }
}

var included = '';

// make sure digits only show up 3 times
for(prop in baseTable){
    if(!baseTable.hasOwnProperty(prop)){
        continue;
    }
    included += prop;
    rx = new RegExp(prop, 'g');
    if((romanNumber.match(rx) || []).length > 3){
        throw 'Poorly formed Roman number!';
    }
}

// make sure only I, V, X, L, C, D and M show up
rx = new RegExp('[^' + included + ']', 'g');
if((romanNumber.match(rx) || []).length > 0){
    throw 'Poorly formed Roman number';
}

// substitute the minusOnes with tokens we can use to compute value.
for(prop in minusOneTable){
    if(!minusOneTable.hasOwnProperty(prop)){
        continue;
    }
    rx = new RegExp(prop, 'g');
    romanNumber = romanNumber.replace(rx, minusOneSub[prop]);
}

var romanNumberArray = romanNumber.split('');
var returnValue = 0;
var lastValue = 0;
var currentValue = 0;
for(var numberIndex = 0; numberIndex < romanNumberArray.length; numberIndex++){
    currentValue = compositeValueTable[romanNumberArray[numberIndex]];
    returnValue += currentValue;
    if(numberIndex > 0 && currentValue > lastValue){
        throw 'Poorly formed Roman number';
    }
    lastValue = currentValue;
}
return returnValue;

// switch(romanNumber){
//     case 'I':
//         return 1;
//     case 'IV':
//         return 4;
// }
}
Step 9
And finally we add some tests to verify that we get the right result.
What?! I thought we were done? Well, we are and we aren’t. We have code that works. But is it the best code we can write? Here is what I like about this code:
It is easy to reason about.
It allows me to handle additional Roman numerals by expanding my tables. No additional code would be needed. (And while they are hard to represent with the Latin alphabet, there are additional Roman symbols for larger values.)
It works reasonably fast.
But it does seem to me that we might make it a bit more efficient without sacrificing these advantages too much. And the great news is, since we have our tests in place, we can refactor with impunity. No worries about breaking something, because we will know as soon as we have, and we can revert to the code that was working.
Step 12
One pretty simple change we can make: our error message is scattered throughout our code. Let's make it a variable and just throw the variable.
The other thing I wonder about is how many of our validations can be combined. One check we might safely eliminate at this point is the check that makes sure the minus values only show up once. If any of them were to show up more than once, they would either show up one after the other, which we still check for, or they would be in the wrong value order, which our final check will catch. So, let's eliminate that code as well and re-run our tests to make sure nothing broke.
And now that we've eliminated that check, we can combine the checks for conditions like "IVI" into one big pattern string and check it once, instead of creating a new RegExp object multiple times.
Now that we have our IVI check down to one string, it occurs to me that we can combine this check with our invalid character check. So, let’s do that next.
And the last place we can optimize is our three digit check. But instead of limiting it to valid Roman numbers, let’s just check for any sequence of characters that repeats 4 times or more.
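One way to express that check, as a sketch inside romanToArabic() (the exact expression the post ends up with isn't shown here), is a back-reference that matches any character followed by three or more copies of itself:

// Detect any character repeated four or more times in a row.
if (/(.)\1{3,}/.test(romanNumber)) {
    throw 'Poorly formed Roman number!';
}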
And once we know that is working, we can combine it with our other Regular Expressions.
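As a sketch of what that combination might look like inside romanToArabic() (again, the exact expression isn't reproduced here):

// One regular expression covering: four-or-more repeats of any character,
// a subtractive pair immediately followed by its first character (IVI, etc.),
// or any character that isn't a Roman numeral at all.
var invalidPattern = /(.)\1{3,}|IVI|IXI|XLX|XCX|CDC|CMC|[^IVXLCDM]/;
if (invalidPattern.test(romanNumber)) {
    throw 'Poorly formed Roman number!';
}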
We’ve cleaned up about all of the logic we can without hard coding the values. But I would like to combine the lookup tables next. I don’t like having three tables. Let’s see if we can pull those into one table.
The first step to doing this is combining the minusOneTable and baseTable into one table. To differentiate in the code that is using those tables, we’ll just look at the length of the property.
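As a sketch of the idea (the property and variable names here are mine, not necessarily the ones the post ends up with):

var romanTable = {
    IV: 4, IX: 9, XL: 40, XC: 90, CD: 400, CM: 900,
    I: 1, V: 5, X: 10, L: 50, C: 100, D: 500, M: 1000
};

for (var prop in romanTable) {
    if (!romanTable.hasOwnProperty(prop)) {
        continue;
    }
    // Two-character keys play the role of the old minusOneTable;
    // single-character keys play the role of the old baseTable.
    var isSubtractivePair = (prop.length === 2);
    console.log(prop, romanTable[prop], isSubtractivePair);
}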
In the process of doing this, we notice that it would also make sense to combine the substitution table. And since the values we’ve assigned to the table aren’t even being used at this point, we’ll just use the substitutions instead of the values.
And if we make the value of each element in the baseTable an array, we could combine the values into that table as well. But that would over complicate our lookup logic when we compute the value. So, I think we’ll just leave that as it is.
There are probably a few other optimizations that could be made here. But I'm afraid each would sacrifice either the flexibility (we could hard code the regular expression validations, which would significantly reduce the number of lines of code) or the readability. Neither of which I am willing to do.
I’ve worked with Ext JS now for a total of 2.5 years. First with Ext 4.2 and now with Ext 6.x.
Here's my experience with it, and a warning about why you should avoid this disaster of a framework.
Jack of All Trades
Master of none! One of the great selling points of Ext JS is that it comes with "everything you need" to build a web application. That would be great if it were true. But the fact of the matter is, it comes with all of the features you need, but those features are only partially implemented. I've complained publicly several times that Sencha can't possibly be testing the code they release, because it only works in their demos. If you try to use a feature they have documented as available, you are likely to find that the feature doesn't actually work. How is it possible to write documentation for how something is supposed to work and yet release it without it working properly? I can understand fringe stuff getting by; we can't think of every test. But when this happens over and over again, you start to wonder what exactly they are testing.
A Wolf in Sheep’s Clothing
When I first started with Ext, the only design pattern they had available was what they referred to as MVC. It took me two months of playing with the framework before I finally realized that what they were calling MVC wasn’t anything the Gang of Four would recognize as MVC. I guess if you have a View, a Model and a Controller, you can call it MVC? It doesn’t matter that the Models define records in a table or that the Controller is tightly coupled to your view.
Sheep Without Legs
OK. So when they introduced the MVVM architecture, I actually started to have just a bit of hope. Yes, there were still some fundamental issues I had, but MVVM would make them tolerable. But here is the issue: their idea of MVVM is that you would only need to implement it on a per-page basis.
Let me try to explain.
Broken Data Binding
In my ideal world, when I build a new component, I would build that component using the framework the rest of my application is using. So my component uses MVVM. Sencha's implementation gives you a View, ViewController, and ViewModel. Mostly this looks more like MVC if you ask me, but whatever; it has two-way data binding, so we'll call it MVVM for now. If you build a component that lives inside another component, the first thing you'll discover is that binding only works from the top down. That is, I can bind data at the outer layer and it will get reflected all the way in to the innermost component that uses it. But if you change the data in the innermost component, it doesn't reflect back up to the outermost component. I've written a hack for this, and there is no promise from Sencha that this will ever get fixed properly, so I guess my hack is safe.
Broken Controllers
But it gets worse. While child components can find data in models that are in parent components, they can't find references to functions in controllers in the same way. This is particularly problematic if you write a component that is a container of other components. You would naturally want the child components to use the controller from the component in which they were declared. But if you have an outer component that has your container component as a child, and then other components inside of that, the only way you can control which controller the innermost components notify of events is by wrapping the innermost components in their own component with their own controller. This gets awkward when all you want to do is provide an event handler for one control in a column of a grid. Again, I have a monkey patch that fixes this, but why did I have to write it? This is just one specific example of my "Jack of All Trades" point that I started with.
We won’t even address the question of if this is really MVVM or not!
Never Use the .0 release
I think most of us now are generally conditioned to be wary of the .0 release of anything that hasn’t been developed using Open Source methods. There just haven’t been enough eyes on the project to ensure that everything works as it should.
But with Sencha, this extends to all of the patch releases at the very least and even into some minor releases.
The 4.0, 5.0, and 6.0 releases were unacceptably broken, and we find that every new patch or minor release that comes out afterward breaks something that was working. We always have to ask, "Can we live with this?"
All or Nothing
As I said at the beginning, Sencha gives you everything. That sounds good. You won’t have to go looking for a grid control, or many other common controls you might want to use.
But the bad news is, you can only use controls that were written to be used with Ext, which, beyond what Sencha provides in the framework, doesn't give you a lot of choices. Don't go thinking you'll supplement Ext with a selection of third-party controls. It's not going to happen.
Fences Protect AND Isolate
Up until this point in my post, no one can reasonably argue that anything I’ve said is actually a benefit. At this point we switch to points that may vary based on how well you know JavaScript, HTML, and CSS.
You see, the good news, and actually a major selling point to many people, is that you can write a web application using Ext without having to know much, if anything about HTML or CSS. And for that matter even the amount of JavaScript you need to know is relatively limited.
That's the good news. The bad news is, if you know anything about any of these, you'll probably end up frustrated by Ext. This is because Ext's JavaScript controls most of the layout. So if you are used to going into developer tools to tweak the CSS and then applying the changes to your style sheet, you are going to be very disappointed. Pretty much nothing you do in developer tools is going to work as you would expect. And figuring out how to apply those changes to your code is going to be a lot harder than you are used to.
Their Way or the Highway
Once again, many people see this as an advantage. And once again if you aren’t familiar with how the rest of the JavaScript world does things, this is going to sound fine.
Sencha CMD
Everything runs through Sencha CMD, a tool for building all things Ext. If you want to bundle and minify your code, the standard way of doing it is to use "requires" statements in your code and then run Sencha CMD, letting it figure out what you are using and put it all in one bundle.
The problem with this is that there are several much better ways of doing this that are available using Node and various NPM packages. Again, if you are a JavaScript developer, you are going to wonder what Sencha is thinking.
Ext.define()
Another place where proprietary shows up is in how Ext defines "classes". When Ext.define() was first introduced, TypeScript was new. But now we not only have TypeScript, which does much of what Ext does and some things it doesn't, but we also have an evolving JavaScript standard that I'm afraid Sencha won't be able to keep up with. They already discourage the use of 'use strict';. Once again, there is only one place where this will get you in trouble, and the workaround actually produces more efficient code. But still, the point is, Sencha is relying on ECMAScript 3 standards while the world has largely moved on to ECMAScript 2015 and beyond.
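For anyone who hasn't seen it, defining a class in Ext looks roughly like this (a simplified sketch; the class name, config, and method are made up for illustration):

Ext.define('MyApp.view.MainPanel', {
    extend: 'Ext.panel.Panel',

    title: 'Main',

    doSomething: function () {
        // Methods and config all live in one big configuration object.
        return this.title;
    }
});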
Anyhow, my point here is that Ext is not just a framework; it also functions, largely, as its own language. Not quite as much a fork from the standard as CoffeeScript, but also not nearly as close to the JavaScript spec as TypeScript. So while it is still JavaScript, if you are a JavaScript programmer, it isn't going to feel quite like JavaScript to you.
Themes
The final place you will find "proprietary" lurking is with the themes. There are several really good CSS frameworks out there; Sencha uses none of them. And while the syntax they use for creating themes has been Sass up until Ext 6, now they even have their own proprietary Sass compiler. Watch out here, because they are still using the Sass extensions, so you are likely to make some assumptions that aren't true because, once again, they've only implemented enough of the Sass engine to do what THEY need to do.
VB All Over Again
Every time I hear someone praise how great Ext is, it is normally because it has everything you need out of the box and allows you to get stuff done quickly.
Basically the same argument for using Visual Basic back in the day. And yet I learned never to take a VB job because, in almost every instance, while it was possible to write well-structured code in Visual Basic, it was generally so difficult to do that the code I would be maintaining would need to be rewritten in order to make any sense of it. Ext suffers the same issue. There is nothing in Ext to force you to write well-structured code. The code I have had to maintain has almost always followed every anti-pattern known to man. This isn't Sencha's fault directly, other than the fact that the only reason my code tends to be cleaner than most is that I'm more likely to code a fix for an Ext bug than to work around the problem with an anti-pattern.
In comparison to the other frameworks that are available: if all you want is a tool that will get you a semi-working application quickly, and you don't care so much about having to rewrite it when you need to change it in some way, Ext is your tool. If, on the other hand, you care about design and you want to be able to maintain what you've written, you should look elsewhere.
Remember, if it sounds too good to be true, it probably is.
In my recent coding, I’ve discovered an even more simple way of dealing with this problem.
In the process, it removes the anonymous function and eliminates the linting error, "Don't make functions within a loop." You see, I've been experimenting with JavaScript's bind().
And as it turns out, we can use bind in multiple situations, including dealing with the closure issue I mentioned a couple of weeks ago.
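As a quick sketch of what I mean (the setTimeout call just stands in for whatever asynchronous function you are calling):

for (var i = 0; i < 3; i++) {
    // bind() pins the current value of i as an argument, so the callback
    // no longer closes over the shared loop variable, and no anonymous
    // closure-per-iteration wrapper is needed.
    setTimeout(function (index) {
        console.log('processing item', index);
    }.bind(null, i), 0);
}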
Over the last several years, I've had a chance to read a few programming resumes. Or, I should say, TRY to read a few resumes. Frankly, if the programming resumes I typically see are representative, everyone who reads my blog needs this advice. I haven't seen even a barely adequate resume in years.
I’m sick of it. Oh, it’s good for me of course. I know my resume is going to stand out as such a unique work of art compared to the others, that I will get a call back right away. After all, if the competition is so incredibly weak, I don’t even need to try.
On the other hand, as someone who has to read these resumes, I’d like to have something better.
And no, I’m not going to go over the standard “how to make your resume awesome” stuff because evidently most programmers can’t even get the basics down. Seriously!
The question comes up all the time, “How do I access JavaScript privates from my Unit Tests?” And invariably, the purist chimes in with the answer, “you don’t”.
But isn't the point of unit testing to allow us to test UNITs? Why artificially limit our ability to test units if we don't need to? If we had the ability to create protected members, wouldn't we test those separately? So, what follows is how I surface my private JavaScript members so I can access them during tests without having to make them public in my production code.
Lean on JavaScript
My JavaScript unit testing framework of choice is Jasmine. Not so much because it does everything I would like it to do, or because there isn't something "better" available, but because it has become the de facto standard for unit testing JavaScript and nothing else I've seen is significantly better. There is one part of this technique that leans on the fact that I am using Jasmine, but I'm sure you can adapt it to your testing framework.
But first, let’s review how you would create private JavaScript members in the first place.
Creating Private Members
In standard ES5 code, a simple object might be defined using syntax that looks something like this. Recognize there are multiple ways to create objects and things that look like classes in JavaScript. What follows is just enough code to get the point across.
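The original listing isn't reproduced here, so here is a minimal sketch consistent with the description that follows (privateMember and publicMember come from the discussion; everything else is illustrative):

function MyClass(){
    this.someValue = 21;

    // privateMember only exists inside the constructor's closure.
    function privateMember(){
        return this.someValue * 2;
    }

    // publicMember is public only because it is attached to `this`.
    this.publicMember = function(){
        // apply(this) passes the instance in as the context for privateMember.
        return privateMember.apply(this);
    };
}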
Note that our privateMember is used by publicMember but is not accessible from the outside. I'm also using apply(this) to pass the context to the privateMember function. This may not be necessary if you aren't using this in the privateMember function, and you could use privateMember.bind(this) to make it automatic. That's one of the interesting things about JavaScript: there are always multiple ways to achieve the same goal, none of them particularly better than the others, but some more standard than others.
Notice that the only thing that actually makes our publicMember public is that I’ve attached the function pointer to this.
Exposing Private for Jasmine
The easiest way I know of to expose the private member variables for Jasmine is to conditionally assign the private members to this if jasmine is defined.
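Inside the constructor, that might look something like this sketch (I'm using a typeof check, which mirrors the if(jasmine) test discussed later without risking a ReferenceError):

if (typeof jasmine !== 'undefined') {
    // Expose the private function only when running under Jasmine.
    this.privateMember = privateMember;
}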
As long as you don’t use the jasmine global variable for something other than jasmine, this should work.
And now you can test your private functions.
What about Spies?
If you are testing your private functions on their own, you'll probably need to place spies on them when you test the other functions in your application that call them. This is where things get just a bit interesting.
If we leave things as they are, and you place a spy on the function that we exposed, your spy will never get called. The reason for this is because of the way pointers work.
In our example above, our publicMember() function is going to call our privateMember() function regardless of how we manipulate the this.privateMember pointer. This is because, while the two variables point to the same function, they are still two different variables, and because of the way spies work internally, you'll end up changing the this.privateMember variable without impacting the call to privateMember().
We need to write a little extra code in our if(jasmine) block to make sure that after we've exposed privateMember(), the now-public version of privateMember() gets called by publicMember() instead of the private version.
To do this we are going to need to play "Towers of Hanoi" with our variables.
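Again, the original listing isn't shown here, but the rearrangement described below might look roughly like this inside the constructor (oldPrivateMember comes from the discussion; the surrounding details are illustrative):

var oldPrivateMember;

if (typeof jasmine !== 'undefined') {
    if (!oldPrivateMember) {
        // Hang on to the real implementation.
        oldPrivateMember = privateMember;
        // Expose it publicly, where a spy can replace it.
        this.privateMember = oldPrivateMember;
        // Re-route internal calls through the public pointer, so a spy
        // installed on this.privateMember is what actually gets invoked.
        privateMember = function(){
            return this.privateMember.apply(this, arguments);
        };
    }
}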
The gist of what this new code does is that it captures the pointer to the privateMember() into oldPrivateMember. Once we have that, we can make this.privateMember point to the original privateMember and then make our original privateMember point to a new method that calls this.privateMember, which is what our spy will call if we’ve set one up.
The if(oldPrivateMember) stuff is just protection code to make sure we don’t do this more times than we need and end up calling this.privateMember up the call stack multiple times until we finally get to the privateMember function we ultimately want to call. Depending on how you implement classes, you may or may not need this code.
You see variations of the question, “Why does JavaScript loop only use the last value?” on StackOverflow all the time. At work, the guy that sits next to me just ran into the same issue. And the answer to the question requires a solid understanding of closures and variable scope. Something I’ve written about in the past. But, when I went back and looked at that article, I was surprised that I had not covered this particular very common topic.
So, here is the basic scenario. You have some sort of for/next loop that then calls some asynchronous function. When the function runs, what you see when the code runs is that the last value of the loop index is the value that gets used in the function for every instance that it gets called.
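Here is a stripped-down version of the scenario (setTimeout stands in for whatever asynchronous call you are making):

for (var i = 0; i < 3; i++) {
    setTimeout(function () {
        // Logs 3 three times: every callback shares the same `i`,
        // and by the time they run, the loop has already finished.
        console.log(i);
    }, 0);
}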
I’ve written about Agile and Scrum before and most of my regular readers know that I am a huge fan. But recently I am starting to believe the Agile movement is doomed. In fact, the most common response to my enthusiasm for Agile and Scrum is, “Yeah, we tried that once and it was a complete failure.” Which seems odd to me because in every instance where I’ve been able to implement it, it has worked beautifully.
So why would I say Agile Will Not Succeed?
The buzz around Agile has become so loud that Agile has moved from strictly a software development thing, to all corners of the business world. And yet, as much as I believe Agile is the right way to develop software, as a movement, it is doomed for failure.
In my job as a JavaScript architect, trainer and mentor, I’m often asked, “What’s your favorite framework?” Or “What is the best framework?” And it surprises people when I give them two answers to that question.
Right now, of the frameworks I’ve looked at, my favorite framework is React JS. But if I were picking a corporate framework, at this point I’d probably land on Angular 2.0.
But the question you are probably asking is, "Why two different selections?" And I think a more interesting question would be, "How did you select which one to use?" In fact, when I was thinking about writing this post, I was going to title it "How to Choose a JavaScript Framework," but as I considered what I would actually say, I realized that the factors I would use really apply to any language at any time.
But an even more interesting question is this: what factors are essential when picking out a framework? And what would it cost to ignore them? So, I give you…
I’ve written about Test Driven Development before. I’ve even written about 100% code coverage before. And I haven’t written much about it recently because I’ve been focused on JavaScript. But, I’ve been thinking about the 100% code coverage debate more and I have a few more thoughts on the subject.
You see, the more I practice Test Driven Development, the more inclined I am to believe that there are only three reasons for arguing against 100% code coverage.
There is Something Wrong with Your Framework
This will be the easiest one for most people to accept. It isn’t so personal.
You see, I’ve been learning React JS and, as I’ve mentioned before, I decided to learn React AND learn to test it at the same time. The thing that has impressed me from the outset is that ALL the code that I write is testable. Where a lot of other frameworks are testable except for the View, React JS is ALL testable.
And this got me to thinking, if all the code you write is testable, why wouldn’t you write tests? In fact, as I wrote in “Test Driven Learning, an Experiment”, the process of writing the tests as I go has helped me understand React JS better than if I had not.
But compare this to other frameworks where the View is basically HTML. There is no really easy way to write tests for HTML. At least, none that I know about.
And then there are frameworks that seem to do all they can to make it hard to test. When I was using Ext JS 4.x, I spent two years looking for a way to make my code testable without having to have the View rendered because the way they had implemented “MVC” made loading the view mandatory. Talk about tight coupling! Fortunately, now that they’ve implemented MVVM, if you do this correctly, it solves these problems.
Another place where I found testing difficult was with Angular 1. Most of Angular 1 is quite testable. It was created with testing in mind. But as I was trying to add a decorator to the UI Grid component, I found that testing the decorator was quite difficult. This, I believe, said more about how the UI Grid component had been created than about how the Angular framework was put together. But this just illustrates my point. Sometimes, the reason you can’t test has more to do with the tools you are using than any other reason.
Then again, the problem may be you.
There is Something Wrong with Your Code
Now, arguably, in my last example, the reason I was not able to test the decorator for my Grid was because I was missing some fundamental concept related to testing decorators in general or how that related to the Grid.
The reason I say this is because the one thing I’ve noticed the more I test is this. The more I practice TDD, the easier TDD becomes.
As I introduce testing into the organizations I work with and as I’ve grown in my own TDD skills, the one thing I’ve noticed is that when we start out learning TDD, it almost always starts out as DDT. That is, Development Driven Testing.
This is, of course, better than not testing at all, but if you wait until after you’ve written your code, or you develop your code without thinking about how you will test it, you will almost always end up in a situation where you will have to rearrange your code to make it testable. Untestable code is probably the single biggest reason why code doesn’t get tested.
If you were able to make yourself write your tests first, you would be much more likely to write tests for everything you wrote.
This doesn't help you, though, if you've been tasked with writing tests for pre-existing code. Yours or someone else's. In this case, the best help I can give you is to recommend the book "Working Effectively with Legacy Code," where Michael Feathers illustrates how to handle a lot of the common scenarios he has run into with various languages and how to untangle the mess so that it can be tested. I will admit it is a tedious read, but there really is no better resource on the topic.
Lack of Experience
The final reason you might want to think that 100% code coverage is impossible is that you simply don’t have enough experience.
As I mentioned above, my own experience has been that the more I practice TDD, the easier it gets. When I started out, I struggled to write tests at all. Then I got to a point where I would at least attempt to write tests after I'd written some code. I'm now at the point where I'm writing tests as I code. Soon, I hope to achieve the ultimate goal of writing the tests prior to writing the real code. But even though I wasn't writing the tests first, I can still say that the tests were driving my development, because I knew that at some point I was going to have to cover the code with unit tests.
But as I've monitored the noise on the Internet about using TDD or not, and the discussion about how much of their code people should test, I wonder, "Just how long has this person been trying to test?" Along with that, I wonder, "Do they even want to test?" My dad used to say, "It is amazing how much I don't understand when it doesn't fit my plan." Let's face it, for most programmers, writing tests is not nearly as much fun as writing the application. If this is true, then aren't you already biased against writing tests for your application? Wouldn't you much rather write the app and toss it over the fence for someone else to test? I know I would.
Now combine that with the fact that testing is hard, and you have a recipe for excusing yourself from testing as much of your code as possible.
But, if you stick with it. If you make writing bug free code a personal challenge, you will find that the rewards are worth it.
What would it be like to be THE developer who was always working on new features because no one could find bugs in the features you programmed in the past? What would that do for your career?
The 100% Code Coverage Payoff
I want to conclude with another story that illustrates how writing tests paid off.
I've been working on a resource scheduling component for the last several weeks. The bulk of the logic is that if two resources are scheduled for the same time, I need to be able to display that there is a conflict. It sounds pretty straightforward until you look at all the various ways items can overlap. I've isolated the logic for this into a class that is quite testable, and I had created a test suite with about 400 tests when I was told that, along with that requirement, there was a particular set of conditions where what looked like a conflict wasn't really a conflict. I needed to show that there was an overlap, but I needed to display it in such a way as to indicate that it isn't a conflict.
As I sat down to add the new logic, I realized that the path I had been going down wasn't going to work well given this new scenario. What I really needed to do was some major refactoring. In fact, you might even say I had to rewrite most of the code I had in place. In the past, I would have been afraid to tear up all that I had done and start over, because it would have meant retesting everything I had already worked on … manually! But since I already had tests in place, I was able to 1) commit what I had done so far to version control so I could get it back if I needed it, 2) rip up what I had done and rewrite and refactor so that it would work well with the new requirement, and 3) retest with the tests I ALREADY had in place. I've added another 100 tests for the new scenarios, and I'm pretty confident that the code I've written does what it should and doesn't do what it shouldn't.
And that whole refactoring exercise took less than 7 hours.
Over the last several months we've looked at several different aspects of how JavaScript deals with objects. A few weeks ago, we looked at JavaScript Types and noted that many of the types are actually objects, while not all are. We've also looked at JavaScript Objects and JavaScript Object Fields. This has all been foundational information you need to understand prior to understanding how JavaScript prototypal inheritance works.
No Classes
If you are coming from an object-oriented background, the first thing you need to understand is that JavaScript doesn't have classes. Even though the class keyword was introduced in ES2015, there are still no classes. All the class keyword does for us is formalize what we've been doing for years while making JavaScript feel more like the other languages we know.
I'm not going to spend a lot of time dealing with ES2015 syntax here, for several reasons. First, it isn't fully implemented in the browser ecosystem yet. Second, most of what we do as programmers is maintain existing code, and there is a lot of existing code that doesn't use ES2015 yet. Third, ES2015 hides what is really going on. I want you to understand how JavaScript works, not just be able to churn out code.
So, if there are no classes, how does JavaScript achieve inheritance? By using the delegation pattern.
Delegation
In the object-oriented world you are probably coming from, you've probably heard the phrase, "Favor composition over inheritance." What that really means is, "Favor delegation over inheritance." So this shouldn't be a particularly new concept. When you create a class that contains other classes, once the class is instantiated and we need to call a function that the top-level class doesn't implement, we pass the call on to an object that the top-level object contains. This is delegation.
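A plain-object illustration of delegation (no prototypes involved yet; the names are just for the example):

var engine = {
    start: function(){
        return 'vroom';
    }
};

var car = {
    engine: engine,
    start: function(){
        // car doesn't know how to start itself; it delegates
        // to the object it contains.
        return this.engine.start();
    }
};

console.log(car.start()); // 'vroom'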
Now, remove the classes. All you have left are the objects those classes would have created. This is JavaScript. But instead of leaving the delegation to you, it provides a default delegation mechanism called the prototype. In fact, if you've ever inspected a JavaScript function in the debugger, you've probably seen the prototype field hanging off of it. The other place you'll see evidence of the prototype is in the __proto__ field that hangs off of every object.
Default Inheritance
Whenever you create a new object, using either an object literal or a function (or the class keyword), the prototype automatically points to a default, empty-looking object. It is this default object that gives all of our other objects the behavior of an object. Without it, none of our objects would have a default toString() implementation, for example. It is the default object that gives all other objects their object-ness.
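You can see that default object at work in the console:

var obj = {};

// The object literal delegates to the default object, Object.prototype...
console.log(Object.getPrototypeOf(obj) === Object.prototype); // true

// ...which is where behavior like toString() actually lives.
console.log(obj.toString());                  // "[object Object]"
console.log(obj.hasOwnProperty('toString'));  // false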
Constructors
Once your head stops spinning, come back and check this out. While we no longer have classes, we still need some way of stamping out objects that all look the same. We already looked at one way of doing this when we discussed JavaScript Objects.
And for most of the code we write, this is a perfectly adequate way of creating a constructor. By attaching the functions to the function’s prototype field, we can apply the functionality one more level up the tree, which gives us a certain amount of flexibility. The same code above could be written as:
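The original listing isn't reproduced here, but a sketch of the prototype-based version consistent with the discussion below would be (someProperty comes from the discussion; the method name is illustrative):

function A(){
    // State stays on each instance.
    this.someProperty = 'A';
}

// Behavior is shared one level up, on the prototype.
A.prototype.getProperty = function(){
    return this.someProperty;
};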
Notice that we didn't attach someProperty to the prototype. We want the state information attached to our object. If you did attach it to the prototype, all it would do is give the object a default value of 'A', and as soon as we assign 'B' to it, the property gets shadowed anyhow. If you were to use Object.defineProperty() to give someProperty a setter, which would remove the shadowing, you would also change the value for every instance of the object when you changed it from any instance. I suppose if you wanted to implement something that looked like a static variable, this is something you might attempt.
The key to remember here is that anything you do to the prototype is going to impact all current and future instances of the object.
JavaScript Prototypal Inheritance
By now, I hope you understand that all inheritance happens by delegation through the prototype. The next obvious question would be, “How do I make one JavaScript ‘class’ inherit/delegate to another ‘class’?” One way you might be tempted to implement inheritance is by assigning prototypes.
function A(){
}

A.prototype.foo = function(){
};

function B(){
}

B.prototype.bar = function(){
};

B.prototype = A.prototype;
But all this does is make B inherit from the same thing A inherited from. Not exactly what we wanted to see happen.
OK, you say. I know what to do, I’ll just create a new object of type A and assign THAT to the prototype of B.
B.prototype = new A();
You're closer, and it may work a lot of the time, but if the A function you are using to create that object does anything, you may end up with something you didn't expect. For really simple objects this will work, but it is a dangerous habit to get into.
What you really want to do is to use the Object.create() function. This creates a new object without calling the constructor function. No side effects.
B.prototype = Object.create(A.prototype);
But, what if that A constructor function did something important? In your B constructor function, you call the A constructor function passing it the current this pointer.
function B() {
    A.call(this);
}
If B takes parameters that need to be passed on up to A, you can pass those additional parameters after this in your call to call().
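For example (the parameter names are just placeholders):

function B(first, second){
    // Forward the constructor arguments to A, after the `this` context.
    A.call(this, first, second);
}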
And that is how we make one JavaScript object inherit from another. It is a lot of work. This is why ES2015 introduced the class and extends keywords; they do a lot of this work for us.