This will be a short post. I'm writing it to document how I created a Node.js server that acts as an image proxy. I needed it to get around a limitation in HTML5's canvas implementation that prevents reading a loaded image's binary data when that image comes from a different web domain. That capability is very handy if you're building an image editor, so I had to find a workaround.
- I wanted to understand JavaScript, and I just didn't see how using a "simpler version" (my words) would make my life easier in the long run.
- If I DID use an intermediate language, I wanted to be able to dump it at any time and not feel like I was forced to continue using it.
- Putting one more thing with bugs in between myself and my code seemed foolhardy.
- It's less verbose JavaScript, not a different or simplified language.
- A couple of shortcuts enable you to use list comprehensions rather than error-prone for statements.
- CoffeeScript compiles to pretty awesome JavaScript. I wouldn't have any concern dumping CoffeeScript at any time because of this. It would also have put some great conventions in my JavaScript that I could follow.
- Eli Thompson is always right. (You should read his blog. He's smart: http://eli.eliandlyndi.com/)
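To illustrate the comprehension point with a toy example of my own: the CoffeeScript comprehension `squares = (n * n for n in [1..5])` replaces loop bookkeeping like this in plain JavaScript:

```javascript
// Plain-JavaScript equivalent of the CoffeeScript comprehension
// `squares = (n * n for n in [1..5])`. The counter initialization,
// bound check, and increment are exactly the error-prone bits the
// comprehension removes.
const squares = [];
for (let n = 1; n <= 5; n++) {
  squares.push(n * n);
}
// squares is now [1, 4, 9, 16, 25]
```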
What's GeekRations?
Tonight I launched my latest project, GeekRations (check it out at http://www.geekrations.com). It's a gift-of-the-month club for geeks that pulls weird and off-the-wall gifts from the hidden nooks and crannies of the internet and delivers them to you monthly. I originally envisioned it for people like myself who love receiving packages in the mail just for the surprise of what's inside. It also makes for an awesome gift for that geek in your life you don't know how to buy for.
Where We're At Right Now
Currently, GeekRations is taking emails from interested prospective customers. As soon as we're ready to start shipping gifts, you'll be notified so you can sign up for the service. Visit http://www.geekrations.com and sign up to be notified once we're taking orders!
Geeky Details
GeekRations is a lean startup in the purest sense of the term. The purpose of the landing page was to see if anyone even cared about this business idea. Apparently people do, so the business idea will be moving forward. Furthermore, GeekRations has an A/B test running on the splash page wording. One version is pretty straight-faced and very plain in describing our service, while the other tries to be a little looser and sillier. I'll reveal the results once I've aggregated enough data to identify a clear winner.
- Just don't test it. You use JavaScript templating and keep the interactions simple enough that they're low risk; it's never proven to be a huge issue.
- Write a JavaScript unit test, write your jQuery code, then verify your jQuery interactions by using jQuery to test the DOM.
- Write some jQuery, load the web page, and manually test it each time you make a change.
- Write your web page and test it using Selenium after the fact.
- Just don't test it. It would be valuable for you, but you just don't have the time.
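For what option 2 might look like, here's a rough sketch (the function name and the stubbed element are my own illustration): keep the interaction logic in a plain function so a unit test can exercise it with a stub before you verify it against the real DOM with jQuery.

```javascript
// Sketch of "unit test first, then verify against the DOM." The interaction
// logic lives in a plain function that takes the element, so a test can pass
// in a stub object instead of needing a browser.
function toggleError(el, isValid) {
  // Real code might use jQuery here: $(el).toggleClass('error', !isValid)
  el.className = isValid ? '' : 'error';
  return el;
}

// Unit-test style usage with a stub element standing in for the DOM node.
// In the browser you'd then assert with jQuery itself, e.g.
// ok($('#email').hasClass('error')).
const stub = { className: '' };
toggleError(stub, false); // stub.className is now 'error'
```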
A friend (James Thigpen) issued me a challenge today: let's try to do the String Calculator kata without a single if statement. My last blog post was about wanking code (aka code cuddling), so this seems an appropriate balance. ;D
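Here's one rough if-less sketch in plain JavaScript (covering only the basic comma/newline cases of the kata): split, filter, and reduce take the place of conditionals.

```javascript
// String Calculator kata, no `if` statements: a regex split handles both
// commas and newlines, the filter drops the empty token from an empty
// input, and reduce accumulates the sum.
function add(numbers) {
  return numbers
    .split(/[\n,]/)
    .filter((token) => token.length > 0)
    .map(Number)
    .reduce((sum, n) => sum + n, 0);
}

// add('')       → 0
// add('1,2')    → 3
// add('1\n2,3') → 6
```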
First off, I love beautiful code and have been known to fixate on it so this article is a formalization of what I think to myself every time I start to get religious over coding quality.
Code quality is an oft-discussed yet poorly defined topic among programmers. Ask 100 different developers what "quality" means to them and you'll receive 100 different answers, ranging from "quality code is code that can be easily changed and understood" to "quality code is hard to define, but following the SOLID principles is a good start" to "it's more of an art that's hard to define." Maybe you're thinking these are too abstract, and that quality should refer to reducing costs and reducing bugs. Sure. Maybe. Ultimately, however, all of these definitions of "quality" skirt the elephant in the room.
On commercial projects, high quality code will help enable my company to maximize its profits.
When I get in a heavy debate over whether or not someone is really "unit" testing or just "integration" testing, nowadays I ask myself (and then my sparring partner) "Is this why we can't deliver software?" Put another way, "Is this the most critical obstacle in the path of my company making more money over the short and long term?" The answer is usually no.
When it is no, I have to suck up my ego and walk away from the discussion since I've admitted there's limited value to be had. Note, there's some value, especially if my purpose is getting on the same page as my team.
When the answer is yes, as it oh so rarely is, I can make a bold statement, provided I can concretely share WHY this issue affects the company's performance more seriously than every other concern. Here's an example:
Imaginary dev: "I don't have time for automated testing so get off my back about it."
Me: "Automated testing is the single most critical thing we can do to drive our company's profit. Our testers' time is so limited that they can't possibly catch every bug, which means bugs will make it to our customers, who will slowly lose faith in our product with every issue they find. We can't afford to test everything manually, so we absolutely have to run automated tests."
While perhaps not bulletproof, that's a strong argument. What would an even stronger counterargument look like?
Imaginary dev: "If I can do what's worked for me in the past and just get this feature done by this hard deadline our customer will pay us a $3 million bonus. Our business customers have already decided and agree that even if we do nothing but review the code I have written for two weeks after the deadline, it will still have been extremely profitable for us to take this measured risk."
Do you believe in "quality" so much that you would ask your team to stop a dev you've seen consistently deliver results from bringing in a $3 million payday? If you do, what's your number? If you don't have one, then you're not in this business for the business.
Hi, my name is Justin. I'm a recovering code wanker.
How I Came Across A Real Live Data Scientist!
Answers Are Easy, Asking the Right Questions Is Hard
Challenges that led me here
- Heated arguments at work regarding how much TDD is enough and how little is too little. How do we find common ground?
- An acknowledgement of technical debt and a confusion about how to leverage it. How much debt is too much?
- Being labeled as pedantic and a zealot. Is a Zero-Defect Mindset ever worthwhile? When?
- A learning exercise in how we can gain concrete insights from our intuition in a methodical fashion. How can I rigorously communicate abstract ideas without concrete evidence?
This article represents my lessons learned from this exploration.
Making the Abstract Concrete
It was a normal day at work: another co-worker and I were strongly and passionately arguing for the benefits of strict, pure, clean TDD against a couple of equally passionate co-workers who were sold on the idea of everything in moderation. Having just completed a four-month, full-time Agile immersion with an amazing albeit very idealistic consultant, I found his ideas about a zero-defect mindset, and the claim that it was practically achievable, seductive. I had entertained my own idealistic fantasies for a while without ever really thinking they could or should be taken so seriously.
It was liberating.
Also, it was isolating. Having these thoughts and that excitement placed me on one extreme of a continuum, with many of my teammates on the other side or somewhere in the middle, nearer the side of limiting TDD in the name of practicality. Conversation after conversation, debate after debate, we ended in the same place, perhaps even galvanized a bit by the disagreement and a bit further from finding common ground.
I finally came to understand that regardless of what I knew to be right, everyone on my team had their own perception and their own knowledge of what was right as well. That's not sarcasm. In social interactions there are multiple realities and all of them need to be appreciated and considered valid enough to be worth understanding.
How could I model my perception of reality in some sort of a concrete way that would enable me to make rigorous (albeit somewhat subjective) predictions? How could I ensure my mental model was at least self-consistent and workable? Like any self-respecting geek, I decided the best way to model uncertainty was to run thousands of simulations and projections of reality to see what lessons could be gleaned.
Finding Common Ground in a Common Purpose
The first decision I had to make was figuring out the underlying metric I would use to compare the two development methodologies. Having been just recently introduced to systems thinking and the Theory of Constraints, I thought a great start would be to use the value throughput of the simulated companies.
But what is value? When we speak of delivering value to our business customers, what is it we are actually delivering? In discussions with my team, we decided that business value is best seen as the present-day value of your company were it to be valued by an external party. For the purposes of the simulation, I assume the value delivered by a completed story is a random number drawn from a value distribution, assigned without regard for feature size. That's right: a feature that takes next to nothing to develop may create an enormous amount of value for the company.
For further assumptions and specifics of my model, read on.
- User Story- In this simulation, a User Story is the smallest unit of work that the Product Development Team can work on that provides the slightest bit of business value. They also have an associated size.
- Business Customers- Generate a random set of stories each iteration; each story's value and size are randomly assigned (from discrete distributions) upon creation.
- Product Backlog- Repository for all stories. New stories are all added as a top priority in the order delivered. Bugs are randomly dispersed into the Product Backlog when they are received.
- Product Development Team- Anyone and everyone responsible for getting the release out the door. This includes programmers, testers, technical writers, etc. The Product Development Team iterates over the Product Backlog and works to complete stories. They are also the ones deciding the cost of the various stories. Over time the speed of their work can increase if a range (minimum velocity to maximum velocity) greater than 1 is specified on construction. The function controlling the team's performance improvement is an "experience curve," as documented here: http://en.wikipedia.org/wiki/Experience_curve_effects Without getting too into it, the experience curve essentially models the decreasing cost of development over time.
- End Users- Who the Product Development Team releases to. Because the Product Development Team includes *everyone* needed to release the software, the End User may receive the software immediately afterwards. End Users discover bugs in the software. This is currently set to a constant rate per story per iteration. So if the defect rate is 1%, then a team with a hundred stories complete can expect to have, give or take, one story per iteration reenter the Product Backlog as a new Bug Story. The size of the Bug Story is randomly determined based upon a discrete bug size distribution.
- Bug Story- A Bug Story is a story that is focused on fixing a defect in the software. These stories are unlike normal stories in that they have no real value for the team and thus don't improve throughput. A Bug Story actually represents more of an opportunity cost as valuable work could be done in its place if the Bug Story hadn't needed to be written.
- Support Team- Who the bugs are reported to. Currently only really used to track the total bug count. Could be used in the future to eliminate bugs due to "user error".
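The moving parts above can be sketched as a toy Monte Carlo loop. Every number, distribution, and name here is an illustrative assumption of this sketch, not the actual model:

```javascript
// Toy sketch of the value-throughput simulation. Business Customers add
// randomly sized/valued stories, End Users convert completed stories into
// zero-value Bug Stories at a constant defect rate, and the Product
// Development Team burns down the backlog up to its velocity.
function simulate({ iterations, storiesPerIteration, defectRate, velocity }) {
  const backlog = [];
  let completed = 0; // stories shipped (bug fixes carry no value)
  let value = 0;     // accumulated business value

  for (let i = 0; i < iterations; i++) {
    // Business Customers: new stories, value independent of size.
    for (let s = 0; s < storiesPerIteration; s++) {
      backlog.push({
        size: 1 + Math.floor(Math.random() * 5),
        value: Math.random() * 10,
        isBug: false,
      });
    }
    // End Users: completed stories re-enter as bugs at the defect rate,
    // dispersed randomly into the backlog.
    const bugs = Math.floor(completed * defectRate);
    for (let b = 0; b < bugs; b++) {
      backlog.splice(Math.floor(Math.random() * backlog.length), 0, {
        size: 1,
        value: 0,
        isBug: true,
      });
    }
    // Product Development Team: complete stories until capacity runs out.
    let capacity = velocity;
    while (backlog.length > 0 && backlog[0].size <= capacity) {
      const story = backlog.shift();
      capacity -= story.size;
      value += story.value;
      completed += story.isBug ? 0 : 1;
    }
  }
  return { value, completed };
}
```

Running `simulate` many times for a low-defect team and a high-defect team, and comparing the spread of `value`, is the whole experiment in miniature.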
- Lowering the defect rate, even at the cost of reduced performance, results in higher value throughput over the long term. Over the short term, however, lowering the defect rate is hardly ever optimal.
- A higher defect rate results in a much higher spread of possible value throughput... in other words there's a higher variance in what you can expect in terms of value output from a product development team.
- Every development team has a point where the highest possible testing and quality rigor begins to outperform the less rigorous teams. The trick is identifying where this begins to happen for your particular company or project.
- How do we find common ground? Share our assumptions and make them explicit. Codify them so that they can't be conveniently shifted when the arguments get uncomfortable.
- How much debt is too much? I didn't model technical debt in terms of needed refactoring, just in terms of defect likelihood. Too much debt is when you spend more time paying maintenance costs than delivering value.
- Is a Zero-Defect Mindset ever worthwhile? When? Yes it is, when you have set a goal of a sufficiently long lifetime for your product.
- How can I communicate abstract ideas without concrete evidence in a rigorous manner? Hopefully I just did.