The best session of CodeMash 2015? It's all about the bliss.

by Seth Petry-Johnson 14. January 2015 19:40

2015 was my fourth CodeMash. Or maybe my fifth; the awesome sauce tends to run together a bit in my mind.

In previous years my mind has been expanded, contorted, exhausted and sullied by all manner of educational days and "socially lubricated" evenings. I've walked away with lots of new ideas and technical goals, but this was the first year that my spirit was moved by a session. For the first time in a long time, I remembered why I love programming so much.

I went into Cori Drew's "Geek Parenting Lessons Learned... so far" session hoping to learn a few practical tips for raising kids that enjoy, or at least understand, programming. Cori's 11-year-old daughter Katelyn is pretty damn amazing and I figured I'd learn a thing or two.

And I did learn a thing or two, but honestly the value of that learning is eclipsed many times over by what I felt during the session. I felt joy, and hope. 

Joy? Hope? At a technical conference?

Yup. Cori's passion for development was on full display, as was her love of the developer community and the enjoyment she gets from sharing those things with her daughter. She was truly infectious and reminded me of the early days of my own career when I'd work way into the night on a coding problem, searching for elegance amongst all of the curly braces and HTML tags. She reminded me how it felt to discover an aptitude for expressing myself through code patterns and syntax, and what it was like to discover a community of other people like me. She reminded me that what drew me to this thing was the coding itself, not the financial rewards or "leadership opportunities" that become the focus of a maturing career. She reminded me how much I love the craft of programming and sharing that craft with others.

And more than that, she was a she, talking about her own positive experiences and about a young girl finding a bit of that same joy.

You see, I spent the latter part of 2014 seeking out women to follow on Twitter. That gave me a front-row seat to GamerGate and a bunch of other proof that women in tech have a much different experience than my own. And this saddened me; it sucks to see people struggling to get in, or remain in, the industry I love. Specifically, it made me sad for my daughter's chances at following in my footsteps. Can I really encourage her to explore an industry that at best treats her as an outsider, and at worst will threaten to rape or kill her just for having an opinion?

So I feel like Cori's talk was exactly the bit of perspective I needed. It was refreshing and inspiring to see a woman sharing a positive experience. She reminded me that we're fighting this fight for a reason, so that eventually we won't have to fight any more. And so I shared in her joy, hopeful that budding programmers like Katelyn will do amazing things not only with the tech, but with the culture as well.

As usual, CodeMash was great. Good content, great people, lots of fun. But the best thing it gave me was the most unexpected: a vision of a future where my kids can, if they choose, follow a love of programming out in the open, surrounded by a supporting community, and not hiding in the back of the room on BBS systems like I did.

I'm looking forward to 2016, CodeMash, but you have some work to do. The bar has been set pretty high.

Six years in; do I get a gold watch yet?

by Seth Petry-Johnson 9. December 2014 18:02

This week marks my six-year anniversary with Heuristic Solutions. My longest stint prior to this was 3 years, so this feels pretty notable to me.

Sadly, no one brought me any cake today and I haven't received a gold watch yet. I'll attribute the lack of cake to the fact that I work from home and cake doesn't travel well in a USPS AirMail envelope, but last I checked watches won't spoil if left in the sun.... Anyways, I guess I'll have to mark the occasion myself by reflecting on my experiences here and turning them into some career advice tidbits. If you're looking to find a great place to settle down and grow, here's my advice:

Work for people you trust

It's a scary thing to not trust the people you work for, or to not know the content of their character or the direction of their moral compass. By contrast, it's incredibly liberating when you DO trust those people; it frees you up to take risks, to let your guard down, to align your interests for the betterment of both parties. 

Seek purpose, mastery and autonomy

Immediately leave any job where you don't have a sense of purpose, you're not encouraged to seek mastery in your craft, and you have no sense of autonomy in your work or environment. A programmer with all 3 of those is truly livin' the dream. A programmer without purpose, without growth or learning, and with no freedom of thought or expression is miserable.

Favor "career collaboration" over "employment negotiation"

Three times over the past six years I've considered leaving, and all three times I decided not to. Each of those decisions was made following an open, honest and transparent conversation with my boss about my concerns. And I'm not talking about the "give notice and then take the counter offer" game, I'm talking about a collaborative discussion about goals, wants, desires, frustrations and intentions.  

It's hard to express why this is so significant, but it makes me feel less like a "resource" and more like a valued employee. And feeling valued is one of the reasons I continue to stick around.

It's not about the tech

By and large the reason that software projects succeed or fail is People and Process. I think you'd be hard pressed to find a project with great people and good processes that failed just because someone used the wrong data binding library, or used inheritance over composition, or put their braces in the wrong place. It's important to stay abreast of new technologies, and it's important to maintain a wide breadth of experience, but New and Shiny != Success. Find a team with the right principles and you'll be successful with pretty much any tech stack*.

* Unless that tech stack is VB.Net, which was built with sin and unicorn tears to maximize Eye Bleedage and WTFs Per Hour. Nothing but darkness awaits you there.

If you never fail, you're not trying hard enough...

The great thing about working for people you trust is that you can take risks. You can try out a new role, roll the dice on an ambitious architecture choice, or push your team in a new direction. 

I've stayed in the same company for six years only because my role has changed significantly over time, and my role has changed because I wasn't afraid to try new things. If you're too busy covering your butt to try new things, you're in the wrong place.

... but if you fail often, you're not managing expectations

One of the most important skills I've developed over my career is "managing expectations". I can think of a number of projects where we missed some financial or calendar target but the client was still happy because we'd proactively managed surprises and given them options for managing change. Likewise, there were times that we hit the budget and deadline but the client wasn't happy because reality differed from their expectations. Success and failure have more to do with communication than they do with burning the midnight oil to release on time. 

Manage expectations by putting yourself in someone else's shoes, looking at the situation from their vantage point, and then doing whatever you can do to make them feel informed and empowered. This skill is crucial for building trust and credibility with a client or an employer.

No job is perfect, but you can always make it better

No one is going to call you up out of the blue and offer you your dream job. Ain't gonna happen; every job you get offered will have parts that are awesome and parts that suck. 

But that doesn't mean you can't make it better; I've been successful in my career, and at Heuristics particularly, by actively working to improve the things I didn't like. My team wasn't writing enough tests so I started writing helpers to make it easier; the tests weren't getting executed so I set up continuous integration; we weren't having frequent enough "lunch and learns" so I started scheduling them. 
 
If there's some aspect of your job that you don't like, change it! The worst case scenario is that you're unsuccessful, in which case you're no worse off than you are now. But far more likely, your efforts will result in a real and positive impact and you'll inch that much closer to creating your dream job. 
 
Remember: you can change the place you work, or you can change the place you work!

In conclusion

I feel blessed to have found a place so welcoming to me. I don't know what the next 6 years will bring, but I'm excited to find out.

Disorganization, procrastination, and the "zen desk"

by Seth Petry-Johnson 15. November 2014 16:14

I've been in my new house for just over a year. I have a nice little home office but the previous decor was terrible, so the first thing I did after moving in was rip up the carpet, re-paint, and buy new flooring.

You'll note that I said "buy" new flooring, and that I said nothing about installing new flooring. That's because I didn't actually get around to installing the floor; for a number of reasons, I just put that aside, set up my office on top of the bare wood subfloor, and went to work. Classy, right?

An interesting thing happens when you don't have a floor in your office. Since the floor looks like crap, there's not a lot of incentive to make anything else look nice. And over the course of the following year I worked in some pretty nasty conditions: papers everywhere, cables strung here-and-there, miscellaneous junk and empty Amazon shipping containers littering my bookshelves. It's very much like living in a "broken windows theory" experiment.

Fast forward to May of this year when I started taking over a lot more of the technical analysis tasks for my team. These aren't tasks that I necessarily enjoy doing, at least not for a sustained period of time, and I've found myself struggling to stay focused. Just moving my gaze from one end of my desk to another would turn up any number of bills to pay, still-barely-edible food items to snack on, or dusty/dirty items that suddenly needed cleaned right now who cares about that analysis deadline there are dust bunnies on the monitor!!!!!!

And then, just when I was most primed to be affected by it, the esteemed Cory House tweeted exactly the nudge I needed.

And suddenly I realized that by maintaining such a disorganized and messy office I was making it that much harder to stay focused and on task. I was basically surrounding myself with a thousand and one disruptions and making it so damn easy for my procrastinating brain to sabotage me. I was drowning, and it was my own hand holding me under the water.

And so I fixed it. I asked my wife to disappear with the kids for a weekend day, hired a little help and laid down the new floor, assembled a new desk, and organized the s!*t out of everything. I went so far as to hide every cable and piece of non-essential equipment out of view* so that my workspace is clean, uncluttered, and totally non-distracting.

Was my life magically transformed into a utopia of "Getting Things Done"-ness or was I equipped with the powers to call forth intense focus on command? Well, no. But I do feel less stress and I have been able to focus a little bit better. I can walk into my office, take a deep breath, and feel more relaxed than I ever could before. And that's meaningful.

So what about you? What does your home or office look like? You may not be in control of life, but you can at least be in control of your environment. Try it.

* It took a ton of work but I was able to hide from view 3 USB hubs, 1 router, 1 network switch, 2 external hard drives, 1 Vonage box, 5 USB charging stations, 8 USB cables, 6 network cables, 3 standard power cables, 4 surge protectors and more wall-wart power supplies than I have fingers to keep track of. It often also holds at least 1 cat and averages less than 1 pair of pants per workday because, well, WFH and YOLO.

Hide Browser Link ["arterySignalR"] traffic from Fiddler

by Seth Petry-Johnson 14. July 2014 19:09

Visual Studio 2013 added a new feature called "Browser Link" that allows Visual Studio to communicate with linked browsers in a two-way dynamic data exchange. 

This is a great feature, but it's implemented by a super-chatty SignalR script that is dynamically injected into the website. This can be a problem if you're trying to use Fiddler to monitor or debug some traffic - the requests you care about can easily get lost in a sea of ".../arterySignalR/poll?transport=longPolling..." entries.


So how do you clean up Fiddler, without disabling Browser Link? 

Fiddler Custom Rules to the Rescue

Fiddler supports custom rules that can be easily extended to hide "noise" requests like this. To hide Browser Link traffic:

  1. Open Fiddler. If not already installed, chastise yourself and wonder how you made it this far in your career. Install it posthaste.
  2. Click Rules -> Customize Rules
  3. Search for "OnBeforeRequest" and add the statement shown below
  4. Close and restart Fiddler, then bask in the sudden peace and tranquility of an "arterySignalR"-less proxy session.
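The statement itself is a short filter. Fiddler's CustomRules.js is written in JScript.NET, and a minimal version of the rule, assuming you simply want to hide every session whose URL contains "arterySignalR", looks like this:

    // Inside static function OnBeforeRequest(oSession: Session):
    if (oSession.uriContains("arterySignalR")) {
        oSession["ui-hide"] = "true";  // hide this session from the Fiddler UI
    }

Sessions flagged with "ui-hide" are still proxied normally; they just stay out of the session list.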

Core principles: your compass in the storm

by Seth Petry-Johnson 30. April 2013 04:33

Software development can be chaotic. We often need to make decisions based on missing data (or data we know is likely wrong), and it's difficult to ask outsiders for advice because the "right" answer is often context-dependent. In essence, successful software development depends on repeatedly selecting the least bad option from a set of imperfect solutions.

In practice, this means that developers cannot simply memorize solution patterns or "recipes". If I say "authentication" and you immediately think Forms Auth, then you're short-circuiting the selection process without evaluating the options. Same thing if I say "shorter feedback" and you immediately think "two week sprints". You can't make a good decision without evaluating your options, and just because you chose Solution A for a similar problem a month ago doesn't make it the appropriate solution to today's problem.

"Been there, done that" is not a decision making process! 

Making decisions is hard. The deeper you analyze a problem the more variables you identify, and the more variables you identify the harder it is to reason through the myriad ways they interact. It's so much easier to look at a problem, wait a few nanoseconds while the pattern matching functions of your subconscious mind do their magic, and then do the same thing that you did the last time you had a similar problem. After all, you tell yourself, it's the "pragmatic thing to do" because you don't have to "waste time" on analysis or research. "The devil you know", and all that.

Not so fast.

Pattern matching is a great heuristic for quickly identifying potential courses of action, but not for selecting the best one. Making the best possible decision requires greater attention to detail and greater appreciation of nuance. If you get the details wrong then it might seem like a good decision at a high level, but eventually you'll suffer death by a thousand papercuts. [Or you'll go broke under technical debt, etc. Insert your favorite metaphor here]

So how do we select from that set of imperfect solutions?

The key to making good decisions is to articulate your core values and principles, and then use them to derive a solution. Rather than memorizing specific solutions, memorize the steps you follow and the questions you ask to arrive at a solution.

For example, at Heuristic Solutions we have identified four core values that guide everything we do:

  • Understanding: we can't be successful unless we know what "success" looks like
  • Predictability: surprises are disruptive; we value procedures that minimize their impact
  • Productivity: success requires efficient operations
  • Quality: we value doing it right the first time; re-work is anathema to us

When making a decision, we frame it in context of these values to better see the trade-offs at play. For example, a low degree of Understanding means we can't be very Predictable, so we do more up-front analysis when predictability is crucial. When Productivity is necessary then we invest in Quality so that we can preserve velocity over time. 

This process forces us to consider those pesky (yet all-important) details specific to each situation. Sometimes this leads us to take radically different approaches to similar problems, but in each case we know we're maximizing for the things that truly matter to our success.

What are your core values?

What matters most to your organization? If you haven't already articulated your core values, take a minute to do so. Do you care about speed to market? What are you prepared (or not prepared) to sacrifice to get it? What does "quality" mean to you? How important are estimates to your planning process or stakeholders? Is it more important to maximize developer productivity, or team productivity?

When you're done, write them on your team board. Repeat them out loud each time you make a decision. Have discussions about which values are more important in each scenario, and then brainstorm ways to maximize those specific values. 

One parting word of advice: don't be afraid to follow your values, even if they contradict "best practices". While it's never a good idea to blindly ignore prevailing wisdom, realize that only YOU can fully appreciate the nuances of your specific situation. Core values are your compass, and by trusting them you allow yourself to select the best possible solution for this specific decision, ignoring "one size fits all" advice that might otherwise get in your way.

(Of course, if you frequently find yourself ignoring best practices then you might be thinking your situation is more unique than it really is. More on that in a later post!)

Bottom line: articulate what really matters to you, and then consciously and intentionally use those values every time you make a significant decision. You might be surprised at where this process takes you.

Architecture and design are negotiable; clean code is not

by Seth Petry-Johnson 29. November 2012 04:42

In a perfect world, each and every feature we build would be lovingly crafted, properly factored, elegantly architected and fully tested... and we'd have enough budget for all of it.

I'm not lucky enough to live in that world. My job is to help clients use their limited budgets in ways that maximize their overall business objectives. Sometimes that means minimizing software maintenance costs, other times it means getting an imperfect feature into production as fast as possible. These types of decisions always involve trade-offs, often sacrificing some sacred agile calves on the altar of "getting it done".

So what's a pragmatic craftsman to do? How can we intentionally leverage technical debt to meet short term goals and still maintain a high bar of general quality in our code?

The principle I use in these situations is that architecture and design are negotiable, but clean code is not. This is best explained by breaking it down into component parts:

  • Architecture is negotiable. Not every project needs an n-tier separation of concerns. Not every project needs DI/IOC. Same for message buses, impersonation frameworks, 2nd level caching and so on. These things are often valuable and should not be forsaken lightly, but they do have costs. A pragmatic craftsman should be able to articulate those costs and weigh them against their value over time.
     
  • Design is negotiable. By "design" I mean the low level feature code. Sometimes you can get away with a switch statement instead of a Strategy, or tight coupling or low cohesion or large method bodies. Same for violating SOLID principles. I'm not saying do these things lightly, but be pragmatic about it. Learn to identify scenarios when their benefits will be realized, and scenarios when they won't.
     
  • Clean code is NOT NEGOTIABLE. Sacrificing architecture or design can be forgiven if you make it easy for future programmers to clean up and improve. This means that no matter how "dirty" your architecture is, be damn sure your code is easy to read, clearly communicates your intent, and documents WHY you've made the decisions you have.

But aren't architecture and design part of "clean code"?

Absolutely. Clean design trumps comments explaining bad design every day of the week. But you will face scenarios when you have to trade away something in favor of something else (time to market, hitting a budget, risk aversion, etc). This blog post is all about identifying those parts of clean code that you can give up, and which parts you should fall on your sword to keep.

Here are some of the things I consider inviolable and strategies for preserving them:

  • Code should always be easy to read and understand. I don't care how nasty the architecture or design is, I don't care how stripped down the feature is, and I don't care what your budget is. You should always make it easy for the next programmer down the road to understand your intent (what you mean the code to do) and your implementation (what the code actually does).
     
  • The messier the architecture or design, the more you should document with comments. Well written clean code doesn't need a lot of "here's what I was thinking" commentary. That commentary is much more valuable when you're taking shortcuts and incurring technical debt, because it can make it much easier for someone to pay back that debt later. My rule of thumb is, do it right yourself or provide clues to help the next dev make it right later. (With a huge preference on the former!)
     
  • Keep your eye on the prize. In other words, always have an idea of what the end goal would be. Think about what you wish you could be implementing and try to "lean" the code in that direction. For example, think about what the very first step of refactoring would look like. Can you go ahead and take that first step now? (See the sketch below.)
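As a purely hypothetical illustration (the class, carriers and rates below are invented, not from any real project), a shortcut that still "leans" toward the end goal might look like:

    using System;

    public static class ShippingCalculator
    {
        // TECH DEBT: the end goal is a Strategy class per carrier. This switch
        // is a deliberate, budget-driven shortcut; extracting each case into
        // its own class is the obvious first refactoring step.
        public static decimal CalculateShipping(string carrier)
        {
            switch (carrier)
            {
                case "UPS":   return 7.50m;
                case "FedEx": return 9.00m;
                default:
                    throw new NotImplementedException(
                        "No shipping rule for carrier '" + carrier + "'");
            }
        }
    }

The design is "dirty" (a switch where a Strategy belongs), but the comment tells the next developer exactly why it exists and how to pay the debt back.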

Closing thoughts

Like many of my posts, I'm talking about edge cases. Most of the time we should be striving for clean code including architecture and design. Too many programmers (and clients!) are far too quick to take shortcuts, and this post is NOT about taking more of them.

But if you've fought the good fight and tried everything else first, and you still need a shortcut, then be sure you do it cleanly and in a way that can be easily fixed later.

Happy coding!

JOIN-less lookup fields using enums and metadata attributes

by Seth Petry-Johnson 19. September 2012 18:10

One of the projects I work on contains a large database with a lot of lookup fields containing status codes, record types, processing flags, etc. A great many of these are implemented in a typical normalized fashion, with two tables and a foreign key relationship: the main table's lookup column references a small lookup table that holds the human-readable name.

Pretty standard stuff, right? Sure, yet at the start of a new development phase last year I decreed Thou Shalt No Longer Do This!  

What's the big deal with lookup tables?

On this project (and on many of my others) I had noticed the following patterns:

  1. The vast majority of the lookup tables contained a single "Name" field containing a human-readable description of that status code or record type.
  2. Because the database is so large, a typical query might need to do five or six joins just to get the names of the lookup values.
  3. The values in the lookup table rarely changed. When they did change, it was always as part of a scheduled release.
In short, we were paying a performance penalty on each and every query to obtain unchanging metadata about a small, discrete set of known values.
 
In addition, dealing with these joins by hand was an annoyance whenever we needed to write manual T-SQL queries or express ad-hoc queries directly against the Linq to Sql data context. 

There's Got To Be A Better Way! ™

The solution that we implemented, and that we're still using nearly two years later, is simple:

  • All lookup-style data (status codes, record types, etc) have a corresponding C# Enum
    • A custom Attribute associates each value with a human-readable string
    • A custom Attribute associates each value with a database key representation
  • There are no lookup tables or foreign keys.
    • The domain model contains properties of the Enum types
    • In the database, each lookup field is a string, not an integer foreign key
    • When we write to the database, we convert the enum into its database representation and store that value
    • When we read from the database, we convert the stored string into an enum instance
  • The parsing and conversion are handled via extension methods:
    • String.ToEnum<T>
    • Enum.ToDescription()
    • Enum.ToStringConstant()
An example is worth a thousand words here.
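Below is a minimal C# sketch of the pattern. The StringConstantAttribute class and the OrderStatus enum are illustrative assumptions; only the three extension-method names listed above come from the actual project:

    using System;
    using System.ComponentModel;
    using System.Linq;

    // Associates an enum value with its database key representation:
    [AttributeUsage(AttributeTargets.Field)]
    public class StringConstantAttribute : Attribute
    {
        public string Value { get; private set; }
        public StringConstantAttribute(string value) { Value = value; }
    }

    public enum OrderStatus
    {
        [Description("Awaiting payment")]
        [StringConstant("AWAITING_PAYMENT")]
        AwaitingPayment,

        [Description("Shipped")]
        [StringConstant("SHIPPED")]
        Shipped
    }

    public static class EnumConversionExtensions
    {
        // Enum.ToStringConstant(): the value we write to the database.
        public static string ToStringConstant(this Enum value)
        {
            return value.GetAttribute<StringConstantAttribute>().Value;
        }

        // Enum.ToDescription(): the human-readable name.
        public static string ToDescription(this Enum value)
        {
            return value.GetAttribute<DescriptionAttribute>().Description;
        }

        // String.ToEnum<T>(): converts a stored string back into an enum value.
        // (Assumes every enum member carries a StringConstant attribute.)
        public static T ToEnum<T>(this string stored)
        {
            var field = typeof(T).GetFields()
                .First(f => f.IsLiteral &&
                    ((StringConstantAttribute)f.GetCustomAttributes(
                        typeof(StringConstantAttribute), false).Single()).Value == stored);
            return (T)field.GetValue(null);
        }

        private static TAttr GetAttribute<TAttr>(this Enum value) where TAttr : Attribute
        {
            var field = value.GetType().GetField(value.ToString());
            return (TAttr)field.GetCustomAttributes(typeof(TAttr), false).Single();
        }
    }

    // Usage:
    //   OrderStatus.Shipped.ToStringConstant()  ->  "SHIPPED"
    //   OrderStatus.Shipped.ToDescription()     ->  "Shipped"
    //   "SHIPPED".ToEnum<OrderStatus>()         ->  OrderStatus.Shipped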

Was it worth the effort? 

After nearly two years of use I'm pleased to say that this pattern has served us well. The extension methods make the lookup values easy to use, avoiding joins improves system performance, and storing strings (rather than foreign key integers) in the tables makes the raw data a little bit easier to use. 

Of course, your mileage may vary. This technique isn't appropriate if your lookup values are dynamic (rather than a fixed set) or if you need to track a large amount of metadata in the lookup table. But if your project has the same characteristics that mine does, I recommend you give this a shot.

Happy coding!

 

Appendix: the source code

I slopped the code for the attribute classes and extension methods onto my Github repo.

Avoid heroics; real value comes from discipline

by Seth Petry-Johnson 1. August 2012 19:11

Spend any amount of time in this industry and you'll eventually end up playing the hero. Maybe you meet that deadline by pulling a 70-hour week, or you fix that production issue by editing a script or database procedure directly on the server. You shipped the product, you fixed the bug, you "got the job done". You're a hero, right?

The only problem is, heroic behavior is dangerous. I've played the hero enough times to know what happens after the dust settles:

  • You pull a 70-hour week and hit the deadline, but the code sucks. It isn't tested, it has bugs, or it just feels like a half-assed feature. 
  • You hot-fix a file on the web server, but forget to update source control. The next deployment replaces your fix and re-introduces the bug.
  • You hot-fix the database server, and the next deployment crashes because a table or column already exists.
  • You burn out, lose focus, and make stupid mistakes.
The common pattern here is that you've achieved a short-term goal at the cost of highly unpredictable future results. Someone, somewhere, will have to clean up the mess when it catches them by surprise. 

In other words, you've created bad technical debt that is unintentional, hard to manage, and hard to quantify.

So what's the solution? 

It's certainly easier said than done, but the solution is to stay disciplined and stick to your process.

If that process says you write tests first and get QA feedback before committing to trunk, then that's what you need to do... even if it means missing a deadline.

If that process says you must create a formal release package to modify the production database, then that's what you do... even if it means taking longer to fix the bug.

Discipline yields predictability by forcing you to be proactive. It helps minimize future surprises and prevents you from becoming overly reactive, which can often lead to a cascading series of errors when you start jumping from fire to fire.

When to cheat

There are obviously exceptions. If the server is totally down, and you know of a quick fix to bring it back online, then maybe you should fix it. But if you've internalized these principles then you'll feel real damn uncomfortable doing it, and that discomfort will remind you to take the necessary "after-action" steps to pay back that technical debt immediately after the crisis passes.

Remember kids: avoid heroics. Real, lasting value comes from staying disciplined... especially when you feel pressure not to. 

Test Data Setup: Staying clean, DRY, and sane

by Seth Petry-Johnson 24. July 2012 18:27

There are many good reasons to avoid hitting a database in your tests. I agree with all of them, and I try my best to avoid doing it.

However, some tests do need to hit the database. Even the most dependency-injected and mock-infested system should hit the database when testing the data access layer... after all, what good is a test suite that doesn't test any of your actual data access logic? And if you're smart and follow the testing pyramid then you'll have some integration and acceptance tests that need a database as well.

In "Rules for Effective Data Tests" I mentioned some strategies for setting up those data tests. This post expands on those ideas and shows how to keep your setup code clean, DRY and maintainable.

What's so difficult about setting up a data test?

First, a definition. When I say "data setup" I'm talking about anything you do in the body of a test [or a setup method] to create the database records needed for a given test to execute.

While similar to the setup of a "true" unit test, interacting with a Real Life Database™ makes things a little more interesting. Some of the challenges we have to overcome are:

  • Test residue: Unless we delete it, data created by each test remains in the database when the test exits. At best this just wastes space; at worst, it starts to interfere with other tests. (See here for a common solution to this problem) 
  • Database constraints: Foreign key constraints are a real pain. When setting up test data you need to create the entire data graph to satisfy the database constraints, regardless of whether those relationships are actually relevant to the test.  
  • Verbosity: Because of the foreign key issues mentioned above, setting up data tests requires more code than setting up a unit test. This makes tests harder to write, harder to maintain, and harder to keep DRY. 
  • False negatives: The more complex the setup, the greater the chance that tests will fail not because your application logic is wrong, but because you screwed up the setup. 
  • Painful to debug: Debugging a data test is more difficult and time consuming than a unit test. Not only does the test take longer to run, but debugging it often means poking around in both the application debugger and a database tool.
A daunting list to be sure, but it's manageable.

Characteristics of good setup code

The primary contributor to the quality and maintainability of your data tests is the setup code; the easier it is for someone to understand the specific scenario you are creating, the better equipped they are to maintain that test.

Conversely, the harder the scenario is to understand and maintain, the less value that test will provide over time. Tests that contain an unintelligible jumble of setup code have a very real risk of being deleted (rather than fixed) if they ever break due to new code changes.

So what is "good" setup code? It should be: 

  • Highly expressive (high signal-to-noise ratio). Readers should be able to very quickly understand the scenario(s) you are creating without mentally parsing code. 
  • Highly reusable through the use of default values. If I just need to create a Person, let me call "CreatePerson()" and fill in the details for me. 
  • Easily customizable to each test's needs. Since the customized data are usually very relevant to the test at hand, it should be easy for a reader to spot them.  
  • Maintainable; databases change, and it's not uncommon to add a new required field. The fewer changes you need to make to existing test code to support these changes, the better.
These characteristics aren't specific to data tests, of course. They apply equally well to setup code of any kind.
 
So what happens when we apply these principles? Read on for specific suggestions...

Data Helpers: the Object Mother pattern for DB entities

The Object Mother pattern describes a special kind of factory class that encapsulates the instantiation of an object (or group of objects) in a specific state, usually mirroring a common scenario in the underlying domain. For instance, you might have an Object Mother that creates an Order object, adds some Order Items and marks it as Shipped. The goal is to turn a complex initialization process into a one-liner so that it is easier to read and maintain.

We can use this same approach in a data test, except that instead of constructing an object in code we need to create one or more records in the database. I call these classes "Data Helpers" and they generally:
  • Are static classes: These classes have no need to ever be mocked out, and making them static makes them easier to invoke in your tests. Omitting the need to instantiate them increases the signal-to-noise ratio and keeps setup code lean.
  • Follow a naming convention: It's important that other developers can discover and use your helpers, so follow an obvious naming convention. I recommend:
    • Put all Data Helpers in the same namespace
    • Name according to the primary entity being created. OrderHelper, CustomerHelper, etc.
  • Create a single "primary" entity: I find that Data Helpers are best focused around a single primary entity, such as an Order. It's fine if they create child or related data for the primary entity, but they should avoid creating a large number of collaborating entities. See below for how to use "scenario" objects for more complicated setups.
  • Treat performance as an important, but secondary, concern: Data Helpers provide their primary value by reducing the cost to create and maintain data tests, so whenever "speed of execution" and "ease of use" are at odds with each other, favor ease of use. That doesn't mean you shouldn't care about performance, and in fact you should care very much. Just not so much that you erode the overarching goal. You can easily offload the performance hit to the CI server.  (You do have a CI server, right?)
The methods exposed by a Data Helper class should (see the sketch after this list):
  • Use optional parameters wherever possible: A primary benefit of Data Helpers is dramatically increasing the signal to noise ratio within setup logic. Callers should only have to specify the specific values that are significant to their test; all other properties should be created using reasonable defaults.
  • Be semantic: Don't be afraid to create highly specialized methods, such as CreateOrderWithBackorderedItems(), which usually just delegate to a more general method with a specific combination of arguments. This can dramatically improve maintainability; if you add a new field to the database, and you can easily infer the correct default value based on the semantics of the method call, then you can implement that new field in the helper method without touching any of the existing tests.
  • Return the created entity: The caller probably needs to know about the data that was created, so return the entity object that you just created. 
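Putting those guidelines together, a minimal Data Helper might look like the sketch below. The Person entity, the TestDatabase gateway, and all the default values are illustrative assumptions:

    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public bool IsActive { get; set; }
    }

    // Hypothetical stand-in for your real data access layer:
    public static class TestDatabase
    {
        public static void Insert(object entity) { /* INSERT via Linq to Sql, etc. */ }
    }

    public static class PersonHelper
    {
        // Callers specify only the values significant to their test;
        // everything else falls back to a reasonable default.
        public static Person CreatePerson(
            string firstName = "Test",
            string lastName = "Person",
            bool isActive = true)
        {
            var person = new Person
            {
                FirstName = firstName,
                LastName = lastName,
                IsActive = isActive
            };
            TestDatabase.Insert(person); // persist so FK constraints are satisfied
            return person;               // return the entity for further use/assertions
        }

        // A semantic method: delegates to the general one with specific arguments.
        public static Person CreateInactivePerson()
        {
            return CreatePerson(isActive: false);
        }
    }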

Data Scenarios: a bunch of Object Mothers working together

Data Helpers are great when you need to create test data, especially if you want to specify a few key properties and use defaults for the rest.

But what if you want to create multiple related entities, or you want to reuse a specific setup in multiple tests? For instance, you might need a Customer with completed Orders in the past and an in-progress Order that's ready for checkout. In these cases, I create a special type of Data Helper that I call a "Data Scenario". 

Scenario objects have these characteristics (a sketch follows the list):

  • Create a large or complex set of data: Just like Data Helpers reduce individual object setup to a one-liner, Scenarios reduce multiple object setup to a one-liner.
  • Model real-world scenarios: The whole point of a Scenario is to encapsulate realistic data patterns that might exist in production.
  • Expose a smaller set of configurable defaults: Scenarios tend to expose fewer arguments than Data Helpers because they are better suited to creating general purpose groups of data rather than highly-specific records.
  • Are often used in fixture-level setup: A common pattern is for a group of tests to share a Scenario object that is created in the test fixture's setup routine, and then provide test-specific adjustments to the Scenario via inline Data Helper calls. 
  • Are instantiated, not static: Scenario objects are NOT static methods of a helper class. Instead, they are objects that get instantiated and perform their data manipulations in the constructor. This allows Scenarios to be created, manipulated and passed around as needed.
  • Expose pointers to the interesting data: A Scenario object should contain public properties containing references to the entities it creates (or at least their IDs). This allows test code to further manipulate the Scenario data or to make assertions against it. 
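Assuming Customer and Order entities with Data Helpers analogous to the PersonHelper sketch above (every name here is hypothetical), a Scenario might look like:

    // A customer with completed orders in the past and an in-progress order
    // that's ready for checkout.
    public class ReturningCustomerScenario
    {
        // Pointers to the interesting data, for further setup or assertions:
        public Customer Customer { get; private set; }
        public Order InProgressOrder { get; private set; }

        public ReturningCustomerScenario()
        {
            // Scenarios are instantiated, and the data manipulation
            // happens in the constructor:
            Customer = CustomerHelper.CreateCustomer();
            OrderHelper.CreateCompletedOrder(Customer);
            InProgressOrder = OrderHelper.CreateInProgressOrder(Customer);
        }
    }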

Common objections to these techniques

Some of the specific objections that I've heard are:

  • It takes a lot of time/code to write/maintain helpers: Yes, on a complex system you'll end up with a decent amount of non-production code implementing these helpers. And yes, it requires an investment of time to get started. But I've been using these patterns for two years on a large application and I'm absolutely convinced the effort is justified. Once you get a decent library of helpers set up it becomes really, really easy to write tests... sometimes even easier than setting up expectations in a true unit test!
  • The tests take a long time to run: Yes, they do. You should do your best to avoid hitting the database except when necessary, and you should lean on your CI server to run the whole suite for you. If you can find a way to test the data access code without hitting the database, I'll eat my hat.
  • It's hard to know what helpers exist: True, if you're not the author of the helpers then they are harder to use. That's why it's so important to follow good naming conventions. You can also, you know, talk to your teammates if you create a new helper or wonder if one exists.
  • I don't wanna: If you don't care about testing the data access code, or you don't care about writing good tests, then I got nothin'. Go play in traffic.
Let's face it: data tests suck, but they are a necessary evil. The goal is to maximize their value while minimizing their cost, and that's what these techniques do.

Closing thoughts

In my experience it works best to think of Scenarios as the broad context in which a test will execute; they create all of the background data that is necessary for a test to run, but isn't very significant by itself. Data Helpers are used to create specific data records that are significant to a specific test. Used together, they create a very rich language for setting up your test data in an easy to write, easy to read, and easy to maintain form.

I've been using these techniques on a multi-year, multi-developer, multi-hundreds-of-thousands-LOC project and I am convinced that they are directly responsible for allowing us to maintain high test coverage on a very data-intensive app. 

Happy testing!  

Defensive Programming: Avoid Tomorrow's Debugging, Today

by Seth Petry-Johnson 18. July 2012 04:38

Just as I was trying to write a good intro to this post, Jimmy Bogard tweeted about the frustration of troubleshooting code that gives you nothing to go on.

I've felt that frustration myself many times. I work on large software systems and often have to troubleshoot hard-to-replicate, data-specific defects given only an error message and limited access to the production environment. Turning this limited data into an actionable bug report can be very, very difficult.

This experience has shown me that there are two types of programmers: those that intentionally craft code that is easy to debug, and those that don't. Programmers that don't do this are, unfortunately, incredibly common and incredibly costly to an organization. Don't be that guy/gal whose code everyone hates to debug!

This post explains some coding techniques that will make your systems easier to troubleshoot and less costly to maintain. Use them; your team will love you for it!

What does "defensive programming" look like?

"Defensive Programming" refers to a collection of coding techniques that decrease maintenance costs by surfacing defects as early as possible, and by making them easy to troubleshoot. There are many articles on this topic, some arguing for and against it, and I encourage you to read them for additional insight.

Specifically, defensive programming means that you:

Write clean, simple, intent-revealing code

This is a universal requirement; I don't care if you're coding defensively, offensively or somewhere in the middle. The easiest defect to fix is the one that never occurs, and simple code is less likely to contain defects than complex code, so keep your designs as simple as possible.

(If you don't agree with this statement, stop reading and go play in traffic... your team will thank you!)

Assume inputs are tainted until proven otherwise

Most applications need data to function and many programmers make assumptions about their data, such as "this string will never be empty" or "this value will always be positive". 

Unfortunately, that string can be empty in some cases, and that value will be zero at some point in time. If you don't validate your assumptions before using the data then you risk intermittent, hard-to-troubleshoot errors. 

Therefore, do sanity checks on your input BEFORE you use it. Use a "design by contract" tool like Code Contracts for .NET if you can, or do it manually if you must. In any case, validate your input before you use it and display a helpful error message if validation fails. (See below for more on helpful exceptions)

In addition to making these errors easier to diagnose, treating all input as potentially hostile is also a security best practice. Sanity check your data and make both your teammates AND your security team a little happier!
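A minimal sketch of a manual sanity check (the RefundService class and its rules are hypothetical; a design-by-contract tool could express the same thing declaratively):

    using System;

    public class RefundService
    {
        public void ProcessRefund(string orderNumber, decimal amount)
        {
            if (String.IsNullOrEmpty(orderNumber))
                throw new ArgumentException("orderNumber must not be empty", "orderNumber");

            if (amount <= 0)
                throw new ArgumentOutOfRangeException("amount",
                    "Refund amount must be positive, but was " + amount);

            // Inputs are validated; everything below can trust them.
        }
    }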

Fail early, with useful messages

This is as important as it gets.

Imagine you get an error report that says "Sequence contains no elements". What do you do next? If you're lucky enough to get a stack trace then you can trudge through the code looking for the offending line, but what happens if the offending line contains multiple statements chained together? 

Now imagine the error report says "Could not obtain order items for order 1234; sequence contains no elements". You haven't looked at a single line of code yet, and you already have way more information about the problem!

Same goes for null reference exceptions: Would you rather see "Object reference not set to an instance of an object" or "Cannot calculate sales tax for order 1234; Tax Calculator object was null"?  

The key principle here is that you should anticipate errors that might occur and throw exceptions that provide key debugging info directly in the error message:

  • Help the programmer locate the statement that failed and understand WHY it failed.
  • Include key pieces of data needed to reproduce it: order ID, customer ID, etc. (Obviously, be careful not to expose identifiers that could compromise the security of your system!)
Ask yourself, "if this occurs in production 6 months from now, what pointers would I need to zero in on the problem?" and then include those pointers in the exception. 
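As a hypothetical sketch (the reader class and names are invented), wrapping a terse framework exception with those pointers might look like:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class OrderItemReader
    {
        public string GetFirstItemName(int orderId, IEnumerable<string> itemNames)
        {
            try
            {
                return itemNames.First(); // throws "Sequence contains no elements" when empty
            }
            catch (InvalidOperationException ex)
            {
                // Re-throw with the key debugging pointer (the order ID) included,
                // keeping the original exception as the inner exception:
                throw new InvalidOperationException(
                    "Could not obtain order items for order " + orderId + "; " + ex.Message, ex);
            }
        }
    }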

Use "fail safe" default values, where appropriate

In many cases, invalid data doesn't necessarily require an exception. For example, ask yourself these questions about each variable or statement you write:

  • Can I treat null strings the same as empty strings?
  • Can I treat null sequences (lists, arrays, etc) the same as empty sequences?
  • If parsing a string fails, can I substitute a default value instead of throwing an exception?
If the answer to any of these questions is "yes" then use the null coalescing operator or conversion helpers to convert null or invalid values into something less "exception prone". I rarely need to differentiate between null and empty sequences, so I've written a .ToEmptyIfNull() extension method (one possible implementation is sketched below) that I use whenever I need to iterate over a collection. Major reduction in null reference exceptions for negligible effort.
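One possible implementation of that extension method (the real one may differ):

    using System.Collections.Generic;
    using System.Linq;

    public static class EnumerableExtensions
    {
        // Null and empty sequences are treated the same: both iterate zero times.
        public static IEnumerable<T> ToEmptyIfNull<T>(this IEnumerable<T> source)
        {
            return source ?? Enumerable.Empty<T>();
        }
    }

    // Usage: iterate without a null check.
    //   foreach (var item in order.Items.ToEmptyIfNull()) { ... }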
 
Of course, sometimes you DO care about differentiating between null and empty, or ensuring a parse succeeds. In those cases just throw a helpful error message (see above) as soon as you detect the problem. 

"Future proof" your program flow

I've seen a lot of defects occur when business conditions change, and something that "could never happen" when the code was written suddenly becomes possible. 

Examples:

  • When you write a switch statement, always include a default branch. It's better to have the default branch throw an exception like "not implemented condition 'FOO'" than silently fall through and cause a potentially harder-to-debug error; see the sketch after this list. (Of course, you do your best to avoid switch statements, don't you?)  
  • When you have a chain of if/else-ifs, always include an else branch. If it should never be reached, throw an exception that explains the conditions that occurred and why you expected them to never happen.  
  • If you're dealing with combinations of different states or variables, and certain combinations "should never occur", go ahead and handle those combinations anyway. It's better to throw an exception you can control than to let the system fail on its own.  (For example, "Order 123 has status SHIPPED, but IS_CANCELLED was true; is the update service malfunctioning?")
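A sketch of the defensive default branch from the first item (the OrderStatus values and handler methods are illustrative):

    switch (order.Status)
    {
        case OrderStatus.Pending:
            ProcessPendingOrder(order);
            break;
        case OrderStatus.Shipped:
            CloseOrder(order);
            break;
        default:
            // A status added later (e.g. Backordered) fails loudly here
            // instead of silently falling through:
            throw new NotImplementedException(
                "Not implemented condition '" + order.Status + "'");
    }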

Go, make the world a brighter place! 

Using these techniques can help you avoid errors in production and can make it easier to resolve errors that do occur.  Using them will bring joy to the hearts of men and will make you beloved amongst your teammates. Use them; do it for the children.

Seth Petry-Johnson

I'm a software architect and consultant for Heuristic Solutions.

I value clean code, malleable designs, short feedback cycles, usable interfaces and balance in all things.

I am a Pisces.
