New Blog



For those of you who have been following this blog for a while, I wanted to let you know that I have created a new blog that I will be posting to more frequently.  You can find it here.

Hope you enjoy!

Data Integration Tests and Transactions - Part 2

2014-05-25_data_integration_tests_and_multiple_data_framework_transactions

Last time I talked about how to use the TransactionScope class to handle the rollback of any changes made during a data integration test. This time, I would like to talk about another issue that will eventually come up using transactions: using Entity Framework code-first in combination with any other data access framework while leveraging TransactionScope.

In our case, we use Dapper to insert data before our tests and to assert things about the state of the database after exercising the code-first Entity Framework data layer functionality we are testing. Below is our example.

private TransactionScope _scope;

[SetUp]
public void SetUp()
{
    _scope = new TransactionScope();
}

[TearDown]
public void TearDown()
{
    _scope.Dispose();
}

[Test]
public void test_to_demo_ef_and_dapper_connections()
{
    var person = new Person {FirstName = "Todd", LastName = "Meinershagen"};
    var sql = "INSERT INTO dbo.Persons (FirstName, LastName) ";
    sql = sql + "VALUES (@FirstName, @LastName)";

    using (var connection = new SqlConnection("a connection string"))
    {
        connection.Execute(sql, person);
    }

    using (var db = new PersonContext())
    {
        var matchingPersons =
            from p in db.Persons
            where
                (p.FirstName == person.FirstName) &&
                (p.LastName == person.LastName)
            select p;

        matchingPersons.FirstOrDefault().Should().NotBeNull();
    }
}

Both Dapper and EF establish their own connections, and we would expect that each connection would take part in the ambient transaction being created during the [SetUp] of our test fixture.

Normally, this is not an issue, but when the tests are run, we get a message similar to below:

MSDTC on server 'servername' is unavailable.

This can occur for any number of reasons such as the following:

  • Opening multiple connections with the same connection string to SQL Server 2005.
  • Opening multiple nested connections with the same connection string to SQL Server 2008.
  • Opening multiple connections to two different SQL Server 2008 instances.

In the case of our tests, we are making two connections (one for EF and one for Dapper) to SQL Server 2008 using the same connection string. Based on the guidance above, this should not force our system to escalate to MSDTC.

So, what is happening here?

After searching diligently on the internet (thank God for the internet!), I found an article explaining that Entity Framework quietly appends extra information to a code-first connection string so that Microsoft can collect statistics on what percentage of Azure Entity Framework users choose code-first as opposed to database-first. (Why, Microsoft, why?) This is supposed to have been fixed in EF 6.0.

So, instead of using the connection string as you specified it:

Data Source=(local);
Initial catalog=LocalDb;
Integrated Security=True;

The system uses the following for EF:

Data Source=(local);
Initial catalog=LocalDb;
Integrated Security=True;
Application Name=EntityFrameworkMUE

So, what does this mean?

Unfortunately, this causes the system to see the two connections (Dapper and EF) as having two different connection strings. It therefore looks like you are connecting to two different data sources, which escalates the transaction to MSDTC. What a pain!

So, how do we get around this issue?

One way would be to use EF for both production and test code, although this robs us of the benefit of quickly setting up data with a lightweight framework like Dapper. Another option is to modify our test project's configuration file to explicitly specify the Application Name in our connection string. The system will then see the two connections as the same. No more escalation to MSDTC!
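In the test project's configuration file, that fix might look like the following sketch (the server, database, and context name are illustrative; the Application Name value matches the one EF appends, as shown above):

```xml
<configuration>
  <connectionStrings>
    <!-- Specifying Application Name explicitly means Dapper and EF
         present identical connection strings, avoiding MSDTC escalation. -->
    <add name="PersonContext"
         connectionString="Data Source=(local);Initial Catalog=LocalDb;Integrated Security=True;Application Name=EntityFrameworkMUE"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>
```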

Hope this helps.

Data Integration Tests and Transactions - Part 1

2014-05-25_integration_tests_and_transactions

I have been writing some integration tests in .NET lately to specify behavior for my data layer. The issue that always comes up is how to make sure that each test is completely isolated from other tests. This requires each test to initialize needed data and at the end to clean up any data that was created so that other tests are not impacted by it. In the past I have set up compensating queries to delete that same data on tear down.

This can create two problems:

  • maintainability - the compensating cleanup logic is hard to maintain going forward
  • reliability - it doesn’t guarantee a successful rollback of the initial inserts, because the cleanup query can fail and leave stale data behind that breaks future test runs against the database

So, what do you do?

I have been using the handy TransactionScope class to allow any connections to participate in the ambient transaction and then dispose of the transaction without committing on tear down of the fixture.

In the example below, I have used NUnit, but this could work with other testing frameworks quite well.

[TestFixture]
public class MyFixture
{
    private TransactionScope _scope;

    [SetUp]
    public void SetUp()
    {
        _scope = new TransactionScope();
    }

    [TearDown]
    public void TearDown()
    {
        _scope.Dispose();
    }

    ///<summary>This is a silly sample test for display purposes only.</summary>
    [Test]
    public void given_context_when_something_happens_should_have_expected_outcome()
    {
        ExecuteSomeLogicForInsertingDataForContext();

        RunSomeActionToMakeSomethingHappen();

        AssertThatSomeExpectedOutcomeOccurred();
    }
}

You could make this an abstract base class and make the SetUp and TearDown methods virtual if you would like to reuse this across any of your data test fixtures. As long as you don’t call _scope.Complete() the changes you have made should be rolled back/aborted on the disposal of the transaction.
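A minimal sketch of that abstract base class might look like the following (the class and fixture names here are illustrative, not from the original post):

```csharp
using System.Transactions;
using NUnit.Framework;

public abstract class TransactionalFixture
{
    private TransactionScope _scope;

    [SetUp]
    public virtual void SetUp()
    {
        // Every connection opened during the test enlists in this
        // ambient transaction.
        _scope = new TransactionScope();
    }

    [TearDown]
    public virtual void TearDown()
    {
        // Disposing without calling Complete() rolls back all
        // changes made during the test.
        _scope.Dispose();
    }
}

[TestFixture]
public class MyDataFixture : TransactionalFixture
{
    // Data tests here run inside the ambient transaction
    // and are rolled back automatically on tear down.
}
```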

Hope this helps!

New Blog Series

2014-01-20_new_blog_series

As the new year is upon us, I have decided that I need to contribute more consistently to the community. And what better way to do that than to write more consistently on my blog?
In order to produce something a few times every week, I have decided to explore the tools that I use every day in greater depth - to look further into the nooks and crannies and make sure that I truly understand the capabilities of the tools that I leverage.
This idea came from the book "The Passionate Programmer" by Chad Fowler. In it, he suggests that "our industry tends to practice on the job." He then relates our craft to the art of playing music: "Musicians are paid to perform in public—not to practice." So why would we expect to do it differently in our profession?
It takes hours of practice in order to learn our craft and, unfortunately, too many are practicing on the job these days, because it is acceptable. You cannot learn everything that you ought to know as a software craftsman by merely writing code for production software. Focusing on certain aspects is needed in order to increase the quality and resilience of what we produce.
One of the areas that musicians focus on developing is physical coordination - the fundamental technical aspects of playing an instrument. They do this by playing scales across the full range of the instrument, building the muscles in their lips, developing calluses on their fingers, practicing dynamics with their diaphragms, etc. They cannot always be playing nice-sounding music - they have to play simple, focused, seemingly monotonous exercises to work on these fundamentals.
How can a software craftsman do the same? They can focus on looking at the full range of functionality available to them within a certain language, platform, or tool. And they can practice using these tools in private exercises known as katas or personal projects. That is what I seek to do with this series.
For starters, you may not have noticed it, but I am writing this blog now entirely in markdown syntax. In this post, I learned the following:
  • Images - add an exclamation point ‘!’ at the beginning, follow with brackets [alt text] containing some alternate text, then parentheses (url “optional title”) with a URL and an optional title in quotes.
  • Strong Emphasis - add two asterisks ‘**’ or two underscores ‘__’ on both sides of a word/phrase to render it in bold.
  • Block Quotes - add a greater-than sign ‘>’ to the left of each line and it will stand out as a quote.
  • Unordered Lists - add an asterisk ‘*’ before each line, which is translated into an unordered list item in HTML.
  • Inline Links - add brackets [text] containing the text to display and follow with parentheses (url “optional title”) with a URL and an optional title in quotes.
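As a quick illustration, those five constructs look like this in markdown source (the URL and image path are placeholders):

```markdown
![alt text](images/diagram.png "An optional title")

This phrase is **strongly emphasized**, and so is __this one__.

> This line renders as a block quote.

* First unordered list item
* Second unordered list item

Here is an [inline link](https://example.com "An optional title").
```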
You can find out more about markdown syntax by looking at this post from the Daring Fireball blog, which does a good job of explaining the basic features of markdown.
Hope it helps!

New Library for Non-Deterministic Testing

I have currently been working on creating a system-level, feature test for some functionality that our company is developing to allow hospitals and health care providers to request a batch of eligibility requests for patients using a combination of Visual Studio 2012, SpecFlow 1.8 and FluentAssertions.  

In doing so, I ran into the problem of non-deterministic time.  Once a user uploads a file of requests, there is not a determined amount of time for that request to get a response from a third-party provider that we use to fulfill the request.  

So how do we handle the fact that we need to make some assertions at the end of the upload?  What would be nice would be to either retry assertions that fail or keep trying an assertion in the event of failure for a given timeout period.  In this case, we need to know that we can get a response within 30 seconds.

Enter James.Testing.  (Check it out when you get a chance.  You can download the dependency to your project from NuGet.org here.)

James.Testing is a library of test utilities named after the author who wrote the book of James in the Bible.
"Dear brothers and sisters, when troubles come your way, consider it an opportunity for great joy. For you know that when your faith is tested, your endurance has a chance to grow." (James 1:2-3)
It's a fairly apt description of what testing ought to do for our applications as well.
Below is a description of the supported features.

Action Extensions

Executing an Action with Retries

Many times in integration tests, there is a non-deterministic time period between executing some initial action and asserting your expectations for the outcome. In this case, it would be nice to have a method that automatically retries your action a set number of times when the assertion fails. This method also supports setting a wait time between retries so that you don't overload your system.
Example:
var counter = 0;
const int times = 5;
const int waitTimeInSeconds = 1;
Action action = () => counter++;
action.ExecuteWithRetries(times, waitTimeInSeconds);

Executing an Action with a Timeout Period

In other cases, you might want to keep executing a given action for a given time period. For instance, if you have a requirement that a response arrive within 30 seconds, you can set the maximum timeout period for your assertions to 30 seconds. This method also allows a user to set a given wait time between executions of a given action.
Example:
var counter = 0;
const int timeoutInSeconds = 30;
const int waitTimeInSeconds = 1;
Action action = () => counter++;
action.ExecuteWithTimeout(timeoutInSeconds, waitTimeInSeconds);
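Applied to the eligibility scenario above, a hedged sketch might look like this (GetResponseStatus and the "Completed" status are illustrative stand-ins for the real system calls, not part of James.Testing; the assertion uses FluentAssertions):

```csharp
using System;
using FluentAssertions;
using James.Testing; // provides the ExecuteWithTimeout extension

// Illustrative stand-in for polling the third-party provider
// for the status of the uploaded batch of eligibility requests.
Func<string> GetResponseStatus = () => "Completed";

Action assertResponseReceived = () =>
    GetResponseStatus().Should().Be("Completed");

// Keep retrying the assertion until it passes or 30 seconds elapse,
// waiting 1 second between attempts.
assertResponseReceived.ExecuteWithTimeout(30, 1);
```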