I've used JetBrains' TeamCity continuous integration server on a couple of projects that I've worked on recently, and I really like it. It's more approachable and flexible than the TFS Build system, which is kind of hung up on the whole MSBuild thing, and to be honest, if I wanted to write all my tasks as mind-bogglingly verbose XML, I'd use NAnt. TeamCity plays nice with all manner of build and test tools, version control systems and so on. It's also free to use the Professional Edition; you only need to buy a license when your requirements exceed the generous limits (20 build configurations etc.). The only problem is that there is no hosted, as-a-Service provider as yet, so I decided to see if I could get it running on one of the new Windows Azure Infrastructure-as-a-Service VMs.

Initial setup

Getting things started was easy enough. I went to the Azure portal and created a new VM from the default Windows Server 2012 image; I used a small instance, with a single core and 1.7GB RAM. After waiting a few minutes while it was provisioned, I logged in via Remote Desktop, disabled Internet Explorer Enhanced Security Configuration, downloaded the TeamCity 7.1 installer and ran it. I set up the build server and a build agent on the same machine, just to see how it would cope with the limited resources. To make it accessible from the internet, I configured it to run the server on port 80; I had to open that port in the Windows Azure management portal and in Windows Firewall, as all ports except the RDP one are closed by default on Azure VMs.
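For anyone doing the same, the Windows Firewall half of that can be done from an elevated command prompt rather than clicking through the GUI; a one-line sketch (the rule name is just my own label):

```shell
netsh advfirewall firewall add rule name="TeamCity HTTP" dir=in action=allow protocol=TCP localport=80
```

The endpoint for port 80 still has to be added separately in the Azure management portal; the firewall rule only covers the VM side.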

It ran right away with no problems, but it was using its internal database, which is not recommended for production; you get a message box telling you to configure a proper external database. Here begins the fun.

Database setup

TeamCity supports MySQL, PostgreSQL, Oracle and SQL Server. That's the order of preference: the documentation recommends using MySQL unless you absolutely can't for some reason. I guess this is due to its Java heritage. I could have set up another VM running Linux and MySQL, but that's getting kind of expensive, so I decided to have a go at getting it working with a Windows Azure SQL Database (formerly known as SQL Azure). This is completely unsupported by JetBrains, but as we all know, unsupported doesn't mean impossible.

I created a 1GB SQL Database through the portal, and followed the instructions for setting up an external database. The jTDS driver didn't seem to get on with Azure for some reason, so I used Microsoft's native JDBC driver. That allowed TeamCity to connect to the database, but when it tried to run its migration to set up the schema, incompatibility disaster struck. Windows Azure SQL Database requires a clustered index on every table. Most databases have a primary key on every table, which SQL Server backs with a clustered index by default, but the TeamCity database has a couple of tables with no primary key. The database setup appears to be compiled into the application somewhere; I couldn't find any SQL scripts in the installation, so I couldn't tweak anything.
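If you're trying the same thing, the external database settings live in config/database.properties under the TeamCity data directory; with the Microsoft driver, mine looked roughly like this (server name, database name and credentials are placeholders — note that Azure requires the user@server form):

```properties
connectionUrl=jdbc:sqlserver://yourserver.database.windows.net:1433;databaseName=teamcity
connectionProperties.user=teamcity@yourserver
connectionProperties.password=yourpassword
```

The JDBC driver jar itself goes in the data directory's lib/jdbc folder.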

I tried manually modifying the database to include clustered indexes and continuing the setup, but it wasn't having it, so I went to plan B. I installed TeamCity on my workstation and set it up with a database on my local SQL 2012 instance, which worked without any problem. Then I used the SQL Azure Migration Wizard to copy that local database up to Azure. The Migration Wizard is a fantastic tool which automatically makes the necessary changes to schema as it copies the database, including creating clustered indexes. It also copies data using BCP, so all the standard configuration and user data was copied too.
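For reference, the manual fix I attempted (and which the Migration Wizard applies automatically) amounts to one statement per offending table; the table and column names here are illustrative, not TeamCity's actual schema:

```sql
-- Azure SQL won't accept inserts into a heap (a table with no clustered
-- index), so each table lacking a primary key needs something like:
CREATE CLUSTERED INDEX IX_some_table_clustered ON some_table (some_column);
```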

Collation hell

Once I'd copied the database up, I tried again with the Azure-hosted installation, and it mostly worked, but a couple of features kept crashing. Checking the logs revealed that old chestnut, the collation error. I'd created the Azure database with the default collation, SQL_Latin1_General_CI_AS, but TeamCity wanted Latin1_General_CI_AS and wouldn't budge. So I dropped the database and recreated it with that collation, and ran the Migration Wizard again. Happy with the new collation, TeamCity started working perfectly.
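The recreation itself is a one-liner; the collation has to be specified when the database is created, since it can't be changed afterwards in Azure (the database name is my own):

```sql
-- TeamCity expects Latin1_General_CI_AS rather than the
-- SQL_Latin1_General_CI_AS default:
CREATE DATABASE teamcity COLLATE Latin1_General_CI_AS;
```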

I hope that JetBrains will consider making the few changes necessary to support Windows Azure SQL Database as a database server in the future. There is an issue for it on the TeamCity YouTrack, but currently there appear to be no plans to address it.

I may put a script-dump of the database with the relevant changes somewhere for people to use, to avoid this slightly tortuous route. The only wrinkle is that the user account has to be created in the local database first, but I could set one up with a default administrator username and password. Let me know in the comments if you'd find such a thing useful.

Setting up a build

I wanted to test the system with a proper project with a good number of unit tests, so I used Simple.Data. TeamCity will pull from GitHub, and you can configure it to check for updates regularly. It will connect using HTTPS with your username and password, but I prefer using SSH, so I installed msysGit on the Azure VM and configured it with an SSH key. I'd set TeamCity up to run with a local user account, so I just copied the .ssh folder into that user's home folder. With that set up, it was able to pull the Simple.Data repository with no problem.
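Since TeamCity was running under a dedicated local account, getting the key into place was just a matter of copying it into that account's profile; roughly this, with both usernames illustrative:

```shell
xcopy /E /I "C:\Users\mark\.ssh" "C:\Users\teamcity\.ssh"
```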

The main Simple.Data project has a bunch of unit tests and integration tests that all use NUnit, for which support is built into TeamCity. The unit tests all ran fine, but the integration tests for SQL Server and SQL Compact 4.0 required some additional setup. For the SQL Compact tests to run, I had to install the redistributable on the VM, which was straightforward enough. For the SQL Server tests, I created another Azure SQL Database and tweaked the database creation script (which runs for every test run) to work; again, the changes were mainly to do with primary keys and clustered indexes, but also involved removing references to filegroups and options that are not valid in the Azure environment.
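The filegroup edits were mostly deletions. As an illustrative example (not the actual Simple.Data script), a table definition like this:

```sql
-- Before: valid on premises, rejected by Azure SQL
-- CREATE TABLE Users (Id int NOT NULL, Name nvarchar(100)) ON [PRIMARY]

-- After: drop the filegroup clause and make sure there's a clustered key
CREATE TABLE Users (
    Id int NOT NULL PRIMARY KEY CLUSTERED,
    Name nvarchar(100) NOT NULL
);
```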

The last problem was managing the connection string. I don't want to leave a valid connection string for an Azure database in the source code of an open source project. Fortunately, TeamCity can set environment variables for the duration of a build/test cycle, so I tweaked the test code to check for an environment variable and use its value as the connection string if it was present. That meant I could put the connection string in the build configuration. I also set one test to be conditionally ignored when the environment variable was detected, because it tested named connections and wouldn't work in that setup.
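The fallback itself is tiny; here's a sketch of the pattern (the class name and environment variable name are illustrative, not the actual Simple.Data test code):

```csharp
using System;

// If TeamCity's build configuration sets the environment variable, the
// tests use that connection string; otherwise they fall back to the
// local developer default checked into source.
public static class TestConnection
{
    private const string DefaultConnectionString =
        @"Data Source=.\SQLEXPRESS;Initial Catalog=SimpleDataTest;Integrated Security=True";

    public static string ConnectionString
    {
        get
        {
            var fromEnvironment =
                Environment.GetEnvironmentVariable("SIMPLEDATA_TEST_CONNECTION");
            return string.IsNullOrEmpty(fromEnvironment)
                ? DefaultConnectionString
                : fromEnvironment;
        }
    }

    // Lets the named-connection test skip itself when running on the server.
    public static bool IsOverridden
    {
        get
        {
            return !string.IsNullOrEmpty(
                Environment.GetEnvironmentVariable("SIMPLEDATA_TEST_CONNECTION"));
        }
    }
}
```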

With that done, I had all but three tests passing; the three failures were due to the Microsoft SQL Server CLR types assembly not being in the GAC on the VM. I copied the relevant assembly into the project and set the reference to Copy Local=true, and voilà! All 800 tests pass!


Considering that the spec of the Azure VM is considerably below the recommendation for TeamCity, and that the server and a build agent are running on the same box, the performance is surprisingly good. A build takes less than a minute and a half, and a good chunk of that is running the integration tests. In terms of using the TeamCity web application, the performance is fine; having the database off the server probably helps there. I've enabled guest access, so you can take a look for yourself: teamcity.cloudapp.net (there's a "Login as a Guest User" link at the bottom of the login window).

Production use

I'm intending to run TeamCity in Azure for proper production use, including continuous deployment to an Azure hosted service, in the near future. There's only one change I intend to make to the setup I've got now, and that's to have a separate build agent, which shouldn't be very difficult to set up.

When I shared my experience with an Azure mailing list I'm on, someone else said they had got TeamCity running on an Ubuntu Linux VM with its own MySQL database, and a Windows build agent, so that's another option. And of course, if your project is entirely Linux-based, you can have a Linux build agent too.