Last night I put an experimental project up on GitHub: Beeline. It tries to improve the performance of "getting data out of a database and turning it into JSON". Some explanation is provided in the README on that repo, but here is some more.

[Photo by CSIRO, CC BY 3.0]

The "Problem"

For a while now, I've been building applications using "microservice" patterns, which means, erm, whatever you want it to mean. I want it to mean "deconstructing a larger application handling multiple concepts into lots of little applications, each one handling a single concept". One thing I do try to stick to is that each service has its own data store, and the only way of accessing that data store is through the service (although for reasons of economy when I'm in the early stages of development, that can mean multiple Postgres or MSSQL databases on a single server; the point is that I can very easily scale out later). These services are responsible for receiving data, processing it accordingly, storing it somewhere, and then later retrieving it on demand.

It's that retrieving it on demand bit that I was thinking about. For a lot of cases, we read data far more than we write it. In .NET Core, at least, there are two primary ways of getting data out of databases that I use: Entity Framework Core, and Dapper. Both of those (in very different ways) run queries against the database using some underlying ADO.NET implementation, and map the values into C# objects. Then, if your service exposes an HTTP interface, you serialize those objects to JSON, probably using JSON.NET, and write them to the response.
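In code, that conventional read path looks roughly like this (a hedged sketch with made-up table and type names, using Dapper and JSON.NET; the EF Core flavour differs only in how the query is run):

```csharp
// Typical read path: Dapper materializes one C# object per row,
// then JSON.NET serializes those objects for the HTTP response.
using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;
using Newtonsoft.Json;

public class Widget
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class WidgetReader
{
    public static string GetWidgetsAsJson(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            // One Widget allocation per row...
            IEnumerable<Widget> widgets =
                connection.Query<Widget>("SELECT Id, Name FROM Widgets");

            // ...then JSON.NET walks those objects and builds the JSON.
            return JsonConvert.SerializeObject(widgets);
        }
    }
}
```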

The Idea

As I've been following the progress of .NET Core 2.1 and C# 7.2, I've been seeing lots of stuff about the new types and language features that are designed to improve performance at a very low level. This is very much a work in progress, and there are more things in the pipeline (pun totally intended), but the next releases include things like: Span<T>, a struct (i.e. no heap allocation) abstraction over contiguous data, which can be a .NET array or an unsafe pointer to unmanaged memory; various new places you can put the ref keyword, like on returns and local variables, to reduce copying of values in method calls; and some neat new stuff around Buffers and Memory generally.
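If you haven't played with those yet, here's a tiny illustration of the language mechanics (nothing Beeline-specific):

```csharp
// Span<T> wraps either a managed array or stack memory without heap allocation;
// ref returns/locals let you work on a value in place instead of copying it.
using System;

class SpanAndRefDemo
{
    static void Main()
    {
        Span<byte> fromArray = new byte[16];        // wraps a heap-allocated array
        Span<byte> fromStack = stackalloc byte[16]; // wraps stack memory, no GC involved

        fromStack[0] = 42;
        fromArray.Clear();

        int[] numbers = { 1, 2, 3 };
        ref int first = ref First(numbers); // ref local pointing into the array
        first = 99;                         // numbers[0] is now 99; nothing was copied
        Console.WriteLine(numbers[0]);
    }

    // ref return: hands back a reference to the element, not a copy of its value
    static ref int First(int[] values) => ref values[0];
}
```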

Anyway, here I am, learning all this stuff about reducing heap allocations and memory overhead and whatever, and I'm thinking to myself: when I run one of those queries against the database using Dapper, and it creates all those model objects so I can pass them to JSON.NET to get serialized, that's a bunch of allocations happening right there. And the fields coming back from the database generally have the same name as the JSON property I eventually want, because that's how Dapper works. So can I cut out that step, and just take the values directly from a DbDataReader and write them as UTF-8 bytes to a Stream or some memory? And if so, how much performance would I gain?

Beeline is the experiment I wrote to test the viability of the approach, and now I've added a BenchmarkDotNet test to see whether it actually makes that much difference.

The Implementation

Beeline works by constructing a serializer that takes a DbDataReader, reads the data from it, and generates JSON as UTF-8 bytes, with no intermediate objects, strings or boxing. For primitive values like int, float, DateTime, etc., I've used the new Utf8Formatter type from the System.Memory package in CoreFX 2.1. To avoid creating lots of strings, I rent a char[] array from the ArrayPool class (which is already available for .NET Standard 2.0), copy directly into that from the DbDataReader, and then use the new overload of Encoding.UTF8.GetBytes that takes a ReadOnlySpan<char> and writes into a Span<byte>. In the current implementation I'm not escaping characters that JSON requires to be escaped (e.g. '"' and newlines), so that's bad and should be kept in mind when looking at the benchmark below.
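To make that concrete, here's a minimal sketch of the per-column approach (illustrative only and simplified compared to the real code; it assumes the output buffer is big enough and, as admitted above, does no escaping):

```csharp
// Format primitives straight into a byte buffer with Utf8Formatter, and copy
// strings via a pooled char[] plus the span-based Encoding.UTF8.GetBytes overload.
using System;
using System.Buffers;
using System.Buffers.Text;
using System.Data.Common;
using System.Text;

public static class ColumnWriter
{
    // Writes one column's value into 'output' as UTF-8 and returns the byte count.
    public static int WriteValue(DbDataReader reader, int ordinal, Span<byte> output)
    {
        Type type = reader.GetFieldType(ordinal);

        if (type == typeof(int))
        {
            Utf8Formatter.TryFormat(reader.GetInt32(ordinal), output, out int written);
            return written;
        }

        if (type == typeof(string))
        {
            // Rent a char buffer, copy the field into it without creating a string,
            // then encode straight to UTF-8 bytes between a pair of quotes.
            char[] buffer = ArrayPool<char>.Shared.Rent(256);
            try
            {
                long length = reader.GetChars(ordinal, 0, buffer, 0, buffer.Length);
                output[0] = (byte)'"';
                int written = Encoding.UTF8.GetBytes(
                    new ReadOnlySpan<char>(buffer, 0, (int)length), output.Slice(1));
                output[written + 1] = (byte)'"';
                return written + 2;
            }
            finally
            {
                ArrayPool<char>.Shared.Return(buffer);
            }
        }

        throw new NotSupportedException($"No formatter for {type}.");
    }
}
```

The real serializer also has to write property names, commas and braces, and grow or flush its buffer, but the allocation story is the same: nothing per value beyond what the pool hands out.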

Because this is about proving the concept and initial testing, there are some optimizations I could still do. I haven't actually used any ref parameters or returns with Spans, although I think that would yield negligible gains in this case; we're way above the timings where that matters.

Also, the current RowSerializer just makes a bunch of anonymous delegates[1] and then calls them in a for loop for every row; next thing I'm going to play with is using LINQ expressions to build and compile a single delegate that can be called once per row, which should be a bit faster, although I'm not sure how much. But that's what benchmarking is for!
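The rough shape of that idea, just to show what I mean (hypothetical names, and it boxes via GetValue where the real thing would bind to the typed getters up front):

```csharp
// Build one Expression tree per result shape and compile it to a single delegate,
// so each row is handled by one call instead of a loop over per-column delegates.
using System;
using System.Collections.Generic;
using System.Data.Common;
using System.Linq.Expressions;

public static class RowSerializerCompiler
{
    public static Action<DbDataReader> Compile(int fieldCount, Action<object> writeValue)
    {
        ParameterExpression reader = Expression.Parameter(typeof(DbDataReader), "reader");
        var body = new List<Expression>();

        for (int i = 0; i < fieldCount; i++)
        {
            // reader.GetValue(i) boxes; a real implementation would pick
            // GetInt32/GetString/etc. based on the column types.
            Expression value = Expression.Call(
                reader, nameof(DbDataReader.GetValue), null, Expression.Constant(i));
            body.Add(Expression.Invoke(Expression.Constant(writeValue), value));
        }

        return Expression.Lambda<Action<DbDataReader>>(
            Expression.Block(body), reader).Compile();
    }
}
```

The compiled delegate would then be invoked once per row inside the `while (reader.Read())` loop.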

The Benchmark

The test is very basic. Using a SQL Server database, it creates a table with three columns, and inserts 1,000 rows. The benchmark then tests how long it takes to fetch 100 rows from that table and write them as JSON (with camel-case property names) to a MemoryStream. There are implementations of the benchmark for Beeline, Dapper and Entity Framework Core, using EF as the baseline.
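The benchmark class itself is the usual BenchmarkDotNet shape, something along these lines (method bodies elided; the names here are mine, not necessarily the repo's):

```csharp
// Three benchmarks over the same 1,000-row table, each fetching 100 rows
// and writing camel-cased JSON to a MemoryStream.
using System.IO;
using BenchmarkDotNet.Attributes;

[MemoryDiagnoser] // produces the Gen 0 and Allocated columns below
public class JsonQueryBenchmark
{
    private MemoryStream _output;

    [GlobalSetup]
    public void Setup() => _output = new MemoryStream();

    [Benchmark(Baseline = true)]
    public void EntityFramework() { /* EF Core query + JSON.NET to _output */ }

    [Benchmark]
    public void Dapper() { /* Dapper query + JSON.NET to _output */ }

    [Benchmark]
    public void Beeline() { /* DbDataReader straight to UTF-8 JSON in _output */ }
}
```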

Want to see how Beeline blows the competition out of the water? Here we go:


BenchmarkDotNet=v0.10.11, OS=Windows 10 Redstone 3 [1709, Fall Creators Update] (10.0.16299.192)
Processor=Intel Core i7-4770K CPU 3.50GHz (Haswell), ProcessorCount=8
Frequency=3417965 Hz, Resolution=292.5717 ns, Timer=TSC
.NET Core SDK=2.2.0-preview1-007860
  [Host] : .NET Core 2.1.0-preview1-26102-03 (Framework 4.6.26102.02), 64bit RyuJIT
  Core   : .NET Core 2.1.0-preview1-26102-03 (Framework 4.6.26102.02), 64bit RyuJIT

Job=Core  Runtime=Core  

Method          |     Mean |    Error |   StdDev | Scaled |   Gen 0 | Allocated
--------------- |---------:|---------:|---------:|-------:|--------:|----------:
Beeline         | 422.5 us | 6.500 us | 5.762 us |   0.45 |  7.3242 |   4.11 KB
Dapper          | 477.0 us | 5.679 us | 5.312 us |   0.51 |  9.7656 |  10.45 KB
EntityFramework | 928.9 us | 9.476 us | 8.864 us |   1.00 | 47.8516 |  37.87 KB

So, yeah, it turns out... not that much of a raw performance gain compared to Dapper. It's maybe 10% faster. They're both around twice as fast as Entity Framework Core, and although I'm sure that if somebody looked at the code for that implementation there are tweaks that could improve it, EF is not "built for speed" in the way that Dapper is. Beeline triggers a couple fewer Gen 0 collections (per thousand operations) than Dapper, which is nice, but I'm not sure it makes much of a difference in the real world (and BTW, I'm impressed with the general low-allocation nature of Dapper & JSON.NET). The biggest gain is in the Allocated column, where Beeline allocates less than half the memory of Dapper/JSON.NET, and nearly 90% less memory than EF Core/JSON.NET.

OK, And?

Well, not a complete waste of time. Aside from the additional optimizations, I want to build single-endpoint ASP.NET Core apps for each of the implementations and then load-test them to see if there's any significant difference in latency, throughput, or scalability between the three techniques. I have no idea what to expect from that, but I'll write another post with the results when I have them.

For now, though, I'm just going to keep using a mix of Dapper and EF Core for my various projects, and I suggest you do, too.

Additional Spec Data for Nerds

I ran this benchmark on my home PC, and some of the specs are in the BenchmarkDotNet results (which are output in Markdown: nice touch). Additional info: I've got 32GB of RAM, of which 8GB is allocated to the Hyper-V VM running Docker for Windows. I'm using SQL Server 2017 for Linux, running in a Docker container. Everything is running off a SATA SSD.


  1. Interesting aside: you can't use Span<T> as a generic type parameter, because it's a "by-ref stack-only type"[2], so I had to declare a good old-fashioned delegate type instead of using a Func<> (there's a quick illustration below these notes). ↩︎

  2. I totally know what "by-ref stack-only type" means, but it's too complicated to explain here. ↩︎
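To illustrate footnote 1 in code (nothing from the repo, just the compiler behaviour):

```csharp
// Span<T> is a stack-only ref struct, so the compiler rejects it as a generic
// type argument; a dedicated delegate type works where Func<> can't be used.
using System;

public delegate int SpanWriter(Span<byte> destination);   // fine

// The equivalent Func<Span<byte>, int> won't compile;
// the compiler rejects Span<byte> as a type argument.
```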