In-Memory OLTP Q & A: Myths and Realities

Since SQL Server In-Memory OLTP (formerly known by its project code name, “Hekaton”) was unveiled at PASS last November, there has been a lot of conversation and buzz about it. Some of the discussion has been well informed and accurate, and some… not so much. With this series, and with the online documentation at http://technet.microsoft.com/en-us/library/dn133186(v=sql.120).aspx, we will be giving you fuller and more complete information than we’ve been able to share previously. In the meantime, we’ve heard plenty of the rumors, myths, and misperceptions going around out there. In this post, I’ll identify some of the more common ones and discuss the realities behind them.

Myth #1: In-Memory OLTP is a recent response to competitors’ in-memory offerings

Reality

Project code-named “Hekaton” was started about five years ago in response to business and hardware trends: memory prices had been falling steadily, and processors were gaining cores faster than they were gaining clock speed. We started an incubation project in partnership with Microsoft Research to imagine what a database engine, designed from scratch for today’s hardware realities, would look like. The In-Memory OLTP feature is the outcome of that incubation.

Myth #2: In-Memory OLTP is the same as the old SQL Server 7.0 feature DBCC PINTABLE, which allowed pinning a table’s pages in the buffer pool so they stayed in memory

Reality

In-Memory OLTP uses a completely new design, built from the ground up to optimize for efficient in-memory data operations. Data in memory-optimized tables is not organized in pages and does not use the buffer pool. By dispensing with the data structures and other infrastructure that exist to facilitate paging subsets of data between disk and memory, we arrive at a much leaner and more efficient engine while still retaining the essential characteristics of a relational database engine.
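
To make the contrast concrete, here is a minimal sketch of creating a memory-optimized table in SQL Server 2014; the table, columns, and bucket count are hypothetical, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup. The table is declared memory-optimized up front; there is no pinning of existing buffer pool pages:

    -- Rows live in memory-optimized structures reached through a hash
    -- index, not in 8 KB buffer pool pages.
    CREATE TABLE dbo.ShoppingCart
    (
        CartId      INT       NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        UserId      INT       NOT NULL,
        CreatedDate DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

Note that DURABILITY = SCHEMA_AND_DATA keeps the table fully durable: the data lives in memory for performance, but it is still logged and recovered like any other SQL Server data.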

Myth #3: In-memory databases are new, separate products

Reality

Unlike most other in-memory database products on the market, In-Memory OLTP is a feature fully integrated into SQL Server 2014. This means no separate installation is needed, and there are no new tools to learn. Most importantly, it allows an incremental investment strategy, where you selectively move tables to the storage most appropriate for the data each table represents. And because it is built into the core SQL Server engine, you can use In-Memory OLTP alongside other SQL Server functionality.
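
As an illustration of that integration, here is a sketch, with hypothetical database, file, and table names, of enabling an existing database for In-Memory OLTP using ordinary T-SQL, and of querying memory-optimized and disk-based tables together:

    -- Add a memory-optimized filegroup to an existing database.
    ALTER DATABASE SalesDb
        ADD FILEGROUP SalesDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;
    ALTER DATABASE SalesDb
        ADD FILE (NAME = N'SalesDb_mod1', FILENAME = N'C:\Data\SalesDb_mod1')
        TO FILEGROUP SalesDb_mod;

    -- One query can join a memory-optimized table with a regular
    -- disk-based table, using the same T-SQL and tools as always.
    SELECT c.CartId, u.UserName
    FROM dbo.ShoppingCart AS c        -- memory-optimized
    JOIN dbo.Users        AS u        -- traditional disk-based table
        ON u.UserId = c.UserId;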

Myth #4: You can run an app on In-Memory OLTP with NO changes whatsoever

Reality

Some changes are always required, at a minimum modifying the schema to make it In-Memory ready. While we are working toward that ideal, in practice the application will typically need some code modification. In many cases, these changes can be isolated in the stored procedures used to access the database, so the application itself need not change (see the sketch below). In future releases of SQL Server, we plan to increase compatibility and thus reduce the need for modifications, but there will always be a need to at least change the schema to designate the memory-optimized tables.
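
As a sketch of that isolation, with hypothetical procedure and table names: the application keeps calling the same procedure with the same signature, and only the procedure body and the underlying schema change:

    -- The app still calls dbo.usp_GetCart exactly as before; the
    -- procedure now happens to read a memory-optimized table.
    ALTER PROCEDURE dbo.usp_GetCart
        @CartId INT
    AS
    BEGIN
        -- Interpreted T-SQL works against memory-optimized tables,
        -- so most existing query logic carries over unchanged.
        SELECT CartId, UserId, CreatedDate
        FROM dbo.ShoppingCart
        WHERE CartId = @CartId;
    END;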

Myth #5: All applications will get a 100X improvement with In-Memory OLTP

Reality

Results vary depending on the application and the level of effort applied. Many applications see very significant performance gains: 10X on average, with results of up to 30X. Much like optimizing hardware configurations, the results you get depend heavily on how much effort you put into the project. At the low-cost end, you can simply define hot tables as memory-optimized and change nothing else; at the higher end, you can also rewrite stored procedures to be natively compiled, which takes more effort but delivers much better results (a sketch follows below). It should also be noted that not all applications are a good fit for In-Memory OLTP; in some cases, other features such as columnstore indexes will be a better choice.
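
For the higher-effort end of that spectrum, here is a minimal sketch of a natively compiled stored procedure, with hypothetical names, writing to the memory-optimized dbo.ShoppingCart table from the earlier example:

    -- NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS, and the ATOMIC
    -- block are required for natively compiled procedures in 2014.
    CREATE PROCEDURE dbo.usp_AddToCart
        @CartId INT,
        @UserId INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH
        (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

        -- The body is compiled to machine code at creation time,
        -- removing the interpretation overhead of classic T-SQL.
        INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedDate)
        VALUES (@CartId, @UserId, SYSDATETIME());
    END;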

Want to learn even more about In-Memory OLTP? Download the SQL Server 2014 preview and get started today, or see more posts in the series introduction and index here!