As part of our SQL Server 2014 In-Memory OLTP blog series we asked early adopter Rick Kutschera from bwin.party to blog about his solution experience with SQL Server 2014 In-Memory OLTP. Read on below to hear more about the experience from Rick.
The whole world right now revolves around the cloud, and one of its major architectural aspects is scale-out. However, there are some areas where scale-out is not an option, and even more unfortunately, there are architectures where scaling out one part of the design creates a massive need to scale up in other areas. One of these scenarios is the ASP.Net Session State Server.
When you take a website as a typical example of a scale-out model, you have a number of servers in the webserver farm. It is perfectly flexible, with one exception: if you need to carry certain data points from one request a user makes to the next (e.g. the user ID of a logged-in user in a webshop), this model naturally comes with a limitation. Since subsequent requests can hit different servers in the farm, you need to coordinate this special “session data” in a central location. As you can see in the picture, no matter how big your farm is, there is always one database server to coordinate it.
Microsoft offers three models within its Session State architecture to meet the needs of this kind of application scenario:
1. An InProc version, which is only good for very small farms.
2. A version based on AppFabric, which in turn needs a database as its controlling instance.
3. A SQL Server-based approach, which is the implementation I will focus on from now on.
SQL Server is a good bet for a scenario like this. It is a proven technology that can handle large amounts of data, and it provides full ACID compliance if needed. The problem, though, is that SQL Server, like other relational databases, is designed in a way where heavily accessed data concentrated on a single page or a small number of pages becomes a barrier to scale and performance. Looking at the Session State database, this is exactly the situation you typically find: a relatively small number of entries in the table (fewer than 100,000) that are updated quite heavily. The reason relational databases have a problem with this is a mechanism called latching, which SQL Server uses to ensure the physical consistency of modified data pages. Without delving too deeply into the details, let me just say that, due to latching, the solution does not scale much beyond 10,000 requests/sec (max. 15,000 if you use a lot of tricks). That number might sound like a lot, but it really isn’t in today’s high-speed world. So what to do about it?
When Microsoft first introduced me to the In-Memory OLTP engine (project codenamed “Hekaton”) as part of SQL Server 2014, I immediately saw the potential for a real fix to my dilemma: a highly scalable, latch-free, memory-optimized, processor-optimized SQL Server. Since the In-Memory OLTP engine is integrated into SQL Server, I was quickly amazed at how easy it was to work with. Thanks to this integration, the syntax was very close to what we were used to and, most importantly, the client access layer was identical, meaning that we were able to swap out the “old” SQL Server Session State for an In-Memory OLTP version without the IIS webserver ever noticing the change.
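To give you a feel for how familiar the syntax is: the following is a hypothetical sketch of a memory-optimized session table (the table and column names are my illustration here, not our actual production schema). Apart from the hash index and the MEMORY_OPTIMIZED option, it reads like ordinary T-SQL:

```sql
-- Hypothetical sketch of a memory-optimized session table in SQL Server 2014
-- (illustrative only, not the actual bwin.party schema).
-- Note: string index key columns must use a BIN2 collation, and the hash
-- bucket count is fixed at table creation time.
CREATE TABLE dbo.SessionState
(
    SessionId   NVARCHAR(88) COLLATE Latin1_General_100_BIN2 NOT NULL,
    Created     DATETIME2       NOT NULL,
    Expires     DATETIME2       NOT NULL,
    LockCookie  INT             NOT NULL,
    -- No BLOB columns allowed on memory-optimized tables, so the payload
    -- here is capped well below the 8,060-byte row-size limit; large
    -- session items need a workaround.
    SessionItem VARBINARY(7000) NULL,
    CONSTRAINT PK_SessionState PRIMARY KEY NONCLUSTERED
        HASH (SessionId) WITH (BUCKET_COUNT = 131072)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```

SCHEMA_ONLY durability fits session state well: the data is transient by nature, so skipping log writes for the rows buys extra speed at the cost of losing sessions on a restart.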
Despite this ease of migration, there are some details to consider. The In-Memory OLTP engine is not only latch-free, but also lock-free. This means that some of the normal rules of pessimistic databases just don’t apply. In classic SQL Server, when two transactions want to update the same record, one will wait until the other is finished, and eventually both will succeed. This is not true with the In-Memory OLTP engine, where the waiting (caused by locks and latches) has been removed for pure speed; instead, one of the transactions fails with a write/write conflict. Handling this in the implementation feels somewhat like deadlock handling. Additionally, In-Memory OLTP is limited in some areas when it comes to surface exposure. The most crucial of these for us was the fact that memory-optimized tables do not support BLOBs, which are an essential part of the Session State DB. Overcoming those limitations does pose a certain challenge, but with a little ingenuity it is a workable solution. (Stay tuned for a detailed post about our way to store BLOBs in the In-Memory OLTP engine.)
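To illustrate what that conflict handling looks like in practice, here is a minimal retry sketch (not our production code; the dbo.SessionState table and the session ID value are hypothetical). The error numbers are the documented In-Memory OLTP conflict and validation errors; anything else is re-thrown immediately, just as you would in a deadlock-retry loop:

```sql
-- Sketch of a retry loop for In-Memory OLTP conflict errors:
--   41302 = write/write conflict
--   41305 / 41325 = repeatable-read / serializable validation failure
--   41301 = commit dependency failure
DECLARE @SessionId NVARCHAR(88) = N'example-session-id';  -- hypothetical value
DECLARE @retries   INT = 5;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        UPDATE dbo.SessionState WITH (SNAPSHOT)
        SET    Expires = DATEADD(MINUTE, 20, SYSUTCDATETIME())
        WHERE  SessionId = @SessionId;

        SET @retries = 0;   -- success: leave the loop
    END TRY
    BEGIN CATCH
        SET @retries -= 1;
        IF ERROR_NUMBER() NOT IN (41301, 41302, 41305, 41325)
           OR @retries = 0
            THROW;          -- not a transient conflict, or out of retries
    END CATCH
END;
```

In a real deployment this loop would typically live in the data access layer or a wrapper procedure, so the webserver never sees the transient failures.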
So how does the “new” version of our Session State implementation look? Well… exactly like the old one, and that’s the beauty of it! The only difference when using the In-Memory OLTP engine is that you no longer max out at 10,000 requests/sec. With current hardware you can easily scale up to 250,000 requests/sec on a box that costs less than 30,000 USD. I also have a hunch that with the upcoming Brickland chipset we will be far in excess of the 1 million request mark. Additionally, this ability to handle much more load enables us to consolidate Session State machines. In the past, every web farm had its own server, which was necessary due to the limitations described above. With In-Memory OLTP, one server can handle the traffic of all web farms by itself, which reduces the TCO significantly (less maintenance overhead, less hardware, less power consumption, etc.).
Will this be enough for the future? I don’t know… But I am convinced that almost all webpages out there will run for a long time before the Session State Server becomes a bottleneck again. (And if you should be the exception to that, give me a call. I am sure we will find a solution.)
Rick Kutschera, Manager of Database Engineering