As we started working with early adopters of the SQL Server 2014 In-memory OLTP engine (project codenamed Hekaton), one of the first scenarios we ran into involved a session management database in SQL Server. The application was a high-volume website storing ASP.NET session state in the database and running into latch contention and locking problems due to the number of concurrent users supported. As this database was a central, mission-critical, single point of scale for all web traffic, it had become a performance bottleneck for the business.
As we continued to work with other customers, a number of them had mission-critical databases storing some form of session state information. Moving the critical tables and workload suffering from latch contention, and therefore scale issues, into the In-memory OLTP engine has produced excellent results. Below we provide further information on this common application scenario, which has been migrated to In-memory OLTP with success.
Typically for session state architectures there exists a large (scaled-out) number of web servers backed by a few tables responsible for servicing all of the users on the site. The workload pattern is:
- As users come into the site, their information is inserted into these tables; a significant number of updates to their session information then occur, along with point-lookup queries.
- As user scale and thread counts increase, the amount of latching and locking on these tables increases, which becomes a barrier to scale. Because these tables are in the critical path of the web user experience (typically linking login to state information displayed on the site), the time it takes to update and query them is critical to the user experience.
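As an illustration, the pattern above often amounts to a schema and access pattern along the following lines. This is a minimal sketch; the table, column, and parameter names are hypothetical and not taken from any specific customer workload.

```sql
-- Hypothetical disk-based session table (names illustrative)
CREATE TABLE dbo.SessionState
(
    SessionId   NVARCHAR(88)   NOT NULL PRIMARY KEY CLUSTERED,
    Created     DATETIME2      NOT NULL DEFAULT SYSUTCDATETIME(),
    LastAccess  DATETIME2      NOT NULL,
    SessionData VARBINARY(MAX) NULL
);

-- Point lookup as each request arrives
SELECT SessionData
FROM dbo.SessionState
WHERE SessionId = @SessionId;

-- Frequent updates as the user navigates the site;
-- under high concurrency these updates and the inserts on login
-- are where page latch contention shows up
UPDATE dbo.SessionState
SET SessionData = @SessionData,
    LastAccess  = SYSUTCDATETIME()
WHERE SessionId = @SessionId;
```

With a clustered index on a key such as a session identifier, high insert and update rates concentrate activity on a small set of pages, which is where the latching described below becomes the bottleneck.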
Bottlenecks Experienced:
Latch contention: In some cases latching due to monotonically increasing keys (last-page insert contention) was a barrier to scale; latching on intermediate pages also became an issue. This allowed only a limited amount of user traffic to be active at one time, and the inability to scale and fully utilize the hardware resources became an application bottleneck.
Throughput/Latency: In other cases, while the application requirements still centered on throughput and the number of concurrent users, the latency of each business transaction also affected the overall experience. Here reducing latency and increasing throughput were the primary goals.
In-memory OLTP Migration Considerations:
- Memory-optimized tables do not suffer from latch contention, as the core In-memory OLTP engine uses lock-free algorithms and structures which involve no latching. Migrating the tables with latch hotspots to memory-optimized tables eliminated this scale bottleneck. Of note, the optimistic concurrency model is a new paradigm which applications will have to account for.
- Durability of data. The In-memory OLTP engine supports full ACID compliance and table durability by logging content to the SQL Server database transaction log. It also provides a table-level configuration for creating tables as SCHEMA_ONLY, or non-durable. On recovery only the schema is recovered, and the data is lost. If the data in these tables is transient (for example, a staging table whose data could easily be recreated if lost), creating the table as SCHEMA_ONLY can significantly reduce the IO impact of the workload, reduce the dependency on disk drives, and improve overall transaction throughput.
- In some cases the tables contain large object (LOB) datatypes or a row size > 8060 bytes. Memory-optimized tables do not support LOB columns, and row sizes must be under 8060 bytes. Application design patterns for this include either splitting tables into memory-optimized and disk-based tables (the latter containing the LOB or larger datatype columns) or splitting the large rows into multiple rows. For examples of this please reference: Implementing LOB Columns in a Memory-Optimized table. A primary consideration is how often the LOB data is accessed and how critical it is to overall performance. If it is accessed frequently, as is the case with session information, splitting the rows into multiple rows in the memory-optimized table is typically best for performance. In a number of cases, we have seen users split a single large value (e.g. a varbinary) into multiple rows and then "re-assemble" the data for the application as needed.
- Performance of Transact-SQL calls. Reducing the time of each business transaction was also an important goal in terms of overall performance. Being able to utilize memory-optimized tables and move the T-SQL code into natively compiled stored procedures increased performance and reduced the latency of a single transaction execution, both critical factors in improving overall application scale. A number of customers used natively compiled procedures for their implementation.
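Putting the considerations above together, a migrated session table and its update path might look roughly like the sketch below. All names are illustrative. The sketch shows a durable (SCHEMA_AND_DATA) memory-optimized table with a hash index sized for point lookups, a session-data column kept under the 8060-byte row limit, and a natively compiled stored procedure for the hot update path; a transient workload could instead use DURABILITY = SCHEMA_ONLY as discussed above.

```sql
-- Hypothetical memory-optimized session table (SQL Server 2014 syntax).
-- Note: in SQL Server 2014, string index key columns require a BIN2 collation,
-- and LOB types are not supported, so session data is held in a sized varbinary.
CREATE TABLE dbo.SessionState
(
    SessionId   NVARCHAR(88) COLLATE Latin1_General_100_BIN2 NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    LastAccess  DATETIME2       NOT NULL,
    SessionData VARBINARY(7000) NOT NULL   -- row must stay under 8,060 bytes
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Natively compiled procedure for the frequent session-update path
CREATE PROCEDURE dbo.usp_UpdateSession
    @SessionId   NVARCHAR(88),
    @SessionData VARBINARY(7000)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    UPDATE dbo.SessionState
    SET SessionData = @SessionData,
        LastAccess  = SYSUTCDATETIME()
    WHERE SessionId = @SessionId;
END;
```

The BUCKET_COUNT here is a placeholder; it should be sized relative to the expected number of active sessions. Session payloads larger than the column shown would need the row-splitting approach described above.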
As mentioned, we have been testing this scenario with some customers. Here are some early results on what they have achieved:
- As you may have observed from our earlier guest blog by Rick at bwin located here, In-memory OLTP provided them with gains of 16x (from 15,000 to 250,000 batch requests/sec) with no application code changes. For further information reference our earlier blog post on the subject.
- Another customer implemented their own session state management and has achieved gains of 8x (non-durable tables) and 3x (durable tables) in their current testing. Adding additional users at scale is expected to help improve performance to even greater gains.
In conclusion, do you have a session management database which is becoming the bottleneck for your application? Are you unable to scale up and fully utilize your CPU and hardware resources? If so, hopefully this helps provide you with a solution!
Interested in more patterns where we have seen success with SQL Server 2014 In-memory OLTP? Stay tuned for more blogs around patterns and a follow-up whitepaper which will include more scenarios and considerations to help you migrate to SQL Server 2014 In-memory OLTP.
SQL Server 2014 CTP1 is available for download here, or you can find the index to the complete blog series here.