Our Guest Star series continues with Matt Parks, one of our first CRM MVPs and a key figure in the CRM community – IlanaS
Whenever I read the Microsoft CRM newsgroups, I see the same sizing question come up again and again.
The question boils down to variations of, “Can this server handle xx users?”
Invariably, the poster includes some basic server specs (processor, RAM, disk space, etc.) and the number of users. However, they almost always leave out the information that is most important to answering the question. As I will try to explain here, you must know much more than just the number of users to correctly understand your implementation and what it can handle.
The Factors
OK, so what else do you need to know? Well, the first thing that comes to mind is the number of records in your database. Microsoft CRM is built on SQL Server, and when an implementation first goes live, most of its tables contain relatively few records.
With these small record counts, the SQL optimizer has little difficulty processing queries quickly. Over time, the number of records in a CRM application will grow (or at least we all hope it will; otherwise, the implementation was most likely one of those “failed” CRM projects we all hear about). As the tables grow, the efficiency of the queries becomes more critical. When accessing tables with a few million rows, an inefficient access path can result in table scans that eat up resources and kill response time.
The next thing to consider is the transaction rate for the various transaction types, since this determines the load placed on the server. It can be tough to determine, though: unless you have an existing CRM system to model from, there will be some guesswork involved. But it is critical to have a good understanding of these transactions when doing capacity planning. Reads are obviously a lot less taxing on the system than Inserts or Updates. A user who does 5 Read transactions puts less strain on the server than a user who is actively entering new records (like a call center operator). Therefore, you have to try to profile your user population.
Another factor to consider is concurrency. If you have 300 users, but they are spread across a 24-hours-a-day, 7-days-a-week schedule, that is about the same transaction load as a 100-user system where everyone works the same 8-5 shift. There is also the reality that, even if you have 100 users, they are not all “on” the system at the same time.
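To make that concrete, here is a back-of-the-envelope sketch in Python. The weekly transaction count per user is a made-up figure purely for illustration; the point is how the schedule changes the hourly load.

```python
# Rough concurrency math: same per-user activity, different schedules.
WEEKLY_TXNS_PER_USER = 200  # hypothetical figure, for illustration only

def avg_hourly_load(users, busy_hours_per_week):
    """Average transactions the server must handle per busy hour."""
    return users * WEEKLY_TXNS_PER_USER / busy_hours_per_week

round_the_clock = avg_hourly_load(300, 24 * 7)  # 300 users, 24x7
single_shift = avg_hourly_load(100, 9 * 5)      # 100 users, 8-5, Mon-Fri

print(f"300 users, 24x7 : {round_the_clock:,.0f} txns/hour")  # ~357
print(f"100 users, 8-5  : {single_shift:,.0f} txns/hour")     # ~444
# Both land in the same ballpark -- schedule matters as much as headcount.
```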
If you are using dedicated hardware, those are the main considerations. But lots of companies run Microsoft CRM on hardware that is not dedicated. The Small Business Edition (SBE), for example, runs on Windows Small Business Server, where it shares the machine with Exchange and other services, so you have to account for those competing workloads in your capacity planning as well.
Transaction Testing
Modeling the transaction mix is probably the most complex part of scalability planning for a system. As mentioned earlier, there are many variations to consider. However, what happens if you just need to know if the system will handle the transaction model you have come up with?
Well, Microsoft has provided a toolkit that you can use to perform your own performance and load testing of Microsoft CRM using Visual Studio Team Test. Naveen Garg introduced the CRM performance and test toolkit in his article on May 31.
The tool includes a variety of pre-configured test cases. They can be adapted to your own needs, but out of the box they cover the basic functionality of the system: CreateNewAccount, UpdateContact, AddActivityToAccount, etc. Each test mimics the several screens a user would touch in Microsoft CRM to complete the task. For example, the UpdateAccount test performs the following steps:
- Start at Sales homepage
- Select Accounts / My Active Accounts View
- Enter a random string and then click “Find”
- Open an account and edit data
- Save and Close account
Each test executes, on average, about 10 HTTP requests to complete its scenario.
Team Test lets you simulate multiple Web users and execute a random mix of tests for each user. The test mix is controlled by assigning each test a percentage of executions relative to the other tests. “Percentage” is a bit of a misnomer, though; it is really a weight. You could have 50 tests in your load test configuration, with one set to execute 20% of the time and the other 49 set to 10% each. Those values add up to far more than 100; they simply mean the first test case will execute, on average, twice as often as each of the others.
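A quick sketch of that arithmetic, using the numbers from the example above:

```python
# 50 tests: one weighted at 20, the other 49 weighted at 10 each.
weights = [20] + [10] * 49

total = sum(weights)  # 510 -- these are weights, not true percentages
probabilities = [w / total for w in weights]

print(f"heavy test  : {probabilities[0]:.1%}")  # ~3.9% of all executions
print(f"other tests : {probabilities[1]:.1%}")  # ~2.0% each
print(f"ratio       : {probabilities[0] / probabilities[1]:.1f}x")  # 2.0x
```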
This weighting capability actually makes it easier to model your transaction mix. Suppose you had some best guesses as to what each user would do over the course of a week. You might create some data such as the following:
| Scenario (Test Script) | Count |
| --- | --- |
| CreateAccount | 5 |
| CreateContact | 10 |
| CreateLead | 25 |
| CreateOpportunity | 3 |
| … | |
| UpdateContact | 12 |
When you use Team Test and open up the load test configuration, you can allocate the transactions using the numbers above instead of trying to figure out what “percent” each of these represents. When the test executes, Team Test will convert these numbers into relative percentages and use those to allocate the transactions across the simulated test users.
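As an illustration of what that allocation looks like in practice, here is a small Python sketch that samples scenarios in proportion to the counts above. It mimics the effect of the relative percentages; it is not Team Test's actual implementation.

```python
import random

# Weekly per-user counts from the table above (abbreviated list).
scenario_counts = {
    "CreateAccount": 5,
    "CreateContact": 10,
    "CreateLead": 25,
    "CreateOpportunity": 3,
    "UpdateContact": 12,
}

scenarios = list(scenario_counts)
weights = list(scenario_counts.values())

# Each simulated user repeatedly picks its next test in proportion to
# the weights -- no need to work out explicit percentages by hand.
def next_test():
    return random.choices(scenarios, weights=weights, k=1)[0]

print([next_test() for _ in range(5)])
```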
The toolkit also includes a utility that lets you load records into the database. You define the record types you want and how they are related, and the utility generates records in the system using random data, based on the configuration you specify. This is a useful way to build up the volume you need for your testing.
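For the flavor of it, here is a minimal sketch of that kind of random data generation. This is not the toolkit's utility; the record shapes and field names are invented for illustration.

```python
import random
import string

def rand_str(n=8):
    """Random alphabetic string to stand in for names."""
    return "".join(random.choices(string.ascii_letters, k=n))

def generate_data(num_accounts, max_contacts_per_account=5):
    """Build accounts, each with a random number of related contacts."""
    accounts, contacts = [], []
    for account_id in range(num_accounts):
        accounts.append({"id": account_id, "name": f"Account {rand_str()}"})
        for _ in range(random.randint(0, max_contacts_per_account)):
            contacts.append({"accountid": account_id, "fullname": rand_str(12)})
    return accounts, contacts

accounts, contacts = generate_data(1000)
print(len(accounts), "accounts,", len(contacts), "contacts")
```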
Our Results
At Avanade, we implement Microsoft CRM almost exclusively in the enterprise space, and while selling the application to prospects, we get asked the “question” ourselves. Knowing that Intel was conducting a scalability test, we decided to run our own test with a different transaction mix and a different database population to see how the results would differ.
In our test, we used a quad-processor, dual-core 64-bit SQL Server machine with 16 GB of RAM. We were able to simulate 3,000 concurrent users completing over 22,000 business tests per hour. That throughput is similar to the results Intel recently published: around 15,000 tests per hour for 2,000 users (roughly 7.3 tests per user per hour in our run versus 7.5 in theirs).
To me, one of the most interesting things was comparing our test with the Intel test. In both cases, new indexes were added to the database, but the indexes we found helpful were not the same ones Intel used. Some were identical (or at least very similar), yet others were quite different. This was the result of the different database load profiles and transaction mixes used in the two tests.
Look for a white paper soon providing more specifics about the testing we completed.
Do I Have to Do This?
No.
But if you have asked the “question”, that means you have concerns. You can contact TPAG at Microsoft, and they can provide you with some guidelines based on your expected transaction mix. However, if you are really concerned about a specific hardware platform, you should consider conducting your own test to verify it will work. Figure out the transaction mix that best represents your expected work volume, load up some data, and have at it.
See you on the newsgroups and please, don’t ask the “question”.