This is the second article in our series on improving .NET application performance. In this article we focus on the up-front phase of the development process, looking at design considerations.
In our introductory article we touched on setting performance objectives so that you have something to measure and compare against. Ask questions such as: How fast is fast enough? What are your time and throughput constraints? How much CPU, memory, disk, and network I/O is it acceptable for your application to use? These are important items that your design should be able to accommodate.
In this article we’ll describe a number of design principles that will help you design applications that meet the performance objectives you have defined.
The design principles below are proven to work well:
- Design coarse-grained services to reduce the number of client-service interactions and to abstract service internals from the client, providing looser coupling between the client and the service. If you already have fine-grained services, consider wrapping them with a facade layer to achieve the benefits of a coarse-grained service.
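As a minimal sketch of the facade idea (shown in Java for brevity; the same pattern applies in C#, and `CustomerService`, `OrderService`, and `CustomerFacade` are hypothetical names, not an existing API), the facade collapses several fine-grained calls into one coarse-grained operation and hides the service internals from the client:

```java
// Hypothetical fine-grained services (illustrative stand-ins only).
class CustomerService {
    String getName(int customerId) { return "Customer-" + customerId; }
}

class OrderService {
    int getOpenOrderCount(int customerId) { return customerId % 3; }
}

// Coarse-grained facade: the client makes one call instead of several
// fine-grained round trips, and never sees the underlying services.
public class CustomerFacade {
    private final CustomerService customers = new CustomerService();
    private final OrderService orders = new OrderService();

    public String getCustomerSummary(int customerId) {
        // A single logical operation from the client's point of view.
        return customers.getName(customerId)
                + " (" + orders.getOpenOrderCount(customerId) + " open orders)";
    }

    public static void main(String[] args) {
        System.out.println(new CustomerFacade().getCustomerSummary(7));
    }
}
```

Because the client depends only on the facade, the fine-grained services behind it can be refactored without breaking callers.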
- Minimize round trips to reduce call latency by batching calls together and by designing coarse-grained services that let you perform a single logical operation in a single round trip. Applying this principle reduces communication across boundaries such as threads, processes, processors, or servers, and it is particularly important when making remote calls across a network.
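The batching idea can be sketched as follows (Java used for brevity; `priceOf` is a hypothetical stand-in where each call represents one network round trip):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class BatchingSketch {
    // Fine-grained: one simulated round trip per item.
    public static double priceOf(String sku) { return sku.length() * 1.5; }

    // Coarse-grained: the whole batch travels in one round trip, and the
    // per-item lookups happen on the server side of the boundary.
    public static Map<String, Double> pricesOf(List<String> skus) {
        return skus.stream()
                   .collect(Collectors.toMap(s -> s, BatchingSketch::priceOf));
    }

    public static void main(String[] args) {
        // One logical operation, one round trip, instead of three.
        System.out.println(pricesOf(Arrays.asList("AB", "CDE", "FGHI")));
    }
}
```

Over a real network, the saving is roughly one network latency per call eliminated, which usually dwarfs the cost of the larger payload.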
- Minimize the duration that you hold shared and limited resources, such as network and database connections, by acquiring them late and releasing them early. Releasing and re-acquiring these resources from the operating system can be expensive, so a recycling plan that supports “acquire late and release early” enables you to optimize the use of shared resources across requests.
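In Java, the “acquire late, release early” pattern maps naturally onto try-with-resources (the C# equivalent is a `using` block). A minimal sketch, using a `StringReader` as a stand-in for a scarce resource:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;

public class AcquireLateReleaseEarly {
    public static String firstLine(String text) {
        // Do work that does NOT need the resource first (acquire late)...
        String header = "first line: ";

        // ...then hold the resource only for the duration of the read;
        // try-with-resources closes it the moment the block exits
        // (release early), even if an exception is thrown.
        try (BufferedReader reader = new BufferedReader(new StringReader(text))) {
            return header + reader.readLine();
        } catch (IOException e) { // StringReader will not actually throw here
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(firstLine("alpha\nbeta"));
    }
}
```

The key point is the narrow scope: the resource lifetime is the `try` block, not the lifetime of the method or the request.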
- When certain resources are only available from certain servers or processors, there is an affinity between the resource and the server or processor. While affinity can improve performance, it can also impact scalability. Carefully evaluate your scalability needs. Will you need to add more processors or servers? If application requests are bound by affinity to a particular processor or server, you could inhibit your application’s ability to scale. As load on your application increases, the ability to distribute processing across processors or servers influences the potential capacity of your application.
- If your application uses a lot of client-service interaction, consider pushing the processing closer to the client. If the processing interacts intensively with the data store, you may want to push the processing closer to the data.
- Pool shared resources that are scarce or expensive to create, such as database or network connections. Use pooling to eliminate the performance overhead of establishing access to resources and to improve scalability by sharing a limited number of resources among a much larger number of clients.
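A minimal, single-threaded pool sketch (Java for brevity; `Connection` here is a hypothetical stand-in, not `java.sql.Connection`). In practice you would rarely write this yourself: ADO.NET providers pool database connections for you, and a production pool also needs synchronization and health checks.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PoolSketch {
    public static class Connection {
        final int id;
        Connection(int id) { this.id = id; }
    }

    private final Deque<Connection> idle = new ArrayDeque<>();

    public PoolSketch(int size) {
        // Pay the expensive creation cost once, up front.
        for (int i = 0; i < size; i++) idle.push(new Connection(i));
    }

    public Connection acquire() {
        if (idle.isEmpty()) throw new IllegalStateException("pool exhausted");
        return idle.pop();
    }

    public void release(Connection c) {
        idle.push(c); // recycle instead of destroying
    }

    public static void main(String[] args) {
        PoolSketch pool = new PoolSketch(1);
        Connection c = pool.acquire();
        pool.release(c);
        // The same object is handed to the next caller, so no new
        // connection had to be established.
        System.out.println(pool.acquire() == c);
    }
}
```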
- You can reduce unnecessary processing by using techniques such as caching, avoiding round trips, and validating input early.
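Caching is the most direct of these techniques: pay for an expensive computation or lookup once, then serve repeats from memory. A minimal sketch (Java for brevity; `expensiveSquare` is a hypothetical stand-in for any costly operation):

```java
import java.util.HashMap;
import java.util.Map;

public class CacheSketch {
    private final Map<Integer, Long> cache = new HashMap<>();
    public int computeCalls = 0; // counts how often the expensive path runs

    // Stand-in for an expensive computation or remote lookup.
    public long expensiveSquare(int n) {
        computeCalls++;
        return (long) n * n;
    }

    public long squareCached(int n) {
        // computeIfAbsent runs the expensive path only on a cache miss.
        return cache.computeIfAbsent(n, this::expensiveSquare);
    }

    public static void main(String[] args) {
        CacheSketch c = new CacheSketch();
        c.squareCached(12);
        c.squareCached(12); // served from the cache
        System.out.println(c.computeCalls); // the expensive path ran once
    }
}
```

A real cache also needs an invalidation or expiry policy so stale results do not outlive the data they were computed from.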
- Avoid blocking when accessing resources, to prevent queued resource requests from backing up. Blocking and so-called hotspots are common causes of contention: blocking is typically caused by long-running tasks such as expensive I/O operations, while hotspots result from concentrated access to data that many requests need.
- Use efficient practices for handling data changes. When a portion of the data changes, process only the changed portion rather than all of the data; in other words, perform updates incrementally. Also consider rendering output progressively: do not block on the entire result set when you can give the user an initial portion, and some interactivity, earlier.
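The incremental-update idea can be sketched with a running total that is adjusted by the delta of a change, instead of being recomputed over all of the data (Java for brevity; the class and method names are illustrative):

```java
// Incremental update sketch: keep a running total and adjust it when one
// element changes, instead of re-summing the whole array every time.
public class IncrementalSum {
    private final int[] data;
    private long total;

    public IncrementalSum(int[] data) {
        this.data = data;
        for (int v : data) total += v; // the full O(n) pass happens once
    }

    // O(1) update: process only the changed portion, not all of the data.
    public void set(int index, int value) {
        total += value - data[index];
        data[index] = value;
    }

    public long total() { return total; }

    public static void main(String[] args) {
        IncrementalSum s = new IncrementalSum(new int[]{1, 2, 3});
        s.set(1, 10); // only the delta (10 - 2) is applied
        System.out.println(s.total()); // 14
    }
}
```

The same principle scales up: update an aggregate, an index, or a rendered view from the change itself rather than rebuilding it from scratch.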
- When processing multiple independent tasks, consider executing them asynchronously so that they run concurrently. Asynchronous processing is most effective for I/O-bound tasks, but has limited benefit when the tasks are CPU-bound and restricted to a single processor. In fact, multithreading CPU-bound tasks on a single CPU can actually be slower because of the overhead of thread switching.
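A sketch of overlapping two independent I/O-style waits (Java's `CompletableFuture` is used for brevity; in .NET the analogous tools are `async`/`await` and `Task.WhenAll`, and `fetch` here is a hypothetical stand-in where the sleep represents a network wait):

```java
import java.util.concurrent.CompletableFuture;

public class ConcurrentTasks {
    // Stand-in for an I/O-bound task; the sleep represents a network wait.
    public static String fetch(String name, long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return name;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();

        // Start both independent tasks; neither waits for the other.
        CompletableFuture<String> a =
                CompletableFuture.supplyAsync(() -> fetch("users", 100));
        CompletableFuture<String> b =
                CompletableFuture.supplyAsync(() -> fetch("orders", 100));

        // The two 100 ms waits overlap, so the total is roughly 100 ms
        // rather than the 200 ms a sequential version would take.
        String combined = a.join() + "+" + b.join();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println(combined + " in ~" + elapsedMs + " ms");
    }
}
```

Note that the saving comes from overlapping *waits*; running two CPU-bound loops this way on a single processor would gain nothing, as the bullet above points out.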
In our next article we’ll focus on Application Performance tips and best practices.
See also part 1 – Improving .NET Application Performance