Improving .NET Application Performance Part 6: Threading

This post is part of a series on .NET application performance. The first five articles are also available on our blog: Part 1; Part 2; Part 3; Part 4; Part 5.

In our last article we discussed guidelines for optimizing garbage collection using the Dispose and Finalize methods. In this article we'll look at best practices for threading. The .NET Framework offers various threading and synchronization features, and how you use multiple threads can significantly affect application performance and scalability.

Managed Threads and Operating System Threads

The CLR exposes managed threads, which are different from the Win32 threads. The logical thread is the managed version of a thread, while the Win32 thread is the physical thread that actually executes code. If you create a managed thread object and then do not start it by calling its Start method, a new Win32 thread is not created. When a managed thread is terminated or it completes, the underlying Win32 thread is destroyed. The Thread object is cleaned up only during garbage collection.
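A small sketch can make this distinction concrete. The code below (the delegate body is illustrative) shows that a Thread object starts in the Unstarted state, with no underlying OS thread, and remains a live managed object even after the OS thread it briefly owned has been destroyed:

```csharp
using System;
using System.Threading;

class ThreadStateDemo
{
    static void Main()
    {
        // Creating the managed Thread object does not create a Win32 thread yet.
        Thread t = new Thread(() => Thread.Sleep(10));
        Console.WriteLine(t.ThreadState);   // Unstarted: no OS thread exists

        t.Start();                          // the underlying Win32 thread is created here
        t.Join();                           // wait for it to complete

        // The OS thread is gone, but the managed Thread object lives on
        // until the garbage collector reclaims it.
        Console.WriteLine(t.ThreadState);   // Stopped
    }
}
```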

Be aware that poorly-written multithreaded code can lead to various problems including deadlocks, race conditions, thread starvation, and thread affinity. All of these can impact application performance, scalability, resilience, and correctness.

Threading Guidelines

Below we’ll outline some of the most important threading guidelines that impact system performance.

Minimize Thread Creation

Threads use both managed and unmanaged resources and can be expensive to initialize. If you spawn threads indiscriminately, the result can be increased context switching on the processor. The following code creates and starts a new thread for each request. Under load, the processor can end up spending most of its time performing thread switches, and the extra Thread objects place increased pressure on the garbage collector.

private void Page_Load(object sender, System.EventArgs e)
{
  if (Page.IsPostBack)
  {
    // Create and start a new thread for every postback (expensive)
    ThreadStart ts = new ThreadStart(CallMyFunc);
    Thread th = new Thread(ts);
    th.Start();
  }
}

Use the Thread Pool

You can avoid expensive thread initialization by using the CLR thread pool to execute thread-based work. The following code shows a method being executed using a thread from the thread pool.

WaitCallback methodTarget = new WaitCallback( myClass.UpdateCache );

ThreadPool.QueueUserWorkItem( methodTarget );

When QueueUserWorkItem is called, the method is queued for execution and the calling thread returns and continues execution. The ThreadPool class uses a thread from the application’s pool to execute the method passed in the callback as soon as a thread is available.
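As a runnable sketch of the same pattern, with an illustrative callback standing in for myClass.UpdateCache, and a ManualResetEvent passed as the state object so the main thread can observe completion:

```csharp
using System;
using System.Threading;

class ThreadPoolDemo
{
    static void Main()
    {
        // The event lets the main thread know when the pooled work item finishes.
        ManualResetEvent done = new ManualResetEvent(false);

        WaitCallback methodTarget = new WaitCallback(state =>
        {
            // This runs on a thread borrowed from the CLR thread pool.
            Console.WriteLine("On pool thread: " + Thread.CurrentThread.IsThreadPoolThread);
            ((ManualResetEvent)state).Set();
        });

        // QueueUserWorkItem returns immediately; the pool runs the callback
        // as soon as a thread is available.
        ThreadPool.QueueUserWorkItem(methodTarget, done);

        done.WaitOne();   // block until the work item signals completion
    }
}
```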

Schedule Periodic Tasks using a Timer

Use the System.Threading.Timer class to schedule periodic tasks. The Timer class lets you specify the interval at which your code is executed. The following code shows a method being called every 30 seconds.

TimerCallback myCallBack = new TimerCallback( myHouseKeepingTask );

Timer myTimer = new System.Threading.Timer( myCallBack, null, 0, 30000 );

static void myHouseKeepingTask(object state)
{
  // Perform the periodic housekeeping work here.
}

When the timer elapses, a thread from the thread pool is used to execute the code indicated in the TimerCallback. This results in optimum performance because it avoids the thread initialization incurred by creating a new thread.

Consider Parallel vs. Synchronous Tasks

Before implementing asynchronous code, consider whether you actually need to perform multiple tasks in parallel; the decision can significantly affect your performance metrics. Additional threads consume resources such as memory, disk I/O, network bandwidth, and database connections, and they can introduce significant overhead from contention and context switching. In all cases, it is important to verify that adding threads is helping you meet your objectives rather than hindering your progress.

The following are examples where performing multiple tasks in parallel might be appropriate:

·         Where one task is not dependent on the results of another, such that it can run without waiting on the other.

·         If work is I/O bound. Any task involving I/O benefits from having its own thread, because the thread sleeps during the I/O operation which allows other threads to execute. However, if the work is CPU bound, parallel execution is likely to have a negative impact on performance.

Do Not Use Thread.Abort to Terminate Other Threads

Avoid using Thread.Abort to terminate other threads. When you call Abort, the CLR throws a ThreadAbortException on the target thread. Calling Abort does not result in immediate termination; the thread terminates only when it processes that exception. Use Thread.Join to wait on the thread and confirm that it has actually terminated.
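A common alternative is a cooperative stop flag that the worker checks at safe points (later framework versions provide CancellationToken for the same purpose). The following is an illustrative sketch, not the article's code:

```csharp
using System;
using System.Threading;

class CooperativeStopDemo
{
    // 'volatile' ensures the worker sees the flag change made by the main thread.
    static volatile bool stopRequested = false;

    static void Main()
    {
        Thread worker = new Thread(() =>
        {
            while (!stopRequested)   // the worker checks the flag at a safe point
            {
                Thread.Sleep(10);    // stand-in for one unit of real work
            }
        });

        worker.Start();
        Thread.Sleep(50);

        stopRequested = true;        // ask the thread to stop...
        worker.Join();               // ...and wait until it actually has

        Console.WriteLine(worker.IsAlive);   // False: the thread exited cleanly
    }
}
```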

Do Not Use Thread.Suspend and Thread.Resume to Pause Threads

Avoid calling Thread.Suspend and Thread.Resume to synchronize the activities of multiple threads. Do not call Suspend to suspend low-priority threads; consider setting the Thread.Priority property instead of controlling the threads intrusively.

Calling Suspend on one thread from another is a highly intrusive operation that can result in serious application deadlocks. For example, you might suspend a thread that is holding resources needed by other threads, or by the thread that called Suspend.

If you need to synchronize the activities of multiple threads, use lock(object), Mutex, ManualResetEvent, AutoResetEvent, and Monitor. Mutex, ManualResetEvent, and AutoResetEvent derive from the WaitHandle class, which allows you to synchronize threads within and across processes; lock(object) and Monitor are lighter-weight and synchronize threads within a single process.

Note: lock(object) is the cheapest operation and will meet most, if not all, of your synchronization needs.
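For example, a minimal sketch of lock(object) protecting a shared counter (the names here are illustrative). Without the lock, concurrent increments would be lost; with it, every update survives:

```csharp
using System;
using System.Threading;

class LockDemo
{
    static readonly object gate = new object();   // dedicated lock object
    static int counter = 0;

    static void Main()
    {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int n = 0; n < 100000; n++)
                {
                    lock (gate)       // only one thread at a time enters this block
                    {
                        counter++;    // read-modify-write is now atomic
                    }
                }
            });
            threads[i].Start();
        }

        foreach (Thread t in threads) t.Join();
        Console.WriteLine(counter);   // 400000: no updates lost
    }
}
```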

In our next article we’ll discuss guidelines and best practices related to asynchronous calls.

Read the first five posts of this series: Part 1; Part 2; Part 3; Part 4; Part 5 and let us know what you think!
