
Understanding thread synchronization in C#

Thursday, February 27, 2025, 10:00, by InfoWorld
The C# programming language has provided support for thread synchronization since its earliest versions. Synchronization is used to prevent multiple threads from accessing a shared resource concurrently or invoking the properties or methods of an object at the same time. Synchronization ensures that multi-threaded programs run smoothly and don’t result in unexpected behavior.

This article discusses synchronization and thread safety in .NET and the best practices involved. We’ll examine the lock statement, the Lock class (introduced in .NET 9), and synchronization primitives such as the Mutex and Semaphore classes. To work with the code examples provided in this article, you should have Visual Studio 2022 installed on your system. If you don’t already have a copy, you can download Visual Studio 2022 here.

What are locks in C#?

A lock is a variable used to control access to shared resources. The lock statement in C# ensures that only one thread executes the body of the statement at any moment in time. The lock statement acquires the lock for a given object, i.e. the shared resource, then executes a statement block and finally releases the lock.

There are two types of locks: exclusive and non-exclusive. An exclusive lock provides exclusive access to a shared resource, meaning it allows read or write access to one thread only. A non-exclusive lock allows read access to multiple threads, while disallowing concurrent write access.

Exclusive locks in C#

Exclusive locks are used to define regions of code, called critical sections, that require mutually exclusive access between threads, ensuring that one and only one thread can access the shared resource. To implement exclusive locks in your application, you would use the lock statement, the Lock class, the Mutex class, or a SpinLock struct.

lock statement – a syntactic shortcut for the static methods of the Monitor class used to acquire an exclusive lock on a shared resource.

Lock class – a class introduced in C# 13 and .NET 9 that is more resource-efficient than using the lock statement.

Mutex class – similar to the lock keyword and the Lock class except that it can work across multiple processes.

SpinLock struct – used to acquire an exclusive lock on a shared resource while avoiding context switching overhead.

In any version of .NET, you can use either the lock keyword or the static methods of the Monitor class to implement thread safety in your applications. They are equivalent ways to prevent concurrent access to a shared resource. In other words, the lock keyword is just a shortcut to implementing synchronization using the Monitor class.

By using a lock statement, you ensure that only one thread can execute the body of the statement at a time. Any other thread is blocked until the lock is released. However, when you need to perform complex operations in a multi-threaded application, the Wait() and Pulse() methods of the Monitor class can be useful.
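To illustrate, here is a minimal producer-consumer sketch using Monitor.Wait() and Monitor.Pulse(). The class name, queue, and item count are arbitrary choices for this example, not code from elsewhere in the article.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class ProducerConsumer
{
    private static readonly object lockObj = new object();
    private static readonly Queue<int> queue = new Queue<int>();

    static void Producer()
    {
        for (int i = 0; i < 5; i++)
        {
            lock (lockObj)
            {
                queue.Enqueue(i);
                Monitor.Pulse(lockObj); // wake a thread waiting on lockObj
            }
        }
    }

    static void Consumer()
    {
        for (int i = 0; i < 5; i++)
        {
            lock (lockObj)
            {
                while (queue.Count == 0)
                    Monitor.Wait(lockObj); // release the lock and block until pulsed
                Console.WriteLine(queue.Dequeue());
            }
        }
    }

    public static void Main()
    {
        var consumer = new Thread(Consumer);
        var producer = new Thread(Producer);
        consumer.Start();
        producer.Start();
        producer.Join();
        consumer.Join();
    }
}
```

Note that Wait() is always called inside a loop that re-checks the condition, because a pulsed thread must re-verify its condition once it reacquires the lock.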

The following code snippet illustrates how you can implement synchronization using the Monitor class.

private static readonly object lockObj = new object();

Monitor.Enter(lockObj);
try
{
    //Some code
}
finally
{
    Monitor.Exit(lockObj);
}

The code snippet below shows the equivalent code using the lock keyword. Note that the lock statement compiles down to Monitor.Enter and Monitor.Exit calls wrapped in a try/finally block, so no explicit try/finally is needed.

private static readonly object lockObj = new object();

lock (lockObj)
{
    //Some code
}

If you’re using .NET 9 and C# 13, you can take advantage of the System.Threading.Lock object instead. The Lock class doesn’t rely on the Monitor class, and it’s more efficient than locking on a plain object. I examined the Lock class in a previous article here.

The code snippet below illustrates how the Lock class is used.

private static readonly Lock lockObj = new Lock();

Lock.Scope scope = lockObj.EnterScope();
try
{
    //Some code
}
finally
{
    scope.Dispose();
}

Note that the Lock instance is stored in a shared field; locking on a freshly created Lock object would synchronize nothing.

The Mutex class is used to implement synchronization that spans multiple processes. Like a lock, a mutex provides exclusive access to a resource and can be released only by the thread that owns it. Because a mutex spans different processes, acquiring and releasing a mutex is slower than acquiring and releasing a lock. I discussed using mutexes in C# in a previous article here.
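As an example, a named mutex is commonly used to ensure that only one instance of an application runs at a time. The following is a sketch; the mutex name is a hypothetical value chosen for this example.

```csharp
using System;
using System.Threading;

class SingleInstanceDemo
{
    static void Main()
    {
        // The name is an assumption for this example; any unique string works.
        using var mutex = new Mutex(false, "Global\\MyAppSingleInstance");

        if (!mutex.WaitOne(TimeSpan.Zero)) // try to acquire without blocking
        {
            Console.WriteLine("Another instance already holds the mutex. Exiting.");
            return;
        }
        try
        {
            Console.WriteLine("Acquired the mutex; running exclusively.");
            // ... do work ...
        }
        finally
        {
            mutex.ReleaseMutex(); // only the owning thread may release it
        }
    }
}
```

Because the mutex is named, a second process that runs the same code will fail the WaitOne call and exit immediately.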

The SpinLock structure is a more performant alternative to a standard lock when wait times are short, i.e., when the lock will not be held long. Instead of allowing the thread to sleep, a SpinLock keeps the thread busy in a loop (spinning) until the resource is free. As a result, no interaction with the OS is needed, which can increase performance significantly, but only when wait times are short. As wait times increase, a SpinLock consumes more CPU resources and a standard lock becomes preferable. A SpinLock is a good candidate only when the critical section performs a minimal amount of work.
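A minimal sketch of SpinLock protecting a very short critical section (the counter and iteration count are arbitrary choices for this example):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class SpinLockDemo
{
    // Note: a SpinLock field must not be readonly; readonly access would
    // copy the struct on each use and defeat the lock.
    private static SpinLock spinLock = new SpinLock();
    private static int counter = 0;

    static void Main()
    {
        Parallel.For(0, 1000, i =>
        {
            bool lockTaken = false;
            try
            {
                spinLock.Enter(ref lockTaken);
                counter++; // very short critical section
            }
            finally
            {
                if (lockTaken) spinLock.Exit();
            }
        });
        Console.WriteLine(counter); // prints 1000
    }
}
```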

Non-exclusive locks in C#

You can take advantage of non-exclusive locks to limit concurrency. To implement non-exclusive locks in C#, you would use the Semaphore class, the SemaphoreSlim class, or the ReaderWriterLockSlim class.

Semaphore class — used to limit the number of threads or consumers that can access a shared resource concurrently.

SemaphoreSlim class — a fast, lightweight alternative to the Semaphore class to implement non-exclusive locks.

ReaderWriterLockSlim class — a class introduced in .NET Framework 3.5 as a replacement for the ReaderWriterLock class.

The Semaphore class in the System.Threading namespace controls access to shared resources by allowing a specified maximum number of threads to access a critical section at once. A semaphore maintains a counter that is set to your maximum as soon as the semaphore is created. When a thread enters the critical section to access a shared resource, the counter is decremented by one. The counter is incremented by one when the thread leaves the critical section and releases the shared resource. If the counter reaches zero, new requests are blocked until another thread releases the resource.

Note that the Semaphore class can be used to create either local semaphores or named, system-wide semaphores. Local semaphores can synchronize threads within a single process, while named semaphores can synchronize threads across multiple processes (inter-process synchronization).
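The counting behavior described above can be sketched as follows; the worker count, delay, and capacity of three are arbitrary choices for this example.

```csharp
using System;
using System.Threading;

class SemaphoreDemo
{
    // At most three threads may be inside the critical section at once.
    private static readonly Semaphore semaphore = new Semaphore(3, 3);

    static void Worker(object id)
    {
        semaphore.WaitOne();       // decrement the counter, or block at zero
        try
        {
            Console.WriteLine($"Thread {id} entered the critical section.");
            Thread.Sleep(500);     // simulate work on the shared resource
        }
        finally
        {
            semaphore.Release();   // increment the counter
        }
    }

    static void Main()
    {
        for (int i = 1; i <= 6; i++)
            new Thread(Worker).Start(i);
    }
}
```

With six threads and a capacity of three, the last three threads block in WaitOne until earlier threads call Release.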

The SemaphoreSlim class in the System.Threading namespace is a lightweight alternative to the Semaphore class. It is best suited to scenarios with frequent waits and releases, where the shared resource is held for only short durations. Like the Semaphore class, SemaphoreSlim is a non-exclusive locking construct that allows only a specified maximum number of threads to access the critical section. Unlike the Semaphore class, the SemaphoreSlim class can be used as a local semaphore only.
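One advantage of SemaphoreSlim is its WaitAsync method, which suits async code. A sketch, with the task count and delay chosen arbitrarily for this example:

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class SemaphoreSlimDemo
{
    // At most two tasks may be inside the critical section at once.
    private static readonly SemaphoreSlim semaphore = new SemaphoreSlim(2, 2);

    static async Task WorkAsync(int id)
    {
        await semaphore.WaitAsync();   // asynchronous wait, unlike Semaphore
        try
        {
            Console.WriteLine($"Task {id} entered the critical section.");
            await Task.Delay(200);     // simulate asynchronous work
        }
        finally
        {
            semaphore.Release();
        }
    }

    static async Task Main()
    {
        await Task.WhenAll(Enumerable.Range(1, 5).Select(WorkAsync));
    }
}
```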

The ReaderWriterLockSlim class is used to provide exclusive write access to a resource while enabling concurrent read access by multiple threads. You would use ReaderWriterLockSlim when you need to control access to a data structure that gets frequent reads but infrequent writes. When a thread requests exclusive access by calling the EnterWriteLock method, subsequent read and write requests are blocked until all threads currently reading the resource have exited and the writing thread has acquired and released the lock.
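A read-mostly cache is the classic use case. The following sketch assumes a simple dictionary-backed cache invented for this example:

```csharp
using System.Collections.Generic;
using System.Threading;

class Cache
{
    private static readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    private static readonly Dictionary<string, string> data = new Dictionary<string, string>();

    public static string Read(string key)
    {
        rwLock.EnterReadLock();        // many readers may hold this concurrently
        try
        {
            return data.TryGetValue(key, out var value) ? value : null;
        }
        finally
        {
            rwLock.ExitReadLock();
        }
    }

    public static void Write(string key, string value)
    {
        rwLock.EnterWriteLock();       // exclusive; waits for readers to exit
        try
        {
            data[key] = value;
        }
        finally
        {
            rwLock.ExitWriteLock();
        }
    }
}
```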

Blocking threads in C#

Thread synchronization works by blocking and releasing threads. Blocking means that the execution of a thread is interrupted or paused until a certain condition is met. A blocked thread gives up control of the processor, or its processor time slice, allowing another thread to use the processor. The Wait() and Sleep() methods are used to implement blocking in C#.

The line of code below shows how you can block the current thread for one second (1,000 milliseconds).

Thread.Sleep(1000);

The Join() method is a synchronization method in C# that is used to block the calling thread until the thread on which the Join() method is called has completed its execution.

The following code snippet illustrates how the Join method can be used.

Thread threadObj1 = new Thread(() => Console.WriteLine("The first thread has started its execution."));
Thread threadObj2 = new Thread(() => Console.WriteLine("The second thread has started its execution."));
threadObj1.Start();
threadObj2.Start();
threadObj1.Join();
threadObj2.Join();

In the preceding code snippet, two threads, threadObj1 and threadObj2, are started. The main thread then calls Join on each of them, blocking until both threads have completed their execution. Each thread prints its message exactly once; because the two threads run concurrently, the order in which the messages appear is not guaranteed. The Join calls simply ensure that the main thread does not continue until both messages have been written.

Hence, one possible output to the console window is:

The first thread has started its execution.
The second thread has started its execution.

Understanding deadlocks

A deadlock occurs in a multi-threaded application when two or more threads wait for each other to release control of resources that they have acquired. Deadlocks typically occur when a synchronization construct, such as a lock or a mutex, is not used correctly and/or when multiple threads acquire locks in an order that keeps them blocked indefinitely. Here is an example that will help us understand this better.

Assume that two threads, T1 and T2, compete for access to two shared resources, F1 and F2. Imagine these resources are file handles. Now consider:

Thread T1 acquires a lock on the shared file handle F1.

Thread T2 acquires a lock on the shared file handle F2.

Thread T1 tries to acquire a lock on the file handle F2 but is blocked because thread T2 already has a lock on it.

Thread T2 tries to acquire a lock on the file handle F1 but is blocked because thread T1 already has a lock on the same resource.

To avoid deadlocks, we must acquire locks in a consistent order as discussed in the next section. You can refer to this MSDN article to learn more about deadlocks.

Example of a deadlock condition in C#

The code listing given below demonstrates how a deadlock can occur.

class DeadlockDemo
{
    private static readonly object sharedObj1 = new();
    private static readonly object sharedObj2 = new();

    public static void Execute()
    {
        Thread thread1 = new Thread(Thread1Work);
        Thread thread2 = new Thread(Thread2Work);
        thread1.Start();
        thread2.Start();
        thread1.Join();
        thread2.Join();
        Console.WriteLine("Finished execution.");
    }

    static void Thread1Work()
    {
        lock (sharedObj1)
        {
            Console.WriteLine("Thread 1 has acquired a lock on resource 1. " +
                "It is now waiting to acquire a lock on resource 2.");
            Thread.Sleep(1000);
            lock (sharedObj2)
            {
                Console.WriteLine("Thread 1 acquired a lock on resource 2.");
            }
        }
    }

    static void Thread2Work()
    {
        lock (sharedObj2)
        {
            Console.WriteLine("Thread 2 has acquired a lock on resource 2. " +
                "It is now waiting to acquire a lock on resource 1.");
            Thread.Sleep(1000);
            lock (sharedObj1)
            {
                Console.WriteLine("Thread 2 acquired a lock on resource 1.");
            }
        }
    }
}

In the code listing above, we have two shared objects and two threads. Each of the threads attempts to acquire a resource that is already locked by the other thread. As a consequence, both threads are blocked and a deadlock occurs.

You can avoid the deadlock condition by changing the order in which the locks are acquired as shown in the code snippets given below.

In the Thread1Work method:

lock (sharedObj1)
{
    ...
    lock (sharedObj2)
    {
        ...
    }
}

Note that the Thread1Work method remains the same as in the original code listing. First a lock is acquired on sharedObj1, then a lock is acquired on sharedObj2.

In the Thread2Work method:

lock (sharedObj1)
{
    ...
    lock (sharedObj2)
    {
        ...
    }
}

Note that the order of the locks in the Thread2Work method has been changed to match the order in Thread1Work. First a lock is acquired on sharedObj1, then a lock is acquired on sharedObj2.

Here is the revised version of the complete code listing:

class DeadlockDemo
{
    private static readonly object sharedObj1 = new();
    private static readonly object sharedObj2 = new();

    public static void Execute()
    {
        Thread thread1 = new Thread(Thread1Work);
        Thread thread2 = new Thread(Thread2Work);
        thread1.Start();
        thread2.Start();
        thread1.Join();
        thread2.Join();
        Console.WriteLine("Finished execution.");
    }

    static void Thread1Work()
    {
        lock (sharedObj1)
        {
            Console.WriteLine("Thread 1 has acquired a lock on resource 1. " +
                "It is now waiting to acquire a lock on resource 2.");
            Thread.Sleep(1000);
            lock (sharedObj2)
            {
                Console.WriteLine("Thread 1 acquired a lock on resource 2.");
            }
        }
    }

    static void Thread2Work()
    {
        lock (sharedObj1)
        {
            Console.WriteLine("Thread 2 has acquired a lock on resource 1. " +
                "It is now waiting to acquire a lock on resource 2.");
            Thread.Sleep(1000);
            lock (sharedObj2)
            {
                Console.WriteLine("Thread 2 acquired a lock on resource 2.");
            }
        }
    }
}

Refer to the original and revised code listings. In the original listing, thread1 and thread2 immediately acquire locks on sharedObj1 and sharedObj2, respectively. Then thread1 is suspended until thread2 releases sharedObj2. Similarly, thread2 is suspended until thread1 releases sharedObj1. Because the two threads acquire locks on the two shared objects in opposite order, the result is a circular dependency and hence a deadlock.

In the revised listing, the two threads acquire locks on the two shared objects in the same order, thereby ensuring that there is no possibility of a circular dependency. Hence, the revised code listing shows how you can resolve any deadlock situation in your application by ensuring that all threads acquire locks in a consistent order.

Best practices for thread synchronization

While it is often necessary to synchronize access to shared resources in an application, you must use thread synchronization with care. By following Microsoft’s best practices you can avoid deadlocks when working with thread synchronization. Here are some things to keep in mind:

When using the lock keyword, or the System.Threading.Lock object in C# 13, use a dedicated private or protected object of a reference type to identify the shared resource. The object used to identify a shared resource can be any arbitrary class instance.

Avoid using immutable types in your lock statements. For example, locking on string objects could cause deadlocks due to interning (because interned strings are essentially global).

Avoid using a lock on an object that is publicly accessible.

Avoid using statements like lock(this) to implement synchronization. If the instance is publicly accessible, external code could lock on the same object, and deadlocks could result.
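The guidelines above can be summed up in a short sketch; the Account class is a hypothetical example, not code from the article.

```csharp
public class Account
{
    // Good: a dedicated, private lock object that no outside code can lock on.
    private readonly object balanceLock = new object();
    private decimal balance;

    public void Deposit(decimal amount)
    {
        // Not lock(this) and not lock("balance"): the instance may be publicly
        // reachable, and string literals are interned and effectively global.
        lock (balanceLock)
        {
            balance += amount;
        }
    }
}
```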

Note that you can use immutable types to enforce thread safety without needing to write code that uses the lock keyword. Another way to achieve thread safety is to use local variables to confine your mutable data to a single thread. Local variables and objects are always confined to one thread. In other words, because shared data is the root cause of race conditions, you can eliminate race conditions by confining your mutable data. However, confinement defeats the purpose of multi-threading, so it will be useful only in certain circumstances.
https://www.infoworld.com/article/2237276/understanding-thread-synchronization-in-c-sharp.html
