lock (sharedObj1)
{
    ...
    lock (sharedObj2)
    {
        ...
    }
}
Note that the order of the locks in the Thread2Work method has been changed to match the order in Thread1Work. First a lock is acquired on sharedObj1, then a lock is acquired on sharedObj2.
Here is the revised version of the complete code listing:
class DeadlockDemo
{
    private static readonly object sharedObj1 = new();
    private static readonly object sharedObj2 = new();

    public static void Execute()
    {
        Thread thread1 = new Thread(Thread1Work);
        Thread thread2 = new Thread(Thread2Work);
        thread1.Start();
        thread2.Start();
        thread1.Join();
        thread2.Join();
        Console.WriteLine("Finished execution.");
    }

    static void Thread1Work()
    {
        lock (sharedObj1)
        {
            Console.WriteLine("Thread 1 has acquired a lock on shared resource 1. " +
                "It is now waiting to acquire a lock on resource 2.");
            Thread.Sleep(1000);
            lock (sharedObj2)
            {
                Console.WriteLine("Thread 1 acquired a lock on resource 2.");
            }
        }
    }

    static void Thread2Work()
    {
        lock (sharedObj1)
        {
            // Both threads now acquire the locks in the same order:
            // sharedObj1 first, then sharedObj2.
            Console.WriteLine("Thread 2 has acquired a lock on shared resource 1. " +
                "It is now waiting to acquire a lock on resource 2.");
            Thread.Sleep(1000);
            lock (sharedObj2)
            {
                Console.WriteLine("Thread 2 acquired a lock on resource 2.");
            }
        }
    }
}
Compare the original and revised code listings. In the original listing, threads Thread1Work and Thread2Work immediately acquire locks on sharedObj1 and sharedObj2, respectively. Thread1Work is then suspended until Thread2Work releases sharedObj2. Similarly, Thread2Work is suspended until Thread1Work releases sharedObj1. Because the two threads acquire locks on the two shared objects in reverse order, the result is a circular dependency and hence a deadlock.
In the revised listing, the two threads acquire locks on the two shared objects in the same order, which rules out any possibility of a circular dependency. Hence, the revised code listing shows how you can resolve a deadlock situation in your application by ensuring that all threads acquire locks in a consistent order.
Best practices for thread synchronization
While it is often necessary to synchronize access to shared resources in an application, you should use thread synchronization with care. By following Microsoft's best practices, you can avoid deadlocks when working with thread synchronization. Here are some things to keep in mind:
- When using the lock keyword, or the System.Threading.Lock object in C# 13, use an object of a private or protected reference type to identify the shared resource. The object used to identify a shared resource can be any arbitrary class instance.
- Avoid using immutable types in your lock statements. For example, locking on string objects may cause deadlocks due to interning (because interned strings are essentially global).
- Avoid taking a lock on an object that is publicly accessible.
- Avoid statements like lock(this) to implement synchronization. If the this object is publicly accessible, deadlocks may result.
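The practices above can be sketched in a few lines. This is an illustrative example, not part of the listings; the Account class, balanceLock field, and Deposit method are hypothetical names:

```csharp
using System;

class Account
{
    // A dedicated, private reference-type object used solely for locking.
    // Because it is never exposed outside the class, no external code can
    // lock on it and contend with the class's own synchronization.
    private readonly object balanceLock = new();
    private decimal balance;

    public void Deposit(decimal amount)
    {
        lock (balanceLock)
        {
            balance += amount;
        }
        // Anti-pattern for comparison: lock (this). If callers also write
        // lock (someAccountInstance), they take the same lock as the class
        // itself, which invites contention and potential deadlocks.
    }
}
```

Keeping the lock object private and dedicated means the set of code paths that can hold the lock is visible in one file, which makes lock-ordering bugs far easier to audit.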
Note that you can use immutable types to enforce thread safety without having to write code that uses the lock keyword. Another way to achieve thread safety is to use local variables to confine your mutable data to a single thread. Local variables and objects are always confined to one thread. In other words, because shared data is the root cause of race conditions, you can eliminate race conditions by confining your mutable data. However, confinement defeats the purpose of multi-threading, so it will be useful only in certain scenarios.
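As a rough sketch of confinement (the ConfinementDemo class and method names are illustrative, not from the listings): each task below accumulates into its own local variable, so no lock is needed while the work runs, and the partial results are combined only after both tasks have finished.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class ConfinementDemo
{
    public static long SumInParallel(int[] data)
    {
        int mid = data.Length / 2;
        Task<long>[] tasks =
        {
            Task.Run(() => PartialSum(data, 0, mid)),
            Task.Run(() => PartialSum(data, mid, data.Length))
        };
        // The partial results are only touched here, after both
        // tasks have completed, so no synchronization is required.
        return Task.WhenAll(tasks).Result.Sum();
    }

    // 'sum' is a local variable, confined to the thread running this task.
    private static long PartialSum(int[] data, int start, int end)
    {
        long sum = 0;
        for (int i = start; i < end; i++)
        {
            sum += data[i];
        }
        return sum;
    }
}
```

The input array is shared, but it is only read; the mutable state (each running total) never crosses a thread boundary while it is being written.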
