23 Mar
Every time we visit a crowded restaurant, it is astonishing to see how seamlessly the staff work together to serve so many people at once. This is possible because each person interleaves multiple tasks at the same time, keeping idle time to a minimum.
CPU processing is analogous to this scenario. A CPU with multiple cores can cater to multiple requests at a time, and multi-threaded programs exploit this by working on independent tasks concurrently, giving the feel of a faster response. Multithreading is therefore usually recommended for applications that are expected to feel fast and stay responsive.
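As a minimal sketch of this idea (using Python's `threading` module here; the task names and delays are made up for illustration), two independent tasks that each spend most of their time waiting can be overlapped, so the total wall-clock time is roughly that of the slowest task rather than the sum of both:

```python
import threading
import time

def fetch(name, delay, results):
    """Simulate an independent task that mostly waits (e.g. on I/O)."""
    time.sleep(delay)
    results[name] = f"{name} done"

results = {}
start = time.perf_counter()

# Run both tasks in their own threads so their waiting periods overlap.
threads = [
    threading.Thread(target=fetch, args=("take_order", 0.2, results)),
    threading.Thread(target=fetch, args=("prepare_bill", 0.2, results)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

elapsed = time.perf_counter() - start
# Run sequentially these tasks would take ~0.4 s; threaded, ~0.2 s.
print(f"finished in {elapsed:.2f}s: {sorted(results)}")
```

This is the "feel of a faster response" in miniature: neither task ran faster, but neither sat idle waiting for the other to finish.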
Having said that, one should always remember that multithreading does not make the CPU or the program code itself run faster. It only minimizes idle CPU time by keeping the cores as busy as possible. This is not always beneficial – for example, a program dominated by heavy I/O is limited by the bandwidth of the disk or network device, not by idle CPU time, so adding threads generally does little to speed it up.
Coding complexity is another consideration. Writing multithreaded applications is not easy: the number of available CPU cores, dependencies between functionalities, and the need to access shared data all add to the complexity. Miss any one of these and the behavior might be unexpected, and troubleshooting such unexpected behavior can be a nightmare for developers. Create too many threads and the context switching itself takes a toll on the CPU, degrading the responsiveness of the program rather than improving it.
Multithreading is a double-edged sword and should always be used wisely, only after thorough thought. Nobody wants to make a program worse rather than better – and that after the extra effort of handling all the nuances involved!