The team was excited. The demo went well, the client appreciated our platform and agreed to use it. Great news indeed!
“We will use it from next week. Will it hold up when all users log in on Monday morning?” A common concern, based on past experience.
The team had done a good job of understanding the scalability requirements and had designed an architecture that would scale well. However, during the initial phase everyone was focused on functionality; performance was yet to be tested.
As I reviewed the design, I found one thing missing: concurrency requirements.
How many users can be active at the same time?
How many requests will simultaneously reach the server?
This is an essential input for capacity planning. Provision for too little concurrency and requests get dropped; provision for too much and costs go up.
There is no single correct answer. Concurrency keeps changing with external events, the time of day, user behavior, and so on. It can only be predicted with visibility into past data.
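For instance, if past request logs already carry start and end timestamps, a simple sweep over those events reveals the historical peak. The sketch below is illustrative only; it assumes such logs are available, and peak_concurrency and the sample data are hypothetical rather than taken from our platform.

```python
# Illustrative sketch: estimate peak concurrency from past request logs.
# Assumes each log entry provides a (start, end) timestamp pair.
from datetime import datetime


def peak_concurrency(requests):
    """Return the highest number of requests that were in flight at once."""
    events = []
    for start, end in requests:
        events.append((start, 1))    # a request begins
        events.append((end, -1))     # a request ends
    # Sort by time; process ends before starts at the same instant so that
    # back-to-back requests are not counted as overlapping.
    events.sort(key=lambda event: (event[0], event[1]))

    active = peak = 0
    for _, delta in events:
        active += delta
        peak = max(peak, active)
    return peak


# Hypothetical sample: three requests, all overlapping at 09:00:01.
logs = [
    (datetime(2021, 9, 6, 9, 0, 0), datetime(2021, 9, 6, 9, 0, 2)),
    (datetime(2021, 9, 6, 9, 0, 1), datetime(2021, 9, 6, 9, 0, 3)),
    (datetime(2021, 9, 6, 9, 0, 1), datetime(2021, 9, 6, 9, 0, 4)),
]
print(peak_concurrency(logs))  # prints 3
```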
This is where instrumentation helps. Build the desired counters into the system, then record, monitor, and alert on them regularly. Allow tuning of what to check, what to record, and what to alert on. You will need regular monitoring and tweaking to get the best efficiency.
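As a rough sketch of what such a counter could look like (illustrative, not our actual implementation), a small in-process gauge can track how many requests are in flight and report a snapshot at a regular interval:

```python
# Illustrative sketch of an in-process concurrency counter; class and function
# names are hypothetical, not taken from the product described above.
import threading
import time


class ConcurrencyCounter:
    """Tracks requests currently in flight and the peak observed so far."""

    def __init__(self):
        self._lock = threading.Lock()
        self.active = 0   # requests being processed right now
        self.peak = 0     # highest concurrency seen since startup

    def __enter__(self):
        with self._lock:
            self.active += 1
            self.peak = max(self.peak, self.active)
        return self

    def __exit__(self, *exc_info):
        with self._lock:
            self.active -= 1


counter = ConcurrencyCounter()


def report_loop(interval_seconds=60):
    """Record a snapshot at a fixed interval; in production this would feed a
    metrics store or alerting system instead of printing to stdout."""
    while True:
        time.sleep(interval_seconds)
        print(f"concurrency active={counter.active} peak={counter.peak}")


def handle_request(request):
    # Wrapping every handler in the counter keeps the gauge accurate
    # even when the handler raises an exception.
    with counter:
        pass  # actual request processing goes here


threading.Thread(target=report_loop, daemon=True).start()
```

Recording the peak alongside the current value is what turns such a counter into an answer to the first question above: how many users were actually active at the same time, and when.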
Do you have these counters built into your product?