Opendental API Performance Bottleneck Investigation - Global Server vs API Key Level

Hello,
We've been conducting performance testing on our integration with the Open Dental API and have identified significant latency issues: a p95 latency of 35 seconds with 50 concurrent users. When we mocked these Open Dental API calls in our test environment, performance improved dramatically to a p95 of just 5 seconds (which is still high and which we are investigating separately).
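For reference, our load-test harness looks roughly like the sketch below (the endpoint URL and Authorization header format are placeholders, not the actual Open Dental API contract):

```python
# Rough sketch of the load test: 50 workers sharing one API key,
# then computing the p95 of observed latencies.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

API_URL = "https://api.example.com/api/v1/patients"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                             # one key shared by all workers
CONCURRENT_USERS = 50
REQUESTS_PER_USER = 10

def timed_call(_: int) -> float:
    """Issue one GET and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    requests.get(API_URL, headers={"Authorization": API_KEY}, timeout=60)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    latencies = list(pool.map(timed_call, range(CONCURRENT_USERS * REQUESTS_PER_USER)))

# statistics.quantiles with n=100 returns the 1st..99th percentile cut points.
print(f"p95 latency: {statistics.quantiles(latencies, n=100)[94]:.2f}s")
```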
As we understand it, API requests flow through the global server before being routed to individual location servers at dental clinics based on the API key provided.
Our question is: Where is this bottleneck most likely occurring?
1. Is it at the global server level (affecting all API consumers equally)?
2. Is it at the individual API key/location level (specific to certain dental clinic servers)? (At this point we are using the same API key for all 50 concurrent users in our load-test suite; we want to know whether this will still be a problem in our production environment, where each API key will have only one concurrent user.)
Understanding the source of this bottleneck would help us optimize our implementation and set appropriate expectations with our users.
Thanks
Re: Opendental API Performance Bottleneck Investigation - Global Server vs API Key Level
Hello abhishekcs028,
Here are some clarifications based on how the system behaves under load:
1. Is the bottleneck at the global server level (affecting all API consumers equally)?
The global server does play a role here. On the free tier, when an API call begins, the global server applies up to a 5-second throttle by pausing the response thread before sending it back to the client. Even if the office server responds quickly, your client still waits out the remainder of the throttle window. If you're making many concurrent calls, these paused threads can pile up at the global layer, adding latency and resource contention (the rough model below illustrates the effect). So yes, the throttle is enforced globally, though it's tied to individual API key usage.
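To make the pile-up concrete, here is a back-of-the-envelope model. The worker-pool size of the throttling layer is purely an assumption on my part (it isn't documented); only the 5-second throttle comes from the behavior described above:

```python
# Why paused response threads pile up: if the throttling layer can only
# hold a limited number of in-flight responses, later requests queue in waves.
import math

THROTTLE_S = 5          # free-tier delay per response (described above)
CONCURRENT_CALLS = 50   # the load test's concurrency
SERVER_THREADS = 10     # hypothetical capacity of the throttling layer

# Each "wave" of SERVER_THREADS responses is held for the full throttle
# window before the next wave can complete.
waves = math.ceil(CONCURRENT_CALLS / SERVER_THREADS)
print(f"last request waits roughly {waves * THROTTLE_S}s")  # ~25s here
```

Under that assumed capacity, the last of 50 simultaneous calls would be delayed by roughly five throttle windows, which is in the same ballpark as the 35-second p95 you measured.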
2. Is the bottleneck specific to the API key/location (i.e., the destination office server)?
Also yes. All 50 of your concurrent requests use the same API key, which targets a single office server, so your test is effectively simulating one location being hit with 50 simultaneous API calls. Even though you expect only "one user" per API key in production, that doesn't guarantee low concurrency: modern integrations often perform parallel operations (e.g., fetching appointments, patients, or procedures in bulk), so multiple requests may still be in flight at the same time. Unless you deliberately serialize your calls (waiting for one to finish before sending the next; see the sketch below), you could see similar throttling behavior in production, especially under the free tier's response delay.
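If you do decide to serialize, a minimal client-side pattern is to gate each API key behind a lock so only one request per key is in flight at a time. This is just a sketch (the class and its names are mine, not part of any Open Dental SDK):

```python
# Sketch: at most one in-flight request per API key, enforced client-side.
import threading

import requests

class SerializedClient:
    """Wraps a requests.Session and serializes calls for a single API key."""

    def __init__(self, api_key: str, base_url: str):
        self.base_url = base_url
        self.session = requests.Session()
        self.session.headers["Authorization"] = api_key
        self._lock = threading.Lock()  # one request at a time per key

    def get(self, path: str, **kwargs) -> requests.Response:
        # Callers queue here, on your side, instead of stacking up
        # against the global server's throttle.
        with self._lock:
            return self.session.get(self.base_url + path, timeout=60, **kwargs)

# Usage: share one SerializedClient per API key across all worker threads.
# client = SerializedClient("YOUR_API_KEY", "https://api.example.com")
# appts = client.get("/api/v1/appointments").json()
```

A semaphore with a small count works the same way if you decide a little parallelism per key is acceptable.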
Thanks!