All Systems Operational

About This Site

This page is dedicated to keeping you informed of status updates from Telnyx.

Telephony System Operational
Messaging Operational
Call Control Operational
Mission Control API Operational
East Region Operational
Central Region Operational
West Region Operational
CNAM API Operational
Central Region Operational
West Region Operational
LRN Lookup API Operational
East Region Operational
Central Region Operational
Switch Data API Operational
East Region Operational
Central Region Operational
Status key: Operational / Degraded Performance / Partial Outage / Major Outage / Maintenance
Scheduled Maintenance
Our Telephony Engineers will be performing maintenance on the Telephony Engine in Europe. This maintenance will not disrupt service; however, you may notice your devices re-registering during the maintenance window, as our primary European SIP proxy will be temporarily replaced by our secondary SIP proxy. The SIP proxy IP addresses in Europe will be changing from 192.76.120.21 and 64.16.250.21 to 185.246.41.140 and 185.246.41.141. Please note the old IP addresses will remain reachable during and after the maintenance.
Posted on Nov 15, 10:56 CST
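If you restrict SIP traffic by source IP, it is worth confirming that the new proxy addresses are permitted before the maintenance window. Below is a minimal sketch that checks a local allowlist; the file name and format are hypothetical, so adapt it to wherever your firewall or PBX keeps its allowed sources.

```python
# Sketch: verify the new Telnyx EU SIP proxy IPs are in a local allowlist.
# "sip_allowlist.txt" is a hypothetical file with one IP address per line.

NEW_PROXIES = {"185.246.41.140", "185.246.41.141"}
OLD_PROXIES = {"192.76.120.21", "64.16.250.21"}  # remain reachable per the notice above

def load_allowlist(path: str) -> set[str]:
    with open(path) as f:
        return {line.strip() for line in f if line.strip() and not line.startswith("#")}

if __name__ == "__main__":
    allowed = load_allowlist("sip_allowlist.txt")
    missing = NEW_PROXIES - allowed
    if missing:
        print("Add these proxy IPs before the maintenance:", ", ".join(sorted(missing)))
    else:
        print("New EU SIP proxy IPs are already allowlisted.")
```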
Response time charts: Mission Control API, CNAM API, LRN Lookup API, Switch Data API
Past Incidents
Nov 17, 2019

No incidents reported today.

Nov 16, 2019

No incidents reported.

Nov 15, 2019

No incidents reported.

Nov 14, 2019
Resolved - This incident has been resolved. We will continue monitoring for any issues with failed or delayed messages.
Nov 14, 19:50 CST
Update - Our engineers have added additional capacity for SMS Termination. We will continue to monitor traffic for failed/delayed messages.

Thank you for your continued patience.
Nov 14, 13:36 CST
Monitoring - At 16:00 UTC, our engineers implemented a fix and added more capacity to prevent further delays or failures.

They're continuing to monitor traffic to ensure stability.

Thank you for your continued patience.
Nov 14, 10:45 CST
Identified - The issue has been identified as a capacity problem, which is also resulting in intermittent failed or dropped outbound messages.

Some customers may see a 5XX response from our messaging endpoints.

Our engineers are working on adding more capacity as we speak.
Nov 14, 09:56 CST
Investigating - Our monitoring tools detected an increase in messaging delays beginning at 15:35 UTC.

Our engineers are actively investigating the root cause and full extent of the impact.

Customers may notice a delay in the processing of their sent messages. Messages are still being delivered, just with a delay.
Nov 14, 09:47 CST
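The Identified update above notes that some customers may see 5XX responses from the messaging endpoints during the capacity issue. A common way to ride out short-lived problems like this is to retry 5XX responses with exponential backoff. The sketch below uses the requests library; the endpoint path, payload fields and the TELNYX_API_KEY environment variable are assumptions, so check the Telnyx messaging documentation for the exact request shape.

```python
import os
import time
import requests

# Hypothetical send helper that retries 5XX responses with exponential backoff.
# URL, payload fields and auth scheme are assumptions based on a typical
# bearer-token JSON API; verify against the official Telnyx messaging docs.
API_KEY = os.environ["TELNYX_API_KEY"]
SEND_URL = "https://api.telnyx.com/v2/messages"

def send_with_retry(payload: dict, attempts: int = 5) -> requests.Response:
    for attempt in range(attempts):
        resp = requests.post(
            SEND_URL,
            json=payload,
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=10,
        )
        if resp.status_code < 500:   # success, or a client error that should not be retried
            return resp
        time.sleep(2 ** attempt)     # back off: 1s, 2s, 4s, 8s, ...
    return resp

if __name__ == "__main__":
    r = send_with_retry({"from": "+13125550100", "to": "+13125550101", "text": "hello"})
    print(r.status_code, r.text[:200])
```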
Nov 13, 2019
Resolved - This incident has been fully resolved since 20:50 UTC.

A full post-mortem will be provided as soon as a comprehensive review of all details, including those from our networking vendor, has been completed.

Thank you again for your continued patience.
Nov 13, 15:56 CST
Update - Thanks again for your continued patience while our engineers continue to gather more details on the root cause.

Our networking engineers have enlisted the help of our vendor to determine the cause of the high CPU usage in our Chicago region. This is an ongoing effort.

We'd like to clarify that services located in the Chicago region were impacted between 20:15 UTC and 20:49 UTC.

Messaging saw little impact, as other regions picked up the re-routed traffic.

Call Control was impacted during the above time frame, and users would have seen intermittent 5XX responses from the respective APIs.

Telephony calls were re-routed, but some users may have observed added latency or audio quality issues between 20:20 and 20:22 UTC.

The Mission Control portal and related services, such as porting, were also impacted during this time frame.

We are continuing to monitor all services at this time but everything has been fully stable since 20:50 UTC.
Nov 13, 15:27 CST
Update - We're observing continued stability.

We can confirm that traffic from other US regions was affected, and further details will be available as soon as everything is confirmed by our networking engineers.

Thank you for your continued patience.
Nov 13, 14:55 CST
Monitoring - The issues are currently resolved.

In the meantime, we continue to route traffic around Chicago and can confirm that production services are not impacted.
Nov 13, 14:41 CST
Identified - We're seeing high CPU usage across our Chicago region, which has resulted in DNS flaps.

Our networking team has identified the issue and is working on resolving it.

We're still confirming exactly which services are impacted at this time.
Nov 13, 14:32 CST
Investigating - We are currently investigating internal DNS resolution issues that are causing certain services to resolve only intermittently.

More information on which services are impacted will be posted shortly.
Nov 13, 14:22 CST
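From the customer side, a quick way to see whether a DNS flap like this is visible to you is to resolve the hostnames you depend on in a loop and watch for failures. A minimal sketch using only the standard library; the hostname list and polling interval are just examples.

```python
import socket
import time

# Repeatedly resolve a few hostnames and report failures, which is roughly
# what a DNS "flap" looks like from outside. Adjust HOSTS to the services you use.
HOSTS = ["api.telnyx.com", "portal.telnyx.com"]

def check_once() -> None:
    for host in HOSTS:
        try:
            addrs = {info[4][0] for info in socket.getaddrinfo(host, 443)}
            print(f"{host}: {', '.join(sorted(addrs))}")
        except socket.gaierror as exc:
            print(f"{host}: resolution FAILED ({exc})")

if __name__ == "__main__":
    for _ in range(10):      # ten checks, thirty seconds apart
        check_once()
        time.sleep(30)
```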
Nov 12, 2019
Resolved - This has been resolved after our chat provider, Intercom, rolled back a bad deploy that led to the original service interruptions.
Nov 12, 19:06 CST
Identified - We've notified our third-party provider, Intercom, of the unavailability of their chat application running on telnyx.com and portal.telnyx.com.

Please contact support@telnyx.com, sales@telnyx.com or porting@telnyx.com for any queries.
Nov 12, 15:02 CST
Resolved - Our engineers have implemented a subsequent fix and we are now seeing better response times.

We are continuing to monitor to ensure full stability.
Nov 12, 16:20 CST
Identified - Our engineers have detected another increase in latency to api.telnyx.com; they're investigating as we speak.

The same behaviour described in the previous posts applies.
Nov 12, 15:14 CST
Monitoring - A fix has been implemented and our engineers are continuing to monitor response times.
Nov 12, 14:52 CST
Identified - Our monitoring tools have alerted our engineers to an increase in api.telnyx.com response times.

They are actively working on the issue as we speak and have identified the cause.

You may see some request timeouts, and your session may end and log you out.
Nov 12, 14:45 CST
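While response times are elevated, requests sent without an explicit timeout can hang rather than fail. The sketch below bounds each request and retries on timeout; the endpoint path and the TELNYX_API_KEY environment variable are assumptions, not a documented example.

```python
import os
import requests

# Hypothetical helper: bound each request so a slow api.telnyx.com response
# fails fast and can be retried instead of hanging. Substitute your own call.
API_KEY = os.environ["TELNYX_API_KEY"]

def get_with_timeout(url: str, retries: int = 3, timeout_s: float = 5.0) -> requests.Response:
    last_exc = None
    for _ in range(retries):
        try:
            return requests.get(
                url,
                headers={"Authorization": f"Bearer {API_KEY}"},
                timeout=timeout_s,   # applied to the connect and read phases
            )
        except requests.exceptions.Timeout as exc:
            last_exc = exc           # too slow; try again rather than waiting indefinitely
    raise last_exc

if __name__ == "__main__":
    resp = get_with_timeout("https://api.telnyx.com/v2/phone_numbers")
    print(resp.status_code)
```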
Resolved - This incident has been resolved.
Nov 12, 14:36 CST
Identified - We've notified our third-party provider, Intercom, of the unavailability of their chat application running on telnyx.com and portal.telnyx.com.

Please contact support@telnyx.com, sales@telnyx.com or porting@telnyx.com for any queries.
Nov 12, 09:01 CST
Nov 11, 2019
Resolved - Our engineers have updated the relevant certificate, and the Web Dialer, along with our WebRTC SDK, is functioning again.

Thank you for your continued patience.
Nov 11, 11:04 CST
Identified - Our engineers have identified an invalid certificate that is causing our Web Dialer, https://portal.telnyx.com/#/app/debugging/web-dialer, to function incorrectly at this time.

They are actively working on restoring functionality.
Nov 11, 10:12 CST
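A quick way to confirm from your side that a certificate issue like this has been resolved is to pull the certificate presented during the TLS handshake and check it. A small sketch using only the Python standard library, pointed at the portal host mentioned above.

```python
import socket
import ssl
from datetime import datetime, timezone

HOST = "portal.telnyx.com"

def cert_expiry(host: str, port: int = 443) -> datetime:
    # If the certificate is invalid or expired, wrap_socket() itself raises
    # ssl.SSLCertVerificationError, which already answers the question.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)

if __name__ == "__main__":
    try:
        expiry = cert_expiry(HOST)
        print(f"{HOST}: certificate valid, expires {expiry:%Y-%m-%d %H:%M} UTC")
    except ssl.SSLCertVerificationError as exc:
        print(f"{HOST}: certificate problem: {exc}")
```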
Nov 10, 2019

No incidents reported.

Nov 9, 2019

No incidents reported.

Nov 8, 2019
Resolved - The situation has been under careful monitoring for the past hour and no further delays have been experienced. The issue is now resolved.
Nov 8, 16:16 CST
Monitoring - The cause of the messaging delays affecting SMS to/from short codes has been identified and fixed. The team is now monitoring the situation to ensure there are no lingering issues.
Nov 8, 15:16 CST
Investigating - We have identified delays in SMS being sent from Telnyx numbers TO short codes. There is also a delay in messages being sent FROM short codes to Telnyx numbers.

This issue will not affect short codes on the Telnyx network.

We will post additional updates here as soon as we are able to provide one.
Nov 8, 15:00 CST
Nov 7, 2019

No incidents reported.

Nov 6, 2019

No incidents reported.

Nov 5, 2019

No incidents reported.

Nov 4, 2019
Resolved - After continuous monitoring, our engineers have not seen any further delays in inbound or outbound MMS. We thank you for your continued patience and apologize for any inconvenience caused.
Nov 4, 16:17 CST
Identified - Our engineers have identified a delay in delivery of inbound and outbound MMS messages.

They are currently investigating the root cause and are working diligently on resolving the issue.
Nov 1, 13:39 CDT
Resolved - After continuous monitoring, our engineers have not seen any further failures in status updates for outbound messages. We thank you for your continued patience and apologize for any inconvenience caused.
Nov 4, 16:17 CST
Identified - Our engineers have detected an increase in outbound messaging status update failures through our internal monitoring system since 14:30 UTC. Your messages are being processed, sent and delivered, but their status updates are currently delayed. Our engineers are continuing to investigate the root cause at this time.
Nov 4, 08:33 CST
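When status updates are delayed, one workaround is to poll the message resource directly instead of waiting for the webhook. The sketch below assumes a message-detail endpoint of the form GET /v2/messages/{id} and a response field layout that should be confirmed against the Telnyx API reference before use.

```python
import os
import time
import requests

# Hypothetical polling fallback for delayed status updates. The endpoint path
# and the "data.to[0].status" field layout are assumptions; verify them against
# the Telnyx API reference.
API_KEY = os.environ["TELNYX_API_KEY"]

def poll_message_status(message_id: str, attempts: int = 6, delay_s: int = 30) -> str:
    url = f"https://api.telnyx.com/v2/messages/{message_id}"
    status = "unknown"
    for _ in range(attempts):
        resp = requests.get(url, headers={"Authorization": f"Bearer {API_KEY}"}, timeout=10)
        resp.raise_for_status()
        status = resp.json().get("data", {}).get("to", [{}])[0].get("status", "unknown")
        if status in {"delivered", "delivery_failed", "sending_failed"}:
            return status            # final state reached
        time.sleep(delay_s)          # not final yet; wait and poll again
    return status
```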
Resolved - We have been informed by our upstream carriers that service has been restored and the cut fiber lines have been repaired.

Please reach out to support on chat if you see any issues.

We appreciate your patience with this issue.
Nov 4, 02:32 CST
Update - The issue in WA is still being addressed at this time.

A dispatch has addressed the fiber cut but there is also a hardware issue in the central office.

We will advise when further status updates are provided.
Nov 2, 08:45 CDT
Update - We are continuing to work on a fix for this issue.
Oct 31, 13:25 CDT
Identified - We've been made aware of a fiber cut that's affecting inbound calls to the 509 area code. Crews in the area are working to have this service restored. Updates will be provided when available.
Oct 31, 13:24 CDT
Nov 3, 2019

No incidents reported.