Cloudflare is having issues with their DNS services.
This may impact sites and services globally.
We’re starting to see some of their services returning to normal, but we caution that there may be further impact.
More information at https://www.cloudflarestatus.com/
One of our network upstreams has announced network maintenance.
Please note that routing changes may cause a momentary traffic dip during this time frame.
Our own traffic should automatically fail over to secondary ISPs.
Date and time for maintenance window
Start date and time: 2020-05-05 03:00 UTC
End date and time: 2020-05-05 04:00 UTC
At 04:30 we rebooted one of our core routers to rule out an issue on our end.
We will reboot some other equipment to continue troubleshooting.
We apologize for the disruption.
Update: One core switch was also rebooted manually at 05:50 – 05:52 (short dip).
We have made adjustments to network traffic as a stopgap solution to the flapping network connection.
One of our upstream transits suffered an outage at 19:05. This caused network re-routing, which may have been felt as a momentary network dip.
Traffic has been re-routed to failovers. We’re monitoring the networks.
When: 19:05 – 19:10 (upstream has failed over)
Impact: 5 min of re-routing, traffic dip
Update: Some more network dips occurred during the evening. The network upstream suspected of causing high CPU usage in our cores has been isolated. We’re continuing to monitor the situation.
When: 20:30 and 23:30, roughly 2–3 min per incident
At 22:00 we experienced some routing issues. We are currently investigating the cause.
Cause: One of our upstream transits lost BGP connectivity / flapped
Impact: 5 – 10 min of routing table rebuilds
Update (22:30): One of our transit operators was having issues. Connectivity recovered after a few minutes.
At 04:00 we pushed an urgent patch to our routers. This caused BGP sessions to reset.
Traffic was disrupted momentarily until BGP was re-established.
We’ve applied important security updates to some of our switches in Västberga.
Each switch requires less than 5 minutes to reboot.
Impact: 08:50 – 08:55, 09:50 – 09:55
Problem: An upstream/ISP router rebooted. Suspected DDoS. The operator is restarting services and restoring connectivity.
Future resolution: Replacing the upstream router.
We’ve now taken the ISP with connectivity issues out of rotation and re-routed traffic to a different ISP.
An upstream provider had problems with their network. This affected parts of Adminor’s network.
The cause was a DDoS attack against the provider.
At 18:15 we were notified that BGP sessions towards one of our upstream providers went down.
This caused some network dips for one of our core routers while routes were being rebuilt.
At 18:21 the BGP sessions towards the upstream were restored.
We’ve asked the upstream to investigate the unannounced loss of session.
19:00 Update: The upstream has replied and said that a high-CPU situation on their router caused BGP sessions to "flap" (disconnect and reconnect). CPU usage is now stable, and the BGP sessions are as well.