[10:23 CET] The DC is currently unreachable.
We are trying to reach the onsite team for more information on the situation.
[UPDATE 11:25 CET]
Services are restored; we are waiting for an RFO (Reason for Outage).
RFO from the DC:
We're experiencing a few issues with our core routers causing BGP sessions to flap. The issue has been identified and we are working to implement a fix.
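For reference, flapping BGP sessions on Cisco IOS can be inspected with commands like the following (a sketch; the neighbor address is illustrative):

```
! Session states and up/down transition counts per neighbor
show ip bgp summary
! Last reset reason for a specific neighbor (address is illustrative)
show ip bgp neighbors 192.0.2.1
! Recent session state changes, if "bgp log-neighbor-changes" is enabled
show logging | include BGP
```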
Update 10:20: We are currently power cycling one of our core routers. ETA 10-15 minutes.
Update 10:34: The core router has been power cycled. ETA 5 minutes.
Update 10:59: We found issues with the NVRAM on one of our core routers and are manually restoring a config backup.
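A manual config restore on IOS typically looks like this (a sketch; the TFTP server address and filename are illustrative):

```
! Copy the saved configuration from a backup server into NVRAM
! (server address and filename are illustrative)
copy tftp://192.0.2.10/core1-confg startup-config
! Verify the restored startup configuration
show startup-config
! Reload so the router boots with the restored config
reload
```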
Update 11:25: The backup has been restored. We are proceeding with a supervisor replacement.
Update 11:40: We are verifying switch uplinks. We had issues with a few switches; these should be resolved now.
Update 11:56: We are seeing some packet loss in parts of our DC and are working to resolve it.
Update 12:01: When low on memory, IOS has a self-defense mechanism that inserts a limit into the FIB TCAM, causing routes beyond that limit to be dropped. This limit was the cause of the sporadic packet loss in certain parts of the DC.
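On a SUP720, this condition and the TCAM route limits can be checked with (a sketch of the relevant show commands):

```
! Whether the FIB TCAM exception (route-limit self-defense) has been triggered
show mls cef exception status
! Configured maximum-route limits in the FIB TCAM
show mls cef maximum-routes
! Current FIB TCAM utilization
show platform hardware capacity forwarding
```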
The issues with our core router should now be completely resolved. We apologize for the inconvenience.
We will proceed by:
Migrating from our SUP720-3BXLs to our new, ready-to-go SUP6T supervisors.
Wednesday, January 24, 2018