Incident Closed

Incident Resolved

Microsoft have now resolved the issue. Root cause analysis identified a fault within a specific section of the mailbox database infrastructure; this section was restarted and is now working correctly again. 

 

Microsoft are now analysing performance data for the affected area to prevent this from happening again. 

 

We have found that in some cases users have needed to log off their sessions and log back on again to get the connection working correctly. 

Incident Resolved

The disconnection issues have been resolved. 

Incident Resolved

Microsoft have mitigated the issue by diverting mail flow around the affected infrastructure. 
 
Email services should now be returning to normal; however, due to the backlog of emails, it may take some time for messages to be delivered as usual. 
We would also advise resending any emails for which you received bouncebacks, to ensure they reach their intended recipients. 
 
Kind Regards
Accesspoint Support Team

Incident Resolved

Please be aware that the remediation work has now concluded and services have been fully restored. All client platforms are once again fully operational. We will continue to monitor the platform. 

Incident Resolved

Please be aware that Mimecast have now resolved the issue with the UK email service and mail flow has returned to normal. 

Incident Resolved

The delays in sending and receiving external emails now appear to be resolved. 
Emails sent yesterday also appear to have been delivered, and no further issues have been reported on these matters. 
 
If you continue to encounter any problems with email delays, please do not hesitate to contact Support@theaccesspoint.co.uk
 
Kind Regards
Accesspoint

Incident Resolved

Issues with Mimecast appear to have now subsided, so this incident will be closed. 

Incident Resolved

The works to replace the main firewall cluster have been completed successfully, and logins have been enabled for all platforms.

Incident Resolved

Update Number: 20 (Entanet / CityFibre)

Completed Actions:

  • Reports of circuit impact into the CityFibre TSC
  • CityFibre TSC engaged CityFibre NOC for initial investigations
  • CityFibre NOC confirmed an issue seen on active monitoring
  • MI process engaged
  • MI accepted
  • Internal Bridge call scheduled
  • NOC investigations ongoing with several examples of affected circuits provided from information gathering by TSC
  • Further impact to Consumer circuits discovered and acknowledged
  • NOC investigations determined an issue within the core network emanating from a specific location
  • NOC contacted hardware supplier and raised a Priority 1 case
  • All logs provided to hardware supplier for analysis
  • Internal Bridge call convened
  • Conference call between CityFibre NOC and hardware supplier convened
  • Following discussions between CityFibre NOC and the hardware supplier, there have been developments on this incident regarding restoration.
  • It has been found that the origin point of the issue is on a line card situated within a core network device.
  • Soft clear of card performed without success
  • Full remote reboot of card performed which was successful for a period of approx. 30 mins before the issue manifested again
  • Further internal call held with CityFibre NOC and Hardware Supplier to agree next steps
  • Escalation made to Hardware Supplier to confirm part availability and Engineer ETA
  • Part sourcing resolved
  • Engineer details confirmed and will be collecting at 0700.
  • Access request to DC confirmed
  • Issue with retrieving parts from location resolved
  • Engineer attended Slough DC
  • Engineer has completed card swap successfully
  • Testing and checks completed
  • BGP reenabled
  • Network stability confirmed
  • CityFibre NOC Engineers have advised they are seeing network instability issues and are currently investigating.
  • Initial assessment appears to have identified that, following the reintroduction of traffic after the repairs, stability was observed to deteriorate. This recovery step has been reverted
  • Investigation continued with diagnostics being carried out on the Network Device. Network traffic remains rerouted and services have been observed as stable at this time.
  • Card causing alarms remains out of service
  • Resilient links continue to carry traffic, mitigating any remaining impact to services previously utilising that card and preventing any further service disruption from our planned restoration activity.
  • Engineers swapped the Card causing alarms successfully at 19:25
  • Network Stability has been seen since Card replacement

Resolution Notes:

  • Network Cards at two network locations replaced.
  • The remaining two costed-out links are internal to CityFibre; there is no impact to customers, and they will be managed separately going forward

Resolved:

14:31 (Customer impact resolved following the card swap-out at 19:25 on Saturday 23rd July)

Incident Resolved

Issues with Chrome have been resolved.