Incident Started

Please note that we have been made aware of an ongoing nationwide incident within the UK Mimecast platform. This is causing delays in the sending and receiving of email for Mimecast customers. Internal messages are not affected.

The incident is currently being investigated by Mimecast support and we will provide an update no later than 1pm GMT.

Incident Resolved

The delays in sending and receiving external emails now appear to be resolved.
Emails sent yesterday also appear to have now been delivered. No further issues have been reported on these matters.
 
If you continue to encounter any problems with email delays, please do not hesitate to contact Support@theaccesspoint.co.uk
 
Kind Regards
Accesspoint

Status Update

We have had further reports that the delays are improving. Our investigations point towards this being a Microsoft issue, and further delays can be expected today as queued emails catch up after the disruption. We will continue to monitor the situation.

Incident Started

Delay in external email sending/receiving. 
 
We have had several reports this afternoon that external emails are being delayed after they are sent. The delay appears to be around 30 minutes.
 
We are actively investigating to determine whether this is a fault with Microsoft or with Mimecast. Further updates will be posted here as soon as we hear more.
 
Kind Regards
Accesspoint 

Incident Resolved

The issues with Mimecast appear to have subsided, so this incident will now be closed.

Status Update

Latest update from Mimecast. 
 
[Monitoring]  We have resolved the mail flow delivery delays on the UK grid, but continue to monitor while processing backlogs return to normal. All services should now be functioning normally. We appreciate your patience as we worked to resolve this issue.
 
Accesspoint will continue to monitor the situation as it progresses. 

Incident Started

Degraded Mimecast Service - Delivery Delays
 
We have been made aware that Mimecast is currently experiencing a degraded service in the UK. This is resulting in intermittent delays to email delivery.
 
The current statement from Mimecast on the matter reads:
 
[Investigating] We are aware that some customers hosted on our UK grid may be experiencing intermittent delivery delays and messages may be held in the processing queue. We are currently investigating the issue and updates will be posted here as they become available.

 
To view the status of Mimecast as updates happen, please see the following link: https://status.mimecast.com/  
 
We will update as soon as we hear more. 
 
Kind Regards

Accesspoint

Incident Resolved

The work to replace the main firewall cluster has been completed successfully, and logins have been enabled for all platforms.

Incident Resolved

Update Number: 20 (Entanet / CityFibre)

Completed Actions:

  • Reports of circuit impact into the CityFibre TSC
  • CityFibre TSC engaged CityFibre NOC for initial investigations
  • CityFibre NOC confirmed an issue seen on active monitoring
  • MI process engaged
  • MI accepted
  • Internal Bridge call scheduled
  • NOC investigations ongoing with several examples of affected circuits provided from information gathering by TSC
  • Further impact to Consumer circuits discovered and acknowledged
  • NOC investigations determined an issue within the core network emanating from a specific location
  • NOC contacted hardware supplier and raised a Priority 1 case
  • All logs provided to hardware supplier for analysis
  • Internal Bridge call convened
  • Conference call between CityFibre NOC and hardware supplier convened
  • Following discussions between the CityFibre NOC and our hardware supplier, there have been developments on this incident with regard to restoration.
  • The origin of the issue has been found to be a line card within a core network device.
  • Soft clear of card performed without success
  • Full remote reboot of card performed which was successful for a period of approx. 30 mins before the issue manifested again
  • Further internal call held with CityFibre NOC and Hardware Supplier to agree next steps
  • Escalation made to the hardware supplier to confirm part availability and engineer ETA
  • Part sourcing resolved
  • Engineer details confirmed; the engineer will be collecting the replacement part at 07:00.
  • Access request to the DC confirmed
  • Issue with retrieving parts from location resolved
  • Engineer attended Slough DC
  • Engineer has completed card swap successfully
  • Testing and checks completed
  • BGP reenabled
  • Network stability confirmed
  • CityFibre NOC Engineers have advised they are seeing network instability issues and are currently investigating.
  • Initial assessment identified that stability was observed to deteriorate after traffic was reintroduced following the repairs. This recovery step has been reverted
  • Investigation continued with diagnostics being carried out on the Network Device. Network traffic remains rerouted and services have been observed as stable at this time.
  • Card causing alarms remains out of service
  • Resilient links continue to carry traffic, mitigating any remaining impact to services previously using that card and preventing any further service disruption from our planned restoration activity.
  • Engineers successfully swapped the card causing alarms at 19:25
  • Network stability has been observed since the card replacement

Resolution Notes:

  • Network Cards at two network locations replaced.
  • The remaining two costed-out links are internal to CityFibre; there is no impact to customers and they will be managed separately going forward.

Resolved:

14:31 (customer impact resolved following the card swap-out at 19:25 on Saturday 23rd July)

Status Update

Update Number: 19 (Entanet / CityFibre)

Completed Actions:

  • Reports of circuit impact into the CityFibre TSC
  • CityFibre TSC engaged CityFibre NOC for initial investigations
  • CityFibre NOC confirmed an issue seen on active monitoring
  • MI process engaged
  • MI accepted
  • Internal Bridge call scheduled
  • NOC investigations ongoing with several examples of affected circuits provided from information gathering by TSC
  • Further impact to Consumer circuits discovered and acknowledged
  • NOC investigations determined an issue within the core network emanating from a specific location
  • NOC contacted hardware supplier and raised a Priority 1 case
  • All logs provided to hardware supplier for analysis
  • Internal Bridge call convened
  • Conference call between CityFibre NOC and hardware supplier convened
  • Following discussions between the CityFibre NOC and our hardware supplier, there have been developments on this incident with regard to restoration.
  • The origin of the issue has been found to be a line card within a core network device.
  • Soft clear of card performed without success
  • Full remote reboot of card performed which was successful for a period of approx. 30 mins before the issue manifested again
  • Further internal call held with CityFibre NOC and Hardware Supplier to agree next steps
  • Escalation made to the hardware supplier to confirm part availability and engineer ETA
  • Part sourcing resolved
  • Engineer details confirmed; the engineer will be collecting the replacement part at 07:00.
  • Access request to the DC confirmed
  • Issue with retrieving parts from location resolved
  • Engineer attended Slough DC
  • Engineer has completed card swap successfully
  • Testing and checks completed
  • BGP reenabled
  • Network stability confirmed
  • CityFibre NOC Engineers have advised they are seeing network instability issues and are currently investigating.
  • Initial assessment identified that stability was observed to deteriorate after traffic was reintroduced following the repairs. This recovery step has been reverted
  • Investigation continued with diagnostics being carried out on the Network Device. Network traffic remains rerouted and services have been observed as stable at this time.
  • Card causing alarms remains out of service
  • Resilient links continue to carry traffic, mitigating any remaining impact to services previously using that card and preventing any further service disruption from our planned restoration activity.
  • Engineers successfully swapped the card causing alarms at 19:25
  • Network stability has been observed since the card replacement

Current Action Plan:

  • Monitoring continues
  • Current Service impact – None, all resilient ports are back in service
  • Current Network impact – None, restored to previous state.
  • The two core internal links which are still costed out will be reintroduced this evening under controlled conditions.

Next Update 20:00