Incident Update

Status Update

Please be aware that the Mimecast incident is still ongoing. Mimecast support have identified the problem and are implementing mitigations. They have advised that we should expect to see mail flow resume processing as these are implemented. A further update will follow by 13:30 GMT.

Status Update

We’ve had further reports that delays are improving. Our investigations point towards this being a Microsoft issue; expect further delays today as emails catch up after the disruption. We will continue to monitor the situation going forward.

Status Update

Latest update from Mimecast:
 
[Monitoring]  We have resolved the mail flow delivery delays on the UK grid, but continue to monitor while processing backlogs return to normal. All services should now be functioning normally. We appreciate your patience as we worked to resolve this issue.
 
Accesspoint will continue to monitor the situation as it progresses. 

Status Update

Update Number: 19 (Entanet / Cityfibre)

Completed Actions:

  • Reports of circuit impact into the CityFibre TSC
  • CityFibre TSC engaged CityFibre NOC for initial investigations
  • CityFibre NOC confirmed an issue seen on active monitoring
  • MI process engaged
  • MI accepted
  • Internal Bridge call scheduled
  • NOC investigations ongoing with several examples of affected circuits provided from information gathering by TSC
  • Further impact to Consumer circuits discovered and acknowledged
  • NOC investigations determined an issue within the core network emanating from a specific location
  • NOC contacted hardware supplier and raised a Priority 1 case
  • All logs provided to hardware supplier for analysis
  • Internal Bridge call convened
  • Conference call between CityFibre NOC and hardware supplier convened
  • Following discussions between CityFibre NOC and our hardware supplier, there have been developments on this incident regarding restoration.
  • It has been found that the origin point of the issue is on a line card situated within a core network device.
  • Soft clear of card performed without success
  • Full remote reboot of card performed which was successful for a period of approx. 30 mins before the issue manifested again
  • Further internal call held with CityFibre NOC and Hardware Supplier to agree next steps
  • Escalation made to Hardware Supplier to confirm part availability and Engineer ETA
  • Part sourcing resolved
  • Engineer details confirmed; engineer will be collecting the parts at 07:00.
  • Access request to DC confirmed
  • Issue with retrieving parts from location resolved
  • Engineer attended Slough DC
  • Engineer has completed card swap successfully
  • Testing and checks completed
  • BGP re-enabled
  • Network stability confirmed
  • CityFibre NOC Engineers have advised they are seeing network instability issues and are currently investigating.
  • Initial assessment appears to have identified that stability deteriorated after traffic was reintroduced following the repairs. This recovery step has been reverted.
  • Investigation continued with diagnostics being carried out on the Network Device. Network traffic remains rerouted and services have been observed as stable at this time.
  • Card causing alarms remains out of service
  • Resilient links continue to carry traffic, mitigating any remaining impact to services previously utilising that card and preventing any further service disruption from our planned restoration activity.
  • Engineers swapped the Card causing alarms successfully at 19:25
  • Network Stability has been seen since Card replacement

Current Action Plan:

  • Monitoring continues
  • Current Service impact – None, all resilient ports are back in service
  • Current Network impact – None, restored to previous state.
  • The two core internal links which are still costed out will be reintroduced this evening under controlled conditions (see the illustrative sketch below).
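
For context on the costed-out links referred to above: costing a link out generally means raising its routing metric so that path selection steers traffic onto the resilient links while the physical link stays up, and reintroducing it simply restores the original metric. The sketch below is a minimal, hypothetical illustration of that behaviour using an invented four-node topology and made-up costs; it is not CityFibre's actual configuration or tooling.

```python
import heapq

def shortest_path(links, src, dst):
    """Dijkstra over an undirected graph given as {(a, b): metric}."""
    adj = {}
    for (a, b), metric in links.items():
        adj.setdefault(a, []).append((b, metric))
        adj.setdefault(b, []).append((a, metric))
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, metric in adj[node]:
            nd = d + metric
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Invented topology: the A-B link represents a core link on the affected card.
links = {("A", "B"): 10, ("B", "D"): 10, ("A", "C"): 20, ("C", "D"): 20}

print(shortest_path(links, "A", "D"))  # ['A', 'B', 'D'] - normal path over the affected card

# "Costing out" A-B: raise its metric so path selection prefers the resilient route,
# without the link having to be physically taken down.
links[("A", "B")] = 10_000
print(shortest_path(links, "A", "D"))  # ['A', 'C', 'D'] - traffic moves to the resilient links

# Controlled reintroduction: restore the original metric and traffic returns.
links[("A", "B")] = 10
print(shortest_path(links, "A", "D"))  # ['A', 'B', 'D']
```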

Next Update 20:00

Status Update

Update Number: 18 (Entanet / Cityfibre)

Completed Actions:

  • Reports of circuit impact into the CityFibre TSC
  • CityFibre TSC engaged CityFibre NOC for initial investigations
  • CityFibre NOC confirmed an issue seen on active monitoring
  • MI process engaged
  • MI accepted
  • Internal Bridge call scheduled
  • NOC investigations ongoing with several examples of affected circuits provided from information gathering by TSC
  • Further impact to Consumer circuits discovered and acknowledged
  • NOC investigations determined an issue within the core network emanating from a specific location
  • NOC contacted hardware supplier and raised a Priority 1 case
  • All logs provided to hardware supplier for analysis
  • Internal Bridge call convened
  • Conference call between CityFibre NOC and hardware supplier convened
  • Following discussions between CityFibre NOC and our hardware supplier, there have been developments on this incident regarding restoration.
  • It has been found that the origin point of the issue is on a line card situated within a core network device.
  • Soft clear of card performed without success
  • Full remote reboot of card performed which was successful for a period of approx. 30 mins before the issue manifested again
  • Further internal call held with CityFibre NOC and Hardware Supplier to agree next steps
  • Escalation made to Hardware Supplier to confirm part availability and Engineer ETA
  • Part sourcing resolved
  • Engineer details confirmed; engineer will be collecting the parts at 07:00.
  • Access request to DC confirmed
  • Issue with retrieving parts from location resolved
  • Engineer attended Slough DC
  • Engineer has completed card swap successfully
  • Testing and checks completed
  • BGP re-enabled
  • Network stability confirmed
  • CityFibre NOC Engineers have advised they are seeing network instability issues and are currently investigating.
  • Initial assessment appears to have identified that stability deteriorated after traffic was reintroduced following the repairs. This recovery step has been reverted.
  • Investigation continued with diagnostics being carried out on the Network Device. Network traffic remains rerouted and services have been observed as stable at this time.
  • Card causing alarms remains out of service
  • Resilient links continue to carry traffic, mitigating any remaining impact to services previously utilising that card and preventing any further service disruption from our planned restoration activity.
  • Engineers swapped the Card causing alarms successfully at 19:25

Current Action Plan:

  • Current service impact – None, all resilient ports are back in service
  • Network impact – None, restored to previous state, with the exception of the 2 core links which are still costed out
  • Monitoring will now commence for 24 hours; after this period the 2 costed-out links will be brought back into service sequentially under controlled engineer conditions
  • Further update will be posted prior to commencement of work to bring the 2 links back into service

Next Update:

12:00 Sunday 24th July

Status Update

Update Number: 17 (Entanet / Cityfibre)

Completed Actions:

– Reports of circuit impact into the CityFibre TSC

– CityFibre TSC engaged CityFibre NOC for initial investigations

– CityFibre NOC confirmed an issue seen on active monitoring

– MI process engaged

– MI accepted

– Internal Bridge call scheduled

– NOC investigations ongoing with several examples of affected circuits provided from information gathering by TSC

– Further impact to Consumer circuits discovered and acknowledged

– NOC investigations determined an issue within the core network emanating from a specific location

– NOC contacted hardware supplier and raised a Priority 1 case

– All logs provided to hardware supplier for analysis

– Internal Bridge call convened

– Conference call between CityFibre NOC and hardware supplier convened

– Following discussions between CityFibre NOC and our hardware supplier, there have been developments on this incident regarding restoration.

– It has been found that the origin point of the issue is on a line card situated within a core network device.

– Soft clear of card performed without success

– Full remote reboot of card performed which was successful for a period of approx. 30 mins before the issue manifested again

– Further internal call held with CityFibre NOC and Hardware Supplier to agree next steps

– Escalation made to Hardware Supplier to confirm part availability and Engineer ETA

– Part sourcing resolved

– Engineer details confirmed; engineer will be collecting the parts at 07:00.

– Access request to DC confirmed

– Issue with retrieving parts from location resolved

– Engineer attended Slough DC

– Engineer has completed card swap successfully

– Testing and checks completed

– BGP re-enabled

– Network stability confirmed

– CityFibre NOC Engineers have advised they are seeing network instability issues and are currently investigating.

– Initial assessment appears to have identified that stability deteriorated after traffic was reintroduced following the repairs. This recovery step has been reverted.

– Investigation continued with diagnostics being carried out on the Network Device. Network traffic remains rerouted and services have been observed as stable at this time.

– Card causing alarms remains out of service

– Resilient links continue to carry traffic, mitigating any remaining impact to services previously utilising that card and preventing any further service disruption from our planned restoration activity.

Current Action Plan:

Engineers have arrived on site and will be completing the card swap within the next 30 minutes

There remains no customer service impact that we are aware of, with services either taking alternative routes around the network or utilising their designed service resiliency at this location.

Next Update:

20:30

Status Update

Update Number: 16 (Entanet / Cityfibre)

Completed Actions:

– Reports of circuit impact into the CityFibre TSC

– CityFibre TSC engaged CityFibre NOC for initial investigations

– CityFibre NOC confirmed an issue seen on active monitoring

– MI process engaged

– MI accepted

– Internal Bridge call scheduled

– NOC investigations ongoing with several examples of affected circuits provided from information gathering by TSC

– Further impact to Consumer circuits discovered and acknowledged

– NOC investigations determined an issue within the core network emanating from a specific location

– NOC contacted hardware supplier and raised a Priority 1 case

– All logs provided to hardware supplier for analysis

– Internal Bridge call convened

– Conference call between CityFibre NOC and hardware supplier convened

– Following discussions between CityFibre NOC and our hardware supplier, there have been developments on this incident regarding restoration.

– It has been found that the origin point of the issue is on a line card situated within a core network device.

– Soft clear of card performed without success

– Full remote reboot of card performed which was successful for a period of approx. 30 mins before the issue manifested again

– Further internal call held with CityFibre NOC and Hardware Supplier to agree next steps

– Escalation made to Hardware Supplier to confirm part availability and Engineer ETA

– Part sourcing resolved

– Engineer details confirmed; engineer will be collecting the parts at 07:00.

– Access request to DC confirmed

– Issue with retrieving parts from location resolved

– Engineer attended Slough DC

– Engineer has completed card swap successfully

– Testing and checks completed

– BGP re-enabled

– Network stability confirmed

– CityFibre NOC Engineers have advised they are seeing network instability issues and are currently investigating.

– Initial assessment appears to have identified that stability deteriorated after traffic was reintroduced following the repairs. This recovery step has been reverted.

– Investigation continued with diagnostics being carried out on the Network Device. Network traffic remains rerouted and services have been observed as stable at this time.

Current Action Plan:

With errors continuing after the suspected faulty line card was replaced, focus has switched to another core device which connects directly to it. Joint investigations between CityFibre and the vendor of the other core device commenced a short while ago, and further alarms were quickly identified here.

The card causing the alarms was taken out of service so that redundant links on another card could be forced into use. This mitigates any remaining impact to services previously utilising that card and prevents any further service disruption from our planned restoration activity.
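
As a side note on the step described above: when traffic is forced onto redundant links ahead of planned hardware work, engineers typically confirm the alarming card is effectively idle before it is pulled, often by watching its interface counters. The sketch below is a hypothetical illustration of that kind of check; the counter source, thresholds and timings are invented and do not represent CityFibre's tooling.

```python
import itertools
import time

def wait_for_drain(read_octets, threshold_bps=1_000, timeout_s=600, interval_s=30):
    """Poll a cumulative octet counter until the measured rate stays below threshold_bps.

    read_octets is whatever callable the caller supplies (in practice this would be an
    SNMP/telemetry read of the card's ports). Returns True once the card looks idle,
    False if it is still carrying traffic when the timeout expires.
    """
    deadline = time.monotonic() + timeout_s
    last = read_octets()
    while time.monotonic() < deadline:
        time.sleep(interval_s)
        now = read_octets()
        rate_bps = (now - last) * 8 / interval_s
        if rate_bps < threshold_bps:
            return True   # traffic has moved to the redundant links; safe to swap the card
        last = now
    return False          # card is still carrying traffic; do not pull it yet

# Purely simulated counter: traffic tails off once the redundant links take over.
samples = itertools.chain([0, 50_000_000, 50_000_200, 50_000_210], itertools.repeat(50_000_215))
if wait_for_drain(lambda: next(samples), interval_s=0.01):
    print("Card idle - hand over to the on-site engineer for the replacement")
```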

The next stage is for engineers to attend the relevant location with replacement parts. The ETA for this activity is 18:30.

There is currently no customer service impact that we are aware of, with services either taking alternative routes around the network or utilising their designed service resiliency at this location.

Next Update:

18:30

Status Update

Update Number: 15 (Entanet / Cityfibre)

Completed Actions:

– Reports of circuit impact into the CityFibre TSC

– CityFibre TSC engaged CityFibre NOC for initial investigations

– CityFibre NOC confirmed an issue seen on active monitoring

– MI process engaged

– MI accepted

– Internal Bridge call scheduled

– NOC investigations ongoing with several examples of affected circuits provided from information gathering by TSC

– Further impact to Consumer circuits discovered and acknowledged

– NOC investigations determined an issue within the core network emanating from a specific location

– NOC contacted hardware supplier and raised a Priority 1 case

– All logs provided to hardware supplier for analysis

– Internal Bridge call convened

– Conference call between CityFibre NOC and hardware supplier convened

– Following discussions between CityFibre NOC and our hardware supplier, there have been developments on this incident regarding restoration.

– It has been found that the origin point of the issue is on a line card situated within a core network device.

– Soft clear of card performed without success

– Full remote reboot of card performed which was successful for a period of approx. 30 mins before the issue manifested again

– Further internal call held with CityFibre NOC and Hardware Supplier to agree next steps

– Escalation made to Hardware Supplier to confirm part availability and Engineer ETA

– Part sourcing resolved

– Engineer details confirmed; engineer will be collecting the parts at 07:00.

– Access request to DC confirmed

– Issue with retrieving parts from location resolved

– Engineer attended Slough DC

– Engineer has completed card swap successfully

– Testing and checks completed

– BGP re-enabled

– Network stability confirmed

– CityFibre NOC Engineers have advised they are seeing network instability issues and are currently investigating.

– Initial assessment appears to have identified that stability deteriorated after traffic was reintroduced following the repairs. This recovery step has been reverted.

Current Action Plan:

Investigation continues with diagnostics being carried out on the Network Device. Network traffic remains rerouted and services have been observed as stable at this time.

Next Update:

14:00

Status Update

Update Number: 14 (Entanet / Cityfibre)

Completed Actions:

– Reports of circuit impact into the CityFibre TSC

– CityFibre TSC engaged CityFibre NOC for initial investigations

– CityFibre NOC confirmed an issue seen on active monitoring

– MI process engaged

– MI accepted

– Internal Bridge call scheduled

– NOC investigations ongoing with several examples of affected circuits provided from information gathering by TSC

– Further impact to Consumer circuits discovered and acknowledged

– NOC investigations determined an issue within the core network emanating from a specific location

– NOC contacted hardware supplier and raised a Priority 1 case

– All logs provided to hardware supplier for analysis

– Internal Bridge call convened

– Conference call between CityFibre NOC and hardware supplier convened

– Following discussions between CityFibre NOC and our hardware supplier, there have been developments on this incident regarding restoration.

– It has been found that the origin point of the issue is on a line card situated within a core network device.

– Soft clear of card performed without success

– Full remote reboot of card performed which was successful for a period of approx. 30 mins before the issue manifested again

– Further internal call held with CityFibre NOC and Hardware Supplier to agree next steps

– Escalation made to Hardware Supplier to confirm part availability and Engineer ETA

– Part sourcing resolved

– Engineer details confirmed; engineer will be collecting the parts at 07:00.

– Access request to DC confirmed

– Issue with retrieving parts from location resolved

– Engineer attended Slough DC

– Engineer has completed card swap successfully

– Testing and checks completed

– BGP re-enabled

– Network stability confirmed

Current Action Plan:

CityFibre NOC Engineers have advised they are seeing network instability issues and are currently investigating.

Initial assessment appears to have identified that stability deteriorated after traffic was reintroduced following the repairs.

This recovery step has been reverted and investigation continues with CityFibre Engineers and Network Equipment Supplier TAC.

Next Update:

12:00

Status Update

Update Number: 13 (Entanet / Cityfibre)

Completed Actions:

– Reports of circuit impact into the CityFibre TSC

– CityFibre TSC engaged CityFibre NOC for initial investigations

– CityFibre NOC confirmed an issue seen on active monitoring

– MI process engaged

– MI accepted

– Internal Bridge call scheduled

– NOC investigations ongoing with several examples of affected circuits provided from information gathering by TSC

– Further impact to Consumer circuits discovered and acknowledged

– NOC investigations determined an issue within the core network emanating from a specific location

– NOC contacted hardware supplier and raised a Priority 1 case

– All logs provided to hardware supplier for analysis

– Internal Bridge call convened

– Conference call between CityFibre NOC and hardware supplier convened

– Following discussions between CityFibre NOC and our hardware supplier, there have been developments on this incident regarding restoration.

– It has been found that the origin point of the issue is on a line card situated within a core network device.

– Soft clear of card performed without success

– Full remote reboot of card performed which was successful for a period of approx. 30 mins before the issue manifested again

– Further internal call held with CityFibre NOC and Hardware Supplier to agree next steps

– Escalation made to Hardware Supplier to confirm part availability and Engineer ETA

– Part sourcing resolved

– Engineer details confirmed; engineer will be collecting the parts at 07:00.

– Access request to DC confirmed

– Issue with retrieving parts from location resolved

– Engineer attended Slough DC

– Engineer has completed card swap successfully

– Testing and checks completed

– BGP re-enabled

– Network stability confirmed

Current Action Plan:

CityFibre NOC Engineers have advised they are seeing network instability issues and are currently investigating.

Next Update:

12:00