Incident Resolved

Update Number: 20 (Entanet / CityFibre)

Completed Actions:

  • Reports of circuit impact into the CityFibre TSC
  • CityFibre TSC engaged CityFibre NOC for initial investigations
  • CityFibre NOC confirmed an issue seen on active monitoring
  • MI process engaged
  • MI accepted
  • Internal Bridge call scheduled
  • NOC investigations ongoing, with several examples of affected circuits provided via TSC information gathering
  • Further impact to Consumer circuits discovered and acknowledged
  • NOC investigations determined an issue within the core network emanating from a specific location
  • NOC contacted hardware supplier and raised a Priority 1 case
  • All logs provided to hardware supplier for analysis
  • Internal Bridge call convened
  • Conference call between CityFibre NOC and hardware supplier convened
  • Following discussions between CityFibre NOC and our hardware supplier, there have been developments on this incident regarding restoration.
  • The origin of the issue has been traced to a line card situated within a core network device.
  • Soft clear of card performed without success
  • Full remote reboot of card performed, which was successful for approx. 30 minutes before the issue manifested again
  • Further internal call held with CityFibre NOC and Hardware Supplier to agree next steps
  • Escalation made to Hardware Supplier to confirm part availability and Engineer ETA
  • Part sourcing resolved
  • Engineer details confirmed; the part will be collected at 07:00.
  • Access request to the DC confirmed
  • Issue with retrieving parts from location resolved
  • Engineer attended Slough DC
  • Engineer has completed card swap successfully
  • Testing and checks completed
  • BGP re-enabled
  • Network stability confirmed
  • CityFibre NOC Engineers have advised they are seeing network instability issues and are currently investigating.
  • Initial assessment indicates that stability deteriorated after traffic was reintroduced following repairs. This recovery step has been reverted
  • Investigation continued, with diagnostics carried out on the network device. Network traffic remains rerouted and services have been observed as stable at this time.
  • Card causing alarms remains out of service
  • Resilient links continue to carry traffic, mitigating any remaining impact to services previously utilising that card and preventing any further service disruption from our planned restoration activity.
  • Engineers successfully swapped the card causing alarms at 19:25
  • Network stability has been observed since the card replacement

Resolution Notes:

  • Network cards at two network locations replaced.
  • The remaining two costed-out links are internal to CityFibre; there is no customer impact, and they will be forward managed separately (see the note on link cost-out below).
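
For context, "costing out" a link generally means raising its IGP metric so that shortest-path routing drains traffic onto resilient paths without physically disabling the port. The short Python sketch below illustrates the principle on a made-up three-node topology; the node names and metric values are illustrative assumptions only, not CityFibre's actual configuration.

    import heapq

    def shortest_path(graph, src, dst):
        # Plain Dijkstra over a {node: {neighbour: metric}} adjacency dict.
        dist, prev, heap, seen = {src: 0}, {}, [(0, src)], set()
        while heap:
            d, node = heapq.heappop(heap)
            if node in seen:
                continue
            seen.add(node)
            for nbr, metric in graph[node].items():
                nd = d + metric
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr], prev[nbr] = nd, node
                    heapq.heappush(heap, (nd, nbr))
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])
        return path[::-1]

    # Hypothetical core: the A-B link is the one served by the faulty card.
    graph = {
        "A": {"B": 10, "C": 20},
        "B": {"A": 10, "C": 20},
        "C": {"A": 20, "B": 20},
    }
    print(shortest_path(graph, "A", "B"))  # ['A', 'B'] - traffic takes the direct link

    # "Cost out" A-B: raise its metric far above any alternative path.
    graph["A"]["B"] = graph["B"]["A"] = 100_000
    print(shortest_path(graph, "A", "B"))  # ['A', 'C', 'B'] - traffic drains to the resilient path

Reintroducing a costed-out link under controlled conditions is then the reverse step: the metric is restored to its normal value while engineers watch for any recurrence of the instability.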

Resolved:

14:31 (customer impact resolved following the card swap-out at 19:25 on Saturday 23rd July)

Status Update

Update Number: 19 (Entanet / CityFibre)

Completed Actions:

  • Reports of circuit impact into the CityFibre TSC
  • CityFibre TSC engaged CityFibre NOC for initial investigations
  • CityFibre NOC confirmed an issue seen on active monitoring
  • MI process engaged
  • MI accepted
  • Internal Bridge call scheduled
  • NOC investigations ongoing, with several examples of affected circuits provided via TSC information gathering
  • Further impact to Consumer circuits discovered and acknowledged
  • NOC investigations determined an issue within the core network emanating from a specific location
  • NOC contacted hardware supplier and raised a Priority 1 case
  • All logs provided to hardware supplier for analysis
  • Internal Bridge call convened
  • Conference call between CityFibre NOC and hardware supplier convened
  • Following discussions between CityFibre NOC and our hardware supplier, there have been developments on this incident regarding restoration.
  • The origin of the issue has been traced to a line card situated within a core network device.
  • Soft clear of card performed without success
  • Full remote reboot of card performed, which was successful for approx. 30 minutes before the issue manifested again
  • Further internal call held with CityFibre NOC and Hardware Supplier to agree next steps
  • Escalation made to Hardware Supplier to confirm part availability and Engineer ETA
  • Part sourcing resolved
  • Engineer details confirmed; the part will be collected at 07:00.
  • Access request to the DC confirmed
  • Issue with retrieving parts from location resolved
  • Engineer attended Slough DC
  • Engineer has completed card swap successfully
  • Testing and checks completed
  • BGP re-enabled
  • Network stability confirmed
  • CityFibre NOC Engineers have advised they are seeing network instability issues and are currently investigating.
  • Initial assessment indicates that stability deteriorated after traffic was reintroduced following repairs. This recovery step has been reverted
  • Investigation continued, with diagnostics carried out on the network device. Network traffic remains rerouted and services have been observed as stable at this time.
  • Card causing alarms remains out of service
  • Resilient links continue to carry traffic, mitigating any remaining impact to services previously utilising that card and preventing any further service disruption from our planned restoration activity.
  • Engineers successfully swapped the card causing alarms at 19:25
  • Network stability has been observed since the card replacement

Current Action Plan:

  • Monitoring continues
  • Current service impact – None, all resilient ports are back in service
  • Current network impact – None, restored to previous state.
  • The two core internal links that are still costed out will be reintroduced this evening under controlled conditions.

Next Update: 20:00

Status Update

Update Number: 18 (Entanet / CityFibre)

Completed Actions:

  • Reports of circuit impact into the CityFibre TSC
  • CityFibre TSC engaged CityFibre NOC for initial investigations
  • CityFibre NOC confirmed an issue seen on active monitoring
  • MI process engaged
  • MI accepted
  • Internal Bridge call scheduled
  • NOC investigations ongoing, with several examples of affected circuits provided via TSC information gathering
  • Further impact to Consumer circuits discovered and acknowledged
  • NOC investigations determined an issue within the core network emanating from a specific location
  • NOC contacted hardware supplier and raised a Priority 1 case
  • All logs provided to hardware supplier for analysis
  • Internal Bridge call convened
  • Conference call between CityFibre NOC and hardware supplier convened
  • Following discussions between CityFibre NOC and our hardware supplier, there have been developments on this incident regarding restoration.
  • The origin of the issue has been traced to a line card situated within a core network device.
  • Soft clear of card performed without success
  • Full remote reboot of card performed, which was successful for approx. 30 minutes before the issue manifested again
  • Further internal call held with CityFibre NOC and Hardware Supplier to agree next steps
  • Escalation made to Hardware Supplier to confirm part availability and Engineer ETA
  • Part sourcing resolved
  • Engineer details confirmed; the part will be collected at 07:00.
  • Access request to the DC confirmed
  • Issue with retrieving parts from location resolved
  • Engineer attended Slough DC
  • Engineer has completed card swap successfully
  • Testing and checks completed
  • BGP re-enabled
  • Network stability confirmed
  • CityFibre NOC Engineers have advised they are seeing network instability issues and are currently investigating.
  • Initial assessment indicates that stability deteriorated after traffic was reintroduced following repairs. This recovery step has been reverted
  • Investigation continued, with diagnostics carried out on the network device. Network traffic remains rerouted and services have been observed as stable at this time.
  • Card causing alarms remains out of service
  • Resilient links continue to carry traffic, mitigating any remaining impact to services previously utilising that card and preventing any further service disruption from our planned restoration activity.
  • Engineers successfully swapped the card causing alarms at 19:25

Current Action Plan:

  • Current service impact – None, all resilient ports are back in service
  • Network impact – None; restored to previous state, with the exception of two core links which remain costed out
  • Monitoring will now commence for 24 hours; after this period, the two costed-out links will be brought back into service sequentially under controlled engineering conditions
  • A further update will be posted prior to commencing work to bring the two links back into service

Next Update:

12:00 Sunday 24th July