
Merula mail server upgrades

The email servers are being upgraded over the weekend to a pair of faster mirrored servers. During the migration, email will appear to have disappeared from inboxes. This is not a cause for concern and is merely a side-effect of the migration process.

No email has been lost – it is in the process of being synced, and it should fully re-appear over the course of the weekend.

New email will be delivered instantly – older emails will be restored as the sync process progresses. Any security certificate error messages can be safely ignored for the duration of this migration. These will disappear after both servers are fully synced and updated.

We will confirm when this process has completed.

UPDATE: Broadband packet loss & intermittent connectivity

UPDATE:

We have seen the services starting to recover and our traffic profile is virtually back to normal. Any subscribers yet to reconnect may need to reboot their router if the issue persists.

The fault is still open with our supplier until the overall service has been restored. Our apologies again to those affected.

+++++++++++++++++++

One of our back-haul providers is aware of an ongoing issue affecting a small section of our lines, causing packet loss, intermittent connectivity, or sometimes both. NOTE: This isn’t affecting all lines, but the following STD codes are those seeing issues through this supplier. We expect an update by 14:30. In the meantime, we apologise if your line is one of those affected.

01171 01173 01179 01200 01214 01282 01372 01483 01485 01512 01513 01514 01515 01517 01518 01519 01527 01553 01604 01628 01905 01932 02010 02011 02030 02031 02032 02033 02034 02035 02070 02071 02072 02073 02074 02075 02076 02077 02078 02079 02080 02081 02082 02083 02084 02085 02086 02087 02088 02089 02311 02380

RESOLVED: Power Issue – Harbour Exchange Square

We have become aware of another power issue in Harbour Exchange Square. This occurred this afternoon while no work was being carried out by us.

We have raised this to the Data Centre (Equinix) requesting an urgent update. We also have an engineer en route to the site to assist as needed.

Most broadband and leased lines should be working although there will be issues for any service ONLY connected at HEX.

More as soon as we have an update.

UPDATE 6:30pm: The core issue is now resolved and we are seeing all services in HEX back up and running. We are continuing to check for any remaining issues. There was an issue earlier affecting new FTTC/ADSL logins, which has been resolved. If you are still seeing an issue please call or email support via the normal routes.

We are continuing to work with Equinix to understand the root cause of the issue.

UPDATE 4 Network / Power Issue – Harbour Exchange Square

We have lost power to our rack in Harbour Exchange Square. Our UPS held power for a while, but the batteries are now exhausted, meaning that services provided from Harbour Exchange Square are currently affected – this primarily relates to some of our Leased Lines which are single-homed. Most other services have re-routed via alternative Data Centres.

The Data Centre technicians are working to restore power to the rack asap, and we then expect to see services here recover.

We will post further updates as we have them.

Note this does not affect services (including leased lines) from other data centres, although there may have been some network instability initially.

We are sorry for this issue.

UPDATE 14:15 We are starting to see power restored to our rack, though some services are still affected – many are now restored. We are working through the remaining issues and will update this further. However, in many cases you should see service restored now.

UPDATE 14:22 Equinix (our Data Centre supplier in HEX) have just emailed an Incident Update confirming a possible power issue at the facility. We are continuing to see services restore. There are a few remaining services down and we continue to work to resolve these asap. NOTE we have used the opportunity presented by the power loss to complete the UPS battery replacement, so there will be no further maintenance on the power within our rack, and in the unlikely event of another power failure we now have new batteries in the UPS.

UPDATE 15:30 We have restored most services now, although it seems the power failure caused a switch in the rack to fail. All critical services have been moved off the affected switch, and a replacement is being organised to swap in, hopefully later this afternoon/evening. There should now be no affected services. However, the network should be deemed at risk due to the reduced redundancy. We will update this once the switch replacement starts.

UPDATE 21:10 A replacement switch is now in place and configured in Harbour Exchange Square, and the remaining services (and resilience) are now restored. The power issue needs further investigation, and the data centre may need to change the breaker we are connected to. However, this will be separate planned works and will be announced later. It may be at short notice BUT will be out of core hours – and will not be today.

We believe service is now fully restored – IF anyone has any ongoing issues please raise them to support via the normal means.

UPDATE 2 – HEX UPS Maintenance [23/09/2017]

Our in-rack UPS in our Harbour Exchange PoP is showing a possible battery failure. We have replacement batteries and will be fitting these tomorrow (23/09). They can be swapped without removing power; however, as with any such activity, there is a small risk of disruption to services connected to this rack.

UPDATE: It appears there may be a more serious issue with the UPS than failed batteries. A short time ago we lost all power in our HEX PoP, which caused a period of network instability.

We believe that all network access is now up, with the exception of services directly connected to HEX. We are working on HEX to bring service back asap.

UPDATE 13:50 The Data Centre have located a failed trip, which has led to no power being available to our rack. The Data Centre technicians are working to restore power to us as a priority. We expect a further update very shortly.

FIXED: some circuits are affected & currently down 17th Sep 9am

10:23am UPDATE: the supplier reports that the problem has been resolved and we believe that all circuits are now back online. Customers on affected circuits may need to reboot their router to bring their session back on stream.

The following exchanges have been affected by this issue since 6:21am this morning.

BT and the supplier’s engineers are en route to work on-site. No time to fix yet, but we will update here as we hear more.


Exchanges affected include Barrow, Buntingford, Bottisham, Burwell, Cambridge, Crafts Hill, Cheveley, Clare, Comberton, Costessey, Cherry Hinton, Cottenham, Dereham, Downham Market, Derdingham, Ely, Fakenham, Fordham Cambs, Feltwell, Fulbourn, Great Chesterford, Girton, Haddenham, Histon, Holt, Halstead, Harston, Kentford, Kings Lynn, Lakenheath, Littleport, Madingley, Melbourne, Mattishall, Norwich North, Rorston, Science Park, Swaffham, Steeple Mordon, Soham, Sawston, Sutton, South Wootton, Swavesey, Teversham, Thaxted, Cambridge Trunk, Trumpington, Terrington St Clements, Tittleshall, Willingham, Waterbeach, Watlington, Watton, Buckden, Crowland, Doddington, Eye, Friday Bridge, Glinton, Huntingdon, Long Sutton, Moulton Chapel, Newton Wisbech, Parson Drove, Papworth St Agnes, Ramsey Hunts, Sawtry, Somersham, St Ives, St Neots, Sutton Bridge, Upwell, Warboys, Werrington, Whittlesey, Woolley, Westwood, Yaxley, Ashwell, Gamlingay and Potton.

We are aware that some other exchanges may be impacted.

Update – we have just started to see some circuits recover, but have no update from the carrier as yet.

Supplier upgrade work affecting a few DSL circuits 29/8/17

One of our carriers is performing some planned works on their interconnect in Telehouse North. We have resilient interconnects in other locations, however during the work you may see your connections drop and re-connect over our alternative interconnect.

The work is scheduled for between 29/8/2017 22:00 and 30/8/2017 06:00.

Most circuits will automatically re-connect, but a few may require a re-boot or a 30-minute power down. If there are any issues please contact support via the normal routes.

Routing instability

We have seen some routing issues on our Telehouse North core today affecting circuits terminated in this location or traffic passing through thn-gw1 or thn-gw2.

The primary issue has now been resolved and all traffic should be routing correctly. We anticipate some follow-up work and we will communicate this as soon as it is planned.

We apologise for the issues that some customers may have seen.

[update] As a side effect of the router reload, it seems that one of the core routers loaded an OLD version of the config, which caused an issue for a number of directly connected customers. The config has now been restored to the latest version and we believe routing is now stable and correct.


post.merula.net – Slow [23/06/2017]

We are aware that our post.merula.net server was slow / unresponsive between approx 5pm and 8:30pm this evening.

This was due to a customer sending a significant volume of SPAM through the server. We have blocked the affected customer and cleared the Mail Queue on this mail server. Service is now restored.

NOTE that there is a small chance some outgoing emails were lost while the queue was being cleared – although we believe all legitimate emails were sent OK.

IF you sent an email via post.merula.net during this time and have not received a reply, you may wish to re-send it to ensure it has arrived.

We apologise for any issues caused here.

UPDATE: leased lines outage earlier today

RFO 09:40am:

To fix transient IPv6 and other intermittent routing issues we had seen recently, we were obliged to upgrade the software on one of our core routers. This router holds live and backup routes that allow a smooth failover should a single router in London fail. However, with the latest software (in an undocumented change from the software supplier), the routers now set themselves as live on both the primary and the backup. This resulted in a routing loop for some IP addresses with static routes originating from the one affected router, which therefore did not fail over correctly as was previously the case.

Again, please accept our apologies for this short outage. It shouldn’t have happened.

We are aware of the cause, and the problem has now been fixed on the one affected router. We have also checked all the other routers in the network and are confident all are now running properly.

UPDATE 09:27am:

We are aware of the root cause — located at a core switch in one of our London locations — and are working on bringing this back into service. No ETA yet, but we expect this to be resolved shortly. Apologies for the downtime some of you are experiencing.

09:09am We are aware of reports of leased lines down and are investigating. More updates here as we know the cause & ETA to fix.