Everything fades out, eventually

The last year has been busy, to say the least. Planning, designing, and executing all parts of a big network migration - from old equipment to new, from old design to new, from old thinking to new… We’ll get back to that thinking part. It turned out to be hard to leave the old thinking behind…

After summer, the project went into the execution phase - time to deploy everything we had prepared! Runbooks were created, automation involved. At the same time we started to set up the logical network (VRFs, peering, prefix-sets, firewalls, external connections etc), and massive numbers (like, hundreds) of servers were prepared for migration by other teams. On the server side, the migration involved replicating to the new datacenter via a VMware-specific L2 tool called HCX. Initial tests were less than impressive - it literally took days to move one machine… However, the server teams eventually landed on a way of working that… worked. Meanwhile, we (the network design team) were struggling with the topology - as in many projects, you start off with a design that everyone has agreed upon, then you run into snags along the way which force you to change the design, and then some more… And then some more. This also generated more meetings, with more people, who couldn’t necessarily contribute to any decision on what to do, but still had a say, because… reasons.
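To give a flavour of the “automation involved” part: a lot of it boiled down to stamping out per-VRF configuration from templates. The sketch below is purely illustrative - the VRF names, ASNs, prefixes and template format are made up, not the actual tooling or vendor syntax we used - but it shows the general idea.

```python
# Purely illustrative sketch of per-VRF config templating.
# All names, ASNs, prefixes and the template format are made up;
# the real runbooks/tooling were more involved than this.
VRFS = [
    {"name": "CUST-A", "asn": 65010, "prefixes": ["10.10.0.0/16"]},
    {"name": "CUST-B", "asn": 65020, "prefixes": ["10.20.0.0/16", "10.21.0.0/16"]},
]

TEMPLATE = """\
vrf {name}
!
prefix-set PS-{name}
{prefix_lines}
end-set
!
router bgp 65000
 vrf {name}
  neighbor-group PEERS-{name}
   remote-as {asn}
!
"""


def render(vrf: dict) -> str:
    # prefix-set entries are comma-separated, with no comma after the last one
    prefix_lines = ",\n".join(f"  {p}" for p in vrf["prefixes"])
    return TEMPLATE.format(name=vrf["name"], asn=vrf["asn"], prefix_lines=prefix_lines)


if __name__ == "__main__":
    for vrf in VRFS:
        print(render(vrf))
```

In reality the data came from the runbooks and our source of truth rather than a hardcoded list, but the principle - data in, config out - was the same.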

A few months into the autumn, we realized that we wouldn’t be able to fully evacuate the old equipment, since it had connections that were still needed even after everything was moved. These connections had been flagged as not needed in the new solution, and had been removed from all designs… The issue was that certain traffic was routed via the company’s internal routers instead of via the customer’s, which is a very bad design for obvious reasons (security, segmentation, compliance etc). In the early days of the project’s planning phase, the plan was that every service using this connection would either be moved, be reached over the internet instead, or be decommissioned. As you might expect, this happened for 80% of the services involved, but we still had 20% left with no other path than via the old routers/firewalls. And the project couldn’t change what had been agreed upon and shaken hands on…

I will not go into the politics of this matter, since it’s all just “routing” really, nothing hard for a technician to solve, but in this case other forces were at play. Anyway, we were left hanging - the project would close at the end of February 2023, and all old equipment was supposed to be evacuated by then (preferably decommissioned as well), but we still had traffic that could only use the old path, and we weren’t allowed to set it up in the new environment. Catch /22!

In the end, the project’s closure was postponed a month, and it was finally closed at the end of March 2023. A bit of residue is left, like the routing issue, but progress is being made. For me as a consultant, time is more or less up. The project doesn’t really exist any more, and resources are constrained. The residue is for someone else to handle, though I will throw in a few hours to assist whenever needed. Off to new adventures!
