
T-SQL Tuesday #171 – The Last Ticket

Tickets. Each one helps make the solutions we support more feature complete, bigger, faster, and fixed-er. This month’s invitation from Brent asks us to describe one of the last tickets we closed, to give an insight into our day-to-day.

As an introduction: this is a response from a Development DBA who is relatively new to a well-established team. The organisation is a large online business, and the team is sized accordingly. Based on Brent’s matrix, we tilt more toward the Developer end of the Development DBA role. We primarily build on SQL Server, which includes queries, tables, and tuning. Less frequently this covers indexing, monitoring, and dabbling with application code, although only in more specific cases.

Here’s the situation: a ticket was raised because figures in different areas of the site didn’t align. The ticket was to understand what was returning different data, why that was the case, and how we could resolve it.

Analysis

We have the ability to trace proc calls from the UI, so we’re able to replay those against the databases. It’s incredibly helpful. Using that, I was able to compare results between different areas of the site to identify the specific procedures returning incorrect figures. That’s the straightforward part: identifying what was returning incorrect results. The more complex part: why?
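For readers curious what that kind of tracing can look like: one common way to capture proc calls coming in from a UI is an Extended Events session on `rpc_completed`. This is a hedged sketch only — the session name, database name, and file target are invented for illustration, and our actual tracing mechanism may well differ.

```sql
-- Hypothetical sketch: capture incoming RPC (proc) calls so they can be
-- replayed later. All names here are invented for illustration.
CREATE EVENT SESSION [TraceUiProcCalls] ON SERVER
ADD EVENT sqlserver.rpc_completed
(
    ACTION (sqlserver.database_name, sqlserver.client_app_name)
    WHERE sqlserver.database_name = N'FiguresDb'   -- hypothetical database
)
ADD TARGET package0.event_file (SET filename = N'TraceUiProcCalls');

ALTER EVENT SESSION [TraceUiProcCalls] ON SERVER STATE = START;

-- The captured statements can then be replayed against each environment, e.g.
-- EXEC dbo.GetFigures @SiteArea = 3;
-- and the result sets compared between areas of the site.
```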

Reviewing the procs (there were multiple), two different patterns emerged. Both were the result of older sources of data being used which don’t support some newer features (to oversimplify).

That explains why we have a difference. Ticket updated with triage. Now, how do we go about resolving them?

Changes

We had two groups of procs with issues. All existing versions return what we’ll refer to as the v1 format. One group should be driven by v2 data, and the other should be using v3 data. The output format needed to remain the same though; that’s a constraint due to the downstream impacts.

The changes to switch the v2 data into v1 format were nice and straightforward. It’s mainly swapping out a table in the query and then applying some newer filters. On top of that, it’s the usual process of versioning and commenting to help our future selves. Those changes were the easier part.
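As a rough illustration of that kind of change, here’s a minimal T-SQL sketch. All object and column names are hypothetical — the point is the pattern: the v1 output shape stays identical, the source table swaps to the newer one, a newer filter is applied, and a versioning comment helps our future selves.

```sql
-- Hypothetical sketch only: table, column, and proc names are invented.
-- The v1 proc originally read from the legacy source:
--   SELECT FigureId, FigureValue FROM dbo.Figures_Legacy WHERE SiteArea = @SiteArea;
ALTER PROCEDURE dbo.GetFigures
    @SiteArea int
AS
BEGIN
    SET NOCOUNT ON;

    -- v1.1: switched source from Figures_Legacy to Figures_v2;
    --       added the IsPublished filter introduced with the v2 data.
    SELECT  f.FigureId,
            f.FigureValue                 -- output shape unchanged from v1
    FROM    dbo.Figures_v2 AS f
    WHERE   f.SiteArea    = @SiteArea
        AND f.IsPublished = 1;            -- newer filter, absent from legacy
END;
```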

We also had a few instances of pushing v3 data out in v1 format. The v3 format is quite different to v1 in that it returns more data, which can be filtered by the client. This helps support better caching through the environments. The downstream APIs aren’t expecting the v3 format, however, so it needed to be reverse-engineered.

I’ll note here that it’s helpful at times like these to be surrounded by folks experienced in these established solutions. This goes for both the production of the data as well as the usage of it. Before reverse engineering something like this it’s good to know if a) that’s the wrong way to tackle it, or b) if someone else has solved that problem before.

In this case both answers were No, so time to get the hands dirty.

The challenge for the v3 data was to parse and apply the extra filters manually within the procs. This needed a heavy dose of string manipulation (which everyone enjoys). As it wasn’t something we’d needed to shoe-horn in previously, there was some experimentation with different approaches. Given the scope of these changes, extra time was taken to do a broader review of the performance impacts with sample data sets.
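To give a flavour of the string manipulation involved, here’s a hedged sketch using `STRING_SPLIT` (SQL Server 2016+). The table, columns, and delimited-list format are assumptions rather than the actual v3 schema; the pattern is applying, inside the proc, a filter the client would normally apply to v3 data, while emitting only the v1 columns.

```sql
-- Hypothetical sketch: v3 rows carry extra attributes (here, a delimited
-- channel list such as N'web;mobile;api') that the client would normally
-- filter on. To keep the v1 output shape, the proc applies that filter
-- itself via string manipulation.
DECLARE @Channels TABLE (Channel nvarchar(50));

INSERT INTO @Channels (Channel)
SELECT  LTRIM(RTRIM(s.value))
FROM    STRING_SPLIT(N'web;mobile', N';') AS s;   -- requested channels

SELECT  f.FigureId,
        f.FigureValue                             -- v1 output columns only
FROM    dbo.Figures_v3 AS f
WHERE   EXISTS (SELECT 1
                FROM   STRING_SPLIT(f.ChannelList, N';') AS c
                JOIN   @Channels AS ch
                       ON LTRIM(RTRIM(c.value)) = ch.Channel);
```

Splitting a delimited column in the `WHERE` clause like this is exactly the sort of thing that warranted the broader performance review with sample data sets.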

With the changes made it’s time to get them checked into Git, update the ticket, and start on the road to deployment.

Deployment

Deployment for us is about gateways. We have a reliable pipeline to deploy incremental changes relatively freely. This includes a number of reviews and tests. Here’s a run-down of the key steps between scripting the changes and releasing:

  • Developer DBA reviews code and manually deploys into development environment
  • Changes in development are smoke tested by the developer as much as possible with limited data
  • Changes are reviewed and authorised for UAT with a senior or lead developer
  • Production DBA team review changes and either deploy to UAT or feed back to the dev. This includes reviewing execution plans and performance against current release
  • Developer creates a test scope for the changes covering any touch points and related areas
  • Test team will undertake testing based on scope and original request and sign off or fail back to the dev
  • Finally a lead can authorise a live deployment

If at any point the reviews, deployments, or tests fail, the changes go back to the developer for rework.

Not only does this process check that the changes work, it also bakes in performance profiling and coding standards across the teams. This is why we have a high degree of confidence when we deploy to production.

Wrap up

Here I’ve covered a recently closed ticket to try and demonstrate how a Development DBA role may feel within a larger organisation. This is a typical small work ticket for us. Sometimes it’s newer functionality or changes to existing features. Other times changes have happened, there are some edge cases which have been missed, and we need to mop up later.

With a lot of hands and moving parts, things will get missed. It’s not a criticism, it’s just how things are.

All in, this change was in flight for just over a week: a day or two to triage, design, and implement the changes, and the remainder whilst the release worked through the deployment pipeline. The process is reliable and gives us confidence, but it isn’t always fast.

Thanks again to Brent for hosting and I look forward to seeing everyone’s submission in the roundup!
