How to speed up your website: Uncovering performance issues

Fast websites are often taken for granted, but performance is incredibly important. A bloated website may seem fast on broadband, but not everyone is on super-fast Wi-Fi. Users who try to access bloated websites over slow Wi-Fi or mobile networks may get frustrated with you. That could tarnish your brand and send customers over to your competition, you know, the ones with high-performing websites that load in under 3 seconds.

We had a client that needed their website to go faster, so Standard Beagle decided to take an in-depth look at the site's performance. Thankfully, with some modern tools that have come out in the past few years, we were able to comb through and prioritize a page's particular performance issues.

How to uncover what’s causing performance issues

We first leaned on the Chrome extension Lighthouse. The Chrome browser has been updating how it measures and displays website audit data in its developer tools, and Lighthouse has been part of that change, integrated into DevTools as of Chrome 60. An audit returns a lot of information, broken up into different sections:

  • Best Practices
  • PWA (Progressive Web App)
  • Accessibility
  • Performance

The information that comes back after running a site through the extension covers quite a lot. The website we were addressing performance issues on is a WordPress site, so we weren't interested in the Progressive Web App metrics. We run separate tests for accessibility, and though Best Practices definitely had great information to peruse, we were specifically interested in and focused on the Performance section.

We wanted to use the performance information Lighthouse gives and let it help us find what was most impactful and important to page performance on our client's site. Luckily, Lighthouse also has a Node CLI, which lets us run the Lighthouse test locally, or attach it to another process and use the data in any way that we want. We installed the Node module and ran the tool, using the option to output the results as a JSON file:

# install Lighthouse globally
npm install -g lighthouse

# Run the tool as:
# - a headless browser (does not open a Chrome window to run)
# - output to JSON, in a folder under audits
lighthouse (name-of-website.com) --output=json --output-path=(where-you-want-to-output) --save-assets --perf --chrome-flags="--headless"

Lighthouse then handed back all of the performance data for the page. The raw JSON report had a lot of great information to peruse, but we only wanted the results data, so we built a Python script to parse that data and pull out just what we wanted. Since we were already using Python, we used csv.writer to write our CSVs as well.
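For illustration, here's a minimal sketch of that parsing step. The file name report.json and the field names (audits, title, name, score, displayValue) are assumptions about the shape of the Lighthouse JSON report, which varies between versions:

# parse_report.py - pull just the audit results out of a Lighthouse JSON report
# and write them to a CSV with csv.writer
import csv
import json

with open("report.json") as f:  # path to the Lighthouse JSON output
    report = json.load(f)

rows = []
for audit_id, audit in report.get("audits", {}).items():
    rows.append([
        audit_id,
        audit.get("title") or audit.get("name", ""),  # newer reports use "title", older use "name"
        audit.get("score"),
        audit.get("displayValue", ""),
    ])

with open("audits.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "title", "score", "display value"])
    writer.writerows(rows)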

Now, we could have built this with the Lighthouse CLI directly, since it also has a CSV output option. However, we wanted two things to happen:

  1. We wanted only the information we had already cherry-picked out of the JSON file: the results section
  2. We wanted to be able to insert our own data into the script.

The second point is where we wanted to customize our performance evaluations. Lighthouse shows raw numbers and the goal we should be striving for, which is great: we have very precise targets to hit. Lighthouse also has its own scoring system, giving a site a number value for each of the overall audit sections. However, plenty of outside influences can change which items a project's performance plan should really drill in on, so we wanted to come up with our own scores, based on two important factors:

  1. Ease
  2. Impact

We determined that the difficulty of addressing a performance metric (ease) should have a hand in how we prioritized all of the metrics. Our team agreed on a score of 1-5: 1 being most difficult and 5 being easiest. We also wanted a similar score for what we believed would have the most impact. Impact would involve things like the size of the gap between the raw value we received and the goal, plus our own experience of which metrics typically matter most when addressed. The ease + impact total would then show us what would have the most impact on the project while also being the easiest to implement, maximizing the value of the performance plan.

Here is an example of the metrics Lighthouse spits back when run against a website like discovery.com:

Perceptual Speed Index (Speed Index shows how quickly the contents of a page are visibly populated):

  • 6,254 ms
  • target: < 1,250 ms
  • Ease: 3
  • Impact: 4

First Meaningful Paint (First meaningful paint measures when the primary content of a page is visible):

  • 7,090 ms
  • No target goal given, just needs to be as small as possible
  • Ease: 2
  • Impact: 4

We could read these two as saying that Perceptual Speed Index and First Meaningful Paint would have about the same impact on performance for discovery.com, but knowing what we know about the site, Perceptual Speed Index may be easier to address, earning it a higher ease score. Determining ease also plays into the question, "Do we have a specific target for this metric?" If we have a particular goal in mind, it is easier to tell when we've hit that mark on Perceptual Speed Index, whereas First Meaningful Paint has no specific target, so we can't as easily tell when we're finished working on it.
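Adding the scores up makes the prioritization concrete. A tiny worked example using the values listed above:

# Worked example: totaling ease + impact for the two metrics above
metrics = [
    {"name": "Perceptual Speed Index", "ease": 3, "impact": 4},
    {"name": "First Meaningful Paint", "ease": 2, "impact": 4},
]
for m in metrics:
    m["total"] = m["ease"] + m["impact"]
    print(m["name"], "total:", m["total"])
# Perceptual Speed Index total: 7
# First Meaningful Paint total: 6
# The higher total bumps Perceptual Speed Index up the priority list.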

We ended up adding prompts to our Python script, asking for user input on Ease and Impact for each metric Lighthouse gives us in the Performance section of the audit, then summing the two into a total for each metric. We added Ease, Impact, and the Total to a separate dictionary in Python that we could export as JSON and CSV, sorted each row of the CSV by that ease + impact total, and finally generated an HTML page with a simple table showing the data.
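The prompting and export step looked roughly like the sketch below. The prompt wording, file names, and sample audit data are illustrative rather than our exact script:

# score_audits.py - sketch of prompting for Ease and Impact, totaling the scores,
# sorting, and exporting to JSON, CSV, and a simple HTML table
import csv
import json

# Example input; in our script this came from the parsed Lighthouse Performance audits.
audits = [
    {"title": "Perceptual Speed Index", "displayValue": "6,254 ms"},
    {"title": "First Meaningful Paint", "displayValue": "7,090 ms"},
]

def ask_score(prompt):
    """Keep asking until the user enters an integer from 1 to 5."""
    while True:
        value = input(prompt)
        if value.isdigit() and 1 <= int(value) <= 5:
            return int(value)
        print("Please enter a number from 1 to 5.")

scored = []
for audit in audits:
    print(f"\n{audit['title']}: {audit['displayValue']}")
    ease = ask_score("Ease (1 = hardest, 5 = easiest): ")
    impact = ask_score("Impact (1 = least, 5 = most): ")
    scored.append({**audit, "ease": ease, "impact": impact, "total": ease + impact})

# Sort so the easiest, highest-impact metrics land at the top of the plan.
scored.sort(key=lambda row: row["total"], reverse=True)

with open("performance-plan.json", "w") as f:
    json.dump(scored, f, indent=2)

with open("performance-plan.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["metric", "value", "ease", "impact", "total"])
    for row in scored:
        writer.writerow([row["title"], row["displayValue"],
                         row["ease"], row["impact"], row["total"]])

# A bare-bones HTML table for sharing the prioritized list.
cells = "".join(
    f"<tr><td>{r['title']}</td><td>{r['displayValue']}</td>"
    f"<td>{r['ease']}</td><td>{r['impact']}</td><td>{r['total']}</td></tr>"
    for r in scored
)
with open("performance-plan.html", "w") as f:
    f.write("<table><tr><th>Metric</th><th>Value</th><th>Ease</th>"
            "<th>Impact</th><th>Total</th></tr>" + cells + "</table>")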

If you know about web page performance, you may have realized how much these two metrics overlap. It's true they do: if you run Lighthouse on any website, Google will refer you to pages on how to address each metric, and the two we just reviewed both share the suggestion to optimize the critical rendering path. There is definitely overlap here, which shows us that when we work on a few metrics, we are most likely addressing other semi-related metrics as well.

But why bother adding our own grading system if Lighthouse already has one? If we know there is a lot of overlap in the suggestions for making our site smaller and faster, why go through and add our own scores? Because if we can evaluate performance and find out what is most important to a project, or what makes the most sense to address, we actually gain quite a bit:

  1. We have the best chance of addressing the most concerning, easiest-to-address issues specific to this project
  2. We get real data in the form we want, with only the data we choose to use
  3. We can continue to find sore spots in the site's performance as we address others
  4. We can build data points so we can show clients the results of addressing their page performance with real numbers

Overall, we gain an understanding of actionable tasks from real data that has been prioritized by our team. We've begun using an agile workflow to give us the best opportunity to build a performance plan that combines the data with our own knowledge of performance auditing, helping us sort out the most important metrics to address. From there, we can begin the actual work of addressing those issues, basing our workflow on the blueprint we've just built.

Web performance is incredibly important to consider when building any sort of website or application, especially given how most of us get online today. There can be a lot of different aspects to review and address, and it can seem like a large undertaking. We can make it easier by figuring out what gives us the biggest bang for our performance buck.
