In the world of HTTP headers, there's one header that I believe deserves more air-time and that's the Server-Timing header. To me, it's a must-use in any project where real user monitoring (RUM) is being instrumented. To my surprise, web performance monitoring conversations rarely surface Server-Timing, or cover only a shallow understanding of its application — despite it having been available for years.
Part of that is due to the perceived limitation that it's exclusively for tracking time on the server — but it can provide so much more value! Let's rethink how we can leverage this header. In this piece, we'll dive deeper to show how Server-Timing headers are so uniquely powerful, walk through some practical examples by solving difficult monitoring problems with this header, and spark some creative inspiration by combining this technique with service workers.
Server-Timing is uniquely powerful, because it is the only HTTP response header that supports setting free-form values for a specific resource and makes them accessible from a JavaScript browser API separate from the request/response references themselves. This allows resource requests, including the HTML document itself, to be enriched with data during their lifecycle, and that information can be inspected for measuring the attributes of that resource!
The only other header that comes close to this capability is the HTTP Set-Cookie / Cookie pair. Unlike cookie headers, Server-Timing is only on the response for a specific resource, whereas cookies are sent on requests and responses for all resources once they're set and unexpired. Having this data bound to a single resource response is preferable, as it prevents ephemeral data about all responses from becoming ambiguous, and avoids contributing to a growing collection of cookies sent with the remaining resources during a page load.
Setting Server-Timing
This header can be set on the response of any network resource, such as XHR, fetch, images, HTML, stylesheets, and so on. Any server or proxy can add this header to the response to provide inspectable data. The header is built up of a name with an optional description and/or metric value. The only required field is the name. Additionally, multiple Server-Timing entries can be set on the same response, combined and separated by commas.
A few simple examples:
Server-Timing: cdn_process;desc="cache_hit";dur=123
Server-Timing: cdn_process;desc="cache_hit", server_process;dur=42
Server-Timing: cdn_cache_hit
Server-Timing: cdn_cache_hit; dur=123
Important note: for cross-origin resources, Server-Timing and other potentially sensitive timing values are not exposed to consumers. To allow these features, we will also need the Timing-Allow-Origin header that includes our origin or the * value.
For this article, that's all we'll need to start exposing the value; we'll leave other, more specific articles to go deeper. See the MDN docs.
Consuming Server-Timing
Web browsers expose a global Performance Timeline API to inspect details about specific metrics/events that have occurred during the page lifecycle. From this API we can access built-in performance API extensions that expose timings in the form of PerformanceEntry objects.
There are a handful of different entry subtypes but, for the scope of this article, we will be concerned with the PerformanceResourceTiming and PerformanceNavigationTiming subtypes. These are currently the only subtypes related to network requests, and thus the only ones exposing Server-Timing information.
The top-level HTML document is fetched upon user navigation, but it is still a resource request. Rather than having separate PerformanceEntries for the navigation and resource aspects, the PerformanceNavigationTiming entry provides the resource-loading data as well as additional navigation-specific data. Because we're looking at just the resource-loading data, we'll refer to the requests (navigational documents or otherwise) simply as resources.
To query performance entries, we have three APIs we can call: performance.getEntries(), performance.getEntriesByType(), and performance.getEntriesByName(). Each returns an array of performance entries, with increasing specificity.
const navResources = performance.getEntriesByType('navigation');
const allOtherResources = performance.getEntriesByType('resource');
Finally, each of these resources will have a serverTiming field, which is an array of objects mapped from the information provided in the Server-Timing header — where PerformanceServerTiming is supported (see considerations below). The shape of the objects in this array is defined by the PerformanceServerTiming interface, which essentially maps the respective Server-Timing header metric options: name, description, and duration.
Let's look at this in a complete example.
A request was made to our data endpoint and, among the headers, we sent back the following:
Server-Timing: lookup_time;dur=42, db_cache;desc="hit"
On the client side, let's assume this is the only resource loaded on the page:
const dataEndpointEntry = performance.getEntriesByType('resource')[0];
console.log( dataEndpointEntry.serverTiming );
// outputs:
// [
//   { name: "lookup_time", description: "", duration: 42 },
//   { name: "db_cache", description: "hit", duration: 0 },
// ]
That covers the fundamental APIs used to access resource entries and the information provided by a Server-Timing header. For links to more details on these APIs, see the resources section at the bottom.
Now that we have the fundamentals of how to set and use this header/API combo, let's dive into the fun stuff.
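Since serverTiming is a plain array, a tiny helper can turn it into a lookup map keyed by metric name. This is just a sketch (the helper name is my own, and it assumes metric names are unique):

```javascript
// Convert an entry's serverTiming array into an object keyed by metric
// name, so values can be read as timings.db_cache, timings.lookup_time, etc.
function serverTimingToMap(serverTiming) {
  const map = {};
  for (const { name, description, duration } of serverTiming) {
    map[name] = { description, duration };
  }
  return map;
}

// e.g. with the entry data from the example above:
const timings = serverTimingToMap([
  { name: 'lookup_time', description: '', duration: 42 },
  { name: 'db_cache', description: 'hit', duration: 0 },
]);
console.log(timings.db_cache.description); // "hit"
```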
It's Not Just About Time
From my conversations and work with other developers, the name "Server-Timing" gives the strong impression that this is a tool used exclusively to track spans of time, or details about a span of time. That's entirely justified by the name and the intent of the feature. However, the spec for this header is very flexible, permitting values that express information which has nothing to do with timing or performance at all. Even the duration field has no predefined unit of measurement — you can put any number (a double) in that field. By stepping back and realizing that the available fields have no strict bindings to particular types of data, we can see that this technique is also an effective delivery mechanism for arbitrary data, which opens up a lot of interesting possibilities.
Examples of non-timing information you could send: HTTP response status codes, regions, request IDs, and so on — any freeform data that suits your needs. In some cases, we might send redundant information that already exists in other headers, but that's okay. As we'll cover, accessing other headers for resources often isn't possible, and if it has monitoring value, then it's alright to be redundant.
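As a sketch of what that might look like on the server side (the field names status_code, region, and req_id are arbitrary choices for illustration, not part of any spec):

```javascript
// Encode non-timing request metadata as Server-Timing metrics.
// Numbers can ride in dur; strings ride in desc.
function metadataToServerTiming({ statusCode, region, requestId }) {
  return [
    `status_code;dur=${statusCode}`,
    `region;desc="${region}"`,
    `req_id;desc="${requestId}"`,
  ].join(', ');
}

console.log(
  metadataToServerTiming({ statusCode: 200, region: 'us-west', requestId: 'abc123' })
);
// status_code;dur=200, region;desc="us-west", req_id;desc="abc123"
```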
No References Required
Due to the design of web browser APIs, there are currently no mechanisms for querying requests and their respective responses after the fact. This matters because of the need to manage memory. To read information about a request or its respective response, we have to hold a direct reference to those objects. All of the web performance monitoring software we work with provides RUM clients that add extra layers of monkey-patching to the page to maintain direct access to each request being made and each response coming back. This is how they offer drop-in monitoring of all requests without us needing to change our code to track a request. This is also why these clients require us to load the client before any request we want to monitor. The complexity of patching all the various networking APIs and their linked functionality can grow very quickly. If there were an easy-access mechanism to pull relevant resource/request information, we would certainly prefer that on the monitoring side.
To make things harder, this monkey-patching pattern only works for resources where JavaScript is directly used to initiate the networking. For images, stylesheets, JS files, the HTML document, and so on, the methods for tracking request/response details are very limited, as there is usually no direct reference available.
This is where the Performance Timeline API provides great value. As we saw earlier, it is quite literally a list of the requests made, with some data about each of them. The data for each performance entry is minimal and almost exclusively limited to timing information, plus a few fields that, depending on their value, affect how a resource's performance is measured relative to other resources. Among the timing fields, we have direct access to the serverTiming data.
Putting all of the pieces together: resources can have Server-Timing headers on their network responses containing arbitrary data. Those resources can then be easily queried, and the Server-Timing data accessed, without a direct reference to the request/response itself. With this, it doesn't matter whether you can access or manage references for a resource — all resources can be enriched with arbitrary data accessible from an easy-to-use browser API. That's a very unique and powerful capability!
Next, let's apply this pattern to some traditionally tough measurement challenges.
Solution 1: Inspecting Images and Other Asset Responses
Images, stylesheets, JavaScript files, and so on, usually aren't loaded via direct references to the networking APIs that carry information about those requests. For example, we almost always trigger image downloads by putting an img element in our HTML. There are techniques for loading these assets that use the JavaScript fetch/XHR APIs to pull the data and push it into an asset reference directly. While that alternate approach makes them easier to monitor, it's catastrophic for performance in most cases. The challenge is: how do we inspect these resources without having direct networking API references?
To tie this to real-world use cases, it's important to ask why we might want to inspect and capture response information about these resources. Here are a few reasons:
We might want to proactively capture details like status codes for our resources, so we can triage any changes.
For example, missing images (404s) are likely entirely different issues, and types of work, than dealing with images returning server errors (500s).
Adding monitoring to parts of our stack that we don't control.
Often, teams offload these types of assets to a CDN to store and deliver to users. If the CDN is having issues, how quickly will the team be able to detect the problem?
Runtime or on-demand variations of resources have become more common techniques.
For example, image resizing, automated polyfilling of scripts on the CDN, and so on — these systems can have many limits and reasons why they might not be able to create or deliver a variation. If you expect 100% of users to retrieve a particular type of asset variation, it's valuable to be able to verify that.
This came up at a previous company I worked at, where on-demand image resizing was used for thumbnail images. Due to the limitations of the provider, a large number of users would get degraded experiences because full-size images loaded where thumbnails were supposed to appear. So, where we thought >99% of users would get optimal images, >30% hit performance issues because images didn't resize.
Now that we have some understanding of what might motivate us to inspect these resources, let's see how Server-Timing can be leveraged for inspection.
Image HTML:
Image response headers:
…
Server-Timing: status_code;dur=200, resizing;desc="failed";dur=1200, req_id;desc="zyx4321"
Inspecting the image response information:
// grab the entry for the image (assuming it's the only resource here)
const [imgPerfEntry] = performance.getEntriesByType('resource');
// filter/capture entry data as needed
console.log(imgPerfEntry.serverTiming);
// outputs:
// [
//   { name: "status_code", description: "", duration: 200 },
//   { name: "resizing", description: "failed", duration: 1200 },
//   { name: "req_id", description: "zyx4321", duration: 0 },
// ]
This metric was very valuable because, despite returning "happy" responses (200s), our images weren't being resized, and potentially weren't converted to the appropriate format, and so on. Together with the other performance information on the entry, like download times, we can see the status was served as 200 (not triggering our onerror handlers on the element), resizing failed after spending 1.2s attempting to resize, and we have a request ID we can use to debug this in our other tooling. By sending this data to our RUM provider, we can aggregate and proactively monitor how often these situations happen.
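In our RUM snippet, aggregating those failures could look something like this sketch (entry shapes as in the example above; the helper name is my own, and the reporting step is left to whatever your RUM client provides):

```javascript
// Given resource entries, collect the ones whose Server-Timing reports
// a failed resize, along with the request id for cross-referencing logs.
function findFailedResizes(entries) {
  return entries
    .filter((entry) =>
      (entry.serverTiming || []).some(
        ({ name, description }) => name === 'resizing' && description === 'failed'
      )
    )
    .map((entry) => ({
      url: entry.name, // a PerformanceEntry's name is the resource URL
      reqId: entry.serverTiming.find(({ name }) => name === 'req_id')?.description,
    }));
}

// In the browser this would be fed from:
// findFailedResizes(performance.getEntriesByType('resource'));
```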
Solution 2: Inspect Resources That Return Before JS Runs
Code used to monitor resources (fetch, XHR, images, stylesheets, scripts, HTML, and so on) requires JavaScript to aggregate and then send the information somewhere. This almost always means there's an expectation that the monitoring code runs before the resources being monitored. The earlier example of the basic monkey-patching used to automatically track fetch requests illustrates this well: that code has to run before any fetch request that needs to be monitored. However, there are many cases, from performance to technical constraints, where we might not be able to — or simply should not — change the order in which a resource is requested just to make it easier to monitor.
Another very common monitoring approach is to put event listeners on the page to capture events that might have monitoring value. This usually comes in the form of onload or onerror handlers on elements, or using addEventListener more generally. This technique requires the JS to be in place before the event fires, or before the listener itself is attached. So this approach still only monitors events going forward, after the monitoring JS has run, and therefore requires the JS to execute before the resources needing measurement.
Mapping this to real-world use cases: e-commerce sites put a heavy emphasis on "above the fold" content rendering quickly — often deferring JS as much as possible. That said, there may be resources that are impactful to measure, such as successful delivery of the product image. In other situations, we might also decide that the monitoring library itself shouldn't be in the critical path due to page weight. What are the options for inspecting these requests retroactively?
The approach is the same as Solution #1! This is possible because browsers automatically maintain a buffer of all the performance entries (subject to a buffer size limit that can be changed). This allows us to defer the JS until later in the page-load cycle without needing to add listeners ahead of the resource.
Instead of repeating the Solution #1 example, let's look at what both retroactive and future inspection of performance entries looks like, to show the difference in where each can be leveraged. Note that, while we're inspecting images in these examples, we can do this for any resource type.
Setting the context for this code: our need is to ensure that our product images are being delivered successfully. Let's assume all site images return this Server-Timing header structure. Some of our important images may load before our monitoring script and, as the user navigates, more will continue to load. How do we handle both?
Image response headers:
…
Server-Timing: status_code;dur=200, resizing;desc="success";dur=30, req_id;desc="randomId"
Our monitoring logic — we expect this to run after the critical-path content of the page. Inspecting the image response information:
function monitorImages(perfEntries) {
  perfEntries.forEach((perfEntry) => {
    // monitoring for the performance entries
    console.log(perfEntry.serverTiming);
  });
}
// capture the images that loaded before this script ran
const alreadyLoadedImageEntries = performance
  .getEntriesByType('resource')
  .filter(({ initiatorType }) => initiatorType === 'img');
monitorImages(alreadyLoadedImageEntries);
// observe images that load from this point forward
const imgObserver = new PerformanceObserver(function (entriesList) {
  const newlyLoadedImageEntries = entriesList
    .getEntriesByType('resource')
    .filter(({ initiatorType }) => initiatorType === 'img');
  monitorImages(newlyLoadedImageEntries);
});
imgObserver.observe({ entryTypes: ['resource'] });
Despite deferring our monitoring script until it was out of the critical path, we're capturing the data for all images that loaded before our script, and we'll continue to monitor them as the user keeps using the site.
Solution 3: Inspecting the HTML Document
The final example solution we'll look at relates to the ultimate "before JS can run" resource — the HTML document itself. If our monitoring solutions are loaded as JS via the HTML, how do we monitor the delivery of the HTML document itself?
There is some precedent for monitoring HTML document delivery. The most common setup is to use server logs/metrics/traces to capture the response data. That's a fine solution but, depending on the tooling, the data may be decoupled from RUM data, forcing us to use multiple tools to inspect our user experiences. Additionally, this practice may also miss metadata (page event identifiers, for example) that allows us to aggregate and correlate information for a given page load — for example, correlating async requests failing when the document returns certain response codes.
A common pattern for doing this work is putting the data inside the HTML content itself. It has to go into the HTML content, because the JS-based monitoring logic doesn't have access to the HTML request headers that came before it. This turns our HTML into a dynamic document. That may be fine for our needs and allows us to take that information and supply it to our RUM tooling. However, it can become a problem if our system for HTML delivery is out of our control, or if the system makes assumptions about how HTML delivery must work. Examples include expecting the HTML to be fully static, so that it can be cached downstream in some deterministic way — "partially dynamic" HTML bodies are likely to be handled incorrectly by caching logic.
During the HTML delivery process, there may be additional data we want to understand, such as which datacenters processed the request throughout the chain. We might have a CDN edge handler that proxies a request from an origin. In this case, we can't expect that each layer can or should process and inject HTML content. How might Server-Timing headers help us here?
Building on the concepts from Solution #1 and Solution #2, here's how we can capture valuable data about the HTML document itself. Keep in mind that any part of the stack can add a Server-Timing header to the response, and they will be joined together in the final header value.
Let's assume we have a CDN edge handler and an origin that may process the document:
CDN-added response headers:
…
Server-Timing: cdn_status_code;dur=200, cdn_cache;desc="expired";dur=15, cdn_datacenter;desc="ATL", cdn_req_id;desc="zyx321abc789", cdn_time;dur=120
Origin-added response headers:
…
Server-Timing: origin_status_code;dur=200, origin_time;dur=30, origin_region;desc="us-west", origin_req_id;desc="qwerty321ytrewq789"
Inspecting the HTML response information:
// the navigation entry has a superset of information: the resource data plus the navigation-specific data
const htmlPerfEntry = performance.getEntriesByType('navigation')[0];
// filter/capture entry data as needed
console.log(htmlPerfEntry.serverTiming);
// outputs:
// [
//   { name: "cdn_status_code", description: "", duration: 200 },
//   { name: "cdn_cache", description: "expired", duration: 15 },
//   { name: "cdn_datacenter", description: "ATL", duration: 0 },
//   { name: "cdn_req_id", description: "zyx321abc789", duration: 0 },
//   { name: "cdn_time", description: "", duration: 120 },
//   { name: "origin_status_code", description: "", duration: 200 },
//   { name: "origin_time", description: "", duration: 30 },
//   { name: "origin_region", description: "us-west", duration: 0 },
//   { name: "origin_req_id", description: "qwerty321ytrewq789", duration: 0 },
// ]
From this information, our monitoring JavaScript (which could be loaded much later) can aggregate where the HTML processing happened, status codes from the different servers (which could differ for legitimate reasons — or bugs), and request identifiers if we need to correlate this with server logs. It also knows how much time was spent on the "server" via the cdn_time duration — "server" time being the total time starting at the first non-user proxy/server that we operate. Using that cdn_time duration, the already accessible HTML time-to-first-byte value, and the origin_time duration, we can determine latency segments more accurately, such as the user latency, the CDN-to-origin latency, and so on. This is incredibly powerful for optimizing such a critical delivery point and protecting it from regression.
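As a sketch, that segmentation might be computed like this (assuming the metric names from the example, and that cdn_time includes origin_time; the function name is my own):

```javascript
// Break time-to-first-byte into rough latency segments using the
// Server-Timing durations from the CDN and origin plus navigation timing.
function latencySegments(navEntry) {
  const timing = {};
  for (const { name, duration } of navEntry.serverTiming) {
    timing[name] = duration;
  }
  const ttfb = navEntry.responseStart - navEntry.requestStart;
  return {
    userToCdn: ttfb - timing.cdn_time,                 // network time outside our stack
    cdnToOrigin: timing.cdn_time - timing.origin_time, // proxy + hop latency
    origin: timing.origin_time,                        // time spent at the origin
  };
}

// In the browser:
// latencySegments(performance.getEntriesByType('navigation')[0]);
```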
Combining Server-Timing With Service Workers
Service workers are scripts initialized by the website to sit between the website, the browser, and the network (when available). When acting as a proxy, they can be used to read and modify requests coming from, and responses returning to, the website. Given that service workers are so feature-rich, we won't attempt to cover them in depth in this article — a simple web search will yield a mountain of information about their capabilities. Here, we'll focus on the proxying capability of a service worker — its ability to process requests/responses.
The key to combining these tools is knowing that the Server-Timing header and its respective PerformanceEntry are computed after service worker proxying takes place. This allows us to use service workers to add Server-Timing headers to responses, providing valuable information about the request itself.
What kind of information might we want to capture inside the service worker? As mentioned before, service workers have a lot of capabilities, and any one of those actions could produce something valuable to capture. Here are a few that come to mind:
Is this request served from the service worker cache?
Is this served from the service worker while offline?
Which service worker strategy is being used for this request type?
Which version of the service worker is being used?
This is helpful for checking our assumptions about service worker invalidation.
Take values from other headers and put them into a Server-Timing header for downstream aggregation.
Helpful when we don't have the option to change the headers on the response but want to inspect them in RUM — as is often the case with CDN providers.
How long has a resource been in the service worker cache?
Service workers must be initialized on the website, which is itself an asynchronous process. Additionally, service workers only process requests within their defined scope. As such, even the basic question of "was this request processed by the service worker?" can drive interesting conversations about how much we're leaning on its capabilities to deliver great experiences.
Let's dive into how this might look in code.
Basic JS logic used on the site to initialize the service worker:
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js').then(function (registration) {
    registration.update(); // immediately start using this sw
  });
}
Within /service-worker.js, basic request/response proxying:
self.addEventListener('fetch', function (event) {
  event.respondWith(
    // check to see if this request is cached
    caches.match(event.request)
      .then(function (response) {
        // Cache hit — return the cached response
        if (response) {
          const updatedHeaders = new Headers(response.headers);
          updatedHeaders.append('Server-Timing', 'sw_cache;desc="hit"');
          const updatedResponse = new Response(response.body, {
            status: response.status,
            statusText: response.statusText,
            headers: updatedHeaders
          });
          return updatedResponse;
        }
        return fetch(event.request).then(function (response) {
          // depending on the scope where we load our service worker,
          // we might need to filter our responses to only process our
          // first-party requests/responses;
          // a regex match on the event.request.url hostname would work here
          const updatedHeaders = new Headers(response.headers);
          updatedHeaders.append(
            'Server-Timing',
            `status_code;desc=${response.status}, sw_cache;desc="miss"`
          );
          const modifiableResponse = new Response(response.body, {
            status: response.status,
            statusText: response.statusText,
            headers: updatedHeaders
          });
          // only cache known good-state responses
          if (!response || response.status !== 200 || response.type !== 'basic' || (response.headers.get('Content-Type') || '').includes('text/html')) {
            return modifiableResponse;
          }
          const responseToCache = modifiableResponse.clone();
          // CACHE_NAME is defined elsewhere in the worker
          caches.open(CACHE_NAME).then(function (cache) {
            cache.put(event.request, responseToCache);
          });
          return modifiableResponse;
        });
      })
  );
});
Requests that are processed by the service worker will now have a Server-Timing header appended to their responses. This lets us inspect that data via the Performance Timeline API, as demonstrated in all of our prior examples. In practice, we likely didn't add the service worker for this single need — which means we already had it instrumented for handling requests. Adding the one header in two places allowed us to measure status codes for all requests, service-worker-based cache-hit ratios, and how often service workers are processing requests.
Why Use Server-Timing If We Have Service Workers?
This is an important question that comes up when discussing combining these techniques. If a service worker can capture all of the header and content information, why do we need another tool to aggregate it?
The work of measuring timing and other arbitrary metadata about requests is almost always done so that we can send this information to a RUM provider for analysis, alerting, and so on. All major RUM clients have one or two windows in which we can enrich the data about a request — when the response happens, and when the PerformanceEntry is detected. For example, if we make a fetch request, the RUM client captures the request/response details and sends them. If a PerformanceEntry is observed, the client sends that information as well — attempting to associate it with the prior request if possible. Where RUM clients offer the ability to add information about those requests/entries, these are the only windows in which to do it.
In practice, a service worker may or may not be activated yet, a request/response may or may not have been processed by the service worker, and all service worker data-sharing requires async messaging to the site via the postMessage() API. All of these aspects introduce race conditions: the service worker must be active, able to capture the data, and then able to send that data in time to be enriched by the RUM client.
Contrast this with Server-Timing: a RUM client that processes the Performance Timeline API will immediately have access to any Server-Timing data set on the PerformanceEntry.
Given this analysis of the service worker's challenges with reliably enriching request/response data, my recommendation is that service workers be used to provide additional data and context, rather than being the exclusive mechanism for delivering data to the RUM client on the main thread. That is, use Server-Timing and, where needed, use a service worker to add more context — or, if required, in cases where Server-Timing isn't supported. In that case, we would likely create custom events/metrics instead of enriching the original request/response data aggregation, since we can assume the race conditions mentioned will lead to missing the windows for regular RUM client enrichment.
Considerations For Server-Timing Usage
As uniquely powerful as it is, it's not without important considerations. Here's a list based on the implementation at the time of writing:
Browser support — Safari doesn't support exposing the Server-Timing data in the Performance Timeline API (it does show it in DevTools).
That's a shame but, given this isn't about functionality for users — it's about improved capabilities for performance monitoring — I side with this not being a blocking problem. With browser-based monitoring, we never expect to measure 100% of browsers/users. Currently, this means we can expect roughly 70-75% support based on global browser usage data, which is usually more than enough to feel confident that our metrics are showing good signals about the health and performance of our systems. As mentioned, Server-Timing is often the only way to get these metrics reliably, so we should feel confident about leveraging this tool.
As mentioned previously, if we absolutely must have this data for Safari, we could explore a cookie-based solution for Safari users. Any solutions here should be tested heavily to ensure they don't themselves hinder performance.
Because we want to improve performance, we should avoid adding a lot of weight to our responses, headers included. This is a trade-off of extra weight for value-added metadata. My recommendation: if your Server-Timing header isn't in the range of 500 bytes or more, I wouldn't be concerned. If you're worried, try various lengths and measure the impact!
When appending multiple Server-Timing headers to a single response, there's a risk of duplicate Server-Timing metric names. Browsers will surface all of them in the serverTiming array on the PerformanceEntry. It's best to avoid this through specific or namespaced naming. If it can't be avoided, we might work out the order in which each header was added and define a convention we can trust. Otherwise, we can create a utility that doesn't blindly add Server-Timing entries, but instead updates existing entries if they're already on the response.
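One way to sketch such a utility, parsing the existing header value and replacing a metric of the same name rather than appending a duplicate (the parsing here is simplified and ignores commas inside quoted descriptions):

```javascript
// Add or update a metric in a Server-Timing header value, instead of
// blindly appending and risking duplicate names.
function upsertServerTiming(headerValue, name, attrs = '') {
  const entry = attrs ? `${name};${attrs}` : name;
  const entries = headerValue ? headerValue.split(',').map((e) => e.trim()) : [];
  const index = entries.findIndex((e) => e.split(';')[0].trim() === name);
  if (index === -1) {
    entries.push(entry);
  } else {
    entries[index] = entry; // same name already present: replace it
  }
  return entries.join(', ');
}

console.log(upsertServerTiming('cdn_cache;desc="hit"', 'cdn_cache', 'desc="miss"'));
// cdn_cache;desc="miss"
```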
Try to avoid the mistake of forgetting that cached responses retain their Server-Timing values as well. In some cases, you might want to filter out the timing-related data of cached responses that, before they were cached, spent time on the server. There are various ways to detect whether the request actually went to the network using data on the PerformanceEntry, such as entry.transferSize > 0, entry.decodedBodySize > 0, or entry.duration > 40. We can also lean on what we've learned with Server-Timing and set a timestamp on the header for comparison.
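A sketch of such a filter, using the transferSize heuristic first and the duration threshold as a fallback (the threshold and function names are illustrative):

```javascript
// Heuristic: decide whether a resource entry was served from the network
// (so its Server-Timing durations reflect fresh server work) or from a
// local cache (where those durations are stale leftovers).
function cameFromNetwork(entry) {
  if (typeof entry.transferSize === 'number') {
    // transferSize is 0 for cache hits in supporting browsers
    return entry.transferSize > 0;
  }
  // transferSize unsupported: near-instant responses are likely cached
  return entry.duration > 40;
}

// Keep only Server-Timing data from entries that hit the network.
function freshServerTimings(entries) {
  return entries.filter(cameFromNetwork).flatMap((e) => e.serverTiming || []);
}
```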
Wrapping Up
We've gone quite deep into applying the Server-Timing header to use cases beyond the "timing" use case this header is generally associated with. We've seen its power to attach freeform data about a resource and to access that data without needing a reference to the networking API that requested it. This is a very unique capability, which we leveraged to measure resources of all types, inspect them retroactively, and even capture data about the HTML document itself. By combining this technique with service workers, we can add extra information from the service worker itself, or map response information from uncontrolled server responses into Server-Timing for easy access.
I believe that Server-Timing is so impressively unique that it should be used far more, but I also believe it shouldn't be used for everything. So far, it has been a must-have tool in the performance instrumentation projects I've worked on, providing otherwise impossible-to-access resource data and identifying where latency is occurring. If you're not getting value out of having the data in this header, or if it doesn't fit your needs — there's no reason to use it. The goal of this article was to offer a new perspective on Server-Timing as a tool to reach for, even if you're not measuring time.
Resources
W3C Server Timing specification
Server-Timing on MDN
"Measuring Performance With Server Timing", Drew McLellan
Performance Timeline on MDN