An interesting issue that I think is common in many fields, and that will increasingly be tackled, is the lack of a sense of how your instance of a given process compares to the rest of the world. For example, a person who starts running as a New Year's resolution is initially clueless about how they are doing. They just run and progressively compare against themselves. Occasionally they might compare against competition times, but those records are obviously biased towards better-performing individuals, giving them the wrong impression that they are doing much worse than they should. Enter the running apps (Strava, Runkeeper, Sports Tracker, etc.). Suddenly, all these individuals are using automated applications to track their performance, originally to track their personal progress by comparing against themselves, but increasingly to know how they are doing against the rest of their peers. By making everyone use the same performance tracking method, we can compare easily across individuals, and our original runner can get a much more realistic picture of where they lie in the world of runners.
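To make "where they lie" concrete, here is a minimal sketch of the percentile-rank idea. The peer times are simulated stand-ins; in reality they would come from the app's aggregated data, and the numbers here are purely illustrative.

```python
import numpy as np

# Hypothetical pool of 5K finish times (minutes) from peers using the
# same tracking app; simulated here, aggregated app data in reality.
rng = np.random.default_rng(seed=42)
peer_times = rng.normal(loc=28.0, scale=4.0, size=10_000)

my_time = 31.5  # our runner's latest 5K time, in minutes

# Lower is better, so count the fraction of peers we are faster than.
percentile = np.mean(my_time < peer_times) * 100
print(f"Faster than {percentile:.0f}% of peers on the 5K")
```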

Why is this important? Motivation comes to mind: comparing yourself against the best from the outset sounds like a great way to get disappointed. It also helps set the right expectations about which races you can enter. You could even pair the performance tracker with a training/eating schedule app and try to correlate the two. This information becomes a benchmark for our runners, and maybe a guide for how to improve.

The running example is a very simple case, and it now seems ridiculous not to have this around. The truth, however, is that there are countless cases where benchmarking is difficult. Let's try a couple of examples!

How about construction processes? I'm not entirely sure how RSMeans (a cost and man-hour database for typical construction processes) does it, but it doesn't give the right picture. Its numbers are point estimates of averages, and they give little information on the variability of those processes. Additionally, they might be taking numbers as reported by the contractors, which would suffer the usual drawbacks of traditional surveys. Ideally, a project management tool in a construction project would know what each task consists of, what resources are assigned to it, and maybe other relevant information. Then, as the project manager updates the progress of the project, the application would collect the finish times of each task. Expand this to hundreds and thousands of users and you start getting a pretty accurate picture of how long pouring a reinforced concrete wall takes. Add this knowledge to the task dependency schedule and your resources and you can run actually meaningful Monte Carlo simulations, as in the sketch below. This becomes quite a powerful tool! The issue? Contractors are very skeptical of new tools. They normally only adopt one if it provides benefits from the outset and with little alteration to their workflow.
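Here is a minimal sketch of what such a Monte Carlo run could look like, assuming we already have empirical duration samples per task and simple finish-to-start dependencies. The task names and numbers are made up for illustration; a real schedule would have far more tasks and samples.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical empirical duration samples (in days) for each task, as
# they might look once collected from thousands of project updates.
durations = {
    "excavation":  np.array([4.0, 5.5, 5.0, 6.5, 4.5]),
    "foundations": np.array([7.0, 8.0, 9.5, 7.5, 8.5]),
    "wall_pour":   np.array([2.0, 2.5, 3.5, 2.0, 3.0]),
}

# Simple finish-to-start dependencies: task -> list of predecessors.
deps = {
    "excavation": [],
    "foundations": ["excavation"],
    "wall_pour": ["foundations"],
}
order = ["excavation", "foundations", "wall_pour"]  # topological order

def simulate_once():
    """One draw: resample each task's duration from the empirical data
    (a bootstrap) and propagate finish times through the dependencies."""
    finish = {}
    for task in order:
        start = max((finish[p] for p in deps[task]), default=0.0)
        finish[task] = start + rng.choice(durations[task])
    return finish[order[-1]]

totals = np.array([simulate_once() for _ in range(10_000)])
print(f"P50 total duration: {np.percentile(totals, 50):.1f} days")
print(f"P90 total duration: {np.percentile(totals, 90):.1f} days")
```

Note how this gives you a P90 alongside the median, which is exactly the variability information that a single point estimate of the average cannot provide.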

Let's take it elsewhere, say, computer programming. Tracking here is even more complicated: not only is it difficult to know how good you are, it is hard to even define "good". Is it time to implement a given feature? It's very unlikely that the exact same feature gets implemented all over the place. Is it lines of code per hour? Sometimes less code is better. Unlike the construction scenario, we have difficulty putting a metric on performance, yet many of us have heard of the mythical 10x programmer; how would you know if you are one? We do, however, have a lot of raw data in open repositories on GitHub. If we can come up with meaningful metrics, we can start creating benchmarks.
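Even without settling on the "right" metric, the mechanics of benchmarking stay the same: place a developer on the peer distribution of each candidate metric. The sketch below assumes we have already mined two such metrics from public GitHub activity; both the metric names and the values are hypothetical.

```python
import numpy as np

# Two hypothetical candidate metrics mined from public GitHub activity.
# Neither one defines "good" on its own; the point is only to place a
# developer on each peer distribution. All values below are made up.
peer_merge_hours = np.array([3.0, 12.5, 6.0, 48.0, 9.5, 2.0, 24.0, 5.5])
peer_revert_rate = np.array([0.10, 0.35, 0.22, 0.05, 0.18, 0.40, 0.12, 0.25])

me = {"merge_hours": 6.0, "revert_rate": 0.15}

# For both metrics lower is better, so report the share of peers beaten.
for name, peers, mine in [
    ("PR merge turnaround (h)", peer_merge_hours, me["merge_hours"]),
    ("commit revert rate", peer_revert_rate, me["revert_rate"]),
]:
    pct = np.mean(mine < peers) * 100
    print(f"{name}: better than {pct:.0f}% of sampled peers")
```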

So there are clearly issues both in choosing metrics and in collecting the data. Still, I do not think there is a shortage of cases that would benefit from having this sort of information, and many of them are probably easier to implement than these.