in reply to [OT] Grouping/clustering of values
Common examples of data we normally scale this way are sound volume and earthquake intensity. Both are conventionally quoted on a logarithmic scale (decibels and Richter magnitude, respectively).
If you really want to take your original approach, I would take the largest dataset and group it with everything within a factor of 10 of its size, then put everything else in group 2. But then you have to figure out how to handle a dataset that keeps jumping from one chart to the other, and you still can't get a good graph of response-time variations spanning more than a couple of orders of magnitude.
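A minimal sketch of that two-group split, assuming each dataset is characterized by a single positive size value (the function name and input format here are my own illustration, not anything from the original post):

```python
def two_group_split(sizes):
    """Split positive values into two groups: those within a factor
    of 10 of the largest value, and everything smaller."""
    largest = max(sizes)
    threshold = largest / 10
    group1 = [s for s in sizes if s >= threshold]  # within 10x of the largest
    group2 = [s for s in sizes if s < threshold]   # everything else
    return group1, group2

# Example: largest is 9100, so the cutoff is 910.
big, small = two_group_split([3, 42, 550, 7, 9100, 2000])
print(big)    # [9100, 2000]
print(small)  # [3, 42, 550, 7]
```

Note the instability mentioned above: a dataset sitting near the threshold will flip between groups as the largest value changes between runs.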
I'd suggest trying the log scale first.