Not sure why people are suggesting databases for a task that a one-liner can handle, on a file of 132,000 lines, in under a second:
[21:23:36.53] C:\test> perl -anlE"$s{$F[1]}+=$F[2];++$n{$F[1]}}{say $_,':',$s{$_}/$n{$_} for +sort keys%s" junk.dat
0101:4.563
0102:4.602
0103:4.632
0104:4.557
0105:4.515
0106:4.605
0107:4.59
0108:4.501
0109:4.441
0110:4.439
0111:4.542
0201:4.611
0202:4.461
0203:4.627
0204:4.447
0205:4.537
0206:4.434
0207:4.421
0208:4.412
0209:4.58
0210:4.416
0211:4.431
0301:4.444
0302:4.73
0303:4.541
0304:4.564
0305:4.524
0306:4.596
0307:4.618
0308:4.352
0309:4.331
0310:4.489
0311:4.436
0401:4.6
0402:4.425
0403:4.455
0404:4.451
0405:4.482
0406:4.601
0407:4.677
0408:4.307
0409:4.59
0410:4.528
0411:4.366
0501:4.602
0502:4.471
0503:4.5
0504:4.431
0505:4.372
0506:4.543
0507:4.441
0508:4.499
0509:4.476
0510:4.512
0511:4.575
0601:4.425
0602:4.536
0603:4.522
0604:4.585
0605:4.495
0606:4.425
0607:4.595
0608:4.48
0609:4.553
0610:4.528
0611:4.578
0701:4.38
0702:4.648
0703:4.583
0704:4.409
0705:4.575
0706:4.423
0707:4.352
0708:4.599
0709:4.372
0710:4.564
0711:4.39
0801:4.408
0802:4.51
0803:4.52
0804:4.412
0805:4.581
0806:4.469
0807:4.614
0808:4.632
0809:4.387
0810:4.533
0811:4.403
0901:4.314
0902:4.612
0903:4.463
0904:4.481
0905:4.643
0906:4.454
0907:4.343
0908:4.459
0909:4.593
0910:4.527
0911:4.545
1001:4.655
1002:4.456
1003:4.585
1004:4.536
1005:4.577
1006:4.441
1007:4.648
1008:4.549
1009:4.464
1010:4.696
1011:4.493
1101:4.548
1102:4.534
1103:4.646
1104:4.522
1105:4.522
1106:4.549
1107:4.563
1108:4.439
1109:4.539
1110:4.497
1111:4.531
1201:4.486
1202:4.471
1203:4.54
1204:4.428
1205:4.517
1206:4.506
1207:4.413
1208:4.49
1209:4.418
1210:4.475
1211:4.483
Date:0

[21:23:37.63] C:\test> wc -l junk.dat
132001 junk.dat
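For anyone who finds the one-liner cryptic: -a autosplits each line into @F, -n wraps the code in a read loop, and the }{ closes that loop early so the final statement runs once at end of input. Written long-hand it looks roughly like this (the hash names are spelled out; otherwise the same logic):

use strict;
use warnings;
use feature 'say';

my ( %sum, %count );

while ( <> ) {
    my @F = split ' ';            # what -a does: split the line on whitespace
    $sum{ $F[1] }   += $F[2];     # running total of column 3, keyed by column 2
    $count{ $F[1] } += 1;         # record count per key
}

# this is the part after }{ in the one-liner: runs once, after the last line
say $_, ':', $sum{$_} / $count{$_} for sort keys %sum;

The "Date:0" entry in the output above is the file's header line being averaged along with the data; a real run would likely skip it with something like next if $F[1] eq 'Date'.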
In reply to Re: Calculating the average on a timeslice of data
by BrowserUk
in thread Calculating the average on a timeslice of data
by perlbrother