Fundamentally, you cannot reliably detect encodings. You can guess UTF-8 if the input is valid UTF-8, but that is still a guess at best.
The problem is that pre-Unicode single-byte encodings made full use of all 256 values an octet can hold. UTF-8 is built from those same 256 byte values (and the lower 128 are ASCII), so almost any valid UTF-8 byte sequence also decodes under an encoding like ISO-Latin-1, which assigns a character to every byte. There is no general solution to this problem, although you might be able to make some headway with either a dictionary of valid names, or rules for recognizing "plausible" names: names that use only characters drawn from a single language, since mixed-language names are highly unlikely.
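A minimal sketch of the "plausible names" idea in Python, assuming you only need to distinguish a couple of scripts; the per-language character sets here are illustrative stand-ins, not complete alphabets:

```python
# Illustrative (incomplete) character sets for two languages.
LATIN = set("abcdefghijklmnopqrstuvwxyzàáâäçèéêëìíîïñòóôöùúûü")
GREEK = set("αβγδεζηθικλμνξοπρστυφχψωςάέήίόύώ")

def plausible(name: str) -> bool:
    """A name is plausible if all of its letters fit one language's set."""
    letters = {c.lower() for c in name if c.isalpha()}
    return any(letters <= lang for lang in (LATIN, GREEK))

print(plausible("José"))   # all Latin letters → True
print(plausible("Νίκος"))  # all Greek letters → True
print(plausible("Joσé"))   # mixes Latin and Greek → False
```

A real implementation would need per-language sets covering every name-bearing script you expect, which is exactly why this only gets you partway.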
For the special case of deciding whether the input is UTF-8 as requested or ISO-Latin-1 due to following an outdated link, you can probably make good progress by simply checking whether the input is valid UTF-8 and assuming ISO-Latin-1 otherwise. This is not exactly correct, since a byte sequence that happens to be valid UTF-8 may still have been intended as ISO-Latin-1, but it is a fair heuristic in practice.
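That heuristic is a few lines in Python, since `bytes.decode` raises `UnicodeDecodeError` on invalid UTF-8 and every byte sequence decodes under Latin-1:

```python
def guess_encoding(data: bytes) -> str:
    """Guess UTF-8 vs. ISO-Latin-1: try UTF-8, fall back to Latin-1.

    Latin-1 accepts any byte sequence, so it can only be the fallback.
    """
    try:
        data.decode("utf-8")
        return "utf-8"
    except UnicodeDecodeError:
        return "iso-8859-1"

# "é" is b'\xc3\xa9' in UTF-8 but b'\xe9' in Latin-1; a lone
# 0xE9 is an incomplete UTF-8 sequence, so decoding fails.
print(guess_encoding("é".encode("utf-8")))   # → utf-8
print(guess_encoding("é".encode("latin-1"))) # → iso-8859-1
```

Note the inherent ambiguity: b'\xc3\xa9' also decodes under Latin-1 (as "Ã©"), which is why this is a heuristic rather than a detection.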