
Here's a basic script that runs as a "stdin-stdout filter" -- that is, it always reads data from STDIN and prints the result to STDOUT, so you always pipe or redirect its input and output, like this:
enc-converter cp1252 < file.txt > file_utf8.txt

# or
some-process | enc-converter shiftjis | some-utf8-process

# or a mix:
some-process | enc-converter koi8-r > file_utf8.txt
enc-converter iso-8859-1 < file.txt | utf8-process
Note that you must provide the name of the input encoding as the command-line argument ($ARGV[0]):
#!/usr/bin/perl
use strict;

# Require exactly one argument (a plausible encoding name)
# and require that STDIN is redirected or piped (not a terminal)
( @ARGV == 1 and $ARGV[0] =~ /^\w[-\w]+$/ and ! -t )
    or die "Usage: $0 inp-enc < file.inp-enc > file.utf8\n";

my $inp_enc = sprintf( ":encoding(%s)", shift );
binmode STDIN, $inp_enc;      # decode input from the given encoding
binmode STDOUT, ":utf8";      # encode output as UTF-8

print while (<>);
The Encode manual provides some instructions on how to get a listing of the names of known encodings usable with the ":encoding(...)" technique. This command will print the list:
perl -MEncode -le 'print for(Encode->encodings(":all"))'
(In a Windows/DOS shell, you need to swap the single and double quotes.)
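For example, to check whether a name you have in mind is actually recognized, you can filter that list (this assumes a Unix-like shell with grep; the "8859" pattern is just an example):

perl -MEncode -le 'print for(Encode->encodings(":all"))' | grep -i 8859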

As Joost pointed out, you need to know in advance what the input encoding is, because writing code that guesses the input encoding automatically is a lot more work. (It can be done, but you need valid training data for each combination of language and encoding you might encounter in order to build models; then you test each input stream against each model and hope that the best match is the right one.)
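That said, if you only need to choose among a short list of candidate encodings, the Encode::Guess module that ships with Encode can do a rough version of this. Here is a minimal sketch; the file name and the list of suspect encodings are just examples, and the guess is only as good as the candidates you give it:

#!/usr/bin/perl
use strict;
use Encode;
use Encode::Guess;   # ships with Encode

# slurp the raw bytes (file name is just an example)
my $raw = do {
    local $/;
    open my $fh, '<:raw', 'file.txt' or die $!;
    <$fh>;
};

# ask Encode::Guess to pick from a short list of distinct suspects
my $enc = guess_encoding( $raw, qw(cp1252 shiftjis koi8-r) );
ref $enc or die "Cannot guess encoding: $enc\n";   # a plain string means failure

binmode STDOUT, ':utf8';
print $enc->decode( $raw );

Note that the suspects should be clearly distinguishable from one another, or Encode::Guess will refuse to pick one.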

update: As you might expect, given the simplicity of the script shown above, it's not much more typing to do the character conversion as a Perl one-liner:

perl -CO -pe 'BEGIN{binmode STDIN,":encoding(cp936)"}' < file.txt > file.utf8
The "-C" option with capital letter "O" sets STDOUT to utf8 (so does "-C2"); the script itself is just the BEGIN block to set the encoding for STDIN; the "-p" option does the rest.