Determining a String's Encoding in C#

Date: 2009-06-22 03:04:17

Tags: c# string encoding

Is there any way to determine a string's encoding in C#?

Say I have a filename string, but I don't know whether it is encoded as Unicode UTF-16 or in the system default encoding. How can I find out?

10 Answers:

Answer 0 (score: 47)

The code below has the following features:

  1. Detects (or attempts to detect) UTF-7 and UTF-8/16/32 (with BOM, without BOM, little- and big-endian).
  2. Falls back to the local default code page if no Unicode encoding is found.
  3. Detects (with high probability) Unicode files that are missing a BOM/signature.
  4. Searches the file for charset=xyz and encoding=xyz to help determine the encoding.
  5. To save processing, you can 'taste' the file (a definable number of bytes).
  6. Returns both the encoding and the decoded text of the file.
  7. A purely byte-based solution, for efficiency.
  8. As others have said, no solution can be perfect (and certainly one can't easily distinguish between the various 8-bit extended-ASCII encodings in use around the world), but we can get 'good enough', especially if the developer also presents the user with a list of alternative encodings, as shown here: What is the most common encoding of each language?

    A full list of encodings can be found using Encoding.GetEncodings();

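As a quick illustration (not part of the answer's code), that list can be printed like this; note that on modern .NET the default set is small unless extra code pages are registered:

```csharp
using System;
using System.Text;

class ListEncodings
{
    static void Main()
    {
        // Each EncodingInfo describes one encoding supported by the runtime.
        // (On .NET Core/.NET 5+ this list is much shorter than on .NET Framework
        // unless CodePagesEncodingProvider is registered first.)
        foreach (EncodingInfo info in Encoding.GetEncodings())
            Console.WriteLine($"{info.CodePage}\t{info.Name}\t{info.DisplayName}");
    }
}
```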
    // Function to detect the encoding for UTF-7, UTF-8/16/32 (bom, no bom, little
    // & big endian), and local default codepage, and potentially other codepages.
    // 'taster' = number of bytes to check of the file (to save processing). Higher
    // value is slower, but more reliable (especially UTF-8 with special characters
    // later on may appear to be ASCII initially). If taster = 0, then taster
    // becomes the length of the file (for maximum reliability). 'text' is simply
    // the string with the discovered encoding applied to the file.
    public Encoding detectTextEncoding(string filename, out String text, int taster = 1000)
    {
        byte[] b = File.ReadAllBytes(filename);
    
        //////////////// First check the low hanging fruit by checking if a
        //////////////// BOM/signature exists (sourced from http://www.unicode.org/faq/utf_bom.html#bom4)
        if (b.Length >= 4 && b[0] == 0x00 && b[1] == 0x00 && b[2] == 0xFE && b[3] == 0xFF) { text = Encoding.GetEncoding("utf-32BE").GetString(b, 4, b.Length - 4); return Encoding.GetEncoding("utf-32BE"); }  // UTF-32, big-endian 
        else if (b.Length >= 4 && b[0] == 0xFF && b[1] == 0xFE && b[2] == 0x00 && b[3] == 0x00) { text = Encoding.UTF32.GetString(b, 4, b.Length - 4); return Encoding.UTF32; }    // UTF-32, little-endian
        else if (b.Length >= 2 && b[0] == 0xFE && b[1] == 0xFF) { text = Encoding.BigEndianUnicode.GetString(b, 2, b.Length - 2); return Encoding.BigEndianUnicode; }     // UTF-16, big-endian
        else if (b.Length >= 2 && b[0] == 0xFF && b[1] == 0xFE) { text = Encoding.Unicode.GetString(b, 2, b.Length - 2); return Encoding.Unicode; }              // UTF-16, little-endian
        else if (b.Length >= 3 && b[0] == 0xEF && b[1] == 0xBB && b[2] == 0xBF) { text = Encoding.UTF8.GetString(b, 3, b.Length - 3); return Encoding.UTF8; } // UTF-8
        else if (b.Length >= 3 && b[0] == 0x2b && b[1] == 0x2f && b[2] == 0x76) { text = Encoding.UTF7.GetString(b,3,b.Length-3); return Encoding.UTF7; } // UTF-7
    
    
        //////////// If the code reaches here, no BOM/signature was found, so now
        //////////// we need to 'taste' the file to see if can manually discover
        //////////// the encoding. A high taster value is desired for UTF-8
        if (taster == 0 || taster > b.Length) taster = b.Length;    // Taster size can't be bigger than the filesize obviously.
    
    
        // Some text files are encoded in UTF8, but have no BOM/signature. Hence
        // the below manually checks for a UTF8 pattern. This code is based off
        // the top answer at: https://stackoverflow.com/questions/6555015/check-for-invalid-utf8
        // For our purposes, an unnecessarily strict (and terser/slower)
        // implementation is shown at: https://stackoverflow.com/questions/1031645/how-to-detect-utf-8-in-plain-c
        // For the below, false positives should be exceedingly rare (and would
        // be either slightly malformed UTF-8 (which would suit our purposes
        // anyway) or 8-bit extended ASCII/UTF-16/32 at a vanishingly long shot).
        int i = 0;
        bool utf8 = false;
        while (i < taster - 4)
        {
            if (b[i] <= 0x7F) { i += 1; continue; }     // If all characters are below 0x80, then it is valid UTF8, but UTF8 is not 'required' (and therefore the text is more desirable to be treated as the default codepage of the computer). Hence, there's no "utf8 = true;" code unlike the next three checks.
            if (b[i] >= 0xC2 && b[i] <= 0xDF && b[i + 1] >= 0x80 && b[i + 1] < 0xC0) { i += 2; utf8 = true; continue; }
            if (b[i] >= 0xE0 && b[i] <= 0xEF && b[i + 1] >= 0x80 && b[i + 1] < 0xC0 && b[i + 2] >= 0x80 && b[i + 2] < 0xC0) { i += 3; utf8 = true; continue; } // (upper bound is 0xEF, not 0xF0, so that 4-byte lead bytes fall through to the next check)
            if (b[i] >= 0xF0 && b[i] <= 0xF4 && b[i + 1] >= 0x80 && b[i + 1] < 0xC0 && b[i + 2] >= 0x80 && b[i + 2] < 0xC0 && b[i + 3] >= 0x80 && b[i + 3] < 0xC0) { i += 4; utf8 = true; continue; }
            utf8 = false; break;
        }
        if (utf8 == true) {
            text = Encoding.UTF8.GetString(b);
            return Encoding.UTF8;
        }
    
    
        // The next check is a heuristic attempt to detect UTF-16 without a BOM.
        // We simply look for zeroes in odd or even byte places, and if a certain
        // threshold is reached, the text is 'probably' UTF-16.
        double threshold = 0.1; // proportion of chars step 2 which must be zeroed to be diagnosed as utf-16. 0.1 = 10%
        int count = 0;
        for (int n = 0; n < taster; n += 2) if (b[n] == 0) count++;
        if (((double)count) / taster > threshold) { text = Encoding.BigEndianUnicode.GetString(b); return Encoding.BigEndianUnicode; }
        count = 0;
        for (int n = 1; n < taster; n += 2) if (b[n] == 0) count++;
        if (((double)count) / taster > threshold) { text = Encoding.Unicode.GetString(b); return Encoding.Unicode; } // (little-endian)
    
    
        // Finally, a long shot - let's see if we can find "charset=xyz" or
        // "encoding=xyz" to identify the encoding:
        for (int n = 0; n < taster-9; n++)
        {
            if (
                ((b[n + 0] == 'c' || b[n + 0] == 'C') && (b[n + 1] == 'h' || b[n + 1] == 'H') && (b[n + 2] == 'a' || b[n + 2] == 'A') && (b[n + 3] == 'r' || b[n + 3] == 'R') && (b[n + 4] == 's' || b[n + 4] == 'S') && (b[n + 5] == 'e' || b[n + 5] == 'E') && (b[n + 6] == 't' || b[n + 6] == 'T') && (b[n + 7] == '=')) ||
                ((b[n + 0] == 'e' || b[n + 0] == 'E') && (b[n + 1] == 'n' || b[n + 1] == 'N') && (b[n + 2] == 'c' || b[n + 2] == 'C') && (b[n + 3] == 'o' || b[n + 3] == 'O') && (b[n + 4] == 'd' || b[n + 4] == 'D') && (b[n + 5] == 'i' || b[n + 5] == 'I') && (b[n + 6] == 'n' || b[n + 6] == 'N') && (b[n + 7] == 'g' || b[n + 7] == 'G') && (b[n + 8] == '='))
                )
            {
                if (b[n + 0] == 'c' || b[n + 0] == 'C') n += 8; else n += 9;
                if (b[n] == '"' || b[n] == '\'') n++;
                int oldn = n;
                while (n < taster && (b[n] == '_' || b[n] == '-' || (b[n] >= '0' && b[n] <= '9') || (b[n] >= 'a' && b[n] <= 'z') || (b[n] >= 'A' && b[n] <= 'Z')))
                { n++; }
                byte[] nb = new byte[n-oldn];
                Array.Copy(b, oldn, nb, 0, n-oldn);
                try {
                    string internalEnc = Encoding.ASCII.GetString(nb);
                    text = Encoding.GetEncoding(internalEnc).GetString(b);
                    return Encoding.GetEncoding(internalEnc);
                }
                catch { break; }    // If C# doesn't recognize the name of the encoding, break.
            }
        }
    
    
        // If all else fails, the encoding is probably (though certainly not
        // definitely) the user's local codepage! One might present to the user a
        // list of alternative encodings as shown here: https://stackoverflow.com/questions/8509339/what-is-the-most-common-encoding-of-each-language
        // A full list can be found using Encoding.GetEncodings();
        text = Encoding.Default.GetString(b);
        return Encoding.Default;
    }
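The BOM/signature branch of the function above can also be exercised on its own against an in-memory buffer; a minimal sketch (separate from the answer's code, keeping the same check order so the UTF-32 LE signature wins over its UTF-16 LE prefix):

```csharp
using System;
using System.Text;

class BomSniff
{
    // Returns the encoding implied by a BOM/signature, or null if none is found.
    // Mirrors the first branch of the function above, but on an in-memory buffer.
    static Encoding FromBom(byte[] b)
    {
        if (b.Length >= 4 && b[0] == 0x00 && b[1] == 0x00 && b[2] == 0xFE && b[3] == 0xFF) return Encoding.GetEncoding("utf-32BE");
        if (b.Length >= 4 && b[0] == 0xFF && b[1] == 0xFE && b[2] == 0x00 && b[3] == 0x00) return Encoding.UTF32;
        if (b.Length >= 2 && b[0] == 0xFE && b[1] == 0xFF) return Encoding.BigEndianUnicode;
        if (b.Length >= 2 && b[0] == 0xFF && b[1] == 0xFE) return Encoding.Unicode;
        if (b.Length >= 3 && b[0] == 0xEF && b[1] == 0xBB && b[2] == 0xBF) return Encoding.UTF8;
        return null;
    }

    static void Main()
    {
        byte[] utf8Bom = { 0xEF, 0xBB, 0xBF, (byte)'h', (byte)'i' };
        Console.WriteLine(FromBom(utf8Bom)?.WebName);   // utf-8
        byte[] noBom = Encoding.ASCII.GetBytes("hi");
        Console.WriteLine(FromBom(noBom) == null);      // True
    }
}
```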
    

Answer 1 (score: 30)

It depends where the string 'came from'. A .NET string is Unicode (UTF-16). The only way it could be different is if you, say, read the data from a database into a byte array.

This CodeProject article may be of interest: Detect Encoding for in- and outgoing text

Jon Skeet's Strings in C# and .NET is an excellent explanation of .NET strings.

Answer 2 (score: 30)

Check out Utf8Checker. It is a simple class that does exactly this in pure managed code: http://utf8checker.codeplex.com

Note: as has already been pointed out, 'determining the encoding' only makes sense for byte streams. If you have a string, then someone along the way already knew or guessed the encoding in order to produce that string in the first place.
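That point can be made concrete: a .NET string is already decoded, and only its byte representations carry an encoding; a minimal sketch:

```csharp
using System;
using System.Text;

class StringHasNoEncoding
{
    static void Main()
    {
        string s = "héllo"; // a .NET string is already decoded (UTF-16 in memory)

        // Only its byte representations carry an encoding:
        byte[] utf8 = Encoding.UTF8.GetBytes(s);
        byte[] utf16 = Encoding.Unicode.GetBytes(s);
        Console.WriteLine(utf8.Length);   // 6  ('é' takes two bytes in UTF-8)
        Console.WriteLine(utf16.Length);  // 10 (two bytes per char in UTF-16)

        // Going back to a string requires knowing which encoding produced the bytes:
        Console.WriteLine(Encoding.UTF8.GetString(utf8) == s);   // True
        Console.WriteLine(Encoding.UTF8.GetString(utf16) == s);  // False (a wrong guess mangles the text)
    }
}
```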

Answer 3 (score: 18)

I know this is a bit late, but to be clear:

A string doesn't really have an encoding... in .NET, a string is a collection of char objects. Essentially, if it is a string, it has already been decoded.

However, if you are reading the contents of a file, which is made of bytes, and wish to convert it to a string, then the file's encoding has to be used.

.NET includes encoding and decoding classes for ASCII, UTF7, UTF8, UTF32, and more.

Most of these encodings contain certain byte order marks that can be used to distinguish which encoding type was used.

The .NET class System.IO.StreamReader can determine the encoding used within a stream by reading those byte order marks.

Here is an example:

    /// <summary>
    /// return the detected encoding and the contents of the file.
    /// </summary>
    /// <param name="fileName"></param>
    /// <param name="contents"></param>
    /// <returns></returns>
    public static Encoding DetectEncoding(String fileName, out String contents)
    {
        // open the file with the stream-reader:
        using (StreamReader reader = new StreamReader(fileName, true))
        {
            // read the contents of the file into a string
            contents = reader.ReadToEnd();

            // return the encoding.
            return reader.CurrentEncoding;
        }
    }

Answer 4 (score: 11)

Another option, arriving very late, sorry:

http://www.architectshack.com/TextFileEncodingDetector.ashx

This small C#-only class uses BOMs if present, tries to auto-detect possible Unicode encodings otherwise, and falls back if none of the Unicode encodings is possible or likely.

It sounds like the UTF8Checker referenced above does something similar, but I think this one is slightly broader in scope: in addition to UTF-8, it also checks for other possible Unicode encodings (UTF-16 LE or BE) that might be missing a BOM.

Hope this helps somebody!

Answer 5 (score: 6)

The SimpleHelpers.FileEncoding NuGet package wraps a port of the Mozilla Universal Charset Detector into a dead-simple API:

    var encoding = FileEncoding.DetectFileEncoding(txtFile);

Answer 6 (score: 5)

My solution is to use built-in facilities with some fallbacks.

I picked the strategy from an answer to another similar question on Stack Overflow, but I cannot find it now.

It first checks the BOM using the built-in logic in StreamReader; if there is one, the encoding will be something other than Encoding.Default, and that result should be trusted.

If not, it checks whether the byte sequence is a valid UTF-8 sequence. If it is, it guesses UTF-8 as the encoding; if not, the default ANSI encoding (Encoding.Default) is the result.

static Encoding getEncoding(string path) {
    var stream = new FileStream(path, FileMode.Open);
    var reader = new StreamReader(stream, Encoding.Default, true);
    reader.Read();

    if (reader.CurrentEncoding != Encoding.Default) {
        reader.Close();
        return reader.CurrentEncoding;
    }

    stream.Position = 0;

    reader = new StreamReader(stream, new UTF8Encoding(false, true));
    try {
        reader.ReadToEnd();
        reader.Close();
        return Encoding.UTF8;
    }
    catch (Exception) {
        reader.Close();
        return Encoding.Default;
    }
}

Answer 7 (score: 2)

Note: this was an experiment to understand how UTF-8 encoding works internally. The solution offered by vilicvane, which uses a UTF8Encoding object initialized to throw an exception on decoding failure, is much simpler and basically does the same thing.

I wrote this piece of code to differentiate between UTF-8 and Windows-1252. It shouldn't be used for gigantic text files though, since it loads the entire contents into memory and scans them completely. I used it for .srt subtitle files, just to be able to save them back in the encoding in which they were loaded.

The encoding given to the function as ref should be the 8-bit fallback encoding to use in case the file is detected as not being valid UTF-8; generally, on Windows systems, that will be Windows-1252. It doesn't do anything fancy like checking actual valid ASCII ranges though, and it doesn't detect UTF-16, not even from a byte order mark.

The theory behind the bit-wise detection can be found here: https://ianthehenry.com/2015/1/17/decoding-utf-8/

Basically, the bit range of the first byte determines how many bytes after it are part of the UTF-8 entity. The bytes after it are always in the same bit range.
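That first-byte rule can be shown in isolation; a minimal sketch (separate from the answer's code, and restricted to the modern 4-byte maximum of RFC 3629, while the code below also accepts the legacy 5- and 6-byte forms):

```csharp
using System;

class Utf8LeadByte
{
    // Returns the total sequence length implied by a UTF-8 lead byte,
    // or -1 if the byte cannot start a sequence.
    static int SequenceLength(byte b)
    {
        if ((b & 0x80) == 0x00) return 1; // 0xxxxxxx: ASCII
        if ((b & 0xE0) == 0xC0) return 2; // 110xxxxx
        if ((b & 0xF0) == 0xE0) return 3; // 1110xxxx
        if ((b & 0xF8) == 0xF0) return 4; // 11110xxx
        return -1; // 10xxxxxx continuation byte, or invalid lead
    }

    static void Main()
    {
        Console.WriteLine(SequenceLength((byte)'A')); // 1
        Console.WriteLine(SequenceLength(0xC3));      // 2 (e.g. 'é' = C3 A9)
        Console.WriteLine(SequenceLength(0xE2));      // 3 (e.g. '€' = E2 82 AC)
        Console.WriteLine(SequenceLength(0xF0));      // 4 (e.g. many emoji)
        Console.WriteLine(SequenceLength(0x80));      // -1 (continuation, not a lead)
    }
}
```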

/// <summary>
///     Detects whether the encoding of the data is valid UTF-8 or ascii. If detection fails, the text is decoded using the given fallback encoding.
///     Bit-wise mechanism for detecting valid UTF-8 based on https://ianthehenry.com/2015/1/17/decoding-utf-8/
///     Note that pure ascii detection should not be trusted: it might mean the file is meant to be UTF-8 or Windows-1252 but simply contains no special characters.
/// </summary>
/// <param name="docBytes">The bytes of the text document.</param>
/// <param name="encoding">The default encoding to use as fallback if the text is detected not to be pure ascii or UTF-8 compliant. This ref parameter is changed to the detected encoding, or Windows-1252 if the given encoding parameter is null and the text is not valid UTF-8.</param>
/// <returns>The contents of the read file</returns>
public static String ReadFileAndGetEncoding(Byte[] docBytes, ref Encoding encoding)
{
    if (encoding == null)
        encoding = Encoding.GetEncoding(1252);
    // BOM detection is not added in this example. Add it yourself if you feel like it. Should set the "encoding" param and return the decoded string.
    //String file = DetectByBOM(docBytes, ref encoding);
    //if (file != null)
    //    return file;
    Boolean isPureAscii = true;
    Boolean isUtf8Valid = true;
    for (Int32 i = 0; i < docBytes.Length; i++)
    {
        Int32 skip = TestUtf8(docBytes, i);
        if (skip != 0)
        {
            if (isPureAscii)
                isPureAscii = false;
            if (skip < 0)
                isUtf8Valid = false;
            else
                i += skip;
        }
        // if already detected that it's not valid utf8, there's no sense in going on.
        if (!isUtf8Valid)
            break;
    }
    if (isPureAscii)
        encoding = new ASCIIEncoding(); // pure 7-bit ascii.
    else if (isUtf8Valid)
        encoding = new UTF8Encoding(false);
    // else, retain given fallback encoding.
    return encoding.GetString(docBytes);
}

/// <summary>
/// Tests if the bytes following the given offset are UTF-8 valid, and returns
/// the extra amount of bytes to skip ahead to do the next read if it is
/// (meaning, detecting a single-byte ascii character would return 0).
/// If the text is not UTF-8 valid it returns -1.
/// </summary>
/// <param name="binFile">Byte array to test</param>
/// <param name="offset">Offset in the byte array to test.</param>
/// <returns>The amount of extra bytes to skip ahead for the next read, or -1 if the byte sequence wasn't valid UTF-8</returns>
public static Int32 TestUtf8(Byte[] binFile, Int32 offset)
{
    Byte current = binFile[offset];
    if ((current & 0x80) == 0)
        return 0; // valid 7-bit ascii. Added length is 0 bytes.
    else
    {
        Int32 len = binFile.Length;
        Int32 fullmask = 0xC0;
        Int32 testmask = 0;
        for (Int32 addedlength = 1; addedlength < 6; addedlength++)
        {
            // This code adds shifted bits to get the desired full mask.
            // If the full mask is [111]0 0000, then test mask will be [110]0 0000. Since this is
            // effectively always the previous step in the iteration I just store it each time.
            testmask = fullmask;
            fullmask += (0x40 >> addedlength);
            // Test bit mask for this level
            if ((current & fullmask) == testmask)
            {
                // End of file. Might be cut off, but either way, deemed invalid.
                if (offset + addedlength >= len)
                    return -1;
                else
                {
                    // Lookahead. Pattern of any following bytes is always 10xxxxxx
                    for (Int32 i = 1; i <= addedlength; i++)
                    {
                        // If it does not match the pattern for an added byte, it is deemed invalid.
                        if ((binFile[offset + i] & 0xC0) != 0x80)
                            return -1;
                    }
                    return addedlength;
                }
            }
        }
        // Value is greater than the start of a 6-byte utf8 sequence. Deemed invalid.
        return -1;
    }
}

Answer 8 (score: 1)

I found a new library on GitHub: CharsetDetector/UTF-unknown

A charset detector built in C# - for .NET Core 2-3, .NET Standard 1-2 and .NET 4+

It is also a port of the Mozilla Universal Charset Detector, based on other repositories.

CharsetDetector/UTF-unknown has a class named CharsetDetector.

CharsetDetector contains some static encoding-detection methods:

  • CharsetDetector.DetectFromFile()
  • CharsetDetector.DetectFromStream()
  • CharsetDetector.DetectFromBytes()

The detection result is in the class DetectionResult, whose property Detected is an instance of the class DetectionDetail with the following properties:

  • EncodingName
  • Encoding
  • Confidence

Below is an example showing the usage:

// Program.cs
using System;
using System.Text;
using UtfUnknown;

namespace ConsoleExample
{
    public class Program
    {
        public static void Main(string[] args)
        {
            string filename = @"E:\new-file.txt";
            DetectDemo(filename);
        }

        /// <summary>
        /// Command line example: detect the encoding of the given file.
        /// </summary>
        /// <param name="filename">a filename</param>
        public static void DetectDemo(string filename)
        {
            // Detect from File
            DetectionResult result = CharsetDetector.DetectFromFile(filename);
            // Get the best Detection
            DetectionDetail resultDetected = result.Detected;

            // detected result may be null.
            if (resultDetected != null)
            {
                // Get the alias of the found encoding
                string encodingName = resultDetected.EncodingName;
                // Get the System.Text.Encoding of the found encoding (can be null if not available)
                Encoding encoding = resultDetected.Encoding;
                // Get the confidence of the found encoding (between 0 and 1)
                float confidence = resultDetected.Confidence;

                if (encoding != null)
                {
                    Console.WriteLine($"Detection completed: {filename}");
                    Console.WriteLine($"EncodingWebName: {encoding.WebName}{Environment.NewLine}Confidence: {confidence}");
                }
                else
                {
                    Console.WriteLine($"Detection completed: {filename}");
                    Console.WriteLine($"(Encoding is null){Environment.NewLine}EncodingName: {encodingName}{Environment.NewLine}Confidence: {confidence}");
                }
            }
            else
            {
                Console.WriteLine($"Detection failed: {filename}");
            }
        }
    }
}

Example result screenshot: (image omitted)

Answer 9 (score: 1)

The approach that finally worked for me is to try potential candidates among the expected encodings by detecting invalid characters in the string created from the byte array with each candidate encoding. If I don't encounter invalid characters, the tested encoding works fine for the tested data.

For me, having only Latin and German special characters to consider, I try to detect invalid characters in the decoded string with this method in order to determine the proper encoding for a byte array:

    /// <summary>
    /// detect invalid characters in string, use to detect improper encoding
    /// </summary>
    /// <param name="s"></param>
    /// <returns></returns>
    public static bool DetectInvalidChars(string s)
    {
        const string specialChars = "\r\n\t .,;:-_!\"'?()[]{}&%$§=*+~#@|<>äöüÄÖÜß/\\^€";
        return s.Any(ch => !(
            specialChars.Contains(ch) ||
            (ch >= '0' && ch <= '9') ||
            (ch >= 'a' && ch <= 'z') ||
            (ch >= 'A' && ch <= 'Z')));
    }

(Note: you may need to adapt the specialChars const string in the code above if you want to account for other Latin-based languages)

Then I use it like this (I only expect UTF-8 or the default encoding):

        // determine encoding by detecting invalid characters in string
        var invoiceXmlText = Encoding.UTF8.GetString(invoiceXmlBytes); // try utf-8 first
        if (StringFuncs.DetectInvalidChars(invoiceXmlText))
            invoiceXmlText = Encoding.Default.GetString(invoiceXmlBytes); // fallback to default