Windows CMD fails to output UTF-16 correctly

Date: 2014-05-05 18:53:28

Tags: c++ unicode encoding utf-8 utf-16

I am trying to output non-ASCII characters to the Windows CMD, but the problem is that it doesn't work properly. I didn't write the code below myself; I glued the two parts together. The code is supposed to convert a character to UTF-8 and then from UTF-8 to UTF-16 so that it displays correctly on Windows. Here is the code:

// codecvt::in example
#include <iostream>       // std::wcout
#include <locale>         // std::locale, std::codecvt, std::use_facet
#include <string>         // std::wstring
#include <cwchar>         // std::mbstate_t

void GetUnicodeChar(unsigned int code, char chars[5]) {
    if (code <= 0x7F) {
        chars[0] = (code & 0x7F); chars[1] = '\0';
    } else if (code <= 0x7FF) {
        // one continuation byte
        chars[1] = 0x80 | (code & 0x3F); code = (code >> 6);
        chars[0] = 0xC0 | (code & 0x1F); chars[2] = '\0';
    } else if (code <= 0xFFFF) {
        // two continuation bytes
        chars[2] = 0x80 | (code & 0x3F); code = (code >> 6);
        chars[1] = 0x80 | (code & 0x3F); code = (code >> 6);
        chars[0] = 0xE0 | (code & 0xF); chars[3] = '\0';
    } else if (code <= 0x10FFFF) {
        // three continuation bytes
        chars[3] = 0x80 | (code & 0x3F); code = (code >> 6);
        chars[2] = 0x80 | (code & 0x3F); code = (code >> 6);
        chars[1] = 0x80 | (code & 0x3F); code = (code >> 6);
        chars[0] = 0xF0 | (code & 0x7); chars[4] = '\0';
    } else {
        // unicode replacement character U+FFFD (UTF-8: EF BF BD)
        chars[0] = 0xEF; chars[1] = 0xBF; chars[2] = 0xBD;
        chars[3] = '\0';
    }
}

int main ()
{
  typedef std::codecvt<wchar_t,char,std::mbstate_t> facet_type;

  std::locale mylocale;

  const facet_type& myfacet = std::use_facet<facet_type>(mylocale);

  char mystr[5];
  GetUnicodeChar(225, mystr);

  // prepare objects to be filled by codecvt::in :
  wchar_t pwstr[sizeof(mystr)];              // the destination buffer (might be too short)
  std::mbstate_t mystate = std::mbstate_t(); // the shift state object
  const char* pc;                            // from_next
  wchar_t* pwc;                              // to_next

  // translate characters:
  facet_type::result myresult = myfacet.in (mystate,
      mystr, mystr+sizeof(mystr), pc,
      pwstr, pwstr+sizeof(mystr), pwc);

  if ( myresult == facet_type::ok )
  {
    std::wcout << L"Translation successful: ";
    std::wcout << pwstr << std::endl;
  }
  return 0;
}

The problem is that when I pass the number 225 (the decimal representation of the Unicode character á) to the GetUnicodeChar function, the output on OS X is correct: it shows the letter á. On Windows, however, it shows the characters ├í. I thought Windows used UTF-16 internally, which is why I expected this to work, but it doesn't.
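(A quick sanity check, assuming the GetUnicodeChar above is in scope: dump the bytes it produces in hex. For 225 it prints C3 A1, the UTF-8 encoding of á, which a console using an OEM code page such as 437/850 displays as ├í.)

#include <cstdio>   // std::printf

// Assumes GetUnicodeChar from the code above is in scope.
int main()
{
    char mystr[5];
    GetUnicodeChar(225, mystr);
    // Prints "C3 A1" for á; an OEM code page such as 437/850
    // renders those two bytes as ├í.
    for (int i = 0; mystr[i] != '\0'; ++i)
        std::printf("%02X ", static_cast<unsigned char>(mystr[i]));
    std::printf("\n");
    return 0;
}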

1 answer:

Answer 0 (score: 0)

You need to set the _O_U16TEXT mode first:

_setmode(_fileno(stdout), _O_U16TEXT);
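Here is a minimal sketch, assuming MSVC on Windows, of how that call fits into a complete program that prints á as a wide-character literal:

#include <io.h>      // _setmode
#include <fcntl.h>   // _O_U16TEXT
#include <stdio.h>   // _fileno
#include <iostream>  // std::wcout

int main()
{
    // Put stdout into UTF-16 text mode so wide-character output
    // reaches the console intact.
    _setmode(_fileno(stdout), _O_U16TEXT);

    // L"\u00E1" is the wide-character form of á (code point 225).
    std::wcout << L"\u00E1" << std::endl;
    return 0;
}

Note that once the stream is in _O_U16TEXT mode, mixing in narrow-character output such as printf or std::cout is not permitted and will typically trigger a CRT assertion.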

For more information, see Michael Kaplan's old blog post: http://www.siao2.com/2008/03/18/8306597.aspx