I have code that reads and writes binary files using fstream with the binary flag set, using the unformatted I/O functions rather than the formatted ones. The reads and writes work correctly on every system I have ever used (the bytes in the file are exactly what I expect), but those systems are basically all U.S. English. I am worried about the possibility of these bytes being modified by a codecvt on other systems.

It seems that unformatted I/O goes through the stream buffer's sputc/sgetc, which in turn end up in the virtual overflow and underflow functions of the streambuf, and it looks like those go through some codecvt (e.g., see 27.8.1.4.3 in the C++ standard). The construction of this codecvt for basic_filebuf is specified in 27.8.1.1.5, and from that it seems the result depends on whatever locale basic_filebuf::getloc() returns by default.

So, my question is: can I assume that a character array written with ofstream::write on one system will be read back verbatim with ifstream::read on any other system, even if the locale configuration on that system is different? And if that is not guaranteed, i.e., if the default locale could mangle the content depending on some system configuration (I don't know, Arabic or whatever), what is the best way to write binary files using C++?

The answer I got: this should be fine on Windows, though line-ending (newline translation) behaviour is something to watch for on other OSes too. The default C/C++ locale is "C", which does not depend on the system locale. Still, nothing is formally guaranteed; C/C++ compilers and their target machines differ widely, and if you rely on all of those assumptions you are asking for trouble. The overhead of imbuing the locale explicitly is negligible, unless you try to do it hundreds of times per second.
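For concreteness, here is a minimal sketch of the pattern the answer points at; the file name and payload bytes are made up for illustration. The streams are opened in binary mode, imbued with std::locale::classic() (the "C" locale) before the file is opened so that no system-configured codecvt can intervene, and the unformatted write()/read() calls are used so the bytes land in the file verbatim.

```cpp
#include <algorithm>
#include <fstream>
#include <iostream>
#include <locale>
#include <vector>

int main() {
    // Hypothetical payload and file name, chosen only for illustration; the
    // payload deliberately contains CR, LF and NUL bytes that text-mode
    // newline translation or an unexpected codecvt might otherwise mangle.
    const char payload[] = "binary\r\n\x01 payload";
    const char* path = "data.bin";

    // Write: open in binary mode and imbue the classic "C" locale before
    // opening, so no system-configured codecvt or newline translation can
    // touch the bytes; then use the unformatted write().
    {
        std::ofstream out;
        out.imbue(std::locale::classic());
        out.open(path, std::ios::binary);
        out.write(payload, sizeof payload);   // sizeof includes the trailing '\0'
    }

    // Read the same number of bytes back with the unformatted read().
    std::vector<char> back(sizeof payload);
    {
        std::ifstream in;
        in.imbue(std::locale::classic());
        in.open(path, std::ios::binary);
        in.read(back.data(), back.size());
    }

    std::cout << (std::equal(back.begin(), back.end(), payload)
                      ? "round-trip OK\n"
                      : "bytes changed!\n");
}
```

Imbuing the classic locale once per stream costs essentially nothing, which matches the point above about the negligible overhead of setting the locale explicitly.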
Sunday, 15 February 2015
Writing binary files using C++: does the default locale matter?