How to print UTF-8 encoded text to the console in Python < 3?

I'm running a recent Linux system where all my locales are UTF-8:

LANG=de_DE.UTF-8
LANGUAGE=
LC_CTYPE="de_DE.UTF-8"
LC_NUMERIC="de_DE.UTF-8"
LC_TIME="de_DE.UTF-8"
...
LC_IDENTIFICATION="de_DE.UTF-8"
LC_ALL=

Now I want to write UTF-8 encoded content to the console.

Right now Python uses UTF-8 for the FS encoding but sticks to ASCII for the default encoding :-(

>>> import sys
>>> sys.getdefaultencoding()
'ascii'
>>> sys.getfilesystemencoding()
'UTF-8'

I thought the best (clean) way to do this was setting the PYTHONIOENCODING environment variable. But it seems that Python ignores it. At least on my system I keep getting ascii as default encoding, even after setting the envvar.

# tried this in ~/.bashrc and ~/.profile (also sourced them)
# and on the commandline before running python
export PYTHONIOENCODING=UTF-8

If I do the following at the start of a script, it works though:

>>> import sys
>>> reload(sys)  # to enable `setdefaultencoding` again
<module 'sys' (built-in)>
>>> sys.setdefaultencoding("UTF-8")
>>> sys.getdefaultencoding()
'UTF-8'

But that approach seems unclean. So, what's a good way to accomplish this?

Workaround

Instead of changing the default encoding - which is not a good idea (see mesilliac's answer) - I just wrap sys.stdout in a StreamWriter like this:

import codecs
import locale

sys.stdout = codecs.getwriter(locale.getpreferredencoding())(sys.stdout)

See this gist for a small utility function that handles it.
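To see what the wrapper actually does, here is a small self-contained sketch. It uses a fixed 'utf-8' codec and an in-memory byte stream instead of the real sys.stdout, so the effect is easy to inspect; the names buf and writer are illustrative only:

```python
import codecs
import io

# Stand-in for a byte-oriented stdout, so we can inspect what gets written
buf = io.BytesIO()

# codecs.getwriter() returns a StreamWriter class for the codec;
# instantiating it wraps the byte stream and encodes on every write
writer = codecs.getwriter("utf-8")(buf)
writer.write(u"price: \N{EURO SIGN}")

# buf now holds the UTF-8 byte sequence for the euro sign (e2 82 ac)
```

In the real workaround, buf is sys.stdout, so every Unicode string you print is encoded on the way out.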

print u"some unicode text \N{EURO SIGN}"
print b"some utf-8 encoded bytestring \xe2\x82\xac".decode('utf-8')

i.e., if you have a Unicode string then print it directly. If you have a bytestring then convert it to Unicode first.

Your locale settings (LANG, LC_CTYPE) indicate a utf-8 locale, and therefore (in theory) you could print a utf-8 bytestring directly and it should be displayed correctly in your terminal (if the terminal settings are consistent with the locale settings, as they should be). But you should avoid it: do not hardcode the character encoding of your environment inside your script; print Unicode directly instead.

There are many wrong assumptions in your question.

With your locale settings, you do not need to set PYTHONIOENCODING to print Unicode to the terminal: a utf-8 locale supports all Unicode characters, i.e., it works as is.

You do not need the workaround sys.stdout = codecs.getwriter(locale.getpreferredencoding())(sys.stdout). It may break if some code (that you do not control) does need to print bytes, and/or it may break while printing Unicode to the Windows console (wrong codepage, can't print undecodable characters). Correct locale settings and/or the PYTHONIOENCODING envvar are enough. Also, if you need to replace sys.stdout, then use io.TextIOWrapper() instead of the codecs module, like the win-unicode-console package does.
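A minimal sketch of the io.TextIOWrapper approach this answer alludes to, run against an in-memory byte stream for illustration (in real code you would wrap sys.stdout.buffer on Python 3, or sys.stdout itself on Python 2):

```python
import io

raw = io.BytesIO()  # stands in for the underlying byte stream (sys.stdout.buffer)
wrapped = io.TextIOWrapper(raw, encoding="utf-8", errors="replace",
                           line_buffering=True)

wrapped.write(u"caf\N{LATIN SMALL LETTER E WITH ACUTE}\n")
# line_buffering=True flushes on newline, so raw already holds the encoded bytes
```

Unlike the codecs.StreamWriter wrapper, a TextIOWrapper behaves like a normal Python 3 text stream (it has .buffer-style semantics, proper error handling, and line buffering), which is why it is the recommended replacement.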

sys.getdefaultencoding() is unrelated to your locale settings and to PYTHONIOENCODING. Your assumption that setting PYTHONIOENCODING should change sys.getdefaultencoding() is incorrect. You should check sys.stdout.encoding instead.

sys.getdefaultencoding() is not used when you print to the console. It may be used as a fallback on Python 2 if stdout is redirected to a file/pipe unless PYTHONIOENCODING is set:

$ python2 -c'import sys; print(sys.stdout.encoding)'
UTF-8
$ python2 -c'import sys; print(sys.stdout.encoding)' | cat
None
$ PYTHONIOENCODING=utf8 python2 -c'import sys; print(sys.stdout.encoding)' | cat
utf8

Do not call sys.setdefaultencoding("UTF-8"); it may corrupt your data silently and/or break 3rd-party modules that do not expect it. Remember sys.getdefaultencoding() is used to convert bytestrings (str) to/from unicode in Python 2 implicitly e.g., "a" + u"b". See also, the quote in @mesilliac's answer.
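A quick illustration of why the implicit conversion is dangerous. The Python 2 behaviour is described in the comments; the explicit decode shown is the safe pattern and works on both versions:

```python
# In Python 2, b"caf\xc3\xa9" + u"!" would implicitly decode the bytes
# using sys.getdefaultencoding() (ASCII) and raise UnicodeDecodeError,
# because \xc3 and \xa9 are not ASCII.
# The safe alternative is to decode explicitly:
data = b"caf\xc3\xa9"        # UTF-8 encoded bytes
text = data.decode("utf-8")  # explicit decode instead of relying on the default
combined = u"menu: " + text + u"!"
```

This is why changing the default encoding merely masks bugs: the fix is to make every bytes-to-text conversion explicit.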


It seems accomplishing this is not recommended.

Fedora suggested using the system locale as the default, but apparently this breaks other things.

Here's a quote from the mailing-list discussion:

The only supported default encodings in Python are:

 Python 2.x: ASCII
 Python 3.x: UTF-8

If you change these, you are on your own and strange things will
start to happen. The default encoding does not only affect
the translation between Python and the outside world, but also
all internal conversions between 8-bit strings and Unicode.

Hacks like what's happening in the pango module (setting the
default encoding to 'utf-8' by reloading the site module in
order to get the sys.setdefaultencoding() API back) are just
downright wrong and will cause serious problems since Unicode
objects cache their default encoded representation.

Please don't enable the use of a locale based default encoding.

If all you want to achieve is getting the encodings of
stdout and stdin correctly setup for pipes, you should
instead change the .encoding attribute of those (only).

-- 
Marc-Andre Lemburg
eGenix.com
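On Python 3.7+, the closest supported equivalent of the quote's advice (adjust the stream's encoding, not the default encoding) is TextIOWrapper.reconfigure(), e.g. sys.stdout.reconfigure(encoding="utf-8"). A sketch against an in-memory stream, so the effect is visible:

```python
import io

buf = io.BytesIO()
stream = io.TextIOWrapper(buf, encoding="ascii")

# Before anything has been written, the encoding can be changed in place;
# sys.stdout.reconfigure(encoding="utf-8") works the same way on 3.7+
stream.reconfigure(encoding="utf-8")

stream.write(u"\N{EURO SIGN}")
stream.flush()
# buf now contains the UTF-8 bytes for the euro sign
```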


This is how I do it:

#!/usr/bin/python2.7 -S

import sys
sys.setdefaultencoding("utf-8")
import site

Note the -S in the bangline. It tells Python not to automatically import the site module. The site module is what sets the default encoding and then removes the setdefaultencoding method so it can't be called again - but it will honor what is already set.


If the program does not display the appropriate characters on the screen (e.g., it shows an invalid-symbol placeholder instead), run the program with the following command line:

PYTHONIOENCODING=utf8 python3 yourprogram.py

Or the following, if your program is a globally installed module:

PYTHONIOENCODING=utf8 yourprogram

On some platforms, such as Cygwin (mintty.exe terminal) with Anaconda Python (or Python 3), simply running export PYTHONIOENCODING=utf8 and then running the program does not work; you are required to prefix every invocation with PYTHONIOENCODING=utf8 yourprogram to run the program correctly.

On Linux, in case of sudo, you can try passing the -E flag to export the user's environment variables to the sudo process:

export PYTHONIOENCODING=utf8
sudo -E python yourprogram.py

If you try this and it does not work, you will need to enter a sudo shell:

sudo /bin/bash
PYTHONIOENCODING=utf8 yourprogram

Related:

  1. How to print UTF-8 encoded text to the console in Python < 3?
  2. Changing default encoding of Python?
  3. Forcing UTF-8 over cp1252 (Python3)
  4. Permanently set Python path for Anaconda within Cygwin
  5. https://superuser.com/questions/1374339/what-does-the-e-in-sudo-e-do
  6. Why bash -c 'var=5 printf "$var"' does not print 5?
  7. https://unix.stackexchange.com/questions/296838/whats-the-difference-between-eval-and-exec


TestText = "Test - āĀēĒčČ..šŠūŪžŽ"   # not UTF-8: this is a Unicode string in Python 3.x
TestText2 = TestText.encode('utf8')  # this is a UTF-8-encoded byte string

To send UTF-8 to stdout regardless of the console's encoding, use its buffer interface, which accepts bytes:

import sys
sys.stdout.buffer.write(TestText2)



Comments
  • Perhaps this will work: #!/usr/bin/env python followed by # -*- coding: utf-8 -*-
  • And remember to put it at the very head of the source file.
  • That only affects how Python interprets literal strings in the source code. The IO encoding will still be ASCII.
  • PYTHONIOENCODING is not ignored; it's just that, as its name suggests, it affects the encoding used for stdin/stdout/stderr, which is not what you're checking with sys.getdefaultencoding().
  • @Brutus: How did you test that it doesn't work? It seems to work for me. python -c 'import sys; print sys.stdout.encoding' gives UTF-8, and PYTHONIOENCODING='C' python -c 'import sys; print sys.stdout.encoding' gives C.
  • Could you expand on this based on the answer that mesilliac gave? Is it still correct?
  • @Arafangion The method I use happens right at the very beginning of Python initialization. No caches have been created yet. I agree that using the reload trick is bad. This is because lots of other things may have already been instantiated or cached the original encoding. Thus I came up with this method which happens early. Note that no other imports are before it. It works for me.
  • While this has worked for me in tests, I decided to avoid it. It's just too unclear whether I may run into any side effects, and it smells kinda fishy ;-) I just wrap sys.stdout in a StreamWriter with the default encoding (which should be UTF-8, at least on modern Linux systems): sys.stdout = codecs.getwriter(locale.getpreferredencoding())(sys.stdout).
  • This is a really bad idea. I've solved two questions in the last few weeks that were resolved by removing sys.setdefaultencoding("utf-8") from the user's code. IMHO, this just masks any underlying issues
  • @AlastairMcCormack I've used this without any problems. As long as you know what's going on it's not a problem. What are the underlying issues that you think it masks?
  • Is utf8 case-sensitive? Also, is the only possible setting utf8 or is utf-8 also valid? It's just because I've been seeing so many variants... (and you're using two of them in your answer! 😉)
  • I think, at least for my Python 3.7.2, the usage of UTF-8 is case-insensitive, and I am not sure whether it ignores the hyphen in UTF-8.
  • that makes sense — I was using Python 2.7.X and I was unsure about what to use...
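Regarding the codec-name question in the comments: Python's codec lookup normalizes case, hyphens, and underscores, so the common spellings all resolve to the same canonical codec. This can be checked directly:

```python
import codecs

# All of these are aliases for the same canonical codec, "utf-8"
for alias in ("utf8", "UTF-8", "utf_8", "Utf-8", "U8"):
    assert codecs.lookup(alias).name == "utf-8"
```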