Talk:Character encoding

Archived discussion


Drop 'Popular character encodings' section?


With the link to Category:Character sets added, I'm wondering whether Popular character encodings should be dropped (or shortened to the really popular ones)? So far I haven't been bold enough to go ahead. Comments? Pjacobi 12:17, 10 Jul 2004 (UTC)

I'd say no. Categories have their uses, but the only way they can order their contents is alphabetically. I'd prefer not to abolish existing collections such as the one in this article in favour of categories. -- pne 10:55, 12 Jul 2004 (UTC)

"Code page" versus "Codepage"


Codepage redirects to Character encoding, but Code page gives the page on vendor-specific code pages. Am I the only one puzzled by this? Pjacobi 12:20, 10 Jul 2004 (UTC)

Somebody appears to have fixed this now: Codepage redirects to Code page. -- pne 10:57, 12 Jul 2004 (UTC)


Article and category title ambiguity


What's the difference between a "character set", a "character encoding", and a "text encoding"? All of them deal with assigning a unique integer to each character. I suspect the difference is so subtle that we might as well merge Category:Text encodings and Category:Character sets into one category. OK? --DavidCary 15:53, 18 Jun 2005 (UTC)

Keeping text encoding as a separate article seems of dubious value to me.
For a very detailed discussion of the terminology see http://www.unicode.org/reports/tr17/
Pjacobi 21:57, 2005 Jun 18 (UTC)
I've gone ahead and merged the text encoding article's intro paragraph with this article's intro paragraph, and replaced the text encoding article with a redirect to this one. I have also nominated Category:Text encodings for deletion.
I've gone through most of the character encoding articles and have a new scheme for their categorization in mind. Category:Character sets can stay, but it will be a subcategory of a new overarching Category:Character encoding. — mjb 28 June 2005 04:32 (UTC)
Thanks Pjacobi. http://www.unicode.org/reports/tr17/ is an excellent resource. Everyone editing this article should read it! --Negrulio (talk) 12:06, 1 November 2008 (UTC)[reply]

I suggest separating character encodings and character sets into two articles. Character sets need not be used in computers. As you may know, Chinese and Japanese are written with characters (not words). A group of characters forms a character set, for example the character sets used in primary and secondary school education. (See Kyōiku kanji, Jōyō kanji, Jinmeiyo kanji for Japanese usage, and 現代漢語常用字表, 現代漢語通用字表, 常用國字標準字體表, 次常用國字標準字體表, 常用字字形表 for Chinese usage.)

Besides, character sets are not the same as character encodings, because one character set can be represented by several character encodings: for example, Latin letters can be encoded in ASCII or in EBCDIC, and JIS X 0208 (a Japanese kanji set) can be encoded in EUC-JP or in Shift_JIS. --Hello World! 02:57, 27 September 2005 (UTC)[reply]
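
A minimal sketch in Python of that distinction (the byte values in the comments are what Python's bundled codecs produce; cp500 is one of several EBCDIC code pages):

    # One repertoire, several encodings: the same abstract characters map to
    # different byte sequences depending on the encoding chosen.
    text = "A"
    print(text.encode("ascii").hex())       # 41 -- ASCII
    print(text.encode("cp500").hex())       # c1 -- an EBCDIC code page

    kanji = "漢"  # a character from the JIS X 0208 repertoire
    print(kanji.encode("euc_jp").hex())     # b4c1
    print(kanji.encode("shift_jis").hex())  # 8abf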

Unicode and ISO/IEC have standardized terminology for such things. The "character set", as in "a set of characters", that you are talking about is officially termed a character repertoire (for which there is no need for a separate article and thus no need to disambiguate it from character encoding; at most it could be described more clearly in the character encoding article). The term character set is acknowledged only as an overloaded, much-abused legacy term, most often referring to what they now prefer to call either a coded character set (a repertoire of characters mapped to numbers) or a character map (a repertoire of characters mapped to specific byte sequences), or occasionally a character encoding scheme (a map or method for converting a character encoding form (don't ask) to specific byte sequences). For more info, Unicode Technical Report #17 is a good reference.
Also, I asked elsewhere about how the term "character" is used in the study of written languages, as opposed to in computing, and it turns out that it's actually used to describe only certain kinds of graphemes used by certain written languages (a subset of Chinese logograms, IIRC). So your examples of other possible definitions of "character set" are in error. I think it's best to be very careful about preserving the distinction between a grapheme, the type of grapheme that is a 'character' according to scholars of written language, and the arbitrary abstraction that is a character (computing). "Character set", "character encoding" and other terms derived from the latter should be kept within the domain of computing-related articles. — mjb 06:12, 27 September 2005 (UTC)[reply]
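
A minimal Python sketch (standard library only) of the grapheme versus character-(computing) distinction made above: one grapheme, "é", can be either one code point or two.

    import unicodedata

    composed = "\u00e9"                                  # é as a single code point
    decomposed = unicodedata.normalize("NFD", composed)  # e + combining acute accent

    print(len(composed), len(decomposed))  # 1 2 -- same grapheme, different counts
    print(composed == decomposed)          # False at the code-point level
    print(unicodedata.normalize("NFC", decomposed) == composed)  # True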

Which article in English wikipedia talks about character repertoire? --Hello World! 14:47, 3 October 2005 (UTC)[reply]

It should be in this one and in the Unicode article. Clearly, there's work to be done :) — mjb 18:07, 3 October 2005 (UTC)[reply]

Thinking of a rearrangement


This article seems to be written as if the primary meaning of character encoding were "coded character set", whereas the term seems to be far more often used to mean the complete process of encoding characters into a stream of code units. Plugwash 12:15, 16 January 2006 (UTC)[reply]
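
A small Python illustration of the code-unit sense, using a character outside the Basic Multilingual Plane:

    s = "\U0001d11e"               # MUSICAL SYMBOL G CLEF, code point U+1D11E
    units = s.encode("utf-16-be")  # encode into a stream of 16-bit code units
    print(len(s))                  # 1 -- one code point
    print(units.hex())             # d834dd1e -- two code units (a surrogate pair)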

Braille 'the world's first binary character encoding'?


There is a discussion going on at Talk:Braille on the history of (binary) character encodings. This article is a far better place for the history (and the associated discussion). I started a section on history; it should be expanded. -- Petri Krohn 00:26, 22 April 2006 (UTC)[reply]

Are I Ching, geomantic figures and Braille "character encodings"?


According to the definition given in this article, I Ching and geomantic figures aren't character encodings. They don't represent a sequence of characters, but symbolize crucial philosophical concepts; and their purpose is not to facilitate computer storage or telecommunication, but divination. By the latter argument, Braille isn't a character encoding either (it's just a plain code). Therefore, I intend to remove the new history section. ― j. 'mach' wust | 19:47, 22 April 2006 (UTC)[reply]

Usage


Where's information about the usage of Unicode in Wiki articles?

Simon de Danser 14:08, 13 January 2007 (UTC)[reply]

ISO-8859-16


I just added ISO-8859-16 to the list of ISO character sets but it was reverted by the anti-vandal bot. How stupid... Maybe someone will know why it was classified as vandalism and how to add it. 01:04, 2 April 2007 (UTC)

Byte order mark


Some mention should be made of the BOM (byte order mark) in files with various encodings. The BOM is described here. SharkD (talk) 02:07, 25 January 2008 (UTC)[reply]
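
For reference, a small Python sketch of the common BOMs (the constants come from the standard codecs module):

    import codecs

    print(codecs.BOM_UTF8.hex())      # efbbbf
    print(codecs.BOM_UTF16_LE.hex())  # fffe
    print(codecs.BOM_UTF16_BE.hex())  # feff

    # The "utf-8-sig" codec adds and strips the UTF-8 BOM automatically:
    data = "hi".encode("utf-8-sig")
    print(data.hex())                 # efbbbf6869
    print(data.decode("utf-8-sig"))   # hi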

History


I'd be interested in seeing a proper history of character encodings. What came before ASCII? What was the first character encoding used on a computer?

-- TimNelson (talk) 07:18, 10 May 2008 (UTC)[reply]

Several important computers before (or even slightly after) the introduction of 7-bit ASCII (as opposed to 8-bit extended ASCII) used 6-bit codes, e.g. DEC SIXBIT. Some of these 6-bit codes were influenced by, or indeed influenced, ASCII. The influential Signetics 2513 character generator (which had a perfectly good stub before deletionist zealots got to it) often came pre-programmed with ASCII sticks 2 through 5, a configuration shown in its (still-googleable) datasheet, though if you ordered in quantity, Signetics would ship your order programmed with anything you wanted. Earlier codes, like the 5-bit Baudot code and similar, were also used with computers, but really they preceded them, having been designed originally for the telegraph wire and teletypewriters.
To add my own two cents here, IMNSHO the entire History section as it presently exists could do with a complete rewrite, not just because it lacks some historically relevant information, but chiefly because it's no longer chronological. Reading that section, I get that "designed by committee" feeling. First it tells you about some early encodings, then about ASCII and Unicode, then it jumps back in time and talks about encodings from Baudot code to ASCII, then it jumps back to Hollerith encodings... it's a mess. While it's not always necessary to be strictly chronological, you would expect something of a chronological arrangement in a history section. The litmus test is: will someone not already familiar with the information in that section find its account inscrutable? Sadly, I suspect the answer here is going to be yes, yes they will. —ReadOnlyAccount (talk) 22:27, 11 December 2023 (UTC)[reply]

Encoding in programming languages?


If I understand the article correctly (bravo for the clear explanation of meaningful distinctions in the "Unicode..." section), a programming language adds a level of encoding on top of the whole mess: how characters are internally encoded, meaning what a character/string object actually is inside. This will indeed affect how they are manipulated and how easily a given operation is performed, both for the language itself and for the programmer, if the interface is not transparent (a dream in Python, even Python 3).

We could call this a PL-specific "character (or text) format" to avoid ambiguity with the previous four concepts: character ~ grapheme, ordinal ("code point" in Unicode), code unit(s) (abstract byte or word values), and (concrete) byte string.

I guess that in Python the default representation is close to, if not exactly, UTF-8. But people have told me one can build Python with an alternative string format, closer to strings of Unicode ordinals; I guess it is in fact similar or identical to UTF-32/UCS-4. I have also read somewhere that common C implementations use a 32-bit representation for (wide) chars.

Note that it's rather complicated, because the texts to be represented (by string data in memory at runtime) come from:

  • literal strings in source code
  • various forms of computations which result in strings
  • user direct input
  • files in local file system, files over all kinds of networks
  • other...

How does a language, or a programmer, guess the original encoding (the concrete scheme)? A real mystery to me...

What about a section on this topic? I searched for info on WP but couldn't find anything. Pointers (to WP or elsewhere)? Well, after some reflection, I guess this would be worth a separate article, so complicated is the topic. But it would certainly be hard to find references to point to, and to keep the content from being so-called "original research", except maybe for some (humanly unreadable) docs and (hardly usable) tools provided by the Unicode technocracy itself.

--86.205.134.247 (talk) 08:01, 31 October 2009 (UTC)[reply]

You would have to read the documentation for the specific language you are interested in. Most older languages (e.g. C) are encoding-agnostic: as far as they are concerned, strings are sequences of bytes, and beyond giving special meaning to a few values and sequences from the ASCII range (which are therefore common to all encodings in wide use today), they don't care what the bytes mean. (C did later gain support for wide-character strings, but I'm not sure if and how the behaviour of wide-character constants is standardised.) Java uses UTF-16 strings, and there is an option on the compiler command line to tell it the encoding of source files. I've never used Python, so I can't comment on the situation there. Plugwash (talk) 01:46, 3 November 2009 (UTC)[reply]
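
A hedged Python 3 sketch of both points: strings are sequences of code points internally, and the encoding of external bytes must be known or guessed, since it is not recoverable from the bytes alone (the sample bytes here are contrived for illustration):

    s = "é"
    print(len(s), hex(ord(s)))   # 1 0xe9 -- strings are code points, not bytes

    raw = s.encode("latin-1")    # pretend these bytes arrived from a file
    for candidate in ("utf-8", "latin-1", "cp1252"):
        try:
            print(candidate, "->", raw.decode(candidate))
        except UnicodeDecodeError:
            print(candidate, "-> failed")
    # utf-8 fails; latin-1 and cp1252 both "succeed", which is why guessing
    # an unknown encoding from bytes alone is inherently unreliable.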

Character encoding and text encoding


The redirection of "text encoding" to this article is misleading and, from the point of view of a specialist in digital humanities, just wrong. "Character encoding" refers to the encoding of single characters as part of some character stream. "Text encoding", on the other hand, refers to encoding schemas which allow one to mark up specific features of a text, like structural divisions, layout information, linguistic analysis, etc. -> markup language. It would probably be best to redirect text encoding to markup language as long as there is no dedicated article on this topic. —Preceding unsigned comment added by 80.128.52.133 (talk) 10:49, 3 July 2010 (UTC)[reply]

Other names?


Are "codeset", "code set", "character coding" possible synonyms? —Preceding unsigned comment added by 193.144.63.50 (talk) 14:57, 2 December 2010 (UTC)[reply]

Recent rewrite of intro


CecilWard's recent rewrite of the lead redefined codes as being the same thing as "numbers". That's not correct. The pre-UCS/Unicode encodings generally mapped characters to bit or byte sequences. Morse Code, for example, maps them directly to electrical pulses. There are no numbers involved whatsoever. The old lead was very stable and, I believe, correct. If it is too technical or confusing to a newbie, we can work on that, but for now I'm reverting the change and am inviting discussion: what's confusing about its current form? —mjb (talk) 01:21, 18 August 2011 (UTC)[reply]

There is a link to code, and it is the right thing. I see no need to change anything here; it would be better to refine the code article to make a better overview of the various ways of encoding. Incnis Mrsi (talk) 17:06, 19 August 2011 (UTC)[reply]

Proposed merge from Special characters

The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
The result of this discussion was not to merge the articles. — Dsimic (talk | contribs) 22:36, 23 April 2014 (UTC)[reply]

There's a pair of tags proposing merger of the article Special characters to this article. This would be a place to discuss this merge.--Wtshymanski (talk) 21:30, 1 February 2012 (UTC)[reply]

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Merge of Code page

The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
The result of this discussion was not to merge the articles. — Dsimic (talk | contribs) 22:36, 23 April 2014 (UTC)[reply]

  • oppose No reason given for the merge. No reason for a merge. Two big-scope articles, plenty in each. Code pages are also an obvious sub-topic within the broader topic of encoding. Why on earth would we do this? What do we gain? Andy Dingley (talk) 17:13, 27 February 2014 (UTC)[reply]
  • oppose "Code page" is not synonymous with "character encoding" as the lede to the Code page article originally stated. Code pages are an aspect of character encoding, but are not equivalent to the broader concept of character encoding, and code pages should be discussed in a separate article. BabelStone (talk) 19:15, 27 February 2014 (UTC)[reply]
  • Oppose: Just as already noted above; also, code pages are simply "excerpts" from complete character encodings, and they were created as workarounds before Unicode was available, etc. Thus, merging these two articles wouldn't make much sense. — Dsimic (talk | contribs) 19:29, 27 February 2014 (UTC)[reply]
  • oppose The two articles are related, but I could not find any overlap in their contents. This article is about character encoding in general. The code page article speaks about code page numbers, their origin, and their relation to the encodings. In my humble opinion, they are perfectly separated, as they should be. If the code page article were merged into the encoding article, there would not be any redundancies to remove; the whole thing would make a huge subsection of the encoding article - a perfect candidate to be extracted into a separate one :-) The only thing that comes to my mind is to add a subsection with a few sentences about code pages and a reference to the "main article". --Chiccodoro (talk) 08:45, 23 April 2014 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Unclear section "Character sets, maps and code pages"


This sentence defines "code page" in terms of many code pages.

  • A "code page" usually means a byte-oriented encoding, but with regard to some suite of encodings (covering different scripts), where many characters share the same codes in most or all those code pages.

It may be just me who doesn't understand, but I cannot make sense of it. It says "those code pages". Which code pages are those? Does a "code page" refer to many code pages?

In general, this paragraph should contain clear definitions that use already-defined terms to define new terms. It should avoid recursive definitions unless necessary. — Preceding unsigned comment added by 174.89.96.228 (talk) 15:08, 1 September 2015 (UTC)[reply]
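
For what it's worth, the intended meaning can be demonstrated in Python with two related DOS code pages, which agree on the ASCII range but diverge above it:

    for byte in (b"\x41", b"\x9b"):
        print(byte.hex(),
              "cp437:", byte.decode("cp437"),
              "cp850:", byte.decode("cp850"))
    # 0x41 is "A" in both code pages; 0x9b is "¢" in cp437 but "ø" in cp850.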

Giving away paper copies of 7 ECMA standard character sets


I have paper copies of the following ECMA standards. They are 8 1/2 x 11 with blue covers. They all deal with character sets, which I was working on at the time. (See my contribution to the tilde article, which I'm very proud of.) The biggest is 24 pages. I want to get rid of them (they were very hard to get from ECMA in the 80s, which, from the little corner I saw, did not seem to be a well-run enterprise). Does anybody want them, or have a suggestion for a library/museum that I could send them to? It's all available online now, although these specific editions might not be, in case anyone else is interested in character set history.

  • 6, 5th edition, March 1985
  • 43, 2nd edition, December 1985
  • 113, 2nd edition, July 1988
  • 114, June 1986
  • 118, December 1986
  • 121, July 1987
  • 128, July 1988

(See List of Ecma standards for more details.)


I will pay for postage if the destination is in the U.S. deisenbe (talk) 18:22, 7 June 2016 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just modified 2 external links on Character encoding. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 17:03, 19 November 2016 (UTC)[reply]

Character encoding should really be called Character coding


Encoding is the process of converting abstract characters (integers/code points) into a sequence of bytes. Decoding is the process of converting a sequence of bytes into abstract characters (integers/code points).

So encoding is really a misnomer, because both directions matter and are defined by a so-called "character encoding", not just one direction. It should really be called "character coding" to be logical.

This is similar to how video compression algorithms are called codecs ("codec" being a portmanteau of coder-decoder). — Preceding unsigned comment added by 88.219.19.186 (talk) 05:58, 10 February 2019 (UTC)[reply]
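
The point is easy to demonstrate in Python, where a single codec object supplies both directions (a minimal sketch using the standard codecs module):

    import codecs

    info = codecs.lookup("utf-8")      # one "character encoding"...
    encoded, _ = info.encode("naïve")  # ...defines the encoder...
    decoded, _ = info.decode(encoded)  # ...and the decoder.
    print(encoded.hex())               # 6e61c3af7665
    print(decoded == "naïve")          # True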

WP:COMMONNAME. Andy Dingley (talk) 10:19, 10 February 2019 (UTC)[reply]
Concur with Andy Dingley. Go look at the published literature on Google Books. Character encoding returns 35,000 results and character coding returns only 5,500. And about half of those 5,500 appear to be "coding" of characters in contexts completely unrelated to computers, like literary analysis. Character encoding is by far the established term. --Coolcaesar (talk) 15:04, 10 February 2019 (UTC)[reply]

"compromise solution that was eventually found and developed into Unicode"?


The last paragraph of the History section says:

The compromise solution that was eventually found and developed into Unicode was to break the assumption (dating back to telegraph codes) that each character should always directly correspond to a particular sequence of bits. Instead, characters would first be mapped to a universal intermediate representation in the form of abstract numbers called code points. Code points would then be represented in a variety of ways and with various default numbers of bits per character (code units) depending on context. To encode code points higher than the length of the code unit, such as above 256 for eight-bit units, the solution was to implement variable-length encodings where an escape sequence would signal that subsequent bits should be parsed as a higher code point.

If the "variety of ways" in which code points are represented are the Unicode Transformation Formats, was there any mention of transformation formats in Unicode version 1.0? ISO 10646-1:1993 has the notion of code planes, with a UCS-4 format for the entire character set, a UCS-2 format for the Basic Multilingual Plane, and a UTF-1 format that "does not use octet values specified in IS0 2022 as coded representations of CO, SPACE, DEL, or Cl characters, and can thus be used for transmitting text data through communication systems that are sensitive to these octet values."

So did that compromise solution appear in ISO 10646-1:1993 before it appeared in Unicode 1.1, which has FSS-UTF in Appendix F? Or does "eventually found and developed into Unicode" mean "eventually incorporated into Unicode" (as in "wasn't initially in Unicode but appeared later") rather than "eventually became Unicode" (as in "was a part of Unicode from the beginning")? Guy Harris (talk) 07:40, 14 April 2023 (UTC)[reply]
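
As a present-day illustration of that "escape sequence" idea, a short Python sketch: in UTF-8 the lead byte signals how many continuation bytes follow, so code points beyond the single-byte range grow to two, three, or four bytes.

    for ch in "Aé€𐍈":
        encoded = ch.encode("utf-8")
        print(f"U+{ord(ch):05X} -> {encoded.hex()} ({len(encoded)} byte(s))")
    # U+00041 -> 41        (1 byte)
    # U+000E9 -> c3a9      (2 bytes)
    # U+020AC -> e282ac    (3 bytes)
    # U+10348 -> f0908d88  (4 bytes)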

Note also that Joe Becker's 1988 Unicode proposal is a pure 16-bit proposal and cites that as an advantage. He argues in section 2.3 "Twofold expansion of ASCII English text" that the additional storage requirement would not be an issue, as small text files don't take enough space that the expansion would matter ("Nearly all text-system clients create and store quantities of English text small enough that it is not worth their while to use the currently available techniques for compressing ASCII English text by a factor of 2 or more.") and that systems storing large amounts of ASCII English text already use compression and compressing Unicode would be as effective as compressing ASCII, so he appeared not to be sympathetic to the issue that "for the users of the relatively small character set of the Latin alphabet (who still constituted the majority of computer users), those additional bits were a colossal waste of then-scarce and expensive computing resources (as they would always be zeroed out for such users)."
The UTFs with 8-bit code units appear to have been motivated by other concerns, mostly dealing with legacy systems. UTF-1 was intended to be "used to transmit text data through communication systems which are sensitive to octet values for control characters coded according to the structure of ISO 2022", so the intent appears to have been to make it a format that can be used with ISO 2022 (Appendix G of ISO 10646-1:1993 provides a way to designate UTF-1 in ISO 2022). FSS-UTF was intended to be "compatible with historical file systems" (they note that those file systems "disallow the null byte and the ASCII slash character as part of the file name", so "historical file systems" means "UN*X file systems") and "compatible with existing programs", which sounds as if the intent was to use it as a way to store Unicode strings on Unix-like operating systems. Guy Harris (talk) 09:38, 14 April 2023 (UTC)[reply]