Talk:Universal asynchronous receiver-transmitter

From Wikipedia, the free encyclopedia

Chips

The IBM PC compatibles used the National Semiconductor 8250, not the Intel serial chip, because the National chips had built-in programmable baud rate generators. The successors were also made by National. This article needs a block diagram of a UART, too. --Wtshymanski 05:58, 13 Apr 2005 (UTC)

I made a diagram of the encoding scheme. I don't know if that's what you meant, though. I'd like it if someone integrated it into the article for me - I fear I would not be able to do it properly. The diagram is here. RT Jones (talk) 00:18, 5 November 2008 (UTC)

RS 232 and so forth

These standards are discussed in their own articles; the UART may interface to the control lines of an RS 232 interface but doesn't directly generate the voltage levels that go out on the wire. --Wtshymanski 17:49, 17 May 2005 (UTC)

Programming tips for PCs

"INT 14" is hopelessly IBM-PC specific and not appropriate for a general article on UARTS. I've moved the following lines here but I don't think they belong in this article at all. --Wtshymanski 18:57, 24 February 2006 (UTC)[reply]

Common registers

In most UART chips, registers are grouped into 3 categories:

  • Data Registers: Receiver Buffer Register (RBR), Transmitter Holding Register (THR), Scratchpad Register (SCR)
  • Control Registers: Bitrate Select Register (DLL/DLM), Line Control Register (LCR), Modem Control Register (MCR), Interrupt Enable Register (IER), Interrupt Identification Register (IIR)
  • Status Registers: Line Status Register (LSR), Modem Status Register (MSR)

Description

  • RBR circuitry in the 8250 chip is programmable to work with 5, 6, 7, or 8 data bits.
  • THR is used to hold parallel data temporarily while it waits to be converted to a sequence of bits.
  • On an 8250 chip, the read/write SCR is not used.
  • LCR is used to control the format of the data character.
  • LSR is usually the first register read by the microprocessor. This register is used to indicate any errors that may occur, as well as the status of the current operation.
  • MCR is used to control the interface with the modem/data set.
  • MSR provides the microprocessor with the status of the modem input lines.
  • Also, in some hardware manuals, DLAB refers to the last bit (bit 7, the most significant bit) of the Line Control Register (LCR); a register-map sketch in C follows below.
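
As a concrete companion to the register list above, here is a minimal register-map sketch in C for a conventional 8250/16450-style UART. The offsets follow standard 8250 documentation; the base address (0x3F8, PC COM1) is only an example.

    /* Offsets from the UART base address (e.g. 0x3F8 for COM1 on a PC).
       With DLAB (bit 7 of LCR) set to 1, offsets 0 and 1 are reused as
       the divisor latch (DLL/DLM) instead of RBR/THR and IER. */
    #define UART_RBR 0    /* Receiver Buffer Register (read)          */
    #define UART_THR 0    /* Transmitter Holding Register (write)     */
    #define UART_IER 1    /* Interrupt Enable Register                */
    #define UART_IIR 2    /* Interrupt Identification Register (read) */
    #define UART_LCR 3    /* Line Control Register                    */
    #define UART_MCR 4    /* Modem Control Register                   */
    #define UART_LSR 5    /* Line Status Register                     */
    #define UART_MSR 6    /* Modem Status Register                    */
    #define UART_SCR 7    /* Scratchpad Register                      */
    #define UART_DLL 0    /* Divisor latch, low byte (when DLAB = 1)  */
    #define UART_DLM 1    /* Divisor latch, high byte (when DLAB = 1) */
    #define LCR_DLAB 0x80 /* Divisor Latch Access Bit (bit 7 of LCR)  */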

Basic steps in serial port programming involving the UART

    The software will need to initialise the serial port first. After successful initialisation, all operations on the serial port are passed to and handled by the UART chipset. The basic steps are summarised below, followed by an illustrative code sketch:

  • Initialise the serial port. To do this, one needs to either call interrupt 14h or write to the port's registers directly via its I/O address. It is also important that the software set the data format (using LCR) and the data transfer rate (using the divisor latch registers DLL/DLM).
  • First-level (or device-level) handshaking. The software and hardware indicate their readiness for the operation. This includes sending the Data Terminal Ready (DTR) and Data Set Ready (DSR) signals.
  • Second-level (or data-level) handshaking. Both parties indicate their readiness for the data transfer process. The software and hardware send the Request To Send (RTS) and Clear To Send (CTS) signals respectively.
  • During the transfer process, both the software and the hardware need to make sure that THR is empty before sending any data and that RBR is ready before reading any data from it.
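
The sketch below illustrates those steps for an 8250-style UART using polled I/O. It assumes x86 port-I/O helpers outb()/inb() (hypothetical here; on Linux they come from <sys/io.h>) and the PC-standard 1.8432 MHz UART input clock, which gives divisor = 115200 / bit rate.

    #include <stdint.h>

    /* Assumed port-I/O helpers; on other platforms these would be
       memory-mapped register accesses instead. */
    extern void outb(uint16_t port, uint8_t val);
    extern uint8_t inb(uint16_t port);

    #define COM1 0x3F8

    void uart_init(uint32_t bps)
    {
        uint16_t divisor = 115200 / bps;  /* PC clock: 1.8432 MHz / 16 */

        outb(COM1 + 3, 0x80);             /* LCR: set DLAB to reach DLL/DLM  */
        outb(COM1 + 0, divisor & 0xFF);   /* DLL: divisor low byte           */
        outb(COM1 + 1, divisor >> 8);     /* DLM: divisor high byte          */
        outb(COM1 + 3, 0x03);             /* LCR: clear DLAB, 8N1 format     */
        outb(COM1 + 4, 0x03);             /* MCR: assert DTR and RTS         */
        outb(COM1 + 1, 0x00);             /* IER: polled mode, no interrupts */
    }

    void uart_putc(uint8_t c)
    {
        while (!(inb(COM1 + 5) & 0x20))   /* LSR bit 5: THR empty?           */
            ;
        outb(COM1 + 0, c);                /* write THR                       */
    }

    uint8_t uart_getc(void)
    {
        while (!(inb(COM1 + 5) & 0x01))   /* LSR bit 0: data ready in RBR?   */
            ;
        return inb(COM1 + 0);             /* read RBR                        */
    }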

Baud vs. bps

    From the article:

    Speeds for UARTs are in bits per second (bit/s or bps), although often incorrectly called the baud rate.

    From the baud article:

    In telecommunications and electronics, baud [...] is a measure of the symbol rate [...] Early modems operated only at one bit per symbol, and so baud rate and bit rate for those devices were equivalent.

    Doesn't the second quote contradict the first quote? UARTs are 1 bit per symbol and their baud should therefore be equal to their bps. The first quote should therefore be changed. (I'm just passing by.) 129.241.129.67 20:40, 15 June 2007 (UTC)

    Yes, for a UART, the bit rate and baud rate are the same. For an analog signal employing modulation, as used with modems on the public switched telephone network, more than one bit is usually encoded into a signal, so the baud rate is usually much lower than the bit rate. --Brouhaha 07:11, 16 June 2007 (UTC)
    RS-232 allows for two levels, so on the cable the baud rate equals bits/second, and likewise at the send/receive pins of the UART. The distinction is important for the signal going out on the telephone line for modems, but that doesn't apply here. Gah4 (talk) 18:56, 12 April 2023 (UTC)

Pronunciation

    Someone gave a pronunciation of OO-wuh AT. I thought this was odd, so I edited it out. Please replace if I'm wrong - there's an English IPA key at {{IPA-en}}. kwami 06:16, 16 October 2007 (UTC)

Swap

    An asynchronous transmission sends nothing over the interconnection when the transmitting device has nothing to send; but a synchronous interface must send "pad" characters to maintain synchronism between the receiver and transmitter. The two modes need to be swapped, don't they? Sevcsik 20:12, 27 October 2007 (UTC)

    I've tweaked the article to now say "An asynchronous transmission sends no characters over the interconnection when the transmitting device has nothing to send".

    Take the case where a USART ran out of things to send, but then 3 normal bit-times later, it was given a "W" to send. All the asynchronous transmitters I've studied send nothing when they have nothing to send -- in some cases "disconnecting" (going hi-Z) from the transmission media, literally sending nothing; in other cases continuously transmitting the "logical 1" stop level until they have something else to send. In this case, the UART would send 3 stop bits (rather than the normal 1 stop bit between characters), then when the "W" character suddenly arrived, it would send the 1 start bit, then start sending the data bits that make up the "W". Those "extra" 2 stop bits do help maintain synchronism, but they are not considered an entire "character".

    I've seen a few synchronous transmission protocols that would simply stop, not transmitting those 3 bit clock pulses, when they had nothing to send, which seems to contradict the article.

    However, all the USARTs I've studied (which are the focus of this article, right?) conform to the article's description. They all continuously transmit bit clock pulses at a constant rate whether or not they have anything to send. If such a synchronous transmitter immediately started sending the "W" 3 bit clocks after the last byte sent, then the receiver would lose the byte alignment. So, in practice -- in synchronous mode -- the USART must immediately start sending the "pad" character after sending the last real data bit; then it delays sending the "W" until all 8 bits of that pad character have been transmitted, and so maintains byte alignment.

    Some higher-level protocols -- especially when communication goes over noisy radio links, rather than hard-wired links -- do send extra "pad" characters to maintain synchronism between the receiver and transmitter, whether the underlying hardware uses synchronous or asynchronous transmission. These pad characters -- even when the underlying hardware uses asynchronous transmission, and so does not need them to maintain byte alignment -- help the higher-level protocol maintain packet alignment (which byte is the start of the packet?), and can also help the underlying hardware maintain bit alignment (is this exactly 7 zeros in a row, or 8 zeros?).

    Does that answer your question? --68.0.124.33 (talk) 17:07, 16 July 2008 (UTC)
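
To make the "W" example concrete, this small C sketch prints the line levels of one 8N1 frame: the line idles at logic 1, a start bit of 0 follows, the eight data bits go out least-significant bit first, and a stop bit of 1 closes the frame. It is illustrative only, not tied to any particular UART.

    #include <stdio.h>

    /* Print the line levels for one asynchronous 8N1 frame. */
    static void frame_8n1(unsigned char c)
    {
        printf("idle(1) start(0)");
        for (int i = 0; i < 8; i++)       /* data bits, LSB first */
            printf(" %d", (c >> i) & 1);
        printf(" stop(1) idle(1)...\n");
    }

    int main(void)
    {
        frame_8n1('W');  /* 'W' = 0x57, so the line sees 0 11101010 1 */
        return 0;
    }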

    My experience with USARTs when they are operating synchronously is that "pad" data is unnecessary. Also, when operating synchronously, they will still include the framing data -- the start, stop, and (optional) parity bits.

    The "pad" characters are unnecessary since, just as when operating asynchronously, the transmission line will be held in the idle state. And, just as when operating asynchronously, a start bit is used to indicate a start send condition.

    A USART operating synchronously operates pretty much like a traditional UART with the exception that the two ends of the transmission share the same transmission clock. One end acts as a master and generates a clock signal which it shares with the slave. The slave then uses this external clock signal for synchronization when sending and receiving. The two ends still need to agree on which frame format will be used but no longer need to agree on a baud rate, since this is determined by the clock signal generated by the master.

    When operating synchronously, the two ends need to agree about which edge of the clock will be used for data sampling and which edge of the clock will be used for changing the data.

    The clock signal that the master generates will continue to run even when it (the master) isn't sending data. This is necessary so that the slave can transmit data to the master. I imagine some implementations may stop the clock if the master doesn't receive data from the slave but I don't know that for certain. --99.179.69.42 (talk) 10:53, 24 February 2010 (UTC)
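
The master/slave clocking described above can be sketched as bit-banged C; set_clk(), set_txd() and half_bit_delay() are hypothetical GPIO/timer helpers, and the edge assignment is just one of the two conventions the posters mention (the two ends must agree on it).

    /* Synchronous master sketch: data changes on the falling clock edge;
       the slave samples on the rising edge. */
    extern void set_clk(int level);
    extern void set_txd(int level);
    extern void half_bit_delay(void);

    void sync_send_byte(unsigned char c)
    {
        for (int i = 0; i < 8; i++) {
            set_clk(0);               /* falling edge: master changes data */
            set_txd((c >> i) & 1);    /* shift out LSB first               */
            half_bit_delay();
            set_clk(1);               /* rising edge: slave samples data   */
            half_bit_delay();
        }
        /* Keep the clock running between bytes so the slave stays
           synchronised and can transmit in the other direction. */
    }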

Questions

    So is a UART similar to a RAMDAC (in graphics cards and TV tuner cards), except that a RAMDAC translates digital to analog whereas a UART translates parallel to serial? --Ramu50 (talk) 17:36, 18 July 2008 (UTC)

    No. At least not in any sense stronger than the way sheep are similar to fish, except that they give milk. --Wtshymanski (talk) 20:54, 18 July 2008 (UTC)

    roflmao....milk Do they come in Human Breast Milk yogurt form......(slap in the face) joking, jokin --Ramu50 (talk) 00:01, 21 July 2008 (UTC)

    Yes. There are several similarities between a UART and a combination of a graphics card RAMDAC and a TV tuner card ADC. Both are integrated circuits. The CPU sends information -- a bitmap image to the buffer RAM on the graphics card, characters to the character buffer inside the UART. The chip then sends that information out at a steady pace, one significant condition at a time. The graphics card also adds a "frame" of HSYNC and VSYNC signals, analogous to the "start" and "stop" framing bits generated by the UART. The signals coming out the UART pins or RAMDAC pins flow into a line driver, and the signals from the line driver in turn typically flow into some kind of D-subminiature connector connected to a cable to the outside world.
    At the receiver, incoming signals go through the connector to an analog amplifier; the amplifier output flows into the UART or TV decoder pins.
    The TV tuner card uses the HSYNC and VSYNC to synchronize with the transmitter and discover the beginning and the end of the image, analogous to the way the receiving UART uses the framing bits to synchronize with the transmitter and discover the beginning and end of the character. Both systems have a small buffer memory to hold the data. With both systems, the CPU typically pulls the data out of that buffer and stores it on a hard drive.
    Both systems often have software handle bigger chunks of data and decide where those chunks begin and end (the beginning and end of a movie for the graphics card / tuner card; the beginning and end of a packet or a file for the UART).
    Other than that, I agree with the previous poster that UART vs. RAMDAC are totally different categories of things. :-). --68.0.124.33 (talk) 02:33, 21 July 2008 (UTC)
    Right. After all, both sheep and fish are found in Scotland. But one must work hard to find the similarities and I doubt they explain the operation of either system well. --Wtshymanski (talk) 01:00, 23 July 2008 (UTC)

must be set for the same ... stop bits for proper operation

    Except that for many, if not most, systems the stop bits don't have to be the same, because there is nothing timing the stop level. It all gets re-synchronised at the next start bit. That's the beauty of asynchronous communication.

    Is there an example of a system that requires the stop bits to be set the same? Change it back if I'm wrong.

    Think about this. If the sending end sends a shorter stop bit than the receiver expects, the receiver will never be ready for a new character unless there are intervals between transmitted characters. The receiver needs at least the stop bit time to recognize the end of a character. I suppose if the sender sends a longer stop bit than the receiver expects, this would work, but the link would be slower than necessary. It is more accurate to say the stop bits must match than to leave them out. --Wtshymanski (talk) 18:09, 8 August 2008 (UTC)
    The systems that I use don't care about the number of stop bits. Think about it: If it cares about the length of the word, it is a synchronous system, not an asynchronous system. UARTs re-synchronise on the start bit, and if they have got a bit out of sync by the end of the previous byte, they don't care.
    I am using physical systems that do not care about the number of stop bits. I know that most current physical devices do not care about the number of stop bits (mechanical TTYs did not have UARTs anyway). http://www.atmel.com/dyn/resources/prod_documents/doc2513.pdf http://ece-www.colorado.edu/~ecen2120/Manual/uart/UART.html
    What I mean is, change it back if you know that there is a specific Windows implementation which requires matched stop bits, and it would confuse people to put in the truth about current normal implementations. Obviously, most modems don't care about the number of stop bits, but they don't count, because they are auto-synchronising anyway. Assuming you use serial comms on your PC, does setting the stop bits wrong matter? It doesn't on mine: I can do laplink with mis-matched stop bits. It doesn't on any current hardware I'm familiar with, but if in practice it would confuse people I'm willing to let it stand. You need to find an actual example though. —Preceding unsigned comment added by 150.101.166.15 (talk) 03:12, 12 August 2008 (UTC)
    It's got nothing to do with Windows, PCs, UARTs or mechanical teletypes - imagine yourself as a device watching a serial line. I've just gotten a start bit, I count bit times and now I have my eight (or seven or five or whatever) data bits, I'm expecting two stop bits...suddenly I get another edge. Chaos! Is this a new start bit? What do I do with the old character - it never completed its frame, I must raise an overrun error. And, at the very least, sending two stop bits on every character when the receiver needs only one is inefficient. If mechanical TTYs didn't care about two stop bits, then why did they send two stop bits? I'd also be wary of anything PC-hosted communications software tells you about how characters are set up - I'd like to see it on a scope, because I think some software lies about what it is doing. I'm restoring it, oh anonymous co-editor. Have you considered logging in? It makes this sort of collaborative discussion much easier. Also then we don't get those annoying bot-generated signatures. --Wtshymanski (talk) 13:37, 13 August 2008 (UTC)
    No examples, no references, no indication that you've read either of the references I provided, but an admirable belief in the righteousness of your cause. What should I try next? You are painting me into a corner here. —Preceding unsigned comment added by 150.101.166.15 (talk) 08:41, 25 August 2008 (UTC)
    You must have at least one stop bit to distinguish between the end of this character and the start of a new one. The only reason that more than one stop bit was implemented was to allow the receiver time to process the just-received character and prepare for the next one. This would largely be for historical reasons when chips processed data quite slowly compared to today. Bear in mind that the indeterminate amount of time between characters (since this is an asynchronous system) effectively means that there could be hundreds of "stop bits" before the next character, so there could not be any major reason not to send more stop bits than the receiver is expecting, excepting a slight inefficiency. Thus, sending two stop bits when the receiver is expecting one would not cause problems, unless you count wasting the time spent to send the extra bit as a problem. However, sending one stop bit when the receiver is expecting two could cause major problems if (and only if) the hardware actually needed the extra time to process the just-received character. I don't believe the UART would raise a framing error if it didn't get the second stop bit, although I can't find any authoritative reference off-hand. For it to check that it got two stop bits would defeat the purpose of having two stop bits in the first place - to give the UART time to move the completed character into its buffer and notify the processor that the incoming character had arrived. If it could do all that and also check that it got two stop bits it may as well just require one stop bit. From what I have seen, the framing error is simply caused by not having a logic 1 at the point when the stop bit should have arrived (and not that you check for a second logic 1 when the second stop bit should have arrived). Putting it another way, at the point you should be checking for the stop bit value you would be half-way through the stop bit, and thus have half a bit time to prepare for the next start bit. Some of that might be chewed up by slight clock differences between sender and receiver. So, depending on the speed of the chips in question, and the bit rate, you might specify two stop bits to give you that extra time. Zoewyr (talk) 04:12, 3 January 2011 (UTC)
    "However sending one stop bit when the receiver is expecting two could cause major problems if (and only if) the hardware actually needed the extra time to process the just-received character. I don't believe the UART would raise a framing error if it didn't get the second stop bit... " If the UART is configured for two stop bits and it only sees one stop bit and then the start bit for the next character, yes, some in my experience will raise a framing error because this could indicate noise on the line. Hook up a serial terminal to a computer, configure the terminal for two stop bits and the computer for one, and send a stream of data... the terminal will display a series of white blobs or whatever its notation for "framing error" is. In my experience. A link to a manual for a very recent microcontroller that says "the receiver ignores the second stop bit" is fine but does not prove that this is universal practice. Jeh (talk) 21:01, 15 November 2016 (UTC)
    "This would largely be for historical reasons when chips processed data quite slowly compared to today." Highly doubtful. Even UARTs built from random logic in the pre-LSI, DTL and RTL days weren't that slow. But 110 bps Teletype machines needed the two bit times to allow the UART, which was electromechanical in such machines, along with the rest of the mechanism, time to come back to "zero". Try sending one of those a steady stream with just one stop bit and it prints garbage. Jeh (talk) 21:39, 15 November 2016 (UTC)
    Yes. All electronic UARTs that I know will accept one stop bit. It is the mechanical UARTs, as noted, in machines like the ASR-33, that need two. There is a mechanical device that rotates once per character, and has to physically stop and start. There were also ones that used 1.5 stop bits, such as the IBM 2741. The 2741 uses a six-bit code with shift/unshift (which rotates the type ball 180 degrees). Also, the 2741 locks the keyboard when printing. The ATTN key, equivalent to BREAK on ASCII terminals, is the only thing one can use while receiving. Gah4 (talk) 19:07, 12 April 2023 (UTC)
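
To make the framing-error mechanics in this thread concrete, here is a hedged receive-side sketch. It assumes the start bit has already been detected and validated, and that sample_mid_bit() is a hypothetical helper that waits one bit time and samples the RX line at mid-bit; whether a second stop bit is also checked varies between implementations, as discussed above.

    #include <stdbool.h>

    extern int sample_mid_bit(void);  /* hypothetical: wait one bit time,
                                         then sample RX at mid-bit */

    /* Receive one 8N1 character, assuming the start bit was already seen. */
    bool receive_8n1(unsigned char *out, bool *framing_error)
    {
        unsigned char c = 0;

        for (int i = 0; i < 8; i++)
            c |= (unsigned char)(sample_mid_bit() << i);  /* LSB first */

        /* Framing error: the line is not at logic 1 at the point where
           the (first) stop bit is expected. */
        *framing_error = (sample_mid_bit() != 1);
        *out = c;
        return !*framing_error;
    }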

TAE?

    The term "TAE" is mentioned but this term is not defined in this article or in other articles in Wikipedia (in a relevant meaning, anyway). I would love to rectify this but I don't know what TAE is, and neither Google "define" nor the free acronyms dictionary had the answer :-( 22:07, 2 September 2008 (UTC)

    It caught my attention too. I also couldn't find any other reference to TAE so I checked the page history and it was the last change made. I checked the author's recent contributions and they are all acts of vandalism. My guess is a vandal got newly assigned that IP (some earlier contributions are legitimate ones). I reverted the edit. If anybody can come up with a good explanation of what TAE is, just put it back, please. 24.80.185.126 (talk) 10:13, 10 September 2008 (UTC)

Baud vs Bits

    Bits per second is the payload, not the symbol rate. 9600 baud has 9600 symbols per second. For example, for a UART set to 8N1, it requires 1 baud of start, 8 baud of data, and 1 baud of stop, for a total of 10 baud times. The bit rate is 9600 * (8/10) = 7680 bits per second. If you look at any UART data sheet, they use the term baud everywhere, such as the "baud rate" generator. • Sbmeirow Talk 09:26, 15 November 2016 (UTC) https://en.wikibooks.org/wiki/Serial_Programming/Complete_Wikibook#Data_Transmission_Rates

    First, thank you for starting discussion along with your revert.
    We don't consider Wikibooks (or Wikipedia itself) a RS as it is user-generated content. See WP:USERG.
    " The bit per second is 9600 * (8/10) = 7680 bits per second." Sorry but that is wrong. The relationship between bit rate and baud is not due to the start and stop framing bits. It's due to the difference between bits per second and symbols per second.
    Yes, it is true that in async serial there is a difference between total channel bits (which include the start and stop bits) and what you are calling "payload bits" (which don't). But "baud" doesn't have anything to do with that. (Nor, in more complex protocols, does it have anything to do with packet framing, checksums, or other differences between "payload" and total bits sent or received.)
    See our article Baud. "The term baud has sometimes incorrectly been used to mean bit rate,[2] ..."
    Please also see http://electronicdesign.com/communications/what-s-difference-between-bit-rate-and-baud-rate - that's from EDN magazine.
    "Any UART data sheet" - What about [1], [2], [3], ... If they say "baud rate generator", well, they're wrong. Any good book on data communications theory will explain this. The confusion of baud with bit/s is a very old and very well known canard. The unit of information is the bit, not the baud. One baud is a symbol rate of one symbol per second. The baud of a link is related to its information transfer speed (bit/s) but is not necessarily the same. You have to know the bits per symbol to relate the two. One symbol can convey more than one bit, or fewer than one bit, or exactly one bit.
    A UART doesn't know anything about the symbol rate that might be in use further down the line, so there is no point in talking about "baud" in connection with a UART. If the serial input and output of the UART are used directly, then yes, one bit = one symbol and so the bit rate in bit/s = baud. But there's still no reason to not use "bit rate", "bit/s", etc. Again, UARTs deal with bits. When you program a bit rate generator to create the clock signal for a UART you are setting the bits per second on the serial pins - this includes the start and stop bits.
    Let's say we set it to 9600 bit/s. Now let's say we attach an old async modem capable of 9600 bit/s (again, that includes start and stop bits). Those modems used a coding scheme that sent four bits per symbol. Thus the 9600 bit/s was sent and received on the phone line at 2400 baud. That is the difference between "bit rate" and "baud". You still set the serial port on the DTE for 9600. Again, the UART is unaware of the symbol rate. The UART knows only bits.
    Note that the "baud" spec does not distinguish between start, stop, and data bits. If it's 2400 baud (2400 symbols/sec) and each symbol carries four bits, then the bit rate is 9600 bit/s. It doesn't matter to either figure that some of those bits are start and stop bits.
    Language and units: It is vital to understand that a measurement expressed in Baud is always a rate, symbols per second. Not a simple number of things. One baud is not one bit, not ever. One baud is not even ever one symbol. One baud = one symbol per second. So we shouldn't write "9600 baud per second", we just write "9600 baud". So when you wrote "1 baud of start" you were writing "1 symbol per second of start", which makes no sense. If you want to express the time, the duration, of the start bit, that is simply expressed in (probably fractional) seconds, not in "symbols per second": A duration does not have a "per second" in it. "10 baud time" would mean "10 symbols per second time" - i.e. the duration of, not 10 symbols, but of 10 symbol changes per second. Which again makes no sense if you're talking about a duration, a measurement of elapsed time. You can write "10 symbols per second" (10 baud), or you can write "the time it takes for 10 symbols" (which would be 1 s in this case), but not "the duration of 10 symbols per second". A number of symbols per second is a rate. It is not something that has a duration.
    So re the table column heading "time per baud" - aside from the fact that "baud" refers to symbols and not bits, since "baud" means "symbols per second", "time per baud" is really saying "time per symbol per second". Which is like saying "time per mile per hour". It makes no sense. e.g. you can say "time per mile", or "miles per time" for that matter. But the only reason to say "time per mile per hour" would be if you were talking about something like acceleration. (Like "0 to 60 mph in 10 seconds", an acceleration - rate of change of speed - of 10 miles per hour per second.) If you're measuring a simple duration, you don't put "per unit time" after it, but that's what "baud" implies. If the data in the column is expressing time in seconds (or in microfortnights for that matter), "bit duration" or "bit time" or "time per bit" would be correct headings.
    Finally, see our Manual of Style at WP:COMPUNITS. We're required to use bit/s to express data transmission rates, not baud. QED.
    Fact is that even the term "baud rate", common though it is, is incorrect, just as it would be incorrect to say "Hertz rate". A symbol rate expressed as a number with "baud" after it already has "per second" in it, so e.g. "9600 baud" is already "9600 symbols per second". "Baud rate" would be talking about the number of symbols per second per second, or something like that. You wouldn't talk about the "velocity rate" of a projectile; "velocity" is already a "rate" of displacement over time. Nor do you talk about the "amp rate" of an electrical load, since the ampere already has a "per unit time" built into it; it's already a rate. But "baud rate" (like the similarly wrong "rate of speed") is so pervasive that it's probably hopeless to try to fix that one. We can avoid it, though, by using the more correct "bit rate". Jeh (talk) 10:08, 15 November 2016 (UTC)
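
A short worked example of the distinction, under the assumptions used in this thread (a 9600 bit/s serial line and an old phone-line modem encoding 4 bits per symbol):

    #include <stdio.h>

    int main(void)
    {
        double line_bps = 9600.0;

        /* On the UART's serial pins: one bit per symbol, so the symbol
           rate in baud equals the bit rate. */
        printf("serial pins: %.0f bit/s = %.0f baud\n", line_bps, line_bps);

        /* On the modem's phone-line side: 4 bits per symbol, so the same
           bit stream needs only a quarter of the symbol rate. */
        double bits_per_symbol = 4.0;
        printf("phone line: %.0f bit/s / %.0f bit/symbol = %.0f baud\n",
               line_bps, bits_per_symbol, line_bps / bits_per_symbol);

        /* The disputed table column is a bit duration: the reciprocal of
           the bit rate (here printed in microseconds). */
        printf("bit time at %.0f bit/s = %.1f us\n", line_bps, 1e6 / line_bps);
        return 0;
    }
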
    SBMEIROW RESPONSE

    A) In WP:COMPUNITS, it says "the definition most relevant to the article should be chosen as primary for that article". A vast majority of recent UART documents clearly show the term "baud" is a very relevant term.

    B) I understand that multiple bits can be packed into one "baud" or "time unit" in dial-up modems and a mountain of wireless communication protocols, but this article is NOT about them; instead, it is specifically about the UART. The only thing that matters in this article is the UART itself and the serial data that is arriving/leaving in/out of the actual UART.

    C) My use of "any datasheet" means RECENT datasheets for RECENT parts. Cherry-picked OBSOLETE ANCIENT datasheets are important in a historical sense, but recent technical nomenclature should be used to describe things in 2016. The following 12 links to newer documents from 12 different companies clearly show how prevalent the term BAUD is with UARTs. Also, I'm confident that I could post a mountain of similar datasheets since almost every microcontroller has a UART inside of it. This evidence alone is a very strong reason why the BAUD nomenclature should be used as much as possible in this Wikipedia UART article in 2016.

    UART chip
    USB to UART chip
    Microcontroller chip
    FPGA "soft" blocks

    D) Signed • Sbmeirow Talk 01:50, 16 November 2016 (UTC)

    a) Re COMPUNITS, the provision you quote, "The definition most relevant to the article should be chosen as primary for that article", is in a section that concerns only the choice of binary vs. decimal prefixes. e.g. does Mb mean 1048576 bits or 1000000 bits. It is in no way license to pick something that isn't in the preceding table of "Specific units". I know; I was part of the last great IEC prefix discussion, so I know very well where that sentence came from and to what it refers.
    b) "The only thing that matters in this article is the UART itself and the serial data that is arriving/leaving in/out of the actual UART." Exactly! And the UART does not speak in symbols, only in bits, so bit/s is correct. Use of the term "baud" implies that some other bits-to-symbol mapping may be going on. In addition to the other articles already linked, look at our article on Symbol rate.
    c) Re data sheets, yes, ignorance is everywhere. (Why, some people I've encountered even believe that framing bits don't count in the channel bit rate!) Just because a lot of tech writers apparently never took any formal classes in communication theory doesn't mean it's our job to perpetuate their ignorance. I notice you do not actually dispute that "baud" means "symbols per second". Well then, it is not synonymous with "bits per second" and certainly does not mean "bit" or "bits" as you claimed.
    d) In particular this confusion over rate vs. quantity makes the "time per baud" column heading in the table an absurdity. The correct heading is time per bit, or bit duration. You cannot put a measurement of duration under a heading that says "time per baud". It isn't even wrong. I suppose one could write "time per baud•second", which ends up as time per symbol, but that seems like a lot of work. Kind of like writing "ampere•seconds" instead of "Coulombs". But "time per baud•second" still wouldn't work for this article because nothing here is talking about symbols, only bits.
    e) Do you actually disagree that bit/s would be a technically correct term, if not the term best supported by the data sheets you happened to quote? Do you disagree that "bit duration" would be a technically correct column heading in the table?
    btw, you don't actually have to write "signed". Just type the four tildes. The distinctive formatting of the resulting signature is sufficient. Jeh (talk) 02:41, 16 November 2016 (UTC)
    I've been busy most of the last 7 days. The next 2-3 days will be busy because of the holiday. I'm also trying to get caught up on Kansas article reviews too. I'll get back to you by this weekend or sooner, so we can move this along. • Sbmeirow Talk 07:54, 23 November 2016 (UTC)
    I am not idle also, and WP:NODEADLINE. :) Jeh (talk) 12:12, 23 November 2016 (UTC)
    Concerning my signature, if you look closer, there was a reason why I included the word "signed" in this situation. My signature has bullets in it and the last line of my comment started with a bullet, so I did something to prevent them from visually blending together. • Sbmeirow Talk 07:26, 3 December 2016 (UTC)
    I converted the top of the table to "BIT" but I included "BAUD" in parens in the left column, since the term "baud" is so widely used across the UART industry. • Sbmeirow Talk 07:32, 3 December 2016 (UTC)
    Whether right or wrong, the term "BAUD" is so widely used across the UART and microcontroller industry that it would be a disservice to remove it entirely from a UART article. Across many technical sciences, the use of some terms has changed over time. If you compare 100- to 150-year-old books to recent books in chemistry and physics you will notice numerous terms have changed. Even in electronics, "condenser" was replaced by "capacitor" in the early part of the last century. In the UART world, the term "BAUD" is firmly entrenched in 2016, which is why it should not be completely removed from this article. • Sbmeirow Talk 07:50, 3 December 2016 (UTC)

UART models section

    What is the purpose of the UART models section? I am sure there are thousands of different chips which support UART and which are not shown there. I think that section should be deleted entirely. --IngenieroLoco (talk) 11:15, 29 November 2016 (UTC)

    I think that a few famous/significant historical units should be represented, plus a couple of the chips commonly (well, "commonly" until serial ports went away) used in PCs (e.g. 8250 and 16550), at least. And some indication of the difference between those (which are addressable devices that can sit on a bus) and the old chips should be included. On the other hand, I agree that Wikipedia should not be a parts list. Jeh (talk) 11:32, 29 November 2016 (UTC)

When did microcontrollers absorb the UART?

    I flagged "UARTs are now commonly included in microcontrollers"; the article is 16 years old! It would be nice to cite a reference. The History section stops short of covering this advance. -- Skierpage (talk) 23:13, 18 April 2018 (UTC)

    I don't know exactly when, but the Intel 8051 included a UART-like serial interface in 1980. 2400:4051:4CE0:C400:922B:34FF:FED6:A16D (talk) 09:17, 6 August 2021 (UTC)

USART

    Note that the 8251 is actually a USART, which can run in either asynchronous or synchronous mode. I believe some of the others listed along with it are also USARTs or similar. Gah4 (talk) 01:46, 13 April 2023 (UTC)

Protocol vs. Device

    I work with electrical engineers (I'm not one myself) and they often remind me that "UART" is a data link protocol, not a device. This is also emphasized in other sources. As one example, the UART protocol can be used with an LVDS transmitter / receiver, and it can also be used over an acoustic modem, as anyone who has configured a modem for start, stop, parity and data bits might know. I don't think an acoustic modem operating over the PSTN would qualify as a "UART device", at least not as used in this article. Should this article emphasize that UART is a protocol, rather than take it as granted that it is a device? --Alan.A.Mick 15:25, 22 May 2024 (UTC)

    It could also be argued that the word "UART" in some contexts refers to the device and in other contexts refers to the protocol. For instance, if one says "transfer the data over UART" then that word "UART" refers to the protocol, while if someone says "transfer the data over the UART" then that word "UART" would refer to a device or module implementing the protocol.
    As far as this article is concerned, I might suggest changing the section named "Transmitting and receiving serial data" to just be "UART protocol". Em3rgent0rdr (talk) 16:23, 22 May 2024 (UTC)
    A UART is a hardware device, a peripheral inside a microcontroller, or emulated by software; but in the context of protocols it only means the protocol uses some type of UART to transmit & receive asynchronous data. • Sbmeirow Talk 13:15, 23 May 2024 (UTC)