There's a fascinating story about ego in science buried here.
William Henry Preece was Engineer-in-Chief at the British Post Office in the late 19th century (basically heading their telegraphy efforts) and controversially stuck to Thomson's model, or the "KR Law" as it was called back then, relying on poor-quality experimental results to direct the organization.
Oliver Heaviside publicly criticized Preece, who in turn blocked publication of Heaviside's writing to preserve his own reputation (according to Wikipedia). When Preece retired in 1899 and received a knighthood, Heaviside wrote in the preface of Volume II of "Electromagnetic Theory":
"It is to be hoped and expected that the late important removals in the British Telegraph Department will lead to much improvement in the quality of official science."
Writing by hand on his own copy, Heaviside suggested that the preface (and thus the book's publication) were "held back" so as to "allow W H Preece to make sure of his knighthood".
https://outsideecho.com/DGT-BIO_files/PDFs/DGT13.pdf
So! On the one hand, we have an esteemed engineer who ran the country's telegraphs, refused to admit he made a mistake, became a knight, and faded somewhat from history; on the other hand, we have a self-taught physicist who made massive contributions to our understanding and analysis of electromagnetism, lived in "relative poverty" for fifty years while doing so, didn't shy away from calling out bad science, and got the analogue of heaven named after him in the musical "CATS".
It wasn't just Cats. There really is a layer in the ionosphere called the Heaviside Layer, or more formally the Kennelly–Heaviside layer:
https://en.wikipedia.org/wiki/Kennelly–Heaviside_layer
Today this is more commonly known as the E layer or E region, and it is much beloved by ham radio operators because of its intermittent nature.
You never know when sporadic E skip will become available and you can use it to contact hams far away!
Not a religious person, but there's something deeply spiritual about Katalin Karikó type characters, who throughout the ages stuck with their curiosities and craft for the craft's sake.
A note on "craft" here is that Heaviside's other thing was inventing a kind of magical math that seems like it shouldn't work (even though it did) and ignoring everyone who complained about it.
https://deadreckonings.com/2007/12/07/heavisides-operator-ca...
https://en.wikipedia.org/wiki/Operational_calculus
Excellent post!
Controlled impedance took me a long time to wrap my head around when starting PCB design. The moment it finally clicked was watching this excellent AlphaPhoenix video https://www.youtube.com/watch?v=2AXv49dDQJw& which asks and practically demonstrates a simple question:
When you flip a switch, to turn something on, send data, key some Morse, etc., how does the circuit know how much current the load at the other end needs?
Spoiler: given that information can't travel faster than light, the simple answer is: it doesn't. So it just guesses and adjusts, which you don't want, as it gives you exactly the ringing etc. that Heaviside identified. The video is a nice complement, as it perfectly visualizes the issues at play.
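To put a rough number on the "guess" (a minimal sketch, assuming an ideal lossless line): the source initially drives the line as if the load were the line's characteristic impedance, and the mismatch at the far end comes back as a reflection:

    Z_0 = \sqrt{L'/C'}, \qquad I_{\text{launched}} = \frac{V}{Z_0}, \qquad \Gamma = \frac{R_L - Z_0}{R_L + Z_0}

Here L' and C' are inductance and capacitance per unit length, and \Gamma is the fraction of the wave reflected at the load; controlled impedance is about making R_L = Z_0 so that \Gamma = 0 and nothing rings.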
It's a pretty wild bit of understanding to have, even in simple situations like flipping on a light switch.
That video is good for the water-like wave explanation, which is a very useful lens. If you want a more in-depth explanation, particularly of how the field lives in the dielectric and the wires/traces are simply the waveguide, this long presentation by Rick Hartley will help you move to the next level.
https://www.youtube.com/live/ySuUZEjARPY
The dramatic shift in behavior above the audio frequency range is where the water wave lens starts to fall down IMHO.
I was looking at my brother's memory card from a Cray-1A the other day and that video popped into my head. They had the timing traces snaking through several flat-pack chips' legs. No wonder they had to move from parity to Hamming code, even though they used differential twisted pairs exclusively between modules.
That one's gold too, but for me it was the other way round: I needed Hartley to fully grasp Alpha Phoenix.
Understanding waves feels a bit like the bell curve meme for me: you start with the mental water model, and eventually end up with it again.
Or Feynman: you hear him helpfully talk about bouncy rubber balls, then learn a bunch of stuff over the next decade, and randomly listen to the same lecture again, and suddenly all sorts of "aaaah, that's what he meant" lightbulbs go off.
I should also clarify something above: Oliver Heaviside did discover that the energy flows through the dielectric; most explanations, like that video, use other lenses to communicate the very real need to consider voltage and current.
The original link sidestepped that because, to be honest, it is too complicated for the intended use case.
All models are wrong, some are useful, and the water wave model is very useful for very real needs.
I personally wasted a lot of time mistaking the map for the territory, but yes, everyone's path will be different. I took the "electron flow" and water wave models as absolute ground truth for way longer than I would like to admit.
Watching the Alpha Phoenix video I had a sort of realization that waves (as phenomena in general) are basically nature’s calculator/probes. If nature doesn’t “know” what will happen there’s probably some wave involved to figure it out.
And bias, especially in science and politics, travels as a wave through time like a pendulum. Solving these systems, with the resistance measured in human lives, can take many generations.
> Controlled impedance took me a long time to wrap my head around (...) When you flip on a switch, to turn on anything, send data, morse something etc, how does the circuit know how much current the load at the other end needs?
This kind of thing has always been my hurdle when learning electronics. Whether through tutorials or high-school-level physics, all the explanations I read tend to simplify and omit things in exactly the way that makes them break down when you start asking questions like this. My pet peeve is the various equations that people flip around seemingly arbitrarily to calculate just the thing they need at that spot, from the two "knowns" that happen to be unknown at the same spot a moment later. Everything is obviously affecting everything else, but no one thought to mention feedback loops and how to correctly deal with them (even if by simplifying them away).
Or maybe I'm just a naturally imperative thinker, and I don't feel comfortable with declarative explanations which I can't "step through" mentally to understand the underlying process. Which, in the case of electronics, involves voltages propagating around the circuit at finite speeds.
In programming, this hit me with regard to non-deterministic programming in Prolog. It's usually explained as magic: "You can assume the program will compute X, because it's structured so that if it didn't, it would hit this 'can never happen' statement, and because that - literally - can never happen, it magically must take the correct path."
It became immediately obvious to me once I realized that the runtime is just hiding a big fat loop that takes every path for you, and the magic instruction just tells it to silently discard the current path and try another one. Overall, the moment I felt I finally understood Prolog was when I realized the runtime is doing depth-first search in the background.
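To make that concrete, here's a minimal Python sketch of that hidden loop (the facts and the query are invented for illustration, not from any real Prolog system): a depth-first search over choice points, where failing a constraint just discards the current path and moves on to the next one.

    # Minimal sketch of Prolog-style backtracking as an explicit depth-first search.
    # The "database" and query are made up for illustration.
    facts = [("tom", "bob"), ("bob", "ann"), ("bob", "pat")]   # parent(X, Y) facts

    def grandparent(x):
        """Find every z with parent(x, y) and parent(y, z) for some y."""
        solutions = []
        for (a, y) in facts:          # the hidden loop: try every choice point
            if a != x:
                continue              # "fail": silently discard this path
            for (b, z) in facts:      # nested choice point for the second goal
                if b != y:
                    continue          # fail again, try the next binding
                solutions.append(z)   # this path survived every constraint
        return solutions

    print(grandparent("tom"))         # ['ann', 'pat']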
I remember reading about the first cable that was laid across the Atlantic.
https://en.wikipedia.org/wiki/Transatlantic_telegraph_cable
They didn't know much about transmission line theory, and even burned out the cable at one point. Heaviside was only about 8 years old when the first cable was laid down.
That is a great visualisation, yes!
It's probably more helpful not to use the idea of 'guessing', though - the transition from off to on is a signal with a certain frequency content, and that signal is reacting to the capacitance, inductance, etc. of the wires and propagating through them basically the only way it can. Once the signal has propagated through and the ringing has all been absorbed, it settles on the DC condition.
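If you want to see that numerically, here's a minimal Python sketch (all component values are invented for illustration): a lossless line approximated as a ladder of discrete L and C sections, driven by a step through a source resistor into a mismatched load. The load voltage overshoots and rings its way toward the DC value; set Rload equal to sqrt(L/C) and it settles in a single pass.

    # Minimal sketch: step response of a lossless LC-ladder "transmission line".
    # All values are illustrative, not taken from the article.
    import math

    N = 200                       # number of LC sections
    L, C = 250e-9, 100e-12        # H and F per section -> Z0 = sqrt(L/C) = 50 ohms
    Z0 = math.sqrt(L / C)
    Rs, Rload = 0.2 * Z0, 5 * Z0  # stiff source, deliberately mismatched load
    Vs = 1.0                      # step amplitude applied at t = 0
    dt = 0.1 * math.sqrt(L * C)   # small enough for a stable leapfrog update

    v = [0.0] * N                 # capacitor (node) voltages
    i = [0.0] * (N - 1)           # inductor currents between adjacent nodes

    for step in range(20000):
        for k in range(N - 1):                      # currents react to voltage differences
            i[k] += dt * (v[k] - v[k + 1]) / L
        v[0] += dt * ((Vs - v[0]) / Rs - i[0]) / C  # source end
        for k in range(1, N - 1):                   # interior nodes
            v[k] += dt * (i[k - 1] - i[k]) / C
        v[-1] += dt * (i[-1] - v[-1] / Rload) / C   # load end
        if step % 2000 == 1000:                     # sample between wavefront arrivals
            print(f"t = {step * dt * 1e6:4.1f} us   V_load = {v[-1]:+.2f}")

    print("DC value it is heading for:", round(Vs * Rload / (Rs + Rload), 3))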
That video (and channel, seemingly) is incredible, thank you for posting! I've never seen anything like the visualizations starting at ~10m.
For years and years I thought the "heavy side" function was 1 on one side of the origin and 0 on the other. I also thought the "pointing" vector pointed in the direction the wave was travelling.
And Li-ion batteries are good enough: https://en.wikipedia.org/wiki/John_B._Goodenough
Nominal Determinism at its finest
Nominative determinism. There's also the pithy Latin phrase nomen omen.
Ooooh, that's so much better. I'm gonna use that from now on. The speed at which you can communicate and the other person can understand your statement is very important. I wonder what other common phrases have shorter versions.
>One of Heaviside's achievements is that he converted Maxwell's twenty mathematical formulas into a more accessible set of just four, which nowadays are taught at all universities as "Maxwell's equations".
Anyone know of an accessible guide to the translation of Maxwell's quaternions to Heaviside's vectors? Was Heaviside's compression of Maxwell's work — lossless?
I doubt it, as pseudovectors are kind of a mess?
But you can also further compress those 4 equations into just a single one by using slightly more complicated geometry!
http://www.av8n.com/physics/maxwell-ga.htm
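If I remember the spacetime-algebra version right (unit factors and sign conventions vary by author), the whole set collapses to one equation in natural units:

    \nabla F = J

where F is the electromagnetic field bivector, J the four-current, and \nabla = \gamma^\mu \partial_\mu; the vector part of the equation reproduces the two source equations and the trivector part the two source-free ones.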
There's another really good explanation of transmission line theory over here:
https://www.ibiblio.org/kuphaldt/electricCircuits/AC/AC_14.h...
This is the one that finally made impedance "click" in my head.
OP's, however, treats the subject of "loading coils", which I remember hearing about in the context of telephone lines but never really understood until just now.
This guy was awesome! He innovated ways to solve DEs with the Laplace operator for inputs with discontinuities, e.g. what is now called the Dirac delta function, years before the math for generalized functions was developed by mathematicians. When asked about the theory behind the method, he happily announced that he didn't know, but that it just worked!
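A minimal example of the flavour (in modern Laplace-transform dress rather than Heaviside's own notation): treat d/dt as an algebraic symbol p, manipulate, and read off the answer. For a first-order system driven by a unit step H(t), with y(0) = 0:

    (p + a)\,y = H(t) \;\Rightarrow\; y = \frac{1}{p + a}\,H(t) \;\Rightarrow\; y(t) = \frac{1}{a}\left(1 - e^{-at}\right), \quad t > 0

Why that symbol-pushing is legitimate is exactly the question the later theory of distributions and the rigorous Laplace transform answered.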
> Dirac delta function
and the Heaviside Step function looks like Heaviside's head lol.
https://shot.3e.org/ss-20250127_095213.png
This exact theory is also used to model the electrical behavior of neurons in the brain, with some slight differences (no inductances, non-linear resistances), under the name "cable theory" https://en.wikipedia.org/wiki/Cable_theory
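For reference, the passive cable equation is the telegrapher-style construction with the inductance dropped, which is why it comes out as diffusion rather than a wave; in the usual notation (per-unit-length conventions vary), with length constant \lambda and membrane time constant \tau:

    \lambda^2 \frac{\partial^2 V}{\partial x^2} = \tau \frac{\partial V}{\partial t} + V, \qquad \lambda = \sqrt{r_m / r_i}, \quad \tau = r_m c_m

In steady state a constant injected current therefore just decays as V(x) = V(0)\,e^{-x/\lambda} along the cable.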
I remember one professor mentioning the origin of this theory in undersea cable modeling at some point.
Tried skimming the page but couldn't find the answer: do we know if the neural connection impedance is perfectly matched? It looks quite organic in shape, with teardrop connections and so on, but curious how nature did that job?
It is not always perfectly matched, because the mismatches can actually have a "computational" purpose, but e.g. the typical branching pattern of dendrites is pretty close to being matched. There is a chapter on this in the Dayan and Abbott textbook.
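If I remember right, the classic statement of that matching condition is Rall's 3/2 rule for a branch point: the impedances work out (and the branch can be collapsed into an equivalent cylinder) when the parent and daughter diameters satisfy

    d_{\text{parent}}^{3/2} = \sum_k d_{\text{daughter},k}^{3/2}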
<3 awesome - this one https://boulderschool.yale.edu/sites/default/files/files/Day... ?
Yes exactly. Chapter 6.3, though it is actually less detailed than I remembered.
Much appreciated! Maybe the impedance added some colorful garnish to your memory... :-)
If this is your thing, you might also want to check out Christof Koch's Biophysics of Computation. Cable theory is introduced in one of the first few chapters.
That's an interesting question. One follow-up question I would have is whether impedance matching is a relevant concept here, given that the model has no inductance (I'm guessing that's because the flow of charge is in the form of ions moving radially through the membrane. If neurons were more like transmission lines, would we be susceptible to interference from distant lightning?)
I also skimmed the page and saw that equation 20 is not a wave equation (as the article says, it is a diffusion equation.) Again, I am not sufficiently knowledgeable to say whether that renders the question of impedance matching moot.
Update: I see from the sibling thread and its excellent reference that the refractory period, where the sodium and potassium ions are being pumped back to their starting positions, suppresses reflection.
Sibling thread?
Sorry if that's not clear - I was referring to phreeza's reply to your question and the link you had posted below it, which is as far as the discussion had gone at that time. The refractory period, and its role in suppressing reflections, is mentioned in the reference you provided a link to.
Got it - haven't read it yet but, also thanks to your pointer, very much looking forward to!
This looks different, as there is a completely resistive path from source to destination, which is not the case for transmission lines (as that would mean an instantaneous response which isn't possible due to the speed of light limit).
I think it's just a matter of scale, technically there is an inductance but the distances are so small and frequencies so low that they never really matter.
Yes, but then it is not a transmission line.
They're modeling delay with capacitance to 'ground', it seems. So there's capacitive reactance.
But that's not a complete model, as the output will start changing the moment the input changes. And a short burst at the input will not appear as a short burst at the output.
It will change instantaneously, but with a magnitude that decays exponentially with distance from the place the current is injected. The way signal propagation works is that you have "active" currents to ground that react to voltage changes in a nonlinear way. These lead to wave-like behavior in the transmission line, though it is quite nonlinear and harder to model than a straight up inductor.
One of the diagrams has the text "40 sections elk 100 m long". "Elk" (or "elke", depending on grammatical gender) is Dutch for "each", so it looks like a small oversight in translation. It's not about deer that somehow got caught up in transmission line theory.
Hope that clears up any confusion that might possibly arise.
Elk is a type of transmission line conductor. Example - https://www.lzcable.com/acsr-elk-conductor/
Others include moose and drake.
In the context of this article, it seems clear to me that GP has the right translation. The other figures say "each" in the same position, and it looks like there was one word that didn't get translated from the figure. The fact that this is all on a .nl domain and the author says this is a translation from a Dutch magazine article lends even more weight to the idea that the author wasn't talking about a particular type of transmission line.
I'm pretty confident both of you are correct. I know some Dutch and spotted that as well.
I did find it an interesting overlap that in the field of transmission lines (electricity lines specifically) there is literally a conductor size named "elk" :)
My original comment could do with an "also".
More info about the various types:
https://www.electricaldesks.com/2022/09/Types-of-Conductors-...
Another Heaviside biography in addition to the one cited in the article is "Oliver Heaviside: The Life, Work, and Times of an Electrical Genius of the Victorian Age" by Paul J. Nahin. It's profusely illustrated and doesn't omit the math.
> doesn't omit the math.
"The best result of mathematics is to be able to do without it", he said, which seems strange but makes sense coming from the man who came up with functional operators.
I knew this quotation in my undergrad school days; just now checked the net and there is a source: https://hsm.stackexchange.com/questions/7332/a-peculiar-quot...
I just finished reading this one. It was fantastic. I particularly enjoyed the spicy editorial excerpts from The Electrician ( https://en.m.wikipedia.org/wiki/The_Electrician ).
Paul Nahin's books are so good.
See also The Science of Radio and The Mathematical Radio (similar content, but one is aimed more at electrical engineering people and one more at math(s) people).
https://mathshistory.st-andrews.ac.uk/Biographies/Heaviside/
>His neighbours related stories of Heaviside as a strange and embittered hermit who replaced his furniture with 'granite blocks which stood about in the bare rooms like the furnishings of some Neolithic giant. Through those fantastic rooms he wandered, growing dirtier and dirtier, and more and more unkempt - with one exception. His nails were always exquisitely manicured, and painted a glistening cherry pink'
Genius confirmed.
EDIT: one of the traits of creative genius according to Hans Eysenck is psychoticism which includes an indifference to social norms:
https://philipperushton.net/wp-content/uploads/2015/02/Impur...
Thank you very much for sharing this. I've been trying to form a mental model of why reflection in transmission lines is a thing and never found a theoretical foundation that satisfied me. I've read a number of publications that just state that as fact, without further explanation. I'll spend some time sketching my own graphs and formulas to fully absorb the fine article.
PS: he discusses the impossibility of moving signals faster than light. I'm in fact interested in moving them way slower than they do, since that could make it possible to build very short antennas. I'm surely not the first to think of this. Maybe that's not even possible and that theoretical model could help to demonstrate why (even though it applies to transmission lines while I'm concerned about antennas).
For me, an optics analogy appealed to my intuition. A transmission line is analogous to a periodic crystalline material, perhaps a form of quartz. Impedance is analogous to index of refraction; if two materials of differing index of refraction are juxtaposed, some of the light will reflect back. If the indices of refraction match ("impedance match") then there is no reflection and maximum energy transfer takes place.
https://en.m.wikipedia.org/wiki/Refractive_index
> Impedance is analogous to index of refraction
That's not just an analogy; impedance is index of refraction, with the same body of modern theory, except that "impedance" derives from the history of analysis of low frequency ("radio") waves whereas "index of refraction" derives from the history of analysis of high frequency ("optical") waves.
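Concretely, for normal incidence on non-magnetic media the wave impedance scales as 1/n, so the two reflection formulas are the same statement (sign conventions aside):

    \Gamma = \frac{Z_2 - Z_1}{Z_2 + Z_1} = \frac{\eta_0/n_2 - \eta_0/n_1}{\eta_0/n_2 + \eta_0/n_1} = \frac{n_1 - n_2}{n_1 + n_2} = r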
Thanks. I found experiments like this [0] helpful to visualize reflections and standing waves, but, like a sibling comment said, although these are similar phenomena and show what happens with the energy inside the wire, I couldn't see why these effects manifested at the electrical level.
[0] - https://youtu.be/1PsGZq5sLrw
Yeah, that's easy to work with so it's a great working model. But as an explanation, it leaves a lot to be desired.
In fact, I'd say it's easier to explain optical refraction with an electrical model than the other way around.
Indeed, if you search for "ceramic antennas" you'll see that they are already being used and smaller than equivalent PCB antennas. They rely on a dielectric material with high e_r, which implies that speed of light is slower there. Lots of portable devices use them nowadays!
I had never heard about them. Will do some research right away, thanks
It's the general theory of waves: whenever some value exists on a continuum, has a position and a velocity at each point, and the acceleration (change in velocity) brings each point towards the average of its neighbours, you get wave behaviour.
When someone waves a skipping rope at one end, they directly move the portion of the rope closest to them, which pulls on the portion next to it, which pulls on the portion next to it, and so on. Some wave shapes happen to be sustainable enough to travel along the whole rope with low distortion. (Like when you draw random pixels in Conway's Game of Life and run it, you usually end up with lots of gliders and a few spaceships travelling off in various directions, because those happen to be the simplest travelling patterns and the rest of your scribble died out or turned into things that don't travel. There aren't any non-travelling wave shapes.)
In a rope, the usual wave packet is like a hump, and if the rope is infinitely long, the wave packet can travel forever: sections at the front of the wave get raised up by the hump just behind them, and sections the hump has passed through get pulled back down by the rope at rest behind them. Now imagine the rope is cut in half and one end is tied to a wall. When the wave gets to this wall, the bit that is tied to the wall does NOT rise up, because it's tied to the wall, so the bit just behind it gets pulled down more than it would be in an infinite rope. Run the simulation for a short time and the net effect is that the back part of the wave doesn't just get pulled down to its equilibrium position like it would if the rope were infinite, but gets pulled down twice that, forming a negative copy of the original wave.
And you can have in-between values, where some section of the rope is harder but not impossible to move, which causes the back part of the wave to be pulled down more than usual, but not twice as much, forming a smaller inverse copy, and the part that is harder to move is pulled up, but less than usual, forming a smaller non-inverse copy.
You can also go the other way, and have a section of the rope that's easier to move than usual (or infinitely easy i.e. an open end), and when the wave gets to this point, the back part of the wave doesn't get pulled down as much as it normally would, leaving it still in the shape of the wave, i.e. a smaller non-inverse copy, instead of returning it fully to equilibrium.
And if you can visualize this with ropes it works similarly for electricity - just replace position by voltage and velocity by current - or any other imaginable system where each piece of a continuum has a second derivative that tries to bring its value back to the average value of its neighbors.
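If anyone wants to play with this, here's a minimal Python sketch of exactly that rule (each point accelerates toward the average of its neighbours); all the numbers are arbitrary. With the far end clamped the pulse comes back inverted, with it free the pulse comes back upright.

    # Minimal sketch of the rope picture: each point accelerates toward the
    # average of its neighbours. All numbers are arbitrary/illustrative.
    import math

    N = 200
    y = [0.0] * N                 # displacement of each piece of rope
    v = [0.0] * N                 # velocity of each piece
    c2, dt = 1.0, 0.5             # coupling strength and time step
    c_eff = math.sqrt(c2 / 2.0)   # resulting wave speed, cells per time unit

    END = "clamped"               # "clamped" -> inverted echo, "free" -> upright echo

    # a smooth pulse near the left end, given a velocity so it moves right
    for k in range(1, N - 1):
        y[k] = math.exp(-((k - 30) / 8.0) ** 2)
    for k in range(1, N - 1):
        v[k] = -c_eff * (y[k + 1] - y[k - 1]) / 2.0

    for step in range(901):
        for k in range(1, N - 1):
            # acceleration pulls each point toward the average of its neighbours
            v[k] += dt * c2 * ((y[k - 1] + y[k + 1]) / 2.0 - y[k])
        for k in range(1, N - 1):
            y[k] += dt * v[k]
        if END == "free":
            y[-1] = y[-2]         # a free end just follows its neighbour
        if step % 150 == 0:
            print(f"step {step:4d}:  max={max(y):+.2f}  min={min(y):+.2f}")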
Those are interesting insights, thanks for the extensive reply
Yes! I’ve dabbled in digital electronics and never managed to form an intuitive understanding of ‘impedance’ in signal wires despite reading quite a few ‘basics’ texts about it. I’d ended up accepting that I wouldn’t really understand its basis without learning a lot more about analog electronics, but this explanation really worked for me.
Am partial to the following vintage video on transmission lines from Tektronix:
https://www.youtube.com/watch?v=I9m2w4DgeVk
Curious if anyone here is also familiar with loudspeakers built on the acoustic transmission line (TL) principle? It comes and goes in hi-fi circles. I have a bass instrument cabinet produced by the now-defunct Euphonic Audio; it's an interesting TL design featuring a whizzer cone on a 12" speaker. While it isn't the deepest-sounding bass cabinet I've heard, it is very balanced & detailed throughout its range with great projection, and no phase cancellation in the mid/high ranges due to the whizzer/no-crossover design. Thanks!
I still cannot believe that the Heaviside function is named after its creator, rather than simply being a literal description of what it is.
He's buried near where I grew up. Must pay him a visit at some point.
Topics like these are amusing filters for gauging who took an undergrad degree in EE vs. who is self-taught.
A very interesting explanation with history and math combined! I’m not sure if most of HN cares about RF things at this level, but I was very happy to see a discussion of transmission lines, and a lot of discussion of the parameter trade-offs to improve a Morse code transmission.
RF is definitely interesting - it's the closest thing to black magic in electrical engineering (all went over my head during my degree...that's why I ended up in power systems :^) ).
But you still have transmission lines, just at a much lower frequency!
Yes, especially since in power transmission the distances are large; if you turn on the power at point A, then point B will not immediately receive the power and this is quite measurable.
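Back-of-the-envelope (assuming propagation at roughly the speed of light on an overhead line):

    t \approx \frac{300\ \text{km}}{3\times10^{5}\ \text{km/s}} = 1\ \text{ms}, \qquad \lambda_{50\,\text{Hz}} = \frac{c}{f} \approx 6000\ \text{km}

so point B is a measurable millisecond behind point A, and a long EHV line is already a noticeable fraction of a wavelength even at 50 Hz.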