So the findings here do make sense. For sub-5m cables, directly connecting two machines is going to be faster than having some PHY in between that has to resignal. I'm surprised that fiber is only 0.4ns/m worse than their direct copper cables; that is pretty incredible.
What I would actually like to see is how this performs in a more real-world situation. Does this increase line error rates, causing the transport or application to resend at a higher rate, which would erase all the savings from lower latency? Also, if they are really signaling these in the multi-GHz range, are these passive cables acting like antennas, with a cabinet full of them killing itself on crosstalk?
High-speed links all have forward error correction now (even PCIe); nothing in my small rack full of 40GbE devices connected with DACs has any link-level errors reported.
There's also hollow-core fiber, which gets pretty close to the speed of light in a vacuum: 2.0e8 m/s for solid-core fiber and 2.3e8 m/s for copper, versus nearly the full 3e8 m/s for hollow core.
No glass, just some reflective coating on the inside of a waveguide (hollow tube).
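Back-of-the-envelope, here's what those speeds work out to in per-meter delay (the speeds are the rough figures quoted above; real velocity factors vary by cable construction):

```python
# Back-of-the-envelope: convert signal propagation speed into per-meter delay.
# Speeds are the rough figures quoted above, not measurements of specific cables.

C_VACUUM = 3.0e8  # m/s, speed of light in a vacuum

media = {
    "vacuum / hollow-core fiber (ideal)": 3.0e8,
    "copper twisted pair / DAC": 2.3e8,
    "solid-core optical fiber": 2.0e8,
}

for name, speed in media.items():
    delay_ns_per_m = 1e9 / speed  # inverse speed, expressed in ns/m
    print(f"{name:36s} {delay_ns_per_m:.2f} ns/m "
          f"(velocity factor {speed / C_VACUUM:.2f})")

# Output (approx.):
#   vacuum / hollow-core fiber (ideal)   3.33 ns/m (velocity factor 1.00)
#   copper twisted pair / DAC            4.35 ns/m (velocity factor 0.77)
#   solid-core optical fiber             5.00 ns/m (velocity factor 0.67)
```

At these nominal speeds the copper-vs-fiber gap works out to ~0.65 ns/m, in the same ballpark as the 0.4ns/m measured in the article.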
Storage over copper used to be suboptimal, but not necessarily due to the cable. QUIC over UDP is much closer to wire speed. So 10 Gb copper and 10 Gb fiber are probably the same, but 40+ Gb fiber is quite common now.
> So the findings here do make sense. For sub-5m cables, directly connecting two machines is going to be faster than having some PHY in between that has to resignal. I'm surprised that fiber is only 0.4ns/m worse than their direct copper cables; that is pretty incredible.
Surely resignaling should be the fixed cost they calculate at about 1ns? Why does it also incur a 0.4ns/m cost?
It's both. Those links try to minimise deviation from a straight line (and invest significant money in antenna locations to do that), but they also use copper/coax cables for connecting radios, as well as hollow-core fibre for other connections to the modems.
I misremembered the speed of electrical signal propagation from high school physics. It's around 2/3rds the speed of light in a vacuum not 1/3rd. The speed of light in an optical fibre is also around 2/3rds the speed in a vacuum.
c is the speed of light in a vacuum, but it is not really about light; it is a property of spacetime itself, and light just happens to be carried by a massless particle, which, according to Einstein's equations, makes it go at c (when undisturbed by the medium). Gravity also goes at c.
I've always considered c the speed of light, with gravity going at the speed of light, rather than light and gravity both going at c, a property of spacetime. The latter is a much simpler mental model; thanks for the simple explanation!
You can think of c as the conversion rate between space and time; then, light (and anything else without mass, such as gravity or gluons) travels at a speed of 1. Everything else travels at a speed of less than 1.
(Physicists will in fact use the c=1 convention when keeping track of the distinction between distance units and time units is not important. A related convention is hbar=1.)
You can tell that c is fundamental, rather than just a property of light, from how it appears in the equations for Lorentz boosts (length contraction and time dilation).
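For reference, here is where c shows up in the boost equations, independent of any light signal (standard special relativity, nothing specific to this thread):

```latex
% Lorentz boost along x with relative velocity v; c appears in the
% transformation itself, whether or not any light is involved.
\[
  \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
  t' = \gamma\!\left(t - \frac{v x}{c^2}\right), \qquad
  x' = \gamma\,(x - v t)
\]
% Time dilation and length contraction follow directly:
\[
  \Delta t' = \gamma\,\Delta t, \qquad L' = L/\gamma
\]
```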
Don't know why you were downvoted; this is true. RF energy is carried primarily (solely?) by the dielectric, not the copper itself, simply by virtue of the fact that this is where the E and B fields (and therefore the Poynting vector) are nonzero. It's therefore the velocity factor of the dielectric that is relevant.
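Concretely (textbook transmission-line physics; the example permittivities are ballpark values, not measurements of these specific cables):

```latex
% Signal speed in a transmission line is set by the dielectric, not the copper:
\[
  v = \frac{c}{\sqrt{\epsilon_r \mu_r}} \approx \frac{c}{\sqrt{\epsilon_r}}
  \qquad \text{(velocity factor } VF = 1/\sqrt{\epsilon_r}\text{)}
\]
% e.g. solid polyethylene (eps_r ~ 2.25) gives VF ~ 0.67; foamed PE ~ 0.8;
% an air dielectric (eps_r ~ 1) approaches c.
```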
> I'm surprised that fiber is only 0.4ns/m worse than their direct copper cables
Especially since physics imposes a ~1.67ns/m penalty on fiber. The best-case inverse signal speed in copper is ~3.3ns/m (essentially vacuum c, with an air dielectric), while it's ~5ns/m in fiber optics.
Aside from that, you've got a linear scrambler into balanced drivers into twisted pair. It's about as noise-immune as you can get. Unless you put the noise source right up next to the cable itself.
The chip has a PHY built into it on-die, you mean. This affects timing for getting the signal from memory to the PHY, but not necessarily the switching times of the transistors in the PHY, nor the timing of turning the light on and off.
"Has lower latency than" fiber. Which is not so shocking. And, yes, technically a valid use of the word "faster" but I think I'm far from the only one who assumed they were going to make a bandwidth claim rather than a latency claim.
Latency to the first byte is one thing, latency to the last byte, quite another. A slow-starting high-throughput connection will bring you the entire payload faster than an instantaneously starting but low-throughput connection. The larger the payload, the more pronounced is the difference.
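A toy model makes the crossover obvious (the link parameters below are invented for illustration, and slow-start dynamics are ignored):

```python
# Toy model: time to last byte = one-way latency + payload / throughput.
# Ignores TCP slow-start, framing, etc.; just illustrates the crossover.

def time_to_last_byte(latency_s: float, throughput_bps: float, payload_bits: float) -> float:
    return latency_s + payload_bits / throughput_bps

fast_pipe = dict(latency_s=10e-3, throughput_bps=10e9)   # 10 ms, 10 Gb/s
slow_pipe = dict(latency_s=1e-3,  throughput_bps=100e6)  # 1 ms, 100 Mb/s

for payload_bytes in (1_500, 1_000_000, 1_000_000_000):
    bits = payload_bytes * 8
    t_fast = time_to_last_byte(payload_bits=bits, **fast_pipe)
    t_slow = time_to_last_byte(payload_bits=bits, **slow_pipe)
    winner = "high-throughput" if t_fast < t_slow else "low-latency"
    print(f"{payload_bytes:>13,} B: {winner} link wins "
          f"({t_fast*1e3:.2f} ms vs {t_slow*1e3:.2f} ms)")

# A single 1500 B packet favors the low-latency link; by 1 MB the
# high-throughput link has already won, and the gap widens from there.
```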
ehh... latency is an objective term that, for me at least, has always meant something like "how quickly can you turn on a light bulb at the other end of this system"
I think it's just because ISPs have ingrained in people that "speed" means bandwidth when it comes to the internet. Improving bandwidth is pretty cheap compared to improving latency, because the latter requires changing the laws of physics.
If only the bottleneck was the laws of physics. In reality, it's mostly legacy infrastructure, which is of course much harder to change than the laws of physics.
Until pretty recently, throughput dominated the actual human-relevant latency of time-until-action-completes on most connections for most tasks. "Fast" means that your downloads complete quickly, or web pages load quickly, or your e-mail client gets all of your new mail quickly. In the dialup age, just about everything took multiple seconds if not minutes, so the ~200ish ms of latency imposed by the modem didn't really matter. Broadband brought both much greater throughput and much lower latency, and then web pages bloated and you were still waiting for data to finish downloading.
This coming from Arista is unsurprising, because their original niche was low latency, and the first industry where they made inroads against the 'incumbents' was finance:
> The low-latency of Arista switches has made them prevalent in high-frequency trading environments, such as the Chicago Board Options Exchange[50] (largest U.S. options exchange) and RBC Capital Markets.[51] As of October 2009, one third of its customers were big Wall Street firms.[52]
They've since expanded into more areas, and are said to be fairly popular with hyper-scalers. Often recommended in forums like /r/networking (support is well-regarded).
One of the co-founders is Andy Bechtolsheim, also a co-founder of Sun, who wrote Brin and Page one of the earliest cheques to fund Google.
It's not copper that's faster; it's the dielectric in between the twisted pair that has a lower index of refraction.
And, if we neglect how far the signal can travel, as the authors do, copper is always going to win this fight vs. fiber, because copper can use air as its dielectric but fiber cannot.
IIRC, the passive copper SFP Direct Attach cables are basically just a fancy "crossover cable" (for those old enough to remember those days). Essentially there is no medium conversion.
It's long been known that Direct Attach Copper (DAC) cables are faster for short runs. It makes sense, since there does not need to be an electrical-optical conversion.
I suppose you are right, but maybe we shouldn't say "it has been widely known". Lots of us who read HN come from the software side, and we coders often hand-wave on these topics when shooting the breeze -- much like how a casual car enthusiast might not imagine it was possible for a 6-cylinder engine to have more horsepower than a V8.
For the parent: it's not only bottlenecked at single hops but also hampered by latency compounding as the hops increase, depending on the complexity of the distributed system design.
High-frequency trading is the primary application, where 5ns can represent millions in profit as firms compete to execute trades first, but you'll also see benefits in distributed database synchronization, real-time financial risk calculations, and some specialized scientific computing workloads.
any high-utilization workload with a chatty protocol dominated by small IOs such as:
* distributed filesystems such as MooseFS, Ceph, and Gluster, used for hyperconverged infrastructure.
* SANs hosting VMs with busy OLTP databases
* OLTP replication
* CXL memory expansion where remote memory needs to be as close to inter-NUMA node latency as possible
Faster only because the distances involved are short enough that the PHY layer adds significant overhead. But if you somehow could wave a magic wand and make optical computing work, then fiber would be faster (& generate less heat).
> Faster only because the distances involved are short enough that the PHY layer adds significant overhead.
This specifically mentions the 7130 model, which is a specialized bit of kit, and which Arista advertises for (amongst other things):
> Arista's 7130 applications simplify and transform network infrastructure, and are targeted for use cases including ultra-low latency exchange trading, accurate and lossless network visibility, and providing vendor or broker based shared services. They enable a complete lifecycle of packet replication, multiplexing, filtering, timestamping, aggregation and capture.
It is advertised as a "Layer 1" device and has a user-programmable FPGA. Some pre-built applications are: "MetaWatch: Market data & packet capture, Regulatory compliance (MiFID II - RTS 25)", "MetaMux: Market data fan-out and data aggregation for order entry at nanosecond levels", "MultiAccess: Supporting Colo deployments with multiple concurrent exchange connection", "ExchangeApp: Increase exchange fairness, Maintain trade order based on edge timestamps".
Latency matters (and may even be regulated) in some of these use cases.
The PHY contributes only a 1ns difference, but the results also show a 400ps/m advantage for copper, which I can only assume comes from the difference in EM propagation speed in the medium.
No. Look at the graph -- the offset when extrapolated back to zero length is the PHY's contribution.
The differing slopes of the lines are due to the velocity factors of the cables. Light travels much faster in a vacuum than signals do in other media, and the lines diverge the longer you make them.
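A minimal sketch of that decomposition: fit latency = fixed_offset + slope × length for each medium; the intercept is the PHY's fixed cost and the slope difference is the per-meter velocity-factor gap. The data points below are invented to match the rough figures in this thread (~1ns fixed offset, ~0.4ns/m slope difference), not Arista's actual measurements:

```python
# Decompose cable latency into a fixed (PHY) offset and a per-meter slope
# via a least-squares line fit. Data points are invented to match the
# rough figures discussed in the thread, not actual measurements.
import numpy as np

lengths_m = np.array([1.0, 2.0, 3.0, 5.0])

# latency = intercept + slope * length (ns)
dac_ns   = 0.0 + 4.3 * lengths_m   # passive DAC: no PHY offset
fiber_ns = 1.0 + 4.7 * lengths_m   # optics: ~1 ns fixed cost, steeper slope

for name, y in (("DAC", dac_ns), ("fiber", fiber_ns)):
    slope, intercept = np.polyfit(lengths_m, y, 1)
    print(f"{name:5s} fixed offset {intercept:.2f} ns, slope {slope:.2f} ns/m")

# The intercepts differ by the PHY's ~1 ns; the slopes differ by ~0.4 ns/m,
# a gap that grows with cable length -- hence the diverging lines on the graph.
```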
It's true, but if you go look at their product catalog you will see none of their direct attach cables are longer than 5m, and the high-bandwidth ones are 2m. So, again, it's true, but also limiting in other ways.