  • Probably best to look at it as a competitor to a Xeon D system, rather than any full-size server.

    We use a few of the Dell XR4000 at work (https://www.dell.com/en-us/shop/ipovw/poweredge-xr4510c), as they’re small, low power, and able to be mounted in a 2-post comms rack.

    Our CPU of choice there is the Xeon D-2776NT (https://www.intel.com/content/www/us/en/products/sku/226239/intel-xeon-d2776nt-processor-25m-cache-up-to-3-20-ghz/specifications.html), which features 16 cores @ 2.1GHz and 32 PCIe 4.0 lanes, and is rated at 117W.

    The 4584PX, ostensibly the top of this range, also has 16 cores but at double the clock speed, with 28 PCIe 5.0 lanes and a 120W rating; it seems like it would be a perfectly fine drop-in replacement for that.

    (I will note one significant difference: the Xeon does come with a built-in NIC, in this case the 4-port 25Gb “E823-C”, saving you space and PCIe lanes in your system.)

    As more PCIe 5.0 expansion options land, I’d expect the need for large quantities of PCIe lanes to diminish somewhat. A 100Gb NIC would only require an x4 port, and even an x8 HBA could push more than 15GB/s. Indeed, if you compare the total possible PCIe throughput of those CPUs, 32x 4.0 is ~63GB/s, while 28x 5.0 gets you ~110GB/s.
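
    To show where those ballpark figures come from, here’s the back-of-envelope version (a rough sketch: lane counts from the spec sheets, per-lane rates from the signalling rate and 128b/130b encoding, ignoring packet overhead):

    ```python
    # Per-lane PCIe throughput in GB/s: signalling rate (GT/s) x 128b/130b encoding / 8
    GB_PER_LANE = {
        "4.0": 16 * 128 / 130 / 8,  # ~1.97 GB/s
        "5.0": 32 * 128 / 130 / 8,  # ~3.94 GB/s
    }

    def total_gbps(gen: str, lanes: int) -> float:
        """Aggregate single-direction bandwidth in GB/s for `lanes` of PCIe `gen`."""
        return GB_PER_LANE[gen] * lanes

    print(f"Xeon D-2776NT, 32x 4.0: ~{total_gbps('4.0', 32):.0f} GB/s")  # ~63
    print(f"4584PX,        28x 5.0: ~{total_gbps('5.0', 28):.0f} GB/s")  # ~110
    ```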

    Unfortunately, we’re now at the mercy of what server designs these wind up in. I have to say, though, I fully expect it’s going to be smaller designs marketed as “edge” compute, like that Dell system.

  • Unfortunately, what’s shipping today seems like it would offer maybe half that.

    For the batteries that were announced this past week, a larger-than-refrigerator-sized cabinet held a capacity of around 15kWh.

    That’s around half the energy density by mass of lithium batteries, and on the order of a sixth of the density by volume.
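
    (Back-of-envelope on the volume figure, with the guesses labelled: the cabinet volume and the Li-ion comparison number below are assumptions for illustration, not announced specs.)

    ```python
    # Rough volumetric density comparison; the *_assumed values are guesses
    cabinet_kwh = 15               # announced capacity
    cabinet_litres_assumed = 1000  # a larger-than-refrigerator cabinet, roughly
    na_wh_per_l = cabinet_kwh * 1000 / cabinet_litres_assumed  # ~15 Wh/L system-level
    li_wh_per_l_assumed = 90       # ballpark system-level Li-ion figure
    print(f"~{na_wh_per_l:.0f} Wh/L vs ~{li_wh_per_l_assumed} Wh/L: "
          f"about 1/{li_wh_per_l_assumed / na_wh_per_l:.0f} the density by volume")
    ```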

    Now if only we could come up with a system where your car could be charged while stopped at traffic lights, we might be onto a winner (:

    Considering, however, that the price of sodium is around 1-2% that of lithium, I expect we’ll see significant R&D and those numbers will quickly start to improve.

  • I’d be curious to see how much cooling a SAS HBA would get in there. Looking at Broadcom’s 8-external-port offerings, the 9300-8e reports 14.5W typical power consumption, the 9400-8e 9.5W, and the 9500-8e only 6.1W. If you were considering one of these, it definitely seems it’d be worth dropping the money on the newest model of HBA.

    I’m definitely curious. I’d personally only need it to be a NAS + Plex server, for which either of the CPUs they’re offering is a bit overkill, but it’s nice that it fits a decent amount of RAM, and that you’re not forced to choose between adding storage and networking.

  • Free for personal use, so yes-ish. That’ll certainly be a deal-breaker for some.

    Realistically, people using it for personal use would probably upgrade to the next LTS shortly after it’s released (or, in Ubuntu fashion, once the xxxx.yy.1 release is out). People who don’t qualify to use it for free anyway are more likely to be the ones keeping the same version for >5 years.

  • Specs look good for the price, and those machines work great with Linux (I’m using Ubuntu 22.04 on the slightly earlier 9310 right now).

    The only slight downside of the 9315 is that the SSD is soldered to the motherboard. Make sure you back up your data regularly, because there might be no way to get anything off the machine if it breaks.

    There’s also something of a lack of I/O; just one USB-C port on each side (which is nice, because you can plug the charger into either side). But I’ve had no issues with Bluetooth headphones, and monitors with USB-C have always worked great for plugging in larger numbers of peripherals.

  • To expand on @doeknius_gloek’s comment, those categories usually directly correlate to a range of DWPD (endurance) figures. I’m most familiar with buying servers from Dell, but other brands are pretty similar.

    Usually, the split is something like this:

    • Read-intensive (RI): 0.8 - 1.2 DWPD (commonly used for file servers and the like, where data is relatively static)
    • Mixed-use (MU): 3 - 5 DWPD (normal for databases or cache servers, where data is changing relatively frequently)
    • Write-intensive (WI): ≥10 DWPD (for massive databases, heavily-used write cache devices like ZFS ZIL/SLOG devices, that sort of thing)

    (Consumer SSDs frequently have endurances only in the 0.1 - 0.3 DWPD range for comparison, and I’ve seen as low as 0.05)

    You’ll also find these tiers roughly line up with the SSDs that expose different capacities while having the same amount of flash inside; where a consumer drive would be 512GB, an enterprise RI would be 480GB, and a MU/WI only 400GB. Similarly 1TB/960GB/800GB, 2TB/1.92TB/1.6TB, etc.
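
    (A quick illustration of the spare area those tiers imply, assuming the same raw flash inside each and glossing over the GB-vs-GiB wrinkle:)

    ```python
    # Spare flash implied by each capacity tier, relative to ~512GB of raw NAND
    raw_gb = 512
    for usable_gb in (480, 400):
        spare = raw_gb / usable_gb - 1
        print(f"{usable_gb}GB usable from {raw_gb}GB raw: ~{spare:.0%} over-provisioning")
    # ~7% for RI, ~28% for MU/WI; the extra spare area is what buys the endurance
    ```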

    If you only get a TBW figure, just divide it by the capacity and the length of the warranty. For instance, a 1.92TB 1DWPD drive with a 5y warranty might list 3.5PBW.
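
    A quick sketch of that conversion (assuming 365-day years; vendors’ exact warranty-day counts may differ slightly):

    ```python
    def dwpd(tbw: float, capacity_tb: float, warranty_years: float) -> float:
        """Drive writes per day implied by a TBW/PBW rating."""
        return tbw / (capacity_tb * warranty_years * 365)

    # The example above: 3.5PBW (3500 TBW) on a 1.92TB drive over 5 years
    print(f"~{dwpd(3500, 1.92, 5):.2f} DWPD")  # ~1.00
    ```
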
  • As of USB-PD 3.1 there are now eight fixed voltages - 5, 9, 12, 15, 20, 28, 36, and 48V - and two variable-voltage modes: PPS with 3.3 - 21V in 0.02V increments, and AVS with 15 - 48V in 0.1V increments.

    Combined with a few different current limits, some of these features being optional, and then doubling down on what your cable does or doesn’t support, it’s amazing anything gets charged at all.
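
    To put numbers on those combinations, here’s a sketch of the ceiling at each fixed voltage (pairings per my reading of the PD 3.1 spec; anything above 3A, and all the EPR levels, also needs a 5A e-marked cable):

    ```python
    # Maximum current at each fixed voltage (V: A); 28/36/48V are the EPR levels
    FIXED_MAX_AMPS = {5: 3, 9: 3, 12: 3, 15: 3, 20: 5, 28: 5, 36: 5, 48: 5}

    for volts, amps in FIXED_MAX_AMPS.items():
        print(f"{volts:>2}V x {amps}A = {volts * amps:>3}W")
    # tops out at 48V x 5A = 240W, the USB-PD 3.1 EPR maximum
    ```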