TORmem is proud to announce that we are joining the OpenCAPI consortium, a leading industry group dedicated to enabling the future of low-latency disaggregated memory.
Current and emerging workloads are pushing the bounds of the local memory available to them on a typical server. AI and ML workloads alone demand amounts of fast memory that are impossible or impractical to serve with the legacy model: a small pool of fast memory installed in each server, backed by large amounts of slow memory reached over the network.
What makes memory disaggregation practical is an interface that connects the processors within the server to this external disaggregated memory at speeds equal, or nearly equal, to those of the server’s local memory. This is essential for workload performance.
TORmem and OpenCAPI
TORmem designs and manufactures disaggregated memory appliances across a range of capabilities and price points. To make these a reality, we need interconnects built to deliver the performance that disaggregated memory requires. OpenCAPI’s Open Memory Interface (OMI) is a serial differential bus providing 64 gigabytes per second of bandwidth and supporting up to 256 gigabytes of capacity per channel. It is designed to give a CPU high-bandwidth, low-latency access to fast memory; these characteristics make it ideal for disaggregated memory, and we are implementing it in our products alongside other new technologies such as differential DIMMs (DDIMMs).
By adopting OMI and working with our fellow members of the OpenCAPI consortium, we will contribute valuable real-world experience with disaggregated memory technologies, helping the consortium drive further adoption of OMI across a wide range of disaggregated memory use cases in multiple industries.
As memory disaggregation matures and begins to move into the mainstream, the industry’s focus will shift from initial technical design wins to the economics of the technology at both small and large scale. With new hardware-centric technologies, the supply chain is a vital element of this economic equation. Disaggregation will introduce new appliances to the data center and change how memory is purchased for servers.
The cost of the large quantities of fast DRAM that a memory disaggregation appliance hosts will limit adoption of this technology if every end user is expected to acquire the memory themselves. TORmem believes that our expertise in supply chain operations and our relationships with key manufacturers and suppliers will make memory disaggregation practical for a wide range of customers, not just those with the very largest deployments and teams.
By joining OpenCAPI, we plan to increase community adoption of the consortium’s technologies while simplifying the supply chain for everyone exploring disaggregated memory and its many use cases. This will let customers large and small benefit from better pricing on cutting-edge memory at scale, making disaggregated memory economical for everyone involved.
At TORmem, we believe in One Memory for All: our vision of high-speed disaggregated memory at data center scale for enterprise, cloud, and HPC use cases. Decouple your memory from your servers to speed up today’s applications and enable tomorrow’s, all while optimizing costs.