
LightBits’ project has drawn attention from Dell EMC, Cisco and Micron, which have already invested $50m. The attraction is that LightBits offers the equivalent of iSCSI for the NVMe generation. Founder and chairman is Avigdor Willenz, who previously developed an Ethernet switch controller chip at startup Galileo Technologies, which was sold to Marvell Semiconductor in 2000. He also co-founded Annapurna Labs, which was bought by Amazon in 2015.

Normally, NVMe and Ethernet-based array-type storage products use controller cards that support ROCE – in other words, a protocol that sends the contents of server RAM directly over the network. ROCE strips out protocol layers from TCP/IP to maximise the proportion of useful data in the traffic, and in this way compensates for the relative slowness of Ethernet compared with the internal bus that NVMe relies on. On the other hand, the technique needs dedicated ROCE cards and switches, which are expensive compared with traditional network switches – a drawback the LightBits solution avoids.

Last July, SolarFlare had the same idea as LightBits – to offer NVMe-over-fabrics that is directly compatible with the 100 million Ethernet ports installed in the enterprise every year. SolarFlare supplies its XtremeScale adapters, which appear to the host as NVMe drives. These translate storage I/O requests into TCP/IP packets, in theory at least as rapidly as ROCE cards do, but with the advantage that the packets sent over the network are routable on standard Ethernet. SolarFlare claims to achieve the same performance as ROCE systems, at around 120μs (read) and 46μs (write).
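
To make the contrast with RDMA-based NVMe-over-fabrics concrete, the sketch below attaches a Linux host to an NVMe/TCP target using only the standard in-box nvme-cli tooling and the kernel’s nvme-tcp driver – no special adapter or switch required. The target address and NQN are placeholders rather than values published by LightBits or SolarFlare, and neither vendor’s own management tools are shown.

    # Minimal host-side attach to an NVMe/TCP target over ordinary Ethernet.
    # Assumes root privileges, nvme-cli installed and a kernel with nvme-tcp.
    # The address and NQN below are illustrative placeholders.
    import subprocess

    TARGET_ADDR = "192.168.10.20"                    # hypothetical target/LightBox IP
    TARGET_NQN = "nqn.2019-01.example:subsystem1"    # hypothetical subsystem NQN

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["modprobe", "nvme-tcp"])                    # load the NVMe/TCP initiator
    run(["nvme", "discover", "-t", "tcp",
         "-a", TARGET_ADDR, "-s", "8009"])           # query the discovery service
    run(["nvme", "connect", "-t", "tcp", "-n", TARGET_NQN,
         "-a", TARGET_ADDR, "-s", "4420"])           # remote namespaces appear as /dev/nvmeXnY
    run(["nvme", "list"])                            # confirm the new block devices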

LightBits claims its hardware can achieve 5 million IOPS. LightBits Labs aims to compete with the NVMe-over-fabrics (NVMf) solutions coming to market that are based on more costly NVMf schemes, such as RDMA over converged Ethernet (ROCE), iWARP or InfiniBand. Its savings come from the use of already-deployed standard network switches, with no need to install any special card in the server.

LightBits’ solution is based on its LightOS operating system, which converts TCP/IP packets to NVMe streams on the fly. The hardware component is an x86 server fitted with NVMe drives and Ethernet ports – the LightBox – which is accessible to Linux servers on the same network that have an NVMe-over-TCP/IP driver installed. This set-up allows hosts to access storage with a latency of less than 200μs, a similar level of performance to flash drives directly attached to servers.

In the LightBox array, the key function of the OS is to route parallel communications between the Ethernet ports and the NVMe cards. But LightOS also brings advanced functionality that includes thin provisioning, compression, RAID, erasure coding, balancing of writes across media to extend their life, and a multi-tenant management function that allows capacity to be divided between different tenants. The LightBox can be deployed with a LightField acceleration card that, with the help of an ASIC, compresses and decompresses data at around 20GBps – about four times the rate obtained when using the server’s x86 processor alone.
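
The sub-200μs figure is the vendor’s own. A rough way to check what a client actually sees, once a remote namespace shows up as a local block device, is to time small direct reads against it. The sketch below assumes a hypothetical /dev/nvme0n1 device and root access; it is a crude measurement that includes Python and syscall overhead, not a substitute for a proper benchmark tool such as fio.

    # Time 4KiB direct reads against a (hypothetical) NVMe/TCP namespace.
    import mmap, os, statistics, time

    DEV = "/dev/nvme0n1"   # placeholder: whatever device 'nvme connect' created
    BLOCK = 4096
    SAMPLES = 1000

    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)   # O_DIRECT bypasses the page cache
    buf = mmap.mmap(-1, BLOCK)                     # page-aligned buffer, as O_DIRECT requires

    latencies_us = []
    for i in range(SAMPLES):
        os.lseek(fd, i * BLOCK, os.SEEK_SET)       # 4KiB-aligned offsets
        t0 = time.perf_counter()
        os.readv(fd, [buf])
        latencies_us.append((time.perf_counter() - t0) * 1e6)

    os.close(fd)
    print(f"median 4KiB read latency: {statistics.median(latencies_us):.0f} us")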

LightBits’ Super SSD comes in 2U format nodes, each of which can house up to 24 NVMe drives of between 4TB and 11TB. Raw maximum capacity is 264TB, which translates to 1PB of effective storage after compression/deduplication. There are two 100Gbps Ethernet ports per node, and each node can accommodate two LightField cards, each connected to 12 NVMe drives. Internal latency of the drives is lower than 100μs, while latency from a server connected at 100Gbps is lower than 200μs.
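
As a back-of-envelope check on those capacity figures, the arithmetic below shows what the quoted 1PB of effective storage implies about the data-reduction ratio; the ratio actually achieved will depend entirely on how compressible and duplicate-heavy the stored data is.

    # What the quoted Super SSD figures imply, taking 1PB as 1,000TB.
    DRIVES_PER_NODE = 24
    MAX_DRIVE_TB = 11

    raw_tb = DRIVES_PER_NODE * MAX_DRIVE_TB     # 24 x 11TB = 264TB raw, as quoted
    effective_tb = 1000                         # the "1PB effective" claim
    reduction_ratio = effective_tb / raw_tb

    print(f"raw capacity: {raw_tb} TB")
    print(f"implied compression/dedupe ratio: {reduction_ratio:.1f}:1")   # about 3.8:1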
