September 15, 2020



We’ve got more to show you.

KALISTA IO will be presenting at the SNIA Storage Developer Conference this year. Join us for an in-depth technical discussion of what makes our Phalanx storage system unique. We will talk about creating a more user-friendly Host Managed SMR experience and enabling storage devices to perform at their best. See you next Wednesday, 09.23.20!

August 14, 2020



We’ve got something awesome to show you.

KALISTA IO and Western Digital are innovating to address the explosive growth of digital data.
Join us for a discussion and demo on reducing tail latency and cost of ownership with Phalanx storage system and Ultrastar® Host-Managed SMR HDDs.

May 21, 2020



Kalista IO and Western Digital
Enabling optimal performance and TCO at scale


Uncompromising reliability and performance,
with effortless simplicity.

Western Digital Ultrastar® DC HC620 Host Managed SMR HDD

Built for data center class workloads, Ultrastar DC HC620 is ideal for dense scale-out storage systems. It delivers the uncompromising product reliability necessary for private and public cloud enterprise applications. Ultrastar DC HC620 is built on the proven and mature HelioSeal® platform to deliver an exceptional watts/TB power footprint for online storage.

Kalista IO Phalanx Storage System

An intelligent storage system built for software-defined environments and next generation storage devices. Phalanx is engineered to deliver consistent and predictable performance at scale. Most importantly, it is designed to fit easily into existing workflows and orchestration/virtualization environments without disruption.

Phalanx and Ultrastar DC HC620 for Optimal Total Cost of Ownership (TCO) and Performance at Scale

Western Digital and Kalista IO are working together to help customers meet the challenges of big data. We are collaborating to enable, simplify, and optimize distributed storage systems with Host Managed SMR devices. Our joint solution delivers performance consistency, predictability, and optimal TCO at scale.

November 03, 2019



Hadoop and Ceph with Host Managed SMR.
No application changes, no kernel modifications.
A solution that just works.


16x

more IOPS
with fio random write¹

19%

faster throughput
with Hadoop TestDFSIO read²

58%

higher IOPS
with Ceph Rados write bench³

10x

better performance consistency
with Ceph Rados write bench³


Kalista IO is excited to announce the industry's first storage system that enables Apache Hadoop and Ceph storage clusters to finally benefit from Host Managed SMR technology.

As the amount of digital data grows exponentially, the ability to store and retrieve it all in a cost-effective and performant manner becomes paramount. Host Managed SMR devices offer performance consistency, predictability, and a lower TCO than conventional magnetic recording devices. However, these benefits come at the cost of software changes and incompatibility: Host Managed SMR devices are not natively compatible with existing storage stacks and applications, so users must make additional investments throughout the stack. These factors have long hindered the adoption of Host Managed SMR, frustrating storage vendors and users alike.

Designed and optimized to work with commodity hardware and next generation storage technologies, Kalista Phalanx enables simple and transparent use of Host Managed SMR devices - allowing users to benefit from Host Managed SMR without the associated drawbacks. With no application or kernel changes required, Phalanx minimizes the friction and disruption of deploying Host Managed SMR and future storage technologies.

Kalista IO is enabling the democratization of affordable and performant storage solutions worldwide. We look forward to having you join us!

  1. Testing conducted by Kalista IO in August 2019 using preproduction Olympus (Phalanx) software with Linux kernel 4.18.0-25-generic, and Intel® Core™ i7-4771 CPU 3.50GHz with 16GiB DDR3 Synchronous 2400 MHz RAM, and Western Digital Ultrastar DC HC620 host managed SMR and Ultrastar DC HC530 CMR drives connected through SATA 3.2, 6.0 Gb/s interface. Tested with Flexible I/O tester (fio) version 3.14-11-g308a. Random write bench ran for 1800 seconds with 4KB block and 200GB file size, 64 concurrent threads each with queue depth of 1. Executed 3 times to capture average and standard deviation IOPS values.
  2. Testing conducted by Kalista IO in August 2019 using preproduction Olympus (Phalanx) software with Linux kernel 5.0.0-25-generic, and Intel® Core™ i7-4771 CPU 3.50GHz with 16GiB DDR3 Synchronous 2400 MHz RAM, and Western Digital Ultrastar DC HC620 host managed SMR and Ultrastar DC HC530 CMR drives connected through SATA 3.2, 6.0 Gb/s interface. Tested with Apache Hadoop version 3.2.0 in single node pseudodistributed mode with single block replica, and TestDFSIO version 1.8 on OpenJDK version 1.8.0_222. TestDFSIO read benchmark ran with 32 files, 16GB each for a 512GB dataset. Executed 3 times to capture average and standard deviation throughput values.
  3. Testing conducted by Kalista IO in August 2019 using preproduction Olympus (Phalanx) software with Linux kernel 5.0.0-25-generic, and Intel® Core™ i7-4771 CPU 3.50GHz with 16GiB DDR3 Synchronous 2400 MHz RAM, and Western Digital Ultrastar DC HC620 host managed SMR and Ultrastar DC HC530 CMR drives connected through SATA 3.2, 6.0 Gb/s interface. Tested with Ceph version 13.2.6 Mimic in single node mode with single object replica. Rados write bench ran with 4MB object and block (op) size with 16 concurrent operations for 1800 seconds to capture average and standard deviation IOPS values.
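For readers who want to reproduce runs like those described in the footnotes above, the benchmark parameters map onto standard tool invocations. The commands below are a sketch assembled from the listed parameters only (4KB/4MB sizes, 64 threads, queue depth 1, 16 concurrent ops, 1800-second runtimes, 32 x 16GB files); the device path, Ceph pool name, jar location, and I/O engine choice are assumptions that will differ per setup, and none of this reflects Phalanx-specific configuration.

```shell
# fio random write (footnote 1): 1800 s, 4 KB blocks, 200 GB target,
# 64 concurrent threads, queue depth 1 each.
# /dev/sdX is a placeholder for the drive under test; libaio/direct I/O
# are assumed, as the footnote does not state the engine.
fio --name=randwrite --filename=/dev/sdX --rw=randwrite \
    --bs=4k --size=200g --numjobs=64 --iodepth=1 \
    --ioengine=libaio --direct=1 --time_based --runtime=1800 \
    --group_reporting

# Hadoop TestDFSIO read (footnote 2): 32 files of 16 GB each (512 GB total).
# A -write pass must populate the dataset before -read; the jar path
# varies by Hadoop version and install layout.
JOBCLIENT_JAR=$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar
hadoop jar $JOBCLIENT_JAR TestDFSIO -write -nrFiles 32 -size 16GB
hadoop jar $JOBCLIENT_JAR TestDFSIO -read  -nrFiles 32 -size 16GB

# Ceph RADOS write bench (footnote 3): 4 MB object/op size,
# 16 concurrent operations, 1800 s. "testpool" is a placeholder pool name.
rados bench -p testpool 1800 write -b 4194304 -t 16 --no-cleanup
```

Averages and standard deviations in the results above come from repeating each run three times, so any comparison should do the same rather than rely on a single pass.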

"Great deeds are usually wrought at great risks." — Herodotus