【SSD Market Review】A Look at the 2014 SSD Controller Market
Published Jul. 29, 2014, 10:28 AM (GMT+8)
Last year, we discussed with our readers the challenges that the SSD market would face following the release of new Flash Memory products and the advancements in their technologies; we also touched on the importance of controller IC design, as well as explained the general impact that it would have on the performance of various types of SSDs. For this year, we will be bringing the IC design discussion a step further by focusing on the current state of controller developments and their role in shaping the future of SSD products.
Rise of the PCI-Express Interface
For a long time, many of the industry's most prominent IC manufacturers and designers believed that the read and write speeds of SSDs are determined mostly by their interface rather than by the quality of their internal components or IC design. For the most part, the belief appears well founded: since 2012, mainstream SSDs based on the Serial ATA 6.0 Gb/s interface have generally been unable to surpass the 550 MB/s mark, regardless of the quality of the components used or the speed of the controller ICs. As it became increasingly clear that the interface was what restricted an SSD's sequential read and write speeds, many manufacturers shifted their priorities from the traditional SATA interface toward alternatives capable of higher transfer speeds and better performance. The most readily available and flexible option for this purpose was PCI-Express. Unfortunately, as much as it solves the speed problem and boosts hardware performance, the PCI-Express interface comes with a few well-known problems that are worth pointing out.
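The 550 MB/s ceiling follows directly from the interface arithmetic. Below is a rough sketch of that arithmetic; the 8b/10b line-encoding figures are standard for both SATA 6 Gb/s and PCIe 2.0, while the function name and structure are our own illustration:

```python
# Illustrative bandwidth arithmetic for the interfaces discussed above.
# Both SATA 6 Gb/s and PCIe 2.0 use 8b/10b line encoding, so only 80%
# of the raw line rate carries payload data.

def effective_mb_per_s(raw_gbit_per_s: float, encoding_efficiency: float = 0.8) -> float:
    """Convert a raw line rate in Gbit/s to effective payload MB/s."""
    return raw_gbit_per_s * 1000 / 8 * encoding_efficiency

sata3 = effective_mb_per_s(6.0)          # SATA 6 Gb/s: 600 MB/s payload ceiling
pcie2_x2 = 2 * effective_mb_per_s(5.0)   # two PCIe 2.0 lanes at 5 GT/s each

print(f"SATA 6 Gb/s payload ceiling: {sata3:.0f} MB/s")
print(f"PCIe 2.0 x2 payload ceiling: {pcie2_x2:.0f} MB/s")
```

Protocol overhead shaves the real-world SATA figure down from 600 MB/s to roughly the 550 MB/s plateau the article describes, while even a modest two-lane PCIe 2.0 link offers well over 50% more headroom.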
The first major problem, as has been reported in the past, is the extra work needed to truly optimize its performance. Because virtually every SSD controller designed before 2013 was built around the Serial ATA interface, a drive that wanted to use PCI-Express needed an extra "bridge chip" to translate between Serial ATA and PCI-Express signaling. Though converting between SATA and PCI-e is ultimately faster than converting between two SATA interfaces (i.e., SATA to SATA), the resulting speed isn't necessarily high enough to justify the extra cost of the relatively expensive bridge chips.
Another PCI-e related problem worth mentioning concerns hardware specifications and compatibility. The 2.5-inch SSD form factor, as many will know, does not fit directly into personal computers larger than notebooks. Those who want to use SSDs in their desktop computers must either spend extra money on 2.5-inch-to-3.5-inch adapter brackets or install a purpose-built expansion chassis. The modifications required before SSDs can be used in widely deployed PCs, along with their generally high cost, make the already limited market for PCI-Express SSDs even smaller, and in turn keep their prices high.
Thankfully, after the Intel-led working group published the first NVMHCI specification in 2008, laying the groundwork for what would become NVMe, various manufacturers began developing special drivers that let users boot from their PCI-Express SSDs. This is among the major developments within the IC industry that is boosting adoption of the PCI-e interface among the masses. Looking at the 2014 data released by the industry's existing IC manufacturers, one finds that virtually all of today's controller ICs have begun to support the PCI-Express interface. With that support, hardware makers can create thin yet highly effective devices that are natively compatible with the PCI-e format. Should manufacturers eventually reach a consensus on a universal set of specifications, the PCI-Express interface could become mainstream within the industry and be adopted by more and more consumers in the near future.
Key Technologies Used by New LSI SandForce SSDs
LSI SandForce's recently announced 3000 series is among the company's latest products to support the PCI-Express interface mentioned earlier. Aside from PCI-e compatibility, what is also noteworthy about the 3000 series is its over-provisioning and dynamic over-provisioning features. Before going over the details, let us first take a look at the table below (data obtained from LSI):
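To make the over-provisioning idea concrete, here is a minimal sketch of how the commonly cited over-provisioning percentage is computed. The capacity figures below are illustrative examples of typical configurations, not numbers taken from LSI's table:

```python
# Over-provisioning (OP) is the share of raw flash reserved for the
# controller (garbage collection, wear leveling, spare blocks) rather
# than exposed to the user. Figures below are illustrative, not LSI data.

GIB = 1024 ** 3   # flash is manufactured in binary gibibytes
GB = 1000 ** 3    # drives are marketed in decimal gigabytes

def op_percent(raw_bytes: int, user_bytes: int) -> float:
    """Common OP formula: (raw - user) / user * 100."""
    return (raw_bytes - user_bytes) / user_bytes * 100

raw = 256 * GIB   # 256 GiB of raw NAND on the drive
for user_gb in (256, 240, 200):
    print(f"{user_gb} GB exposed -> {op_percent(raw, user_gb * GB):.1f}% OP")
```

Dynamic over-provisioning, as the name suggests, lets the controller vary this reserve at runtime instead of fixing it at manufacture, but the underlying accounting is the same.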
In the case of the 3000 series, one particularly interesting feature worth talking about is its adjustable Error Correction Code (ECC) engine. The feature is implemented through a technology known as "LSI SHIELD," and is reportedly meant to facilitate the controller manufacturer's entry into the high-end enterprise SSD market. By tuning an SSD's ECC with the help of digital signal processing (DSP), both LSI and its manufacturing clients can decide how strong the error correction of a particular SSD should be. A stronger setting yields more robust correction, but it also demands more parity data and more processing time, all of which ultimately affects how much of an SSD's raw capacity remains available to the user.
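The capacity side of that trade-off can be sketched with a common rule of thumb for BCH codes, the family widely used in NAND controllers of this era: correcting t bit errors in a roughly 1 KiB codeword costs about m * t parity bits, with m = 14. All of the page-geometry numbers below are illustrative assumptions, not LSI SHIELD's actual parameters:

```python
# Rough sketch of the ECC-strength vs. capacity trade-off described above.
# For a BCH code over a ~1 KiB data chunk, correcting t bit errors costs
# roughly m * t parity bits with m = 14. The page geometry here is an
# illustrative assumption, not LSI SHIELD's actual configuration.

def bch_parity_bytes(t: int, m: int = 14, chunks_per_page: int = 16) -> int:
    """Parity bytes per page for t-bit correction on each 1 KiB chunk."""
    bits = m * t * chunks_per_page
    return (bits + 7) // 8   # round up to whole bytes

SPARE = 1872   # spare bytes per 16 KiB page (illustrative)

for t in (40, 55, 72):
    parity = bch_parity_bytes(t)
    fits = parity <= SPARE
    print(f"t={t}: {parity} parity bytes/page, fits in spare area: {fits}")
```

Once the parity outgrows the page's spare area, it has to be carved out of the data area instead, which is exactly how a stronger ECC setting ends up shrinking the capacity available to the user.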
So, why bother with all these complications? The reason is relatively straightforward: in the future, much of the flash memory used in SSDs will likely be TLC, which is known for its short life span compared to MLC. Without a strong, reliable protection mechanism, users are unlikely to want to trust their data to such storage, let alone pay for it. As for enterprise-level products, even if the longer-lived MLC format is ultimately chosen over TLC, users may still want a few extra layers of protection in their devices. The two internal adjustment features mentioned earlier generally improve the reliability of SSDs, but do so at the cost of available capacity and performance. LSI has formally addressed this issue in one of its technical white papers, stating that "there are no perfect storage systems out there." Since strong performance and strong reliability cannot both be maximized at the same time, manufacturers often have to think hard about striking the proper balance between the two.
One other major LSI feature that isn't particularly new but is still worth mentioning is RAISE (Redundant Array of Independent Silicon Elements). The main idea behind this feature is that when an SSD fails, whether through natural wear or an accident, it enters a "read-only" mode that ensures the contents of the storage "can still be properly retrieved." The only thing that cannot be performed in this mode, not surprisingly, is writing. Through proper use of RAISE, a user can effectively preserve their data by cloning it from the failing SSD onto another device.
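The read-only fallback behavior can be sketched as a small state machine: once a write fails, the drive refuses further writes but keeps serving reads, so everything can still be copied off. Every class and function name below is invented for illustration and has nothing to do with LSI's actual firmware interfaces:

```python
# Hypothetical sketch of the read-only fallback described above. A write
# failure flips the drive into read-only mode; reads keep working, so
# the contents can be cloned to a healthy device. All names are invented.

class ReadOnlyMode(Exception):
    pass

class FailingSSD:
    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.read_only = False

    def write(self, index, data):
        if self.read_only:
            raise ReadOnlyMode("drive is in read-only mode")
        # Simulate a media failure on this write: lock into read-only mode.
        self.read_only = True
        raise ReadOnlyMode("write failed; switching to read-only mode")

    def read(self, index):
        return self.blocks[index]   # reads remain reliable

def clone(src, dst_blocks):
    """Copy every block off the failing drive onto a healthy target."""
    for i in range(len(src.blocks)):
        dst_blocks[i] = src.read(i)

drive = FailingSSD([b"boot", b"docs", b"photos"])
try:
    drive.write(0, b"new data")
except ReadOnlyMode as err:
    print(err)

target = [None] * 3
clone(drive, target)
print(target == drive.blocks)   # True: data fully recovered
```

The point of the design is that a failing drive degrades into a recoverable state rather than a dead one, which is what makes the clone-and-replace workflow possible.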
Marvell's New Application Technology
Marvell, for its part, appears to be spending the majority of its resources these days on promoting the new "SATA Express" interface, which is currently believed to have the potential to exceed the limits of the Serial ATA 6.0 Gb/s format. The controller Marvell is reportedly using to promote this interface is the 88SS1083, the first to successfully integrate SATA and PCI-Express signaling; it can handle two PCI-Express lanes to achieve ideal transfer speeds.