Whilst solid-state drive (SSD) technology has been around since the 1990s, it has taken manufacturers a long time to convince businesses that SSDs are safe for storing sensitive data, writes Philip Bridge, President, Ontrack.
Upon launch, the SSD was marketed as a step up from the traditional hard disk drive, which relied on spinning magnetic platters to save data. The SSD, by contrast, had no moving parts and consisted of just an electronic controller and several storage chips.
Chips with everything
The use of the SSD continues to gather pace. The main benefit of electronic chips for storage is that they are much faster than a legacy HDD. A standard HDD consists of many mechanical parts and rotating discs, and when data needs to be accessed, repositioning the read/write head takes far longer than simply pushing data through electronic interfaces.
SSDs, by contrast, have a short access time, making them perfect for being used in environments where real-time access and transfer is a necessity – which describes most digitally transformed businesses today.
As we know from electronic devices in our private lives, the downside of SSDs built on NAND flash chips is that they have a limited life span. While standard HDDs can – in theory – last forever, an SSD has a built-in “time of death” that you can’t ignore. The storage cells inside the chips can physically endure only a finite number of write (program/erase) cycles; after that, the cells ‘forget’ new data.
Because of this – and to prevent certain cells from being used all the time while others sit idle – manufacturers build wear-levelling algorithms into the controller to distribute writes evenly across all cells. Businesses are encouraged to check the drive’s status regularly using a SMART analysis tool, which reports the remaining life span of an SSD in much the same way you would check the tread depth of the tyres on your car.
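The wear-levelling idea described above can be sketched in a few lines. This is a deliberately simplified, hypothetical model (real controllers work at page/block granularity with far more bookkeeping): it just tracks an erase count per block and always directs the next write to the least-worn block, so wear spreads evenly.

```python
# Minimal wear-levelling sketch (illustrative only, not a real controller):
# track an erase count per block and always write to the least-worn block.

class WearLevellingController:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks

    def pick_block(self):
        # Choose the block with the fewest erases so far.
        return min(range(len(self.erase_counts)), key=lambda b: self.erase_counts[b])

    def write(self, data):
        block = self.pick_block()
        self.erase_counts[block] += 1  # erase-before-write is what wears the cells
        return block

ctrl = WearLevellingController(num_blocks=4)
for _ in range(8):
    ctrl.write(b"page")
# After 8 writes, wear is spread evenly across the 4 blocks.
print(ctrl.erase_counts)  # [2, 2, 2, 2]
```

Without this policy, rewriting the same logical address repeatedly would exhaust one block while the rest stayed fresh; with it, every cell ages at roughly the same rate, which is also why the drive’s remaining life can be summarised as a single SMART figure.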
New data every day
When it comes to the time of death, manufacturers try to give an estimate with the so-called terabytes written (TBW) figure. Because of wear-levelling, data is distributed evenly over all cells, so the TBW figure indicates how much data can be written in total to all cells in the storage chips over the drive’s life span.
A typical TBW figure for a 250 GB SSD lies between 60 and 150 terabytes written. That means, to exceed a guaranteed TBW of 70, a user would have to write around 190 GB daily for a whole year (in other words, fill roughly three-quarters of the SSD with new data every day). In a consumer environment this is highly unlikely; in a 21st-century business, it is highly plausible.
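The arithmetic behind that claim is easy to check. A back-of-the-envelope calculation (assuming 1 TB = 1,000 GB and writes spread evenly over 365 days):

```python
# Back-of-the-envelope check of the TBW figure quoted above.
tbw_gb = 70 * 1000          # guaranteed 70 TBW, expressed in GB
capacity_gb = 250           # drive capacity
daily_gb = tbw_gb / 365     # GB written per day to exhaust the TBW in one year

print(round(daily_gb))                   # ~192 GB per day
print(round(daily_gb / capacity_gb, 2))  # ~0.77, i.e. three-quarters of the drive daily
```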
For example, the Samsung SSD 850 PRO SATA is stated to be “built to handle 150 terabytes written (TBW), which equates to a 40 GB daily read/write workload over a ten-year period,” with Samsung promising the product is capable of “withstanding up to 600 terabytes written (TBW).” If we consider that a normal office user writes somewhere between 10 and 35 GB a day, then even at 40 GB a day it would take nearly five years to reach the 70 TBW limit mentioned above.
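These lifespan figures can be sanity-checked with the same arithmetic (again assuming 1 TB = 1,000 GB and 365-day years):

```python
# How long a given daily write workload takes to reach a TBW rating.
def years_to_reach_tbw(tbw_tb, daily_gb):
    return (tbw_tb * 1000) / daily_gb / 365

print(round(years_to_reach_tbw(150, 40), 1))  # ~10.3 years: matches Samsung's quoted decade
print(round(years_to_reach_tbw(70, 40), 1))   # ~4.8 years for the 70 TBW drive above
```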
The most recent estimates from Google and the University of Toronto, after testing SSDs over a multi-year period, put the age limit at somewhere between five and ten years depending on usage – about the same life span as the average washing machine. The study found that the age of the SSD, rather than the amount of data written, was the primary determinant of when it stopped working.
What if the worst happens?
So, what do you do if the worst happens and your SSD does indeed stop working? It is no exaggeration to say that in this era where data is king, not having access to that data could prove to be catastrophic. To mitigate the impact, it is best to contact a professional data recovery service provider where possible.
When it comes to a physical fault, it will not be possible for users to recover or rescue their data themselves, however well-intentioned they may be. In fact, when the controller or a storage chip is malfunctioning, any attempt to recover data with a specialised data recovery software tool is even more dangerous, as it can lead to permanent data loss with no chance of recovery.
Even though the average SSD lifespan is longer than that of legacy HDDs, using this storage medium still poses a serious risk, as recovering data from failed SSDs is far more challenging. When the SSD controller chip is broken, the only solution is to find a functioning controller chip identical to the faulty one, then remove the damaged chip and swap in the replacement to regain access. What sounds quite simple is, in fact, very difficult. It’s not like merely changing a worn-down tyre, so beware!