No Data Corruption & Data Integrity
Find out what ‘No Data Corruption & Data Integrity’ means for the information stored in your web hosting account.
Data corruption is the damaging of files as a result of a hardware or software failure, and it is one of the main problems hosting companies face, since the larger a hard drive is and the more data is stored on it, the more likely it is for some of that data to become corrupted. Various fail-safes exist, yet the damage often happens silently, so neither the file system nor the administrators notice anything. As a result, a corrupted file is treated as a regular one and, if the drive is part of a RAID, the file is copied to all the other drives. In principle, this is done for redundancy, but in practice it makes the damage worse. Once a file is corrupted, it becomes partially or fully unreadable, so a text document will no longer open properly, an image will display a random blend of colors if it opens at all, and an archive will be impossible to unpack, which means you risk losing your site content. Although the most commonly used server file systems include various checks, they often fail to detect a problem early enough, or they need a long time to check all the files, during which the server is not operational.
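To see why an ordinary file system cannot catch this on its own, consider the short sketch below (plain Python with a hypothetical file name, not part of any hosting platform): a checksum recorded while a file is known to be good will no longer match once even a single bit changes, so the corruption becomes visible even though the file still looks perfectly normal to the file system.

    import hashlib
    from pathlib import Path

    def file_checksum(path: Path) -> str:
        # Return the SHA-256 digest of the file's contents.
        digest = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical usage: record the checksum while the file is known to be good,
    # then re-check it later. A silent bit flip changes the digest even though the
    # file system still treats the file as an ordinary, healthy file.
    good_digest = file_checksum(Path("article.txt"))
    # ... time passes, a drive misbehaves ...
    if file_checksum(Path("article.txt")) != good_digest:
        print("Silent corruption detected: contents no longer match the recorded checksum")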
-
No Data Corruption & Data Integrity in Cloud Website Hosting
We have resolved the issue of silent data corruption on our
cloud website hosting servers by using the cutting-edge Z File System, or ZFS. It is more advanced than most other file systems because it verifies every file in real time by means of a checksum - a digital fingerprint computed for each file. When you upload content to your account, it is stored on several NVMe drives and constantly synchronized between them for redundancy. ZFS regularly compares the checksums of all files and, whenever a file is detected as damaged, it is replaced right away with a healthy copy from another drive. Since this happens in real time, there is no risk that a corrupted file will remain or be duplicated on the other NVMes. ZFS requires a considerable amount of physical memory to carry out these real-time checks, and the advantage of our cloud hosting platform is that it consists of multiple powerful servers working together. If you host your websites with us, your information will remain intact no matter what.
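To make the idea more concrete, here is a deliberately simplified sketch in Python of what ‘compare the copies and repair the bad one’ means. The file paths and the per-file script are hypothetical; the real ZFS works on individual data blocks and keeps each block's checksum in its parent metadata, but the principle is the same: a copy whose checksum no longer matches is rewritten from a copy that does.

    import hashlib
    import shutil
    from pathlib import Path

    def checksum(path: Path) -> str:
        # SHA-256 digest of the file's contents.
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def heal_mirrored_file(copies: list[Path], good_digest: str) -> None:
        # Simplified "self-healing" pass over mirrored copies of one file: any
        # copy whose digest no longer matches the recorded good digest is
        # overwritten with a copy that still matches, so the damaged version
        # never spreads to the other drives.
        healthy = [p for p in copies if checksum(p) == good_digest]
        if not healthy:
            raise RuntimeError("no intact copy left - restore from backup")
        for p in copies:
            if checksum(p) != good_digest:
                shutil.copyfile(healthy[0], p)  # repair the damaged copy in place

    # Hypothetical mirrored copies of the same file on three drives:
    # heal_mirrored_file(
    #     [Path("/mnt/drive1/index.html"),
    #      Path("/mnt/drive2/index.html"),
    #      Path("/mnt/drive3/index.html")],
    #     good_digest="...",
    # )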
-
No Data Corruption & Data Integrity in Semi-dedicated Hosting
You will not experience any silent data corruption issues if you purchase one of our
semi-dedicated hosting plans, because the ZFS file system that runs on our cloud hosting platform uses checksums to make sure that all files are intact at all times. A checksum is a unique digital fingerprint assigned to each file stored on a server. Since we keep all content on a number of drives simultaneously, the same file carries the same checksum on every drive, and ZFS compares these checksums across the drives in real time. If it detects that a file is corrupted and its checksum differs from what it should be, it replaces that file with a healthy copy right away, so there is no chance of the bad copy being synchronized to the other disks. Few file systems employ checksums in this way, which makes ZFS far more reliable than file systems that cannot detect silent data corruption and end up copying bad files across drives.