You are going to create live zones on your server. Disk space is critical on this server, so you need to reduce the amount of disk space required for these zones. Much of the data required for each of these zones is identical, so you want to eliminate the duplicate copies of data and store only the data that is unique to each zone.
Which two options provide a solution for eliminating the duplicate copies of data that are common to all of these zones?
In Oracle Solaris 11, you can use the deduplication (dedup) property to remove redundant data from your ZFS file systems. If a file system has the dedup property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored, and common components are shared between files.
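As a minimal sketch of how this is applied, assuming the zone roots are placed on a ZFS file system named rpool/zones (the pool and dataset names here are chosen for illustration, not taken from the question):

# Enable deduplication on the file system that will hold the zone data
zfs set dedup=on rpool/zones

# Confirm that the property is now active
zfs get dedup rpool/zones

Once dedup is enabled on that dataset, blocks written for each zone that are identical to blocks already stored are not written again, which is what keeps the on-disk footprint down when the zones share most of their data.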