A company with a first-generation deduplication appliance is migrating to an HP StoreOnce 6200 Backup System. After migration, what will be the size of the deduplication block?
A. 4 KB
B. 8 KB
C. 16 KB
D. 32 KB
Correct Answer: A
Explanation/Reference:
Explanation:
Deduplication Optimization
HP created a highly optimized deduplication approach that introduces time- and space-saving techniques. With the goal of eliminating the maximum amount of redundancy during data inspection, while also maintaining a small index to deliver the fastest performance, the company focused on two components of its deduplication approach: an average variable chunk size of 4 KB, and a sparse index.

HP's Adaptive Micro-Chunking uses variable-length data segments, or chunks. The backup stream is broken into variable-length segments averaging approximately 4 KB, which are examined for redundancy against previously stored information. Smaller segments mean more chunks and more index comparisons, which in turn means a higher potential to locate and eliminate redundancy (and produce higher reduction ratios). Comparable solutions use block sizes that range from 8 KB to 32 KB. The trade-off with small chunk sizes is a greater number of index look-ups, which could mean slower deduplication performance.

However, HP Labs developed HP Predictive Acceleration technology to maintain performance and reduce RAM requirements. By using a subset of key values stored in memory, StoreOnce determines a small number of sequences already stored on disk that are similar to any given input sequence, a technique HP refers to as sparse indexing. Each input sequence is then deduplicated against only those few sequences. This minimizes disk I/O and uses less disk space and memory, creating more efficiency and enabling faster ingest and, importantly, faster restoration of data. HP's approach accelerates reads and writes, and delivers rapid ingest rates of up to 28 TB/hour. Predictive Acceleration has enabled HP to require up to 50% less RAM than comparable solutions.
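To make the idea concrete, here is a minimal sketch of generic content-defined (variable-length) chunking with fingerprint-based deduplication. This is not HP's actual StoreOnce algorithm or its sparse index; it is an illustration of the underlying technique under stated assumptions: a CRC32 of a trailing byte window decides chunk boundaries (a real system would use an O(1) rolling hash), the mask targets an average chunk on the order of 4 KB, and SHA-256 digests stand in for chunk fingerprints. All constants and function names are hypothetical.

```python
import hashlib
import zlib

AVG_CHUNK = 4096     # target average chunk size (~4 KB, per the text)
MIN_CHUNK = 1024     # floor, avoids degenerate tiny chunks
MAX_CHUNK = 16384    # ceiling, bounds worst-case chunk size
WINDOW = 48          # trailing-window size used for boundary detection
MASK = AVG_CHUNK - 1 # cut where the hash's low bits are all zero

def chunk(data: bytes) -> list[bytes]:
    """Split data into variable-length chunks at content-defined boundaries.

    A boundary is declared wherever the CRC32 of the trailing WINDOW bytes,
    masked to log2(AVG_CHUNK) bits, equals zero. Because boundaries depend
    on content rather than offsets, an insertion early in the stream only
    perturbs nearby chunks; later chunks realign and still deduplicate.
    """
    chunks, start = [], 0
    for i in range(len(data)):
        size = i - start + 1
        if size < MIN_CHUNK:
            continue  # never cut before the minimum size
        window = data[i - WINDOW + 1:i + 1]
        if size >= MAX_CHUNK or (zlib.crc32(window) & MASK) == 0:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])  # trailing remainder (may be < MIN_CHUNK)
    return chunks

def dedupe(chunks: list[bytes]) -> dict[bytes, bytes]:
    """Index chunks by SHA-256 fingerprint; a repeated chunk costs only
    an index entry, not a second stored copy."""
    store: dict[bytes, bytes] = {}
    for c in chunks:
        store.setdefault(hashlib.sha256(c).digest(), c)
    return store
```

Running `dedupe(chunk(stream))` over a backup stream containing repeated data stores each distinct chunk once; the smaller the average chunk, the more duplicates the index can find, at the cost of more fingerprint look-ups, which is exactly the trade-off the explanation describes.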