Independent Scratch and Journal Settings
Thanks for reading the second part of this blog all about Zerto 9.5. In this post I'm going to look at another great feature Zerto has released: the ability to split out journal volume and scratch volume settings.
So for people who are newer to Zerto, what is a scratch volume and what does it do? A scratch volume is used inside the Zerto solution to provide a temporary place for data to be written during operations such as a failover test, allowing users to perform any type of validation on the instantly available copy that Zerto has spun up. The scratch volume is used in most of Zerto's recovery operations and is what makes simple rollbacks from live failovers or move operations possible: until a failover or move is committed, any writes are made to the scratch volume rather than overwriting production data. If a rollback is triggered, the scratch volume is simply removed and Zerto's automation and orchestration kicks in to clean up and roll back the recovery as if nothing ever happened.
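To make that write-redirect-then-commit-or-discard idea a bit more concrete, here is a small conceptual sketch in Python. This is not Zerto's implementation, just an illustration of the pattern: test-time writes land in a scratch overlay, reads prefer the overlay, and a rollback simply throws the overlay away while the recovered copy stays untouched.

```python
# Conceptual sketch only (not Zerto's actual implementation): models how writes
# made during a failover test can be redirected to a scratch area, then either
# promoted on commit or simply discarded on rollback.

class RecoveredVM:
    def __init__(self, base_blocks):
        self.base = base_blocks   # recovered copy presented by the replication solution
        self.scratch = {}         # temporary writes land here during the test

    def write(self, block, data):
        # During a failover test, writes never touch the recovered copy.
        self.scratch[block] = data

    def read(self, block):
        # Reads prefer the scratch overlay, falling back to the base copy.
        return self.scratch.get(block, self.base[block])

    def commit(self):
        # Committing a failover/move promotes the scratch writes.
        self.base.update(self.scratch)
        self.scratch.clear()

    def rollback(self):
        # Rolling back just discards the scratch volume; the base is untouched.
        self.scratch.clear()


vm = RecoveredVM({0: b"prod-data"})
vm.write(0, b"test-data")            # validation workload writes during the test
assert vm.read(0) == b"test-data"
vm.rollback()                        # test ends: scratch is thrown away
assert vm.read(0) == b"prod-data"
```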
So why is separating these settings good news? Historically, the scratch volume settings were directly tied to the journal volume settings, so parameters such as datastore location and maximum size were identical to those of the journal volume. This meant extra planning was needed for extended failover tests to make sure the underlying datastore had enough free capacity for the scratch volume to write into.
Now that there is separation between these two objects inside the Zerto solution, users are able to specify a different datastore for the scratch volume, and therefore potentially a different class or type of storage (SSD over HDD, for example), and also set hard limits different to those of the journal. This is incredibly useful if a user wants to run a failover test for an extended period, as they can create a scratch volume far larger than before to extend how long a failover test can last.
This setting is located inside the VPG settings, so it can be customised on a per-VPG and per-VM basis, giving an amazing amount of flexibility.
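For anyone who prefers to drive this through automation rather than the UI, here is a minimal, hedged sketch of what setting it per VPG could look like via Zerto's REST API. The general vpgSettings workflow (create an editable settings object, modify it, commit it) is real, but the sub-resource path and the scratch-related field names below are assumed placeholders, so check the Zerto REST API reference for your version before using anything like this.

```python
# Illustrative sketch only: adjusting a VPG's scratch settings via the Zerto REST API.
# The "ScratchDatastoreIdentifier" / "ScratchHardLimitInMB" field names and the
# sub-resource path are assumptions, not confirmed schema -- consult the Zerto
# REST API documentation for Zerto 9.5 for the actual structure.
import requests

ZVM = "https://zvm.example.local:9669/v1"        # hypothetical ZVM address
session = requests.Session()
session.verify = False                            # lab only; use proper certs in production
session.headers["x-zerto-session"] = "<session-token>"  # obtained from the session API

vpg_id = "<vpg-identifier>"

# 1. Create an editable settings object for the existing VPG.
settings_id = session.post(f"{ZVM}/vpgSettings",
                           json={"VpgIdentifier": vpg_id}).json()

# 2. Point the scratch volume at a different (e.g. faster or larger) datastore
#    and give it its own hard limit, independent of the journal settings.
scratch_update = {
    "ScratchDatastoreIdentifier": "<ssd-datastore-id>",   # assumed field name
    "ScratchHardLimitInMB": 512000,                       # assumed field name
}
session.put(f"{ZVM}/vpgSettings/{settings_id}/journal",   # assumed sub-resource path
            json=scratch_update)

# 3. Commit the settings object so the change takes effect on the VPG.
session.post(f"{ZVM}/vpgSettings/{settings_id}/commit")
```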

That is all for Part 2. Keep an eye out for the next instalment, which will be along soon.
Thanks for Reading
Chris
