EVMS Release 2.3.4
==================

See the INSTALL file for installation instructions. The instructions are also
available at http://evms.sourceforge.net/install/.

See the Users-Guide at http://evms.sourceforge.net/users_guide/ for detailed
usage information.

Important notes concerning this release:

1. Move / Replace

   "Move" operations can now be performed with your volumes online/mounted. The
   DOS, GPT, and s390 segment manager plugins now support online move, object
   "replace" can now be performed online, and the LVM plugin can now perform
   PE-moves and PV-moves online.

   Currently, online moves can only be performed on a 2.4 kernel. The
   Device-Mapper mirroring code for 2.6 is available for testing, but still
   considered experimental.

2. Snapshotting

   Snapshots from EVMS 1.x:

   The snapshot metadata format changed between EVMS 1.x and EVMS 2.x, due to
   a difference in the way Device-Mapper and the old EVMS kernel driver handle
   snapshotting. Since snapshots are generally short-lived, we don't think this
   should present a big problem. If you are upgrading from EVMS 1.x, delete
   your existing snapshots before upgrading, and then recreate them under
   EVMS 2.x. If you don't delete them, EVMS 2.x will simply ignore those
   snapshot objects.

   Snapshot Expand:

   The storage space for a snapshot volume can now be expanded (with or without
   the snapshot volume mounted). To expand a snapshot, the object on which the
   snapshot is built must be expandable. For example, if the snapshot is built
   on an LVM region with extra freespace available in the LVM container, the
   LVM region can be expanded, and the snapshot object will make use of that
   new space.

   Snapshot Reset:

   Snapshot volumes can now be reset without deleting and recreating the volume
   on top of the snapshot object. This command will reset the snapshot to the
   current state of the origin volume. It may only be performed if the snapshot
   volume is unmounted.

   Snapshot Rollback:

   Snapshot volumes can now be "rolled-back" onto their origin volume. This
   function copies the saved contents of the snapshot back to the origin. Both
   the snapshot and the origin volume must be unmounted to perform this
   function, and must remain unmounted until the rollback is complete.
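
   Purely to illustrate these semantics, here is a toy copy-on-write model in
   Python. It is conceptual only: EVMS implements snapshots at the block level
   via Device-Mapper, and none of the names below are EVMS APIs.

```python
# Toy copy-on-write snapshot, illustrating "reset" and "rollback".
class Snapshot:
    def __init__(self, origin):
        self.origin = origin     # chunk list standing in for the origin volume
        self.cow = {}            # chunk index -> contents saved at COW time

    def origin_write(self, idx, data):
        # The first write to a chunk saves its old contents in the COW store.
        if idx not in self.cow:
            self.cow[idx] = self.origin[idx]
        self.origin[idx] = data

    def read(self, idx):
        # The snapshot shows the origin as it was when the snapshot was taken.
        return self.cow.get(idx, self.origin[idx])

    def reset(self):
        # Reset: drop all saved chunks; the snapshot now reflects the
        # current state of the origin.
        self.cow.clear()

    def rollback(self):
        # Rollback: copy the saved chunks back to the origin, restoring it
        # to its state at snapshot-create (or last reset) time.
        for idx, data in self.cow.items():
            self.origin[idx] = data
        self.cow.clear()

origin = ["a", "b", "c"]
snap = Snapshot(origin)
snap.origin_write(1, "B")
print(snap.read(1), origin[1])   # snapshot still sees "b"; origin holds "B"
snap.rollback()
print(origin[1])                 # origin restored to "b"
```

   In the real implementation the "chunks" are block extents and the COW store
   lives in the snapshot object's storage, which is why expanding that object
   (as described above) grows the snapshot's capacity.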

   Snapshots of Software-RAID Volumes:

   Snapshots cannot be taken of compatibility or EVMS volumes that are made
   directly from MD RAID-1 and RAID-5 regions or full disks. In order to take
   a snapshot of a volume, the top object in that volume must be a Device-
   Mapper-managed device. This is necessary because that object's mapping must
   be modified to include hooks for copy-on-write to the snapshot device. Since
   RAID objects are handled by the MD kernel driver, and full disks are managed
   by the IDE or SCSI drivers, their "mappings" cannot change.

   For now, the snapshot plugin will simply not give the option of taking
   snapshots of these types of volumes. Future releases of EVMS will try to
   get around this restriction.

   Snapshots on 2.6 Kernels:

   An experimental version of snapshotting is now available for 2.6 kernels.
   EVMS will work with this new snapshot code just as it does on 2.4. Since
   this kernel code is still experimental, you should use caution when
   creating snapshot volumes on a 2.6 kernel, although basic tests with the
   2.6 snapshot code have worked correctly. In addition, the VFS-locking patch
   has not yet been ported to 2.6, so taking snapshots of mounted filesystems
   may not work properly. You may have to unmount your filesystem, take the
   snapshot, and then remount the filesystem.
   is currently being ported to 2.6, and will be available on the EVMS web site
   as soon as it is ready.
   
3. Software-RAID

   If you have existing Software-RAID devices that you would like to migrate
   to using EVMS, please make sure you are not using RAID auto-detect. EVMS
   requires volume discovery to be done in user space. Having the kernel
   auto-detect just the RAID arrays will cause inconsistencies in the RAID
   superblocks.

   If you are using auto-detect, you will need to use fdisk to change the
   partition types from 0xfd to 0x83.
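
   Changing the type code rewrites a single byte in the MBR partition table.
   The following Python sketch shows that byte-level view on an in-memory
   sector, using the standard MBR layout; it is for illustration only, and on
   real disks you should use fdisk.

```python
# The MBR partition table starts at byte 446 of sector 0 and holds four
# 16-byte entries; the partition type is the fifth byte of each entry.
MBR_TABLE_OFFSET = 446
ENTRY_SIZE = 16
TYPE_OFFSET = 4

def set_partition_type(sector0: bytearray, part: int, ptype: int) -> None:
    """Set the type byte of primary partition `part` (1-4)."""
    sector0[MBR_TABLE_OFFSET + (part - 1) * ENTRY_SIZE + TYPE_OFFSET] = ptype

mbr = bytearray(512)             # a blank stand-in for sector 0
set_partition_type(mbr, 1, 0xfd) # 0xfd: Linux raid autodetect
set_partition_type(mbr, 1, 0x83) # 0x83: plain Linux, no kernel autostart
print(hex(mbr[446 + 4]))         # -> 0x83
```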

   RAID-1 Reconfigure:

   This new function allows a new object to be added as an active member to a
   RAID-1 region. In effect, this allows changing an n-way mirror to an n+1-way
   mirror.

   RAID-1 Resize:

   This new function allows a RAID-1 region to be resized if all of its child
   objects can be resized.

   RAID-Linear Resize:

   This new function allows a RAID-linear region to be resized in three possible
   ways. The last child object may be resized (if allowed by that object's
   plugin), an entirely new object may be added to the end of the linear region,
   or the last child object may be removed from the linear region.
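
   As a quick illustration, a linear region is just its children laid end to
   end, so the three resize paths look like this in a toy Python model (the
   names are hypothetical, not the EVMS API):

```python
def linear_size(children):
    # A linear region's capacity is the sum of its concatenated children.
    return sum(children)

region = [100, 100, 50]          # child object sizes, in arbitrary units
assert linear_size(region) == 250

region[-1] = 80                  # 1. resize the last child object
assert linear_size(region) == 280

region.append(40)                # 2. append an entirely new object
assert linear_size(region) == 320

region.pop()                     # 3. remove the last child object
assert linear_size(region) == 280
```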

   Restrictions:

   There is an unfortunate restriction on the three new RAID functions just
   described. To explain it, it's best to start by describing some of the core
   differences between the RAID driver and Device-Mapper (DM); the term MD is
   avoided here to prevent naming confusion with DM.

   A DM device has a mapping to some lower-level device(s). This mapping can be
   changed "online". This is done by temporarily suspending I/O to the device,
   replacing the existing mapping with a new mapping, and resuming the suspended
   I/O. This is the process used, for example, when expanding a volume.
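
   The suspend/replace/resume cycle can be sketched as a toy model: while the
   device is suspended, new I/O is queued rather than failed, and the queue is
   replayed through the new mapping on resume. This is conceptual Python only;
   the real mechanism lives inside the kernel's Device-Mapper driver.

```python
class DMDevice:
    """Toy model of a Device-Mapper device with a swappable table."""
    def __init__(self, table):
        self.table = table       # logical extent -> (device, offset)
        self.suspended = False
        self.queued = []
        self.completed = []

    def io(self, extent):
        if self.suspended:
            self.queued.append(extent)   # held, not failed, during the swap
        else:
            self.completed.append((extent, self.table[extent]))

    def suspend(self):
        self.suspended = True            # quiesce: new I/O is queued

    def reload(self, new_table):
        assert self.suspended, "the table may only be swapped while suspended"
        self.table = new_table

    def resume(self):
        self.suspended = False
        queued, self.queued = self.queued, []
        for extent in queued:            # replay through the new mapping
            self.io(extent)

dev = DMDevice({0: ("sda1", 0)})
dev.suspend()
dev.io(0)                        # arrives mid-move: queued, not lost
dev.reload({0: ("sdb1", 0)})     # the extent now lives on another disk
dev.resume()
print(dev.completed)             # [(0, ('sdb1', 0))]
```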

   The RAID driver can also be thought of as having a mapping to lower-level
   devices. However, the RAID driver has no way to change this mapping "online",
   because it has no way to suspend I/O to its device while it changes the map.
   Thus, the only way to change this map (e.g. to expand a RAID or to add a new
   active member) is to deactivate the RAID device, write new superblocks
   indicating the change in the device, and reactivate the RAID device. On the
   surface this doesn't sound too bad. The catch is that the RAID driver will
   refuse to deactivate a device that is open. A device is open if it is
   mounted or if it is in use by a higher-level device, which is the case if
   you have LVM on top of RAID.

   Therefore, there will be some restrictions on when raid-reconfig and 
   raid-resize can be used. These functions are allowed on a RAID object that is
   not a volume, or on a RAID object that is a compatibility volume and is not
   mounted. These functions are currently NOT allowed on a RAID object that is an
   EVMS volume, or on a RAID object that is consumed by another plugin's
   object (such as LVM), because in these situations the RAID device will have
   a Device-Mapper device on top of it.

   Clearly, this is a pretty horrible restriction, but it is currently due to 
   limitations of the RAID driver. There has been talk of "porting" the RAID 
   driver to use Device-Mapper. However, this work definitely won't begin until
   the 2.7 kernel, which is not much help in the immediate future.

4. Multipath

   The MD-multipath plugin in EVMS now uses the new Device-Mapper multipath
   module instead of the MD driver's multipath personality. Thus, you need
   to have the appropriate Device-Mapper patches applied to your kernel in
   order to use multipath. For this release, EVMS will treat all paths as
   equal and load-balance across all of them. A future release will add the
   ability to indicate that one or more paths should only be used as backup
   paths.
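
   The "all paths equal" policy amounts to simple round-robin selection. Here
   is a minimal sketch in Python (hypothetical names; this is not the
   Device-Mapper multipath code):

```python
from itertools import cycle

paths = ["sda", "sdc"]           # two routes to the same storage unit
selector = cycle(paths)          # rotate through equal-priority paths

chosen = [next(selector) for _ in range(4)]
print(chosen)                    # ['sda', 'sdc', 'sda', 'sdc']
```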

