SOFTRAID(4) Device Drivers Manual SOFTRAID(4)
NAME
softraid - software RAID
SYNOPSIS
softraid0 at root
DESCRIPTION
The softraid device emulates a Host Bus Adapter (HBA) that provides RAID
and other I/O related services. The softraid device provides a scaffold
to implement more complex I/O transformation disciplines. For example,
one can tie chunks together into a mirroring discipline. There really is
no limit on what type of discipline one can write as long as it fits the
SCSI model.
softraid supports a number of disciplines. A discipline is a collection
of functions that provides specific I/O functionality. This includes I/O
path, bring-up, failure recovery, and statistical information gathering.
Essentially a discipline is a lower level driver that provides the I/O
transformation for the softraid device.
A volume is a virtual disk device that is made up of a collection of
chunks.
A chunk is a partition or storage area of fstype "RAID". disklabel(8) is
used to alter the fstype.
Currently softraid supports the following disciplines:
RAID 0
A striping discipline. It segments data over a number of chunks to
increase performance. RAID 0 provides no redundancy and therefore no
protection against data loss.
RAID 1
A mirroring discipline. It copies data across more than one chunk
to protect against data loss. Read performance is increased, though at
the cost of write speed. Unlike traditional RAID 1, softraid
supports the use of more than two chunks in a RAID 1 setup.
RAID 5
A striping discipline with floating parity across all chunks. It
stripes data across chunks and provides parity to prevent data loss
in case of a single chunk failure. Read performance is increased; write
performance incurs additional overhead.
CRYPTO
An encrypting discipline. It encrypts data on a single chunk to
provide for data confidentiality. CRYPTO does not provide
redundancy.
CONCAT
A concatenating discipline. It writes data to each chunk in
sequence to provide increased capacity. CONCAT does not provide
redundancy.
RAID 1C
A mirroring and encrypting discipline. It encrypts data to provide
for data confidentiality and copies the encrypted data across more
than one chunk to prevent data loss in case of a chunk failure.
Unlike traditional RAID 1, softraid supports the use of more than
two chunks in a RAID 1C setup.
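The discipline for a volume is chosen at creation time via the bioctl(8) -c
option. As a rough sketch (the chunk device names below are placeholders),
volumes of the various types could be assembled along these lines:
# bioctl -c 0 -l /dev/wd1a,/dev/wd2a softraid0
# bioctl -c 5 -l /dev/wd1a,/dev/wd2a,/dev/wd3a softraid0
# bioctl -c C -l /dev/wd1a softraid0
# bioctl -c c -l /dev/wd1a,/dev/wd2a softraid0
# bioctl -c 1C -l /dev/wd1a,/dev/wd2a softraid0
These create a RAID 0, RAID 5, CRYPTO, CONCAT and RAID 1C volume,
respectively; the CRYPTO and RAID 1C variants prompt for a passphrase. See
bioctl(8) for the authoritative list of options.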
installboot(8) may be used to install boot(8) in the boot storage area of
the softraid volume. All chunks in the volume will then be bootable.
Boot support is currently limited to the CRYPTO and RAID 1 disciplines on
the amd64, arm64, i386, riscv64 and sparc64 platforms. amd64, arm64,
riscv64 and sparc64 also have boot support for the RAID 1C discipline.
On sparc64, bootable chunks must be RAID partitions using the letter `a'.
At the boot(8) prompt, softraid volumes have names beginning with `sr'
and can be booted from like a normal disk device. CRYPTO and RAID 1C volumes
will require a decryption passphrase or keydisk at boot time.
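For instance, a CRYPTO volume that unlocks with a keydisk instead of a
passphrase can be created by pointing bioctl(8) at a small RAID partition
holding the key material via -k (the device names here are illustrative):
# bioctl -c C -k /dev/sd1a -l /dev/sd0a softraid0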
The status of softraid volumes is reported via sysctl(8) such that it can
be monitored by sensorsd(8). Each volume has one fourth-level node named
hw.sensors.softraid0.driveN, where N is a small integer indexing the
volume. The format of the volume status is:
value (device), status
The device identifies the softraid volume. The following combinations of
value and status can occur:
online, OK
The volume is operating normally.
degraded, WARNING
The volume as a whole is operational, but not all of its
chunks are. In many cases, using bioctl(8) -R to rebuild
the failed chunk is advisable.
rebuilding, WARNING
A rebuild operation was recently started and has not yet
completed.
failed, CRITICAL
The device is currently unable to process I/O.
unknown, UNKNOWN
The status is unknown to the system.
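A volume's state can also be inspected directly with sysctl(8); for a healthy
volume the output might look like this (the drive index and disk name depend
on the configuration):
# sysctl hw.sensors.softraid0.drive0
hw.sensors.softraid0.drive0=online (sd0), OK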
EXAMPLES
The following example creates a 3-chunk RAID 1 volume from scratch:
Initialize the partition tables of all disks:
# fdisk -iy wd1
# fdisk -iy wd2
# fdisk -iy wd3
Now create RAID partitions on all disks:
# echo 'RAID *' | disklabel -wAT- wd1
# echo 'RAID *' | disklabel -wAT- wd2
# echo 'RAID *' | disklabel -wAT- wd3
Assemble the RAID volume:
# bioctl -c 1 -l /dev/wd1a,/dev/wd2a,/dev/wd3a softraid0
The console will show what device was added to the system:
scsibus0 at softraid0: 1 targets
sd0 at scsibus0 targ 0 lun 0: <OPENBSD, SR RAID 1, 001> SCSI2
sd0: 1MB, 0 cyl, 255 head, 63 sec, 512 bytes/sec, 3714 sec total
It is good practice to wipe the front of the disk before using it:
# dd if=/dev/zero of=/dev/rsd0c bs=1m count=1
Initialize the partition table and create a filesystem on the new RAID
volume:
# fdisk -iy sd0
# echo '/ *' | disklabel -wAT- sd0
# newfs /dev/rsd0a
The RAID volume is now ready to be used as a normal disk device. See
bioctl(8) for more information on configuration of RAID sets.
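Should a chunk fail later, the volume and chunk states can be inspected with
bioctl(8), and the volume rebuilt onto a replacement RAID partition with -R;
for example (the replacement chunk name is illustrative):
# bioctl sd0
# bioctl -R /dev/wd4a sd0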
Install boot(8) on the RAID volume, writing boot loaders to all 3 chunks:
# installboot sd0
At the boot(8) prompt, load the /bsd kernel from the RAID volume:
boot> boot sr0a:/bsd
SEE ALSO
bio(4), bioctl(8), boot_sparc64(8), disklabel(8), fdisk(8),
installboot(8), newfs(8)
HISTORY
The softraid driver first appeared in OpenBSD 4.2.
AUTHORS
Marco Peereboom.
CAVEATS
The driver relies on underlying hardware to properly fail chunks.
The RAID 1 discipline does not initialize the mirror upon creation. This
is by design: sectors are always written before they are read back, so there
is no point in wasting time synchronizing random, uninitialized data.
The RAID 5 discipline does not initialize parity upon creation; instead,
parity is only updated upon write.
Stacking disciplines (CRYPTO on top of RAID 1, for example) is not
supported at this time.
Currently there is no automated mechanism to recover from failed disks.
Certain RAID levels can protect against some data loss due to component
failure. RAID is not a substitute for good backup practices.
OpenBSD April 25, 2024 OpenBSD