StorNext File System Tuning Guide

Affinities can be used to steer small files clear of large files, or to narrow-stripe stripe groups. Note: Affinity names cannot be longer than eight characters. For the StripeBreadth setting, matching the RAID stripe size is usually optimal, and the resulting concurrency can provide up to a 4X performance increase.

Determining the RAID characteristics typically requires some experimentation, and the lmdd utility can be very helpful here. Note that this setting is not adjustable after initial file system creation. The optimal range for the StripeBreadth setting runs from the kilobyte range to multiple megabytes, but this varies widely.

This setting cannot be changed after the file system is put into production, so it's important to choose it carefully during initial configuration.

BufferCacheSize: This setting consumes up to 2X bytes of memory times the number specified. Increasing this value can reduce the latency of any metadata operation by performing a hot cache access to directory blocks, inode information, and other metadata.

We recommend sizing this according to how much memory is available; more is better. A higher setting is more effective if the CPU is not heavily loaded.

InodeCacheSize: This setting consumes a fixed number of bytes of memory times the number specified. You should try to size this according to the combined working set of files for all clients. Optimal settings start at around 16K for a new file system and can be increased as the file system grows. Increasing this value can improve the concurrency of metadata operations.

ThreadPoolSize: A range from 32 upward is recommended, depending on the amount of available memory. It is recommended to size it according to the max threads FSM hourly statistic reported in the cvlog file; if the hourly logs show thread counts at or near the configured value, the setting is too small. This value does not have to be a power of 2, but it should be even.
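As a sketch only, these settings appear as simple name-value directives in the file system configuration file, in the same style as the JournalSize example later in this guide (the values below are illustrative placeholders, not recommendations):

    BufferCacheSize 64M
    InodeCacheSize 32K
    ThreadPoolSize 64

If your release uses the XML-based configuration file format, look for the equivalently named settings there.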

These settings interact: the FsBlockSize must be set correctly in order for the metadata sizing to be correct, and JournalSize is also dependent on the FsBlockSize. For FsBlockSize, the optimal settings for both performance and space utilization are in the range of 16K or 64K. Values less than 16K are not recommended in most scenarios because startup and failover time may be adversely impacted.

Setting FsBlockSize to higher values is sometimes appropriate; however, values greater than 16K can severely consume metadata space in cases where the file-to-directory ratio is low (that is, few files per directory). For metadata disk size, you must have a minimum of 25 GB, with more space allocated depending on the number of files per directory and the size of your file system.
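As an illustration, in the configuration-file syntax this is a single directive (16K is the value the text cites as a common optimum; treat it as a placeholder for your own sizing):

    FsBlockSize 16K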

The following table shows suggested FsBlockSize (FSB) settings and metadata disk space based on the average number of files per directory and file system size.

The amount of disk space listed for metadata is in addition to the 25 GB minimum amount. Use this table to determine the setting for your configuration.

This setting is not adjustable after initial file system creation, so it is very important to give it careful consideration during initial configuration.

For JournalSize, avoid values greater than 64M due to potentially severe impacts on startup and failover times. Values at the higher end of the 16M-64M range may improve performance of metadata operations in some cases, although at the cost of slower startup and failover time.

The following table shows recommended settings. Choose the setting that corresponds to your configuration. This setting is adjustable using the cvupdatefs utility.

For more information, see the cvupdatefs man page. Note: JournalSize should be evaluated after a few months of use by viewing the hourly statistics and looking for any journal waits. If there are many in a single hour, consider increasing the journal size, and then re-examine the hourly statistics to see if the bottleneck has moved to some other part of the file system (such as ThreadPoolSize or cache misses) or to the hardware (high sysmax and sysavg times).

Example (Windows): JournalSize 16M.

Reducing extent fragmentation can be very beneficial for performance. The snfsdefrag utility can be used to determine whether files are fragmented and, if so, to fix them.
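For example (the path is hypothetical), you can first list a file's extents and then defragment it:

    snfsdefrag -e /stornext/snfs1/clips/capture.dpx
    snfsdefrag /stornext/snfs1/clips/capture.dpx

The -e option only reports extents; invoking snfsdefrag without it rewrites fragmented files into fewer, larger extents.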

The global configuration settings InodeExpandMin, InodeExpandInc, and InodeExpandMax have been deprecated, and the values are instead calculated on a file-by-file basis as allocations are performed. This results in better allocations for more files because the values are no longer a compromise when there are widely varying file types on the file system. However, if a majority of the files are still fragmented, these values can be adjusted and will override the default behavior.

Note: Beginning with StorNext 4.x, the InodeExpand parameters are deprecated, although they can still be entered and used. Another way to combat fragmentation is with the cachebufsize mount option, increasing it from the default of 64K to something larger on the clients that are creating the fragmented files, or by altering the way the application writes data to the SAN. The InodeExpand parameters are file-system wide and can be adjusted after the file system has been created.

The cachebufsize parameter is a mount option and can be unique for every client that mounts the file system. FSM hourly statistics reporting is another very useful tool. This information is easily accessed in the cvlog log files, and all of the latency-oriented stats are reported in microsecond units. It is also possible to trigger an instant FSM statistics report by setting the Once Only debug flag using cvadmin.

If files are fragmented, the snfsdefrag utility can be used to fix them. The hourly statistics show which types of metadata operations are most prevalent and most costly. These are also broken out per client, which can be useful to identify a client that is disproportionately loading the FSM. On Windows, Perfmon provides many useful statistics counters for the SNFS client component.

To install the counters, obtain a copy of cvfsperf and then run rmperfreg. After these steps, the SNFS counters should be visible to the Windows Perfmon utility; if not, check the Windows Application Event log for errors.

The cvcp utility is a higher-performance alternative to commands such as cp and tar.

However, it will not use Bulk Create in some scenarios, such as non-root invocation, managed file systems, quotas, or Windows security. When Bulk Create is utilized, it significantly boosts performance by reducing the number of metadata operations issued: for example, up to 20 files can be created with a single metadata operation.
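A minimal usage sketch (the source and destination paths are hypothetical; consult the man page for the options that control recursion and tuning on your release):

    cvcp /data/ingest/file001.dpx /stornext/snfs1/ingest/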

For more information, see the cvcp man page.

If VFS throughput is inconsistent or significantly less than Device throughput, it might be caused by metadata operations. If Device throughput is inconsistent or less than expected, it might indicate a slow disk in a stripe group, or that RAID tuning is necessary.

The cvmkfile utility provides a command-line tool to utilize valuable SNFS performance features, including preallocation, stripe alignment, and affinities; see the cvmkfile man page. Zeroing holes in files has a performance impact; one way to avoid holes is by truncating the file before writing.
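A hedged sketch of cvmkfile (the affinity name, size, and path are hypothetical; confirm the option letters against the cvmkfile man page on your release):

    cvmkfile -k video 10g /stornext/snfs1/capture.dat

Here -k associates the file with a stripe-group affinity, and the 10g argument preallocates 10 GB so that subsequent writes land in a single, stripe-aligned allocation.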

The cvadmin latency-test utility causes small messages to be exchanged between the FSM and clients as quickly as possible for a brief period of time, and reports the average time it took for each message to receive a response. Client index numbers are displayed by the cvadmin who command.

If all is specified, the test is run against each client in turn. The test runs for 2 seconds, unless a value for seconds is specified. For otherwise idle systems, differences in latency between systems can indicate differences in hardware performance, and differences in latency over time for the same system can indicate new hardware problems, such as a network interface going bad. If a latency test has been run for a particular client, the cvadmin who long command includes the test results in its output, along with information about when the test was last run.
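For example, to test all clients for 5 seconds each from the command line (the file system name snfs1 is a placeholder):

    cvadmin -F snfs1 -e 'latency-test all 5'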

The default size of the buffer cache varies by platform and main memory size, starting at 32MB. By default, each buffer is 64K, so the number of buffers scales with the total cache size.

In general, increasing the size of the buffer cache will not improve performance for streaming reads and writes. However, a large cache helps greatly in cases of multiple concurrent streams, and where files are being written and subsequently read.

Buffer cache size is adjusted with the buffercachecap setting, and the size of each buffer with the cachebufsize mount option. The default cachebufsize is usually optimal; however, performance can sometimes be improved by increasing this setting to match the RAID 5 stripe size. As noted earlier, increasing cachebufsize on the clients that are creating fragmented files, or altering the way the application writes data to the SAN, can also combat fragmentation.
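A hedged example of setting both values at mount time on a Linux client (the file system name, mount point, and values are placeholders):

    mount -t cvfs -o buffercachecap=256,cachebufsize=256k snfs1 /stornext/snfs1

buffercachecap is typically specified in megabytes, and cachebufsize must remain divisible by the file system block size.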

These defaults are optimal in most scenarios, and the lmdd utility is handy for experimentation here. The dircachesize option sets the size of the directory information cache on the client.

This cache can dramatically improve the speed of readdir operations by reducing metadata network message traffic between the SNFS client and FSM. Increasing this value improves performance in scenarios where very large directories are not observing the benefit of the client directory cache.

Optimistic Allocation

Starting with StorNext 4.x, the InodeExpand values are still honored if they are present in the configuration file; otherwise, and when converting to StorNext 4.x, the new formula is used. The original InodeExpand configuration was difficult to explain, which could lead to misconfigurations that caused either over- or under-allocations, resulting in wasted space or fragmentation. This is why the new formula seeks to use allocations that are a percentage of the existing file's size, to minimize wasted space and fragmentation.

In both cases, the InodeExpandMin value is saved in an internal data structure in the file's inode, to be used with subsequent allocations.

Subsequent DMA I/Os that require more space to be allocated for the file add to the InodeExpandInc value saved in the inode, and the allocation is the larger of this value or the I/O size. For example, a single 6MB allocation is likely contiguous, so the file has at most 2 fragments, which is better than the 8 fragments it would have had otherwise. This pattern repeats until the file's allocation value is equal to or larger than InodeExpandMax, at which point it's capped at InodeExpandMax.

One possible problem is an InodeExpandMax that's too small, causing the file to be composed of smaller fragments than it otherwise could have been created with; with very large files, the relatively small size of the allocations and the large number needed to create the file result in fragmentation. Another possible problem is an InodeExpandInc that's not aggressive enough, again causing a file to be created with more fragments than necessary, or never reaching InodeExpandMax because writes stop before it can be incremented to that value.

The new formula is simple and is best explained as a table of values mapping file size (in bytes) to the optimistic allocation amount.

To examine how well these allocation strategies work in your specific environment, use the snfsdefrag utility with the -e option to display the individual extents (allocations) in a file. Here is an example output from snfsdefrag -e testvideo2.

Usually all of a file's extents are on the same stripe group, but not always. If you performed bandwidth expansion, this number is the old number of LUNs before bandwidth expansion, and signifies that those files aren't taking advantage of the bandwidth expansion. If the file is sparse, you will see "HOLE" displayed. Having holes in a file isn't necessarily a problem, but it does create extra fragments, one for each side of the hole. Tuning to eliminate holes can reduce fragmentation, although it does so by using more disk space.

Neither TCP offload nor jumbo frames are required. Ensure that your network switches have enough internal bandwidth to handle all of the anticipated traffic between all Distributed LAN clients and servers connected to them.

A network switch that is dropping packets will cause TCP retransmissions. This can be easily observed on both Linux and Windows platforms by using the netstat -s command while Distributed LAN is in progress. Note that Distributed LAN server remounts are required after changing server network parameters. A router between a Distributed LAN client and server could easily be overwhelmed by the data rates required.

Excessive retransmissions show up as bad segments in the output of netstat -s. On Linux, the TCP offload state can be queried by running ethtool -k, and modified by running ethtool -K. On Windows, it is configured through the Advanced tab of the configuration properties for a network interface.
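For example, on a Linux client or server (eth0 is a placeholder interface name):

    netstat -s | grep -i retrans     # look for climbing retransmission counts
    ethtool -k eth0                  # query current offload settings
    ethtool -K eth0 tso off          # modify an offload setting

Re-run the netstat command while Distributed LAN traffic is flowing to see whether the counters are increasing.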

The internal bus bandwidth of a Distributed LAN client or server can also place a limit on performance. For example, some NICs might be able to transmit or receive each packet at Gigabit speeds, but not be able to sustain the maximum needed packet rate.

It can be useful to use a tool like netperf to help verify the performance characteristics of each Distributed LAN network. When using netperf, on a system with multiple NICs, take care to specify the right IP addresses in order to ensure the network being tested is the one you will be running Distributed LAN over.
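A typical invocation (the IP address is a placeholder) might be:

    netperf -H 10.0.0.2 -t TCP_STREAM -l 30

This runs a 30-second TCP stream test against the netserver listening on 10.0.0.2; use the -H address bound to the NIC you intend to carry Distributed LAN traffic.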

Multiple copies of netperf can also be run in parallel to determine the performance characteristics of multiple NICs.

Step 4. Configure the common fsports file. Using the example from Step 3, where 10 ports are needed starting at the chosen base port, install the resulting file on all servers from Step 1. Also install the file on all clients if the AltPmap directive was used, and then restart StorNext. This does not result in conflicts, since each network address is comprised of an IP address and a port number, and is therefore unique even when using the same port number as another network address.

This also requires that clients outside of the firewall use an fsports file. The fsports file does not constrain the ports used by the client end of connections. Ephemeral ports are used instead. Therefore, the fsports file is only useful on clients when the AltPmap directive is used.
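A sketch of a minimal fsports file covering ten ports, matching the example above (the port numbers are placeholders, and the directive names should be verified against the fsports man page for your release):

    MinPort 52000
    MaxPort 52009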

When using fsports files, if services fail to start or clients fail to connect, try slightly increasing the range of open StorNext ports on the firewall and, correspondingly, in the fsports files. Running netstat on the servers may reveal that unexpected processes are binding to ports within the range specified in the fsports file.

Also, if services are restarted on Windows servers, in some cases ports may not be reusable for several minutes.

If you would like to bypass the System Control screen and display the Configuration Wizard Welcome screen, you can run the command service cvfs start. After entering license information, you should start services on the System Control screen.

Installing a StorNext 4.x high availability (HA) configuration involves steps that are run on both the primary and secondary nodes, and they are indicated accordingly.

Note: The actual build numbers may be different than those shown in the example. Note: The fsnameservers file on the secondary must be the same as the fsnameservers file on the primary.

Before doing this, ensure that the secondary is turned off and unplugged. There is no benefit from doing this when the machine is connected to a network that can resolve its name to an IP address.

This chapter describes how to install the StorNext client software. The StorNext client software lets you mount and work with StorNext file systems.

To ensure successful operation, make sure the client system meets all operating system and hardware requirements (see Client System Requirements). After downloading the client software, install and configure it using the appropriate method for your operating system (see Installing the StorNext Client on Linux or Unix on page 31, or Installing the StorNext Client on Windows). For more information, see Chapter 1, Installing StorNext.

Client System Requirements: To run the StorNext client software, the client system must meet all operating system and hardware requirements.

Operating System Requirements: The operating systems, releases and kernels, and hardware platforms supported by the StorNext client software are presented in Table 7. Make sure the client system uses a supported operating system and platform, and if necessary update to a supported release or kernel version before installing StorNext.

[Table 7: supported operating systems, releases or kernels, and platforms, including Windows Server, Red Hat Enterprise Linux 4 and 5, SUSE Linux, and Sun Solaris 10 on Opteron, Intel x86, and Sparc.]

If you do not want to upgrade to HP11iv3 and want to keep clients at the HP11iv2 level, those clients must remain on StorNext 3.x.

To move such clients forward:

1. Uninstall StorNext 3.x.
2. Upgrade to HP11iv3.
3. Install StorNext 4.x.

Hardware Requirements: To install and run the StorNext client software, the client system must meet the following minimum hardware requirements.

A StorNext client buffer cache is created for each different cachebufsize. By default, all file systems have the same cachebufsize of 64K, so they all share the same buffer cache. The amount of memory consumed by default for each cachebufsize depends on the platform type and the amount of memory in the system.

Table 8 shows the default amount of memory consumed by cachebufsize; on most platforms the default is 64MB. To see information about the buffer cache after mounting file systems, use the cvdb(1) command with the -b option. To change the amount of memory used by the buffer cache at mount time, use the buffercachecap parameter.

On Windows, the non-paged pool is used for buffer cache memory until it consumes up to 64 megabytes (32-bit systems) or 64 gigabytes (64-bit systems). Any additional buffer cache memory comes from the paged pool.

To download the client software, the client system must have network access to the MDC. The default username is admin, and the default password is password.

The StorNext home page appears.

Caution: Before installing the StorNext client software, you must install the kernel source code. You can install the kernel source code by using the installation disks for your operating system.

At the command prompt, type ls -l and identify the correct package to install.

The correct package begins with snfs-client and ends with the file name extension appropriate for your operating system. The fsnameservers file on the client must be exactly the same as on the MDC. If the fsnameservers file does not exist, use a text editor to create it.

After reboot, the StorNext file system is mounted at the mount point you specified. Note: To manually mount a file system, use the mount command at the command prompt, as in the sketch below.
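A minimal sketch, assuming a file system named snfs1 and a mount point of /stornext/snfs1 (both hypothetical):

    mount -t cvfs snfs1 /stornext/snfs1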

On Solaris, at the command prompt, type pkgadd -d followed by the package file name. When installation is complete, type q to quit the installation program. The correct package begins with snfs and ends with the file name extension appropriate for your platform. The StorNext file system is mounted at the mount point you specified. When you are ready, use the setup wizard to install StorNext (see Running the Setup Wizard). Optional: After installation, restore the previous client configuration (see Restoring a Previous Client Configuration). Note: You must log on as an Administrator to install StorNext.

If you are installing on Windows Vista, answer Yes to any messages asking if you want to run the installation process with administrative privileges.

Removing a Previous Version of StorNext: If a previous version of StorNext exists on the system, you must remove it before installing the new version. This file is named SnfsSetup. The StorNext Installation window appears (Figure 6). A dialog box appears informing you that the current client configuration has been saved. Note: After installing the new version of StorNext, you can restore the saved client configuration (see Restoring a Previous Client Configuration). The StorNext Installation window appears (Figure 7).

The StorNext setup wizard appears (Figure 8). The License Agreement window appears (Figure 9). The Customer Information window appears, followed by the Choose Setup Type window. When ready, click Next. The Ready to Install window appears. Wait while the setup wizard installs StorNext.

When installation is complete, the Finish window appears and you are prompted to reboot the system. You can now configure StorNext File System.

Restoring a Previous Client Configuration: If you saved a client configuration file (for example, when removing a previous version of StorNext), you can import it after installing StorNext. This configures StorNext using the same settings as the previous installation.

The StorNext Installation window appears, followed by the StorNext Configuration window. A message appears informing you that the configuration settings were successfully added to the registry.

This chapter describes how to configure StorNext after installation. To configure StorNext, enter license information and create one or more file systems. In addition, on metadata controllers (MDCs) running StorNext Storage Manager, you can add storage devices and media, create storage policies, and set up e-mail notifications.

For systems running Windows, use the Windows-based configuration utilities to set up server and client features (see Windows Configuration Utilities). In addition, on metadata controllers running Storage Manager, you can use the StorNext GUI to configure storage devices and media, and to set up storage policies.

Note: The following browsers have been tested to work with StorNext. Browsers not listed may work but are not recommended. If you use a popup blocker, be sure to enable pop-ups in order to ensure that StorNext displays properly.

Use the name of the machine and port number you copied when you installed the StorNext software. Note: Typically, the port number is 81; if port 81 is in use, use the next unused port number. After you enter the machine name and port number, the initial StorNext System Control screen appears.
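For example, if the MDC is named mdc1 (a placeholder) and uses the default port, the address entered in the browser would be:

    http://mdc1:81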

If not, click Start for each component to start them. Note: When you log into StorNext for the first time, you might see a message warning you about a security certificate.

The wizard guides you step-by-step through the process of configuring StorNext. All configuration tasks can be accessed at any time using the StorNext Setup menu.

Using the Configuration Wizard: The Configuration Wizard consists of nine steps. The wizard lets you navigate between steps and tracks your progress as you complete each step. You can also convert to a high availability (HA) system. The configuration utilities let you set up a Windows-based metadata controller, configure a StorNext client, and work with StorNext file systems.

Client Configuration: The Client Configuration utility lets you view and modify properties for the StorNext client software. Use the utility to specify mount points and mount options, set various cache settings, and configure a distributed LAN server or a distributed LAN client.

After making changes on one or more tabs, click OK to save the changes. A message appears prompting you to restart the system; click Yes. Most changes do not take effect until the system is restarted.

Mount Point: StorNext file systems can be mapped to a drive letter or to a folder. When mapping to a folder, the folder must be empty or non-existent, and must exist within an NTFS file system.

Use the Mount Point tab (Figure 20) to specify drive mappings. Table 9 describes the fields on the Mount Point tab. Note: Make sure the host list is the same for all clients on the SAN.

An incorrect or incomplete host list may prevent the client from connecting to the file system. Drive Letter: Select the desired drive letter. To map to a folder, enter a directory path, or click Browse to navigate to the directory path. Note that separate licensing is required for DLAN. Do not run other applications or services on a system configured as a distributed LAN server.

Table 10 describes the fields on the Distributed LAN tab. If you select this check box, all other fields on the tab become unavailable. Caution: Changing the values on the Mount Options tab can affect system performance and stability.

Do not change mount options unless instructed to do so by the Quantum Technical Assistance Center. After a successful connection, this value is no longer used; the default is 1. Hard Mount: When this box is checked, the driver attempts to mount the file system forever; the default is off (Soft Mount). Allow Diskless Mount: When this box is checked, the file system can be mounted and accessed without all the disks being accessible in the file system stripe groups.

In this mode, file metadata can be viewed and changed, but no data can be accessed. The default is to not allow a file system to be mounted unless all disks are accessible. Read Only: When this box is checked, the file system is mounted in read-only mode; the default is off (not checked). Restrict Preallocation API: When set to yes, non-administrator users are unable to use the preallocation ioctl.

The default is 8; the allowed range starts at 4. Number of System Threads: Specify the number of threads created for use by the file system. The default is 50 (five seconds).

The default value is 5 seconds. Fast Failover Detection: When this box is checked, the client uses a more aggressive approach to detecting if the FSM has failed. With this option on, the client monitors the FSM heartbeat; if no heartbeat is detected within three (3) seconds, a failover request is initiated. This option is desirable for near-realtime activities that cannot sustain more than 10 seconds of access loss to the server.

Quantum does not recommend enabling this option for normal file system operations; its default value is off. If this option is not enabled, the file system attempts to reconnect to the FSM the number of times specified in the Mount Retransmit Limit field before failing the request. The default value is 60 seconds.

The parameter should be specified in five-second increments; if the parameter is not a multiple of five, it is rounded up automatically. The Advanced Cache Options tab (Figure 23) displays performance values that control how many file system lookup names are kept in memory.

Caution: Changing the values on the Advanced Cache Options tab can affect system performance and stability. Do not change cache parameters unless instructed to do so by the Quantum Technical Assistance Center.

Enable Data Buffer Cache: When this box is not checked, the file system will not buffer any data. All files will be read directly into the application's memory using DMA. Requests that are not properly aligned will be read into temporary buffers, then copied to the application's buffer. Individual Buffer Size: This option sets the size of each cache buffer.

To determine optimal performance, try different sizes or contact your RAID vendor. This size must be divisible by the file system block size.

The minimum value allowed is the file system block size, and a maximum value is also enforced; the default is 64K. Minimum Total Cache Size: This value controls the amount of memory used to cache data from the file system.

This parameter is shared by all file systems with the same block size. Any smaller transfer will always go through the buffer cache; the default value is 1MB. Number of Read-ahead Buffers: This option controls the size of the read-ahead window used by the buffer cache. The default value is 16 buffers; a value of 0 disables read-ahead. A related setting defaults to 8, with a minimum of 1. The default attribute flush times are 30 seconds for non-shared files (Attribute Flush Time, Non-Shared) and 2 seconds for shared files (Attribute Flush Time, Shared).

Setting these parameters lower will greatly increase the amount of metadata traffic between clients and the FSM. However, in circumstances where files are frequently shared on the SAN, a lower Attribute Flush Time, Shared value can result in other clients seeing size updates more frequently when a file is being written on one client and read on another. A value of zero is invalid and will result in using the default setting.

Delay Atime Updates: When this box is checked, the file system delays atime (access time) updates when reading a file until the file is closed. This cuts down on FSM metadata updates at the expense of coherency. Tuning the cache-entry low and high water marks and the frequency of purge passes (Purge Period) can help certain large-mix applications. Minimum Directory Cache Size: This option sets the size of the directory cache. Directory entries are cached on the client to reduce client-FSM communications during directory reads.

The default value is 10 MB.

Use the Disk Device Labeler to create a list of disk labels, associated device names, and, optionally, the sectors to use. The file system uses the volume labels to determine which disk drives to use. The label name written to a disk device must match the disk name specified in the Server Configuration utility.

For more information, see Server Configuration. Caution: Modifying the label of a system disk may render the system inoperable and require you to repair the volume. The Disk Labeler window appears (Figure 24). Use this feature to correctly identify disks before labeling them.

Labeling Disks: When you select one or more disks and click Label, a confirmation screen appears asking if you are sure you want to proceed. Click OK to continue. The Disk Labeler dialog box appears. Table 13 describes the fields on the Disk Labeler dialog box. New Disk Label: Type the label for the disk.

New Sectors (Optional): Type the number of sectors on the disk. Create Label: Write the new label to the disk and proceed to the next selected disk. Skip Disk: Do not write a label to the disk and proceed to the next selected disk. Cancel: Close the Disk Labeler dialog box.

License Identifier: Use the License Identifier utility to display the host license identifier. The host license identifier is required to obtain a permanent license for StorNext. A dialog box displays the host license identifier.

Record this information. To obtain a permanent license, contact the Quantum Technical Assistance Center at licenses@quantum.com. A Quantum support representative will send you a license. If there is a temporary license file, rename the file or move it to a backup location. Note: To prevent accidentally losing a valid license, be sure to back up or rename any existing license.

Note: Before configuring a file system, you should label disk devices. For more information, see Disk Device Labeler.

System Configuration: The Simple Configuration Setup window appears. Table 14 describes the fields on the Simple Configuration Setup window.

To configure a simple file system, select the disks to use in the configuration. Specify the settings (file system name, block size, stripe size, and maximum connections), and then click Configure. Clear Selections: Click to deselect all devices in the list. Select All: Click to select all devices in the list. File System Name: Type the name of the file system. This is the name used by clients when establishing a mount point for the file system. File System Block Size: Select the file system block size in bytes.

This is the minimum allocation size used by the file system. Stripe Size in Blocks: Select the stripe size in blocks. This is the number of file system blocks to write before switching to the next disk in the stripe group.

Maximum Connections: Type the maximum number of clients that can simultaneously mount the file system. This value may be overridden by values in your product license code. Configure: Click to save the configuration using the current settings. The configuration file is saved in the StorNext configuration directory. In addition, the StorNext services must be running to use the StorNext configuration utilities and to mount file systems using the client software.

Rebooting the system will not restart services; for more information, see Start File System Services. Run the utility on an MDC that contains the file system you want to check. Because the file system check is run in read-only mode, any problems that exist are not repaired. If the utility identifies errors or corruption in metadata, you must repair the file system (see Repair a File System). The File System Manager (FSM) is a process that manages the name space, allocations, and metadata coherency for a file system.

Each file system uses its own FSM process. When there are multiple file systems (and therefore multiple FSM processes), the FSM services list controls which FSM processes are run when the server starts up, and also sets the priority of each file system for failover configurations. New: Click to add a file system to the FSM services list. Type the name of the file system and click OK. Delete: Click to remove the selected file system from the FSM services list.

Host Name (Optional): Type the name of the host on which the file system is running. Priority (Optional): Select the priority for the selected file system. This priority is used for failover configurations. Initializing a file system prepares it for use. Caution: Re-initializing a file system will destroy all data on the file system. Repair a file system if errors were identified when checking it (see Check (Read-Only) a File System). The file system must be inactive in order to be repaired.

To stop a file system, use the Server Administration utility (see Server Administration). Note: Multiple repair passes may be needed to fully fix a damaged file system. Run the repair utility repeatedly until no problems are reported. Server Administration: The Server Administration utility lets you view and modify stripe group properties and set quotas.

A stripe group is a logical storage unit made up of one or more disks. A quota is a space limit that is set for specified users or groups. The Administrator window appears. The left pane shows file systems running on the currently connected MDC. Expand a file system to see stripe groups, quotas, and other properties.

Type the host name and click OK. File systems on the server appear in the left pane. File System Properties: To view or change file system properties, click a file system in the left pane, and then click the file system name in the right pane. The File System Properties dialog box appears. Table 16 describes the fields on the File System Properties dialog box.

After making changes, click OK. Not all fields can be modified on this dialog box.