# Information Lifecycle Management (ILM) via GPFS policy engine
The GPFS policy engine is well described in this [white paper](https://www.ibm.com/support/pages/system/files/inline-files/Spectrum%20Scale%20ILM%20Policies_v10.2.pdf).
A good presentation overview of the policy file is [here](https://www.spectrumscaleug.org/event/ssugdigital-spectrum-scale-ilm-policy-engine/).
The relevant [documentation is available from IBM](https://www.ibm.com/docs/en/spectrum-scale/4.2.0?topic=guide-information-lifecycle-management-spectrum-scale).
This project focuses on scheduled execution of lifecycle policies to gather and process data about
file system objects and issue actions against those objects based on policy.
At the base level, applying a policy to a fileset is done through the `mmapplypolicy` command. This repo contains wrapper scripts that call that command with a specified policy file on a given fileset. Each wrapper provides a different level of functionality meant for a different group of users in RC. All scripts are stored in `src/run-policy`:
- `run-mmpol`: the main script that calls `mmapplypolicy`. Generally not invoked on its own
- `submit-pol-job`: general wrapper that sets up the Slurm job `run-mmpol` executes in. Admins can execute a policy run from this level using any policy file they have defined
- `run-submit-pol-job.py`: a Python wrapper for `submit-pol-job` meant specifically for running list policy jobs. This wrapper can be run by specific non-admins who have been given `sudo` permissions on this file alone. It can only run one of two policies: `list-path-external` and `list-path-dirplus`.
The production versions of these scripts are kept in `/data/rc/list-gpfs-dirs`. Admins can run any of these scripts from anywhere, but non-admins are only granted `sudo` privileges on the `run-submit-pol-job.py` file in that directory.
Note: the policy run is pinned to specific nodes by way of arguments to `mmapplypolicy`. The command does not technically run inside the Slurm job reservation, so the resource constraints are imperfect. The goal is to use the scheduler to ensure the policy run does not conflict with existing resource allocations on the cluster.
### List Policies (non-admin)
A list policy can be executed with `run-submit-pol-job.py` using the following command:
``` bash
run-submit-pol-job.py [-h] [-o OUTDIR] [-f LOG_PREFIX] [--with-dirs]
[-N NODES] [-c CORES] [-p PARTITION] [-t TIME]
[-m MEM_PER_CPU]
device
```
- `outdir`: specifies the directory the output log should be saved to. Defaults to `/data/rc/gpfs-policy/data`
- `log-prefix`: string to begin the name of the policy output with. Metadata containing the policy file name, Slurm job ID, and time run will be appended to this prefix. Defaults to `list-policy_<device>`. See below for `device`
- **Note: this is currently non-functional**
- `--with-dirs`: changes the policy file from `list-path-external` to `list-path-dirplus`. The only difference is that directories are included in the policy output.
- `device`: the fileset or directory to apply the policy to.
All other arguments are Slurm directives dictating resource requests. The default parameters are as follows:
- `nodes`: 1
- `cores`: 16
- `partition`: `amd-hdr100, medium`
- `time`: `24:00:00`
- `mem-per-cpu`: `8G`
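For example, a non-admin run against a hypothetical project directory (using the production copy of the script noted above) might look like:

``` bash
# Hypothetical fileset path; --with-dirs includes directories in the listing
sudo /data/rc/list-gpfs-dirs/run-submit-pol-job.py --with-dirs /data/project/mylab
```

The Slurm defaults above apply unless overridden with the corresponding flags.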
### Run Any Policies (admins)
Any defined policy file can be run using `submit-pol-job` as follows:
``` bash
sudo ./submit-pol-job [ -h ] [ -o | --outdir ] [ -f | --outfile ] [ -P | --policy ]
[ -N | --nodes ] [ -c | --cores ] [ -p | --partition ]
[ -t | --time ] [ -m | --mem ]
device
```
The only difference here is that a path to the policy file can be specified using `-P` or `--policy`. All other arguments are the same and have the same defaults.
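For example, an admin run with a custom policy file (both paths below are hypothetical) might look like:

``` bash
# Hypothetical policy file and fileset paths; -P selects the policy file to apply
sudo ./submit-pol-job -P /data/rc/gpfs-policy/policies/custom-list.pol -o /data/rc/gpfs-policy/data /data/project/mylab
```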
### Output
The `list-path-external` policy provides an efficient tool to gather file stat data into a URL-encoded
ASCII text file. The output file can then be processed by downstream tools to create reports on storage
patterns and use. Make sure the output directory has sufficient space to hold the resulting file listing (it could be hundreds of gigabytes for a large collection of files).
The Slurm job output file will be local to the directory from which the command was executed. It can be watched to observe progress in the generation of the file list. A listing of hundreds of millions of files may take a couple of hours to generate and consume several hundred gigabytes for the output file.
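For instance, assuming Slurm's default output file naming, progress could be followed with `tail` (the job ID below is hypothetical):

``` bash
# Hypothetical job ID; watch the Slurm output file for progress messages
tail -f slurm-123456.out
```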
#### List Policy Specific Outputs
The raw output file for list policies in `outdir` will be named `list-<jobid>.list.gather-info`.
The output file contains one line per file object stored under the `device`. No directories or non-file objects are included in this listing unless the `list-path-dirplus` policy is used. Each entry is a space-separated set of file attributes selected by the SHOW command in the LIST rule. Entries are encoded according to RFC3986 URI percent encoding. This means all spaces and special characters will be encoded, making it easy to split lines into fields using the space separator.
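As a sketch, each line can be split on spaces and the percent-encoding reversed in plain bash (the list file name below is hypothetical):

``` bash
# Hypothetical file name; decode RFC 3986 percent-escapes field by field.
# Relies on bash's printf %b expanding \xHH escape sequences.
while read -r -a fields; do
    for field in "${fields[@]}"; do
        printf '%b ' "${field//%/\\x}"   # turn %XX into \xXX, then expand it
    done
    printf '\n'
done < list-12345.list.gather-info
```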
## Processing the output file
### Split and compress
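No particular splitting tool is mandated here; as a sketch, GNU `split` and `gzip` can produce the `list-XXX.gz` naming expected by the conversion step below (the input file name is hypothetical):

``` bash
# Sketch: split the raw list output into ~5-million-line chunks named list-000, list-001, ...
split -l 5000000 -d -a 3 list-12345.list.gather-info list-
# ...then compress each chunk so the next step sees list-000.gz, list-001.gz, ...
gzip list-[0-9][0-9][0-9]
```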
### Pre-parse output for Python
Processing GPFS log outputs is controlled by the `run-convert-to-parquet.sh` script and assumes the GPFS log has been split into a number of files of the form `list-XXX.gz` where `XXX` is an incrementing numeric index. This creates an array job where each task in the array reads the quoted text in one file, parses it into a dataframe, and exports it as a parquet file with the name `list-XXX.parquet`.
While the file is being parsed, the top-level-directory (`tld`) is extracted for each entry and added as a separate column to make common aggregations easier.
This script is written to parse the `list-path-external` policy format with quoted special characters.
```
Usage: ./run-convert-to-parquet.sh [ -h ]
[ -o | --outdir ] [ -n | --ntasks ] [ -p | --partition]
[ -t | --time ] [ -m | --mem ]
                                   gpfs_logdir
```
- `outdir`: Path to save parquet outputs. Defaults to `${gpfs_logdir}/parquet`
- `gpfs_logdir`: Directory path containing the split log files as `*.gz`
All other options control the array job resources. The default resources can parse files of 5 million lines in approximately 3 minutes, so they should cover all common use cases.
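For example, a conversion run over a hypothetical directory of split logs might look like:

``` bash
# Hypothetical paths; parquet files land in the directory given by -o
./run-convert-to-parquet.sh -o /data/rc/gpfs-policy/data/parquet /data/rc/gpfs-policy/data/split
```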
## Running reports
### Disk usage by top level directories
A useful report is the top level directory (tld) report. This is akin to running a `du -s *` in a directory of interest, but much faster since there is no walking of directory trees. Only the list policy output file is used, reducing the operation to parsing and summing the data in the list policy output file.
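A minimal sketch of such a report over the parquet outputs is below; it assumes the files expose the `tld` column described above along with a `size` column, and the parquet path is hypothetical. Adjust the column names to match the attributes selected by your policy's SHOW clause.

``` bash
# Sketch only: sum file sizes per top-level directory from the parquet outputs.
# Requires pandas with a parquet engine (e.g. pyarrow); paths and the 'size' column are assumptions.
python3 - /data/rc/gpfs-policy/data/parquet <<'EOF'
import glob
import sys

import pandas as pd

parquet_dir = sys.argv[1]
frames = [pd.read_parquet(path, columns=["tld", "size"])
          for path in glob.glob(f"{parquet_dir}/list-*.parquet")]
usage = pd.concat(frames).groupby("tld")["size"].sum().sort_values(ascending=False)
print(usage.to_string())  # bytes per top-level directory, largest first
EOF
```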
### Comparing directory similarity
## Scheduling regular policy runs via cron
The policy run can be scheduled automatically with the cronwrapper script.
Simply append the above script and its arguments to the cronwrapper in a crontab line.
For example, to run it every morning at 4 AM you would add:
```
0 4 * * * /path/to/cronwrapper submit-pol-job <outdir> <policy> <nodecount> <corespernode> <ram> <partition>
```