{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example GPFS Workflow\n",
"\n",
"This notebook aims to give an example GPFS workflow using an early version of the `rc_gpfs` package. This will start with a raw GPFS log file moving through stages for splitting and conversion to parquet using the Python API, although both steps could be done using the built-in CLI as well. Once the parquet dataset is created, it will create the basic `tld` summarized report on file count and storage use. From there, some example plots showing breakdowns of storage used by file age will be created.\n",
"\n",
"This notebook assumes being run in a job with at least one GPU, although some options can be changed to use a CPU-only backend. A GPU is recommended for analyzing larger logs, such as for `/data/project`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Suggested Compute Resources\n",
"\n",
"For analyzing a `/data/project` report, I suggest the following resources:\n",
"\n",
"- ntasks = 2\n",
"- cpus-per-task = 32\n",
"- ntasks-per-socket = 1\n",
"- gres = gpu:2\n",
"- partition = amperenodes\n",
"- mem = 256G\n",
"\n",
"When using more than 1 GPU, specifying 1 task per socket is highly recommended to guarantee matching the GPU socket socket affinity on the current A100 DGX. "
]
},
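{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough sketch, that request could be submitted along the following lines (the job script name is only a placeholder; adjust for your environment):\n",
"\n",
"``` text\n",
"sbatch --ntasks=2 --cpus-per-task=32 --ntasks-per-socket=1 --gres=gpu:2 --partition=amperenodes --mem=256G run_gpfs_report.sh\n",
"```"
]
},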
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Package and Input Setup"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"from pathlib import Path\n",
"\n",
"# Import the three main Python API submodules for analyzing a report\n",
"from rc_gpfs import process, policy\n",
"from rc_gpfs.report import plotting"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"# File patch setup\n",
"gpfs_log_root = Path('/data/rc/gpfs-policy/data/list-policy_data-project_list-path-external_slurm-31035593_2025-01-05T00:00:24/')\n",
"raw_log = gpfs_log_root.joinpath('raw','list-policy_data-project_list-path-external_slurm-31035593_2025-01-05T00:00:24.gz')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Log Preprocessing\n",
"\n",
"This section shows how to use the `policy` module to split the large GPFS log into smaller parts and then convert each part to a separate parquet file for analysis elsewhere. You can specify different filepaths for the outputs of each stage if you want, but it's generally easier to let the functions use the standard log directory structure seen below:\n",
"\n",
"``` text\n",
".\n",
"└── log_root/\n",
" ├── raw/\n",
" │ └── raw_log.gz\n",
" ├── chunks/\n",
" │ ├── list-000.gz\n",
" │ ├── list-001.gz\n",
" │ └── ...\n",
" ├── parquet/\n",
" │ ├── list-000.parquet\n",
" │ ├── list-001.parquet\n",
" │ └── ...\n",
" ├── reports/\n",
" └── misc/\n",
"```\n",
"\n",
"The directory structure will be automatically created as needed by each function. It's generally easier to not specify output paths for consistency in organization."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Split"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Split raw log\n",
"policy.split(log=raw_log)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Convert\n",
"\n",
"Opposed to split, it's much faster to submit the parquet conversion as an array job (built into the `cli` submodule), but it's possible to do here via the multiprocessing library as well."
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"from multiprocessing import Pool\n",
"from rc_gpfs.utils import parse_scontrol\n",
"\n",
"cores,_ = parse_scontrol()\n",
"split_files = list(gpfs_log_root.joinpath('chunks').glob('*.gz'))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with Pool(cores) as pool:\n",
" pool.map(policy.convert,split_files)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Aggregate"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"pq_path = gpfs_log_root.joinpath('parquet')\n",
"delta_vals = list(range(0,21,6))\n",
"delta_unit = 'M'\n",
"report_name = 'agg_by_tld_atime.parquet'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When choosing what sort of compute we want to use, we can let the package infer the backend based on the in-memory dataset size as well as the available compute resources, but it's easier in this case to specify that we want to use `dask_cuda`. Backend initialization happens later in the notebook"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"# Compute backend setup\n",
"with_cuda = True\n",
"with_dask = True"
]
},
{
"cell_type": "code",
"source": [
"process.aggregate_gpfs_dataset(dataset_path=pq_path, delta_vals=delta_vals, delta_unit=delta_unit, report_name=report_name, with_cuda=with_cuda, with_dask=with_dask)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Plotting"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"df = pd.read_parquet(gpfs_log_root.joinpath('reports',report_name))\n",
"df.columns = ['tld','dt_grp','bytes','file_count']\n",
"agg = df.groupby('dt_grp',observed=True)[['bytes','file_count']].sum().reset_index()"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"exp,unit = plotting.choose_appropriate_storage_unit(agg['bytes'])\n",
"agg[unit] = agg['bytes']/(1024**exp)"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"agg[['file_count_cum',f'{unit}_cum']] = agg[['file_count',unit]].cumsum()\n",
"agg[[unit,f'{unit}_cum']] = agg[[unit,f'{unit}_cum']].round(3)"
]
},
{
"cell_type": "code",
"source": [
"plotting.pareto_chart(agg,x='dt_grp',y=unit)"
]
}
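{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional extra, the same helper can be pointed at the `file_count` column (a sketch reusing the call signature above) to see whether file counts follow the same age breakdown as storage use:"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"# Extra example: Pareto chart of file counts by atime age group\n",
"plotting.pareto_chart(agg,x='dt_grp',y='file_count')"
]
}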
],
"metadata": {
"language_info": {
"name": "python",
}
},
"nbformat": 4,
"nbformat_minor": 2
}