Commit 22776c4d authored by John-Paul Robinson

Notebook to convert policy run output to parquet data sets

This is intended to be run on the URL-encoded output lines from a
GPFS list policy run. It creates pandas structures that are
then saved in parquet format for ease of downstream processing.

It can be run in parallel across many inputs by wrapping it with papermill
and having an upstream step split the input file.
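
For example, a minimal papermill sketch (the notebook and file names below are
hypothetical placeholders, assuming the input has already been split upstream;
papermill injects `parameters` into the cell tagged `parameters`):

```
import papermill as pm

# run one conversion notebook per pre-split input file
for part in ["list-000.gz", "list-001.gz"]:
    pm.execute_notebook(
        "convert-to-parquet.ipynb",
        f"runs/convert-{part}.ipynb",
        parameters=dict(dirname="data/splits", glob_pattern=part),
    )
```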
%% Cell type:markdown id:073ef418 tags:
# Convert raw policy lists into parquet
Converting the raw list-policy output to binary parquet reduces storage space, centralizes the data, and speeds up later parallel processing and reporting via dask.
The script reads files matching `glob_pattern` from the provided `dirname`, optionally filters lines with `line_regex_filter`, and writes identically named files in parquet format to the `parquet` subdirectory. If the default parameters aren't changed, no files are read or written.
Some parsing progress is available via the `verbose` flag.
This converter assumes the `SHOW` format defined in the [list-paths-external policy](https://code.rc.uab.edu/rc/gpfs-policy/-/blob/main/policy/list-path-external):
```
SHOW ('|size=' || varchar(FILE_SIZE) ||
'|kballoc='|| varchar(KB_ALLOCATED) ||
'|access=' || varchar(ACCESS_TIME) ||
'|create=' || varchar(CREATION_TIME) ||
'|modify=' || varchar(MODIFICATION_TIME) ||
'|uid=' || varchar(USER_ID) ||
'|gid=' || varchar(GROUP_ID) ||
'|heat=' || varchar(FILE_HEAT) ||
'|pool=' || varchar(POOL_NAME) ||
'|mode=' || varchar(MODE) ||
'|misc=' || varchar(MISC_ATTRIBUTES) ||
'|')
```
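Each output line arrives URL-encoded, so the pipe-delimited attributes survive whitespace splitting. A hypothetical line (values invented, field layout inferred from the parsing code below) and its decoded fields:
```
from urllib.parse import unquote

line = "262146 0 0 %7Csize%3D1024%7Cuid%3D1000%7C -- /data/file%20name.txt"
fields = line.split()
print(unquote(fields[3]))  # -> |size=1024|uid=1000|
print(unquote(fields[5]))  # -> /data/file name.txt
```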
%% Cell type:code id:af015950 tags:
```
import datetime
import pandas as pd
import matplotlib.pyplot as plt
from urllib.parse import unquote
import sys
import os
import pathlib
import re
```
%% Cell type:markdown id:3781a0d6 tags:
## input vars
%% Cell type:code id:932707e6 tags:parameters
```
dirname="data/list-20191520.list.gather-info.d" # directory in which to find files to convert
glob_pattern = "*.gz" # file name glob pattern to match, can be file name for individual file
line_regex_filter = ".*" # regex to match lines of interest in file
verbose = True
```
%% Cell type:code id:833be559 tags:
```
pickledir=f"{dirname}/parquet" # output directory for the parquet files
```
%% Cell type:markdown id:47ea1d93 tags:
dirname="data/list-17404604.list.gather-info.d/" # directory in which to find files to pickle
glob_pattern = "list-*.gz" # file name glob pattern to match, can be file name for individual file
line_regex_filter = ".*" # regex to match lines of interest in file
pickledir=f"{dirname}/pickles"
verbose = True
%% Cell type:markdown id:07ef745a tags:
dirname="data/list-16144464.list.gather-info.d/" # directory in which to find files to pickle
glob_pattern = "list-*" # file name glob pattern to match, can be file name for individual file
line_regex_filter = ".*" # regex to match lines of interest in file
pickledir=f"{dirname}/pickles"
verbose = True
%% Cell type:code id:5599e260 tags:
```
# parse a file with read_csv, optionally filtering lines of interest via regex
def parse_file(filename, pattern=".*"):
    # read one line per row into a single unnamed column, as an iterator of chunks
    gen = pd.read_csv(filename, sep='\n', header=None, iterator=True)
    # keep only the lines that match the regex pattern
    df = pd.concat((x[x[0].str.contains(pattern, regex=True)] for x in gen), ignore_index=True)
    return df
```
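%% Cell type:markdown id:a1b2c3d4 tags:
Note: recent pandas releases may reject `sep='\n'` in `read_csv` (the line-at-a-time trick above). A plain-Python sketch of the same one-column read with regex filtering, should that trick stop working:
```
import gzip
import re
import pandas as pd

def parse_file_alt(filename, pattern=".*"):
    # read one line per row into a single-column frame named 0,
    # keeping only lines that match the regex (same shape parse_rows expects)
    opener = gzip.open if filename.endswith(".gz") else open
    with opener(filename, "rt") as fh:
        lines = [ln.rstrip("\n") for ln in fh if re.search(pattern, ln)]
    return pd.DataFrame({0: lines})
```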
%% Cell type:code id:6542cb23 tags:
```
# parse rows according to the list-policy-external format
def parse_rows(df):
    # split each line's content on whitespace
    df = df.rename(columns={0: "details"})
    new = df["details"].str.split(expand=True)
    # create a new dataframe and populate it with the parsed data
    df = pd.DataFrame()
    # URL-decode the SHOW attributes and strip the "key=" prefixes
    df["showattr"] = new[3].map(lambda x: re.sub(r"\w+=", "", unquote(x)))
    df[["ignore1", "size", "kballoc", "access", "create", "modify",
        "uid", "gid", "heat", "pool", "mode", "misc", "ignore2"]] = df["showattr"].str.split("|", expand=True)
    df["path"] = new[5].map(lambda x: unquote(x))
    # drop temp columns
    df = df.drop(["showattr", "ignore1", "ignore2"], axis=1)
    df.reset_index(drop=True, inplace=True)
    df = set_types(df)
    return df
```
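%% Cell type:markdown id:b2c3d4e5 tags:
A hypothetical single-line smoke test for `parse_rows` (every value below is invented, but the field layout matches the SHOW format above):
```
sample = pd.DataFrame({0: [
    "262146 0 0 "
    "%7Csize%3D1024%7Ckballoc%3D4%7Caccess%3D2019-01-01%2012%3A00%3A00"
    "%7Ccreate%3D2019-01-01%2012%3A00%3A00%7Cmodify%3D2019-01-01%2012%3A00%3A00"
    "%7Cuid%3D1000%7Cgid%3D1000%7Cheat%3D0%7Cpool%3Ddata%7Cmode%3D-rw-r--r--"
    "%7Cmisc%3DF%7C -- /data/project/file%20name.txt"
]})
parsed = parse_rows(sample)
print(parsed[["size", "uid", "path"]].iloc[0].to_dict())
# -> {'size': 1024, 'uid': 1000, 'path': '/data/project/file name.txt'}
```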
%% Cell type:code id:9730f207 tags:
```
# convert columns to native pandas types
def set_types(df):
    df["size"] = df["size"].astype('int64')
    df["kballoc"] = df["kballoc"].astype('int64')
    df["uid"] = df["uid"].astype('int64')
    df["gid"] = df["gid"].astype('int64')
    df["access"] = df["access"].astype('datetime64')
    df["create"] = df["create"].astype('datetime64')
    df["modify"] = df["modify"].astype('datetime64')
    return df
```
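%% Cell type:markdown id:c3d4e5f6 tags:
On pandas >= 2.0 the unit-less `astype('datetime64')` cast above raises; a sketch of the same conversions with explicit parsing (an alternative, not the original code):
```
def set_types_v2(df):
    # integer counters
    for col in ["size", "kballoc", "uid", "gid"]:
        df[col] = df[col].astype("int64")
    # timestamps: to_datetime avoids the unit-less datetime64 cast
    for col in ["access", "create", "modify"]:
        df[col] = pd.to_datetime(df[col])
    return df
```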
%% Cell type:markdown id:2ed6bdc8 tags:
## Gather the files according to glob_pattern
%% Cell type:code id:7297f0d2 tags:
```
dirpath = pathlib.Path(dirname)
files = list()
for file in list(dirpath.glob(glob_pattern)):
    files.append(str(file))
```
%% Cell type:markdown id:e4929a0f tags:
## Read, parse and pickle files
%% Cell type:code id:2ab7f7f5 tags:
```
for file in files:
    if (verbose): print(f"parse: {file}")
    filename = os.path.basename(file)
    df = parse_rows(parse_file(file))
    # rename for parquet (drop the extension matched by glob_pattern)
    filename, _ = filename.split(".", 1)
    ## Write the parquet data
    # only create the output dir if there is data to write
    if (not os.path.isdir(pickledir)):
        os.mkdir(pickledir)
    if (verbose): print(f"writing: {filename}.parquet")
    df.to_parquet(f"{pickledir}/{filename}.parquet", engine="pyarrow")
```
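%% Cell type:markdown id:d4e5f6a7 tags:
Downstream, the per-file parquet outputs can be loaded as a single dask dataframe for the parallel reporting mentioned above (a sketch, assuming dask is installed; the aggregation is just an example):
```
import dask.dataframe as dd

ddf = dd.read_parquet(f"{pickledir}/*.parquet", engine="pyarrow")
print(ddf["kballoc"].sum().compute())  # e.g. total KB allocated across all lists
```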