Commit ca22f0f7 authored by John-Paul Robinson

Update BCM fetch with novalidate

The rest of the notebook is pretty much the same but with
different date values for the queries.
%% Cell type:markdown id: tags:
# Explore Cluster Power Stats
Use the REST API for now to avoid RPC issues with the Python client library.
Convert a curl command for current power to a REST query:
https://curl.trillworks.com/
This is the curl command that confirms authenticated access to Bright's CMDaemon. It was derived from the [RestAPI intro in the Bright Developer Manual](https://support.brightcomputing.com/manuals/8.2/developer-manual.pdf).
```
curl --cert ~/.cm/cert.pem --key ~/.cm/cert.key --cacert pythoncm/etc/cacert.pem \
"https://master:8081/rest/v1/monitoring/latest?measurable=Pwr_Consumption&indent=1"
```
Get the total instantaneous power used by all nodes monitored by CMDaemon. The `select` filter in the `jq` step deals with weird outlier data from one of the nodes:
```
curl --cert ~/.cm/cert.pem --key ~/.cm/cert.key --cacert pythoncm/etc/cacert.pem \
"https://master:8081/rest/v1/monitoring/latest?measurable=Pwr_Consumption&indent=1" | \
jq '.data[] | select(.raw < 10000) .raw' | \
awk '{sum=$1+sum} END {print sum}'
```
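%% Cell type:markdown id: tags:
The same filter-and-sum that the `jq`/`awk` pipeline performs can be sketched in Python (the sample payload below is hypothetical, standing in for the REST response):
%% Cell type:code id: tags:
```python
# Hypothetical payload mimicking the "data" list returned by
# /rest/v1/monitoring/latest (values invented for illustration)
data = [
    {"entity": "c0001", "raw": 312.5},
    {"entity": "c0002", "raw": 298.0},
    {"entity": "c0063", "raw": 742849.2},  # outlier: a node cannot draw hundreds of kW
]

# Mirror jq's 'select(.raw < 10000)' filter and awk's running sum
total = sum(d["raw"] for d in data if d["raw"] < 10000)  # → 610.5
```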
%% Cell type:code id: tags:
```
import requests
import pprint
import datetime
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
```
%% Cell type:code id: tags:
```
pp = pprint.PrettyPrinter(indent=2)
```
%% Cell type:markdown id: tags:
Set up credentials to query the REST API. Bright controls access based on the user identity. The user's cert.pem and cert.key are generated automatically, but the cacert.pem needs to be constructed from the certs returned by the master.
%% Cell type:code id: tags:
```
cert_file='/home/jpr/.cm/cert.pem'
key_file='/home/jpr/.cm/cert.key'
ca_file='/home/jpr/projects/power-study/pythoncm/etc/cacert.pem'
```
%% Cell type:code id: tags:
```
params = (
    ('measurable', 'Pwr_Consumption'),
    ('indent', '1'),
)
```
%% Cell type:code id: tags:
```
cert=(cert_file, key_file)
```
%% Cell type:code id: tags:
```
# define the client certs with the cert line, note the order is (cert, key)
# https://requests.readthedocs.io/en/master/user/advanced/#client-side-certificates
#
# define the verify bundle via verify, note False means do not verify
# https://stackoverflow.com/a/48636689/8928529
# the verified request works when the CA bundle is available; the second,
# unverified (novalidate) request is the one actually used below
response = requests.get('https://master:8081/rest/v1/monitoring/latest', params=params, cert=cert, verify='cheaha-cmd-cabundle.pem')
response = requests.get('https://master:8081/rest/v1/monitoring/latest', params=params, cert=cert, verify=False)
```
%% Cell type:markdown id: tags:
Manually construct a Python data structure: a hash of nodes mapped to tuples of power samples.
%% Cell type:code id: tags:
```
debug = False
power = 0.0
count = 0
for num, doc in enumerate(response.json()["data"]):
    if doc["age"] < 1000:
        if debug:
            print("{}:".format(num))
            pp.pprint(doc)
        if doc["value"] != "no data":
            power = power + float(doc["raw"])
            count += 1
```
%% Cell type:code id: tags:
```
power
```
%% Cell type:code id: tags:
```
power/count
```
%% Cell type:code id: tags:
```
count
```
%% Cell type:markdown id: tags:
Note that the time is in milliseconds, so the Unix conversion needs to drop the last three digits.
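%% Cell type:markdown id: tags:
A quick check of the conversion (the millisecond timestamp below is a hypothetical sample value):
%% Cell type:code id: tags:
```python
import datetime

# A CMDaemon timestamp in milliseconds (hypothetical value)
ts_ms = 1589757720000

# Drop the last three digits (integer-divide by 1000) to get Unix seconds,
# then convert against a timezone-free epoch reference
ts_s = ts_ms // 1000
dt = datetime.datetime(1970, 1, 1) + datetime.timedelta(seconds=ts_s)
# dt → 2020-05-17 23:22:00
```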
%% Cell type:markdown id: tags:
## Get power use history
From the beginning of July for starters. Based on the blog https://www.dataquest.io/blog/tutorial-time-series-analysis-with-pandas/.
%% Cell type:code id: tags:
```
params = (
    ('start', '2020/01/01 00:00'),
    #('entity', 'c0109'),
    ('measurable', 'Pwr_Consumption'),
    ('indent', '1'),
)
```
%% Cell type:code id: tags:
```
response = requests.get('https://master:8081/rest/v1/monitoring/dump', params=params, cert=cert, verify='cheaha-cmd-cabundle.pem')
response = requests.get('https://master:8081/rest/v1/monitoring/dump', params=params, cert=cert, verify=False)
```
%% Cell type:markdown id: tags:
response.json()
%% Cell type:markdown id: tags:
It's easy to [convert a list of dictionaries to a pandas data frame](https://pbpython.com/pandas-list-dict.html). This conversion path is the default for the pandas DataFrame constructor and serves our current needs well. We can create a data frame from the power dump response and easily enhance the data to serve our plotting needs.
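%% Cell type:markdown id: tags:
As a minimal illustration (record values invented), the constructor turns each dict into a row and each key into a column:
%% Cell type:code id: tags:
```python
import pandas as pd

# Two records in the shape of the monitoring dump (hypothetical values)
records = [
    {"entity": "c0001", "measurable": "Pwr_Consumption",
     "raw": 310.0, "time": "2020/05/17 23:22:00"},
    {"entity": "c0002", "measurable": "Pwr_Consumption",
     "raw": 295.5, "time": "2020/05/17 23:22:00"},
]

# Default constructor path: list of dicts -> one row per dict
demo_df = pd.DataFrame(records)
```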
%% Cell type:code id: tags:
```
df = pd.DataFrame(response.json()["data"])
```
%% Cell type:code id: tags:
```
df
```
%% Cell type:code id: tags:
```
df
```
%% Cell type:markdown id: tags:
Convert the result date to a Unix timestamp for easier plotting and time comparison.
%% Cell type:markdown id: tags:
```entity measurable raw time value datetime utime
428473 c0063 Pwr_Consumption 742849.191667 2020/05/17 23:22:00 742KW 2020-05-17 23:22:00 1589757720
```
%% Cell type:code id: tags:
```
# remove problematic entries, like unrealistic data points. Nodes can't consume hundreds of kW, like c0063 on 2020/05/17
# https://www.interviewqs.com/ddi_code_snippets/rows_cols_python
df.loc[df['raw'] > 100000]
```
%% Cell type:code id: tags:
```
df = df.loc[df['raw'] < 100000]
```
%% Cell type:code id: tags:
```
# improve performance by providing a format string to avoid per-entry format deduction
df['datetime'] = pd.to_datetime(df.time, format="%Y/%m/%d %H:%M:%S")
```
%% Cell type:code id: tags:
```
df
```
%% Cell type:markdown id: tags:
### Initial Data Viz with Seaborn Plots
%% Cell type:code id: tags:
```
# add column with datetime converted to unix time (in seconds)
# to preserve spatial relationships on the axis
# note: the original data has no time zone so our reference time stamp needs to be timezone free
df['utime'] = (df['datetime'] - pd.Timestamp("1970-01-01T00:00:00.000")) // pd.Timedelta('1s')
```
%% Cell type:markdown id: tags:
Our default utility function to fix labels on the x-axis of time series seaborn plots
%% Cell type:code id: tags:
```
def timeticks(ax, tformat="%H:%M:%S\n%Y-%m-%d"):
    xticks = ax.get_xticks()
    xticks_dates = [datetime.datetime.fromtimestamp(x).strftime(tformat) for x in xticks]
    hush = ax.set_xticklabels(xticks_dates)
```
%% Cell type:markdown id: tags:
Plot each data point for power used. Observe that this plot does not aggregate power use across nodes. It simply plots power used at all available time points.
Also note the time outliers. We requested data starting in 2020, but the results include information from 2018.
%% Cell type:code id: tags:
```
# build the replot and capture the handle
g = sns.relplot(x="utime", y="raw",
                palette="bright",
                #height=5,
                aspect=2,
                data=df,
                s=100)
# update the axis labels
g = (g.set_axis_labels("Date", "Power (Watts)"))
# update the x tickmarks from unix time to human-readable timestamps
ax = g.axes
ax = ax[0,0]
timeticks(ax)
```
%% Cell type:markdown id: tags:
## Explore Resampling to Hourly Sample
%% Cell type:markdown id: tags:
We are more interested in a plot of total power used over time.
We can [resample a data frame on a time interval](https://stackoverflow.com/a/52057318/8928529), which is our interest at this point. In particular we would find it interesting to see the hourly total (sum) of the raw power used across the cluster.
%% Cell type:code id: tags:
```
hourly = df.resample('H', on='datetime').size().reset_index(name='sum')
```
%% Cell type:code id: tags:
```
hourly
```
%% Cell type:code id: tags:
```
hourly["sum"].plot()
```
%% Cell type:markdown id: tags:
This isn't quite the sum we wanted: `.size()` counts samples per bin rather than adding them up.
Understand how setting the datetime index moves it out of the column collection.
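%% Cell type:markdown id: tags:
The difference between `.size()` (a count of samples per bin) and `.sum()` (the total we actually want) shows up clearly on a toy frame (synthetic values):
%% Cell type:code id: tags:
```python
import pandas as pd

# Three samples in the first hour, one in the second (synthetic data)
toy = pd.DataFrame({
    "datetime": pd.to_datetime([
        "2020-07-01 00:10", "2020-07-01 00:20",
        "2020-07-01 00:40", "2020-07-01 01:05",
    ]),
    "raw": [100.0, 200.0, 300.0, 50.0],
})

counts = toy.resample('H', on='datetime').size()        # samples per hour: 3, 1
totals = toy.resample('H', on='datetime')['raw'].sum()  # watts per hour: 600.0, 50.0
```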
%% Cell type:code id: tags:
```
df.dtypes
```
%% Cell type:code id: tags:
```
daily = df.set_index('datetime')
```
%% Cell type:code id: tags:
```
daily.dtypes
```
%% Cell type:code id: tags:
```
daily
```
%% Cell type:code id: tags:
```
daily.index
```
%% Cell type:code id: tags:
```
daily['hourly'] = daily.index.hour
```
%% Cell type:code id: tags:
```
daily
```
%% Cell type:markdown id: tags:
The date-based indexing enabled by setting the datetime index is helpful.
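%% Cell type:markdown id: tags:
A minimal sketch of this partial-string date indexing on a synthetic hourly series:
%% Cell type:code id: tags:
```python
import pandas as pd

# Two days of synthetic hourly readings
idx = pd.date_range('2020-07-05', periods=48, freq='H')
s = pd.Series(range(48), index=idx)

# Partial-string indexing selects every timestamp that falls on the date
day_one = s.loc['2020-07-05']  # the first 24 readings
```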
%% Cell type:code id: tags:
```
daily.loc['2020-07-05']
daily.loc['2020-10-05']
```
%% Cell type:code id: tags:
```
# Use seaborn style defaults and set the default figure size
sns.set(rc={'figure.figsize':(11, 4)})
```
%% Cell type:code id: tags:
```
daily['raw'].plot()
```
%% Cell type:code id: tags:
```
axes = daily['raw'].plot(marker='.', alpha=0.5, linestyle='None', figsize=(11, 4), subplots=True)
for ax in axes:
    ax.set_ylabel('Usage (Watts)')
```
%% Cell type:code id: tags:
```
daily.loc['2020-06', 'raw'].plot(marker='.', alpha=0.5, linestyle='None')
daily.loc['2020-9', 'raw'].plot(marker='.', alpha=0.5, linestyle='None')
```
%% Cell type:code id: tags:
```
daily.loc['2020-06-14', 'raw'].plot(marker='.', alpha=0.5, linestyle='None')
daily.loc['2020-10-14', 'raw'].plot(marker='.', alpha=0.5, linestyle='None')
```
%% Cell type:code id: tags:
```
daily.loc['2020-06-15', 'raw'].plot(marker='.', alpha=0.5, linestyle='None')
```
%% Cell type:code id: tags:
```
daily.loc['2020-06-16', 'raw'].plot(marker='.', alpha=0.5, linestyle='None')
```
%% Cell type:code id: tags:
```
hourly = daily['raw'].resample('H').sum()
```
%% Cell type:code id: tags:
```
hourly
```
%% Cell type:code id: tags:
```
hourly.plot()
```
%% Cell type:code id: tags:
```
daily['raw'].resample('H').mean().plot()
```
%% Cell type:markdown id: tags:
These plots are interesting but don't seem to capture the data accurately.
It seems more appropriate to work with each individual node and structure its values into a standard bin arrangement.
%% Cell type:code id: tags:
```
df
```
%% Cell type:markdown id: tags:
Explore performance of an individual node.
%% Cell type:code id: tags:
```
# pick one node to examine; the second assignment is the one in effect
#node = df[df.entity=='c0002']
node = df[df.entity=='c0149']
```
%% Cell type:code id: tags:
```
node
```
%% Cell type:code id: tags:
```
node.dtypes
```
%% Cell type:code id: tags:
```
node = node.set_index("datetime")
```
%% Cell type:code id: tags:
```
node.dtypes
```
%% Cell type:code id: tags:
```
node.index
```
%% Cell type:code id: tags:
```
node
```
%% Cell type:markdown id: tags:
A series of plots for the single node shows times when the node was computing versus when it was not. Notice the sawtooth graph after the high-load events.
%% Cell type:code id: tags:
```
node['raw'].plot()
```
%% Cell type:code id: tags:
```
node['raw'].plot(marker='.', alpha=0.5, linestyle='None')
```
%% Cell type:markdown id: tags:
Notice the dip in power use right at the interesting time point.
%% Cell type:code id: tags:
```
node.loc['2020-06-15':'2020-06-16', 'raw'].plot(marker='o', linestyle='-')
node.loc['2020-01-15':'2021-2-16', 'raw'].plot(marker='o', linestyle='-')
```
%% Cell type:code id: tags:
```
hourly=pd.date_range('2020-05-01', 'now', freq='H')
```
%% Cell type:code id: tags:
```
hourly=node[['raw']].resample('H').mean()
```
%% Cell type:code id: tags:
```
hourly['raw'].plot()
```
%% Cell type:markdown id: tags:
Understand summing over the nodes.
%% Cell type:code id: tags:
```
hourly.index
```
%% Cell type:code id: tags:
```
hourly
```
%% Cell type:code id: tags:
```
hourly['raw'].isnull()
```
%% Cell type:code id: tags:
```
hourly_idx=pd.date_range('2020-05-01', '2020-07-09', freq='H')
```
%% Cell type:code id: tags:
```
len(hourly_idx)
```
%% Cell type:code id: tags:
```
np.zeros((1,10)).T
```
%% Cell type:code id: tags:
```
hourly_pwr=pd.DataFrame(np.zeros((1,len(hourly_idx))).T, index=hourly_idx, columns=['raw'])
```
%% Cell type:code id: tags:
```
hourly_pwr
```
%% Cell type:code id: tags:
```
hourly_pwr.dtypes
```
%% Cell type:code id: tags:
```
hourly_pwr.index
```
%% Cell type:markdown id: tags:
Get the first or last entry from an index: https://stackoverflow.com/a/31269098/8928529
It was easier to append columns to the data frame than to try to add them in a loop.
Do the addition after the columns are built.
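%% Cell type:markdown id: tags:
The append-then-sum pattern described above, sketched on synthetic per-node data:
%% Cell type:code id: tags:
```python
import numpy as np
import pandas as pd

# Shared hourly index and a zeroed starting frame (synthetic, three hours)
idx = pd.date_range('2020-05-01', periods=3, freq='H')
combined = pd.DataFrame(np.zeros((len(idx), 1)), index=idx, columns=['raw'])

# Append each node's hourly means as its own column (values invented)
for entity, values in {'c0001': [100.0, 110.0, 120.0],
                       'c0002': [200.0, 210.0, 220.0]}.items():
    combined[entity] = pd.Series(values, index=idx)

# Do the addition across columns after they are all built
cluster_total = combined.sum(axis=1)  # 300.0, 320.0, 340.0
```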
%% Cell type:code id: tags:
```
hourly_pwr = pd.DataFrame(np.zeros((1, len(hourly_idx))).T, index=hourly_idx, columns=['raw'])
for num, entity in enumerate(df.entity.unique()):
    if entity not in ['c0108', 'c0009']:
        node_pwr = df[df.entity == entity].set_index("datetime")
        node_pwr = node_pwr[['raw']].resample('H').mean()
        node_pwr = node_pwr['2020-05-01':'2020-07-09'].fillna(method="ffill")
        print(node_pwr)
        missing = node_pwr['raw'].isnull().sum()
        print("{}: {} missing {}\n".format(num, entity, missing))
        if num < 149:
            # it's easier to add columns and do the sum later
            # https://www.geeksforgeeks.org/adding-new-column-to-existing-dataframe-in-pandas/
            hourly_pwr[entity] = node_pwr['2020-05-01':'2020-07-09']
```
%% Cell type:code id: tags:
```
hourly_pwr
```
%% Cell type:code id: tags:
```
hourly_pwr.sum(axis=1)
```
%% Cell type:code id: tags:
```
hourly_pwr.sum(axis=1).plot()
```