%% Cell type:markdown id: tags:
# Explore Cluster Power Stats
Use the REST API for now to avoid RPC issues with the Python client library.
Convert a curl command for current power to a REST query with
https://curl.trillworks.com/
This is the curl command that confirms authenticated access to Bright's CMDaemon. It was derived from the [REST API intro in the Bright Developer Manual](https://support.brightcomputing.com/manuals/8.2/developer-manual.pdf).
```
curl --cert ~/.cm/cert.pem --key ~/.cm/cert.key --cacert pythoncm/etc/cacert.pem \
"https://master:8081/rest/v1/monitoring/latest?measurable=Pwr_Consumption&indent=1"
```
Getting the total instantaneous power used from all nodes monitored by CMDaemon. The `select` on the raw value deals with weird outlier data from one of the nodes:
```
curl --cert ~/.cm/cert.pem --key ~/.cm/cert.key --cacert pythoncm/etc/cacert.pem \
"https://master:8081/rest/v1/monitoring/latest?measurable=Pwr_Consumption&indent=1" | \
jq '.data[] | select(.raw < 10000) .raw' | \
awk '{sum=$1+sum} END {print sum}'
```
%% Cell type:code id: tags:
```
import requests
import pprint
import datetime
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
```
%% Cell type:code id: tags:
```
pp = pprint.PrettyPrinter(indent=2)
```
%% Cell type:markdown id: tags:
Set up credentials to query RestAPI. Bright controls access based on the user identity. The user's cert.pem and cert.key are automatically generated but the cacert.pem needs to be constructed from the certs returned by the master.
%% Cell type:code id: tags:
```
cert_file='/home/jpr/.cm/cert.pem'
key_file='/home/jpr/.cm/cert.key'
ca_file='/home/jpr/projects/power-study/pythoncm/etc/cacert.pem'
```
%% Cell type:code id: tags:
```
params = (
    ('measurable', 'Pwr_Consumption'),
    ('indent', '1'),
)
```
%% Cell type:code id: tags:
```
cert=(cert_file, key_file)
```
%% Cell type:code id: tags:
```
# define the client certs with the cert line, note the order is (cert, key)
# https://requests.readthedocs.io/en/master/user/advanced/#client-side-certificates
#
# define the verify bundle via verify, note False means do not verify
# https://stackoverflow.com/a/48636689/8928529
response = requests.get('https://master:8081/rest/v1/monitoring/latest', params=params, cert=cert, verify='cheaha-cmd-cabundle.pem')
response = requests.get('https://master:8081/rest/v1/monitoring/latest', params=params, cert=cert, verify=False)
```
%% Cell type:markdown id: tags:
Manually construct a Python data structure: a hash of nodes mapped to tuples of power samples.
%% Cell type:code id: tags:
```
debug=False
power=0.0
count=0
for num, doc in enumerate(response.json()["data"]):
    if doc["age"] < 1000:
        if debug: print("{}: {}\n".format(num, pp.pprint(doc)))
        if doc["value"] != "no data":
            power=power + float(doc["raw"])
            count+=1
```
%% Cell type:code id: tags:
```
power
```
%% Cell type:code id: tags:
```
power/count
```
%% Cell type:code id: tags:
```
count
```
%% Cell type:markdown id: tags:
Note that the time is in milliseconds, so the Unix conversion needs to drop the last three digits.
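%% Cell type:markdown id: tags:
Dropping the last three digits amounts to integer division by 1000 before handing the value to the standard library. A quick sanity check with an illustrative timestamp:
```
import datetime

ts_ms = 1589757720000  # CMDaemon-style epoch time in milliseconds
ts_s = ts_ms // 1000   # drop the last three digits
dt = datetime.datetime.fromtimestamp(ts_s, tz=datetime.timezone.utc)
# dt is 2020-05-17 23:22:00 UTC
```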
%% Cell type:markdown id: tags:
## Get power use history
From the beginning of July for starters. Based on the blog https://www.dataquest.io/blog/tutorial-time-series-analysis-with-pandas/.
%% Cell type:code id: tags:
```
params = (
    ('start', '2020/01/01 00:00'),
    #('entity', 'c0109'),
    ('measurable', 'Pwr_Consumption'),
    ('indent', '1'),
)
```
%% Cell type:code id: tags:
```
response = requests.get('https://master:8081/rest/v1/monitoring/dump', params=params, cert=cert, verify='cheaha-cmd-cabundle.pem')
response = requests.get('https://master:8081/rest/v1/monitoring/dump', params=params, cert=cert, verify=False)
```
%% Cell type:markdown id: tags:
response.json()
%% Cell type:markdown id: tags:
It's easy to [convert a list of dictionaries to a pandas data frame](https://pbpython.com/pandas-list-dict.html). This conversion path is the default for the pandas DataFrame constructor and serves our current needs well. We can create a data frame from the power dump response and easily enhance the data to serve our plotting needs.
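%% Cell type:markdown id: tags:
As a minimal illustration with mock records shaped like the dump response (field names taken from the samples in this notebook, values invented):
```
import pandas as pd

records = [
    {"entity": "c0001", "raw": 350.0, "time": "2020/07/01 00:00:00"},
    {"entity": "c0002", "raw": 365.5, "time": "2020/07/01 00:00:00"},
]
mock = pd.DataFrame(records)  # one row per dict, one column per key
```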
%% Cell type:code id: tags:
```
df = pd.DataFrame(response.json()["data"])
```
%% Cell type:code id: tags:
```
df
```
%% Cell type:code id: tags:
```
df
```
%% Cell type:markdown id: tags:
Convert the result date to a Unix timestamp for easier plotting and time comparison.
%% Cell type:markdown id: tags:
```entity measurable raw time value datetime utime
428473 c0063 Pwr_Consumption 742849.191667 2020/05/17 23:22:00 742KW 2020-05-17 23:22:00 1589757720
```
%% Cell type:code id: tags:
```
# remove problematic entries, like unrealistic data points. Nodes can't consume hundreds of kW, like c0063 on 2020/05/17
# https://www.interviewqs.com/ddi_code_snippets/rows_cols_python
df.loc[df['raw'] > 100000]
```
%% Cell type:code id: tags:
```
df = df.loc[df['raw'] < 100000]
```
%% Cell type:code id: tags:
```
# improve performance by providing a format string to avoid per-entry format deduction
# (note: assigning via df.loc['datetime'] would add a row labeled 'datetime', not a column)
df['datetime'] = pd.to_datetime(df.time, format="%Y/%m/%d %H:%M:%S")
```
%% Cell type:code id: tags:
```
df
```
%% Cell type:markdown id: tags:
### Initial Data Viz with Seaborn Plots
%% Cell type:code id: tags:
```
# add column with datetime converted to unix time (in seconds)
# to preserve spatial relationships on the axis
# note: the original data has no time zone so our reference time stamp needs to be timezone free
df['utime'] = (df['datetime'] - pd.Timestamp("1970-01-01T00:00:00.000")) // pd.Timedelta('1s')
```
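%% Cell type:markdown id: tags:
The epoch subtraction above can be checked against a known value from the data:
```
import pandas as pd

dt = pd.Timestamp("2020-05-17 23:22:00")
utime = (dt - pd.Timestamp("1970-01-01T00:00:00.000")) // pd.Timedelta('1s')
# utime == 1589757720, matching the c0063 sample shown later in this notebook
```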
%% Cell type:markdown id: tags:
Our default utility function to fix the labels on the x-axis of time-series seaborn plots.
%% Cell type:code id: tags:
```
def timeticks(ax, tformat="%H:%M:%S\n%Y-%m-%d"):
    xticks = ax.get_xticks()
    xticks_dates = [datetime.datetime.fromtimestamp(x).strftime(tformat) for x in xticks]
    hush = ax.set_xticklabels(xticks_dates)
```
%% Cell type:markdown id: tags:
Plot each data point for power used. Observe that this plot does not aggregate power use across nodes. It simply plots power used at all available time points.
Also note the time outliers. We requested data since July 1, 2020 but the results include information from 2018.
%% Cell type:code id: tags:
```
# build the relplot and capture the handle
g = sns.relplot(x="utime", y="raw",
                palette="bright",
                #height=5,
                aspect=2,
                data=df,
                s=100)
# update the axis labels
g = (g.set_axis_labels("Date", "Power (Watts)"))
# update the x tickmarks from unix time to hour minute seconds
ax = g.axes
ax = ax[0,0]
timeticks(ax)
```
%% Cell type:markdown id: tags:
## Explore Resampling to Hourly Sample
%% Cell type:markdown id: tags:
We are more interested in a plot of total power used over time.
We can [resample a data frame on a time interval](https://stackoverflow.com/a/52057318/8928529), which is our interest at this point. In particular we would find it interesting to see the hourly total (sum) of the raw power used across the cluster.
%% Cell type:code id: tags:
```
hourly = df.resample('H', on='datetime').size().reset_index(name='sum')
```
%% Cell type:code id: tags:
```
hourly
```
%% Cell type:code id: tags:
```
hourly["sum"].plot()
```
%% Cell type:markdown id: tags:
This isn't quite the sum we wanted.
Understand how setting the datetime index moves it out of the column collection.
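%% Cell type:markdown id: tags:
The distinction is that `.size()` counts rows per bin while `.sum()` adds the values. A toy frame (not the cluster data) makes it concrete:
```
import pandas as pd

toy = pd.DataFrame({
    "datetime": pd.to_datetime(["2020-07-01 00:10", "2020-07-01 00:40", "2020-07-01 01:15"]),
    "raw": [100.0, 200.0, 300.0],
})
counts = toy.resample('H', on='datetime').size()         # samples per hour
totals = toy.resample('H', on='datetime')['raw'].sum()   # watts per hour
```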
%% Cell type:code id: tags:
```
df.dtypes
```
%% Cell type:code id: tags:
```
daily = df.set_index('datetime')
```
%% Cell type:code id: tags:
```
daily.dtypes
```
%% Cell type:code id: tags:
```
daily
```
%% Cell type:code id: tags:
```
daily.index
```
%% Cell type:code id: tags:
```
daily['hourly'] = daily.index.hour
```
%% Cell type:code id: tags:
```
daily
```
%% Cell type:markdown id: tags:
The date indexing by setting the datetime index is helpful.
%% Cell type:code id: tags:
```
daily.loc['2020-07-05']
daily.loc['2020-10-05']
```
%% Cell type:code id: tags:
```
# Use seaborn style defaults and set the default figure size
sns.set(rc={'figure.figsize':(11, 4)})
```
%% Cell type:code id: tags:
```
daily['raw'].plot()
```
%% Cell type:code id: tags:
```
axes = daily['raw'].plot(marker='.', alpha=0.5, linestyle='None', figsize=(11, 4), subplots=True)
for ax in axes:
    ax.set_ylabel('Usage (Watts)')
```
%% Cell type:code id: tags:
```
daily.loc['2020-06', 'raw'].plot(marker='.', alpha=0.5, linestyle='None')
daily.loc['2020-9', 'raw'].plot(marker='.', alpha=0.5, linestyle='None')
```
%% Cell type:code id: tags:
```
daily.loc['2020-06-14', 'raw'].plot(marker='.', alpha=0.5, linestyle='None')
daily.loc['2020-10-14', 'raw'].plot(marker='.', alpha=0.5, linestyle='None')
```
%% Cell type:code id: tags:
```
daily.loc['2020-06-15', 'raw'].plot(marker='.', alpha=0.5, linestyle='None')
```
%% Cell type:code id: tags:
```
daily.loc['2020-06-16', 'raw'].plot(marker='.', alpha=0.5, linestyle='None')
```
%% Cell type:code id: tags:
```
hourly = daily['raw'].resample('H').sum()
```
%% Cell type:code id: tags:
```
hourly
```
%% Cell type:code id: tags:
```
hourly.plot()
```
%% Cell type:code id: tags:
```
daily['raw'].resample('H').mean().plot()
```
%% Cell type:markdown id: tags:
These plots are interesting but don't seem to capture the data accurately.
It seems more appropriate to work with each individual node and structure its values into a standard bin arrangement.
%% Cell type:code id: tags:
```
df
```
%% Cell type:markdown id: tags:
Explore performance of an individual node.
%% Cell type:code id: tags:
```
node=df[df.entity=='c0002']
node=df[df.entity=='c0149']
```
%% Cell type:code id: tags:
```
node
```
%% Cell type:code id: tags:
```
node.dtypes
```
%% Cell type:code id: tags:
```
node = node.set_index("datetime")
```
%% Cell type:code id: tags:
```
node.dtypes
```
%% Cell type:code id: tags:
```
node.index
```
%% Cell type:code id: tags:
```
node
```
%% Cell type:markdown id: tags:
A series of plots for the single node shows times when the node was computing versus idle. Notice the sawtooth graph after the high-load events.
%% Cell type:code id: tags:
```
node['raw'].plot()
```
%% Cell type:code id: tags:
```
node['raw'].plot(marker='.', alpha=0.5, linestyle='None')
```
%% Cell type:markdown id: tags:
Notice the dip in power use right at the interesting time point.
%% Cell type:code id: tags:
```
node.loc['2020-06-15':'2020-06-16', 'raw'].plot(marker='o', linestyle='-')
node.loc['2020-01-15':'2021-2-16', 'raw'].plot(marker='o', linestyle='-')
```
%% Cell type:code id: tags:
```
hourly=pd.date_range('2020-05-01', 'now', freq='H')
```
%% Cell type:code id: tags:
```
hourly=node[['raw']].resample('H').mean()
```
%% Cell type:code id: tags:
```
hourly['raw'].plot()
```
%% Cell type:markdown id: tags:
Understand summing over the nodes.
%% Cell type:code id: tags:
```
hourly.index
```
%% Cell type:code id: tags:
```
hourly
```
%% Cell type:code id: tags:
```
hourly['raw'].isnull()
```
%% Cell type:code id: tags:
```
hourly_idx=pd.date_range('2020-05-01', '2020-07-09', freq='H')
```
%% Cell type:code id: tags:
```
len(hourly_idx)
```
%% Cell type:code id: tags:
```
np.zeros((1,10)).T
```
%% Cell type:code id: tags:
```
hourly_pwr=pd.DataFrame(np.zeros((1,len(hourly_idx))).T, index=hourly_idx, columns=['raw'])
```
%% Cell type:code id: tags:
```
hourly_pwr
```
%% Cell type:code id: tags:
```
hourly_pwr.dtypes
```
%% Cell type:code id: tags:
```
hourly_pwr.index
```
%% Cell type:markdown id: tags:
Get the first or last entry from the index: https://stackoverflow.com/a/31269098/8928529
It was easier to append columns to the data frame than to try to add them in a loop.
Do the addition after the columns are built.
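%% Cell type:markdown id: tags:
The column-then-sum approach can be sketched on toy per-node series (hypothetical node names and watt values):
```
import pandas as pd

idx = pd.date_range('2020-07-01', periods=3, freq='H')
total = pd.DataFrame(index=idx)
for name, watts in [('c0001', [100.0, 110.0, 105.0]),
                    ('c0002', [200.0, 190.0, 195.0])]:
    total[name] = pd.Series(watts, index=idx)  # one column per node
cluster = total.sum(axis=1)                    # row-wise sum across nodes
```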
%% Cell type:code id: tags:
```
hourly_pwr=pd.DataFrame(np.zeros((1,len(hourly_idx))).T, index=hourly_idx, columns=['raw'])
for num, entity in enumerate(df.entity.unique()):
    if entity not in ['c0108', 'c0009']:
        node_pwr=df[df.entity==entity].set_index("datetime")
        node_pwr=node_pwr[['raw']].resample('H').mean()
        node_pwr=node_pwr['2020-05-01':'2020-07-09'].fillna(method="ffill")
        print(node_pwr)
        missing = node_pwr['raw'].isnull().sum()
        print("{}: {} missing {}\n".format(num, entity, missing))
        if num < 149:
            #hourly_pwr.add(node_pwr['2020-05-01':'2020-07-09'], ['raw'])#, axis='columns', fill_value=0.0)
            #hourly_pwr+=node_pwr['2020-05-01':'2020-07-09']
            # it's easier to add columns and do the sum later
            # https://www.geeksforgeeks.org/adding-new-column-to-existing-dataframe-in-pandas/
            hourly_pwr[entity] = node_pwr['2020-05-01':'2020-07-09']
```
%% Cell type:code id: tags:
```
hourly_pwr
```
%% Cell type:code id: tags:
```
hourly_pwr.sum(axis=1)
```
%% Cell type:code id: tags:
```
hourly_pwr.sum(axis=1).plot()
```
......
%% Cell type:markdown id: tags:
# Power Stats
Use the REST API to read power consumption info for cluster nodes and generate usage reports. This is based on the [pandas time series tutorial by Jennifer Walker](https://www.dataquest.io/blog/tutorial-time-series-analysis-with-pandas/).
%% Cell type:code id: tags:
```
import requests
import pprint
import datetime
import os
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
```
%% Cell type:code id: tags:
```
# https://stackoverflow.com/a/9031848
import warnings
warnings.filterwarnings('ignore')
```
%% Cell type:code id: tags:
```
plt.rcParams["figure.figsize"] = (20,6)
```
%% Cell type:markdown id: tags:
Set up credentials to query RestAPI. Bright controls access based on the user identity. The user's cert.pem and cert.key are automatically generated but the cacert.pem needs to be constructed from the certs returned by the master.
%% Cell type:code id: tags:
```
cert_file='~/.cm/cert.pem'
key_file='~/.cm/cert.key'
ca_file='cacert.pem'
```
%% Cell type:code id: tags:
```
cert=(os.path.expanduser(cert_file), os.path.expanduser(key_file))
```
%% Cell type:markdown id: tags:
## Gather Cluster Power Data
%% Cell type:code id: tags:
```
startdate = '2021/01/01 00:00:00'
enddate = '2021/04/8 00:00:00'
startdate = '2021/06/01 00:00:00'
enddate = '2021/10/20 00:00:00'
```
%% Cell type:code id: tags:
```
displaystart = '2021-02-01'
displaystop = '2021-04-08'
displaystart = '2021-06-01'
displaystop = '2021-10-20'
```
%% Cell type:code id: tags:
```
params = (
    ('start', startdate),
    ('measurable', 'Pwr_Consumption'),
    ('indent', '1'),
)
```
%% Cell type:code id: tags:
```
if os.path.exists("power_data.csv"):
    df = pd.read_csv("power_data.csv")
else:
    response = requests.get('https://master:8081/rest/v1/monitoring/dump', params=params, cert=cert, verify=False)
    df = pd.DataFrame(response.json()["data"])
```
%% Cell type:markdown id: tags:
Simply read the JSON response into a dataframe for further parsing.
%% Cell type:markdown id: tags:
## Clean Data and Resample
Some of the data values report unrealistic power readings. Any reading over 10kW is considered invalid.
We shouldn't filter until later, though, since the comparison implicitly filters out NaN values.
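%% Cell type:markdown id: tags:
The caveat is that comparisons with NaN are always False, so a threshold filter silently drops missing samples along with the outliers. A small sketch:
```
import numpy as np
import pandas as pd

s = pd.Series([500.0, np.nan, 20000.0])
kept = s[s < 10000]  # the NaN row is dropped along with the 20 kW outlier
```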
%% Cell type:code id: tags:
```
#df = df.loc[df['raw'] < 10000]
```
%% Cell type:code id: tags:
```
df
```
%% Cell type:markdown id: tags:
Create a datetime-typed column from the reported sample times.
%% Cell type:code id: tags:
```
df['datetime'] = pd.to_datetime(df.time, format="%Y/%m/%d %H:%M:%S")
```
%% Cell type:code id: tags:
```
df
```
%% Cell type:markdown id: tags:
Create an index for the hourly resampling.
%% Cell type:code id: tags:
```
hourly_idx=pd.date_range(startdate, enddate, freq='H')
```
%% Cell type:code id: tags:
```
hourly_idx
```
%% Cell type:code id: tags:
```
demodf=pd.DataFrame(np.zeros((1,len(hourly_idx))).T, index=hourly_idx, columns=['sum'])
```
%% Cell type:code id: tags:
```
demodf
```
%% Cell type:code id: tags:
```
df.entity
```
%% Cell type:code id: tags:
```
sorted(df.entity.unique())
```
%% Cell type:code id: tags:
```
debug=False
# prepare data frame to append to, use zeros for default column
m6_hourly_pwr=pd.DataFrame(np.zeros((1,len(hourly_idx))).T, index=hourly_idx, columns=['sum'])
for num, entity in enumerate(sorted(df.entity.unique())):
    if entity not in ['c0009']:
        node_pwr=df[df.entity==entity].set_index("datetime")
        node_pwr=node_pwr[['raw']].resample('H').mean()
        node_pwr=node_pwr[startdate:enddate].fillna(method="ffill")
        node_pwr=node_pwr[startdate:enddate].fillna(method="bfill")
        if debug:
            print(node_pwr)
            missing = node_pwr['raw'].isnull().sum()
            print("{}: {} missing {}\n".format(num, entity, missing))
        m6_hourly_pwr[entity] = node_pwr[startdate:enddate]
```
%% Cell type:code id: tags:
```
m6_hourly_pwr
```
%% Cell type:code id: tags:
```
m6_hourly_pwr.fillna(0)
```
%% Cell type:markdown id: tags:
## Plot Per-node Hourly for Row 5 Rack 1
This is just to see the data for each node in one plot and get a feel for how the nodes behave relative to each other. Plot nodes in individual subplots to discern the behavior of specific nodes. It does give a sense of how the total power adds up.
Inspect the nodes in the first rack.
Plot help on [shared x-axis](https://stackoverflow.com/a/37738851)
on [correct pandas legend use](https://stackoverflow.com/a/59797261)
and [subplot legend placement](https://stackoverflow.com/a/27017307)
%% Cell type:code id: tags:
```
num_nodes=36
dftest=m6_hourly_pwr[displaystart:displaystop].fillna(0).iloc[:,1:2]
```
%% Cell type:code id: tags:
```
type(dftest)
```
%% Cell type:code id: tags:
```
print(np.__version__)
```
%% Cell type:code id: tags:
```
!module
```
%% Cell type:code id: tags:
```
dftest.info()
```
%% Cell type:code id: tags:
```
startnode=35
stopnode=71
num_nodes=stopnode-startnode
fig, axes = plt.subplots(num_nodes,1, sharex=True, figsize=(20,30))
for i in range(num_nodes):
    m6_hourly_pwr[displaystart:displaystop].iloc[:,i+1:i+2].plot(ax=axes[i], legend=True)
for i in range(startnode, startnode+1):
    print(i)
    dftmp=m6_hourly_pwr[displaystart:displaystop].fillna(0).iloc[:,i+1:i+2]
    dftmp.plot(ax=axes[i], legend=True)
    axes[i].legend(loc='lower left')
```
%% Cell type:markdown id: tags:
Overview plot reveals missing power data for a number of nodes. Inspect one up close.
%% Cell type:code id: tags:
```
select_node="c0022"
```
%% Cell type:code id: tags:
```
df[df.entity==select_node].set_index("datetime")["2020-09-01":"2020-10-04"]
```
%% Cell type:code id: tags:
```
m6_hourly_pwr[displaystart:displaystop].iloc[:,3:4]["2020-10-03":"2020-10-04"]
```
%% Cell type:code id: tags:
```
m6_hourly_pwr[displaystart:displaystop].iloc[:,3:4]["2020-09-28":"2020-10-14"].plot()
```
%% Cell type:code id: tags:
```
m6_hourly_pwr[displaystart:displaystop].iloc[:,3:4].plot()
```
%% Cell type:code id: tags:
```
df[df["entity"]=="c0001"]
```
%% Cell type:code id: tags:
```
df[df["entity"]=="c0001"]["datetime"].max()
```
%% Cell type:markdown id: tags:
## Identify nodes that have missing data
Identify nodes that have NaN values over the past month.
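%% Cell type:markdown id: tags:
A compact general pandas idiom for listing columns containing any NaN (toy data, hypothetical node names):
```
import numpy as np
import pandas as pd

toy = pd.DataFrame({'c0001': [1.0, 2.0], 'c0002': [1.0, np.nan]})
with_gaps = list(toy.columns[toy.isna().any()])  # columns with at least one NaN
```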
%% Cell type:code id: tags:
```
nan_mask = m6_hourly_pwr["2021-03-22":"2021-03-23"].isna()
nan_mask = m6_hourly_pwr[startdate:enddate].isna()
```
%% Cell type:code id: tags:
```
power_missing = nan_mask[nan_mask].apply(lambda row: row[row == True].index, axis=1)[1]
```
%% Cell type:code id: tags:
```
print(*power_missing,sep=", ")
```
%% Cell type:code id: tags:
```
num_nodes=len(power_missing)
fig, axes = plt.subplots(num_nodes,1, sharex=True, figsize=(20,30))
for i, node in enumerate(power_missing):
    m6_hourly_pwr[node].plot(ax=axes[i], legend=True)
    axes[i].legend(loc='lower left')
```
%% Cell type:code id: tags:
```
node
```
%% Cell type:code id: tags:
```
df[df["entity"]==node]["datetime"].max()
```
%% Cell type:code id: tags:
```
lastreport = pd.DataFrame(columns=('node', 'datetime'))
for i, node in enumerate(power_missing):
    lastreport.loc[i] = [node, df[df["entity"]==node]["datetime"].max()]
```
%% Cell type:code id: tags:
```
lastreport.sort_values(by="datetime") #["datetime"].sort()
```
%% Cell type:code id: tags:
```
print("{}:\t{}".format(node, df[df["entity"]==node]["datetime"].max()))
```
%% Cell type:markdown id: tags:
## Plot all nodes power
Create overview plot of all nodes to observe meta-patterns.
%% Cell type:code id: tags:
```
num_nodes=len(m6_hourly_pwr.iloc[:,1:].columns)
fig, axes = plt.subplots(num_nodes,1, sharex=True, figsize=(20,num_nodes))
for i, node in enumerate(m6_hourly_pwr.iloc[:,1:].columns):
    if (i == num_nodes):
        break
    m6_hourly_pwr[node][displaystart:displaystop].plot(ax=axes[i], legend=True)
    axes[i].legend(loc='lower left')
```
%% Cell type:code id: tags:
```
m6_hourly_pwr.iloc[:,133:199].columns
```
%% Cell type:markdown id: tags:
# Plot Power Usage Graph
Pick the start and end date for the plots from the data range selected above. Generate the sum and plot only its values.
We skip over the first month of collection because it is uncommonly noisy.
%% Cell type:code id: tags:
```
kW = m6_hourly_pwr[displaystart:displaystop].sum(axis=1)/1000
kW = m6_hourly_pwr.iloc[:,133:199][displaystart:displaystop].sum(axis=1)/1000
```
%% Cell type:code id: tags:
```
ax = kW.plot()
ax.set_ylabel("Power (kW)")
ax.set_title("Cheaha compute and login node hourly power use")
```
%% Cell type:markdown id: tags:
Resample the hourly sum to daily means to support the seven-day average.
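%% Cell type:markdown id: tags:
Note that a centered rolling mean needs a full 7-entry window, so the edges of the series come out NaN. A toy sketch of the behavior:
```
import pandas as pd

daily_vals = pd.Series(range(10), dtype=float)
roll = daily_vals.rolling(7, center=True).mean()
# roll[3] is the mean of values 0..6, i.e. 3.0; the first and last three entries are NaN
```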
%% Cell type:code id: tags:
```
kW_d = kW.resample('D').mean()
```
%% Cell type:code id: tags:
```
# Compute the centered 7-day rolling mean
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rolling.html
kW_7d = kW_d.rolling(7, center=True).mean()
```
%% Cell type:code id: tags:
```
# Plot hourly, daily, and 7-day rolling mean
fig, ax = plt.subplots()
ax.plot(kW, marker='.', markersize=2, color='gray', linestyle='None', label='Hourly Average')
ax.plot(kW_d, color='brown', linewidth=2, label='1-day Average')
ax.plot(kW_7d, color='black', linewidth=4, label='7-day Rolling Average')
label='Trend (7 day Rolling Mean)'
ax.legend()
ax.set_ylabel('Power (kW)')
ax.set_title('Cheaha Trends in Electricity Consumption');
```
%% Cell type:markdown id: tags:
# Save Hourly Power to Dataframe
This makes it easy to use the data in other analysis and learning efforts.
%% Cell type:code id: tags:
```
m6_hourly_pwr.to_pickle("m6_hourly_pwr.gz")
```
%% Cell type:code id: tags:
```
df.to_pickle("power_stats_raw_df.gz")
```
......