Commit 1e4984d3 authored by Carsten Eie Frigaard

pre-lesson-01-update

parent ce36fff0
%% Cell type:markdown id: tags:
# ITMAL Intro
## Mini Python Demo
REVISIONS||
---------||
2019-01-28|CEF, initial.
2019-08-20|CEF, E19 ITMAL update.
2019-08-28|CEF, split into more cells.
2020-01-25|CEF, F20 ITMAL update.
2020-08-31|CEF, E20 ITMAL update, fixed typo in y.shape and made gfx links to BB.
2021-02-01|CEF, F21 ITMAL update.
### Mini Python/Jupyter Notebook demo
Built-in Python arrays and NumPy arrays...
%% Cell type:code id: tags:
``` python
%reset -f
# import clause, imports numpy as the name 'np'
import numpy as np
# python build-in array
x = [[1, 2, 3], [4, 5, 6]]
# print using f-string syntax, preferred over, say, print('x = ', x)
print(f'x = {x}')
print('OK')
```
%% Cell type:code id: tags:
``` python
# create a numpy array (notice the 1.0 double)
y = np.array( [[1.0, 2, 3, 4], [10, 20, 30, 42]] )
print(f'y = {y}')
print()
print(f'y.dtype={y.dtype}, y.itemsize={y.itemsize}, y.shape={y.shape}')
print('\nOK')
```
%% Cell type:code id: tags:
``` python
print("indexing...like a (m x n) matrix")
print(y[0,1])
print(y[0,-1]) # elem 0-from the 'right', strange but pythonic
print(y[0,-2]) # elem 1-from the 'right'
# print a column (it will display as a 'row')
print(y[:,1])
print('\nOK')
```
%% Cell type:markdown id: tags:
#### Matrix multiplication
Just use a NumPy array as a matrix-like class; create a (3 x 4) matrix and do some matrix operations on it...
<img src='https://itundervisning.ase.au.dk/GITMAL/L01/Figs/matrix.jpg' alt="WARNING: you need to be logged into Blackboard to view images">
(NOTE: do not use `numpy.matrix`, <a href='https://docs.scipy.org/doc/numpy/reference/generated/numpy.matrix.html'>it is unfortunately deprecated.</a>)
%% Cell type:code id: tags:
``` python
x = np.array([ [2, -5, -11 ,0], [-9, 4, 6, 13], [4, 7, 12, -2]])
y = np.transpose(x)
print(f'x={x}\nx.shape={x.shape}\ny.shape={y.shape}')
# NOTE: * is elementwise multiplication in numpy, not matrix multiplication;
# x*y will throw ValueError: operands could not be broadcast together with shapes (3,4) (4,3)
#z=x*y
# numpy's dot is a typical combo python function:
# inner-product if x and y are 1D arrays (vectors)
# matrix multiplication if x and y are 2D arrays (matrices)
z = np.dot(x, y)
print(f'\nThe dot product, np.dot(x, y)={z}')
# alternatives to .dot:
print(np.matmul(x, y))
print(x @ y)
# the deprecated numpy.matrix
mx = np.matrix(x)
my = np.matrix(y)
mz = mx*my
print(f'\nmatrix type mult: mx*my={mz}')
print('\nOK')
```
%% Cell type:markdown id: tags:
#### Writing pythonic, robust code
Range-checks and fail-fast...
%% Cell type:code id: tags:
``` python
import sys, traceback
print('Writing pythonic, robust code: range-checks and fail-fast...')
# python does all kinds of range-checks: robust coding
#print(y[:,-5]) # will throw!
print('a pythonic assert..')
assert True==0, 'notice the lack of () in python asserts' # NOTE: this assert fails on purpose
print('\nOK')
```
%% Cell type:code id: tags:
``` python
def MyTrace(some_exception):
    print(f'caught exception e="{some_exception}"')
    traceback.print_exc(file=sys.stdout)
    print()

print('a try-catch block..')
try:
    print(y[:,-5])
except IndexError as e:
    MyTrace(e)
finally:
    print('finally: executed last, no matter what..')

print('\nOK')
```
%% Cell type:code id: tags:
``` python
# This is python, but weird for C/C++/C# aficionados:
try:
    import a_non_existing_lib
except ImportError:
    print("you do not have the 'a_non_existing_lib' library!")

print("\nOK")
```
%% Cell type:markdown id: tags:
## Administration
REVISIONS||
---------||
2019-01-28| CEF, initial.
2019-08-20| CEF, E19 ITMAL update.
2019-08-28| CEF, split into more cells.
2020-01-25| CEF, F20 ITMAL update.
2020-08-31| CEF, E20 ITMAL update, fixed typo in y.shape and make gfx links to BB.
2021-02-01| CEF, F21 ITMAL update.
2021-08-02| CEF, update to E21 ITMAL.
......
%% Cell type:markdown id: tags:
# ITMAL Exercise
## Intro
We start by reusing parts of `01_the_machine_learning_landscape.ipynb` from Géron [GITHOML], beginning with what Géron says about life satisfaction vs. GDP per capita.
Halfway down this notebook, a list of questions for ITMAL is presented.
%% Cell type:markdown id: tags:
## Chapter 1 – The Machine Learning landscape
_This is the code used to generate some of the figures in chapter 1._
%% Cell type:markdown id: tags:
### Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
%% Cell type:code id: tags:
``` python
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "fundamentals"
def save_fig(fig_id, tight_layout=True):
    path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
    print("IGNORING: Saving figure", fig_id) # ITMAL: I've disabled saving of figures
    #if tight_layout:
    #    plt.tight_layout()
    #plt.savefig(path, format='png', dpi=300)
# Ignore useless warnings (see SciPy issue #5998)
import warnings
warnings.filterwarnings(action="ignore", module="scipy", message="^internal gelsd")
print("OK")
```
%% Output
OK
%% Cell type:markdown id: tags:
### Code example 1-1
This function just merges the OECD's life satisfaction data and the IMF's GDP per capita data. It's a bit too long and boring and it's not specific to Machine Learning, which is why I left it out of the book.
%% Cell type:code id: tags:
``` python
def prepare_country_stats(oecd_bli, gdp_per_capita):
    oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
    oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
    gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
    gdp_per_capita.set_index("Country", inplace=True)
    full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita,
                                  left_index=True, right_index=True)
    full_country_stats.sort_values(by="GDP per capita", inplace=True)
    remove_indices = [0, 1, 6, 8, 33, 34, 35]
    keep_indices = list(set(range(36)) - set(remove_indices))
    return full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices]

print("OK")
```
%% Cell type:markdown id: tags:
The code in the book expects the data files to be located in the current directory. I just tweaked it here to fetch the files in datasets/lifesat.
%% Cell type:code id: tags:
``` python
import os
datapath = os.path.join("../datasets", "lifesat", "")
# NOTE: the ! prefix lets us run system commands..
# (command 'dir' for windows, 'ls' for Linux or Mac)
#
! dir
print("\nOK")
```
%% Cell type:code id: tags:
``` python
# Code example
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn.linear_model
# Load the data
try:
    oecd_bli = pd.read_csv(datapath + "oecd_bli_2015.csv", thousands=',')
    gdp_per_capita = pd.read_csv(datapath + "gdp_per_capita.csv", thousands=',', delimiter='\t',
                                 encoding='latin1', na_values="n/a")
except Exception as e:
    print(f"ITMAL NOTE: you need to have the 'datasets' dir in your path; please unzip 'datasets.zip' and make sure it is included in the datapath='{datapath}' setting in the cell above..")
    raise e
# Prepare the data
country_stats = prepare_country_stats(oecd_bli, gdp_per_capita)
X = np.c_[country_stats["GDP per capita"]]
y = np.c_[country_stats["Life satisfaction"]]
# Visualize the data
country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction')
plt.show()
# Select a linear model
model = sklearn.linear_model.LinearRegression()
# Train the model
model.fit(X, y)
# Make a prediction for Cyprus
X_new = [[22587]] # Cyprus' GDP per capita
y_pred = model.predict(X_new)
print(y_pred) # outputs [[ 5.96242338]]
print("OK")
```
%% Cell type:markdown id: tags:
## ITMAL
Now we plot the linear regression result.
Just ignore all the data-plotting code mumbo-jumbo here (code taken directly from the notebook, [GITHOML])...and see the final plot.
%% Cell type:code id: tags:
``` python
oecd_bli = pd.read_csv(datapath + "oecd_bli_2015.csv", thousands=',')
oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
#oecd_bli.head(2)
gdp_per_capita = pd.read_csv(datapath+"gdp_per_capita.csv", thousands=',', delimiter='\t',
encoding='latin1', na_values="n/a")
gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
gdp_per_capita.set_index("Country", inplace=True)
#gdp_per_capita.head(2)
full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita, left_index=True, right_index=True)
full_country_stats.sort_values(by="GDP per capita", inplace=True)
#full_country_stats
remove_indices = [0, 1, 6, 8, 33, 34, 35]
keep_indices = list(set(range(36)) - set(remove_indices))
sample_data = full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices]
#missing_data = full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[remove_indices]
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.axis([0, 60000, 0, 10])
position_text = {
    "Hungary": (5000, 1),
    "Korea": (18000, 1.7),
    "France": (29000, 2.4),
    "Australia": (40000, 3.0),
    "United States": (52000, 3.8),
}
for country, pos_text in position_text.items():
    pos_data_x, pos_data_y = sample_data.loc[country]
    country = "U.S." if country == "United States" else country
    plt.annotate(country, xy=(pos_data_x, pos_data_y), xytext=pos_text,
                 arrowprops=dict(facecolor='black', width=0.5, shrink=0.1, headwidth=5))
    plt.plot(pos_data_x, pos_data_y, "ro")
#save_fig('money_happy_scatterplot')
plt.show()
from sklearn import linear_model
lin1 = linear_model.LinearRegression()
Xsample = np.c_[sample_data["GDP per capita"]]
ysample = np.c_[sample_data["Life satisfaction"]]
lin1.fit(Xsample, ysample)
t0 = 4.8530528
t1 = 4.91154459e-05
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.axis([0, 60000, 0, 10])
M=np.linspace(0, 60000, 1000)
plt.plot(M, t0 + t1*M, "b")
plt.text(5000, 3.1, r"$\theta_0 = 4.85$", fontsize=14, color="b")
plt.text(5000, 2.2, r"$\theta_1 = 4.91 \times 10^{-5}$", fontsize=14, color="b")
#save_fig('best_fit_model_plot')
plt.show()
print("OK")
```
%% Cell type:markdown id: tags:
## Ultra-brief Intro to the Fit-Predict Interface in Scikit-learn
OK, the important lines in the cells above are really just
```python
# Select a linear model
model = sklearn.linear_model.LinearRegression()
# Train the model
model.fit(X, y)
# Make a prediction for Cyprus
X_new = [[22587]] # Cyprus' GDP per capita
y_pred = model.predict(X_new)
print(y_pred) # outputs [[ 5.96242338]]
```
What happens here is that we create a model, called LinearRegression (for now just a 100% black-box method), feed it our training data matrix $\mathbf{X}$ and the corresponding desired training ground truth vector $\mathbf{y}$ (aka $\mathbf{y}_{true}$), and then train the model.
After training we extract a _predicted_ $\mathbf{y}_{pred}$ vector from the model, for some input scalar $x=22587$.
### Supervised Training via Fit-predict
The fit-predict (or train-predict) process on some data can be visualized as
<img src="https://blackboard.au.dk/bbcswebdav/courses/BB-Cou-UUVA-94506/Fildeling/L01/Figs/supervised_learning.png" alt="WARNING: you need to be logged into Blackboard to view images" style="height:250px">
<img src="https://itundervisning.ase.au.dk/GITMAL/L01/Figs/supervised_learning.png" alt="WARNING: you need to be logged into Blackboard to view images" style="height:250px">
In this figure, the untrained model is a `sklearn.linear_model.LinearRegression` python object. When trained via `model.fit()`, using some known answers for the data, $\mathbf{y}_{true}~$, it becomes the blue-boxed trained model.
The trained model can be used to _predict_ values from new, yet-unseen, data, via the `model.predict()` function.
In other words: how high is the life satisfaction for Cyprus, with a GDP per capita of 22587 USD?
Just call `model.predict()` on a matrix with one single numerical element, 22587. Well, not really a matrix, but a python list-of-lists, `[[22587]]`:
```y_pred = model.predict([[22587]])```
Apparently 5.96, the model answers!
(you get used to the python built-in containers and numpy on the way..)
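To see the pattern in isolation, here is a minimal fit-predict sketch on a tiny, made-up data set (the numbers are illustrative only, not taken from the GDP data above):
```python
import numpy as np
import sklearn.linear_model

X_train = np.array([[1.0], [2.0], [3.0], [4.0]])  # four samples, one feature each
y_train = np.array([2.1, 3.9, 6.2, 8.1])          # made-up targets, roughly y = 2x

model = sklearn.linear_model.LinearRegression()
model.fit(X_train, y_train)        # train on (X, y)
print(model.predict([[2.5]]))      # predict for a new, unseen sample
```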
%% Cell type:markdown id: tags:
### Qa) The $\theta$ parameters and the $R^2$ Score
Géron uses some $\theta$ parameters from this linear regression model in his examples and plots above.
How do you extract the $\theta_0$ and $\theta_1$ coefficients in his life-satisfaction figure from the linear regression model, via the model's python attributes?
Read the documentation for the linear regressor at
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
Extract the score=0.734 for the model using the data (X,y), and explain in broad terms what the $R^2$ score measures:
$$
\begin{array}{rcll}
R^2 &=& 1 - u/v\\
u &=& \sum (y_{true} - y_{pred}~)^2 ~~~&\small \mbox{residual sum of squares}\\
v &=& \sum (y_{true} - \mu_{true}~)^2 ~~~&\small \mbox{total sum of squares}
\end{array}
$$
with $y_{true}~$ being the true data, $y_{pred}~$ being the predicted data from the model and $\mu_{true}~$ being the true mean of the data.
What are the minimum and maximum values for $R^2~$?
Is it best to have a low $R^2$ score or a high $R^2$ score? That is, is $R^2$ a loss/cost function or a function that measures fitness/goodness?
NOTE$_1$: the $R^2$ is just one of many scoring functions used in ML; we will see plenty of other methods later.
NOTE$_2$: there are different definitions of the $R^2$, 'coefficient of determination', in the literature. We strictly use the formulation above.
OPTIONAL: Read the additional in-depth literature on $R^2~$:
> https://en.wikipedia.org/wiki/Coefficient_of_determination
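As a sanity check of the formula above (not a solution to Qa), here is a small sketch that computes $R^2$ by hand on made-up numbers and compares it to sklearn's `r2_score`:
```python
import numpy as np
from sklearn.metrics import r2_score

# made-up true and predicted values, for illustration only
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])

u = np.sum((y_true - y_pred)**2)         # residual sum of squares
v = np.sum((y_true - y_true.mean())**2)  # total sum of squares
print(1 - u/v)                           # R^2 by the formula above
print(r2_score(y_true, y_pred))          # should agree
```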
%% Cell type:code id: tags:
``` python
# TODO: add your code here..
assert False, "TODO: solve Qa, and remove me.."
```
%% Cell type:markdown id: tags:
## The Merits of the Fit-Predict Interface
Now comes the really fun part: all methods in Scikit-learn have this fit-predict interface, and you can easily interchange models in your code just by instantiating a new and perhaps better ML model.
There are still a lot of per-model parameters to tune, but fortunately, the built-in default values provide you with a good initial guess for good model setup.
Later on, you might want to go into the parameter detail trying to optimize some params (opening the lid of the black-box ML algo), but for now, we pretty much stick to the default values.
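To make the interchangeability concrete, here is a minimal sketch swapping two regressors behind the same fit-predict interface (using a decision tree and made-up data, deliberately not the k-nearest model of the exercise below):
```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

X_demo = np.array([[1.0], [2.0], [3.0], [4.0]])  # made-up data
y_demo = np.array([2.0, 4.1, 5.9, 8.2])

for m in (LinearRegression(), DecisionTreeRegressor()):
    m.fit(X_demo, y_demo)                        # identical interface for both models
    print(type(m).__name__, m.predict([[2.5]]))
```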
Let's now try to replace the linear regression; let's test a _k-nearest neighbours algorithm_ instead (still black-boxed algorithm-wise)...
### Qb) Using k-Nearest Neighbors
Change the linear regression model to a `sklearn.neighbors.KNeighborsRegressor` with k=3 (as in [HOML:p21,bottom]), and rerun the `fit` and `predict` using this new model.
What do the k-nearest neighbours estimate for Cyprus, compared to the linear regression (it should yield 5.77)?
What _score-method_ does the k-nearest model use, and is it comparable to the linear regression model's?
Seek out the documentation in Scikit-learn; if the scoring methods are not equal, can they be compared to each other at all?
Remember to put pointers/text from the Scikit-learn documentation in the journal...(did you find the right kNN model, etc.).
%% Cell type:code id: tags:
``` python
# this is our raw data set:
sample_data
```
%% Cell type:code id: tags:
``` python
# and this is our preprocessed data
country_stats
```
%% Cell type:code id: tags:
``` python
# Prepare the data
X = np.c_[country_stats["GDP per capita"]]
y = np.c_[country_stats["Life satisfaction"]]
print("X.shape=",X.shape)
print("y.shape=",y.shape)
# Visualize the data
country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction')
plt.show()
# Select and train a model
# TODO: add your code here..
assert False, "TODO: add your instantiation and training of the knn model here.."
# knn = ..
```
%% Cell type:markdown id: tags:
### Qc) Tuning Parameter for k-Nearest Neighbors and A Sanity Check
But that is not the full story. Try plotting the predictions for both models in the same graph and tune the `k_neighbor` parameter of the `KNeighborsRegressor` model.
Choosing `k_neighbor=1` produces a nice `score=1`; that seems optimal...but is it really that good?
Plotting the two models in a 'Life Satisfaction-vs-GDP per capita' 2D plot, by creating an array in the range 0 to 60000 (USD) (the `M` matrix below) and then predicting the corresponding y values, will shed some light on this.
Now, reusing the plot stubs below, try to explain why the k-nearest neighbour with `k_neighbor=1` has such a good score.
Does a score=1 with `k_neighbor=1` also mean that this would be the preferred estimator for the job?
Hint: here is a similar plot of a KNN for a small set of different k's:
<img src="https://blackboard.au.dk/bbcswebdav/courses/BB-Cou-UUVA-91831/Fildeling/L01/Figs/regression_with_knn.png" alt="WARNING: you need to be logged into Blackboard to view images" style="height:150px">
<img src="https://itundervisning.ase.au.dk/GITMAL/L01/Figs/regression_with_knn.png" alt="WARNING: you need to be logged into Blackboard to view images" style="height:150px">
%% Cell type:code id: tags:
``` python
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.axis([0, 60000, 0, 10])
# create a test matrix M, with the same dimensionality as X, in the range [0;60000]
# and with a step size of your choice
m=np.linspace(0, 60000, 1000)
M=np.empty([m.shape[0],1])
M[:,0]=m
# from this test M data, predict the y values via the lin.reg. and k-nearest models
y_pred_lin = model.predict(M)
y_pred_knn = knn.predict(M) # ASSUMING the variable name 'knn' of your KNeighborsRegressor
# use plt.plot to plot x-y into the sample_data plot..
plt.plot(m, y_pred_lin, "r")
plt.plot(m, y_pred_knn, "b")
# TODO: add your code here..
assert False, "TODO: try knn with different k_neighbor params, that is re-instantiate knn, refit and replot.."
```
%% Cell type:markdown id: tags:
### Qd) Trying out a Neural Network
Let us then try a Neural Network on the data; using the fit-predict interface allows us to plug a new model into our existing code.
There are a number of different NNs available; let's just hook into Scikit-learn's Multi-Layer Perceptron for regression, that is, an `MLPRegressor`.
Now, the data set for training the MLP is really not well scaled, so we need to tweak a lot of parameters in the MLP just to get it to produce some sensible output: without preprocessing and scaling of the input data, `X`, the MLP is really a bad choice of model for the job, since it so easily produces garbage output.
Try training the `mlp` regression model below, predict the value for Cyprus, and find the `score` value for the training set...just as we did for the linear and KNN models.
Can the `MLPRegressor` score function be compared with the linear and KNN-scores?
%% Cell type:code id: tags:
``` python
from sklearn.neural_network import MLPRegressor
# Setup MLPRegressor, can be very tricky for the tiny-data
mlp = MLPRegressor( hidden_layer_sizes=(10,), solver='adam', activation='relu', tol=1E-5, max_iter=100000, verbose=True)
mlp.fit(X,y.ravel())
# lets make a MLP regressor prediction and redo the plots
y_pred_mlp = mlp.predict(M)
plt.plot(m, y_pred_lin, "r")
plt.plot(m, y_pred_knn, "b")
plt.plot(m, y_pred_mlp, "k")
# TODO: add your code here..
assert False, "TODO: predict value for Cyprus and fetch the score() from the fitting."
```
%% Cell type:markdown id: tags:
### [OPTIONAL] Qe) Neural Network with pre-scaling
Now, the neurons in neural networks normally expect input data in the range `[0;1]`, or sometimes in the range `[-1;1]`, meaning that for values outside this range the output of the neuron will saturate to its min or max value (also typically `0` or `1`).
A concrete value of `X` is, say, 22,000 USD, which is far away from what the MLP expects. A fix to the problem in Qd) is to preprocess the data by scaling it down to something more sensible.
Try to scale X to a range of `[0;1]`, re-train the MLP, re-plot and find the new score from the rescaled input. Any better?
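A minimal sketch of one possible `[0;1]` min-max scaling, shown on a made-up array (not the full Qe solution):
```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_demo = np.array([[9000.0], [22000.0], [43000.0]])  # made-up GDP-like values

# min-max scale each column to [0;1] by hand
X_scaled = (X_demo - X_demo.min(axis=0)) / (X_demo.max(axis=0) - X_demo.min(axis=0))
print(X_scaled)

# sklearn's MinMaxScaler does the same, and remembers the scaling for later reuse
print(MinMaxScaler().fit_transform(X_demo))
```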
%% Cell type:code id: tags:
``` python
# TODO: add your code here..
assert False, "TODO: try prescale data for the MPL...any better?"
```
%% Cell type:markdown id: tags:
REVISIONS||
---------||
2018-12-18|CEF, initial.
2019-01-24|CEF, spell checked and update.
2019-01-30|CEF, removed reset -f, did not work on all PC's.
2019-08-20|CEF, E19 ITMAL update.
2019-08-26|CEF, minor mod to NN exercise.
2019-08-28|CEF, fixed dataset dir issue, datapath "../datasets" changed to "./datasets".
2020-01-25|CEF, F20 ITMAL update.
2020-08-06|CEF, E20 ITMAL update, minor fix of ls to dir and added exception to datasets load, updated figs paths.
2020-09-24|CEF, updated text to R2, Qa exe.
2020-09-28|CEF, updated R2 and theta extraction, use python attributes, moved revision table. Added comment about MLP.
2021-01-12|CEF, updated Qe.
2021-02-08|CEF, added ls for Mac/Linux to dir command cell.
2021-08-02|CEF, update to E21 ITMAL.
......
%% Cell type:markdown id: tags:
# ITMAL Exercise
## Python Basics
### Modules and Packages in Python
Reuse of code in Jupyter notebooks can be done by including a raw python source file via the magic command
```python
%load filename.py
```
but this just pastes the source into the notebook and creates all kinds of pain regarding code maintenance.
A better way is to use a python __module__. A module consists simply (and pythonically) of a directory with a module init file in it (possibly empty):
```python
libitmal/__init__.py
```
To this directory you can add modules in form of plain python files, say
```python
libitmal/utils.py
```
That's about it! The `libitmal` file tree should now look like
```
libitmal/
├── __init__.py
├── __pycache__
│   ├── __init__.cpython-36.pyc
│   └── utils.cpython-36.pyc
├── utils.py
```
with the cache part only being present once the module has been initialized.
You should now be able to use the `libitmal` module via an import directive, like
```python
import numpy as np
from libitmal import utils as itmalutils
print(dir(itmalutils))
print(itmalutils.__file__)
X = np.array([[1,2],[3,-100]])
itmalutils.PrintMatrix(X,"mylabel=")
itmalutils.TestAll()
```
#### Qa Load and test the `libitmal` module
Try out the `libitmal` module from [GITMAL]. Load this module and run the function
```python
from libitmal import utils as itmalutils
itmalutils.TestAll()
```
from this module.
##### Implementation details
Note that there is a python module ___include___ search path that you may have to investigate and modify. For my Linux setup, I have an export or declare statement in my `.bashrc` file, like
```bash
declare -x PYTHONPATH=~/ASE/ML/itmal:$PYTHONPATH
```
but your ```itmal```, the [GITMAL] root dir, may be placed elsewhere.
For ___Windows___, you have to add `PYTHONPATH` to your user environment variables...see screenshot below (enlarge by modding the image width-tag or find the original png in the Figs directory).
<img src="https://blackboard.au.dk/bbcswebdav/courses/BB-Cou-UUVA-94506/Fildeling/L01/Figs/Screenshot_windows_enviroment_variables.png" alt="WARNING: you need to be logged into Blackboard to view images" style="width:350px">
<img src="https://itundervisning.ase.au.dk/GITMAL/L01/Figs/Screenshot_windows_enviroment_variables.png" alt="WARNING: you need to be logged into Blackboard to view images" style="width:350px">
or if you, like me, hate setting up things in a GUI, and prefer a console, try in a CMD on windows
```bash
CMD> setx.exe PYTHONPATH "C:\Users\auXXYYZZ\itmal"
```
replacing the username and path with whatever you have. If everything fails you could programmatically add your path to the libitmal directory as
```python
import sys,os
sys.path.append(os.path.expanduser('~/itmal'))
from libitmal import utils as itmalutils
print(dir(itmalutils))
print(itmalutils.__file__)
```
For the journal: remember to document your particular PATH setup.
%% Cell type:code id: tags:
``` python
# TODO: Qa...
```
%% Cell type:markdown id: tags:
#### Qb Create your own module, with some functions, and test it
Now create your own module with some dummy functionality. Load it and run your dummy function in a Jupyter Notebook.
Keep this module at hand, when coding, and try to capture reusable python functions in it as you invent them!
For the journal: remember to document your particular library setup (where did you place files, etc).
%% Cell type:code id: tags:
``` python
# TODO: Qb...
```
%% Cell type:markdown id: tags:
#### Qc How do you 'recompile' a module?
When changing the module code, Jupyter will keep running on the old module. How do you force the Jupyter notebook to re-load the module changes?
%% Cell type:code id: tags:
``` python
# TODO: Qc...
```
%% Cell type:markdown id: tags:
#### [OPTIONAL] Qd Write a Howto on Python Modules and Packages
Write a short description of how to use modules in Python (notes on modules path, import directives, directory structure, etc.)
%% Cell type:code id: tags:
``` python
# TODO: Qd...
```
%% Cell type:markdown id: tags:
### Classes in Python
Good news: Python has classes. Bad news: they are somewhat obscure compared to C++ classes.
Though we will not use object-oriented programming in Python intensively, we still need some basic understanding of Python classes. Let's just dig into a class demo; here is `MyClass` in Python:
```python
class MyClass:
    myvar = "blah"

    def myfun(self):
        print("This is a message inside the class.")

myobjectx = MyClass()
```
NOTE: The following exercise assumes some C++ knowledge, in particular from the OPRG and OOP courses. If you are an EE student, then ignore the cryptic C++ comments and jump directly to some Python code instead. It's the Python solution here that is important!
#### Qe Extend the class with some public and private functions and member variables
How are private functions and member variables represented in python classes?
What is the meaning of `self` in python classes?
What happens to a function inside a class if you forget `self` in the parameter list, like `def myfun():` instead of `def myfun(self):`, and you try to call it like `myobjectx.myfun()`? Remember to document the demo code and the result.
[OPTIONAL] What do 'class variables' and 'instance variables' in python correspond to in C++? Maybe you can figure it out; I did not really get it from reading, say, this tutorial:
> https://www.digitalocean.com/community/tutorials/understanding-class-and-instance-variables-in-python-3
%% Cell type:code id: tags:
``` python
# TODO: Qe...
```
%% Cell type:markdown id: tags:
#### Qf Extend the class with a Constructor
Figure out a way to declare/define a constructor (CTOR) in a python class. How is it done in python?
Is there a class destructor (DTOR) in python? Give a textual reason why/why-not python has a DTOR.
Hint: python is garbage collected, like C#; do not go into the details of the `__del__`, `__enter__`, `__exit__` functions...unless you find it irresistible to investigate.
%% Cell type:code id: tags:
``` python
# TODO: Qf...
```
%% Cell type:markdown id: tags:
#### Qg Extend the class with a to-string function
Then find a way to serialize a class, that is, to make some `tostring()` functionality similar to the C++
```C++
friend ostream& operator<<(ostream& os, const MyClass& x)
{
    return os << ..
}
```
If you do not know C++, you might be aware of the C# way to string-serialize
```
string s = myobject.ToString();
```
that is, a per-class built-in function `ToString()`; now, what is the pythonic way of 'printing' a class instance?
%% Cell type:code id: tags:
``` python
# TODO: Qg...
```
%% Cell type:markdown id: tags:
#### [OPTIONAL] Qh Write a Howto on Python Classes
Write a _How-To use Classes Pythonically_, including a description of public/privacy, constructors/destructors, the meaning of `self`, and inheritance.
%% Cell type:code id: tags:
``` python
# TODO: Qh...
```
%% Cell type:markdown id: tags:
## Administration
REVISIONS||
---------||
2018-12-19| CEF, initial.
2018-02-06| CEF, updated and spell checked.
2018-02-07| CEF, made Qh optional.
2018-02-08| CEF, added PYTHONPATH for windows.
2018-02-12| CEF, small mod in itmalutils/utils.
2019-08-20| CEF, E19 ITMAL update.
2020-01-25| CEF, F20 ITMAL update.
2020-08-06| CEF, E20 ITMAL update, updated figs paths.
2020-09-07| CEF, added text on OPRG and OOP for EE's
2020-09-29| CEF, added elaboration for journal in Qa+b.
2021-02-06| CEF, fixed itmalutils.TestAll() in markdown cell.
2021-08-02| CEF, update to E21 ITMAL.
......
%% Cell type:markdown id: tags:
# ITMAL Exercise
## Mathematical Foundation
### Vector and matrix representation in python
Say we have $d$ features for a given sample point. The $d$-sized feature column vector for data sample $i$ is then given by
$$
\newcommand\rem[1]{}
\rem{ITMAL: CEF def and LaTeX commands, remember: no newlines in defs}
\newcommand\eq[2]{#1 &=& #2\\}
\newcommand\ar[2]{\begin{array}{#1}#2\end{array}}
\newcommand\ac[2]{\left[\ar{#1}{#2}\right]}
\newcommand\st[1]{_{\scriptsize #1}}
\newcommand\norm[1]{{\cal L}_{#1}}
\newcommand\obs[2]{#1_{\mbox{\scriptsize obs}}^{\left(#2\right)}}
\newcommand\diff[1]{\mbox{d}#1}
\newcommand\pown[1]{^{(#1)}}
\def\pownn{\pown{n}}
\def\powni{\pown{i}}
\def\powtest{\pown{\mbox{\scriptsize test}}}
\def\powtrain{\pown{\mbox{\scriptsize train}}}
\def\bX{\mathbf{M}}
\def\bX{\mathbf{X}}
\def\bZ{\mathbf{Z}}
\def\bw{\mathbf{m}}
\def\bx{\mathbf{x}}
\def\by{\mathbf{y}}
\def\bz{\mathbf{z}}
\def\bw{\mathbf{w}}
\def\btheta{{\boldsymbol\theta}}
\def\bSigma{{\boldsymbol\Sigma}}
\def\half{\frac{1}{2}}
\bx\powni =
\ac{c}{
x_1\powni \\
x_2\powni \\
\vdots \\
x_d\powni
}
$$
or typically written transposed, to save space, as
$$
\bx\powni = \left[ x_1\powni~~ x_2\powni~~ \cdots~~ x_d\powni\right]^T
$$
such that $\bX$ can be constructed of the full set of $n$ samples of these feature vectors
$$
\bX =
\ac{c}{
(\bx\pown{1})^T \\
(\bx\pown{2})^T \\
\vdots \\
(\bx\pownn)^T
}
$$
or by explicitly writing out the full data matrix $\bX$ consisting of scalars
$$
\bX =
\ac{cccc}{
x_1\pown{1} & x_2\pown{1} & \cdots & x_d\pown{1} \\
x_1\pown{2} & x_2\pown{2} & \cdots & x_d\pown{2}\\
\vdots & & & \vdots \\
x_1\pownn & x_2\pownn & \cdots & x_d\pownn\\
}
$$
but sometimes the notation is a little more fuzzy, leaving out the transpose operator for $\mathbf x$ and, in doing so, interpreting the $\mathbf{x}^{(i)}$'s as row vectors instead of column vectors.
The target column vector, $\mathbf y$, also has the dimension $n$
$$
\by = \ac{c}{
y\pown{1} \\
y\pown{2} \\
\vdots \\
y\pownn \\
}
$$
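In code, such an $\mathbf{X}$ and $\mathbf{y}$ are just a 2D and a 1D `np.array`; a tiny sketch with made-up numbers (deliberately not those of Qa below):
```python
import numpy as np

# two samples (n=2), three features (d=3): each row is one (x^(i))^T
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
y = np.array([0.5, 1.5])  # one target per sample

print(X.shape)  # (n, d) = (2, 3)
print(y.shape)  # (n,)   = (2,)
```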
#### Qa Given the following $\mathbf{x}^{(i)}$'s, construct and print the $\mathbf X$ matrix in python.
$$
\ar{rl}{
\bx\pown{1} &= \ac{c}{ 1, 2, 3}^T \\
\bx\pown{2} &= \ac{c}{ 4, 2, 1}^T \\
\bx\pown{3} &= \ac{c}{ 3, 8, 5}^T \\
\bx\pown{4} &= \ac{c}{-9,-1, 0}^T
}
$$
##### Implementation Details
Notice that the ```np.matrix``` class is getting deprecated! So, we use numpy's ```np.array``` as the matrix container. Also, __do not__ use the built-in python lists or the numpy matrix subclass.
%% Cell type:code id: tags:
``` python
# Qa
import numpy as np
y = np.array([1,2,3,4]) # NOTE: you'll need this later
# TODO..create and print the full matrix
assert False, "TODO: solve Qa, and remove me.."
```
%% Cell type:markdown id: tags:
### Norms, metrics or distances
The $\norm{2}$ Euclidean distance, or norm, for a vector of size $n$ is defined as
$$
\norm{2}:~~ ||\bx||_2 = \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2}\\
$$
and the distance between two vectors is given by
$$
\ar{ll}{
d(\bx,\by) &= ||\bx-\by||_2\\
&= \left( \sum_{i=1}^n \left| x_{i}-y_{i} \right|^2 \right)^{1/2}
}
$$
This Euclidean norm is sometimes also just denoted $||\bx||$, leaving out the 2 in the subscript.
The squared $\norm{2}$ for a vector can compactly be expressed via
$$
\norm{2}^2: ||\bx||_2^2 = \bx^\top\bx
$$
by the general dot or inner-product (taking $\by=\bx$ in the $\norm{2}^2$ above)
$$
\ar{ll}{
\langle\bx,\by\rangle &= \bx\cdot\by\\
&=\bx^\top \by\\
&= \sum_{i=1}^n x_{i} y_{i} \\
&= ||\bx|| ~ ||\by|| \cos{\theta}
}
$$
taking $\theta$ to be zero, and hence $\cos(\theta)=1$, when calculating $\langle\bx,\bx\rangle$.
The $\norm{1}$ 'City-block' norm is given by
$$
\norm{1}:~~ ||\bx||_1 = \sum_i |x_i|
$$
but the $\norm{1}$ is not used as intensively as its more popular $\norm{2}$ cousin.
Notice that $|x|$ in code means ```fabs(x)```.
#### Qb Implement the $\norm{1}$ and $\norm{2}$ norms for vectors in python.
The first implementation must be a 'low-level'/explicit one---using primitive/built-in functions, like ```+```, ```*``` and power ```**``` only! The square root can be obtained via power, like ```x**0.5```.
Do NOT use any methods from libraries, like ```math.sqrt```, ```math.fabs```, ```numpy.linalg.inner```, ```numpy.dot()``` or similar. Yes, using such libraries is an efficient way of building python software, but in this exercise we want to explicitly map the mathematical formulas to python code.
Name your functions L1 and L2, respectively; they both take one vector as input argument.
But test your implementation against some built-in functions, say ```numpy.linalg.norm```.
When this works, and passes the tests below, optimize the $\norm{2}$ such that it uses numpy's dot operator instead of an explicit sum; call this function L2Dot. This implementation must be pythonic, i.e. it must not contain explicit for- or while-loops.
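For reference (not the required low-level implementation), ```numpy.linalg.norm``` computes both norms directly; a quick sketch of the values your own L1 and L2 should reproduce:
```python
import numpy as np

v = np.array([1.0, -2.0, 2.0])    # made-up test vector
print(np.linalg.norm(v, ord=1))   # L1: |1| + |-2| + |2| = 5.0
print(np.linalg.norm(v, ord=2))   # L2: sqrt(1 + 4 + 4)  = 3.0
print(np.sqrt(np.dot(v, v)))      # the same L2 via the dot product
```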
%% Cell type:code id: tags:
``` python
# TODO: solve Qb...implement the L1, L2 and L2Dot functions...
assert False, "TODO: solve Qb, and remove me.."
# TEST vectors: here I test your implementation...calling your L1() and L2() functions
tx=np.array([1, 2, 3, -1])
ty=np.array([3,-1, 4, 1])
expected_d1=8.0
expected_d2=4.242640687119285
d1=L1(tx-ty)
d2=L2(tx-ty)
print(f"tx-ty={tx-ty}, d1-expected_d1={d1-expected_d1}, d2-expected_d2={d2-expected_d2}")
from math import fabs # fabs is needed for the asserts below
eps=1E-9
assert fabs(d1-expected_d1)<eps, "L1 dist seems to be wrong"
assert fabs(d2-expected_d2)<eps, "L2 dist seems to be wrong"
print("OK(part-1)")
# comment-in once your L2Dot fun is ready...
#d2dot=L2Dot(tx-ty)
#print("d2dot-expected_d2=",d2dot-expected_d2)
#assert fabs(d2dot-expected_d2)<eps, "L2Dot dist seems to be wrong"
#print("OK(part-2)")
```
%% Cell type:markdown id: tags:
## The cost function, $J$
Now, most ML algorithms use norms or metrics internally when doing minimization. Details on this will come later; for now we need to know that an algorithm typically tries to minimize a given performance metric, the loss function, over all the input data, and in doing so implicitly tries to minimize the sum of all norms of the 'distances' between some predicted output, $y\st{pred}$, and the true output, $y\st{true}$, with the distance between these typically given by the $\norm{2}$ norm
$$
\mbox{individual loss:}~~L\powni = d(y\st{pred}\powni,y\st{true}\powni)
$$
with $y\st{pred}\powni$, a scalar value, being the output of the hypothesis function, which maps the input vector $\bx\powni$ to a scalar
$$
y_{pred}\powni = \hat{y}\powni = h(\bx\powni;\btheta)
$$
and the total loss, $J$, will be the (normalized) sum over all $i$'s
$$
\ar{rl}{
J &= \frac{1}{n} \sum_{i=1}^{n} L\powni\\
&= \frac{1}{n} \sum_{i=1}^{n} d( h(\bx\powni) , y\powni\st{true})
}
$$
### Cost function in vector/matrix notation using $\norm{2}$
Remember the data-flow model for supervised learning
<img src="https://blackboard.au.dk/bbcswebdav/courses/BB-Cou-UUVA-94506/Fildeling/L02/Figs/ml_simple_vector.png" alt="WARNING: you need to be logged into Blackboard to view images" style="width:500px">
<img src="https://itundervisning.ase.au.dk/GITMAL/L02/Figs/ml_simple_vector.png" alt="WARNING: you need to be logged into Blackboard to view images" style="width:500px">
Let us now express $J$ in terms of vectors and matrices instead of summing over individual scalars, and let's use $\norm{2}$ as the distance function
$$
\ar{rl}{
J(\bX,\by;\btheta) &= \frac{1}{n} \sum_{i=1}^{n} L\powni\\
&= \frac{1}{n}\sum_{i=1}^{n} (h(\bx\powni) - \by\powni\st{true})^2\\
&= \frac{1}{n} ||h(\bX) - \by\st{true} ||_2^2\\
&= \frac{1}{n} ||\by\st{pred} - \by\st{true} ||_2^2\\
}
$$
with the matrix-vector notation
$$
\by_{pred} = \hat{\by} = h(\bX;\btheta)
$$
#### Loss or Objective Function using the Mean Squared Error
This formulation is equal to the definition of the _mean-squared-error_, MSE (or indirectly also RMSE), here given in the general formulation for some random variable $Z$
$$
\ar{rl}{
\mbox{MSE} &= \frac{1}{n} \sum_{i=1}^{n} (\hat{Z}_i-Z_i)^2 = \frac{1}{n} SS\\
\mbox{RMSE} &= \sqrt{\mbox{MSE}}
}
$$
with the sum-of-squares (SS) given simply by
$$
\mbox{SS} = \sum_{i=1}^{n} (\hat{Z}_i-Z_i)^2\\
$$
So, using the $\norm{2}$ for the distance metric is equal to saying that we want to minimize $J$ with respect to the MSE
$$
\ar{rl}{
J &= \mbox{MSE}(h(\bX), \by\st{true}) \\
&= \mbox{MSE}(\by\st{pred}~, \by\st{true}) \\
&= \mbox{MSE}(\hat{\by}, \by\st{true})
}
$$
Note: when minimizing, one can ignore the constant factor $1/n$, and it really does not matter whether you minimize the MSE or the RMSE. Often $J$ is also multiplied by $1/2$ to ease the notation when differentiating it.
$$
\ar{rl}{
J(\bX,\by\st{true};\btheta) &\propto \half ||\by\st{pred} - \by\st{true} ||_2^2 \\
&\propto \mbox{MSE}
}
$$
### MSE
Now, let us take a look at how you calculate the MSE.
The MSE uses the $\norm{2}$ norm internally (actually $||\cdot||^2_2$, to be precise) and basically just sums and averages the individual (scalar) losses (distances) we just saw before.
And the RMSE is just the MSE with a final square-root call.
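For reference, the MSE/RMSE are one-liners in numpy, and `sklearn.metrics.mean_squared_error` computes the same quantity; a sketch on made-up numbers (your own RMSE in Qc must still call your L2 internally):
```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([1.0, 2.0, 3.0])  # made-up values
y_pred = np.array([1.5, 1.5, 2.5])

mse = np.mean((y_pred - y_true)**2)             # MSE by the formula above
print(mse, mean_squared_error(y_true, y_pred))  # should agree
print(np.sqrt(mse))                             # RMSE
```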
### Qc Construct the Root Mean Square Error (RMSE) function (Equation 2-1 [HOML]).
Call the function RMSE, and evaluate it using the $\bX$ matrix and $\by$ from Qa.
We implement a dummy hypothesis function that just takes the first column of $\bX$ as its 'prediction'
$$
h\st{dummy}(\bX) = \bX(:,0)
$$
Do not re-implement the $\norm{2}$ for the RMSE function, but call the ```L2``` function you just implemented internally in RMSE.
%% Cell type:code id: tags:
``` python
# TODO: solve Qc...implement your RMSE function here
assert False, "TODO: solve Qc, and remove me.."
# Dummy h function:
def h(X):
    if X.ndim!=2:
        raise ValueError(f"expected X to be of ndim=2, got ndim={X.ndim}")
    if X.shape[0]==0 or X.shape[1]==0:
        raise ValueError("X got zero data along the 0/1 axis, cannot continue")
    return X[:,0]
# Calls your RMSE() function:
r=RMSE(h(X),y)
# TEST vector:
eps=1E-9
expected=6.57647321898295
print(f"RMSE={r}, diff={r-expected}")
assert fabs(r-expected)<eps, "your RMSE dist seems to be wrong"
print("OK")
```
%% Cell type:markdown id: tags:
### MAE
#### Qd Similarly, construct the Mean Absolute Error (MAE) function (Equation 2-2 [HOML]) and evaluate it.
The MAE will, algorithm-wise, be similar to the MSE, apart from using the $\norm{1}$ instead of the $\norm{2}$ norm.
Again, re-implementation of the $\norm{1}$ is a no-go; instead, call the ```L1``` function internally in MAE.
%% Cell type:code id: tags:
``` python
# TODO: solve Qd
assert False, "TODO: solve Qd, and remove me.."
# Calls your MAE function:
r=MAE(h(X), y)
# TEST vector:
expected=3.75
print(f"MAE={r}, diff={r-expected}")
assert fabs(r-expected)<eps, "MAE dist seems to be wrong"
print("OK")
```
%% Cell type:markdown id: tags:
## Pythonic Code
### Robustness of Code
Data-validity checking is an essential part of robust code, and in Python the 'fail-fast' method is used extensively: instead of lingering on, trying to get the 'best' out of an erroneous situation, the fail-fast pragma is very loud about any data inconsistencies at the earliest possible moment.
Hence robust code should include a lot of error checking, say as pre- and post-conditions (part of design-by-contract programming) when calling a function: when entering the function you check that all parameters are OK (pre-conditions), and when leaving you check the return parameter (post-conditions).
Normally assert-checking or exception-throwing will do the trick just fine, with the exception method being the more _pythonic_.
For a norm function you could, for instance, test that your input data is 'vector'-like, i.e. something like
```python
assert x.ndim==1 and x.shape[0]>0, "expected a non-empty 1D vector"
if not x.ndim==1:
    raise ValueError("expected a 1D vector")
```
or similar.
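As a sketch of what such fail-fast checking could look like for a norm-like function (the names and messages here are illustrative only):
```python
import numpy as np

def checked_norm1(x):
    # pre-condition: fail fast on anything that is not a non-empty 1D vector
    if not isinstance(x, np.ndarray) or x.ndim != 1 or x.shape[0] == 0:
        raise ValueError("expected a non-empty 1D numpy array")
    r = np.sum(np.abs(x))
    # post-condition: a norm is never negative
    assert r >= 0, "norm must be non-negative"
    return r

print(checked_norm1(np.array([1.0, -2.0, 3.0])))  # prints 6.0
```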
#### Qe Robust Code
Add error-checking code (asserts or exceptions) that checks for the right $\hat\by$-$\by$ sizes in the RMSE and MAE functions.
Also add error checking to all your previously tested L2() and L1() functions, and re-run all your tests.
%% Cell type:code id: tags:
``` python
# TODO: solve Qe...you need to modify your python cells above
assert False, "TODO: solve Qe, and remove me.."
```
%% Cell type:markdown id: tags:
### Qf Conclusion
Now, conclude on all the exercises above.
Write a short textual conclusion (max. 10 to 20 lines) that extracts the _essence_ of the exercises: why do you think it was important to look at these particular ML concepts, and what was your overall learning outcome of the exercises (in broad terms)?
%% Cell type:code id: tags:
``` python
# TODO: Qf concluding remarks in text..
```
%% Cell type:markdown id: tags:
REVISIONS||
---------||