%% Cell type:markdown id: tags:
# ITMAL Demo
## Installing Keras
REVISIONS| |
---------|----------|
2018-0325| CEF, initial.
2020-0305| CEF, F20 ITMAL update.
2020-0306| CEF, investigated Anaconda 2019.10 on Windows and updated GPU server notes.
2021-1012| CEF, updated for ITMAL E21.
### Install Keras via Anaconda Prompt
1: Launch the __Anaconda Prompt__ console (CMD) via the Start menu
<img src="https://itundervisning.ase.au.dk/GITMAL/L07/Figs/Screenshot_anaconda_prompt.png" alt="WARNING: you need to be logged into Blackboard to view images" style="width:200px">
2: list installed packages via
```bash
> conda list
```
in the Anaconda console.
<img src="https://itundervisning.ase.au.dk/GITMAL/L07/Figs/Screenshot_anaconda_prompt_install_0.png" alt="WARNING: you need to be logged into Blackboard to view images" style="width:700px">
3: install keras via
```bash
> conda install keras
```
and WAIT for 1 to 30 minutes before the spinning progress bar finishes (a problem that makes `conda` extremely slow in the latest two releases of Anaconda!).
<img src="https://itundervisning.ase.au.dk/GITMAL/L07/Figs/Screenshot_anaconda_prompt_install_1.png" alt="WARNING: you need to be logged into Blackboard to view images" style="width:700px">
After install, you can see the Keras and TensorFlow versions via ```conda list keras``` and ```conda list tensorflow```, but notice that you might also want to install the GPU version of TensorFlow, if your PC has a suitable GPU (CUDA support is needed). Below I did not install the GPU version, as seen by the call ```conda list tensorflow-gpu```
<img src="https://itundervisning.ase.au.dk/GITMAL/L07/Figs/Screenshot_anaconda_prompt_install_2.png" alt="WARNING: you need to be logged into Blackboard to view images" style="width:700px">
4: if it downgrades your Scikit-learn (use the version function in the cell below), then try removing keras and/or tensorflow and reinstalling
```bash
> conda remove keras tensorflow
```
```bash
> conda install keras tensorflow
```
or perhaps try installing from conda-forge
```bash
> conda install -c conda-forge tensorflow keras
```
5: if everything fails: use the ASE GPU cluster or use the Keras bundled with TensorFlow, like
```python
import tensorflow as tf
mnist = tf.keras.datasets.mnist.load_data()
```
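As a quick sanity check (a sketch continuing the call above), `load_data()` returns two `(images, labels)` tuples that can be unpacked and inspected directly:
```python
import tensorflow as tf

# load_data() returns ((x_train, y_train), (x_test, y_test)) as NumPy arrays
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape, y_train.shape)  # expected: (60000, 28, 28) (60000,)
print(x_test.shape,  y_test.shape)   # expected: (10000, 28, 28) (10000,)
```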
My local installation has the following version setup (yours may vary)
Initial:
```python
Python version: 3.8.5.
Scikit-learn version: 0.23.2.
WARN: could not find keras!
WARN: could not find tensorflow!
WARN: could not find tensorflow.keras!
```
and after installing Keras (and hence implicitly TensorFlow) on Windows
```python
Python version: 3.8.5.
Scikit-learn version: 0.23.2.
Keras version: 2.4.3
Tensorflow version: 2.2.0
Tensorflow.keras version: 2.3.0-tf
Opencv2 version: 4.5.1
```
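If you do not have the `libitmal` version helper at hand, a minimal sketch along the same lines could look like this (the WARN wording simply mimics the listing above):
```python
import sys
import sklearn

print(f"Python version: {sys.version.split()[0]}.")
print(f"Scikit-learn version: {sklearn.__version__}.")
try:
    import keras
    print(f"Keras version: {keras.__version__}")
except ImportError:
    print("WARN: could not find keras!")
try:
    import tensorflow as tf
    print(f"Tensorflow version: {tf.__version__}")
    print(f"Tensorflow.keras version: {tf.keras.__version__}")
except ImportError:
    print("WARN: could not find tensorflow!")
```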
### Alternative 1: Installing Keras via TensorFlow
If Keras and TensorFlow (TF) start a battle of versions (Keras wants one TF version, TF another; it happens frequently), you could also just use the Keras already found in TF.
So, yes, there is already a Keras in the TF interface that can be used directly as
```tf.keras.(some modules or functions)```
instead of direct Keras interface calls via
```keras.(some modules or functions)```
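As a small illustration (a sketch, not part of the course setup), the two interfaces are used the same way; here the bundled `tf.keras` builds a tiny model, and the commented-out line shows the equivalent standalone-Keras import:
```python
# from keras.models import Sequential            # standalone Keras (must match the TF version)
from tensorflow.keras.models import Sequential   # Keras bundled with TF: no version battle
from tensorflow.keras.layers import Dense

model = Sequential([Dense(10, activation="softmax", input_shape=(784,))])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```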
### Alternative 2: Install Keras via Anaconda GUI
If you dislike the Anaconda prompt and prefer a GUI, then launch the Anaconda Navigator, go to the Environment tab and enter 'keras' in the search prompt.
<img src="https://itundervisning.ase.au.dk/GITMAL/L07/Figs/Screenshot_anaconda_prompt_install_3.png" alt="WARNING: you need to be logged into Blackboard to view images" style="width:700px">
%% Cell type:code id: tags:
``` python
# DEMO of Versions in libitmal
from libitmal import versions as itmalversions
itmalversions.Versions()
```
%%%% Output: stream
Python version: 3.8.5.
Scikit-learn version: 0.23.2.
Keras version: 2.4.3
Tensorflow version: 2.2.0
Tensorflow.keras version: 2.3.0-tf
Opencv2 version: 4.5.1
%% Cell type:markdown id: tags:
# Using the ASE GPU Cluster
%% Cell type:markdown id: tags:
__NOTE: this section is currently slightly outdated!__
### Client GPU Support
If your own computer has a CUDA-compatible GPU, you might also want to install the GPU build of TensorFlow
```bash
> conda install tensorflow-gpu
```
### Server GPU support
You also have an ITMAL group account on our GPU Cluster server at
* http://gpucluster.st.lab.au.dk/
Find login details etc. in Blackboard ("Kursusinfo | GPU Cluster"):
* https://blackboard.au.dk/webapps/blackboard/content/listContentEditable.jsp?content_id=_2485142_1&course_id=_134254_1#gpucluster
* https://brightspace.au.dk/d2l/le/lessons/27524/topics/296678
Current GPU-Cluster version setup (possibly outdated, cf. the note above) is
```python
Python version: 3.6.8.
Scikit-learn version: 0.20.3.
Keras version: 2.2.4
Tensorflow version: 1.12.0
```
### Issues regarding the Server GPU Memory
For all users, I've added a startup-script that runs when you log into the GPU server. The startup-script is found in
* /home/shared/00_init.py
and, among other things, it adds your home folder to the `PYTHONPATH`.
When running on the GPU server you are automatically assigned 10% of the GPU memory. This is also done via the startup-script, and you are allowed to increase your GPU memory fraction if needed by calling the Enable-GPU function in ```/home/shared/00_init.py``` (or the module ```kernelfuns``` in ```libitmal```) like
```python
StartupSequence_EnableGPU(gpu_mem_fraction=0.3, gpus=1, verbose=1)
StartupSequence_GPU(verbose=True)
```
thereby allocating 30% of the GPU memory.
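For reference, capping the per-process GPU memory fraction in a TensorFlow 1.x/Keras setup (as listed for the GPU cluster above) is typically done as in the sketch below; this is an assumption about what the startup-script does internally, not its actual code:
```python
import tensorflow as tf
import keras.backend as K

# Assumed equivalent of gpu_mem_fraction=0.3: let this process use at most
# 30% of the GPU memory (TensorFlow 1.x API, matching the cluster's TF 1.12).
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3)
K.set_session(tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)))
```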
NOTE 1: processes using more than 50% of the GPU memory will automatically be killed at intervals of about 5 minutes, and Python kernels running for more than about a week will also be terminated automatically.
NOTE 2: most Scikit-learn ML algorithms (if not all) do NOT use the GPU at all. You need to move to TensorFlow/Keras to get true GPU hardware support.
NOTE 3: notebooks will keep running on the server, even if you shut down your web connection to it. Print output will hence be lost, but you can still start long-running model training on the server and come back later to see if it has finished... (on the same node).
NOTE 4: if you need to stop your server, use "Control Panel (upper right) | Stop my server" to shut down all your kernels and release all memory.
%% Cell type:code id: tags:
``` python
# DEMO of setting the GPU memory fraction in libitmal
from libitmal import kernelfuns as itmalkernefuns
itmalkernefuns.EnableGPU()
itmalkernefuns.StartupSequence_GPU(verbose=True)
# See the kernel running; this works only if you have CUDA installed
! nvidia-smi
```
%%%% Output: stream
ERROR: something failed in EnableGPU(), Tensorflow part
/bin/sh: 1: nvidia-smi: not found
%% Cell type:markdown id: tags:
### GPU Server GIT/PYTHONPATH setup
On the GPU server you can clone the git repository from inside the Jupyter notebook via
```bash
! git clone https://cfrigaard@bitbucket.org/cfrigaard/itmal
! git clone https://gitlab.au.dk/au204573/GITMAL.git
! cd GITMAL && git pull
```
The `PYTHONPATH` environment variable should already point to your home folder via the startup-script (described above).
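If imports from your clone still fail, a small sanity check (and manual fallback) from inside a notebook could look like the sketch below; the `~/GITMAL` path is just the assumed clone location from the cell above:
```python
import os
import sys

repo = os.path.expanduser("~/GITMAL")  # assumed clone location, adjust if needed
print("already on sys.path:", repo in sys.path)
if repo not in sys.path:
    sys.path.append(repo)  # manual fallback if the startup-script did not run
```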
%% Cell type:markdown id: tags:
### Keras to_categorical demo
%% Cell type:code id: tags:
``` python
import numpy as np
from keras.utils.np_utils import to_categorical
y = np.array([1, 2, 0, 4, -1])
y_cat = to_categorical(y)
print(y_cat)
# prints
#[[0. 1. 0. 0. 0.] => i=0, class 1
# [0. 0. 1. 0. 0.] => i=1, class 2
# [1. 0. 0. 0. 0.] => i=2, class 0
# [0. 0. 0. 0. 1.] => i=3, class 4
# [0. 0. 0. 0. 1.]] => i=4, also class 4!
# NOTE: no class 3
```
%%%% Output: stream
[[0. 1. 0. 0. 0.]
[0. 0. 1. 0. 0.]
[1. 0. 0. 0. 0.]
[0. 0. 0. 0. 1.]
[0. 0. 0. 0. 1.]]
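To go the other way (an extra note, not part of the original demo), `np.argmax` along the rows recovers the class indices from the one-hot encoding:
```python
import numpy as np

# the one-hot rows printed above
y_cat = np.array([[0., 1., 0., 0., 0.],
                  [0., 0., 1., 0., 0.],
                  [1., 0., 0., 0., 0.],
                  [0., 0., 0., 0., 1.],
                  [0., 0., 0., 0., 1.]])
print(np.argmax(y_cat, axis=1))  # => [1 2 0 4 4], note that the -1 ended up as class 4
```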
%% Cell type:markdown id: tags:
### Softmax demo
%% Cell type:code id: tags:
``` python
import numpy as np

x = np.array([1, 2, -4, 5, 1])
i = np.argmax(x)

print(f"x={x}")
print(f"np.argmax(x) = {np.argmax(x)}")

def softmax(x):
    if x.ndim != 1:
        raise RuntimeError("bad input argument, expected a type with ndim=1")
    if len(x) == 0:
        raise RuntimeError("input argument has length=0")
    z = np.exp(x - np.max(x))  # NOTE: slightly optimized (numerically stable) version
    s = np.sum(z)
    if s == 0:
        raise ArithmeticError("cannot divide by zero")
    sm = z / s
    assert abs(sm.sum() - 1) <= 1E-12
    return sm

s = softmax(x)

print("\nsoftmax(x)=[")
for v in s:
    print(" {:0.3f}".format(v))
print("]")

a = np.argmax(s)
print(f"\nnp.argmax(softmax(x)) = index {a} => {s[a]}")

# TEST vector
x = np.array([1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0])
t = np.array([0.023640543021591385, 0.06426165851049616, 0.17468129859572226, 0.4748329997443803, 0.023640543021591385, 0.06426165851049616, 0.17468129859572226])

s = softmax(x)
assert np.allclose(s, t), "test vector for softmax failed"

print("\nOK")
```
%%%% Output: stream
x=[ 1 2 -4 5 1]
np.argmax(x) = 3
softmax(x)=[
0.017
0.046
0.000
0.920
0.017
]
np.argmax(softmax(x)) = index 3 => 0.9203511917737585
OK
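As a cross-check (a sketch assuming SciPy >= 1.2 is available), `scipy.special.softmax` should agree with the hand-written version above:
```python
import numpy as np
from scipy.special import softmax as scipy_softmax

x = np.array([1, 2, -4, 5, 1])
z = np.exp(x - np.max(x))  # same numerically stable formulation as above
print(np.allclose(scipy_softmax(x), z / z.sum()))  # expected: True
```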