parser.add_argument('--ab', type=int, nargs='+',  # flag name inferred from the --ab description below
                    help='deep learning model training batch size for each image scale')
--num_gpus How many GPUs to use (only tested on a single GPU; can run on multiple GPUs).
--gpu_list List of specific GPUs to use when not running under a Slurm queue, e.g. gpu:0 gpu:1 gpu:2 (does not work for multi-GPU yet).
--num_cpus Integer; how many CPUs to use to preprocess the images obtained from the .mrc files.
--float16 Write True to use half precision; works well on the Volta series and newer and increases training speed by up to 2.5x.
--star List of star files; may contain wildcards.
--ab The batch size to train with on a single GPU.
--o The output directory (defaults to ./results).
--mp The maximum number of particles to use per training epoch.
--epochs The number of epochs, such that the total number of training images is epochs * mp.
--tr Use a pretrained model. This skips steps 1 and 2 and the optimization procedure in step 3, so everything is just predicted; this can predict image data within 10 minutes for a huge dataset.
--log If the star file contains classes, track the training against an actual human classification from Relion/cryoSPARC (to test whether it is worth it).
--num_classes How many parts of the protein to refine when comparing pretraining (step 1) with the number of classes in the star file.
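The core options above can be sketched as an argparse parser; this is a minimal illustration, and the defaults marked as assumptions are not taken from the project itself. It also shows the epochs * mp arithmetic from the --epochs description.

```python
import argparse

# Minimal sketch of the options listed above; defaults marked "assumption"
# are illustrative only.
parser = argparse.ArgumentParser()
parser.add_argument('--star', nargs='+', required=True,
                    help='list of star files; may contain wildcards')
parser.add_argument('--ab', type=int, default=32,        # assumption
                    help='batch size on a single GPU')
parser.add_argument('--o', default='./results',
                    help='output directory')
parser.add_argument('--mp', type=int, default=100000,    # assumption
                    help='max particles per training epoch')
parser.add_argument('--epochs', type=int, default=10,    # assumption
                    help='epochs; total training images = epochs * mp')

args = parser.parse_args(['--star', 'particles.star', '--epochs', '5', '--mp', '2000'])
print(args.epochs * args.mp)  # total training images → 10000
```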
- Finalize multi-GPU support
- Finalize transfer-learning support
Example of a typical run. The star file is required, and the --ab argument is the batch size. If the batch size is too big, the GPU may run out of memory; lower --ab until training starts.
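A typical invocation might look like the command assembled below; the entry-point name train.py is an assumption, as only the flags are documented above.

```python
# Hypothetical entry point name ('train.py' is an assumption); only --star is required.
cmd = ['python', 'train.py',
       '--star', 'particles.star',
       '--ab', '16',            # lower this value if the GPU runs out of memory
       '--o', './results']
print(' '.join(cmd))
```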
parser.add_argument('--float16', dest='float16', action='store_true', default=False,  # flag name inferred from the --float16 description above
                    help='Apply Tensor Core acceleration to training and inference; requires compute capability 7.0 (Volta) or higher.')
parser.add_argument('--save_model', type=int, default=5, help='validation interval at which full-size models are written out.')
parser.add_argument('--lr_g', type=float, default=[1e-5, 0.5e-5, 1e-6, 0.5e-6, 1e-7, 0.5e-7], nargs='+', help='The staircase learning rates of the generator')
parser.add_argument('--lr_d', type=float, default=[1e-4, 0.5e-4, 1e-5, 0.5e-5, 1e-6, 0.5e-6], nargs='+', help='The staircase learning rates of the discriminator')
parser.add_argument('--lr_e', type=float, default=[1e-4, 0.5e-4, 1e-5, 0.5e-5, 1e-6, 0.5e-6], nargs='+', help='The staircase learning rates of the encoder')
parser.add_argument('--ctf', dest='ctf', action='store_true', default=False, help='Use CTF parameters for model.')
parser.add_argument('--noise', dest='noise', action='store_true', default=False, help='Use the noise generator to generate and scale the noise')
parser.add_argument('--steps', type=int, default=[10000, 10000, 10000, 10000, 10000], nargs='+', help='How many epochs (runs through the dataset) each training stage lasts before termination')
parser.add_argument('--l_reg', type=float, default=0.01, help='The lambda regularization weight of the diversity-score loss when the noise generator is active')
parser.add_argument('--frames', type=int, default=4, help='Number of models to generate from each cluster')
parser.add_argument('--umap_p_size', type=int, default=100, help='The number of feature vectors to use for training UMAP')
parser.add_argument('--umap_t_size', type=int, default=100, help='The number of feature vectors to use for intermediate evaluation of clusters in the UMAP algorithm')
parser.add_argument('--neighbours', type=int, default=30, help='Number of neighbours in the graph-creation algorithm')
parser.add_argument('--t_res', type=int, default=None, choices=[32, 64, 128, 256, 512], help='The maximum resolution to train the model on')
parser.add_argument('--minimum_size', type=int, default=500, help='The minimum size before a group is considered an actual cluster; anything smaller is considered noise')
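The staircase learning rates can be read together with --steps. Below is a sketch of how a rate could be looked up for a given global step; the pairing of each rate with each stage, and keeping the final rate afterwards, are assumptions about how the lists are consumed.

```python
# Staircase lookup: stage i lasts steps[i] iterations and uses rates[i];
# past the last boundary the final rate is kept (assumption).
lr_d = [1e-4, 0.5e-4, 1e-5, 0.5e-5, 1e-6, 0.5e-6]   # --lr_d defaults
steps = [10000, 10000, 10000, 10000, 10000]          # --steps defaults

def lr_at(global_step, rates, stage_steps):
    boundary = 0
    for rate, n in zip(rates, stage_steps):
        boundary += n
        if global_step < boundary:
            return rate
    return rates[-1]

print(lr_at(0, lr_d, steps), lr_at(25000, lr_d, steps))  # → 0.0001 1e-05
```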
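The --minimum_size rule can be illustrated with plain Python: clusters with fewer members than the threshold are treated as noise. Relabelling noise as -1 is an assumption borrowed from the common HDBSCAN convention, and minimum_size=3 is chosen here only so the example stays small (the flag defaults to 500).

```python
from collections import Counter

# Sketch: clusters with fewer than minimum_size members are relabelled
# as noise (-1, an assumed convention).
minimum_size = 3  # illustration only; the flag defaults to 500
labels = [0, 0, 0, 0, 1, 1, 2]
counts = Counter(labels)
cleaned = [l if counts[l] >= minimum_size else -1 for l in labels]
print(cleaned)  # → [0, 0, 0, 0, -1, -1, -1]
```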
Example of a typical run on real data. The star file is required; add --ctf and --noise.
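A real-data invocation might look like the command assembled below; as before, the entry-point name train.py is an assumption.

```python
# Hypothetical entry point name ('train.py' is an assumption); --ctf and
# --noise switch on CTF parameters and the noise generator for real data.
cmd = ['python', 'train.py', '--star', 'particles.star', '--ctf', '--noise']
print(' '.join(cmd))  # → python train.py --star particles.star --ctf --noise
```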