- We have considered many points across our last 4 lectures, some covered directly and some indirectly. They are:
- How many layers,
- MaxPooling,
- 1x1 Convolutions,
- 3x3 Convolutions,
- Receptive Field,
- SoftMax,
- Learning Rate,
- Kernels, and how we decide the number of kernels,
- Batch Normalization,
- Image Normalization,
- Position of MaxPooling,
- Concept of Transition Layers,
- Position of Transition Layer,
- DropOut,
- When to introduce DropOut, or how we know we have some overfitting,
- The distance of MaxPooling from Prediction,
- The distance of Batch Normalization from Prediction,
- When to stop convolutions and move to a larger kernel or some other alternative (which we have not yet covered),
- How to tell, comparatively and very early, that our network is not doing well,
- Batch size, and the effects of batch size,
- etc. (you can add more if we missed anything here)
- Refer to this code: COLABLINK
- WRITE IT AGAIN SUCH THAT IT ACHIEVES:
- 99.4% validation accuracy
- Less than 20k parameters
- Less than 20 epochs
- No fully connected layer
- You can use anything from the list above.
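As one possible sketch of what a constraint-satisfying architecture could look like (this is an illustrative example, not the official solution in the linked notebook), a PyTorch model for MNIST that uses 3x3 convolutions, a MaxPool + 1x1 transition block, Batch Normalization, DropOut, and Global Average Pooling in place of a fully connected layer comes in well under the 20k-parameter budget:

```python
# Hypothetical sketch: a small MNIST CNN respecting the assignment
# constraints -- under 20k parameters, no fully connected layer
# (GAP + 1x1 conv instead), with BN, DropOut, and a transition block.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Sequential(            # 28x28 -> 26x26
            nn.Conv2d(1, 8, 3), nn.ReLU(), nn.BatchNorm2d(8),
        )
        self.conv2 = nn.Sequential(            # 26x26 -> 24x24
            nn.Conv2d(8, 16, 3), nn.ReLU(), nn.BatchNorm2d(16),
        )
        self.trans = nn.Sequential(            # transition block
            nn.MaxPool2d(2, 2),                # 24x24 -> 12x12
            nn.Conv2d(16, 8, 1),               # 1x1 squeeze: 16 -> 8 channels
        )
        self.conv3 = nn.Sequential(            # 12x12 -> 10x10
            nn.Conv2d(8, 16, 3), nn.ReLU(), nn.BatchNorm2d(16),
            nn.Dropout(0.1),
        )
        self.conv4 = nn.Sequential(            # 10x10 -> 8x8
            nn.Conv2d(16, 16, 3), nn.ReLU(), nn.BatchNorm2d(16),
            nn.Dropout(0.1),
        )
        self.conv5 = nn.Sequential(            # 8x8 -> 6x6
            nn.Conv2d(16, 16, 3), nn.ReLU(), nn.BatchNorm2d(16),
        )
        self.out = nn.Conv2d(16, 10, 1)        # 1x1 to 10 class channels
        self.gap = nn.AdaptiveAvgPool2d(1)     # GAP replaces the FC layer

    def forward(self, x):
        x = self.conv2(self.conv1(x))
        x = self.trans(x)
        x = self.conv5(self.conv4(self.conv3(x)))
        x = self.gap(self.out(x)).view(-1, 10)
        return F.log_softmax(x, dim=1)

model = Net()
n_params = sum(p.numel() for p in model.parameters())
print(n_params)  # comfortably under the 20k budget
```

Note how the transition block (MaxPool followed by a 1x1 convolution) sits well away from the prediction layer, and BN/DropOut stop before the final 1x1 — several of the checklist points above in one place.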
- Link to GitHub ipynb: https://github.com/satyajitghana/TSAI-DeepVision-EVA4.0/blob/master/04_ArchitectureBasics/ArchitectureBasics.ipynb
- Link to Solution in Colab: https://colab.research.google.com/github/satyajitghana/TSAI-DeepVision-EVA4.0/blob/master/04_ArchitectureBasics/ArchitectureBasics.ipynb