WORKSHOP ON AI AND LEARNING


Leading the Next Revolution in AI

Marwadi University has been appointed as the Institutional Collaborator Partner for the nationwide initiative on “AI and Deep Learning, Skilling and Research.” As part of this initiative, Marwadi University will be one of approximately 10 collaborator institutions, alongside 1,000 basic partner institutions, 10,000 teachers and 1,000,000 students.

A Nationwide Initiative To Provide AI Manpower To The World

The Royal Academy of Engineering, UK has approved a nationwide initiative on ‘AI and Deep Learning, Skilling and Research’ under the Newton Bhabha Fund. The initiative aims to change the landscape of AI research in India and has the potential to have a tangible impact on the entire engineering community and the general public. University College London, Brunel University London, and Bennett University, India are collaborators on the project, while NVIDIA, AWS Educate, VideoKen and Edvantics are the industry partners in this initiative.

The 1st Workshop

The first ever workshop on ‘Artificial Intelligence and Deep Learning’ as part of the LeadingIndia.ai initiative was organized by Marwadi University, Rajkot (Gujarat). The workshop was held from 1st to 3rd June 2018.

POST WORKSHOP REPORT

Place of workshop: Marwadi University, Rajkot

Duration: 1st to 3rd June 2018

Report prepared by: Prof. Himanshu Chaturvedi

Day 1 (01/06/18)

Session 1, 11:30–1:30: Machine Learning Basics (Linear Regression, Logistic Regression, Gradient Descent)
Session 2, 2:15–4:00: Designing & Optimizing Neural Network Model (Building Deep Models and Hyperparameter Tuning)
Session 3, 4:30–6:30: Deep Learning Hacks (System/project-level tricks and regularization strategies)

Speaker: DR. DEEPAK GARG

The session began with an introduction to neural networks. An artificial neural network (ANN) is a computational model based on the structure and functions of biological neural networks. Information that flows through the network affects the structure of the ANN, because a neural network changes, or learns, in a sense, based on its inputs and outputs. Deep learning means that a machine learns by itself, while a deep neural network is one with more than one hidden layer.

Here, the artificial neural network was explained through the task of detecting an image among various images. There are three types of learning for a neural network: supervised, unsupervised (for example, to detect the image of a dog in a natural scene, animal and human images are used as data), and reinforcement (like moves in chess, where each move may turn out to be right or wrong).

A key feature of a neural network is to extract features and decide the weights of the network.

For example:

Housing prices depend on the number of rooms, house area, pollution, and distance from facilities.

Weight: a higher weight means more importance, so when selecting a house, each person assigns higher weights to whichever of the above factors matter most to them. The objective of an ANN is to optimize these weights, as in the sketch below.
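As a rough illustration of this weighted-sum idea (a sketch with made-up feature values and weights, not data from the workshop), a single neuron could score a house like this:

    import numpy as np

    # Hypothetical features: rooms, area (sq. m), pollution index, distance from facilities (km)
    x = np.array([3.0, 120.0, 0.4, 2.5])

    # Higher weight = more importance; pollution and distance are given
    # negative weights because they make a house less desirable.
    W = np.array([50.0, 3.0, -40.0, -10.0])
    b = 20.0  # bias

    predicted_price = np.dot(W, x) + b  # weighted sum of features plus bias
    print(predicted_price)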

Fig.1 Steps in Artificial Neural Network

Fig.2 Architecture of ANN

Linear Regression

ŷ = Wx + b

ŷ = predicted output value

W = weight

b = bias

The weight is the slope of the line, while the bias is the intercept (the line's offset along the y axis).

The output from a neuron is generated after applying an activation function.

Error analysis helps in identifying the major sources of error, clarifies our assumptions and perceptions about those sources, and helps in finding the best way to reduce them.

Mean Square Error:

MSE = (1/n) Σᵢ (yᵢ − ŷᵢ)²

where n is the number of examples, yᵢ is the actual output and ŷᵢ is the predicted output.
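A small NumPy sketch of this formula, using made-up actual and predicted values:

    import numpy as np

    y_true = np.array([3.0, 5.0, 7.0])   # actual outputs
    y_pred = np.array([2.5, 5.5, 8.0])   # predicted outputs

    mse = np.mean((y_true - y_pred) ** 2)  # average of squared errors
    print(mse)  # 0.5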

Activation functions

  • Rectified Linear Unit (ReLU): makes the value zero for negative inputs and keeps the value unchanged for positive inputs (it is differentiable everywhere except at zero).
  • Sigmoid: converts the output to a value between 0 and 1.
  • tanh: converts the output to a value between -1 and 1.
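A minimal NumPy sketch of the three activation functions listed above:

    import numpy as np

    def relu(z):
        # zero for negative inputs, unchanged for positive inputs
        return np.maximum(0, z)

    def sigmoid(z):
        # squashes any real value into the range (0, 1)
        return 1 / (1 + np.exp(-z))

    def tanh(z):
        # squashes any real value into the range (-1, 1)
        return np.tanh(z)

    z = np.array([-2.0, 0.0, 2.0])
    print(relu(z), sigmoid(z), tanh(z))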

Some important key terms of ANN

Cost function: the average of the loss across the whole input; it should be minimized.

Gradient descent: the training algorithm for the model. If the slope (gradient) of the cost with respect to a weight is positive, then the weight should be decreased, as in the sketch below.
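A minimal sketch of gradient descent for a single weight of a linear model (the data and learning rate are made up); note how a positive gradient decreases the weight:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([2.0, 4.0, 6.0])   # true relationship: y = 2x

    w = 0.0
    learning_rate = 0.1
    for step in range(50):
        y_pred = w * x
        grad = np.mean(2 * (y_pred - y) * x)  # slope of the MSE cost w.r.t. w
        w -= learning_rate * grad             # move against the slope
    print(w)  # converges towards 2.0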

Nomenclature of brackets: square brackets [l] denote the layer, parentheses (i) the training example, and curly braces {t} the mini-batch number, while a subscript such as 5 denotes the feature index.

Biases are tuned alongside weights by learning algorithms such as gradient descent. The dev set is used to tune the parameters and to make decisions regarding bias/variance issues to optimize the system. Precision is the fraction of all the examples recognized by the system as correct that really are correct.

The algorithm’s error rate on the training set (relative to human-level error) indicates bias. The difference between the error on the dev (or test) set and the error on the training set is called variance.

For example:

(1) Training set error: 19, dev set error: 20 → high bias (both errors are far above human level, but close to each other)

(2) Human error: 7, training set error: 8, dev set error: 11 → high variance (training error is close to human level, but the dev error is noticeably higher)
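A rough sketch of this diagnosis as code (the threshold of 2 is an arbitrary assumption, and example (1) assumes a human-level error of about 0):

    def diagnose(human_err, train_err, dev_err, threshold=2.0):
        bias = train_err - human_err       # gap to human-level error
        variance = dev_err - train_err     # gap between dev and training error
        labels = []
        if bias > threshold:
            labels.append("high bias")
        if variance > threshold:
            labels.append("high variance")
        return labels or ["looks fine"]

    print(diagnose(0, 19, 20))   # ['high bias']
    print(diagnose(7, 8, 11))    # ['high variance']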

The following can affect the change in weights during training: the learning rate, the gradients of the cost/loss function, and L2 regularization.

Softmax is an activation function for generating classification probabilities. The sum of values generated by softmax is 1.
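A small NumPy sketch of softmax; note that the resulting probabilities sum to 1:

    import numpy as np

    def softmax(z):
        e = np.exp(z - np.max(z))  # subtract the max for numerical stability
        return e / e.sum()

    scores = np.array([2.0, 1.0, 0.1])  # raw class scores
    probs = softmax(scores)
    print(probs, probs.sum())           # probabilities summing to 1.0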

Suppose you need to identify apples, bananas, cats, and dogs from an image: that is a classification problem. In the IPL they show projected scores; predicting the projected score, a continuous number, is a regression problem rather than classification.

The objective of training a model is to capture the pattern information in the training set data and to modify the weights so that the predicted output is close to the actual output; training should lead to convergence of the weights.

Back-propagation is a learning technique that adjusts the weights in the neural network by propagating the error gradients backwards through the network. The total number of weights to be learned in a neural network depends on both the number of layers and the number of units in each layer, as in the sketch below.
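As a quick illustration of how the parameter count depends on the layers and units (the architecture below is an arbitrary example):

    # Units per layer: 4 input features, two hidden layers of 8 units, 1 output
    layers = [4, 8, 8, 1]

    total = 0
    for n_in, n_out in zip(layers[:-1], layers[1:]):
        total += n_in * n_out + n_out   # weights plus one bias per unit
    print(total)  # 4*8+8 + 8*8+8 + 8*1+1 = 121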

The following steps can be used to prevent over-fitting in a neural network: data augmentation, L2 regularization, early stopping, and dropout (see the sketch below).
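A minimal Keras sketch combining three of these techniques, L2 regularization, dropout, and early stopping (the input shape, layer sizes, and hyperparameter values are arbitrary assumptions):

    from keras.models import Sequential
    from keras.layers import Dense, Dropout
    from keras.callbacks import EarlyStopping
    from keras.regularizers import l2

    model = Sequential([
        Dense(64, activation='relu', kernel_regularizer=l2(0.01), input_shape=(20,)),
        Dropout(0.5),                    # randomly drop half the units during training
        Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')

    early_stop = EarlyStopping(monitor='val_loss', patience=3)  # stop when dev loss stalls
    # model.fit(X_train, y_train, validation_data=(X_dev, y_dev), callbacks=[early_stop])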

Three libraries are used for training the network: NumPy (a numerical computation library), Matplotlib (a plotting library) and Keras (a deep learning library). The data is distributed into a training set, a dev set and a test set.

Accuracy indicates how well the model is trained, for example, how reliably it can classify the dog image among various images.

Examples of Error analysis:

Train set error    1                12            8                       1
Dev set error      9                13            16                      1.5
Diagnosis          High variance    High bias     High variance & bias    Low variance & bias
Result             Overfit          Underfit      Underfit                Good fit

Exponentially weighted average (moving average): for example, smoothing the daily temperature of Delhi, as in the sketch below.
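A small sketch of the exponentially weighted average on made-up daily temperatures; beta controls how much past history is retained:

    temps = [30, 32, 35, 31, 29, 33, 36]  # hypothetical daily temperatures
    beta = 0.9

    v = 0.0
    smoothed = []
    for t, theta in enumerate(temps, start=1):
        v = beta * v + (1 - beta) * theta        # exponentially weighted average
        smoothed.append(v / (1 - beta ** t))     # bias correction for early steps
    print(smoothed)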

Three techniques for optimizing the algorithm:

a) Momentum  b) RMSProp  c) Adam (Adaptive Moment Estimation)
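A sketch of how the three optimizers can be selected in Keras (the hyperparameter values are common defaults, not from the workshop):

    from keras.optimizers import SGD, RMSprop, Adam

    momentum_opt = SGD(lr=0.01, momentum=0.9)   # a) gradient descent with momentum
    rmsprop_opt = RMSprop(lr=0.001)             # b) RMSProp
    adam_opt = Adam(lr=0.001)                   # c) Adam (adaptive moment estimation)

    # model.compile(optimizer=adam_opt, loss='categorical_crossentropy')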

Fig.3 Training of network

Day 2 (02/06/18)

Session 1, 9:00–11:00: Convolutional Neural Network (Computer Vision with CNNs)
Session 2, 11:30–1:30: Recurrent Neural Network (Sequence Modeling with RNNs, Auto-encoders, Generative Modeling in Deep Learning)
Session 3, 2:15–4:00: Advanced Learning Topics (Hands-On Labs)
Session 4, 4:30–6:30: Assessment Test

 

Speakers: DR. VINIT JAKHETIYA, DR. SRIDHAR SWAMINATHAN

Convolutional Neural Network

A dense, fully connected network is the drawback of a plain NN, so to increase accuracy the number of parameters should be reduced (pooling is used to reduce the parameters and hence the size). A Convolutional Neural Network is preferred for image classification, object detection and face detection. The early convolutional layers are responsible for detecting small features such as edges and corners. Convolution is used to extract features from images (horizontal and vertical edges). Padding is used to preserve the original dimensions of the input image. In an NN, nodes are used, while in a CNN, filters are used. Softmax is the output layer of a CNN. Regularization is used to keep the weight values small, roughly between -1 and 1. A minimal sketch follows.
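A minimal Keras CNN sketch for image classification (the image size and number of classes are assumptions): convolutions extract features, padding preserves dimensions, pooling reduces parameters, and softmax produces the class probabilities.

    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

    model = Sequential([
        Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(64, 64, 3)),
        MaxPooling2D((2, 2)),               # pooling shrinks the feature maps
        Conv2D(64, (3, 3), activation='relu', padding='same'),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(10, activation='softmax'),    # softmax output layer
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    model.summary()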

Recurrent Neural Network

A Recurrent Neural Network is best suited for machine translation because it can be trained in an unsupervised way. It is strictly more powerful than a Convolutional Neural Network. The recurrent architecture has feedback connections. Weight sharing is a concept used by both Convolutional and Recurrent Neural Networks. A Recurrent Neural Network is suitable for solving stock prediction, speech recognition and language translation, as in the sketch below.
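A minimal Keras recurrent-network sketch for a sequence task such as stock prediction (the sequence length and feature count are assumptions); the same weights are shared across all time steps via the feedback connection:

    from keras.models import Sequential
    from keras.layers import SimpleRNN, Dense

    model = Sequential([
        SimpleRNN(32, input_shape=(30, 1)),  # 30 time steps, 1 feature per step
        Dense(1),                            # predict the next value in the sequence
    ])
    model.compile(optimizer='adam', loss='mse')
    model.summary()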

Reference link for performing practical programs of ANN:

https://notebooks.azure.com/LeadingIndia/libraries/LeadingIndiaLabMU

Day 3 (03/06/18)

Speaker : Dr. Satyadhyan Chickerur

Mr. Bharat Kumar, Senior Solutions Architect at NVIDIA, was also present as part of the session.

NVIDIA DEEP LEARNING INSTITUTE HANDS-ON LABS

10:00–12:00: Image Classification with DIGITS (lab)
1:00–3:00: Object Detection with DIGITS (lab)
3:15–5:15: Neural Network Deployment with DIGITS and TensorRT (lab)

The following images show the procedure for image classification and object detection using the DIGITS lab.

Fig.5 Home page of DIGITS

Fig.6(a) Model page of DIGITS

Fig.6(b) Model page of DIGITS

Fig.7 Training the model

Fig.8 Page for launching three labs

Fig.9 Prediction (output) of recognition of the breed of a dog

Fig.10 Output showing moving images using RNN

