Welcome to the Simplilearn Community


Deep Learning Course with TensorFlow Training | Apr 17 | Nihal

Suman Basu

Active Member
Alumni
Customer
Hi Team,

Please use this community forum for your Deep Learning Course discussion.

Regards,
Simplilearn
 

Roopesh Marar

New Member
Hi Simplilearn,

I attended the whole class yesterday, but my attendance is not showing in my Liveclass dashboard. Can you please update it? It is urgent, as attendance is one of the requirements for unlocking the certificate.

Roopesh,
Singapore
 

_89624

Active Member
Hello everyone,

Let's create a WhatsApp group so we are all connected and can share best practices and help each other.
Just ping me on my mobile number and I shall send you the link.

Warm Regards
Aiysha
9880129078
roxy9404@gmail.com
Bangalore
 

Partha Chaudhuri

Active Member
Hi Nihal,

In the first assessment project, Q3 asks us to print the percentage of defaulters vs. payers; the TARGET column has unique values 1 and 0.

How do we know whether 1 means default or 0 means default? Any thoughts?
Thanks

Partha
 

nihaltahariya95

Active Member
Hi Nihal,

In the first assessment project, Q3 asks us to print the percentage of defaulters vs. payers; the TARGET column has unique values 1 and 0.

How do we know whether 1 means default or 0 means default? Any thoughts?
Thanks

Partha
Actually, the hint is in the problem statement, where they said the dataset is highly imbalanced.
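As an illustration of how that percentage can be computed (with made-up numbers, not the actual dataset), the minority class in a highly imbalanced credit dataset is the defaulters:

```python
import numpy as np

# Hypothetical stand-in for the TARGET column; per the imbalance hint,
# the rare class (1) would be the defaulters.
target = np.array([0] * 92 + [1] * 8)

pct_default = 100 * np.mean(target == 1)
pct_payer = 100 * np.mean(target == 0)
print(f"default: {pct_default:.1f}%, payer: {pct_payer:.1f}%")
# -> default: 8.0%, payer: 92.0%
```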
 

nihaltahariya95

Active Member
Hello Everyone,

Please try to watch these two videos before you attend tomorrow's session. Like this post so that I know how many of you will watch them.
 

Partha Chaudhuri

Active Member
Hi Nihal,

Need help to understand


# Convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
# print(y_train)
print("first val", y_train[0])
print("second val", y_train[1])
y_test = keras.utils.to_categorical(y_test, num_classes)


What are we trying to do here?

Need help to understand num_classes for Diabetes problem.
 

nihaltahariya95

Active Member
Hello Partha,

You don't need to use this for a binary classification problem, since utils.to_categorical acts like one-hot encoding for multiple classes.

As the target variable (Outcome) suggests, there are just two classes (0 and 1): either a person has diabetes or doesn't.

I hope this is clear.
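For reference, here is a minimal NumPy sketch of what keras.utils.to_categorical does (the function name one_hot is mine, not from Keras):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Mimics keras.utils.to_categorical: each integer label becomes
    a row with a single 1 at the label's index."""
    labels = np.asarray(labels)
    out = np.zeros((labels.size, num_classes))
    out[np.arange(labels.size), labels] = 1.0
    return out

print(one_hot([0, 1, 1, 0], 2))
# 0 -> [1., 0.] and 1 -> [0., 1.], which is why this step is redundant
# for a single-output binary classifier
```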
 

Partha Chaudhuri

Active Member
Hello Friends,
Can you please help me get better accuracy? It is just 61%. Any strategies?

regards

Partha
 

Attachments

  • Diabetes.zip
    3.2 KB

CHAN Siu Chung

Active Member
Hi Nihal,

Regarding Keras model on Diabetes dataset,
1. When compiling the model using metrics = "binary_accuracy" (rather than "accuracy"), the model accuracy figures are similar. If keeping one-hot encoding with two output nodes in this model (as you suggested), I think I should keep metrics = "accuracy", am I correct?

2. I tried applying Batch Normalization in two different layer positions:
a. after the first ReLU activation but before the second one [this gives a higher accuracy, rising from 0.7403 to 0.7532 after 20 epochs]
b. after the second ReLU activation and before the final Softmax activation [this gives 0.72 in the 2nd epoch but drops to 0.61 and stays there]
What are the differences and the reasoning behind them?

Thank you.
 

CHAN Siu Chung

Active Member
Hello Friends,
Can you please help me get better accuracy? It is just 61%. Any strategies?

regards

Partha
Hello Partha,

I think you only followed the code in the PDF file and did not apply the model example from Nihal.
You also used a different optimizer.

Your version:

model = Sequential()
model.add(Dense(12, input_dim=8, activation='sigmoid'))
model.add(Dropout(0.2))
model.add(Dense(12, activation='sigmoid'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))

Class example version by Nihal:

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(8,)))
model.add(Dropout(0.2))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(2, activation='softmax'))

With the RMSprop() optimizer, the accuracy can go up to 0.74 in my practice.
 

nihaltahariya95

Active Member
Hi Nihal,

Regarding Keras model on Diabetes dataset,
1. When compiling the model using metrics = "binary_accuracy" (rather than "accuracy"), the model accuracy figures are similar. If keeping one-hot encoding with two output nodes in this model (as you suggested), I think I should keep metrics = "accuracy", am I correct?

2. I tried applying Batch Normalization in two different layer positions:
a. after the first ReLU activation but before the second one [this gives a higher accuracy, rising from 0.7403 to 0.7532 after 20 epochs]
b. after the second ReLU activation and before the final Softmax activation [this gives 0.72 in the 2nd epoch but drops to 0.61 and stays there]
What are the differences and the reasoning behind them?

Thank you.
Hello Siu Chung,

1) It's better to stick with metrics = "accuracy", since assigning one-hot encoding is an overhead, and it's advisable to stick to simple things until there is a requirement to check something else.

2) Very interesting results:
a) The results improved to 0.753 after 20 epochs, this seems fair.
b) Could you share your architecture summary? That will help me understand your situation better.

Thanks
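A small NumPy sketch of why the two metric choices report similar figures for this problem (toy numbers, not the Diabetes data): taking the argmax over a 2-node softmax output and thresholding a single sigmoid output pick the same class when the underlying probabilities agree:

```python
import numpy as np

y_true = np.array([0, 1, 0, 1])

# Two-node softmax output (rows sum to 1) vs. a single sigmoid P(class 1)
softmax_out = np.array([[0.9, 0.1], [0.3, 0.7], [0.6, 0.4], [0.2, 0.8]])
sigmoid_out = softmax_out[:, 1]  # same underlying probabilities

# What "accuracy" computes for the one-hot setup:
acc = np.mean(np.argmax(softmax_out, axis=1) == y_true)
# What "binary_accuracy" computes for the single-output setup:
bin_acc = np.mean((sigmoid_out > 0.5).astype(int) == y_true)
print(acc, bin_acc)  # identical for a binary problem
```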
 

CHAN Siu Chung

Active Member
Hello Siu Chung,

1) It's better to stick with metrics = "accuracy", since assigning one-hot encoding is an overhead, and it's advisable to stick to simple things until there is a requirement to check something else.

2) Very interesting results:
a) The results improved to 0.753 after 20 epochs, this seems fair.
b) Could you share your architecture summary? That will help me understand your situation better.

Thanks
Hi Nihal,

My update today (sticking to "accuracy" as the metric and RMSprop as the optimizer):

1. Using RMSprop without Batch Normalization: in the first 20 epochs, test accuracy can go up to 0.725.
[screenshot: accuracy and loss vs. epoch]
2. Using RMSprop with Batch Normalization: in the first 20 epochs, test accuracy can reach over 0.77.
[screenshot: accuracy and loss vs. epoch]

3. Using RMSprop with Batch Normalization in a different layer: in the first 20 epochs, test accuracy is the worst.
[screenshot: accuracy and loss vs. epoch]

Here are the model summaries respectively; please comment if anything is wrong:

1. No Batch Normalization
model.add(Dense(512, activation='relu', input_shape=(8,)))
model.add(Dropout(0.2))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax')) # num_classes

2. Batch Normalization after first ReLU
model.add(Dense(512, activation='relu', input_shape=(8,)))
model.add(Dropout(0.2))
model.add(BatchNormalization(axis=1)) # added here
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax')) # num_classes


3. Batch Normalization after second ReLU
model.add(Dense(512, activation='relu', input_shape=(8,)))
model.add(Dropout(0.2))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))
model.add(BatchNormalization(axis=1)) # moved here
model.add(Dense(num_classes, activation='softmax')) # num_classes
 

CHAN Siu Chung

Active Member
Hi Nihal,

Can you help check what the problem is with line 6?
My goal is to put the batch-normalization layer before the first ReLU activation function and then check the impact on accuracy and loss.
[screenshot: code and error message]
Thank you.
 

_85855

New Member
Hi Nihal,

Can you help check what the problem is with line 6?
My goal is to put the batch-normalization layer before the first ReLU activation function and then check the impact on accuracy and loss.
View attachment 15453
Thank you.
Hi Chan,

Did you import the Activation module from the Keras library before defining and running your model layers? If not, please add:
from keras.layers import Activation
 

nihaltahariya95

Active Member
Hi Nihal,

Can you help check what the problem is with line 6?
My goal is to put the batch-normalization layer before the first ReLU activation function and then check the impact on accuracy and loss.
View attachment 15453
Thank you.
Hi Siu Chung,

You are trying to use the "Activation" layer, which has not been imported from Keras. Alternatively, you can apply "relu" as you did on line 8 by modifying line 4:
model.add(Dense(512, activation='relu', input_shape=(8,)))

Regards
Nihal
 

nihaltahariya95

Active Member
Hi Nihal,

My update today (sticking to "accuracy" as the metric and RMSprop as the optimizer):

1. Using RMSprop without Batch Normalization: in the first 20 epochs, test accuracy can go up to 0.725.
View attachment 15447
2. Using RMSprop with Batch Normalization: in the first 20 epochs, test accuracy can reach over 0.77.
View attachment 15448

3. Using RMSprop with Batch Normalization in a different layer: in the first 20 epochs, test accuracy is the worst.
View attachment 15449

Here are the model summaries respectively; please comment if anything is wrong:

1. No Batch Normalization
model.add(Dense(512, activation='relu', input_shape=(8,)))
model.add(Dropout(0.2))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax')) # num_classes

2. Batch Normalization after first ReLU
model.add(Dense(512, activation='relu', input_shape=(8,)))
model.add(Dropout(0.2))
model.add(BatchNormalization(axis=1)) # added here
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax')) # num_classes


3. Batch Normalization after second ReLU
model.add(Dense(512, activation='relu', input_shape=(8,)))
model.add(Dropout(0.2))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))
model.add(BatchNormalization(axis=1)) # moved here
model.add(Dense(num_classes, activation='softmax')) # num_classes
Hello Siu Chung,

When we train deep neural networks, multiple hyperparameters can impact how the network learns. In your third graph both the accuracy and the loss have saturated, which suggests the gradients are stuck in a local minimum and can't find a path out of it. It could be because of a low learning rate; one suggestion would be to try increasing the learning rate with the third architecture. But there could be other reasons as well, so it's advisable to try different combinations of hyperparameters and check which gives the best validation accuracy.

Regards
Nihal
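A toy gradient-descent illustration of the learning-rate point (f(x) = x² stands in for the loss; the numbers are made up): with a very small learning rate the iterate barely moves and the curve looks saturated, while a larger rate makes real progress:

```python
def gradient_descent(lr, steps=50, x0=5.0):
    """Minimize f(x) = x**2 with plain gradient descent; f'(x) = 2x."""
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x  # step opposite the gradient
    return x

print(abs(gradient_descent(lr=0.001)))  # barely moved: looks "stuck"
print(abs(gradient_descent(lr=0.1)))    # close to the minimum at 0
```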
 

CHAN Siu Chung

Active Member
Hello Siu Chung,

When we train deep neural networks, multiple hyperparameters can impact how the network learns. In your third graph both the accuracy and the loss have saturated, which suggests the gradients are stuck in a local minimum and can't find a path out of it. It could be because of a low learning rate; one suggestion would be to try increasing the learning rate with the third architecture. But there could be other reasons as well, so it's advisable to try different combinations of hyperparameters and check which gives the best validation accuracy.

Regards
Nihal
Thanks Nihal,

But the yellow curve is for the test data.
I can't figure out why the training data shows improving accuracy as more epochs run (which should mean no local minimum trapped the gradient-descent search).
I am wondering if it is a coding problem instead...
 

nihaltahariya95

Active Member
Thanks Nihal,

But the yellow curve is for the test data.
I can't figure out why the training data shows improving accuracy as more epochs run (which should mean no local minimum trapped the gradient-descent search).
I am wondering if it is a coding problem instead...
Hello Siu Chung,

The graph above is a classic case of overfitting, where your training accuracy keeps increasing while your test accuracy has saturated. The network has memorized the training data so well that it keeps learning patterns from the training data as the number of epochs increases; because of this, it won't perform well on the test data, and you see no improvement in test accuracy.
There is no problem with respect to the coding.

Hope this helps.
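A quick NumPy illustration of the same effect (synthetic data, not the Diabetes set): a high-capacity polynomial drives training error toward zero while test error stays high, mirroring the diverging train/test curves in the plot:

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 10)  # noisy samples
x_test = np.linspace(0.05, 0.95, 10)
y_test = np.sin(2 * np.pi * x_test)  # clean held-out points

def errors(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coefs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    return train_mse, test_mse

print(errors(3))  # modest capacity: both errors comparable
print(errors(9))  # near-interpolates all 10 training points: tiny train MSE
```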
 

Partha Chaudhuri

Active Member
Hi Nihal,

I have a request.
1. Can we go over all the project problems and explore what we can do with what we have learned so far?
2. Which questions can we solve only after some more concepts are taught?

Regards

Partha
 

Partha Chaudhuri

Active Member
Hi Nihal,

My update today (sticking to "accuracy" as the metric and RMSprop as the optimizer):

1. Using RMSprop without Batch Normalization: in the first 20 epochs, test accuracy can go up to 0.725.
View attachment 15447
2. Using RMSprop with Batch Normalization: in the first 20 epochs, test accuracy can reach over 0.77.
View attachment 15448

3. Using RMSprop with Batch Normalization in a different layer: in the first 20 epochs, test accuracy is the worst.
View attachment 15449

Here are the model summaries respectively; please comment if anything is wrong:

1. No Batch Normalization
model.add(Dense(512, activation='relu', input_shape=(8,)))
model.add(Dropout(0.2))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax')) # num_classes

2. Batch Normalization after first ReLU
model.add(Dense(512, activation='relu', input_shape=(8,)))
model.add(Dropout(0.2))
model.add(BatchNormalization(axis=1)) # added here
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax')) # num_classes


3. Batch Normalization after second ReLU
model.add(Dense(512, activation='relu', input_shape=(8,)))
model.add(Dropout(0.2))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))
model.add(BatchNormalization(axis=1)) # moved here
model.add(Dense(num_classes, activation='softmax')) # num_classes
How are you creating the accuracy vs. epoch plots?

I need coding help.

Thanks
 

CHAN Siu Chung

Active Member
How are you creating the accuracy vs. epoch plots?

I need coding help.

Thanks
Hello Partha,

I am following this example and extending its code:
https://machinelearningmastery.com/custom-metrics-deep-learning-keras-python/

Here are a few lines to share:

import matplotlib.pyplot as plt

plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='Train')
plt.plot(history.history['val_accuracy'], label='Test')
plt.legend()
plt.ylabel('Accuracy')
plt.xlabel('Epoch')

plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='Train')
plt.plot(history.history['val_loss'], label='Test')
plt.legend()
plt.ylabel('Loss')
plt.xlabel('Epoch')

plt.show()

Happy learning together!
 

Partha Chaudhuri

Active Member
Hello Partha,

I am following this example and extending its code:
https://machinelearningmastery.com/custom-metrics-deep-learning-keras-python/

Here are a few lines to share:

import matplotlib.pyplot as plt

plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='Train')
plt.plot(history.history['val_accuracy'], label='Test')
plt.legend()
plt.ylabel('Accuracy')
plt.xlabel('Epoch')

plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='Train')
plt.plot(history.history['val_loss'], label='Test')
plt.legend()
plt.ylabel('Loss')
plt.xlabel('Epoch')

plt.show()

Happy learning together!
Thanks Siu Chung
 

Sachin Govind

Customer
Hi everyone,

I have been assigned a deep-learning project at my company, as part of which I have been given a set of documents from which I have to extract data. First I need to check the dataset and remove the blurred samples. For this I am trying to measure the blur in the images using the Laplacian operator in OpenCV: we test all images, obtain a value for each image, and set a threshold based on those values. From my understanding, we ideally expect a lower value from the Laplacian operator for higher-quality images. But I am getting a lower value for some blurred images, which is not expected. It is after this blur check that we pass the image in for text extraction. The Laplacian operator almost fails here. Can somebody suggest a better alternative, if possible?


Thanks and Regards

Sachin Govind
 

nihaltahariya95

Active Member
Hi everyone,

I have been assigned a deep-learning project at my company, as part of which I have been given a set of documents from which I have to extract data. First I need to check the dataset and remove the blurred samples. For this I am trying to measure the blur in the images using the Laplacian operator in OpenCV: we test all images, obtain a value for each image, and set a threshold based on those values. From my understanding, we ideally expect a lower value from the Laplacian operator for higher-quality images. But I am getting a lower value for some blurred images, which is not expected. It is after this blur check that we pass the image in for text extraction. The Laplacian operator almost fails here. Can somebody suggest a better alternative, if possible?


Thanks and Regards

Sachin Govind
Hello Sachin,

Check this:
Hope this helps.

Regards
Nihal
 

CHAN Siu Chung

Active Member
Hi Nihal,

About Project 3,
After downloading the dataset and setting up the datagen, here are the run-time messages:

Found 40 images belonging to 2 classes --> 20 dogs and 20 cats
Found 20 images belonging to 2 classes --> 10 dogs and 10 cats

The project asks us to run 100, 200, and 300 epochs; with such a small number of available training and testing images, are we going to achieve any meaningful CNN models?

With this question in mind, I tried different batch sizes:
batch size = 1
batch size = 4
batch size = 40

They all give very similar results, typically as below.
[screenshot: accuracy and loss vs. epoch]

Could you give me some insight? Is my model wrong?
 

nihaltahariya95

Active Member
Hi Nihal,

About Project 3,
After downloading the dataset and setting up the datagen, here are the run-time messages:

Found 40 images belonging to 2 classes --> 20 dogs and 20 cats
Found 20 images belonging to 2 classes --> 10 dogs and 10 cats

The project asks us to run 100, 200, and 300 epochs; with such a small number of available training and testing images, are we going to achieve any meaningful CNN models?

With this question in mind, I tried different batch sizes:
batch size = 1
batch size = 4
batch size = 40

They all give very similar results, typically as below.
View attachment 15697

Could you give me some insight? Is my model wrong?
Hello Siu Chung,

Try using ImageDataGenerator for data augmentation and train with more data; refer to this:

Regards
Nihal
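As a rough sketch of what augmentation buys you (a toy NumPy version, not the actual Keras ImageDataGenerator API): each original image is expanded into several transformed variants, so the network effectively sees more training data:

```python
import numpy as np

def augment(img):
    """Toy augmentation: horizontal flip plus three 90-degree rotations.
    Keras's ImageDataGenerator applies similar transforms on the fly."""
    variants = [img, np.fliplr(img)]
    variants += [np.rot90(img, k) for k in (1, 2, 3)]
    return variants

img = np.arange(16).reshape(4, 4)  # stand-in for a tiny grayscale image
print(len(augment(img)))  # 5 variants from a single source image
```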
 

nihaltahariya95

Active Member
Hello Folks,

Please refer to this video for a mathematical understanding of backpropagation.
 

Sachin Govind

Customer
Hello Sachin,

Check this:
Hope this helps.

Regards
Nihal
Hi Nihal,

Thanks for the reply. We tried the fast Fourier transform approach after the Laplacian. It works for most images; for some images there is still an issue similar to the Laplacian's, but comparatively it works much better. I am trying to sort it out for the rest.

Regards

Sachin
 

Partha Chaudhuri

Active Member
Hi Nihal

This is from Assessment Problem 3.

I want to make sure I am thinking right.

● For the training step, define the loss function and minimize it

Ans: for the above point, we handle this at model compile time.

For example:

model.compile(loss='categorical_crossentropy', optimizer=RMSprop(), metrics=['accuracy'])

We specify the loss function, the optimizer, and the metrics. Right?

Please comment.


● For the evaluation step, calculate the accuracy
Run the program for 100, 200, and 300 iterations, respectively. Follow this by a report on the final accuracy and loss on the evaluation data.


My approach-

1. Fit the model for 100, 200, and 300 epochs.
2. For each epoch value (100, 200, 300), evaluate the model and report the accuracy.

What is meant by "report on the final accuracy and loss on the evaluation data"?

thanks

Partha
 