Machine Learning | Nov 2 - Dec 7

Discussion in 'Big Data and Analytics' started by Shohini_1, Nov 3, 2019.

  1. Shohini_1

    Shohini_1 Well-Known Member
    Simplilearn Support

    Sep 24, 2018
    Dedicated community link for this batch
  2. Armando Galeana_1

    Nov 14, 2018
    There are 3 questions that I am asked in every class that are the Achilles' heel of machine learning:
    1. How much data do I need?
    2. What is the best model to use?
    3. What are the best hyperparameters to choose?

    This article intends to answer the first question.
    Hope you enjoy!


    How Much Training Data is Required for Machine Learning?
    by Jason Brownlee on July 24, 2017

    The amount of data you need depends both on the complexity of your problem and on the complexity of your chosen algorithm.
    This is a fact, but it does not help you if you are at the pointy end of a machine learning project.

    A common question I get asked is:

    How much data do I need?

    I cannot answer this question directly for you, or for anyone. But I can give you a handful of ways of thinking about this question.
    In this post, I lay out a suite of methods that you can use to think about how much training data you need to apply machine learning to your problem.

    My hope is that one or more of these methods may help you understand the difficulty of the question and how tightly it is coupled with the heart of the induction problem that you are trying to solve.

    Let’s dive into it.

    Why Are You Asking This Question?
    It is important to know why you are asking about the required size of the training dataset.

    The answer may influence your next step.

    For example:

    • Do you have too much data? Consider developing some learning curves to find out just how big a representative sample is (below). Or, consider using a big data framework in order to use all available data.
    • Do you have too little data? Consider confirming that you indeed have too little data. Consider collecting more data, or using data augmentation methods to artificially increase your sample size.
    • Have you not collected data yet? Consider collecting some data and evaluating whether it is enough. Or, if it is for a study or data collection is expensive, consider talking to a domain expert and a statistician.
    More generally, you may have more pedestrian questions such as:

    • How many records should I export from the database?
    • How many samples are required to achieve a desired level of performance?
    • How large must the training set be to achieve a sufficient estimate of model performance?
    • How much data is required to demonstrate that one model is better than another?
    • Should I use a train/test split or k-fold cross validation?
    It may be these latter questions that the suggestions in this post seek to address.

    In practice, I answer this question myself using learning curves (see below), using resampling methods on small datasets (e.g. k-fold cross validation and the bootstrap), and by adding confidence intervals to final results.
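    As a minimal sketch of that workflow (the dataset and model here are stand-ins, not the author's code):

        # k-fold cross validation on a small dataset, plus a rough 95% confidence
        # interval on the skill estimate via a normal approximation.
        from sklearn.datasets import load_breast_cancer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        X, y = load_breast_cancer(return_X_y=True)  # substitute your own data
        scores = cross_val_score(LogisticRegression(max_iter=5000), X, y, cv=10)
        print(f"accuracy {scores.mean():.3f} +/- {1.96 * scores.std():.3f}")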

    What is your reason for asking about the number of samples required for machine learning?
    Please let me know in the comments.

    So, how much data do you need?

    1. It Depends; No One Can Tell You
    No one can tell you how much data you need for your predictive modeling problem.

    It is unknowable: an intractable problem that you must discover answers to through empirical investigation.

    The amount of data required for machine learning depends on many factors, such as:

    • The complexity of the problem, nominally the unknown underlying function that best relates your input variables to the output variable.
    • The complexity of the learning algorithm, nominally the algorithm used to inductively learn the unknown underlying mapping function from specific examples.
    This is our starting point.

    And “it depends” is the answer that most practitioners will give you the first time you ask.

    2. Reason by Analogy
    A lot of people have worked on a lot of applied machine learning problems before you.

    Some of them have published their results.

    Perhaps you can look at studies on problems similar to yours as an estimate for the amount of data that may be required.

    Similarly, it is common to perform studies on how algorithm performance scales with dataset size. Perhaps such studies can inform you how much data you require to use a specific algorithm.

    Perhaps you can average over multiple studies.

    Search for papers on Google, Google Scholar, and arXiv.

    3. Use Domain Expertise
    You need a sample of data that is representative of the problem you are trying to solve.

    In general, the examples must be independent and identically distributed.

    Remember, in machine learning we are learning a function to map input data to output data. The learned mapping function will only be as good as the data you provide it to learn from.

    This means that there needs to be enough data to reasonably capture the relationships that may exist both between input features and between input features and output features.

    Use your domain knowledge, or find a domain expert and reason about the domain and the scale of data that may be required to reasonably capture the useful complexity in the problem.

    4. Use a Statistical Heuristic
    There are statistical heuristic methods available that allow you to calculate a suitable sample size.

    Most of the heuristics I have seen have been for classification problems as a function of the number of classes, input features or model parameters. Some heuristics seem rigorous, others seem completely ad hoc.

    Here are some examples you may consider:

    • Factor of the number of classes: There must be x independent examples for each class, where x could be tens, hundreds, or thousands (e.g. 5, 50, 500, 5000).
    • Factor of the number of input features: There must be x% more examples than there are input features, where x could be tens (e.g. 10).
    • Factor of the number of model parameters: There must be x independent examples for each parameter in the model, where x could be tens (e.g. 10).
    They all look like ad hoc scaling factors to me.
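    To make the arithmetic concrete, here is a toy sketch of those three heuristics; every number in it is made up:

        # Hypothetical sample-size floors from the three heuristics above.
        n_classes, n_features, n_params = 10, 50, 10_000

        per_class = 500                                # heuristic 1: x examples per class
        print(n_classes * per_class)                   # -> 5000

        pct_more = 10                                  # heuristic 2: x% more examples than features
        print(int(n_features * (1 + pct_more / 100)))  # -> 55

        per_param = 10                                 # heuristic 3: x examples per parameter
        print(n_params * per_param)                    # -> 100000

    Note how far apart the three floors land for the same hypothetical problem, which is rather the point.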

    Have you used any of these heuristics?
    How did it go? Let me know in the comments.

    In theoretical work on this topic (not my area of expertise!), a classifier (e.g. k-nearest neighbors) is often contrasted against the optimal Bayesian decision rule, and the difficulty is characterized in the context of the curse of dimensionality; that is, there is an exponential increase in the difficulty of the problem as the number of input features increases.

    For example:

    Findings suggest avoiding local methods (like k-nearest neighbors) for sparse samples from high dimensional problems (e.g. few samples and many input features).

    For a kinder discussion of this topic, see the statistics literature on local methods in high dimensions.

    5. Nonlinear Algorithms Need More Data
    The more powerful machine learning algorithms are often referred to as nonlinear algorithms.

    By definition, they are able to learn complex nonlinear relationships between input and output features. You may very well be using these types of algorithms or intend to use them.

    These algorithms are often more flexible and even nonparametric (they can figure out how many parameters are required to model your problem in addition to the values of those parameters). They are also high-variance, meaning predictions vary based on the specific data used to train them. This added flexibility and power comes at the cost of requiring more training data, often a lot more data.

    In fact, some nonlinear algorithms like deep learning methods can continue to improve in skill as you give them more data.

    If a linear algorithm achieves good performance with hundreds of examples per class, you may need thousands of examples per class for a nonlinear algorithm, like random forest, or an artificial neural network.

    6. Evaluate Dataset Size vs Model Skill
    It is common when developing a new machine learning algorithm to demonstrate and even explain the performance of the algorithm in response to the amount of data or problem complexity.

    These studies may or may not be performed and published by the author of the algorithm, and may or may not exist for the algorithms or problem types that you are working with.

    I would suggest performing your own study with your available data and a single well-performing algorithm, such as random forest.

    Design a study that evaluates model skill versus the size of the training dataset.

    Plotting the result as a line plot with training dataset size on the x-axis and model skill on the y-axis will give you an idea of how the size of the data affects the skill of the model on your specific problem.

    This graph is called a learning curve.

    From this graph, you may be able to project the amount of data that is required to develop a skillful model, or perhaps how little data you actually need before hitting an inflection point of diminishing returns.

    I highly recommend this approach in general in order to develop robust models in the context of a well-rounded understanding of the problem.
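    A minimal sketch of such a study with scikit-learn's learning_curve, using a synthetic dataset as a stand-in for your own X and y:

        # Evaluate model skill as a function of training dataset size, then plot it.
        import numpy as np
        import matplotlib.pyplot as plt
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import learning_curve

        X, y = make_classification(n_samples=2000, n_features=20, random_state=1)

        sizes, train_scores, val_scores = learning_curve(
            RandomForestClassifier(n_estimators=100, random_state=1),
            X, y,
            train_sizes=np.linspace(0.1, 1.0, 10),  # 10% .. 100% of the training data
            cv=5,                                   # 5-fold cross validation at each size
            scoring="accuracy",
        )

        plt.plot(sizes, val_scores.mean(axis=1), marker="o")
        plt.xlabel("training dataset size")
        plt.ylabel("cross-validated accuracy")
        plt.show()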

    7. Naive Guesstimate
    You need lots of data when applying machine learning algorithms.

    Often, you need more data than you may reasonably require in classical statistics.

    I often answer the question of how much data is required with the flippant response:

    Get and use as much data as you can.

    If pressed with the question, and with zero knowledge of the specifics of your problem, I would say something naive like:

    • You need thousands of examples.
    • No fewer than hundreds.
    • Ideally, tens or hundreds of thousands for “average” modeling problems.
    • Millions or tens-of-millions for “hard” problems like those tackled by deep learning.
    Again, this is just more ad hoc guesstimating, but it’s a starting point if you need it. So get started!

    8. Get More Data (No Matter What!?)
    Big data is often discussed along with machine learning, but you may not require big data to fit your predictive model.

    Some problems require big data, all the data you have. Simple statistical machine translation is a classic example.

    If you are performing traditional predictive modeling, then there will likely be a point of diminishing returns in the training set size, and you should study your problem and your chosen model(s) to see where that point is.

    Keep in mind that machine learning is a process of induction. The model can only capture what it has seen. If your training data does not include edge cases, they will very likely not be supported by the model.

    Don’t Procrastinate; Get Started
    Now, stop getting ready to model your problem, and model it.

    Do not let the problem of the training set size stop you from getting started on your predictive modeling problem.

    In many cases, I see this question as a reason to procrastinate.

    Get all the data you can, use what you have, and see how effective models are on your problem.

    Learn something, then take action to better understand what you have with further analysis, extend the data you have with augmentation, or gather more data from your domain.

    Further Reading
    This section provides more resources on the topic if you are looking to go deeper.

    There is a lot of discussion around this question on Q&A sites like Quora, StackOverflow, and CrossValidated.

    I expect that there are some great statistical studies on this question as well.


  3. Armando Galeana_1

    Nov 14, 2018
    Some very useful links
    Data types in Python

    Sample Size Calculator

    Data growth forecast by Cisco

    Data Sources (a few of them....)
    General ML

    For Deep Learning

    For NLP

    Datasets for Time Series Analysis

    Datasets for Recommender Systems

    Datasets by Industry for Streaming

    Datasets for Web Scraping

    Datasets for Current Events

  4. Armando Galeana_1

    Nov 14, 2018
    Most Used Classification Performance Metrics

    Confusion Matrix

    It's a table used to describe the performance of a classifier on a test set of data for which the true values are known.
    Let's use the table below as an example to describe how a confusion matrix works:

                      Predicted: NO   Predicted: YES   Total
      Actual: NO           TN = 50          FP = 10      60
      Actual: YES           FN = 5         TP = 100     105
      Total                     55              110     165

    • There are two possible predicted classes: "yes" and "no". If we were predicting the presence of a disease, for example, "yes" would mean they have the disease, and "no" would mean they don't have the disease.
    • The classifier made a total of 165 predictions (e.g., 165 patients were being tested for the presence of that disease).
    • Out of those 165 cases, the classifier predicted "yes" 110 times, and "no" 55 times.
    • In reality, 105 patients in the sample have the disease, and 60 patients do not.
    Let's now define the most basic terms, which are whole numbers (not rates):
    • True Positives (TP): These are cases in which we predicted yes (they have the disease), and they do have the disease.
    • True Negatives (TN): We predicted no, and they don't have the disease.
    • False Positives (FP): We predicted yes, but they don't actually have the disease. (Also known as a "Type I error.")
    • False Negatives (FN): We predicted no, but they actually do have the disease. (Also known as a "Type II error.")
    This is a list of rates that are often computed from a confusion matrix for a binary classifier:
    • Accuracy: Overall, how often is the classifier correct?
      • (TP+TN)/total = (100+50)/165 = 0.91
    • Misclassification Rate: Overall, how often is it wrong?
      • (FP+FN)/total = (10+5)/165 = 0.09
      • equivalent to 1 minus Accuracy
      • also known as "Error Rate"
    • True Positive Rate: When it's actually yes, how often does it predict yes?
      • TP/actual yes = 100/105 = 0.95
      • also known as "Sensitivity" or "Recall"
    • False Positive Rate: When it's actually no, how often does it predict yes?
      • FP/actual no = 10/60 = 0.17
    • Specificity: When it's actually no, how often does it predict no?
      • TN/actual no = 50/60 = 0.83
      • equivalent to 1 minus False Positive Rate
    • Precision: When it predicts yes, how often is it correct?
      • TP/predicted yes = 100/110 = 0.91
    • Prevalence: How often does the yes condition actually occur in our sample?
      • actual yes/total = 105/165 = 0.64
    • Positive Predictive Value: This is very similar to precision, except that it takes prevalence into account. In the case where the classes are perfectly balanced (meaning the prevalence is 50%), the positive predictive value (PPV) is equivalent to precision.
    • Null Error Rate: This is how often you would be wrong if you always predicted the majority class. (In our example, the null error rate would be 60/165=0.36 because if you always predicted yes, you would only be wrong for the 60 "no" cases.) This can be a useful baseline metric to compare your classifier against. However, the best classifier for a particular application will sometimes have a higher error rate than the null error rate.
    • Cohen's Kappa: This is essentially a measure of how well the classifier performed as compared to how well it would have performed simply by chance. In other words, a model will have a high Kappa score if there is a big difference between the accuracy and the null error rate.
    • F Score: This is the weighted harmonic mean of precision and the true positive rate (recall); the common F1 score weights them equally.
    • ROC Curve: This is a commonly used graph that summarizes the performance of a classifier over all possible thresholds. It is generated by plotting the True Positive Rate (y-axis) against the False Positive Rate (x-axis) as you vary the threshold for assigning observations to a given class.
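    These numbers are easy to reproduce in code. A sketch with scikit-learn, rebuilding the example's 165 predictions from the counts above:

        # Reconstruct the example: TN=50, FP=10, FN=5, TP=100 (1 = "yes", 0 = "no").
        from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score

        y_true = [0] * 50 + [0] * 10 + [1] * 5 + [1] * 100
        y_pred = [0] * 50 + [1] * 10 + [0] * 5 + [1] * 100

        print(confusion_matrix(y_true, y_pred))  # [[50 10], [5 100]]
        print(accuracy_score(y_true, y_pred))    # 0.909...
        print(precision_score(y_true, y_pred))   # 0.909...
        print(recall_score(y_true, y_pred))      # 0.952... (sensitivity/recall)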
  5. Armando Galeana_1

    Nov 14, 2018
    End-to-End Machine Learning
    There are a few ways to deploy your machine learning models. I would classify them as end-to-end ML in the cloud, end-to-end local ML, and hybrid models.

    You will find multiple variations depending on the language being used (Python, Java, etc.), the preferred framework (Django, Flask, etc.), the distribution, and so on.

    Here are some references for you to get started with end-to-end machine learning:

    End-to-end Machine Learning with Tensorflow on GCP

    Learning Path: Building Machine Learning Pipelines Using Spark, Docker, and AWS

    End-to-end Machine Learning with Azure

    Deploy Machine Learning using Flask
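    To give a flavor of the Flask route, here is a minimal serving sketch; the model file, endpoint, and payload shape are all hypothetical:

        # Serve a pickled scikit-learn model behind a tiny JSON API (illustrative only).
        import joblib
        from flask import Flask, jsonify, request

        app = Flask(__name__)
        model = joblib.load("model.pkl")  # hypothetical: a model saved with joblib.dump

        @app.route("/predict", methods=["POST"])
        def predict():
            features = request.get_json()["features"]  # e.g. {"features": [[5.1, 3.5, 1.4, 0.2]]}
            return jsonify({"prediction": model.predict(features).tolist()})

        if __name__ == "__main__":
            app.run(port=5000)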

    Hope this helps
  6. Shuvankar Goutam

    May 6, 2019
    ['model', 'mpg', 'cyl', 'disp', 'hp', 'drat', 'wt', 'qsec', 'vs', 'am', 'gear', 'carb'] are the columns in my data set. How will I be able to find the correlation between hp and mpg, or between any 2 columns?
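    A minimal pandas sketch, assuming those columns live in a DataFrame named df (the file name below is hypothetical):

        import pandas as pd

        df = pd.read_csv("mtcars.csv")   # hypothetical file with the columns listed above

        print(df["hp"].corr(df["mpg"]))  # Pearson correlation between two columns
        print(df[["hp", "mpg"]].corr())  # the same, as a 2x2 correlation matrix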
  7. Naveen Kumar_45

    Jul 6, 2019
    Hi Armando,

    What needs to be done if one variable has 60-70% of its data as null (for both categorical and continuous features)?
    --> If we remove it, we lose data (we lose one feature)
    --> We cannot use mean/median/mode imputation, as the remaining 30% of the data won't be sufficient
    --> Do we need to predict the missing values first?

    Please advise!!
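    One sketch of the third option, predicting the missing values with scikit-learn's model-based IterativeImputer; the columns below are hypothetical, and this handles numeric features only:

        import pandas as pd
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables the next import)
        from sklearn.impute import IterativeImputer

        df = pd.DataFrame({"age": [25, None, 40, 35], "income": [30_000, 45_000, None, 52_000]})
        imputed = IterativeImputer(random_state=0).fit_transform(df)
        print(pd.DataFrame(imputed, columns=df.columns))

    For a categorical feature with that much missingness, treating "missing" as its own category is a common alternative.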

  8. Naveen Kumar_45

    Jul 6, 2019
    Hi Armando,

    How does PCA handle multi-collinearity?
    1. If we have multi-collinearity in our dataset, our PC1 will be biased and will tend to lean towards the highly correlated variables.

    How should we approach this?
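    A small sketch of the concern, on synthetic data with two highly correlated features (all numbers made up):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        x1 = rng.normal(size=500)
        # Columns 0 and 1 are almost identical; column 2 is independent noise.
        X = np.column_stack([x1, x1 + 0.1 * rng.normal(size=500), rng.normal(size=500)])

        pca = PCA().fit(StandardScaler().fit_transform(X))
        print(pca.explained_variance_ratio_)  # PC1 is dominated by the correlated pair
        print(pca.components_[0])             # loadings lean on columns 0 and 1

    Standardizing first keeps scale from compounding the effect, and the resulting components are themselves uncorrelated, which is one reason PCA is used to address multicollinearity in the first place.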

  9. Pinak Ganguly

    Pinak Ganguly Member

    Dec 22, 2019
    Sir, VarianceSelector is not working when there is categorical data in the dataframe. So, could you please help?
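    If "VarianceSelector" refers to scikit-learn's VarianceThreshold (an assumption), the usual cause is that it only accepts numeric input, so categorical columns must be encoded first. A sketch with hypothetical columns:

        import pandas as pd
        from sklearn.feature_selection import VarianceThreshold

        df = pd.DataFrame({"size": ["S", "M", "L", "S"], "price": [1.0, 1.0, 1.0, 1.0]})
        numeric = pd.get_dummies(df)                    # one-hot encode the categorical columns
        selector = VarianceThreshold()                  # default: drop zero-variance features
        selector.fit(numeric)
        print(numeric.columns[selector.get_support()])  # "price" is constant here, so it is dropped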
