Data Science Immersive

The Data Science Immersive is a 12-week, full-time boot camp program.

  • Starts 19 Aug: 12 weeks, 9:00 am - 5:00 pm, Addison

    Sessions (all held in Addison):

    • Exploratory Data Analysis: Mon 19 Aug 09:00 - Fri 23 Aug 17:00
    • Advanced Data Visualization: Mon 26 Aug 09:00 - Fri 30 Aug 17:00
    • Modeling for Inference: Mon 02 Sep 09:00 - Fri 06 Sep 17:00
    • Supervised Learning: Mon 16 Sep 09:00 - Fri 20 Sep 17:00
    • Ensembles and Natural Language Processing (NLP): Mon 23 Sep 09:00 - Fri 27 Sep 17:00
    • Unsupervised Learning: Mon 30 Sep 09:00 - Fri 01 Nov 17:00
    • Scaling Analytics with Distributed Frameworks I: Mon 14 Oct 09:00 - Fri 18 Oct 17:00
    • Scaling Analytics with Distributed Frameworks II: Mon 21 Oct 09:00 - Fri 25 Oct 17:00
    • Advanced Machine Learning Use Cases: Mon 28 Oct 09:00 - Fri 01 Nov 17:00
    • Project Capstone: Mon 11 Nov 09:00 - Fri 29 Nov 17:00
    • Business Use Cases and Whiteboarding – Employment Skills: Mon 02 Dec 09:00 - Fri 06 Dec 17:00

  • Tuition: $17,000.00 excl. tax

    Description

    Our (approved) Data Science Immersive program is your direct path to a career as a Data Scientist. This 420-hour immersive curriculum includes eight courses, two weeks of capstone project work, interview preparation and whiteboarding skills, extensive hands-on skill building, and guided product training.

    Data Science Immersive students graduate with the sought-after knowledge and tradecraft for immediate employment as Machine Learning Engineers, Business Analysts, Data Analysts, Product Managers, and Consultants.

    Course Plan

    • Data Science Foundations I – SQL Queries: Structured Query Language (SQL) is the fundamental starting point for connecting data across multiple systems, whether the data sits in Hadoop-based Big Data platforms or on a local computer. SQL is a must-have skill for every role, business or technical. (A short SQL sketch appears after this course plan.)

    • Data Science Foundations II – Data Wrangling and Exploratory Data Analysis: Master cleanup of datasets using Python and the pandas library, exploratory data analysis to generate hypotheses and intuition, and communication of results through visualization, stories, and summaries. (A pandas wrangling sketch appears after this course plan.)

    • Modeling for Inference: Develop approaches to performing inference and accepting results; master concepts in causal inference and motivate the need for experiments; apply statistical tools to help plan experiments, including exploratory analysis, power calculations, and simulation; and apply statistical methods to estimate causal quantities of interest and construct appropriate confidence intervals. (A simulation-based power calculation sketch appears after this course plan.)

    • Supervised Learning I – Regression and Classification: Develop a modeling life-cycle, from specification, fit, and accuracy through reliability; apply feature selection methods to find “optimal” model parameters from the data; master linear regression and the bias-variance trade-off, and logistic regression including multi-class modeling (Multinomial, Bernoulli, and Gaussian). (A classification sketch appears after this course plan.)

    • Supervised Learning II – Regularization, Ensembling, and Natural Language Processing: Visualize model performance under various kinds of uncertainty; consider what is desired from data mining results using decision trees, random forests, and ensembles; implement Natural Language Processing (NLP) in projects and software applications; programmatically extract data stored in common formats; critically assess options for cleaning data in different contexts; and store, retrieve, and analyze data using NoSQL databases. (A text-classification sketch appears after this course plan.)

    • Unsupervised Learning: Continue to apply feature selection methods such as filtering and wrapper algorithms; master unsupervised methods in predictive, network, and text analytics; and apply dimension reduction of the predictor space and graph analysis algorithms for clustering (community detection in graph networks). (A clustering sketch appears after this course plan.)

    • Scaling Analytics with Distributed Frameworks I: Use the Hadoop ecosystem for pre-processing, then apply exploratory data analysis and predictive modeling; develop mappers, reducers, and jobs using Hive, Sqoop, and Pig scripting; master Hadoop data workflows and jobs with Python; read and write data to HDFS; and apply the next-generation in-memory framework, Spark, for filtering, aggregating, and searching. (A PySpark aggregation sketch appears after this course plan.)

    • Scaling Analytics with Distributed Frameworks II: Use distributed computing frameworks, from streaming to deep learning, to develop advanced machine learning models. (A streaming sketch appears after this course plan.)

    • Advanced Machine Learning Use Cases: Learn the fundamentals of time series to develop anomaly detection models, of collaborative filtering to implement recommendation systems, and of deep learning for advanced use cases. You will also learn about thresholding to make better business decisions, AutoML for choosing the right classifier, REST APIs for predictive models, deployment of models into production using the Predictive Model Markup Language (PMML), and Platform-as-a-Service offerings for deploying web applications. (An anomaly-detection sketch appears after this course plan.)

    • Capstone Project 1: The student works alone, with support from staff, to tailor the data science process steps and develop a minimum viable data product within two weeks. The student is evaluated on their problem hypothesis, statistical model, insights delivered through use of the model, flexibility of the model (including bias and variance), and communication of the end-to-end approach through an oral presentation.

    • Capstone Project 2: The student again works alone, with support from staff, to tailor the data science process steps and develop a minimum viable data product within two weeks, and is evaluated on the same criteria: problem hypothesis, statistical model, insights delivered through use of the model, flexibility of the model (including bias and variance), and communication of the end-to-end approach through an oral presentation.

    • Business Use Cases and Whiteboarding – Employment Skills: During Employment Skills, students develop effective interviewing skills: whiteboarding, connecting technical options to the business problems being solved, and answering questions succinctly. Students will also build an effective LinkedIn profile, showcase their project portfolio, prepare for interviews by revisiting their toy problems, and share their Capstone project results.
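
    Example Sketches

    The short Python sketches below illustrate the kinds of techniques the modules above cover. They are minimal illustrations, not course materials; the dataset names, file paths, table names, and column names are assumptions made for the examples.

    For the SQL Queries module, a minimal sketch of joining and aggregating data across two tables with SQL, run here through Python's built-in sqlite3 module; the customers/orders tables are hypothetical.

        import sqlite3

        # In-memory database with two small, made-up tables.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
            CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
            INSERT INTO customers VALUES (1, 'Ada', 'Addison'), (2, 'Grace', 'Dallas');
            INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 45.5);
        """)

        # Join the tables and aggregate order totals per customer.
        query = """
            SELECT c.name, c.city, SUM(o.amount) AS total_spent
            FROM customers AS c
            JOIN orders AS o ON o.customer_id = c.id
            GROUP BY c.id
            ORDER BY total_spent DESC;
        """
        for row in conn.execute(query):
            print(row)   # e.g. ('Ada', 'Addison', 200.0)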
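
    For the Data Wrangling and Exploratory Data Analysis module, a minimal pandas sketch: a small, made-up messy table is cleaned and summarized; the column names and values are illustrative assumptions.

        import pandas as pd

        # A small, messy example dataset (values are made up for illustration).
        raw = pd.DataFrame({
            "order_date": ["2019-08-19", "2019-08-20", "not a date", "2019-08-20"],
            "region":     ["North", "South", "South", "South"],
            "amount":     ["120.0", None, "45.5", None],
        })

        # Cleanup: drop duplicate rows, parse dates, coerce amounts to numbers.
        df = raw.drop_duplicates()
        df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
        df["amount"] = pd.to_numeric(df["amount"], errors="coerce")

        # Quick exploratory summaries to generate hypotheses.
        print(df.describe(include="all"))
        print(df.groupby("region")["amount"].agg(["count", "mean", "sum"]))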
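
    For the Modeling for Inference module, a sketch of a simulation-based power calculation, assuming a two-sample experiment analyzed with a t-test (using numpy and scipy).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        def simulated_power(effect=0.5, n=50, alpha=0.05, n_sims=2000):
            # Simulate many experiments and count how often a two-sample
            # t-test rejects the null at the chosen significance level.
            rejections = 0
            for _ in range(n_sims):
                control = rng.normal(0.0, 1.0, n)
                treatment = rng.normal(effect, 1.0, n)
                _, p_value = stats.ttest_ind(treatment, control)
                if p_value < alpha:
                    rejections += 1
            return rejections / n_sims

        # With a standardized effect of 0.5 and 50 observations per arm,
        # the estimated power should land near 0.7.
        print(simulated_power())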
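
    For Supervised Learning I, a minimal classification sketch using scikit-learn's built-in breast cancer dataset as a stand-in: specify and fit a logistic regression model, then check it on held-out data.

        from sklearn.datasets import load_breast_cancer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        # A built-in binary classification dataset stands in for course data.
        X, y = load_breast_cancer(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, random_state=42
        )

        # Specification and fit...
        model = LogisticRegression(max_iter=5000)
        model.fit(X_train, y_train)

        # ...then a reliability check on held-out data.
        print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))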
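
    For Supervised Learning II, a sketch combining NLP features with an ensemble: TF-IDF text features feeding a random forest, trained on a tiny made-up corpus.

        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline

        # A tiny, made-up labelled corpus: 1 = positive review, 0 = negative review.
        texts = [
            "great product, works perfectly",
            "terrible experience, broke after a day",
            "really happy with this purchase",
            "awful quality, do not recommend",
        ]
        labels = [1, 0, 1, 0]

        # TF-IDF turns free text into numeric features; a random forest
        # ensembles many decision trees over those features.
        model = make_pipeline(
            TfidfVectorizer(),
            RandomForestClassifier(n_estimators=100, random_state=0),
        )
        model.fit(texts, labels)

        # Likely predicts 1, since the vocabulary overlaps the positive examples.
        print(model.predict(["works great, very happy with it"]))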
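
    For the Unsupervised Learning module, a sketch of dimension reduction followed by clustering, using scikit-learn's built-in iris measurements as a stand-in dataset.

        from sklearn.cluster import KMeans
        from sklearn.datasets import load_iris
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # Reduce the predictor space, then cluster in the reduced space.
        X, _ = load_iris(return_X_y=True)
        X_scaled = StandardScaler().fit_transform(X)
        X_2d = PCA(n_components=2).fit_transform(X_scaled)

        kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_2d)
        print(kmeans.labels_[:10])        # cluster assignment per observation
        print(kmeans.cluster_centers_)    # centroids in the reduced space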
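
    For Scaling Analytics with Distributed Frameworks I, a PySpark sketch of filtering and aggregating data at scale; the HDFS path and column names are hypothetical.

        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("scaling-analytics-sketch").getOrCreate()

        # Hypothetical orders data already landed in HDFS as Parquet.
        orders = spark.read.parquet("hdfs:///data/orders")

        # Filtering and aggregating run in a distributed fashion across the cluster.
        summary = (
            orders
            .filter(F.col("amount") > 0)
            .groupBy("region")
            .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
            .orderBy(F.desc("revenue"))
        )
        summary.show()
        spark.stop()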
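
    For Scaling Analytics with Distributed Frameworks II, a sketch of one streaming use of a distributed framework: a running word count with Spark Structured Streaming, assuming a text source on a local socket (port 9999 is an arbitrary choice for the example).

        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

        # Read a stream of text lines from a local socket source.
        lines = (
            spark.readStream
            .format("socket")
            .option("host", "localhost")
            .option("port", 9999)
            .load()
        )

        # Split each line into words and maintain a running count per word.
        counts = (
            lines.select(F.explode(F.split(F.col("value"), " ")).alias("word"))
            .groupBy("word")
            .count()
        )

        # Continuously print updated counts to the console.
        query = counts.writeStream.outputMode("complete").format("console").start()
        query.awaitTermination()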
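
    For Advanced Machine Learning Use Cases, a sketch of one simple time-series anomaly detection approach (a rolling z-score with a threshold) on synthetic data; the window size and threshold are arbitrary choices for the example.

        import numpy as np
        import pandas as pd

        # Synthetic metric with one injected spike to detect.
        rng = np.random.default_rng(1)
        values = rng.normal(100.0, 5.0, 500)
        values[300] = 160.0
        series = pd.Series(values)

        # A rolling mean and standard deviation give a local baseline;
        # points far from that baseline (large z-scores) are flagged.
        rolling_mean = series.rolling(window=48).mean()
        rolling_std = series.rolling(window=48).std()
        z_scores = (series - rolling_mean) / rolling_std
        anomalies = series[z_scores.abs() > 4]

        print(anomalies)   # should contain the injected spike at position 300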