
DS 501 Introduction to Data Science

Antalya University

Course Name: Introduction to Data Science

Course Code: DS 501

Language of Course: English

Credit: 3

Course Coordinator / Instructor: Şadi Evren ŞEKER


Schedule: Tue 13.00 – 16.00

Location: The course will be held online via Discord (for the server link, please contact Meltem Koc <> )

Lectures will be available on the YouTube channel (after a delay):


Course Description: This course is an introduction-level course to data science, focusing on machine learning, artificial intelligence, and big data.

  • The course takes a top-down approach to data science projects. The first step covers data science project management techniques; we follow the CRISP-DM methodology through the steps below:
  • Business Understanding: We cover the types of problems and business processes in real life.
  • Data Understanding: We cover data types and data problems. We also visualize the data for exploratory discovery.
  • Data Preprocessing: We cover common data-quality problems and how to handle issues such as noisy or dirty data and missing values; row and column filtering; data integration with concatenation and joins; and data transformations such as discretization, normalization, and pivoting.
  • Machine Learning: We cover classification algorithms such as Naive Bayes, Decision Trees, Logistic Regression, and K-NN. We also cover prediction/regression algorithms such as linear regression, polynomial regression, and decision tree regression, as well as unsupervised learning problems such as clustering and association rule learning, with k-means, hierarchical clustering, and the Apriori algorithm. Finally, we cover ensemble techniques in KNIME and Python on big data platforms.
  • Evaluation: In the final step, we study the metrics of success: the confusion matrix, precision, recall, sensitivity, and specificity for classification; purity and the Rand index for clustering; and RMSE, MSE, and MAE for regression/prediction problems, with KNIME and Python on big data platforms.
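The preprocessing, modeling, and evaluation steps above can be sketched end to end in a few lines of Python with Pandas and scikit-learn. The tiny height/weight data set below is made up for illustration and is not a course data set:

```python
# Minimal end-to-end sketch of the CRISP-DM modeling steps on a tiny
# synthetic data set (not a data set from the course).
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Data Understanding / Preprocessing: fill a missing value, then normalize.
df = pd.DataFrame({"height": [160, 175, None, 182, 158, 190],
                   "weight": [55, 80, 70, 95, 50, 100],
                   "label":  [0, 1, 1, 1, 0, 1]})
df["height"] = df["height"].fillna(df["height"].mean())
X = MinMaxScaler().fit_transform(df[["height", "weight"]])
y = df["label"]

# Machine Learning: fit a decision tree classifier.
model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Evaluation: confusion matrix, precision, and recall (here on the
# training data itself; a real project would hold out a test set).
pred = model.predict(X)
print(confusion_matrix(y, pred))
print(precision_score(y, pred), recall_score(y, pred))
```

A real project would also split the data into training and test sets before evaluating, as covered in the evaluation week.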

Course Objective and Learning Outcomes: 

1.     Understanding real-life cases involving data

2.     Understanding real-life data-related problems

3.     Understanding data analysis methodologies

4.     Understanding basic data operations such as preprocessing, transformation, and manipulation

5.     Understanding new technologies such as big data, NoSQL, and cloud computing

6.     Ability to use trending software in the industry

7.     Introduction to data-related problems and their applications


List of course software:

·       Excel,

·       KNIME,

·       Python programming with NumPy, Pandas, scikit-learn (sklearn), statsmodels, and Dask

This course emphasizes hands-on experience at every step, so attendance with a laptop computer is necessary. The software listed above will be provided during the course, and the list is subject to updates.


Grading: one individual term project covering all the topics of the course: 100%

Project Requirements :

You are free to select a project topic. The only requirement is that you cover at least two topics from the following list, solve the same problem with two separate approaches from the list, and compare your findings from the two alternative solutions: KNN, SVM, XGBoost, LightGBM, CatBoost, Decision Trees, Random Forest, Linear Regression, Polynomial Regression, SVR, ARL (ARM), K-Means, DBSCAN, HC

Sample Project Flow


Example project topics: you can search Kaggle for project ideas; you can also find good data sets on such web sites.

Project proposal (due Apr 30): please explain your project idea and the alternative solution approaches you will use from the course content.

Project Deliverables: You are asked to submit the items below via email by May 19, 2020.

  1. Presentation and demo video: please record a video of your presentation and a demo of your project.
  2. Project presentation: the slides you use during the presentation.
  3. Project report: a detailed explanation of your approaches; the difficulties you faced during the implementation; a comparison of your two alternative approaches to the same problem (implementation difficulty, success rates, running performance, etc.); and the critical parts of your algorithms. Also provide details about how you increased the success of your approach. Please answer all of the following questions in your report: What did you do to handle unbalanced data, if your problem had any? What did you do about missing values and dirty or noisy data? Did you use a dimension transformation such as PCA or LDA, and why? Did you check for underfitting or overfitting, and how did you address it? Did you use any regularization? Did you apply segmentation/clustering before the classification or prediction steps; why or why not? Which data science project management method did you use (e.g. SEMMA, CRISP-DM, or KDD), and why did you pick it? Which step was the most difficult, and why? How did you optimize the parameters of your algorithms? What were the best parameters, and why? How did you find them, and do you think you could reuse the same parameters on other data sets for the same problem in the future?
  4. Running code or project: you are free to implement your solution on any platform / in any language. The only requirement is that you code the two alternative solutions on the same platform / in the same programming language (otherwise it would not be fair to compare them). Please also provide an installation manual for your platform and instructions for running your code.
  5. Interview: a personal interview will be held after the submissions. Each of you will be asked to reserve a time slot of at least 30 minutes for your project. During this time, you will connect via an online platform, show your running demo, and answer questions. Please attach your available time slots to your submission.

Project Policies: Late submissions will not be accepted. If you solve your problem with only one approach, which also means you cannot compare two approaches, you will be graded out of a maximum of 35 points over 100. So please push yourselves to submit two separate approaches for your problem. You are free to use any library in your project, but you are not allowed to use a library, code from the internet, or code written by anybody else in the AI part of your project. In other words, you have to write the two AI modules for your project yourself, using two different approaches from the course content; using somebody else's code in the AI module will result in a final grade of 0.

Course Content:

Week 1: Introduction to Data, Problems and Real-World Examples. Some useful information: DIKW Pyramid (DIKW pyramid – Wikipedia); CRISP-DM (Cross-industry standard process for data mining – Wikipedia). Slides from the first week: week1
Week 2: Introduction to Descriptive Analytics. Repeating the first week for the majority of the class and introducing the concept of end-to-end data science projects.

Installation of KNIME from ( ) and a brief introduction document: ( )

Weight and Height sample project and data set for the KNIME workflow.

download first workflow

Week 3: Introduction to Data Manipulation. The concept of data and types of data: categorical (nominal, ordinal) and numerical (interval, ratio). Basic data manipulation techniques with KNIME: 1. Row filter and the concept of missing values; 2. Column filter; 3. Advanced filters; 4. Concatenate; 5. Join; 6. Group by, aggregation; 7. Formulas, string replace; 8. String manipulation; 9. Discrete/quantized data, binning; 10. Normalization; 11. Splitting and merging; 12. Type conversion (numeric, string).
Week 4: Introduction to Python Programming for Data Science and an end-to-end Python application for data science. A brief review of Python programming; an introduction to the data manipulation libraries NumPy and Pandas; an introduction to the scikit-learn library and a sample classification. You can install Anaconda and Spyder from the link below. We also covered the topics below during the class:

  • Data loading from an external source using the Pandas library (with the read_excel or read_csv methods)
  • DataFrame slicing and dicing (using the iloc property and the lists passed to it)
  • Column filtering (copying into a new data frame)
  • Row filtering (copying into a new data frame)
  • Advanced row filtering (e.g. filtering the people with an even height value)
  • Column- or row-wise formulas (we calculated the BMI for everybody)
  • Quantization (discretization or binning), where we applied condition-based binning
  • Min-max normalization (we used MinMaxScaler from the scikit-learn library)
  • Group-by operation (we used the groupby method from the Pandas library)
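The operations listed above can be sketched in a few lines of Pandas. The data frame below is an in-memory stand-in (the class loaded a file with read_excel / read_csv), and its column names and values are made up:

```python
# A short sketch of the Pandas operations from the Week 4 class, on a
# small made-up data set.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({"name": ["ali", "ayse", "can", "deniz"],
                   "height_cm": [170, 158, 181, 164],
                   "weight_kg": [72, 51, 90, 60]})

sliced = df.iloc[0:2, [1, 2]]                  # slicing and dicing with iloc
cols = df[["name", "height_cm"]].copy()        # column filtering into a new frame
tall = df[df["height_cm"] > 165].copy()        # row filtering into a new frame
even = df[df["height_cm"] % 2 == 0].copy()     # advanced filter: even heights

df["bmi"] = df["weight_kg"] / (df["height_cm"] / 100) ** 2     # row-wise formula
df["bin"] = pd.cut(df["height_cm"], bins=2,                    # binning
                   labels=["short", "tall"])
df["h_norm"] = MinMaxScaler().fit_transform(df[["height_cm"]]).ravel()  # min-max

print(df.groupby("bin", observed=False)["weight_kg"].mean())   # group by
```

Each filter copies into a new data frame, mirroring the "copying into a new data frame" points above.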

Click here to download the code from the class. For further information, I strongly suggest you read the documentation below:

Week 5: Classification Algorithms. Concepts of classification algorithms, implementing the algorithms in KNIME and coding them in Python. Algorithms covered: K-NN, Naive Bayes, Decision Tree, Logistic Regression, Support Vector Machines. Second Python code of the course, for classification; KNIME workflow for the classification algorithms.
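The five classifiers of this week can be sketched compactly in scikit-learn; the built-in Iris data set is used here as a stand-in for the course data, and the KNIME workflow builds the same flow with nodes:

```python
# Sketch of the Week 5 classifiers on the built-in Iris data set.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {"k-NN": KNeighborsClassifier(n_neighbors=5),
          "Naive Bayes": GaussianNB(),
          "Decision Tree": DecisionTreeClassifier(random_state=0),
          "Logistic Regression": LogisticRegression(max_iter=1000),
          "SVM": SVC()}
for name, model in models.items():
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)   # test-set accuracy
    print(f"{name}: {acc:.3f}")
```

Note how every classifier shares the same fit/score interface, which makes comparing two approaches for the term project straightforward.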
Week 6: Regression Algorithms. Concepts of prediction algorithms, implementing the algorithms in KNIME and coding them in Python. Algorithms covered: Linear Regression, Polynomial Regression, Support Vector Regressor, Regression Trees and the Decision Tree Regressor. Python code for the regression; KNIME workflow and the BIST 100 data set for the regression algorithms. The data set was obtained from:
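A sketch of three of this week's regressors on synthetic quadratic data (the course used the BIST 100 data set, which is not reproduced here; SVR is omitted for brevity). It also shows why a polynomial model beats a plain linear fit on curved data:

```python
# Linear vs. polynomial vs. decision-tree regression on synthetic data
# with a quadratic trend.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 100).reshape(-1, 1)
y = 0.5 * X.ravel() ** 2 + rng.normal(scale=0.2, size=100)  # quadratic + noise

lin = LinearRegression().fit(X, y)                      # straight-line fit
X_poly = PolynomialFeatures(degree=2).fit_transform(X)  # add x^2 feature
poly = LinearRegression().fit(X_poly, y)                # polynomial regression
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

print("linear MSE:", mean_squared_error(y, lin.predict(X)))
print("poly   MSE:", mean_squared_error(y, poly.predict(X_poly)))
print("tree   MSE:", mean_squared_error(y, tree.predict(X)))
```

The polynomial model should show a much lower MSE, because the underlying trend is quadratic while a straight line cannot follow the curve.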
Week 7: Clustering Algorithms. Concepts of clustering algorithms, implementing the algorithms in KNIME and coding them in Python. Algorithms covered: K-Means, DBSCAN, Hierarchical Clustering. KNIME workflow; Python code.
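The three clustering algorithms of this week can be sketched with scikit-learn on synthetic blobs (a stand-in for the course data; the eps value for DBSCAN is an assumption tuned to this synthetic data):

```python
# K-Means, DBSCAN, and hierarchical clustering on synthetic blobs.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, DBSCAN, AgglomerativeClustering

X, _ = make_blobs(n_samples=150, centers=3, cluster_std=0.6, random_state=42)

km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
db = DBSCAN(eps=0.8, min_samples=5).fit(X)        # density-based, no k needed
hc = AgglomerativeClustering(n_clusters=3).fit(X)  # hierarchical (bottom-up)

print("k-means clusters:", set(km.labels_))
print("DBSCAN clusters :", set(db.labels_) - {-1}, "(-1 marks noise)")
print("hierarchical    :", set(hc.labels_))
```

Unlike K-Means and hierarchical clustering, DBSCAN does not take the number of clusters as a parameter; it discovers dense regions and marks outliers with the label -1.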
Week 8: Association Rule Mining. Concepts of association rule mining (ARM) / association rule learning (ARL) algorithms, implementing the algorithms in KNIME and coding them in Python. Algorithm covered: the Apriori algorithm. Click here to download the Apyori library for the Python code. Click for the Python code; click for the KNIME workflow. Homework: link for Kaggle, Instacart.
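The Apriori idea itself fits in a short hand-rolled sketch: count itemset support over a list of baskets, keep the frequent itemsets, and derive confidence for the pairs. The baskets below are made up, and the class used the Apyori library rather than this manual version:

```python
# Hand-rolled sketch of the Apriori idea on made-up shopping baskets.
from itertools import combinations
from collections import Counter

baskets = [{"milk", "bread"}, {"milk", "bread", "butter"},
           {"bread", "butter"}, {"milk", "butter"}, {"milk", "bread"}]
min_support = 0.4

# Count the support of every 1- and 2-itemset.
counts = Counter()
for basket in baskets:
    for k in (1, 2):
        for itemset in combinations(sorted(basket), k):
            counts[itemset] += 1
frequent = {s: c / len(baskets) for s, c in counts.items()
            if c / len(baskets) >= min_support}

# Apriori property: every subset of a frequent pair is itself frequent,
# so the lookup frequent[(x,)] below is always safe.
for pair, supp in sorted(frequent.items()):
    if len(pair) == 2:
        x, y = pair
        print(f"{x} -> {y}: support={supp:.2f}, "
              f"confidence={supp / frequent[(x,)]:.2f}")
```

Support is the fraction of baskets containing an itemset; confidence of X -> Y is support(X, Y) / support(X). Real Apriori prunes candidate itemsets level by level instead of enumerating them all, which is what makes it scale.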
Week 9: Concept of Error and Evaluation Techniques. n-fold cross-validation, leave-one-out (LOO), and split validation; RMSE, MAE, and R² values for regression; Rand index, silhouette, and WCSS for clustering algorithms; accuracy, recall, precision, F-score / F1-score, etc. for classification algorithms. We also had an introduction to dimensionality reduction with PCA (principal component analysis) and to neural networks with the MLP (multi-layer perceptron). Please don't forget to install Keras for next week.
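The validation techniques above can be sketched with scikit-learn's cross_val_score; Iris and k-NN serve as stand-ins for whatever model and data set your project uses:

```python
# 5-fold cross-validation vs. leave-one-out on the Iris data set.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score, LeaveOneOut
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=5)

cv5 = cross_val_score(knn, X, y, cv=5)               # 5-fold cross-validation
loo = cross_val_score(knn, X, y, cv=LeaveOneOut())   # leave-one-out (n folds)
print("5-fold mean accuracy:", cv5.mean())
print("LOO mean accuracy   :", loo.mean())
```

Leave-one-out trains one model per sample, so it is the most thorough but also the most expensive variant; n-fold validation is the usual compromise.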
Week 10: Collective Learning


Week 11: Collective Learning, Consensus Learning and Clustering Algorithms: Ensemble Learning, Bagging and Boosting Techniques, Random Forest, GBM, XGBoost, LightGBM. Some links useful for the class:

Readings and resources:

Python code from the class: Gradient Boosting: XGBoost (to run the code, install XGBoost from the command prompt: conda install -c conda-forge xgboost). Install the XGBoost extension for KNIME.
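Bagging and boosting can be sketched with the scikit-learn equivalents below; XGBoost and LightGBM expose the same fit/score pattern once installed, and the breast-cancer data set is a stand-in, not a course data set:

```python
# Bagging vs. random forest vs. gradient boosting on a built-in data set.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import (BaggingClassifier, RandomForestClassifier,
                              GradientBoostingClassifier)

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = [BaggingClassifier(n_estimators=50, random_state=0),      # bagging
          RandomForestClassifier(n_estimators=100, random_state=0),
          GradientBoostingClassifier(random_state=0)]              # boosting
scores = {type(m).__name__: m.fit(X_tr, y_tr).score(X_te, y_te) for m in models}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

Bagging trains its base learners independently on bootstrap samples, while boosting trains them sequentially, each one correcting the errors of the previous ensemble.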

Week 12: Project Presentations, First Group. Presenters will be picked randomly during the class, and anybody absent will be considered not to have presented. Project deliveries (until May 6): project presentation, project report (explaining your project, your approach and methodologies, the difficulties you faced, the solutions you found, the results you achieved, and links to your data sources), KNIME workflows (in .knwf format), and Python code (in .py format). Please combine all these files into a single .zip or .rar archive and do not put more than 4 files in your archive.
Week 13: Project Presentations, Second Group. If you missed the project presentations in the first week, please contact me for further details.
Week 14 (May 12): TBA
Week 15 (May 19): TBA

ECE 550 Advanced AI


Classes: Mon 13.00 – 16.00

Location: Courses will be held online on Discord (for the server link, please contact Meltem Koc <> )

Lectures will be available on the YouTube channel (after a delay):

Instructor: Dr. Şadi Evren ŞEKER (+9 0531 605 6726)


Web Site: TBA

Course Content:

  • History and philosophy of Artificial Intelligence (AI)
  • Classical AI approaches such as search problems, machine learning, constraint satisfaction, graphical models, logic, etc.
  • Learning how to model a complex real-world problem with classical AI approaches


  • Introduction to Artificial Intelligence problems
  • Programming in Python to solve real-life problems with AI algorithms
  • Writing a real-world application with an AI module (such as a game)
  • Introducing AI subtopics such as neural computing, uncertainty and Bayesian networks, the concept of learning (supervised / unsupervised), etc.


  • S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall
  • A must-check:
  • Another useful link:
  • Some parts of the course are related to Machine Learning, Data Science, Data Mining, Pattern Recognition, Natural Language Processing, Statistics, Logic, Artificial Neural Networks, and Fuzzy Logic, so you can read any [text]books on these topics.


Final Exam (100%).

Requirements and grading details: You are asked to solve the same problem with 2 different AI approaches in same programming language (you can pick any programming language) and compare them. The grading details are listed below:

  • Cheating = 0%. If somebody else solves your problem or writes your code, or you submit an already-solved project without any effort and without understanding the approach, strategy, or coding, then you will be considered to have submitted somebody else's code; you will get 0 and fail the course, without exception.
  • Grading for Approach 1 = Code for Approach 1 (code will be questioned during the demonstration) (25%) + Detailed report on Approach 1 (5%) + Presentation of Approach 1 (5%) = 35%
  • Grading for Approach 2 = Code for Approach 2 (code will be questioned during the demonstration) (25%) + Detailed report on Approach 2 (5%) + Presentation of Approach 2 (5%) = 35%
  • Grading for comparing your approaches = If you have coded two approaches and both of them work, you are asked to compare their advantages and disadvantages, processing complexity, and memory complexity, and to give your comments on these approaches (presentation 10% + report 10% + demonstration 10%).

Grading summary: Approach 1 (35%) + Approach 2 (35%) + Comparison (30%) = 100%

  • These percentages represent only the maximum score you can get from each part of the project; the actual grade may be lower, depending on the quality of your submission.
  • Grading will also be individual, so depending on your answers during the demonstration you may get a different grade than your teammates.

Important Dates about Projects:

  • 5th of Jan, until midnight: submission of the project code (submit all related libraries or data files needed to execute your code), the report (Word or PDF format), and the presentation files (PowerPoint or PDF format).
  • 6th of Jan, until 14.00: project presentation video (submit a video link to the Discord channel so everybody in the class can watch your presentation; the link can be a YouTube video or any drive link).
  • Final exam week, TBA: demonstrations. You will be invited to a time slot of 30 minutes. Please check the Discord channel for updates.

The final exam date will be announced at the end of the semester; demonstrations will be by invitation, during the final exam week.


The final project will be group work (max 3 people in a group), and the expected deliverables are: a project report, a project presentation, and running code in Python (updated: the programming language is up to you, but you have to use the same programming environment/language for the whole project).

Projects should include at least 2 implementations from the following list: 1) Search / Heuristic, 2) CSP, 3) Game Trees, 4) Logic, 5) Fuzzy, 6) Machine Learning / ANN
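As a minimal illustration of topic 1 (search), here is a breadth-first search sketch on a toy state graph; the graph is made up, and a real project would apply the same idea to a problem of your own choosing:

```python
# Breadth-first search: returns a path with the fewest edges from start
# to goal in an unweighted directed graph, or None if unreachable.
from collections import deque

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"],
         "D": ["F"], "E": ["F"], "F": []}

def bfs_path(start, goal):
    """Shortest path (fewest edges) from start to goal, or None."""
    frontier = deque([[start]])   # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs_path("A", "F"))   # a shortest A-to-F path
```

Swapping the FIFO queue for a priority queue ordered by a heuristic turns this into the heuristic (best-first / A*-style) search variant named in the same topic.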

Late Submission Policy: Any late submission will incur a 10% penalty for each 24 hours. The demonstration cannot be postponed; if you do not appear at your demo time, you lose the grade percentage for your code.

Course Outline:

  • Introduction and Agents (chapters 1, 2)
  • Search (chapters 3, 4, 5, 6)
  • Logic (chapters 7, 8, 9)
  • Planning (chapters 11, 12)
  • Uncertainty (chapters 13, 14)
  • Learning (chapters 18, 20)
  • Natural Language Processing (chapters 22, 23)

Schedule and Contents (Very Very Very Tentative):

  • Class 1: [PPT] Introduction: Course Demonstration Slides, Introduction Slides
  • Class 2: [PPT] Agents
  • Class 3: [PPT] Search
  • Class 4: No Class
  • Class 5: [PPT] Heuristic Search
  • Class 6: [PPT] Constraint Satisfaction Problems
  • Class 7: [PPT] Game Playing
  • Class 8: Constraint Satisfaction Problems (CSP)
  • Class 9: [PPT] Logic, [PPT] First Order Logic, Inference in First Order Logic, [PPT] Uncertainty and Fuzzy Logic
  • Class 10: Supervised / Unsupervised Learning and Classification / Clustering Problems: k-NN, Decision Tree, Random Forest, Logistic Regression
  • Class 11: Regression: Logistic Regression, Decision Tree Regression, Linear Regression, Polynomial Regression
  • Class 12: [PPT] Artificial Neural Networks
  • Class 13: Project Presentations
  • Class 15: Project Presentations
  • Class 16: No Class. [PPT] Genetic Algorithms
  • Final Exam: Date TBA

Collaboration Policy: You may freely use internet resources and your course notes in completing assignments and quizzes for this course. You may not consult any person other than the professor when completing quizzes or exams. (Clarifying questions should be directed to the professor.) On assignments you may collaborate with others in the course, so long as you personally prepare the materials submitted under your name, and they accurately reflect your understanding of the topic. Any collaborations should be indicated by a note submitted with the assignment.


Please fill in the knowledge card attached here and send it back via email.