diff --git a/Code/DAY 2.ipynb b/Code/DAY 2.ipynb
index f9ec15b..ee89270 100644
--- a/Code/DAY 2.ipynb
+++ b/Code/DAY 2.ipynb
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#100 Days of Machine Learning - Day 2: Simple Linear Regression\n",
+ "#100 Days of Machine Learning - Day 1: Data Preprocessing\n",
"##Step 1: Data Preprocessing"
]
},
@@ -198,7 +198,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "##Step 4: Visualization"
+ "##Visualization"
]
},
{
diff --git a/Code/Day 11 K-NN.md b/Code/Day 11 K-NN.md
new file mode 100644
index 0000000..2afaa1a
--- /dev/null
+++ b/Code/Day 11 K-NN.md
@@ -0,0 +1,55 @@
+# K-Nearest Neighbors (K-NN)
+
+K-Nearest Neighbors is a simple, non-parametric classifier: a new point is assigned the class held by the majority of its k nearest training points in feature space.
+
+## The Dataset | Social Network Ads
+
+`Social_Network_Ads.csv` holds social-network user records; the columns at indices 2 and 3 (age and estimated salary) serve as the features, and the column at index 4 (whether the user purchased) as the binary target.
+
+## Importing the libraries
+```python
+import numpy as np
+import matplotlib.pyplot as plt
+import pandas as pd
+```
+
+## Importing the dataset
+```python
+dataset = pd.read_csv('Social_Network_Ads.csv')
+X = dataset.iloc[:, [2, 3]].values
+y = dataset.iloc[:, 4].values
+```
+
+## Splitting the dataset into the Training set and Test set
+```python
+# sklearn.cross_validation was removed in scikit-learn 0.20; use model_selection instead
+from sklearn.model_selection import train_test_split
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
+```
+
+## Feature Scaling
+```python
+from sklearn.preprocessing import StandardScaler
+sc = StandardScaler()
+X_train = sc.fit_transform(X_train)
+X_test = sc.transform(X_test)
+```
+## Fitting K-NN to the Training set
+```python
+from sklearn.neighbors import KNeighborsClassifier
+# the Minkowski metric with p=2 is the standard Euclidean distance
+classifier = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
+classifier.fit(X_train, y_train)
+```
+## Predicting the Test set results
+```python
+y_pred = classifier.predict(X_test)
+```
+
+## Making the Confusion Matrix
+```python
+from sklearn.metrics import confusion_matrix
+cm = confusion_matrix(y_test, y_pred)
+```
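+
+## Evaluating accuracy
+As a quick sanity check (a suggested addition, not part of the original tutorial), the accuracy is the trace of the confusion matrix divided by its sum; `sklearn.metrics.accuracy_score` computes the same number directly:
+```python
+from sklearn.metrics import accuracy_score
+accuracy = accuracy_score(y_test, y_pred)  # equals cm.trace() / cm.sum()
+```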
diff --git a/Code/Day 7.jpg b/Code/Day 7.jpg
new file mode 100644
index 0000000..3a7c87e
Binary files /dev/null and b/Code/Day 7.jpg differ