diff --git a/Code/Day 41.ipynb b/Code/Day 41.ipynb
new file mode 100644
index 0000000..b2b35e7
--- /dev/null
+++ b/Code/Day 41.ipynb
@@ -0,0 +1,147 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# 基础知识\n",
+ "基本的CNN结构如下: Convolution(卷积) -> Pooling(池化) -> Convolution -> Pooling -> Fully Connected Layer(全连接层) -> Output\n",
+ "\n",
+ "Convolution(卷积)是获取原始数据并从中创建特征映射的行为。Pooling(池化)是下采样,通常以“max-pooling”的形式,我们选择一个区域,然后在该区域中取最大值,这将成为整个区域的新值。Fully Connected Layers(全连接层)是典型的神经网络,其中所有节点都“完全连接”。卷积层不像传统的神经网络那样完全连接。\n",
+ "\n",
+ "卷积:我们将采用某个窗口,并在该窗口中查找要素,该窗口的功能现在只是新功能图中的一个像素大小的功能,但实际上我们将有多层功能图。接下来,我们将该窗口滑过并继续该过程,继续此过程,直到覆盖整个图像。\n",
+ "\n",
+ "池化:最常见的池化形式是“最大池化”,其中我们简单地获取窗口中的最大值,并且该值成为该区域的新值。\n",
+ "\n",
+ "全连接层:每个卷积和池化步骤都是隐藏层。在此之后,我们有一个完全连接的层,然后是输出层。完全连接的层是典型的神经网络(多层感知器)类型的层,与输出层相同。\n"
+ ]
+ },
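+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick illustration of the pooling step, the next cell applies 2x2 max-pooling with stride 2 to a small, made-up 4x4 array using plain NumPy; the values are arbitrary and are only meant to show how each 2x2 window collapses to its maximum."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import numpy as np\n",
+ "\n",
+ "# a made-up 4x4 single-channel \"feature map\", for illustration only\n",
+ "feature_map = np.array([[1, 3, 2, 1],\n",
+ "                        [4, 6, 5, 2],\n",
+ "                        [7, 2, 1, 0],\n",
+ "                        [3, 4, 2, 8]])\n",
+ "\n",
+ "# 2x2 max-pooling with stride 2: split into 2x2 blocks and keep each block's maximum\n",
+ "pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))\n",
+ "print(pooled)  # [[6 5]\n",
+ "               #  [7 8]]"
+ ]
+ },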
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 注意\n",
+ "本次代码中所需的X.pickle和y.pickle为上一篇的输出,路径请根据自己的情况更改!\n",
+ "\n",
+ "此篇中文为译者根据原文整理得到,可能有不当之处,可以点击此处查看原文。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Using TensorFlow backend.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Train on 17462 samples, validate on 7484 samples\n",
+ "Epoch 1/3\n",
+ "17462/17462 [==============================] - 47s 3ms/step - loss: 0.6728 - acc: 0.6019 - val_loss: 0.6317 - val_acc: 0.6463\n",
+ "Epoch 2/3\n",
+ "17462/17462 [==============================] - 41s 2ms/step - loss: 0.6164 - acc: 0.6673 - val_loss: 0.6117 - val_acc: 0.6776\n",
+ "Epoch 3/3\n",
+ "17462/17462 [==============================] - 41s 2ms/step - loss: 0.5690 - acc: 0.7129 - val_loss: 0.5860 - val_acc: 0.6963\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ ""
+ ]
+ },
+ "execution_count": 1,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "import tensorflow as tf\n",
+ "from keras.datasets import cifar10\n",
+ "from keras.preprocessing.image import ImageDataGenerator\n",
+ "from keras.models import Sequential\n",
+ "from keras.layers import Dense, Dropout, Activation, Flatten\n",
+ "from keras.layers import Conv2D, MaxPooling2D\n",
+ "\n",
+ "import pickle\n",
+ "\n",
+ "pickle_in = open(\"../datasets/X.pickle\",\"rb\")\n",
+ "X = pickle.load(pickle_in)\n",
+ "\n",
+ "pickle_in = open(\"../datasets/y.pickle\",\"rb\")\n",
+ "y = pickle.load(pickle_in)\n",
+ "\n",
+ "X = X/255.0\n",
+ "\n",
+ "model = Sequential()\n",
+ "\n",
+ "model.add(Conv2D(256, (3, 3), input_shape=X.shape[1:]))\n",
+ "model.add(Activation('relu'))\n",
+ "model.add(MaxPooling2D(pool_size=(2, 2)))\n",
+ "\n",
+ "model.add(Conv2D(256, (3, 3)))\n",
+ "model.add(Activation('relu'))\n",
+ "model.add(MaxPooling2D(pool_size=(2, 2)))\n",
+ "\n",
+ "model.add(Flatten()) # this converts our 3D feature maps to 1D feature vectors\n",
+ "\n",
+ "model.add(Dense(64))\n",
+ "\n",
+ "model.add(Dense(1))\n",
+ "model.add(Activation('sigmoid'))\n",
+ "\n",
+ "model.compile(loss='binary_crossentropy',\n",
+ " optimizer='adam',\n",
+ " metrics=['accuracy'])\n",
+ "\n",
+ "model.fit(X, y, batch_size=32, epochs=3, validation_split=0.3)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "在仅仅三个epoches之后,我们的验证准确率为71%。如果我们继续进行更多的epoches,我们可能会做得更好,但我们应该讨论我们如何知道我们如何做。为了解决这个问题,我们可以使用TensorFlow附带的TensorBoard,它可以帮助您在训练模型时可视化模型。\n",
+ "\n",
+ "我们将在下一个教程中讨论TensorBoard以及对我们模型的各种调整!"
+ ]
+ },
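+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a preview, the cell below is only a minimal sketch of attaching the TensorBoard callback from keras.callbacks to model.fit; the run name and log directory are arbitrary examples, and the next tutorial covers this properly."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# minimal sketch: logging training curves with Keras' TensorBoard callback\n",
+ "import time\n",
+ "from keras.callbacks import TensorBoard\n",
+ "\n",
+ "# the run name and log directory below are just examples\n",
+ "NAME = \"cnn-64x2-{}\".format(int(time.time()))\n",
+ "tensorboard = TensorBoard(log_dir=\"logs/{}\".format(NAME))\n",
+ "\n",
+ "# pass the callback when fitting, then run `tensorboard --logdir=logs` to view the curves:\n",
+ "# model.fit(X, y, batch_size=32, epochs=3, validation_split=0.3, callbacks=[tensorboard])"
+ ]
+ },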
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.2"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}