Convolutional Neural Network Code Explained: FPGA Convolutional Neural Network


Convolutional Neural Network Autoencoder: Implementation and Result Analysis

(1) Implementation framework: Keras
(2) Dataset: MNIST handwritten digit recognition
(3) Key code:

Environment Setup

Use conda to create a new environment for Keras and TensorFlow.

Create the new environment in a Windows cmd terminal, then install the three packages with pip:

```
C:\Users\TJ619\Downloads\autoencoder-master>conda create -n keras_only python=3.9

(base) C:\Users\TJ619\Downloads\autoencoder-master>conda activate keras_only

(keras_only) C:\Users\TJ619\Downloads\autoencoder-master>conda install pip

(keras_only) C:\Users\TJ619\Downloads\autoencoder-master>pip install keras tensorflow matplotlib

(keras_only) C:\Users\TJ619\Downloads\autoencoder-master>conda list
# packages in environment at C:\Users\TJ619\AppData\Local\Continuum\anaconda3\envs\keras_only:
#
# Name                         Version      Build           Channel
absl-py                        1.0.0        pypi_0          pypi
astunparse                     1.6.3        pypi_0          pypi
ca-certificates                2021.10.26   haa95532_2      http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cachetools                     4.2.4        pypi_0          pypi
certifi                        2021.10.8    py39haa95532_0  http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
charset-normalizer             2.0.9        pypi_0          pypi
cycler                         0.11.0       pypi_0          pypi
flatbuffers                    2.0          pypi_0          pypi
fonttools                      4.28.3       pypi_0          pypi
gast                           0.4.0        pypi_0          pypi
google-auth                    2.3.3        pypi_0          pypi
google-auth-oauthlib           0.4.6        pypi_0          pypi
google-pasta                   0.2.0        pypi_0          pypi
grpcio                         1.42.0       pypi_0          pypi
h5py                           3.6.0        pypi_0          pypi
idna                           3.3          pypi_0          pypi
importlib-metadata             4.8.2        pypi_0          pypi
keras                          2.7.0        pypi_0          pypi
keras-preprocessing            1.1.2        pypi_0          pypi
kiwisolver                     1.3.2        pypi_0          pypi
libclang                       12.0.0       pypi_0          pypi
markdown                       3.3.6        pypi_0          pypi
matplotlib                     3.5.0        pypi_0          pypi
numpy                          1.21.4       pypi_0          pypi
oauthlib                       3.1.1        pypi_0          pypi
openssl                        1.1.1l       h2bbff1b_0      http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
opt-einsum                     3.3.0        pypi_0          pypi
packaging                      21.3         pypi_0          pypi
pillow                         8.4.0        pypi_0          pypi
pip                            21.2.4       py39haa95532_0  http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
protobuf                       3.19.1       pypi_0          pypi
pyasn1                         0.4.8        pypi_0          pypi
pyasn1-modules                 0.2.8        pypi_0          pypi
pyparsing                      3.0.6        pypi_0          pypi
python                         3.9.7        h_1             http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
python-dateutil                2.8.2        pypi_0          pypi
requests                       2.26.0       pypi_0          pypi
requests-oauthlib              1.3.0        pypi_0          pypi
rsa                            4.8          pypi_0          pypi
setuptools                     58.0.4       py39haa95532_0  http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
setuptools-scm                 6.3.2        pypi_0          pypi
six                            1.16.0       pypi_0          pypi
sqlite                         3.36.0       h2bbff1b_0      http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tensorboard                    2.7.0        pypi_0          pypi
tensorboard-data-server        0.6.1        pypi_0          pypi
tensorboard-plugin-wit         1.8.0        pypi_0          pypi
tensorflow                     2.7.0        pypi_0          pypi
tensorflow-estimator           2.7.0        pypi_0          pypi
tensorflow-io-gcs-filesystem   0.22.0       pypi_0          pypi
termcolor                      1.1.0        pypi_0          pypi
tomli                          1.2.2        pypi_0          pypi
typing-extensions              4.0.1        pypi_0          pypi
tzdata                         2021e        hda174b7_0      http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
urllib3                        1.26.7       pypi_0          pypi
vc                             14.2         h21ff451_1      http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
vs2015_runtime                 14.27.29016  h5e58377_2      http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
werkzeug                       2.0.2        pypi_0          pypi
wheel                          0.37.0       pyhd3eb1b0_1    http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
wincertstore                   0.2          py39haa95532_2  http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
wrapt                          1.13.3       pypi_0          pypi
zipp                           3.6.0        pypi_0          pypi
```
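Before going further it is worth confirming that the freshly created environment actually imports; the following is a minimal sanity check (the expected version numbers are simply the ones from the conda list above):

```python
# quick sanity check of the keras_only environment
import tensorflow as tf
import keras
import matplotlib

print("tensorflow:", tf.__version__)          # 2.7.0 in the listing above
print("keras:     ", keras.__version__)       # 2.7.0
print("matplotlib:", matplotlib.__version__)  # 3.5.0

# run one tiny op to make sure the runtime works
print(tf.reduce_sum(tf.ones((2, 2))).numpy())  # should print 4.0
```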


i) In the Keras code, the deconvolution is implemented as an ordinary convolution operation; for a detailed explanation see the blog post above: https://blog.csdn.net/quiet_girl/article/details/ .
ii) UpSampling2D() implements the inverse of average pooling.
iii) autoencoder.summary() prints the following:

```
Model: "model"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 input_1 (InputLayer)            [(None, 28, 28, 1)]       0
 conv2d (Conv2D)                 (None, 28, 28, 16)        160
 max_pooling2d (MaxPooling2D)    (None, 14, 14, 16)        0
 conv2d_1 (Conv2D)               (None, 14, 14, 8)         1160
 max_pooling2d_1 (MaxPooling2D)  (None, 7, 7, 8)           0
 conv2d_2 (Conv2D)               (None, 7, 7, 8)           584
 up_sampling2d (UpSampling2D)    (None, 14, 14, 8)         0
 conv2d_3 (Conv2D)               (None, 14, 14, 16)        1168
 up_sampling2d_1 (UpSampling2D)  (None, 28, 28, 16)        0
 conv2d_4 (Conv2D)               (None, 28, 28, 1)         145
=================================================================
Total params: 3,217
Trainable params: 3,217
Non-trainable params: 0
_________________________________________________________________
Epoch 1/20
```
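Each Param # above follows the standard Conv2D count, (kernel_height × kernel_width × input_channels + 1 bias) × output_channels, while the pooling and up-sampling layers contribute no parameters. A quick check reproducing the column (the helper name here is mine, purely for illustration):

```python
# reproduce the "Param #" column of autoencoder.summary()
def conv2d_params(k, c_in, c_out):
    """(k*k*c_in weights + 1 bias) per filter, times c_out filters."""
    return (k * k * c_in + 1) * c_out

counts = {
    "conv2d":   conv2d_params(3, 1, 16),   # 160
    "conv2d_1": conv2d_params(3, 16, 8),   # 1160
    "conv2d_2": conv2d_params(3, 8, 8),    # 584
    "conv2d_3": conv2d_params(3, 8, 16),   # 1168
    "conv2d_4": conv2d_params(3, 16, 1),   # 145
}
for name, p in counts.items():
    print(f"{name:10s} {p}")
print("total     ", sum(counts.values()))  # 3217, matching "Total params" above
```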


The complete program is as follows:

```python
# python3
# -*- coding: utf-8 -*-
# @Author : ziyue
# @Time   : 2021 12 10
"""
Convolutional Autoencoder.
"""
import numpy as np
from keras.datasets import mnist
from keras.models import Model
from keras.layers import Conv2D, MaxPool2D, Input, UpSampling2D
import matplotlib.pyplot as plt

np.random.seed(33)   # random seed, to reproduce results


def train(x_train):
    """
    Build and train the autoencoder.
    :param x_train: the training data
    :return: encoder, autoencoder and the training history
    """
    # input placeholder
    input_image = Input(shape=(28, 28, 1))

    # encoding layers
    x = Conv2D(CHANNEL_1, (3, 3), activation='relu', padding="same")(input_image)
    x = MaxPool2D((2, 2), padding='same')(x)
    x = Conv2D(CHANNEL_2, (3, 3), activation='relu', padding='same')(x)
    encoded = MaxPool2D((2, 2), padding='same')(x)

    # decoding layers
    x = Conv2D(CHANNEL_2, (3, 3), activation='relu', padding='same')(encoded)
    x = UpSampling2D((2, 2))(x)
    x = Conv2D(CHANNEL_1, (3, 3), activation='relu', padding='same')(x)
    x = UpSampling2D((2, 2))(x)
    decoded = Conv2D(CHANNEL_OUTPUT, (3, 3), activation='sigmoid', padding='same')(x)

    # build autoencoder and encoder
    autoencoder = Model(inputs=input_image, outputs=decoded)
    encoder = Model(inputs=input_image, outputs=encoded)

    # compile autoencoder
    autoencoder.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    autoencoder.summary()

    # training
    # need to return the history, otherwise history["accuracy"] cannot be plotted
    history_record = autoencoder.fit(x_train, x_train,
                                     epochs=EPOCHS,
                                     batch_size=BATCH_SIZE,
                                     shuffle=True)
    return encoder, autoencoder, history_record


def plot_accuracy(history_record):
    """
    Plot the training accuracy and loss curves.
    """
    accuracy = history_record.history["accuracy"]
    loss = history_record.history["loss"]
    epochs = range(len(accuracy))
    plt.plot(epochs, accuracy, 'bo', label='Training accuracy')
    plt.title('Training accuracy')
    plt.legend()
    plt.figure()
    plt.plot(epochs, loss, 'bo', label='Training loss')
    plt.title('Training loss')
    plt.legend()
    plt.show()


def show_images(decode_images, x_test):
    """
    Plot the original and reconstructed images.
    :param decode_images: the images after decoding
    :param x_test: testing data
    """
    n = 10
    plt.figure(figsize=(20, 4))
    for i in range(n):
        ax = plt.subplot(2, n, i + 1)
        ax.imshow(x_test[i].reshape(28, 28))
        plt.gray()
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)

        ax = plt.subplot(2, n, i + 1 + n)
        ax.imshow(decode_images[i].reshape(28, 28))
        plt.gray()
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)
    plt.show()


if __name__ == '__main__':
    CHANNEL_1 = 16
    CHANNEL_2 = 8
    CHANNEL_OUTPUT = 1
    EPOCHS = 20
    BATCH_SIZE = 64

    # Step 1: load data. x_train: (60000, 28, 28), y_train: (60000,)
    #                    x_test:  (10000, 28, 28), y_test:  (10000,)
    (x_train, y_train), (x_test, y_test) = mnist.load_data()

    # Step 2: normalize
    x_train = x_train.astype('float32') / 255.
    x_test = x_test.astype('float32') / 255.

    # Step 3: reshape data, x_train: (60000, 28, 28, 1), x_test: (10000, 28, 28, 1),
    #         one row denotes one sample
    x_train = x_train.reshape((x_train.shape[0], 28, 28, 1))
    x_test = x_test.reshape((x_test.shape[0], 28, 28, 1))

    # Step 4: train
    encoder, autoencoder, history_record = train(x_train=x_train)

    # show images and training curves
    decode_images = autoencoder.predict(x_test)
    show_images(decode_images, x_test)
    plot_accuracy(history_record)
```
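The script builds an encoder model alongside the autoencoder but never calls it. As a small follow-up (a sketch assuming the variables from the __main__ block above), the 7×7×8 latent codes can be pulled out and flattened, e.g. for clustering or visualization:

```python
# inspect the compressed representation produced by the encoder
latent = encoder.predict(x_test[:10])        # shape (10, 7, 7, 8)
print(latent.shape)

# flatten to one 392-dimensional vector per digit
latent_flat = latent.reshape(latent.shape[0], -1)
print(latent_flat.shape)                     # (10, 392)
```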

Paper title: Stacked Convolutional Auto-Encoders for Hierarchical Feature Extraction

Paper link: https://people.idsia.ch//~ciresan/data/icann2011.pdf

Abstract:

We present a novel convolutional auto-encoder (CAE) for unsupervised feature learning. A stack of CAEs forms a convolutional neural network (CNN). Each CAE is trained with conventional online gradient descent, without any additional regularization terms. A max-pooling layer is essential to learn biologically plausible features, similar to those found by previous approaches. Initializing a CNN with the filters of a trained CAE stack yields superior performance on digit (MNIST) and object recognition (CIFAR10) benchmarks.

Introduction:

The main purpose of unsupervised learning methods is to extract generally useful features from unlabeled data, to detect and remove input redundancies, and to preserve only the essential aspects of the data in robust and discriminative representations. Unsupervised methods are routinely used in many scientific and industrial applications. In the context of neural network architectures, unsupervised layers can be stacked on top of each other to build deep hierarchies [7]. The input-layer activations are fed to the first layer, which feeds the next, and so on, for all layers in the hierarchy. Deep architectures can be trained layer by layer in an unsupervised fashion and then fine-tuned by back-propagation to become classifiers [9]. Unsupervised initialization tends to avoid local minima and increases the network's performance stability [6].
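As a concrete illustration of this layer-wise scheme, here is a minimal Keras sketch in the spirit of the program above. It is my own illustrative sketch, not the paper's exact procedure: it assumes x_train and y_train from the earlier script, and the layer sizes and epoch counts are arbitrary. Each autoencoding stage is trained to reconstruct the output of the previous, already-trained stage, and the stacked encoders are then fine-tuned with labels as a classifier.

```python
# greedy layer-wise pretraining followed by supervised fine-tuning (sketch)
# assumes x_train (60000, 28, 28, 1) and y_train (60000,) from the script above
from keras.layers import Conv2D, MaxPool2D, UpSampling2D, Input, Flatten, Dense
from keras.models import Model


def train_stage(data, channels):
    """Train one convolutional autoencoding stage on `data`, return its encoder."""
    inp = Input(shape=data.shape[1:])
    x = Conv2D(channels, (3, 3), activation='relu', padding='same')(inp)
    enc = MaxPool2D((2, 2), padding='same')(x)
    x = UpSampling2D((2, 2))(enc)
    rec = Conv2D(data.shape[-1], (3, 3), activation='relu', padding='same')(x)
    ae = Model(inp, rec)
    ae.compile(optimizer='adam', loss='mse')
    ae.fit(data, data, epochs=1, batch_size=64, shuffle=True)
    return Model(inp, enc)


# stage 1 learns on the raw images, stage 2 on the codes produced by stage 1
encoder_1 = train_stage(x_train, 16)        # (28, 28, 1)  -> (14, 14, 16)
codes_1 = encoder_1.predict(x_train)
encoder_2 = train_stage(codes_1, 8)         # (14, 14, 16) -> (7, 7, 8)

# stack the pretrained encoders and fine-tune the whole model as a classifier
features = encoder_2(encoder_1.output)
logits = Dense(10, activation='softmax')(Flatten()(features))
classifier = Model(encoder_1.input, logits)
classifier.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                   metrics=['accuracy'])
classifier.fit(x_train, y_train, epochs=1, batch_size=64)
```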

3 Convolutional Auto-Encoder (CAE)

Fully connected AEs and DAEs both ignore the 2D image structure. This is not only a problem when dealing with realistically sized inputs; it also introduces redundancy in the parameters, forcing each feature to be global (i.e., to span the entire visual field). The trend in vision and object recognition adopted by the most successful models [17,25], however, is to discover localized features that repeat themselves all over the input. CAEs differ from conventional AEs in that their weights are shared among all locations in the input, which preserves spatial locality. The reconstruction is therefore a linear combination of basic image patches based on the latent code.
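For reference, the description above corresponds to the usual single-layer CAE formulation; the notation below is reconstructed from the standard presentation rather than copied from the paper, so the exact symbols may differ slightly. The k-th latent feature map and the reconstruction are

```latex
h^{k} = \sigma\left(x \ast W^{k} + b^{k}\right), \qquad
y = \sigma\Big(\sum_{k \in H} h^{k} \ast \tilde{W}^{k} + c\Big)
```

where \ast is 2D convolution, \sigma the activation, \tilde{W}^{k} the encoder kernel flipped over both dimensions (this reuse of the encoder weights is the weight sharing mentioned above), b^{k} and c are biases, and H is the set of latent feature maps; the parameters are learned by minimizing the reconstruction error between x and y.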



Conclusion

We introduced the Convolutional Auto-Encoder, an unsupervised method for hierarchical feature extraction. It learns biologically plausible filters, and a CNN can be initialized from a stack of CAEs. Although the CAE's over-complete hidden representation makes learning harder than in a standard auto-encoder, good filters emerge if a max-pooling layer is used, an elegant way of enforcing sparse codes without having to set any regularization parameters by trial and error. Pre-trained CNNs tend to outperform randomly initialized networks only slightly, but consistently. Our CIFAR10 result is the best for any unsupervised method trained on the raw data, and close to the best published result on this benchmark.