Deep Residual Shrinkage Network: An Artificial Intelligence Method for Highly Noisy Data


The Deep Residual Shrinkage Network is an improved variant of the Deep Residual Network. Essentially, the Deep Residual Shrinkage Network integrates the Deep Residual Network, attention mechanisms, and soft thresholding functions.

We can understand how the Deep Residual Shrinkage Network works as follows. First, the network uses attention mechanisms to identify unimportant features. Then, the network applies soft thresholding functions to set these unimportant features to zero. Conversely, the network identifies important features and preserves them. This process strengthens the feature extraction capability of the deep neural network, helping it extract useful features from signals containing noise.

1. Research Motivation

First, noise is unavoidable when an algorithm classifies samples. Examples of such noise include Gaussian noise, pink noise, and Laplacian noise. More broadly, samples often contain information that is irrelevant to the current classification task. We can regard this irrelevant information as noise. This noise can degrade classification performance. (As we know, soft thresholding is a key step in many signal denoising algorithms.)

For example, consider a conversation by a roadside. The audio may contain the sounds of car horns and wheels. Suppose we want to perform speech recognition on these signals. The background sounds will interfere with the results. From a deep learning perspective, the deep neural network should eliminate the features associated with the horns and wheels, so that those features do not degrade the speech recognition results.

Second, the amount of noise often varies across samples. This holds even within a single dataset. (This variation is analogous to the situation addressed by attention mechanisms. Take an image dataset as an example. The location of the target object may differ from one image to another. Attention mechanisms can focus on the specific location of the target object in each image.)

For example, suppose we train a cat-and-dog classifier using five images labeled “dog.”

  • Image 1 may contain a dog and a mouse.
  • Image 2 may contain a dog and a goose.
  • Image 3 may contain a dog and a chicken.
  • Image 4 may contain a dog and a donkey.
  • Image 5 may contain a dog and a duck.

During training, the irrelevant objects will interfere with the classifier. These objects include the mice, geese, chickens, donkeys, and ducks. This interference reduces classification accuracy. If we can identify these irrelevant objects, we can eliminate the features associated with them. In this way, we can improve the accuracy of the cat-and-dog classifier.

2. Soft Thresholding

Soft thresholding is a core step in many signal denoising algorithms. The algorithm removes features whose absolute values are below a certain threshold, and shrinks the features whose absolute values exceed the threshold toward zero. Researchers can apply soft thresholding using the following formula:

\[y = \begin{cases} x - \tau & x > \tau \\ 0 & -\tau \le x \le \tau \\ x + \tau & x < -\tau \end{cases}\]
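The piecewise formula above can be written compactly as sign(x)·max(|x| − τ, 0). A minimal NumPy sketch (the function name `soft_threshold` is just an illustrative choice, not from the paper):

```python
import numpy as np

def soft_threshold(x, tau):
    """Element-wise soft thresholding: zero out values whose absolute
    value is below tau, and shrink the rest toward zero by tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# With tau = 1.0, |x| <= 1 maps to 0; larger values move toward zero by 1.
x = np.array([-3.0, -0.5, 0.0, 0.8, 2.5])
y = soft_threshold(x, 1.0)
```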

The derivative of the soft thresholding output with respect to the input is:

\[\frac{\partial y}{\partial x} = \begin{cases} 1 & x > \tau \\ 0 & -\tau \le x \le \tau \\ 1 & x < -\tau \end{cases}\]

The formula above shows that the derivative of soft thresholding is either 1 or 0. This property matches that of the ReLU activation function. Therefore, soft thresholding can also reduce the risk of vanishing and exploding gradients in deep learning algorithms.

In the soft thresholding function, setting the threshold must satisfy two conditions. First, the threshold must be a positive number. Second, the threshold must not exceed the maximum absolute value of the input signal. Otherwise, the output will be all zeros.

Moreover, it is desirable for the threshold to satisfy a third condition: each sample should have its own independent threshold, determined by how much noise that sample contains.

Isizathu kukobana i-noise content ivamise ukuhluka phakathi kwama-samples. Isibonelo, i-Sample A ingaba ne-noise encani kodwana i-Sample B ibe ne-noise enengi ku-dataset efanako. Esimeni esinjalo, i-Sample A kufuze isebenzise i-threshold encani lokha nakwenziwa i-soft thresholding. I-Sample B kufuze isebenzise i-threshold ekulu. Ema-deep neural networks, nakube la ma-features nama-thresholds alahlekelwa yi-physical definition yawo ecacileko, i-logic esisisekeloihlala ifana. Ngamanye amazwi, i-sample ngayinye kufuzeibe ne-threshold e-independent. I-threshold leyoincike ku-noise content ethile ye-sample leyo.

3. Attention Mechanism

Researchers can readily understand the attention mechanism through the field of computer vision. The visual systems of animals can locate targets by quickly scanning an entire scene. The visual system then focuses its attention on the target object. This allows the system to extract more details about the target while suppressing irrelevant information. For details, readers can refer to the literature on attention mechanisms.

The Squeeze-and-Excitation Network (SENet) represents a relatively new deep learning method that uses attention mechanisms. For different samples, different feature channels contribute differently to the classification task. SENet uses a small sub-network to obtain a set of weights (“Learn a set of weights”). Then, SENet multiplies these weights by the features of the corresponding channels (“Apply weighting to each feature channel”). This operation adjusts the magnitude of the features in each channel. We can view this as applying different levels of attention to different feature channels.

Squeeze-and-Excitation Network

In this way, each sample has its own independent set of weights. In other words, the weights of any two samples can differ. In SENet, the specific path for obtaining the weights is “Global Pooling → Fully Connected Layer → ReLU Function → Fully Connected Layer → Sigmoid Function.”
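The weight path above can be sketched in NumPy as follows. This is a minimal illustration, not SENet's implementation: the fully connected weights are random placeholders standing in for learned parameters, and the names `se_weights` and `reduction` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def se_weights(feature_map, reduction=4):
    """Sketch of the SENet weight path:
    Global Pooling -> FC -> ReLU -> FC -> Sigmoid.
    feature_map: array of shape (channels, height, width).
    The FC weights are random placeholders for learned parameters."""
    c = feature_map.shape[0]
    squeezed = feature_map.mean(axis=(1, 2))          # global average pooling -> (c,)
    w1 = rng.standard_normal((c, c // reduction))     # first FC layer (placeholder)
    w2 = rng.standard_normal((c // reduction, c))     # second FC layer (placeholder)
    hidden = np.maximum(squeezed @ w1, 0.0)           # ReLU
    return 1.0 / (1.0 + np.exp(-(hidden @ w2)))       # Sigmoid -> one weight per channel

feature_map = rng.standard_normal((8, 4, 4))
weights = se_weights(feature_map)
# Re-weight each channel by its attention weight.
recalibrated = feature_map * weights[:, None, None]
```

Because the Sigmoid squashes each output into (0, 1), every channel is scaled down by its own attention weight.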

Squeeze-and-Excitation Network

4. Soft Thresholding with a Deep Attention Mechanism

The Deep Residual Shrinkage Network uses a sub-network with the structure of the SENet sub-network. The network uses this structure to perform soft thresholding under a deep attention mechanism. The sub-network (shown inside the red box) learns a set of thresholds (“Learn a set of thresholds”). Then, the network applies these thresholds to perform soft thresholding on each feature channel.

Deep Residual Shrinkage Network

In this sub-network, the system first computes the absolute values of all features in the input feature map. Then, the system applies global average pooling and averaging to obtain a feature, denoted A. In the other path, the system feeds the feature map into a small fully connected network after global average pooling. This fully connected network uses a Sigmoid function as its final layer, which normalizes the output to a value between 0 and 1, yielding a coefficient, denoted α. The final threshold is then α×A. Therefore, the threshold is the product of two numbers: one lies between 0 and 1, and the other is the average of the absolute values of the feature map. This design guarantees that the threshold is positive, and also that it is not excessively large.
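The τ = α×A computation can be sketched in NumPy like this. As a hedged illustration only: the fully connected weights are random placeholders for learned parameters, and `drsn_thresholds` computes channel-wise thresholds (one τ per channel) rather than reproducing the paper's exact sub-network.

```python
import numpy as np

rng = np.random.default_rng(1)

def drsn_thresholds(feature_map, reduction=4):
    """Sketch of the DRSN threshold sub-network:
    tau = alpha * A, where A is the average absolute feature value and
    alpha in (0, 1) comes from a small FC network ending in a Sigmoid.
    FC weights are random placeholders for learned parameters."""
    c = feature_map.shape[0]
    a = np.abs(feature_map).mean(axis=(1, 2))         # A: mean |feature| per channel
    w1 = rng.standard_normal((c, c // reduction))
    w2 = rng.standard_normal((c // reduction, c))
    hidden = np.maximum(a @ w1, 0.0)                  # FC -> ReLU
    alpha = 1.0 / (1.0 + np.exp(-(hidden @ w2)))      # FC -> Sigmoid, so 0 < alpha < 1
    return alpha * a                                  # tau = alpha * A

feature_map = rng.standard_normal((8, 4, 4))
tau = drsn_thresholds(feature_map)
```

Since 0 < α < 1 and A > 0, every threshold is positive and strictly smaller than the channel's average absolute feature value, matching the two guarantees described above.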

Moreover, different samples yield different thresholds. Therefore, we can interpret this approach as a special attention mechanism: the mechanism identifies features that are irrelevant to the current task, transforms them into values close to zero through the two convolutional layers, and then sets them to zero via soft thresholding. Equivalently, the mechanism identifies features relevant to the current task, transforms them into values far from zero through the two convolutional layers, and ultimately preserves them.

Finally, we stack a certain number of basic modules (“Stack many basic modules”), together with convolutional layers, batch normalization, activation functions, global average pooling, and a fully connected output layer. This process builds the complete Deep Residual Shrinkage Network.
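A heavily simplified end-to-end sketch of one basic module and the stacking step, in NumPy. This is not the paper's implementation: the convolutional layers are replaced by the identity so the sketch stays self-contained, the fully connected weights are random placeholders, and the names `shrinkage_block` and `drsn_forward` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def soft_threshold(x, tau):
    """Element-wise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def shrinkage_block(x, reduction=4):
    """One basic residual shrinkage module, heavily simplified:
    learn channel-wise thresholds, soft-threshold the features, and add
    the identity shortcut. Real blocks also contain convolutional layers
    and batch normalization, omitted here. x: (channels, height, width)."""
    c = x.shape[0]
    a = np.abs(x).mean(axis=(1, 2))                   # A per channel
    w1 = rng.standard_normal((c, c // reduction))     # placeholder FC weights
    w2 = rng.standard_normal((c // reduction, c))
    alpha = 1.0 / (1.0 + np.exp(-(np.maximum(a @ w1, 0.0) @ w2)))
    tau = (alpha * a)[:, None, None]                  # channel-wise thresholds
    return x + soft_threshold(x, tau)                 # identity path + shrunk features

def drsn_forward(x, num_blocks=3):
    """Stack several basic modules, then apply global average pooling
    to produce the feature vector for the fully connected output layer."""
    for _ in range(num_blocks):
        x = shrinkage_block(x)
    return x.mean(axis=(1, 2))

features = drsn_forward(rng.standard_normal((8, 4, 4)))
```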

In the diagram, note the “Identity path” running alongside the main path, as well as the “Weighting” applied before the outputs are merged.

Deep Residual Shrinkage Network

5. Generalization Capability

The Deep Residual Shrinkage Network is a general method for feature learning. The reason is that, in many feature learning tasks, samples often contain noise as well as irrelevant information, both of which can degrade feature learning performance. For example:

Consider image classification. An image may contain many other objects at the same time. We can regard these objects as “noise.” The Deep Residual Shrinkage Network can use its attention mechanism to notice this “noise,” and then use soft thresholding to set the features associated with it to zero. This can improve the accuracy of image classification.

Consider speech recognition, particularly in noisy environments such as a roadside conversation or a factory workshop. The Deep Residual Shrinkage Network can improve speech recognition accuracy, or at the very least, it offers a methodology capable of doing so.

Reference

Minghang Zhao, Shisheng Zhong, Xuyun Fu, Baoping Tang, Michael Pecht, Deep residual shrinkage networks for fault diagnosis, IEEE Transactions on Industrial Informatics, 2020, 16(7): 4681-4690.

https://ieeexplore.ieee.org/document/8850096

BibTeX

@article{Zhao2020,
  author    = {Minghang Zhao and Shisheng Zhong and Xuyun Fu and Baoping Tang and Michael Pecht},
  title     = {Deep Residual Shrinkage Networks for Fault Diagnosis},
  journal   = {IEEE Transactions on Industrial Informatics},
  year      = {2020},
  volume    = {16},
  number    = {7},
  pages     = {4681-4690},
  doi       = {10.1109/TII.2019.2943898}
}

Academic Impact

This paper has received more than 1,400 citations on Google Scholar.

According to incomplete statistics, researchers have applied the Deep Residual Shrinkage Network (DRSN) in more than 1,000 publications and studies. These applications span many fields, including mechanical engineering, electrical power, vision, healthcare, speech, text, radar, and remote sensing.