Deep Residual Shrinkage Network: An Artificial Intelligence Method for Highly Noisy Data

The Deep Residual Shrinkage Network is an improved variant of the Deep Residual Network. In essence, it integrates the Deep Residual Network, attention mechanisms, and soft thresholding functions.

The working principle of the Deep Residual Shrinkage Network can be understood as follows. First, the network uses attention mechanisms to identify unimportant features. It then applies soft thresholding functions to eliminate those unimportant features (setting them to zero), while marking and preserving the important ones. This process strengthens the ability of the deep neural network to extract useful features from noisy signals.

1. Research Motivation

First, when classifying samples, it is often impossible to remove the noise completely. Examples of such noise include Gaussian noise, pink noise, and Laplacian noise. More broadly, samples also commonly contain information that is irrelevant to the classification task at hand; this irrelevant information can likewise be regarded as noise. Such noise can degrade classification performance. (Soft thresholding is a key step in many signal denoising algorithms.)

For example, imagine a conversation recorded at the roadside. The audio may contain the sounds of car horns and wheels. If we perform speech recognition on that signal, these background sounds will inevitably affect the result. From a deep learning perspective, the deep neural network should eliminate the features corresponding to the horns and wheels, so that they do not affect the speech recognition result.

Second, the amount of noise usually varies from sample to sample, even within the same dataset. (This variation is analogous to what attention mechanisms handle. Take an image dataset as an example: the location of the target object may differ from image to image, and attention mechanisms can focus on the specific location of the target object in each image.)

For example, suppose we train a cat-and-dog classifier on five images labeled “dog”. Image 1 may contain a dog and a mouse, Image 2 a dog and a goose, Image 3 a dog and a chicken, Image 4 a dog and a donkey, and Image 5 a dog and a duck. During training, the irrelevant objects (the mouse, goose, chicken, donkey, and duck) will interfere with the classifier and reduce classification accuracy. If we can identify these irrelevant objects and eliminate their features, we can improve the accuracy of the cat-and-dog classifier.

2. Soft Thresholding

Soft thresholding is a core step in many signal denoising algorithms. It eliminates features whose absolute values are below a certain threshold and shrinks features whose absolute values are above that threshold toward zero. It can be implemented with the following formula:

\[y = \begin{cases} x - \tau & x > \tau \\ 0 & -\tau \le x \le \tau \\ x + \tau & x < -\tau \end{cases}\]

The derivative of the soft thresholding output with respect to the input is:

\[\frac{\partial y}{\partial x} = \begin{cases} 1 & x > \tau \\ 0 & -\tau \le x \le \tau \\ 1 & x < -\tau \end{cases}\]

The formula above shows that the derivative of soft thresholding is either 1 or 0. This property is identical to that of the ReLU activation function. Therefore, soft thresholding can also reduce the risk of vanishing and exploding gradients in deep learning algorithms.
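As a quick illustration, the piecewise formula above can be written in a few lines of NumPy; the function name `soft_threshold` and the test values below are only illustrative.

```python
import numpy as np

def soft_threshold(x, tau):
    # Zero out values inside [-tau, tau]; move everything else tau closer to zero.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(soft_threshold(x, tau=1.0))  # [-2. -0.  0.  0.  2.]
```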

In the soft thresholding function, the setting of the threshold must satisfy two conditions. First, the threshold must be positive. Second, the threshold must not be larger than the maximum absolute value of the input signal; otherwise, the output would be all zeros.

In addition, the threshold should ideally satisfy a third condition: each sample should have its own independent threshold, determined by that sample's noise content.

The reason is that the noise content usually varies across samples. For example, within the same dataset, Sample A may contain little noise while Sample B contains a lot. In that case, Sample A should use a smaller threshold during soft thresholding, and Sample B should use a larger one. In deep neural networks, these features and thresholds lose their clear physical meaning, but the underlying logic remains the same: each sample should have its own independent threshold, determined by its specific noise content.

3. Attention Mechanism

Attention mechanisms are easy to understand in the field of computer vision. An animal's visual system can quickly scan an entire area to locate a target and then focus its attention on the target object. This allows the system to extract more detail from the target while suppressing irrelevant information. For details, readers can refer to the literature on attention mechanisms.

The Squeeze-and-Excitation Network (SENet) is a relatively new deep learning method that uses attention mechanisms. For different samples, different feature channels contribute differently to the classification task. SENet uses a small sub-network to obtain a set of weights and then multiplies these weights with the features of the corresponding channels, adjusting the magnitude of the features in each channel. This process can be viewed as applying different levels of attention to different feature channels.

(Figure: Squeeze-and-Excitation Network)

In this way, each sample has its own independent set of weights; in other words, the weights of any two samples can be different. In SENet, the specific path for obtaining the weights is “Global Pooling → Fully Connected Layer → ReLU Function → Fully Connected Layer → Sigmoid Function.”
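A minimal PyTorch sketch of this weight path is shown below. The class name `SEBlock` and the reduction ratio are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel attention via Global Pooling -> FC -> ReLU -> FC -> Sigmoid."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):                   # x: (batch, channels, height, width)
        w = x.mean(dim=(2, 3))              # global average pooling -> (batch, channels)
        w = torch.relu(self.fc1(w))         # first fully connected layer + ReLU
        w = torch.sigmoid(self.fc2(w))      # second fully connected layer + Sigmoid
        return x * w[:, :, None, None]      # rescale each channel by its weight
```

Because the weights are computed from each sample's own pooled features, any two samples can receive different channel weights, as described above.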

(Figure: Squeeze-and-Excitation Network)

4. Soft Thresholding with Deep Attention Mechanism

The Deep Residual Shrinkage Network borrows the structure of the SENet sub-network and uses it to implement soft thresholding under a deep attention mechanism. The sub-network (marked by the red box in the figure) learns a set of thresholds, and the network then applies soft thresholding to each feature channel using these thresholds.

(Figure: Deep Residual Shrinkage Network)

In this sub-network, the system first computes the absolute values of all features in the input feature map. It then applies global average pooling and averaging to obtain a feature, denoted A. In a separate path, the feature map after global average pooling is fed into a small fully connected network whose final layer is a Sigmoid function, which normalizes the output to between 0 and 1 and produces a coefficient denoted α. The final threshold is expressed as α × A. The threshold is therefore the product of two numbers: one between 0 and 1, and the other the average absolute value of the feature map. This method ensures that the threshold is positive and not excessively large.

Furthermore, different samples produce different thresholds. As a result, this method can be understood as a specialized attention mechanism: it identifies features that are unimportant for the current task, transforms them into values close to zero through two convolutional layers, and then sets them to zero via soft thresholding; conversely, it identifies features that are important for the current task, transforms them into values far from zero through two convolutional layers, and finally preserves them.
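The PyTorch sketch below puts these pieces together as one residual shrinkage building block, assuming the channel-wise variant (one threshold per channel per sample) and one-dimensional signals; the class name `ShrinkageBlock`, the kernel sizes, and the layer arrangement are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ShrinkageBlock(nn.Module):
    """Residual block with a threshold-learning sub-network and soft thresholding."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(                        # two convolutional layers
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, 3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, 3, padding=1),
        )
        self.fc = nn.Sequential(                          # small fully connected network
            nn.Linear(channels, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid(),  # coefficient alpha in (0, 1)
        )

    def forward(self, x):                                 # x: (batch, channels, length)
        f = self.conv(x)
        a = f.abs().mean(dim=2)                           # A: average absolute value per channel
        alpha = self.fc(a)                                # alpha: learned scaling coefficient
        tau = (alpha * a).unsqueeze(-1)                   # threshold = alpha * A
        f = torch.sign(f) * torch.relu(f.abs() - tau)     # soft thresholding
        return x + f                                      # identity shortcut of the residual block
```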

Finally, a certain number of these basic modules are stacked, together with convolutional layers, batch normalization, activation functions, global average pooling, and a fully connected output layer, to build the complete Deep Residual Shrinkage Network.
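A complete network could then be assembled roughly as sketched below, reusing the `ShrinkageBlock` class from the previous sketch; the channel count and the number of stacked blocks are arbitrary illustrative choices.

```python
import torch.nn as nn

def build_drsn(in_channels, num_classes, channels=16, num_blocks=4):
    # Input convolution, stacked residual shrinkage blocks, then batch norm,
    # activation, global average pooling, and a fully connected output layer.
    return nn.Sequential(
        nn.Conv1d(in_channels, channels, 3, padding=1),
        *[ShrinkageBlock(channels) for _ in range(num_blocks)],
        nn.BatchNorm1d(channels), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        nn.Linear(channels, num_classes),
    )
```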

(Figure: Deep Residual Shrinkage Network)

5. Generalization Capability

The Deep Residual Shrinkage Network is a general-purpose method for feature learning. The reason is that, in many feature learning tasks, samples typically contain noise as well as irrelevant information, and this noise and irrelevant information can degrade feature learning performance. For example:

Consider image classification. A single image may contain many other objects, which can be regarded as “noise.” The Deep Residual Shrinkage Network may be able to use its attention mechanism to notice this “noise” and then apply soft thresholding to set the corresponding features to zero, potentially improving image classification accuracy.

Consider speech recognition, especially in very noisy environments such as roadside conversations or factory workshops. The Deep Residual Shrinkage Network may be able to improve speech recognition accuracy, or at least provide a methodology for doing so.

Reference

Minghang Zhao, Shisheng Zhong, Xuyun Fu, Baoping Tang, Michael Pecht, Deep residual shrinkage networks for fault diagnosis, IEEE Transactions on Industrial Informatics, 2020, 16(7): 4681-4690.

https://ieeexplore.ieee.org/document/8850096

BibTeX

@article{Zhao2020,
  author    = {Minghang Zhao and Shisheng Zhong and Xuyun Fu and Baoping Tang and Michael Pecht},
  title     = {Deep Residual Shrinkage Networks for Fault Diagnosis},
  journal   = {IEEE Transactions on Industrial Informatics},
  year      = {2020},
  volume    = {16},
  number    = {7},
  pages     = {4681-4690},
  doi       = {10.1109/TII.2019.2943898}
}

Academic Impact

This paper has received more than 1,400 citations on Google Scholar.

According to incomplete statistics, the Deep Residual Shrinkage Network (DRSN) has been used in more than 1,000 publications/studies, spanning many fields including mechanical engineering, electrical power, vision, healthcare, speech, text, radar, and remote sensing.