ACCV-18 submission ID ***

A New Strategy to Segment Nasopharyngeal Carcinoma by Using Convolutional Neural Networks

Abstract. Nasopharyngeal carcinoma (NPC), with the increase of its incidence in Southeast Asia and the northwestern region of Africa, has become an important subject of study. Based on computed tomography (CT) images, several researchers have tried to segment this tumor using different techniques, one of which is the convolutional neural network (CNN); different architectures have been applied with it. In this context, given the anatomy of CT images and the difficulty of segmenting this tumor, we developed a new method based on a new strategy: the tumor is segmented after the elimination of the organs, using simple and overlapping patches from 70 patients with stage T1 or stage T2 disease. Compared to the manual contouring of a radiation oncologist and to other studies, our automatic results show that these methods perform well in terms of precision, recall, Dice similarity coefficient (DSC) and Jaccard index.


1 Introduction
Nasopharyngeal carcinoma (NPC) is a carcinoma of epidermoid cell lineage located in the nasopharynx, just behind the nasal cavity. It is confined to a small size in stages T1 and T2, and spreads out to a large tumor size in stage T3 and then T4, involving intracranial or infratemporal regions, extensive neck disease, and/or distant spread to structures such as the cranial nerves, hypopharynx and eye socket [1]. This malignant tumor is prevalent in Southeast Asia, with a high incidence of 30 to 80 per 100,000 inhabitants, and in the northwestern region of Africa, with an intermediate incidence of 8 to 12 per 100,000 inhabitants [2]. According to the statistical survey of Parkin, Bray et al. [3], more than 80,000 new NPC cases were diagnosed worldwide in 2002 and 50,000 deaths were reported. Diagnosing this tumor at an early stage makes treatment by radiotherapy, which destroys the tumor cells, more likely to succeed [4]. Before starting this treatment, a preliminary step requires not only the location and characteristics of the tumor but also very specific information on the organs at risk (OARs) under study [5]. This step uses the treatment planning system (TPS), which allows radiation oncologists, following recommended guidelines (e.g., the RTOG 0615 protocol), to locate the tumor manually, slice by slice, on the computed tomography (CT) images. However, Harari, Song et al. reported that this manual delineation process is time-consuming, taking an average of 2.7 hours for a single head-and-neck (H&N) cancer case [6]. In addition, the inter- and intra-observer variation of the regions of interest (ROIs) of the tumor depends heavily on the knowledge, experience and preference of the radiation oncologists [7].

As a result, automatic methods are needed to avoid this time-consuming process and to increase accuracy. Several studies used microscopy images, which can only define the superficial location of the tumor without providing a volumetric estimation, and which limit the radiologist's appreciation [8]. In previous research, region growing was a basic method used to segment this tumor; however, this technique is sensitive to noise and requires, as a preprocessing step, the determination of preliminary seed points representing part of the object to be segmented [9]. In fact, the complex structure of the NPC in volume and shape makes its diagnosis difficult even for an expert radiation oncologist, given that the intensities of the organs and the tumor in this type of medical image are very close and that the shape, size and position of the tumor are not fixed, especially at an advanced stage (T3, T4), which hinders its proper extraction.

Deep learning methods have succeeded in computer vision tasks such as image classification [10]. Convolutional neural networks (CNNs) are one such method and have become the most popular deep learning algorithm [11]. CNNs have different architectures depending on the image to be classified. This technique has been applied to segment many organs and structures, such as skin lesions [12], liver [13], nuclei [14], brain [15], epithelial and stromal regions [16], breast [17], etc. Inspired by this success, using a deep CNN with a different architecture and process is a natural solution for segmenting the NPC.

In general, most traditional NPC segmentation based on CNNs uses the whole image as a training sample, with a modification of the network architecture. Men, Chen et al. developed a deep deconvolutional neural network (DDNN) that includes an encoder part and a decoder part, and compared it with the VGG-16 network [18]. However, the NPC occupies only a small part of the CT image, which decreases the influence of the NPC region and hence the accuracy of the segmented NPC.

To address this problem, Wang, Zu et al. divided the original image (size 512×512) into small patches of size 32×32; these patches were used to generate the training set in the cross section [19]. This method increases the segmentation performance, but it remains insufficient.

Based on the anatomy of CT images, the location of the tumor and the method of Wang, Zu et al., we developed in this work a new strategy to segment NPC of stage T1 and T2. Our strategy is to segment the tumor, after the elimination of the organs segmented in a previous step, using patches of size 16×16. The experimental results show that this strategy can be used to achieve the segmentation of NPC targets.

The remainder of the paper is organized as follows. In Section 2, we present the dataset and the proposed strategy. In Section 3, we detail the quantitative evaluation used to compare our methods with the manual contouring of the NPC. In Section 4, we report the experimental results and discussion. Finally, the paper ends with our conclusions in Section 5.

2 Material and Methods
Data Acquisition
A total of 70 patients with NPC of stage T1 or stage T2 were diagnosed in the radiotherapy department of Habib Bourguiba Hospital of Sfax, Tunisia. All patients were examined using CT images with a matrix size of 512×512 and a slice thickness of 3.0 mm.

A single radiation oncologist delineated the contours of the NPC manually, slice by slice, on the cross sections of the CT images using a Pinnacle TPS (Philips Radiation Oncology Systems, Fitchburg, WI, USA). These contours were used as labels in our method to segment the tumor.

New Strategy of CNN Model for Segmentation
In the present study, we introduce a new strategy for a CNN model (NCNN) to segment the target NPC. Based on the anatomy of CT images and the location of stage T1 and stage T2 tumors, there are three different regions: the tumor, the organs and the normal tissue, with the organs occupying a large proportion. In these two stages, the NPC and the organs are separate. Accordingly, we developed this model in two steps. The first step (Step I) segments all the organs as one organ, using the manual contouring of the organs as the label for the training samples. As a result of this step, we obtain two classes: organ, and otherwise (i.e., tumor and normal tissue). The second step (Step II) is then applied, after the elimination of the segmented organ, on the otherwise region to segment the tumor. Before each step, a preprocessing stage is needed to create the dataset: each 2D CT image containing the three regions, together with the manual contouring image of original size 512×512, is divided into small patches of size 16×16. These patches are used as training samples. An overview of the method is shown in Fig. 1.
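As a sketch of this preprocessing step, a 512×512 slice can be cut into non-overlapping 16×16 patches as follows (the function name `extract_patches` is ours, and a NumPy array stands in for the CT slice):

```python
import numpy as np

def extract_patches(image, patch_size=16):
    """Split a 2D CT slice (e.g. 512x512) into non-overlapping
    patch_size x patch_size patches, scanned row by row."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# A 512x512 slice yields (512/16)^2 = 1024 patches.
slice_ct = np.zeros((512, 512), dtype=np.float32)
patches = extract_patches(slice_ct)
print(patches.shape)  # (1024, 16, 16)
```

The same function would be applied to the manual contouring image, so that each intensity patch is paired with its label patch.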

Fig. 1: The architecture of the NCNN model

The architecture of the proposed CNN (Fig. 2) consists of three convolutional layers for feature extraction, each followed by a ReLU activation function. All the kernels of the convolutional layers have a window size of 3×3, a stride of 1 and 'same' padding. The 1st and 2nd convolutional layers produce feature maps of size 16×16×32, and the last one of size 16×16×64. A max-pooling layer is added after each convolutional layer, with a window size of 3×3, 'same' padding and a ReLU activation function. The output of the last max-pooling layer is flattened to a 1D vector. Then a fully connected network is applied, with one hidden layer of 256 nodes, a dropout with p = 0.5 and an output of 256 nodes. The CNN output (a 1D vector) is converted to a 2D image by filling a 16×16 matrix, taking every 16 nodes as the i-th row of the matrix.
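The layer stack described above can be sketched in Keras, the framework the experiments use. The optimizer, the sigmoid output activation and all names below are our assumptions, since the paper does not specify them:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_patch_cnn(patch_size=16):
    """Sketch of the described per-patch CNN: three 3x3 conv layers
    (32, 32, 64 filters) with stride 1 and 'same' padding, each
    followed by a stride-1 3x3 max-pooling with 'same' padding, then
    flatten -> FC-256 -> dropout(0.5) -> 256 outputs reshaped into a
    16x16 label patch."""
    inp = keras.Input(shape=(patch_size, patch_size, 1))
    x = inp
    for filters in (32, 32, 64):
        x = layers.Conv2D(filters, 3, strides=1, padding='same',
                          activation='relu')(x)
        x = layers.MaxPooling2D(pool_size=3, strides=1, padding='same')(x)
    x = layers.Flatten()(x)                       # 16*16*64 = 16384 values
    x = layers.Dense(256, activation='relu')(x)   # hidden layer, 256 nodes
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(patch_size * patch_size, activation='sigmoid')(x)
    out = layers.Reshape((patch_size, patch_size))(out)  # back to a 2D patch
    model = keras.Model(inp, out)
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model
```

With stride-1 pooling and 'same' padding the spatial size stays 16×16 throughout, matching the 16×16×32 and 16×16×64 feature-map sizes stated above.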

Fig. 2: The architecture of the proposed CNN
In order to enrich the dataset and provide more information in the training samples, we also created overlapping patches [20] of size 16×16 from the original image. To do so, we modified the CNN model as presented in the next section.

Overlapping CNN model
Overlapping patches decompose the image into small patches with a shift distance between the current and the previous patch. This shift can be in the horizontal and/or vertical direction (Fig. 3). Since the size of each patch is 16×16, the possible shift distances are 1, 2, …, 15; combining the horizontal and vertical shifts (including no shift) yields 16×16 = 256 different overlapping decompositions (Fig. 3).
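A minimal NumPy sketch of these shifted decompositions, under our reading that every (vertical, horizontal) offset pair in 0..15 defines one decomposition (the function name is ours):

```python
import numpy as np

def shifted_patch_grids(image, patch_size=16):
    """Yield, for every (vertical, horizontal) offset pair in
    0..patch_size-1, the stack of patch_size x patch_size patches cut
    from the image shifted by that offset: 16 x 16 = 256 decompositions."""
    h, w = image.shape
    for dy in range(patch_size):
        for dx in range(patch_size):
            grid = [image[y:y + patch_size, x:x + patch_size]
                    for y in range(dy, h - patch_size + 1, patch_size)
                    for x in range(dx, w - patch_size + 1, patch_size)]
            yield (dy, dx), np.stack(grid)

img = np.zeros((512, 512), dtype=np.float32)
n = sum(1 for _ in shifted_patch_grids(img))
print(n)  # 256 shifted decompositions
```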

Fig. 3: Overlapping patches (P = P1 = P2 = 1, 2, …, 15)
Fig. 4: An example of the final result by using the voting method
Fig. 5 illustrates the architecture of the overlapping CNN model (OCNN).
The CNN model was modified into another model respecting the same architecture and strategy. In Steps I and II, each of the different overlapping decompositions is tested independently, giving 256 different results. A voting method is then used to decide the final result for each pixel: it consists in calculating the average of the 256 predicted probabilities of the same pixel (Fig. 4).
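The voting step above amounts to a pixel-wise mean over the 256 probability maps; a minimal sketch (the function name and the 0.5 decision threshold are our assumptions):

```python
import numpy as np

def vote(prob_maps):
    """Fuse per-decomposition probability maps by averaging the
    predicted probability of each pixel, then threshold at 0.5."""
    mean_prob = np.mean(np.stack(prob_maps), axis=0)
    return (mean_prob >= 0.5).astype(np.uint8)

# Two toy 2x2 probability maps: pixel-wise means are 0.6, 0.3, 0.6, 0.85.
maps = [np.array([[0.8, 0.2], [0.55, 0.9]]),
        np.array([[0.4, 0.4], [0.65, 0.8]])]
print(vote(maps))  # [[1 0]
                   #  [1 1]]
```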

Fig. 5: Architecture of the overlapping CNN model
3 Quantitative Evaluation
In order to evaluate the performance of the two models during the training phase, we used the binary cross-entropy function to calculate the loss of the model. The loss layer specifies how training penalizes the deviation between the predicted values and the true labels, and is normally the final layer of the network.

Loss = -\sum_{i=1}^{N} \left( y_i \log(\hat{y}_i) + (1 - y_i) \log(1 - \hat{y}_i) \right)   (1)

The accuracy, on the other hand, specifies how correct the predictions are, by comparing the true labels with the thresholded predicted values.

Accuracy = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}\left( y_i = \operatorname{round}(\hat{y}_i) \right)   (2)

where N is the number of output nodes (256), y_i is the true value of the i-th output and \hat{y}_i is the predicted value of the i-th output.
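A NumPy sketch of Eq. (1) and Eq. (2), treating the 256 output nodes as a flat vector (helper names are ours; the comparison in Eq. (2) is implemented here by rounding the predicted probability):

```python
import numpy as np

def bce_loss(y_true, y_pred, eps=1e-7):
    """Binary cross entropy of Eq. (1) summed over the N output nodes;
    predictions are clipped away from 0 and 1 for numerical safety."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.sum(y_true * np.log(y_pred)
                   + (1 - y_true) * np.log(1 - y_pred))

def accuracy(y_true, y_pred):
    """Eq. (2): fraction of nodes whose rounded prediction matches."""
    return np.mean(y_true == np.round(y_pred))

y_true = np.array([1.0, 0.0, 1.0, 0.0])
y_pred = np.array([0.9, 0.1, 0.8, 0.3])
print(accuracy(y_true, y_pred))  # 1.0
```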

For the testing phase, the obtained results for organ, otherwise and tumor were compared with the manual contouring using the following quantitative measures: precision, recall, Dice similarity coefficient (DSC) and Jaccard index (Ji). These measures are defined in Eq. 3 to Eq. 6 as follows:
Precision = \frac{TP}{TP + FP}   (3)

where TP (true positive) is the number of pixels in the intersection between our result and the manual contouring, and FP (false positive) is the number of pixels rejected by the manual contouring.

Recall = \frac{TP}{TP + FN}   (4)
where FN (false negative) is the number of pixels rejected by our result.


DSC = \frac{2 \times Precision \times Recall}{Precision + Recall} = \frac{2TP}{2TP + FN + FP}   (5)

Ji = \frac{TP}{TP + FP + FN}   (6)
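Eq. 3 to Eq. 6 can be computed directly from two binary masks; a NumPy sketch (the helper name `overlap_metrics` is ours):

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Precision, recall, DSC and Jaccard index (Eq. 3-6) from two
    binary masks (1 = segmented pixel)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # pixels in both masks
    fp = np.sum(pred & ~truth)   # in our result, rejected by the MC
    fn = np.sum(~pred & truth)   # in the MC, rejected by our result
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    dsc = 2 * tp / (2 * tp + fp + fn)
    ji = tp / (tp + fp + fn)
    return precision, recall, dsc, ji

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(overlap_metrics(pred, truth))
```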
In addition, we compared our models with DDNN, VGG-16 and CNN of Yan(YCNN) in 34. The average DSC of organ, otherwise and tumor and haus- dor distance values for NPC were analyzed with paired t-tests between them with p-value p<0.05
4 Experimental Results and Discussion
In this research work, we used 70 patients with stage T1 or stage T2 as the total dataset. We divided this total into 7 parts: each part had 60 patients for training and 10 patients for testing, the latter not included in the training samples (Fig. 6). We implemented all the methods using Keras. Moreover, DDNN and VGG-16 were applied by using our strategy.
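The split described above can be sketched as follows (the random seed and the function name are our choices; the paper does not say how patients were assigned to the seven parts):

```python
import numpy as np

def seven_fold_splits(n_patients=70, n_folds=7, seed=0):
    """Split patient indices into 7 disjoint test groups of 10; each
    fold trains on the remaining 60 patients."""
    rng = np.random.default_rng(seed)
    ids = rng.permutation(n_patients)
    folds = np.array_split(ids, n_folds)
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        yield train, test

for train_ids, test_ids in seven_fold_splits():
    print(len(train_ids), len(test_ids))  # 60 10, seven times
```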

Fig. 6: Training and Testing data
Based on the strategy, Fig. 7 presents an example of a segmented organ compared with the manual contouring (MC) by the radiation oncologist. This figure illustrates the segmented results of our two models (NCNN and OCNN), DDNN and VGG-16 (Step I). Each subfigure contains three colors: yellow (the matching region), red (pixels of our result not matching the MC) and green (pixels of the MC not matching our result) (Fig. 7a-7d).

(a) NCNN vs MC (b) OCNN vs MC
(c) DDNN vs MC (d) VGG-16 vs MC
Fig. 7: Organ results of different methods
For Step II (tumor results), Fig. 8 shows the NCNN, OCNN, DDNN, VGG-16 and YCNN results with the manual contouring. As in the organ results, each subfigure (Fig. 8a-8e) contains the same three colors (yellow, red and green) with the same meaning.

(a) NCNN vs MC (b) OCNN vs MC
(c) DDNN vs MC (d) VGG-16 vs MC (e) YCNN vs MC
Fig. 8: Tumor results of different methods
For the training data, the average loss and accuracy over all patients were calculated (Table 1). The results for all tested patients are summarized in Tables 2 to 5 and Fig. 9. Tables 2 to 4 report the average precision, recall, DSC and Jaccard index (Ji) for the segmented organ, otherwise and tumor; Table 5 reports the Hausdorff distance, and Fig. 9 shows three boxplots obtained from the DSC analyses.

(a) Boxplots of organ (b) Boxplots of otherwise (c) Boxplots of tumor
Fig. 9: Boxplots of DSC
Table 1: Average result of loss and accuracy for 70 patients

Method   Accuracy  Loss
OCNN     0.88      0.24
NCNN     0.86      0.25
DDNN     0.85      0.28
YCNN     0.78      0.30
VGG-16   0.83      0.29

Table 2: Average result of organ for 70 patients

Method   Precision  Recall  DSC   Ji
OCNN     0.83       0.83    0.83  0.75
NCNN     0.82       0.80    0.81  0.74
DDNN     0.79       0.80    0.79  0.68
VGG-16   0.73       0.72    0.72  0.61
Table 3: Average result of otherwise for 70 patients

Method   Precision  Recall  DSC   Ji
OCNN     0.81       0.82    0.81  0.73
NCNN     0.79       0.80    0.79  0.71
DDNN     0.78       0.75    0.76  0.63
VGG-16   0.71       0.71    0.71  0.59
Table 4: Average result of tumor for 70 patients

Method   Precision  Recall  DSC   Ji
OCNN     0.84       0.83    0.83  0.75
NCNN     0.81       0.82    0.81  0.73
YCNN     0.71       0.72    0.71  0.59
DDNN     0.79       0.79    0.79  0.67
VGG-16   0.72       0.73    0.72  0.60
According to the results in the tables and in Fig. 9, our strategy can segment the tumor. In addition, using overlapping patches increases the performance, from an average DSC = 0.81, Ji = 0.73 for NCNN to DSC = 0.83, Ji = 0.75 for OCNN (p = 0.028). Using the whole image, by contrast, reduces the performance to DSC = 0.79, Ji = 0.67 for DDNN and DSC = 0.72, Ji = 0.60 for VGG-16 (p < 0.05). These results are expected, especially since the pixel values of the tumor and the normal tissue are very close. Moreover, segmenting the tumor directly, as in YCNN, gives the lowest quantitative evaluation values among the compared methods, with DSC = 0.71, Ji = 0.59 (p < 0.05). This comparison reveals that our strategy is well suited to segmenting NPC of stage T1 and stage T2: a higher performance for the segmented organ and otherwise regions increases the quality of the tumor segmentation, and a lower loss and a higher accuracy on the training samples lead to better NPC segmentation. Consequently, we suggest that providing more information during the training phase, by using overlapping patches and by segmenting after the elimination of the segmented organ, helps to segment the NPC, which is the case for OCNN.

5 Conclusion
Nasopharyngeal carcinoma has become an important health problem in Southeast Asia and the northwestern region of Africa. Several studies have tried to segment this tumor using deep learning, in order to classify CT images into NPC and normal tissue. However, using the previous architectures achieves the target only with lower performance. In order to obtain a good performance, we developed a new strategy using overlapping patches with a CNN, applied to 70 patients. In this paper, we presented new two-step methods, with simple and overlapping patches, called the NCNN and OCNN architectures, for NPC of stage T1 and stage T2.

The given results prove that our methods can segment the NPC tumor with a good performance. This performance was compared with the previous research, DDNN and VGG-16 applied with our strategy and YCNN using only simple patches, and achieved the highest quantitative evaluation for precision, recall, DSC, Ji, accuracy and loss; the comparison showed that OCNN has the best results. To improve our study, the manual contouring of the tumor could be performed by different experts. Also, since CT scans are 3D images, the other 2D views, coronal and sagittal, could also be used with our methods.

References
[1] Ho, J.H.C.: An epidemiologic and clinical study of nasopharyngeal carcinoma. International Journal of Radiation Oncology Biology Physics, 1978, vol. 4, no. 3-4, p. 183-198.

[2] Chang, E., Adami, H.: The enigmatic epidemiology of nasopharyngeal carcinoma. Cancer Epidemiology, Biomarkers & Prevention, 2006, vol. 15, p. 1765-1777.

[3] Parkin, D., Bray, F., Ferlay, J., Pisani, P.: Global cancer statistics. CA: A Cancer Journal for Clinicians, 2005, vol. 55, p. 74-108.

[4] Lee, N., Xia, P., Quivey, J.M., et al.: Intensity-modulated radiotherapy in the treatment of nasopharyngeal carcinoma: an update of the UCSF experience. International Journal of Radiation Oncology Biology Physics, 2002, vol. 53, no. 1, p. 12-22.

[5] Peszynska-Piorun, M., Malicki, J., Golusinski, W.: Doses in organs at risk during head and neck radiotherapy using IMRT and 3D-CRT. Radiology and Oncology, 2012, vol. 46, no. 4, p. 328-336.

[6] Harari, P.M., Song, S., Tomé, W.A.: Emphasizing conformal avoidance versus target definition for IMRT planning in head-and-neck cancer. International Journal of Radiation Oncology Biology Physics, 2010, vol. 77, no. 3, p. 950-958.

[7] Vinod, S.K., Myo, M., Michael, G.J., Lois, C.H.: A review of interventions to reduce inter-observer variability in volume delineation in radiation oncology. Journal of Medical Imaging and Radiation Oncology, 2016, vol. 60, no. 3, p. 393-406. doi:10.1111/1754-9485.12462

[8] Mohammed, M.A., Ghani, M.K.A., Hamed, R.I., et al.: Automatic segmentation and automatic seed point selection of nasopharyngeal carcinoma from microscopy images using region growing based approach. Journal of Computational Science, 2017, vol. 20, p. 61-69.

[9] Yang, X., Yeo, S.Y., Hong, J.M., et al.: A deep learning approach for tumor tissue image classification. Biomedical Engineering, 2016.

[10] Ledig, C., Theis, L., Huszár, F., et al.: Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint, 2017.

[11] Ustinova, E., Ganin, Y., Lempitsky, V.: Multi-region bilinear convolutional neural networks for person re-identification. In: 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2017, p. 1-6.

[12] Qi, J., Le, M., Li, C., et al.: Global and local information based deep network for skin lesion segmentation. arXiv preprint arXiv:1703.05467, 2017.

[13] Christ, P.F., Elshaer, M.E.A., Ettlinger, F., et al.: Automatic liver and lesion segmentation in CT using cascaded fully convolutional neural networks and 3D conditional random fields. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2016, p. 415-423.

[14] Naylor, P., Laé, M., Reyal, F., et al.: Nuclei segmentation in histopathology images using deep neural networks. In: 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017). IEEE, 2017, p. 933-936.

[15] Pereira, S., Pinto, A., Alves, V., et al.: Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Transactions on Medical Imaging, 2016, vol. 35, no. 5, p. 1240-1251.

[16] Xu, J., Luo, X., Wang, G., et al.: A deep convolutional neural network for segmenting and classifying epithelial and stromal regions in histopathological images. Neurocomputing, 2016, vol. 191, p. 214-223.

[17] Su, H., Liu, F., Xie, Y., et al.: Region segmentation in histopathological breast cancer images using deep convolutional neural network. In: 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI). IEEE, 2015, p. 55-58.

[18] Men, K., Chen, X., Zhang, Y., et al.: Deep deconvolutional neural network for target segmentation of nasopharyngeal cancer in planning CT images. Frontiers in Oncology, 2017, vol. 7, p. 315.

[19] Wang, Y., Zu, C., Hu, G., et al.: Automatic tumor segmentation with deep convolutional neural networks for radiotherapy applications. Neural Processing Letters, 2018, p. 1-12.

[20] Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2015, p. 234-241.
