322 results for "Shen, HT"
Search Results
2. Determination of the α-decay half-life of Po210 based on film and slice bismuth samples at room temperature
- Author
-
Zhao, QZ, Wang, XM, Wang, W, He, M, Dong, KJ, Xiao, CJ, Ruan, XD, Shen, HT, Wu, SY, Yang, XR, Dou, L, Xu, YN, Cai, L, Pang, FF, Zhang, H, Pang, YJ, and Jiang, S
- Subjects
Atomic ,Molecular ,Nuclear ,Particle and Plasma Physics ,Nuclear & Particles Physics - Abstract
The α-decay rate of Po210 was measured in a film sample (Po210@Bi2O3) and a slice sample (Po210@Bi metal), with the former used as a reference. The half-lives observed in the Po210@Bi2O3 and Po210@Bi metal environments at room temperature were (138.40 ± 0.21) d and (138.87 ± 0.87) d, respectively. The half-life of Po210 is consistent with the internationally recommended value within the stated uncertainties, and no deviation of the α-decay rate was found between the film sample (Po210@Bi2O3) and the slice sample (Po210@Bi metal).
- Published
- 2015
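For context on how half-lives like those quoted in the entry above are obtained from a measured decay rate, the standard radioactive-decay relations are shown below. This is the generic textbook relation, not the authors' specific fitting procedure; the numerical example is illustrative only.

```latex
% Generic radioactive-decay relations (textbook; not the authors' specific fit):
% the measured activity decays exponentially, and the fitted decay constant gives the half-life.
A(t) = \lambda N(t) = A_0\, e^{-\lambda t},
\qquad
T_{1/2} = \frac{\ln 2}{\lambda}.
% Example: \lambda \approx 5.01\times10^{-3}\ \mathrm{d^{-1}} \;\Rightarrow\; T_{1/2} \approx 138.4\ \mathrm{d}.
```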
3. Privacy-Preserving Adaptive Remaining Useful Life Prediction via Source-Free Domain Adaption
- Author
-
Wu, K, Li, J, Meng, L, Li, F, and Shen, HT
- Abstract
Unsupervised domain adaptation (UDA) strives to transfer learned knowledge to differently distributed datasets using both source and target data. Recently, an increasing number of UDA methods have been proposed for domain-adaptive remaining useful life (RUL) prediction. However, many industries place a high value on privacy protection. The confidentiality of degradation data in certain fields, such as aircraft engines or bearings, makes the source data inaccessible. To cope with this challenge, our work proposes a source-free domain adaptation method for cross-domain RUL prediction. Specifically, an adversarial architecture with one feature encoder and two RUL predictors is proposed. We first maximize the prediction discrepancy between the predictors to detect target samples that are far from the support of the source. Then the feature encoder is trained to minimize the discrepancy, which generates features near the support. In addition, a weight regularization is used to replace supervised training on the source domain. We evaluate our proposed approach on the commonly used C-MAPSS and FEMTO-ST datasets. Extensive experimental results demonstrate that our approach can significantly improve prediction reliability on the target domain.
- Published
- 2023
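The adversarial scheme described in the entry above (one feature encoder, two RUL predictors, maximize then minimize their prediction discrepancy on target data) can be illustrated with a minimal PyTorch-style sketch. This is a hedged illustration of the general discrepancy min-max idea, not the authors' implementation; the module sizes, optimizers, and toy batch are assumptions.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim, hid=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(),
                                 nn.Linear(hid, hid), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    def __init__(self, hid=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hid, hid), nn.ReLU(), nn.Linear(hid, 1))
    def forward(self, z):
        return self.net(z)

def discrepancy(p1, p2):
    # mean absolute difference between the two RUL predictions
    return (p1 - p2).abs().mean()

encoder, pred1, pred2 = Encoder(24), Predictor(), Predictor()   # 24-dim input is assumed
opt_p = torch.optim.Adam(list(pred1.parameters()) + list(pred2.parameters()), lr=1e-3)
opt_e = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def adapt_step(x_target):
    # Step A: maximize prediction discrepancy w.r.t. the two predictors (encoder frozen)
    z = encoder(x_target).detach()
    loss_a = -discrepancy(pred1(z), pred2(z))
    opt_p.zero_grad(); loss_a.backward(); opt_p.step()
    # Step B: minimize the discrepancy w.r.t. the feature encoder (predictors fixed)
    z = encoder(x_target)
    loss_b = discrepancy(pred1(z), pred2(z))
    opt_e.zero_grad(); loss_b.backward(); opt_e.step()

adapt_step(torch.randn(32, 24))  # toy target-domain batch
```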
4. A new approach for noninvasive transdermal determination of blood uric acid levels
- Author
-
Ching CTS, Yong KK, Yao YD, Shen HT, Hsieh SM, Jheng DY, Sun TP, and Shieh HL
- Subjects
Medicine (General) ,R5-920 - Abstract
The aims of this study were to investigate the most effective combination of physical forces from laser, electroporation, and reverse iontophoresis for noninvasive transdermal extraction of uric acid, and to develop a highly sensitive uric acid biosensor (UAB) for quantifying the uric acid extracted. It is believed that the combination of these physical forces has additional benefits for extraction of molecules other than uric acid from human skin. A diffusion cell with porcine skin was used to investigate the most effective combination of these physical forces. UABs coated with ZnO2 nanoparticles and constructed in an array configuration were developed in this study. The results showed that a combination of laser (0.7 W), electroporation (100 V/cm2), and reverse iontophoresis (0.5 mA/cm2) was the most effective and significantly enhanced transdermal extraction of uric acid. A custom-designed UAB coated with ZnO2 nanoparticles and constructed in a 1×3 array configuration (UAB-1×3-ZnO2) demonstrated sufficient sensitivity (9.4 µA/mM) for quantifying the uric acid extracted by the combined physical forces of laser, electroporation, and reverse iontophoresis. A good linear relationship (R2=0.894) was demonstrated between the concentration of uric acid (0.2–0.8 mM) inside the diffusion cell and the current response of the UAB-1×3-ZnO2. In conclusion, a new approach to noninvasive transdermal extraction and quantification of uric acid has been established. Keywords: laser, electroporation, reverse iontophoresis, noninvasive, uric acid, biosensor
- Published
- 2014
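The entry above reports a linear current-concentration relationship (sensitivity of roughly 9.4 µA/mM over 0.2–0.8 mM, R² = 0.894). The short numpy sketch below shows how such a linear calibration could be fit and then inverted to estimate concentration from a measured current; the simulated readings, noise level, and intercept are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
conc = np.array([0.2, 0.4, 0.6, 0.8])                       # mM, calibration standards
current = 9.4 * conc + 0.5 + rng.normal(0, 0.2, conc.size)   # uA, simulated sensor readings

slope, intercept = np.polyfit(conc, current, 1)              # linear least-squares fit
r2 = np.corrcoef(conc, current)[0, 1] ** 2                   # coefficient of determination

def estimate_concentration(i_measured_uA):
    """Invert the linear calibration to estimate uric acid concentration (mM)."""
    return (i_measured_uA - intercept) / slope

print(f"fit: I = {slope:.2f}*C + {intercept:.2f} uA, R^2 = {r2:.3f}")
print(f"estimated C for a 6.0 uA reading: {estimate_concentration(6.0):.2f} mM")
```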
5. Webly Supervised Fine-Grained Recognition: Benchmark Datasets and An Approach
- Author
-
Sun, Z, Yao, Y, Wei, XS, Zhang, Y, Shen, F, Wu, J, Zhang, J, and Shen, HT
- Abstract
Learning from the web can ease the extreme dependence of deep learning on large-scale manually labeled datasets. Especially for fine-grained recognition, which targets distinguishing subordinate categories, it can significantly reduce labeling costs by leveraging free web data. Despite its significant practical and research value, the webly supervised fine-grained recognition problem has not been extensively studied in the computer vision community, largely due to the lack of high-quality datasets. To fill this gap, in this paper we construct two new benchmark webly supervised fine-grained datasets, termed WebFG-496 and WebiNat-5089. Concretely, WebFG-496 consists of three sub-datasets containing a total of 53,339 web training images with 200 species of birds (Web-bird), 100 types of aircraft (Web-aircraft), and 196 models of cars (Web-car). WebiNat-5089 contains 5,089 sub-categories and more than 1.1 million web training images, making it the largest webly supervised fine-grained dataset to date. As a minor contribution, we also propose a novel webly supervised method (termed “Peer-learning”) for benchmarking these datasets. Comprehensive experimental results and analyses on the two new benchmark datasets demonstrate that the proposed method achieves superior performance over competing baseline models and the state of the art. Our benchmark datasets and the source code of Peer-learning have been made available at https://github.com/NUST-Machine-Intelligence-Laboratory/weblyFG-dataset.
- Published
- 2022
6. Guidelines for the use and interpretation of assays for monitoring autophagy (4th edition)
- Author
-
Klionsky, DJ, Abdel-Aziz, AK, Abdelfatah, S, Abdellatif, M, Abdoli, A, Abel, S, Abeliovich, H, Abildgaard, MH, Abudu, YP, Acevedo-Arozena, A, Adamopoulos, IE, Adeli, K, Adolph, TE, Adornetto, A, Aflaki, E, Agam, G, Agarwal, A, Aggarwal, BB, Agnello, M, Agostinis, P, Agrewala, JN, Agrotis, A, Aguilar, PV, Ahmad, ST, Ahmed, ZM, Ahumada-Castro, U, Aits, S, Aizawa, S, Akkoc, Y, Akoumianaki, T, Akpinar, HA, Al-Abd, AM, Al-Akra, L, Al-Gharaibeh, A, Alaoui-Jamali, MA, Alberti, S, Alcocer-Gomez, E, Alessandri, C, Ali, M, Al-Bari, MAA, Aliwaini, S, Alizadeh, J, Almacellas, E, Almasan, A, Alonso, A, Alonso, GD, Altan-Bonnet, N, Altieri, DC, Alves, S, da Costa, CA, Alzaharna, MM, Amadio, M, Amantini, C, Amaral, C, Ambrosio, S, Amer, AO, Ammanathan, V, An, ZY, Andersen, SU, Andrabi, SA, Andrade-Silva, M, Andres, AM, Angelini, S, Ann, D, Anozie, UC, Ansari, MY, Antas, P, Antebi, A, Anton, Z, Anwar, T, Apetoh, L, Apostolova, N, Araki, T, Araki, Y, Arasaki, K, Araujo, WL, Araya, J, Arden, C, Arevalo, MA, Arguelles, S, Arias, E, Arikkath, J, Arimoto, H, Ariosa, AR, Armstrong-James, D, Arnaune-Pelloquin, L, Aroca, A, Arroyo, DS, Arsov, I, Artero, R, Asaro, DML, Aschner, M, Ashrafizadeh, M, Ashur-Fabian, O, Atanasov, AG, Au, AK, Auberger, P, Auner, HW, Aurelian, L, Autelli, R, Avagliano, L, Avalos, Y, Aveic, S, Aveleira, CA, AvinWittenberg, T, Aydin, Y, Ayton, S, Ayyadevara, S, Azzopardi, M, Baba, M, Backer, JM, Backues, SK, Bae, DH, Bae, ON, Bae, SH, Baehrecke, EH, Baek, A, Baek, SH, Bagetta, G, Bagniewska-Zadworna, A, Bai, H, Bai, J, Bai, XY, Bai, YD, Bairagi, N, Baksi, S, Balbi, T, Baldari, CT, Balduini, W, Ballabio, A, Ballester, M, Balazadeh, S, Balzan, R, Bandopadhyay, R, Banerjee, S, Bao, Y, Baptista, MS, Baracca, A, Barbati, C, Bargiela, A, Barila, D, Barlow, PG, Barmada, SJ, Barreiro, E, Barreto, GE, Bartek, J, Bartel, B, Bartolome, A, Barve, GR, Basagoudanavar, SH, Bassham, DC, Jr, RCB, Basu, A, Batoko, H, Batten, I, Baulieu, EE, Baumgarner, BL, Bayry, J, Beale, R, Beau, I, Beaumatin, F, Bechara, LRG, Beck, GR, Beers, MF, Begun, J, Behrends, C, Behrens, GMN, Bei, R, Bejarano, E, Bel, S, Behl, C, Belaid, A, Belgareh-Touze, N, Bellarosa, C, Belleudi, F, Perez, MB, Bello-Morales, R, Beltran, JSD, Beltran, S, Benbrook, DM, Bendorius, M, Benitez, BA, Benito-Cuesta, I, Bensalem, J, Berchtold, MW, Berezowska, S, Bergamaschi, D, Bergami, M, Bergmann, A, Berliocchi, L, Berlioz-Torrent, C, Bernard, A, Berthoux, L, Besirli, CG, Besteiro, S, Betin, VM, Beyaert, R, Bezbradica, JS, Bhaskar, K, Bhatia-Kissova, I, Bhattacharya, R, Bhattacharya, S, Bhattacharyya, S, Bhuiyan, MS, Bhutia, SK, Bi, LR, Bi, XL, Biden, TJ, Bijian, K, Billes, VA, Binart, N, Bincoletto, C, Birgisdottir, AB, Bjorkoy, G, Blanco, G, Blas-Garcia, A, Blasiak, J, Blomgran, R, Blomgren, K, Blum, JS, Boada-Romero, E, Boban, M, BoeszeBattaglia, K, Boeuf, P, Boland, B, Bomont, P, Bonaldo, P, Bonam, SR, Bonfili, L, Bonifacino, JS, Boone, BA, Bootman, MD, Bordi, M, Borner, C, Bornhauser, BC, Borthakur, G, Bosch, J, Bose, S, Botana, LM, Botas, J, Boulanger, CM, Boulton, ME, Bourdenx, M, Bourgeois, B, Bourke, NM, Bousquet, G, Boya, P, Bozhkov, PV, Bozi, LHM, Bozkurt, TO, Brackney, DE, Brandts, CH, Braun, RJ, Braus, GH, Bravo-Sagua, R, Bravo-San Pedro, JM, Brest, P, Bringer, MA, Briones-Herrera, A, Broaddus, VC, Brodersen, P, Alvarez, EMC, Brodsky, JL, Brody, SL, Bronson, PG, Bronstein, JM, Brown, CN, Brown, RE, Brum, PC, Brumell, JH, Brunetti-Pierri, N, Bruno, D, Bryson-Richardson, RJ, Bucci, C, Buchrieser, C, Bueno, M, Buitrago-Molina, LE, Buraschi, 
S, Buch, S, Buchan, JR, Buckingham, EM, Budak, H, Budini, M, Bultynck, G, Burada, F, Burgoyne, JR, Buron, MI, Bustos, V, Buttner, S, Butturini, E, Byrd, A, Cabas, I, Cabrera-Benitez, S, Cadwell, K, Cai, JJ, Cai, L, Cai, Q, Cairo, M, Calbet, JA, Caldwell, GA, Caldwell, KA, Call, JA, Calvani, R, Calvo, AC, Barrera, MCR, Camara, NO, Camonis, JH, Camougrand, N, Campanella, M, Campbell, EM, Campbell-Valois, FX, Campello, S, Campesi, I, Campos, JC, Camuzard, O, Cancino, J, de Almeida, DC, Canesi, L, Caniggia, I, Canonico, B, Canti, C, Cao, B, Caraglia, M, Carames, B, Carchman, EH, Cardenal-Munoz, E, Cardenas, C, Cardenas, L, Cardoso, SM, Carew, JS, Carle, GF, Carleton, G, Carloni, S, Carmona-Gutierrez, D, Carneiro, LA, Carnevali, O, Carosi, JM, Carra, S, Carrier, A, Carrier, L, Carroll, B, Carter, AB, Carvalho, AN, Casanova, M, Casas, C, Casas, J, Cassioli, C, Castillo, EF, Castillo, K, Castillo-Lluva, S, Castoldi, F, Castori, M, Castro, AF, Castro-Caldas, M, Castro-Hernandez, J, Castro-Obregon, S, Catz, SD, Cavadas, C, Cavaliere, F, Cavallini, G, Cavinato, M, Cayuela, ML, Rica, PC, Cecarini, V, Cecconi, F, Cechowska-Pasko, M, Cenci, S, Ceperuelo-Mallafre, V, Cerqueira, JJ, Cerutti, JM, Cervia, D, Cetintas, VB, Cetrullo, S, Chae, HJ, Chagin, AS, Chai, CY, Chakrabarti, G, Chakrabarti, O, Chakraborty, T, Chami, M, Chamilos, G, Chan, DW, Chan, EYW, Chan, ED, Chan, HYE, Chan, HH, Chan, H, Chan, MTV, Chan, YS, Chandra, PK, Chang, CP, Chang, CM, Chang, HC, Chang, K, Chao, J, Chapman, T, Charlet-Berguerand, N, Chatterjee, S, Chaube, SK, Chaudhary, A, Chauhan, S, Chaum, E, Checler, F, Cheetham, ME, Chen, CS, Chen, GC, Chen, JF, Chen, LL, Chen, L, Chen, ML, Chen, MK, Chen, N, Chen, Q, Chen, RH, Chen, S, Chen, W, Chen, WQ, Chen, XM, Chen, XW, Chen, X, Chen, Y, Chen, YG, Chen, YY, Chen, YQ, Chen, YJ, Chen, ZS, Chen, Z, Chen, ZH, Chen, ZJ, Chen, ZX, Cheng, HH, Cheng, J, Cheng, SY, Cheng, W, Cheng, XD, Cheng, XT, Cheng, YY, Cheng, ZY, Cheong, H, Cheong, JK, Chernyak, BV, Cherry, S, Cheung, CFR, Cheung, CHA, Cheung, KH, Chevet, E, Chi, RJ, Chiang, AKS, Chiaradonna, F, Chiarelli, R, Chiariello, M, Chica, N, Chiocca, S, Chiong, M, Chiou, SH, Chiramel, AI, Chiurchiu, V, Cho, DH, Choe, SK, Choi, AMK, Choi, ME, Choudhury, KR, Chow, NS, Chu, CT, Chua, JP, Chua, JJE, Chung, H, Chung, KP, Chung, S, Chung, SH, Chung, YL, Cianfanelli, V, Ciechomska, IA, Cifuentes, M, Cinque, L, Cirak, S, Cirone, M, Clague, MJ, Clarke, R, Clementi, E, Coccia, EM, Codogno, P, Cohen, E, Cohen, MM, Colasanti, T, Colasuonno, F, Colbert, RA, Colell, A, Coll, NS, Collins, MO, Colombo, MI, Colon-Ramos, DA, Combaret, L, Comincini, S, Cominetti, MR, Consiglio, A, Conte, A, Conti, F, Contu, VR, Cookson, MR, Coombs, KM, Coppens, I, Corasaniti, MT, Corkery, DP, Cordes, N, Cortese, K, Costa, MD, Costantino, S, Costelli, P, Coto-Montes, A, Crack, PJ, Crespo, JL, Criollo, A, Crippa, V, Cristofani, R, Csizmadia, T, Cuadrado, A, Cui, B, Cui, J, Cui, YX, Cui, Y, Culetto, E, Cumino, AC, Cybulsky, AV, Czaja, MJ, Czuczwar, SJ, D'Adamo, S, D'Amelio, M, D'Arcangelo, D, D'Lugos, AC, D'Orazi, G, da Silva, JA, Dafsari, HS, Dagda, RK, Dagdas, Y, Daglia, M, Dai, X, Dai, Y, Dai, YY, Dal Col, J, Dalhaimer, P, Dalla Valle, L, Dallenga, T, Dalmasso, G, Damme, M, Dando, I, Dantuma, NP, Darling, AL, Das, H, Dasarathy, S, Dasari, SK, Dash, S, Daumke, O, Dauphinee, AN, Davies, JS, Davila, VA, Davis, RJ, Davis, T, Naidu, SD, De Amicis, F, De Bosscher, K, De Felice, F, De Franceschi, L, De Leonibus, C, Barbosa, MGD, De Meyer, GRY, De Milito, A, De Nunzio, C, De Palma, C, De 
Santi, M, De Virgilio, C, De Zio, D, Debnath, J, DeBosch, BJ, Decuypere, J, Deehan, MA, Deflorian, G, DeGregori, J, Dehay, B, Del Rio, G, Delaney, JR, Delbridge, LMD, Delorme-Axford, E, Delpino, MV, Demarchi, F, Dembitz, V, Demers, ND, Deng, HB, Deng, ZQ, Dengjel, J, Dent, P, Denton, D, DePamphilis, ML, Der, CJ, Deretic, V, Descoteaux, A, Devis, L, Devkota, S, Devuyst, O, Dewson, G, Dharmasivam, M, Dhiman, R, di Bernardo, D, Di Cristina, M, Di Domenico, F, Di Fazio, P, Di Fonzo, A, Di Guardo, G, Di Guglielmo, GM, Di Leo, L, Di Malta, C, Di Nardo, A, Di Rienzo, M, Di Sano, F, Diallinas, G, Diao, JJ, Diaz-Araya, G, Diaz-Laviada, I, Dickinson, JM, Diederich, M, Dieude, M, Dikic, I, Ding, SP, Ding, WX, Dini, L, Dinic, M, Dinkova-Kostova, AT, Dionne, MS, Distler, JHW, Diwan, A, Dixon, IMC, Djavaheri-Mergny, M, Dobrinski, I, Dobrovinskaya, O, Dobrowolski, R, Dobson, RCJ, Emre, SD, Donadelli, M, Dong, B, Dong, XN, Dong, ZW, Ii, GWD, Dotsch, V, Dou, H, Dou, J, Dowaidar, M, Dridi, S, Drucker, L, Du, AL, Du, CG, Du, GW, Du, HN, Du, LL, du Toit, A, Duan, SB, Duan, XQ, Duarte, SP, Dubrovska, A, Dunlop, EA, Dupont, N, Duran, RV, Dwarakanath, BS, Dyshlovoy, SA, Ebrahimi-Fakhari, D, Eckhart, L, Edelstein, CL, Efferth, T, Eftekharpour, E, Eichinger, L, Eid, N, Eisenberg, T, Eissa, NT, Eissa, S, Ejarque, M, El Andaloussi, A, El-Hage, N, El-Naggar, S, Eleuteri, AM, El-Shafey, ES, Elgendy, M, Eliopoulos, AG, Elizalde, MM, Elks, PM, Elsasser, HP, Elsherbiny, ES, Emerling, BM, Emre, NCT, Eng, CH, Engedal, N, Engelbrecht, AM, Engelsen, AST, Enserink, JM, Escalante, R, Esclatine, A, Escobar-Henriques, M, Eskelinen, EL, Espert, L, Eusebio, MO, Fabrias, G, Fabrizi, C, Facchiano, A, Facchiano, F, Fadeel, B, Fader, C, Faesen, AC, Fairlie, WD, Falco, A, Falkenburger, BH, Fan, DP, Fan, J, Fan, YB, Fang, EF, Fang, YS, Fang, YQ, Fanto, M, Farfel-Becker, T, Faure, M, Fazeli, G, Fedele, AO, Feldman, AM, Feng, D, Feng, JC, Feng, LF, Feng, YB, Feng, YC, Feng, W, Araujo, TF, Ferguson, TA, Fernandez-Checa, JC, FernandezVeledo, S, Fernie, AR, Ferrante, AW, Ferraresi, A, Ferrari, MF, Ferreira, JCB, Ferro-Novick, S, Figueras, A, Filadi, R, Filigheddu, N, FilippiChiela, E, Filomeni, G, Fimia, GM, Fineschi, V, Finetti, F, Finkbeiner, S, Fisher, EA, Fisher, PB, Flamigni, F, Fliesler, SJ, Flo, TH, Florance, I, Florey, O, Florio, T, Fodor, E, Follo, C, Fon, EA, Forlino, A, Fornai, F, Fortini, P, Fracassi, A, Fraldi, A, Franco, B, Franco, R, Franconi, F, Frankel, LB, Friedman, SL, Frohlich, LF, Fruhbeck, G, Fuentes, JM, Fujiki, Y, Fujita, N, Fujiwara, Y, Fukuda, M, Fulda, S, Furic, L, Furuya, N, Fusco, C, Gack, MU, Gaffke, L, Galadari, S, Galasso, A, Galindo, MF, Kankanamalage, SG, Galluzzi, L, Galy, V, Gammoh, N, Gan, BY, Ganley, IG, Gao, F, Gao, H, Gao, MH, Gao, P, Gao, SJ, Gao, WT, Gao, XB, Garcera, A, Garcia, MN, Garcia, VE, Garcia-Del Portillo, F, Garcia-Escudero, V, GarciaGarcia, A, Garcia-Macia, M, Garcia-Moreno, D, Garcia-Ruiz, C, Garcia-Sanz, P, Garg, AD, Gargini, R, Garofalo, T, Garry, RF, Gassen, NC, Gatica, D, Ge, L, Ge, WZ, Geiss-Friedlander, R, Gelfi, C, Genschik, P, Gentle, IE, Gerbino, V, Gerhardt, C, Germain, K, Germain, M, Gewirtz, DA, Afshar, EG, Ghavami, S, Ghigo, A, Ghosh, M, Giamas, G, Giampietri, C, Giatromanolaki, A, Gibson, GE, Gibson, SB, Ginet, V, Giniger, E, Giorgi, C, Girao, H, Girardin, SE, Giridharan, M, Giuliano, S, Giulivi, C, Giuriato, S, Giustiniani, J, Gluschko, A, Goder, V, Goginashvili, A, Golab, J, Goldstone, DC, Golebiewska, A, Gomes, LR, Gomez, R, Gomez-Sanchez, R, Gomez-Puerto, MC, 
Gomez-Sintes, R, Gong, Q, Goni, FM, Gonzalez-Gallego, J, Gonzalez-Hernandez, T, Gonzalez-Polo, RA, Gonzalez-Reyes, JA, Gonzalez-Rodriguez, P, Goping, IS, Gorbatyuk, MS, Gorbunov, NV, Gorojod, RM, Gorski, SM, Goruppi, S, Gotor, C, Gottlieb, RA, Gozes, I, Gozuacik, D, Graef, M, Graler, MH, Granatiero, V, Grasso, D, Gray, JP, Green, DR, Greenhough, A, Gregory, SL, Griffin, EF, Grinstaff, MW, Gros, F, Grose, C, Gross, AS, Gruber, F, Grumati, P, Grune, T, Gu, XY, Guan, JL, Guardia, CM, Guda, K, Guerra, F, Guerri, C, Guha, P, Guillen, C, Gujar, S, Gukovskaya, A, Gukovsky, I, Gunst, J, Gunther, A, Guntur, AR, Guo, CY, Guo, C, Guo, HQ, Guo, LW, Guo, M, Gupta, P, Fernandez, AF, Gupta, SK, Gupta, S, Gupta, VB, Gupta, V, Gustafsson, AB, Gutterman, DD, Ranjitha, HB, Haapasalo, A, Haber, JE, Hadano, S, Hafren, AJ, Haidar, M, Hall, BS, Hallden, G, Hamacher-Brady, A, Hamann, A, Hamasaki, M, Han, WD, Hansen, M, Hanson, PI, Hao, ZJ, Harada, M, Harhaji-Trajkovic, L, Hariharan, N, Haroon, N, Harris, J, Hasegawa, T, Nagoor, NH, Haspel, JA, Haucke, V, Hawkins, WD, Hay, BA, Haynes, CM, Hayrabedyan, SB, Hays, TS, He, CC, He, Q, He, RR, He, YW, He, YY, Heakal, Y, Heberle, AM, Hejtmancik, JF, Helgason, GV, Henkel, V, Herb, M, Hergovich, A, Herman-Antosiewicz, A, Hernandez, A, Hernandez, C, Hernandez-Diaz, S, Hernandez-Gea, V, Herpin, A, Herreros, J, Hervas, JH, Hesselson, D, Hetz, C, Heussler, VT, Higuchi, Y, Hilfiker, S, Hill, JA, Hlavacek, WS, Ho, EA, Ho, IHT, Ho, PWL, Ho, S, Ho, WY, Hobbs, GA, Hochstrasser, M, Hoet, PHM, Hofius, D, Hofman, P, Hohn, A, Holmberg, CI, Hombrebueno, JR, Hong, CW, Hong, YR, Hooper, LV, Hoppe, T, Horos, R, Hoshida, Y, Hsin, IL, Hsu, HY, Hu, B, Hu, D, Hu, LF, Hu, MC, Hu, RG, Hu, W, Hu, YC, Hu, ZW, Hua, F, Hua, JL, Hua, YQ, Huan, CM, Huang, CH, Huang, CS, Huang, CX, Huang, CL, Huang, HS, Huang, K, Huang, MLH, Huang, R, Huang, S, Huang, TZ, Huang, X, Huang, YJ, Huber, TB, Hubert, V, Hubner, CA, Hughes, SM, Hughes, WE, Humbert, M, Hummer, G, Hurley, JH, Hussain, S, Hussey, PJ, Hutabarat, M, Hwang, HY, Hwang, S, Ieni, A, Ikeda, F, Imagawa, Y, Imai, Y, Imbriano, C, Imoto, M, Inman, DM, Inoki, K, Iovanna, J, Iozzo, RV, Ippolito, G, Irazoqui, JE, Iribarren, P, Ishaq, M, Ishikawa, M, Ishimwe, N, Isidoro, C, Ismail, N, Issazadeh-Navikas, S, Itakura, E, Ito, D, Ivankovic, D, Ivanova, S, Iyer, AKV, Izquierdo, JM, Izumi, M, Jaattela, M, Jabir, MS, Jackson, WT, Jacobo-Herrera, N, Jacomin, AC, Jacquin, E, Jadiya, P, Jaeschke, H, Jagannath, C, Jakobi, AJ, Jakobsson, J, Janji, B, JansenDurr, P, Jansson, PJ, Jantsch, J, Januszewski, S, Jassey, A, Jean, S, JeltschDavid, H, Jendelova, P, Jenny, A, Jensen, TE, Jessen, N, Jewell, JL, Ji, J, Jia, LJ, Jia, R, Jiang, LW, Jiang, Q, Jiang, RC, Jiang, T, Jiang, XJ, Jiang, Y, Jimenez-Sanchez, M, Jin, EJ, Jin, FY, Jin, HC, Jin, L, Jin, LQ, Jin, MY, Jin, S, Jo, EK, Joffre, C, Johansen, T, Johnson, GVW, Johnston, SA, Jokitalo, E, Jolly, MK, Joosten, LAB, Jordan, J, Joseph, B, Ju, DW, Ju, JS, Ju, JF, Juarez, E, Judith, D, Juhasz, G, Jun, Y, Jung, CH, Jung, S, Jung, YK, Jungbluth, H, Jungverdorben, J, Just, S, Kaarniranta, K, Kaasik, A, Kabuta, T, Kaganovich, D, Kahana, A, Kain, R, Kajimura, S, Kalamvoki, M, Kalia, M, Kalinowski, DS, Kaludercic, N, Kalvari, I, Kaminska, J, Kaminskyy, VO, Kanamori, H, Kanasaki, K, Kang, C, Kang, R, Kang, SS, Kaniyappan, S, Kanki, T, Kanneganti, TD, Kanthasamy, AG, Kanthasamy, A, Kantorow, M, Kapuy, O, Karamouzis, MV, Karim, MR, Karmakar, P, Katare, RG, Kato, M, Kaufmann, SHE, Kauppinen, A, Kaushal, GP, Kaushik, S, Kawasaki, K, Kazan, 
K, Ke, PY, Keating, DJ, Keber, U, Kehrl, JH, Keller, KE, Keller, CW, Kemper, JK, Kenific, CM, Kepp, O, Kermorgant, S, Kern, A, Ketteler, R, Keulers, TG, Khalfin, B, Khalil, H, Khambu, B, Khan, SY, Khandelwal, VKM, Khandia, R, Kho, W, Khobrekar, NV, Khuansuwan, S, Khundadze, M, Killackey, SA, Kim, D, Kim, DR, Kim, DH, Kim, DE, Kim, EY, Kim, EK, Kim, H, Kim, HS, Kim, HR, Kim, JH, Kim, JK, Kim, J, Kim, KI, Kim, PK, Kim, SJ, Kimball, SR, Kimchi, A, Kimmelman, AC, Kimura, T, King, MA, Kinghorn, KJ, Kinsey, CG, Kirkin, V, Kirshenbaum, LA, Kiselev, SL, Kishi, S, Kitamoto, K, Kitaoka, Y, Kitazato, K, Kitsis, RN, Kittler, JT, Kjaerulff, O, Klein, PS, Klopstock, T, Klucken, J, Knovelsrud, H, Knorr, RL, Ko, BB, Ko, F, Ko, JL, Kobayashi, H, Kobayashi, S, Koch, I, Koch, JC, Koenig, U, Kogel, D, Koh, YH, Koike, M, Kohlwein, SD, Kocaturk, NM, Komatsu, M, Konig, J, Kono, T, Kopp, BT, Korcsmaros, T, Korkmaz, G, Korolchuk, VI, Korsnes, MS, Koskela, A, Kota, J, Kotake, Y, Kotler, ML, Kou, YJ, Koukourakis, MI, Koustas, E, Kovacs, AL, Kovacs, T, Koya, D, Kozako, T, Kraft, C, Krainc, D, Kramer, H, Krasnodembskaya, AD, Kretz-Remy, C, Kroemer, G, Ktistakis, NT, Kuchitsu, K, Kuenen, S, Kuerschner, L, Kukar, T, Kumar, A, Kumar, D, Kumar, S, Kume, S, Kumsta, C, Kundu, CN, Kundu, M, Kunnumakkara, AB, Kurgan, L, Kutateladze, TG, Kutlu, O, Kwak, S, Kwon, HJ, Kwon, TK, Kwon, YT, Kyrmizi, I, La Spada, A, Labonte, P, Ladoire, S, Laface, I, Lafont, F, Lagace, DC, Lahiri, V, Lai, ZB, Laird, AS, Lakkaraju, A, Lamark, T, Lan, SH, Landajuela, A, Lane, DJR, Lane, JD, Lang, CH, Lange, C, Langer, R, Lapaquette, P, Laporte, J, LaRusso, NF, Lastres-Becker, I, Lau, WCY, Laurie, GW, Lavandero, S, Law, BYK, Law, HKW, Layfield, R, Le, WD, Le Stunff, H, Leary, AY, Lebrun, JJ, Leck, LYW, Leduc-Gaudet, JP, Lee, C, Lee, CP, Lee, DH, Lee, EB, Lee, EF, Lee, GM, Lee, HJ, Lee, HK, Lee, JM, Lee, JS, Lee, JA, Lee, JY, Lee, JH, Lee, M, Lee, MG, Lee, MJ, Lee, MS, Lee, SY, Lee, SJ, Lee, SB, Lee, WH, Lee, YR, Lee, YH, Lee, Y, Lefebvre, C, Legouis, R, Lei, YL, Lei, YC, Leikin, S, Leitinger, G, Lemus, L, Leng, SL, Lenoir, O, Lenz, G, Lenz, HJ, Lenzi, P, Leon, Y, Leopoldino, AM, Leschczyk, C, Leskela, S, Letellier, E, Leung, CT, Leung, PS, Leventhal, JS, Levine, B, Lewis, PA, Ley, K, Li, B, Li, DQ, Li, JM, Li, J, Li, K, Li, LW, Li, M, Li, MC, Li, PL, Li, MQ, Li, Q, Li, S, Li, TG, Li, W, Li, WM, Li, X, Li, YP, Li, Y, Li, ZQ, Li, ZY, Lian, JQ, Liang, CY, Liang, QR, Liang, WC, Liang, YH, Liang, YT, Liao, GH, Liao, LJ, Liao, MZ, Liao, YF, Librizzi, M, Lie, PPY, Lilly, MA, Lim, HJ, Lima, TRR, Limana, F, Lin, C, Lin, CW, Lin, DS, Lin, FC, Lin, JDD, Lin, KM, Lin, KH, Lin, LT, Lin, PH, Lin, Q, Lin, SF, Lin, SJ, Lin, WY, Lin, XY, Lin, YX, Lin, YS, Linden, R, Lindner, P, Ling, SC, Lingor, P, Linnemann, AK, Liou, Y, Lipinski, MM, Lipovsek, S, Lira, VA, Lisiak, N, Liton, PB, Liu, C, Liu, CH, Liu, CF, Liu, F, Liu, H, Liu, HS, Liu, HF, Liu, J, Liu, JL, Liu, LY, Liu, LH, Liu, ML, Liu, Q, Liu, W, Liu, WD, Liu, XH, Liu, XD, Liu, XG, Liu, X, Liu, YF, Liu, Y, Liu, YY, Liu, YL, Livingston, JA, Lizard, G, Lizcano, JM, Ljubojevic-Holzer, S, LLeonart, ME, Llobet-Navas, D, Llorente, A, Lo, CH, Lobato-Marquez, D, Long, Q, Long, YC, Loos, B, Loos, JA, Lopez, MG, Lopez-Domenech, G, Lopez-Guerrero, JA, Lopez-Jimenez, AT, Lopez-Valero, I, Lorenowicz, MJ, Lorente, M, Lorincz, P, Lossi, L, Lotersztajn, S, Lovat, PE, Lovell, JF, Lovy, A, Lu, G, Lu, HC, Lu, JH, Lu, JJ, Lu, MJ, Lu, SY, Luciani, A, Lucocq, JM, Ludovico, P, Luftig, MA, Luhr, M, Luis-Ravelo, D, Lum, JJ, Luna-Dulcey, L, 
Lund, AH, Lund, VK, Lunemann, JD, Luningschror, P, Luo, HL, Luo, RC, Luo, SQ, Luo, Z, Luparello, C, Luscher, B, Luu, L, Lyakhovich, A, Lyamzaev, KG, Lystad, AH, Lytvynchuk, L, Ma, AC, Ma, CL, Ma, MX, Ma, NF, Ma, QH, Ma, XL, Ma, YY, Ma, ZY, MacDougald, OA, Macian, F, MacIntosh, GC, MacKeigan, JP, Macleod, KF, Maday, S, Madeo, F, Madesh, M, Madl, T, Madrigal-Matute, J, Maeda, A, Maejima, Y, Magarinos, M, Mahavadi, P, Maiani, E, Maiese, K, Maiti, P, Maiuri, MC, Majello, B, Major, MB, Makareeva, E, Malik, F, Mallilankaraman, K, Malorni, W, Maloyan, A, Mammadova, N, Man, GCW, Manai, F, Mancias, JD, Mandelkow, EM, Mandell, MA, Manfredi, AA, Manjili, MH, Manjithaya, R, Manque, P, Manshian, BB, Manzano, R, Manzoni, C, Mao, K, Marchese, C, Marchetti, S, Marconi, AM, Marcucci, F, Mardente, S, Mareninova, OA, Margeta, M, Mari, M, Marinelli, S, Marinelli, O, Marino, G, Mariotto, S, Marshall, RS, Marten, MR, Martens, S, Martin, APJ, Martin, KR, Martin, S, Martin-Segura, A, Martin-Acebes, MA, Martin-Burriel, I, Martin-Rincon, M, Martin-Sanz, P, Martina, JA, Martinet, W, Martinez, A, Martinez, J, Velazquez, MM, Martinez-Lopez, N, Martinez-Vicente, M, Martins, DO, Lange, U, Lopez-Perez, O, Martins, JO, Martins, WK, Martins-Marques, T, Marzetti, E, Masaldan, S, Masclaux-Daubresse, C, Mashek, DG, Massa, V, Massieu, L, Masson, GR, Masuelli, L, Masyuk, AI, Masyuk, TV, Matarrese, P, Matheu, A, Matoba, S, Matsuzaki, S, Mattar, P, Matte, A, Mattoscio, D, Mauriz, JL, Mauthe, M, Mauvezin, C, Maverakis, E, Maycotte, P, Mayer, J, Mazzoccoli, G, Mazzoni, C, Mazzulli, JR, McCarty, N, McDonald, C, McGill, MR, McKenna, SL, McLaughlin, B, McLoughlin, F, McNiven, MA, McWilliams, TG, Mechta-Grigoriou, F, Medeiros, TC, Medina, DL, Megeney, LA, Megyeri, K, Mehrpour, M, Mehta, JL, Meijer, AJ, Meijer, AH, Mejlvang, J, Melendez, A, Melk, A, Memisoglu, G, Mendes, AF, Meng, D, Meng, F, Meng, T, Menna-Barreto, R, Menon, MB, Mercer, C, Mercier, AE, Mergny, JL, Merighi, A, Merkley, SD, Merla, G, Meske, V, Mestre, AC, Metur, SP, Meyer, C, Meyer, H, Mi, WY, Mialet-Perez, J, Miao, JY, Micale, L, Miki, Y, Milan, E, Miller, DL, Miller, SI, Miller, S, Millward, SW, Milosevic, I, Minina, EA, Mirzaei, H, Mirzaei, HR, Mirzaei, M, Mishra, A, Mishra, N, Mishra, PK, Marjanovic, MM, Misasi, R, Misra, A, Misso, G, Mitchell, C, Mitou, G, Miura, T, Miyamoto, S, Miyazaki, M, Miyazaki, T, Miyazawa, K, Mizushima, N, Mogensen, TH, Mograbi, B, Mohammadinejad, R, Mohamud, Y, Mohanty, A, Mohapatra, S, Mohlmann, T, Mohmmed, A, Moles, A, Moley, KH, Molinari, M, Mollace, V, Muller, AB, Mollereau, B, Mollinedo, F, Montagna, C, Monteiro, MJ, Montella, A, Montes, LR, Montico, B, Mony, VK, Compagnoni, GM, Moore, MN, Moosavi, MA, Mora, AL, Mora, M, Morales-Alamo, D, Moratalla, R, Moreira, PI, Morelli, E, Moreno, S, Moreno-Blas, D, Moresi, V, Morga, B, Morgan, AH, Morin, F, Morishita, H, Moritz, OL, Moriyama, M, Moriyasu, Y, Morleo, M, Morselli, E, Moruno-Manchon, JF, Moscat, J, Mostowy, S, Motori, E, Moura, AF, Moustaid-Moussa, N, Mrakovcic, M, MucinoHernandez, G, Mukherjee, A, Mukhopadhyay, S, Levy, JMM, Mulero, V, Muller, S, Munch, C, Munjal, A, Munoz-Canoves, P, Munoz-Galdeano, T, Munz, C, Murakawa, T, Muratori, C, Murphy, BM, Murphy, JP, Murthy, A, Myohanen, TT, Mysorekar, IU, Mytych, J, Nabavi, SM, Nabissi, M, Nagy, P, Nah, J, Nahimana, A, Nakagawa, I, Nakamura, K, Nakatogawa, H, Nandi, SS, Nanjundan, M, Nanni, M, Napolitano, G, Nardacci, R, Narita, M, Nassif, M, Nathan, I, Natsumeda, M, Naude, RJ, Naumann, C, Naveiras, O, Navid, F, Nawrocki, ST, Nazarko, 
TY, Nazio, F, Negoita, F, Neill, T, Neisch, AL, Neri, LM, Netea, MG, Neubert, P, Neufeld, TP, Neumann, D, Neutzner, A, Newton, PT, Ney, PA, Nezis, IP, Ng, CCW, Ng, TB, Nguyen, HTT, Nguyen, LT, Ni, HM, Cheallaigh, CN, Ni, Z, Nicolao, MC, Nicoli, F, Nieto-Diaz, M, Nilsson, P, Ning, S, Niranjan, R, Nishimune, H, Niso-Santano, M, Nixon, RA, Nobili, A, Nobrega, C, Noda, T, Nogueira-Recalde, U, Nolan, TM, Nombela, I, Novak, I, Novoa, B, Nozawa, T, Nukina, N, Nussbaum-Krammer, C, Nylandsted, J, O'Donovan, TR, O'Leary, SM, O'Rourke, EJ, O'Sullivan, MP, O'Sullivan, TE, Oddo, S, Oehme, I, Ogawa, M, Ogier-Denis, E, Ogmundsdottir, MH, Ogretmen, B, Oh, GT, Oh, SH, Oh, YJ, Ohama, T, Ohashi, Y, Ohmuraya, M, Oikonomou, V, Ojha, R, Okamoto, K, Okazawa, H, Oku, M, Olivan, S, Oliveira, JMA, Ollmann, M, Olzmann, JA, Omari, S, Omary, MB, Onal, G, Ondrej, M, Ong, SB, Ong, SG, Onnis, A, Orellana, JA, Orellana-Munoz, S, Ortega-Villaizan, MD, Ortiz-Gonzalez, XR, Ortona, E, Osiewacz, HD, Osman, AHK, Osta, R, Otegui, MS, Otsu, K, Ott, C, Ottobrini, L, Ou, JHJ, Outeiro, TF, Oynebraten, I, Ozturk, M, Pages, G, Pahari, S, Pajares, M, Pajvani, UB, Pal, R, Paladino, S, Pallet, N, Palmieri, M, Palmisano, G, Palumbo, C, Pampaloni, F, Pan, LF, Pan, QJ, Pan, WL, Pan, X, Panasyuk, G, Pandey, R, Pandey, UB, Pandya, V, Paneni, F, Pang, SY, Panzarini, E, Papademetrio, DL, Papaleo, E, Papinski, D, Papp, D, Park, EC, Park, HT, Park, JM, Park, J, Park, JT, Park, SC, Park, SY, Parola, AH, Parys, JB, Pasquier, A, Pasquier, B, Passos, JF, Pastore, N, Patel, HH, Patschan, D, Pattingre, S, Pedraza-Alva, G, Pedraza-Chaverri, J, Pedrozo, Z, Pei, G, Pei, JM, Peled-Zehavi, H, Pellegrini, JM, Pelletier, J, Penalva, MA, Peng, D, Peng, Y, Penna, F, Pennuto, M, Pentimalli, F, Pereira, CM, Pereira, GJS, Pereira, LC, de Almeida, LP, Perera, ND, PerezOliva, AB, Perez-Perez, ME, Periyasamy, P, Perl, A, Perrotta, C, Perrotta, I, Pestell, RG, Petersen, M, Petrache, I, Petrovski, G, Pfirrmann, T, Pfister, AS, Philips, JA, Pi, HF, Picca, A, Pickrell, AM, Picot, S, Pierantoni, GM, Pierdominici, M, Pierre, P, Pierrefite-Carle, V, Pierzynowska, K, Pietrocola, F, Pietruczuk, M, Pignata, C, PimentelMuinos, FX, Pinar, M, Pinheiro, RO, Pinkas-Kramarski, R, Pinton, P, Pircs, K, Piya, S, Pizzo, P, Plantinga, TS, Platta, HW, Plaza-Zabala, A, Plomann, M, Plotnikov, EY, Plun-Favreau, H, Pluta, R, Pocock, R, Poggeler, S, Pohl, C, Poirot, M, Poletti, A, Ponpuak, M, Popelka, H, Popova, B, Porta, H, Alcon, SP, Portilla-Fernandez, E, Post, M, Potts, MB, Poulton, J, Powers, T, Prahlad, V, Prajsnar, TK, Pratico, D, Prencipe, R, Priault, M, ProikasCezanne, T, Promponas, VJ, Proud, CG, Puertollano, R, Puglielli, L, Pulinilkunnil, T, Puri, D, Puri, R, Puyal, J, Qi, XP, Qi, YM, Qian, WB, Qiang, L, Qiu, Y, Quadrilatero, J, Quarleri, J, Raben, N, Rabinowich, H, Ragona, D, Ragusa, MJ, Rahimi, N, Rahmati, M, Raia, V, Raimundo, N, Rajasekaran, NS, Rao, SR, Rami, A, Ramirez-Pardo, I, Ramsden, DB, Randow, F, Rangarajan, PN, Ranieri, D, Rao, H, Rao, L, Rao, R, Rathore, S, Ratnayaka, JA, Ratovitski, EA, Ravanan, P, Ravegnini, G, Ray, SK, Razani, B, Rebecca, V, Reggiori, F, Regnier-Vigouroux, A, Reichert, AS, Reigada, D, Reiling, JH, Rein, T, Reipert, S, Rekha, RS, Ren, HM, Ren, J, Ren, WC, Renault, T, Renga, G, Reue, K, Rewitz, K, Ramos, BRD, Riazuddin, SA, Ribeiro-Rodrigues, TM, Ricci, JE, Ricci, R, Riccio, V, Richardson, D, Rikihisa, Y, Risbud, MV, Risueno, RM, Ritis, K, Rizza, S, Rizzuto, R, Roberts, HC, Roberts, LD, Robinson, KJ, Roccheri, MC, Rocchi, S, Rodney, GG, Rodrigues, T, 
Silva, VRR, Rodriguez, A, Rodriguez-Barrueco, R, Rodriguez-Henche, N, Rodriguez-Rocha, H, Roelofs, J, Rogers, RS, Rogov, VV, Rojo, AI, Rolka, K, Romanello, V, Romani, L, Romano, A, Romano, PS, Romeo-Guitart, D, Romero, LC, Romero, M, Roney, JC, Rongo, C, Roperto, S, Rosenfeldt, MT, Rosenstiel, P, Rosenwald, AG, Roth, KA, Roth, L, Roth, S, Rouschop, KMA, Roussel, BD, Roux, S, Rovere-Querini, P, Roy, A, Rozieres, A, Ruano, D, Rubinsztein, DC, Rubtsova, MP, Ruckdeschel, K, Ruckenstuhl, C, Rudolf, E, Rudolf, R, Ruggieri, A, Ruparelia, AA, Rusmini, P, Russell, RR, Russo, GL, Russo, M, Russo, R, Ryabaya, OO, Ryan, KM, Ryu, KY, Sabater-Arcis, M, Sachdev, U, Sacher, M, Sachse, C, Sadhu, A, Sadoshima, J, Safren, N, Saftig, P, Sagona, AP, Sahay, G, Sahebkar, A, Sahin, M, Sahin, O, Sahni, S, Saito, N, Saito, S, Saito, T, Sakai, R, Sakai, Y, Sakamaki, JI, Saksela, K, Salazar, G, Salazar-Degracia, A, Salekdeh, GH, Saluja, AK, Sampaio-Marques, B, Sanchez, MC, Sanchez-Alcazar, JA, Sanchez-Vera, V, Sancho-Shimizu, V, Sanderson, JT, Sandri, M, Santaguida, S, Santambrogio, L, Santana, MM, Santoni, G, Sanz, A, Sanz, P, Saran, S, Sardiello, M, Sargeant, TJ, Sarin, A, Sarkar, C, Sarkar, S, Sarrias, MR, Sarmah, DT, Sarparanta, J, Sathyanarayan, A, Sathyanarayanan, R, Scaglione, KM, Scatozza, F, Schaefer, L, Schafer, ZT, Schaible, UE, Schapira, AHV, Scharl, M, Schatzl, HM, Schein, CH, Scheper, W, Scheuring, D, Schiaffino, MV, Schiappacassi, M, Schindl, R, Schlattner, U, Schmidt, O, Schmitt, R, Schmidt, SD, Schmitz, I, Schmukler, E, Schneider, A, Schneider, BE, Schober, R, Schoijet, AC, Schott, MB, Schramm, M, Schroder, B, Schuh, K, Schuller, C, Schulze, RJ, Schurmanns, L, Schwamborn, JC, Schwarten, M, Scialo, F, Sciarretta, S, Scott, MJ, Scotto, KW, Scovassi, AI, Scrima, A, Scrivo, A, Sebastian, D, Sebti, S, Sedej, S, Segatori, L, Segev, N, Seglen, PO, Seiliez, I, Seki, E, Selleck, SB, Sellke, FW, Perez-Lara, A, Selsby, JT, Sendtner, M, Senturk, S, Seranova, E, Sergi, C, Serra-Moreno, R, Sesaki, H, Settembre, C, Setty, SRG, Sgarbi, G, Sha, O, Shacka, JJ, Shah, JA, Shang, DT, Shao, CS, Shao, F, Sharbati, S, Sharkey, LM, Sharma, D, Sharma, G, Sharma, K, Sharma, P, Sharma, S, Shen, HM, Shen, HT, Shen, JG, Shen, M, Shen, WL, Shen, ZN, Sheng, R, Sheng, Z, Sheng, ZH, Shi, JJ, Shi, XB, Shi, YH, Shiba-Fukushima, K, Shieh, J, Shimada, Y, Shimizu, S, Shimozawa, M, Shintani, T, Shoemaker, CJ, Shojaei, S, Shoji, I, Shravage, BV, Shridhar, V, Shu, CW, Shu, HB, Shui, K, Shukla, AK, Shutt, TE, Sica, V, Siddiqui, A, Sierra, A, Sierra-Torre, V, Signorelli, S, Sil, P, Silva, BJD, Silva, JD, Silva-Pavez, E, Silvente-Poirot, S, Simmonds, RE, Simon, AK, Simon, HU, Simons, M, Singh, A, Singh, LP, Singh, R, Singh, SV, Singh, SK, Singh, SB, Singh, S, Singh, SP, Sinha, D, Sinha, RA, Sinha, S, Sirko, A, Sirohi, K, Sivridis, EL, Skendros, P, Skirycz, A, Slaninova, I, Smaili, SS, Smertenko, A, Smith, MD, Soenen, SJ, Sohn, EJ, Sok, SPM, Solaini, G, Soldati, T, Soleimanpour, SA, Soler, RM, Solovchenko, A, Somarelli, JA, Sonawane, A, Song, FY, Song, HK, Song, JX, Song, KH, Song, ZY, Soria, LR, Sorice, M, Soukas, AA, Soukup, SF, Sousa, D, Sousa, N, Spagnuolo, PA, Spector, SA, Bharath, MMS, St Clair, D, Stagni, V, Staiano, L, Stalnecker, CA, Stankov, MV, Stathopulos, PB, Stefan, K, Stefan, SM, Stefanis, L, Steffan, JS, Steinkasserer, A, Stenmark, H, Sterneckert, J, Stevens, C, Stoka, V, Storch, S, Stork, B, Strappazzon, F, Strohecker, AM, Stupack, DG, Su, HX, Su, LY, Su, LX, SuarezFontes, AM, Subauste, CS, Subbian, S, Subirada, PV, Sudhandiran, 
G, Sue, CM, Sui, XB, Summers, C, Sun, GC, Sun, J, Sun, K, Sun, MX, Sun, QM, Sun, Y, Sun, ZJ, Sunahara, KKS, Sundberg, E, Susztak, K, Sutovsky, P, Suzuki, H, Sweeney, G, Symons, JD, Sze, SCW, Szewczyk, NJ, Tabolacci, C, Tacke, F, Taegtmeyer, H, Tafani, M, Tagaya, M, Tai, HR, Tait, SWG, Takahashi, Y, Takats, S, Talwar, P, Tam, C, Tam, SY, Tampellini, D, Tamura, A, Tan, CT, Tan, EK, Tan, YQ, Tanaka, M, Tang, D, Tang, JF, Tang, TS, Tanida, I, Tao, ZP, Taouis, M, Tatenhorst, L, Tavernarakis, N, Taylor, A, Taylor, GA, Taylor, JM, Tchetina, E, Tee, AR, Tegeder, I, Teis, D, Teixeira, N, Teixeira-Clerc, F, Tekirdag, KA, Tencomnao, T, Tenreiro, S, Tepikin, AV, Testillano, PS, Tettamanti, G, Tharaux, P, Thedieck, K, Thekkinghat, AA, Thellung, S, Thinwa, JW, Thirumalaikumar, VP, Thomas, SM, Thomes, PG, Thorburn, A, Thukral, L, Thum, T, Thumm, M, Tian, L, Tichy, A, Till, A, Timmerman, V, Titorenko, VI, Todi, SV, Todorova, K, Toivonen, JM, Tomaipitinca, L, Tomar, D, Tomas-Zapico, C, Tong, BCK, Tong, C, Tong, X, Tooze, SA, Torgersen, ML, Torii, S, Torres-Lopez, L, Torriglia, A, Towers, CG, Towns, R, Toyokuni, S, Trajkovic, V, Tramontano, D, Tran, Q, Travassos, LH, Trelford, CB, Tremel, S, Trougakos, IP, Tsao, BP, Tschan, MP, Tse, HF, Tse, TF, Tsugawa, H, Tsvetkov, AS, Tumbarello, DA, Tumtas, Y, Tunon, MJ, Turcotte, S, Turk, B, Turk, V, Turner, BJ, Tuxworth, RI, Tyler, JK, Tyutereva, EV, Uchiyama, Y, UgunKlusek, A, Uhlig, HH, Ulasov, IV, Umekawa, M, Ungermann, C, Unno, R, Urbe, S, Uribe-Carretero, E, Ustun, S, Uversky, VN, Vaccari, T, Vaccaro, MI, Vahsen, BF, Vakifahmetoglu-Norberg, H, Valdor, R, Valente, MJ, Valko, A, Vallee, RB, Valverde, AM, Van den Berghe, G, van Der Veen, S, Van Kaer, L, van Loosdregt, J, van Wijk, SJL, Vandenberghe, W, Vanhorebeek, I, Vannier-Santos, MA, Vannini, N, Vanrell, MC, Vantaggiato, C, Varano, G, Varela-Nieto, I, Varga, M, Vasconcelos, MH, Vats, S, Vavvas, DG, VegaNaredo, I, Vega-Rubin-de-Celis, S, Velasco, G, Velazquez, AP, Vellai, T, Vellenga, E, Velotti, F, Verdier, M, Verginis, P, Vergne, I, Verkade, P, Verma, M, Verstreken, P, Vervliet, T, Vervoorts, J, Vessoni, AT, Victor, VM, Vidal, M, Vidoni, C, Vieira, OV, Vierstra, RD, Vigano, S, Vihinen, H, Vijayan, V, Vila, M, Vilar, M, Villalba, JM, Villalobo, A, Villarejo-Zori, B, Villarroya, F, Villarroya, J, Vincent, O, Vindis, C, Viret, C, Viscomi, MT, Visnjic, D, Vitale, I, Vocadlo, DJ, Voitsekhovskaja, OV, Volonte, C, Volta, M, Vomero, M, Von Haefen, C, Vooijs, MA, Voos, W, Vucicevic, L, Wade-Martins, R, Waguri, S, Waite, KA, Wakatsuki, S, Walker, DW, Walker, MJ, Walker, SA, Walter, J, Wandosell, FG, Wang, B, Wang, CY, Wang, C, Wang, CR, Wang, CW, Wang, D, Wang, FY, Wang, F, Wang, FM, Wang, GS, Wang, H, Wang, HX, Wang, HG, Wang, JR, Wang, JG, Wang, J, Wang, JD, Wang, K, Wang, LR, Wang, LM, Wang, MH, Wang, MQ, Wang, NB, Wang, PW, Wang, PP, Wang, P, Wang, QJ, Wang, Q, Wang, QK, Wang, QA, Wang, WT, Wang, WY, Wang, XN, Wang, XJ, Wang, Y, Wang, YC, Wang, YZ, Wang, YY, Wang, YH, Wang, YP, Wang, YQ, Wang, Z, Wang, ZY, Wang, ZG, Warnes, G, Warnsmann, V, Watada, H, Watanabe, E, Watchon, M, Weaver, TE, Wegrzyn, G, Wehman, AM, Wei, HF, Wei, L, Wei, TT, Wei, YJ, Weiergraber, OH, Weihl, CC, Weindl, G, Weiskirchen, R, Wells, A, Wen, RXH, Wen, X, Werner, A, Weykopf, B, Wheatley, SP, Whitton, JL, Whitworth, AJ, Wiktorska, K, Wildenberg, ME, Wileman, T, Wilkinson, S, Willbold, D, Williams, B, Williams, RSB, Williams, RL, Williamson, PR, Wilson, RA, Winner, B, Winsor, NJ, Witkin, SS, Wodrich, H, Woehlbier, U, Wollert, T, Wong, E, Wong, JH, 
Wong, RW, Wong, VKW, Wong, WWL, Wu, AG, Wu, CB, Wu, J, Wu, JF, Wu, KK, Wu, M, Wu, SY, Wu, SZ, Wu, SF, Wu, WKK, Wu, XH, Wu, XQ, Wu, YW, Wu, YH, Xavier, RJ, Xia, HG, Xia, LX, Xia, ZY, Xiang, G, Xiang, J, Xiang, ML, Xiang, W, Xiao, B, Xiao, GZ, Xiao, HY, Xiao, HT, Xiao, J, Xiao, L, Xiao, S, Xiao, Y, Xie, BM, Xie, CM, Xie, M, Xie, YX, Xie, ZP, Xie, ZL, Xilouri, M, Xu, CF, Xu, E, Xu, HX, Xu, J, Xu, JR, Xu, L, Xu, WW, Xu, XL, Xue, Y, Yakhine-Diop, SMS, Yamaguchi, M, Yamaguchi, O, Yamamoto, A, Yamashina, S, Yan, SM, Yan, SJ, Yan, Z, Yanagi, Y, Yang, CB, Yang, DS, Yang, H, Yang, HT, Yang, JM, Yang, J, Yang, JY, Yang, L, Yang, M, Yang, PM, Yang, Q, Yang, S, Yang, SF, Yang, WN, Yang, WY, Yang, XY, Yang, XS, Yang, Y, Yao, HH, Yao, SG, Yao, XQ, Yao, YG, Yao, YM, Yasui, T, Yazdankhah, M, Yen, PM, Yi, C, Yin, XM, Yin, YH, Yin, ZY, Ying, MD, Ying, Z, Yip, CK, Yiu, SPT, Yoo, YH, Yoshida, K, Yoshii, SR, Yoshimori, T, Yousefi, B, Yu, BX, Yu, HY, Yu, J, Yu, L, Yu, ML, Yu, SW, Yu, VC, Yu, WH, Yu, ZP, Yu, Z, Yuan, JY, Yuan, LQ, Yuan, SL, Yuan, SSF, Yuan, YG, Yuan, ZQ, Yue, JB, Yue, ZY, Yun, J, Yung, RL, Zacks, DN, Zaffagnini, G, Zambelli, VO, Zanella, I, Zang, QS, Zanivan, S, Zappavigna, S, Zaragoza, P, Zarbalis, KS, Zarebkohan, A, Zarrouk, A, Zeitlin, SO, Zeng, JL, Zeng, JD, Zerovnik, E, Zhan, LX, Zhang, B, Zhang, DD, Zhang, HL, Zhang, H, Zhang, HH, Zhang, HF, Zhang, HY, Zhang, JB, Zhang, JH, Zhang, JP, Zhang, KLYB, Zhang, LSW, Zhang, L, Zhang, LS, Zhang, LY, Zhang, MH, Zhang, P, Zhang, S, Zhang, W, Zhang, XN, Zhang, XW, Zhang, XL, Zhang, XY, Zhang, X, Zhang, XX, Zhang, XD, Zhang, Y, Zhang, YJ, Zhang, YD, Zhang, YM, Zhang, YY, Zhang, YC, Zhang, Z, Zhang, ZG, Zhang, ZB, Zhang, ZH, Zhang, ZY, Zhang, ZL, Zhao, HB, Zhao, L, Zhao, S, Zhao, TB, Zhao, XF, Zhao, Y, Zhao, YC, Zhao, YL, Zhao, YT, Zheng, GP, Zheng, K, Zheng, L, Zheng, SZ, Zheng, XL, Zheng, Y, Zheng, ZG, Zhivotovsky, B, Zhong, Q, Zhou, A, Zhou, B, Zhou, CF, Zhou, G, Zhou, H, Zhou, HB, Zhou, J, Zhou, JY, Zhou, KL, Zhou, RJ, Zhou, XJ, Zhou, YS, Zhou, YH, Zhou, YB, Zhou, ZY, Zhou, Z, Zhu, BL, Zhu, CL, Zhu, GQ, Zhu, HN, Zhu, HX, Zhu, H, Zhu, WG, Zhu, YP, Zhu, YS, Zhuang, HX, Zhuang, XH, Zientara-Rytter, K, Zimmermann, CM, Ziviani, E, Zoladek, T, Zong, WX, Zorov, DB, Zorzano, A, Zou, WP, Zou, Z, Zou, ZZ, Zuryn, S, Zwerschke, W, Brand-Saberi, B, Dong, XC, Kenchappa, CS, Li, ZG, Lin, Y, Oshima, S, Rong, YG, Sluimer, JC, Stallings, CL, and Tong, CK
- Subjects
flux ,macroautophagy ,phagophore ,stress ,vacuole ,Autophagosome ,LC3 ,lysosome ,neurodegeneration ,cancer - Abstract
In 2008, we published the first set of guidelines for standardizing research in autophagy. Since then, this topic has received increasing attention, and many scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Thus, it is important to formulate on a regular basis updated guidelines for monitoring autophagy in different organisms. Despite numerous reviews, there continues to be confusion regarding acceptable methods to evaluate autophagy, especially in multicellular eukaryotes. Here, we present a set of guidelines for investigators to select and interpret methods to examine autophagy and related processes, and for reviewers to provide realistic and reasonable critiques of reports that are focused on these processes. These guidelines are not meant to be a dogmatic set of rules, because the appropriateness of any assay largely depends on the question being asked and the system being used. Moreover, no individual assay is perfect for every situation, calling for the use of multiple techniques to properly monitor autophagy in each experimental setting. Finally, several core components of the autophagy machinery have been implicated in distinct autophagic processes (canonical and noncanonical autophagy), implying that genetic approaches to block autophagy should rely on targeting two or more autophagy-related genes that ideally participate in distinct steps of the pathway. Along similar lines, because multiple proteins involved in autophagy also regulate other cellular pathways including apoptosis, not all of them can be used as a specific marker for bona fide autophagic responses. Here, we critically discuss current methods of assessing autophagy and the information they can, or cannot, provide. Our ultimate goal is to encourage intellectual and technical innovation in the field.
- Published
- 2021
7. Exploiting Web Images for Multi-Output Classification: From Category to Subcategories.
- Author
-
Yao, Y, Shen, F, Xie, G, Liu, L, Zhu, F, Zhang, J, and Shen, HT
- Abstract
Studies show that dividing categories into subcategories contributes to better image classification. Existing image subcategorization works rely on expert knowledge and labeled images, which is both time-consuming and labor-intensive. In this article, we propose to select and subsequently classify images into categories and subcategories. Specifically, we first obtain a list of candidate subcategory labels from untagged corpora. Then, we purify these subcategory labels by calculating their relevance to the target category. To suppress outlier images induced by search errors and noisy subcategory labels, we formulate outlier-image removal and optimal-classification-model learning as a unified problem to jointly learn multiple classifiers, where the classifier for a category is obtained by combining multiple subcategory classifiers. Compared with existing subcategorization works, our approach eliminates the dependence on expert knowledge and labeled images. Extensive experiments on image categorization and subcategorization demonstrate the superiority of our proposed approach.
- Published
- 2020
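The entry above notes that "the classifier for a category is obtained by combining multiple subcategory classifiers." A minimal, hypothetical illustration of that idea (not the paper's joint-learning formulation) is to train subcategory classifiers and, for each parent category, take the maximum subcategory probability. All data, labels, and the parent mapping below are made up for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))                          # toy image features
sub_labels = rng.integers(0, 6, size=300)               # 6 hypothetical subcategories
parent_of = {0: "dog", 1: "dog", 2: "dog", 3: "car", 4: "car", 5: "car"}

clf = LogisticRegression(max_iter=1000).fit(X, sub_labels)  # subcategory-level classifier

def predict_category(x):
    # category score = max probability over the category's subcategories
    probs = clf.predict_proba(x.reshape(1, -1))[0]
    cat_scores = {}
    for sub, p in enumerate(probs):
        cat = parent_of[sub]
        cat_scores[cat] = max(cat_scores.get(cat, 0.0), p)
    return max(cat_scores, key=cat_scores.get)

print(predict_category(X[0]))
```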
8. Towards Automatic Construction of Diverse, High-Quality Image Datasets
- Author
-
Yao, Y, Zhang, J, Shen, F, Liu, L, Zhu, F, Zhang, D, and Shen, HT
- Published
- 2020
9. Embedding and predicting the event at early stage
- Author
-
Liu, Z, Yang, Y, Huang, Z, Shen, F, Zhang, D, and Shen, HT
- Subjects
Information Systems - Abstract
© 2018, Springer Science+Business Media, LLC, part of Springer Nature. Social media has become one of the most credible sources for delivering messages, breaking news, and events. Predicting the future dynamics of an event at a very early stage is significantly valuable, e.g., helping companies anticipate marketing trends before the event matures. However, this prediction is non-trivial because (a) social events are always mixed with “noise” under the same topic and (b) the information obtained at the early stage is too sparse and limited to support an accurate prediction. To overcome these two problems, in this paper we design an event early embedding model (EEEM) that can 1) extract social events from noise, 2) find previous similar events, and 3) predict the future dynamics of a new event with very limited information. Specifically, a denoising approach derived from signal analysis is used to eliminate social noise and extract events. Moreover, we propose a novel prediction scheme based on the locally linear embedding algorithm to construct the volume of a new event from its k nearest neighbors. Compared with previous work that only fits historical volume dynamics to make a prediction, our predictive model is based on both the volume and the content information of events. Extensive experiments conducted on a large-scale Twitter dataset demonstrate the capacity of our model to extract events and its promising prediction performance when both volume and content information are considered. Compared with predicting using only the content or the volume feature, the best performance is obtained by considering both with our proposed fusion method.
- Published
- 2019
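The entry above predicts a new event's future volume from its k nearest past events using locally linear embedding (LLE) reconstruction weights. Below is a minimal numpy sketch of that step under assumed data shapes: it reconstructs the new event's early-stage volume from its neighbors and reuses the same weights to combine their future volumes. This illustrates only the LLE-weight idea, not the full EEEM model (which also uses content information and denoising).

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Reconstruction weights w (summing to 1) minimizing ||x - w @ neighbors||^2."""
    diff = neighbors - x                                   # (k, d) offsets to the query
    G = diff @ diff.T                                      # local Gram matrix (k, k)
    G += reg * np.trace(G) * np.eye(len(G)) / len(G)       # regularize for stability
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()

def predict_future_volume(early_volume, past_early, past_future, k=5):
    """Predict a new event's future volume curve from its k most similar past events."""
    d = np.linalg.norm(past_early - early_volume, axis=1)  # distances in early-stage space
    idx = np.argsort(d)[:k]                                # k nearest past events
    w = lle_weights(early_volume, past_early[idx])
    return w @ past_future[idx]                            # weighted combination of futures

# toy data: 100 past events, 6 early-stage points, 24 future points (hypothetical shapes)
rng = np.random.default_rng(0)
past_early, past_future = rng.random((100, 6)), rng.random((100, 24))
print(predict_future_volume(rng.random(6), past_early, past_future).shape)  # (24,)
```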
10. Transfer Independently Together: A Generalized Framework for Domain Adaptation
- Author
-
Li, J, Lu, K, Huang, Z, Zhu, L, and Shen, HT
- Abstract
© 2013 IEEE. Currently, unsupervised heterogeneous domain adaptation in a generalized setting, which is the most common scenario in real-world applications, is insufficiently explored. Existing approaches either are limited to special cases or require labeled target samples for training. This paper aims to overcome these limitations by proposing a generalized framework, named transfer independently together (TIT). Specifically, we learn multiple transformations, one for each domain (independently), to map data onto a shared latent space, where the domains are well aligned. The multiple transformations are jointly optimized in a unified framework (together) by an effective formulation. In addition, to learn robust transformations, we further propose a novel landmark selection algorithm to reweight samples, i.e., increase the weight of pivot samples and decrease the weight of outliers. Our landmark selection is based on graph optimization. It focuses on sample geometric relationships rather than sample features. As a result, by abstracting feature vectors to graph vertices, only simple and fast integer arithmetic is involved in our algorithm instead of the matrix operations with floating-point arithmetic used in existing approaches. Finally, we effectively optimize our objective via a dimensionality reduction procedure. TIT is applicable to arbitrary sample dimensionality and does not need labeled target samples for training. Extensive evaluations on several standard benchmarks and large-scale datasets for image classification, text categorization and text-to-image recognition verify the superiority of our approach.
- Published
- 2019
11. Hierarchical Multi-Clue Modelling for POI Popularity Prediction with Heterogeneous Tourist Information
- Author
-
Yang, Y, Duan, Y, Wang, X, Huang, Z, Xie, N, and Shen, HT
- Abstract
© 1989-2012 IEEE. Predicting the popularity of a Point of Interest (POI) has become increasingly crucial for location-based services such as POI recommendation. Most existing methods seldom achieve satisfactory performance due to the scarcity of POI information, which tends to confine recommendation to popular scenic spots and ignores unpopular attractions with potentially precious value. In this paper, we propose a novel approach, termed Hierarchical Multi-Clue Fusion (HMCF), for predicting the popularity of POIs. Specifically, to cope with data sparsity, we propose to comprehensively describe each POI using various types of user-generated content (UGC) (e.g., text and images) from multiple sources. Then, we devise an effective POI modelling method that works in a hierarchical manner, simultaneously injecting semantic knowledge and multi-clue representative power into POIs. For evaluation, we construct a multi-source POI dataset by collecting all the textual and visual content for several specific provinces in China from four main-stream tourism platforms from 2006 to 2017. Extensive experimental results show that the proposed method significantly improves the performance of predicting attraction popularity compared with several baseline methods.
- Published
- 2019
12. Collective Reconstructive Embeddings for Cross-Modal Hashing
- Author
-
Hu, M, Yang, Y, Shen, F, Xie, N, Hong, R, and Shen, HT
- Abstract
© 1992-2012 IEEE. In this paper, we study the problem of cross-modal retrieval by hashing-based approximate nearest neighbor search techniques. Most existing cross-modal hashing works mainly address the issue of multi-modal integration complexity using the same mapping and similarity calculation for data from different media types. Nonetheless, this may cause information loss during the mapping process due to overlooking the specifics of each individual modality. In this paper, we propose a simple yet effective cross-modal hashing approach, termed collective reconstructive embeddings (CRE), which can simultaneously solve the heterogeneity and integration complexity of multi-modal data. To address the heterogeneity challenge, we propose to process heterogeneous types of data using different modality-specific models. Specifically, we model textual data with cosine similarity-based reconstructive embedding to alleviate the data sparsity to the greatest extent, while for image data, we utilize the Euclidean distance to characterize the relationships of the projected hash codes. Meanwhile, we unify the projections of text and image to the Hamming space into a common reconstructive embedding through rigid mathematical reformulation, which not only reduces the optimization complexity significantly but also facilitates the inter-modal similarity preservation among different modalities. We further incorporate the code balance and uncorrelation criteria into the problem and devise an efficient iterative algorithm for optimization. Comprehensive experiments on four widely used multimodal benchmarks show that the proposed CRE can achieve a superior performance compared with the state of the art on several challenging cross-modal tasks.
- Published
- 2019
13. Describing video with attention-based bidirectional LSTM
- Author
-
Bin, Y, Yang, Y, Shen, F, Xie, N, Shen, HT, and Li, X
- Abstract
© 2013 IEEE. Video captioning has been attracting broad research attention in the multimedia community. However, most existing approaches heavily rely on static visual information or only partially capture local temporal knowledge (e.g., within 16 frames), and thus can hardly describe motions accurately from a global view. In this paper, we propose a novel video captioning framework that integrates bidirectional long short-term memory (BiLSTM) and a soft attention mechanism to generate better global representations for videos and to enhance the recognition of lasting motions. To generate video captions, we exploit another long short-term memory network as a decoder to fully explore global contextual information. The benefits of our proposed method are twofold: 1) the BiLSTM structure comprehensively preserves global temporal and visual information, and 2) the soft attention mechanism enables the language decoder to recognize and focus on principal targets in the complex content. We verify the effectiveness of our proposed video captioning framework on two widely used benchmarks, namely the Microsoft Video Description corpus and MSR-Video to Text, and the experimental results demonstrate the superiority of the proposed approach over several state-of-the-art methods.
- Published
- 2019
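A compact PyTorch sketch of the architecture family described in the entry above (a BiLSTM video encoder with a soft-attention LSTM decoder) is given below. The dimensions, single-layer design, and teacher-forcing loop are assumptions for illustration; this is not the authors' exact model.

```python
import torch
import torch.nn as nn

class AttnBiLSTMCaptioner(nn.Module):
    def __init__(self, feat_dim, hid, vocab):
        super().__init__()
        self.enc = nn.LSTM(feat_dim, hid, batch_first=True, bidirectional=True)
        self.att_v = nn.Linear(2 * hid, hid)     # projects encoder states for attention
        self.att_h = nn.Linear(hid, hid)         # projects decoder state for attention
        self.att_out = nn.Linear(hid, 1)
        self.embed = nn.Embedding(vocab, hid)
        self.dec = nn.LSTMCell(hid + 2 * hid, hid)
        self.out = nn.Linear(hid, vocab)

    def forward(self, frames, captions):
        # frames: (B, T, feat_dim) frame features; captions: (B, L) token ids (teacher forcing)
        enc_out, _ = self.enc(frames)                            # (B, T, 2*hid)
        B, L = captions.shape
        h = frames.new_zeros(B, self.dec.hidden_size)
        c = frames.new_zeros(B, self.dec.hidden_size)
        logits = []
        for t in range(L):
            # soft attention over all frames given the current decoder state
            e = self.att_out(torch.tanh(self.att_v(enc_out) + self.att_h(h).unsqueeze(1)))
            a = torch.softmax(e, dim=1)                          # (B, T, 1) attention weights
            ctx = (a * enc_out).sum(dim=1)                       # (B, 2*hid) attended context
            x = torch.cat([self.embed(captions[:, t]), ctx], dim=1)
            h, c = self.dec(x, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                        # (B, L, vocab)

model = AttnBiLSTMCaptioner(feat_dim=2048, hid=256, vocab=5000)
print(model(torch.randn(2, 16, 2048), torch.randint(0, 5000, (2, 10))).shape)  # (2, 10, 5000)
```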
14. Stroke-based stylization by learning sequential drawing examples
- Author
-
Xie, N, Yang, Y, Shen, HT, and Zhao, TT
- Subjects
ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Artificial Intelligence & Image Processing ,GeneralLiterature_MISCELLANEOUS ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
© 2018 Elsevier Inc. Among various traditional art forms, brush stroke drawing is one of the widely used styles in modern computer graphics tools such as GIMP, Photoshop and Painter. In this paper, we develop an AI-aided art authoring (A4) system for non-photorealistic rendering that allows users to automatically generate brush stroke paintings in a specific artist's style. Within the reinforcement learning framework of brush stroke generation proposed by Xie et al. (2012), the first contribution of this paper is the application of a regularized policy gradient method, which is more suitable for the stroke generation task; the other contribution is to learn artists' drawing styles from video-captured stroke data by inverse reinforcement learning. Through experiments, we demonstrate that our system can successfully learn artists' styles and render pictures with consistent and smooth brush strokes.
- Published
- 2018
15. Semi-Paired Discrete Hashing: Learning Latent Hash Codes for Semi-Paired Cross-View Retrieval
- Author
-
Shen, X, Shen, F, Sun, Q-S, Yang, Y, Yuan, Y-H, and Shen, HT
- Subjects
Artificial Intelligence & Image Processing - Published
- 2017
16. Video-based person re-identification via self-paced learning and deep reinforcement learning framework
- Author
-
Ouyang, D, Shao, J, Zhang, Y, Yang, Y, and Shen, HT
- Abstract
© 2018 Association for Computing Machinery. Person re-identification is an important task in video surveillance, focusing on finding the same person across different cameras. However, most existing methods for video-based person re-identification still have limitations (e.g., the lack of an effective deep learning framework, limited model robustness, and treating all video frames equally) that prevent better recognition performance. In this paper, we propose a novel self-paced learning algorithm for video-based person re-identification, which gradually learns from simple to complex samples to obtain a mature and stable model. Self-paced learning is employed to enhance video-based person re-identification based on a deep neural network, so that the deep neural network and self-paced learning are unified into one framework. Then, based on the trained self-paced model, we employ deep reinforcement learning to discard misleading and confounding frames and find the most representative frames from video pairs. With the advantage of deep reinforcement learning, our method can learn strategies to select the optimal frame groups. Experiments show that the proposed framework outperforms existing methods on the iLIDS-VID, PRID-2011 and MARS datasets.
- Published
- 2018
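The "gradually learn from simple to complex samples" idea in the entry above follows the classic self-paced learning scheme: keep only samples whose current loss is below a threshold λ, update the model on them, and raise λ each round so harder samples enter training. Below is a generic, framework-agnostic sketch with a toy "model" for demonstration; it is not the paper's re-identification network or its reinforcement-learning frame selection.

```python
import numpy as np

def self_paced_weights(losses, lam):
    """Hard self-paced weights: keep 'easy' samples whose loss is below lambda."""
    return (losses < lam).astype(float)

def self_paced_train(losses_fn, params_update, n_rounds=5, lam=0.5, growth=1.3):
    """Alternate between selecting easy samples and updating the model on them,
    gradually raising lambda so harder samples enter training (a simple curriculum)."""
    for _ in range(n_rounds):
        losses = losses_fn()                 # per-sample losses under the current model
        v = self_paced_weights(losses, lam)  # binary sample weights
        params_update(v)                     # any model update restricted to selected samples
        lam *= growth                        # admit progressively harder samples

# toy usage: the "model" is just a running mean of the selected values
data = np.random.default_rng(0).random(20)
state = {"mu": 0.0}
self_paced_train(
    losses_fn=lambda: np.abs(data - state["mu"]),
    params_update=lambda v: state.update(mu=(v @ data) / max(v.sum(), 1.0)),
)
print(state["mu"])
```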
17. Learning binary codes with local and inner data structure
- Author
-
He, S, Ye, G, Hu, M, Yang, Y, Shen, F, Shen, HT, and Li, X
- Abstract
© 2017 Elsevier B.V. Recent years have witnessed the promising capacity of hashing techniques in tackling nearest neighbor search because of the high efficiency in storage and retrieval. Data-independent approaches (e.g., Locality Sensitive Hashing) normally construct hash functions using random projections, which neglect intrinsic data properties. To compensate for this drawback, learning-based approaches propose to explore local data structure and/or supervised information for boosting hashing performance. However, due to the construction of the Laplacian matrix, existing methods usually suffer from the unaffordable training cost. In this paper, we propose a novel supervised hashing scheme, which has the merits of (1) exploring the inherent neighborhoods of samples; (2) significantly saving training cost confronted with massive training data by employing an approximate anchor graph; as well as (3) preserving semantic similarity by leveraging pair-wise supervised knowledge. Besides, we integrate a discrete constraint to significantly eliminate accumulated errors in learning reliable hash codes and hash functions. We devise an alternating algorithm to efficiently solve the optimization problem. Extensive experiments on various image datasets demonstrate that our proposed method is superior to state-of-the-art methods.
- Published
- 2018
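The approximate anchor graph used in the entry above to avoid building a full Laplacian follows, in the common anchor-graph construction, a low-rank affinity A ≈ Z Λ⁻¹ Zᵀ built from truncated similarities to a few anchors. The sketch below implements that common recipe, which is not necessarily the paper's exact variant; all sizes and parameters are illustrative:

    import numpy as np

    def anchor_graph_affinity(X, anchors, s=3, sigma=1.0):
        """Approximate an n x n affinity matrix via m << n anchors (only n x m work).

        Returns Z (n, m), the truncated anchor similarities; the full affinity is
        never materialised -- it is implicitly Z @ diag(1 / Z.sum(0)) @ Z.T.
        """
        d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)   # (n, m) squared distances
        sim = np.exp(-d2 / (2 * sigma ** 2))
        # keep only the s closest anchors per point, then row-normalise
        far = np.argsort(d2, axis=1)[:, s:]
        np.put_along_axis(sim, far, 0.0, axis=1)
        return sim / sim.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(2)
    X = rng.normal(size=(1000, 16))
    anchors = X[rng.choice(1000, size=50, replace=False)]
    Z = anchor_graph_affinity(X, anchors)
    lam_inv = 1.0 / Z.sum(axis=0)                  # diagonal normaliser
    approx_row = (Z[0] * lam_inv) @ Z.T            # one row of the implicit affinity
    print(Z.shape, approx_row.shape)               # (1000, 50) (1000,)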
18. Personalized semantic trajectory privacy preservation through trajectory reconstruction
- Author
-
Dai, Y, Shao, J, Wei, C, Zhang, D, Shen, HT, Dai, Y, Shao, J, Wei, C, Zhang, D, and Shen, HT
- Abstract
© 2017, Springer Science+Business Media, LLC. Trajectory data gathered by mobile positioning techniques and location-aware devices contain plenty of sensitive spatial-temporal and semantic information, and can support many applications through data analysing and mining. However, attribute-linkage and re-identification attacks on such data may cause privacy leakage, and lead to unexpected serious consequences. Existing privacy preserving techniques for trajectory data often ignore the different privacy requirements of different moving objects or largely sacrifice the availability of trajectory data. In view of these issues, we propose an effective personalized trajectory privacy preserving method which can strike a good balance between user-defined privacy requirement and data availability in the off-line trajectory publishing scenario. The main idea is to firstly label semantic attributes of all sampling points on the trajectory and build a corresponding taxonomy tree, next extract sensitive stop points, then for different types of sensitive stop points, adopt different strategies to select the appropriate points of user interests to replace while considering user speed and avoiding reverse mutation, and finally publish the reconstructed trajectory. Besides, to make our method more realistic, we further consider possible obstacles that appear in the user's spatial environment. In the experiments, average identification possibility, trajectory semantic consistency and trajectory shape similarity are taken as evaluation criteria, and the performance of our method is comprehensively evaluated. The results show that our method can improve the user trajectory availability as much as possible, while effectively achieving the different trajectory privacy requirements.
- Published
- 2018
19. Hashing with angular reconstructive embeddings
- Author
-
Hu, M, Yang, Y, Shen, F, Xie, N, Shen, HT, Hu, M, Yang, Y, Shen, F, Xie, N, and Shen, HT
- Abstract
© 1992-2012 IEEE. Large-scale search methods are increasingly critical for many content-based visual analysis applications, among which hashing-based approximate nearest neighbor search techniques have attracted broad interest due to their high efficiency in storage and retrieval. However, existing hashing works are commonly designed for measuring data similarity by the Euclidean distances. In this paper, we focus on the problem of learning compact binary codes using the cosine similarity. Specifically, we propose a novel angular reconstructive embeddings (ARE) method, which aims at learning binary codes by minimizing the reconstruction error between the cosine similarities computed by original features and the resulting binary embeddings. Furthermore, we devise two efficient algorithms for optimizing our ARE in continuous and discrete manners, respectively. We extensively evaluate the proposed ARE on several large-scale image benchmarks. The results demonstrate that ARE outperforms several state-of-the-art methods.
- Published
- 2018
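The ARE objective stated in the entry above — make the cosine similarities of the binary embeddings reconstruct those of the original features — can be written in a few lines. The sketch below only evaluates that reconstruction error for random (untrained) codes; variable names and shapes are placeholders:

    import numpy as np

    def are_objective(X, B):
        """Reconstruction error between cosine similarities of X and of binary codes B.

        X: (n, d) real features; B: (n, r) codes in {-1, +1}.
        For +/-1 codes the cosine similarity is simply (B @ B.T) / r.
        """
        Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
        cos_x = Xn @ Xn.T
        cos_b = (B @ B.T) / B.shape[1]
        return ((cos_x - cos_b) ** 2).sum()

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 64))
    B = np.sign(rng.normal(size=(200, 16)))    # a random (untrained) binary embedding
    print(are_objective(X, B))                 # ARE would minimize this over B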
20. Recognition and detection of two-person interactive actions using automatically selected skeleton features
- Author
-
Wu, H, Shao, J, Xu, X, Ji, Y, Shen, F, Shen, HT, Wu, H, Shao, J, Xu, X, Ji, Y, Shen, F, and Shen, HT
- Abstract
© 2013 IEEE. Recognition and detection of interactive actions performed by multiple persons have a wide range of real-world applications. Existing studies on human activity analysis focus mainly on classifying video clips of simple actions performed by a single person, whereas the problem of understanding complex human activities with causal relationships between two people has not been sufficiently addressed yet. In this paper, we employ systematically organized skeleton features enhanced with directional features, and utilize sparse-group lasso to automatically choose discriminative factors that help in dealing with interactive action recognition and real-time detection tasks. Experiments on two-person interaction datasets demonstrate the superiority of our approach over the state-of-the-art methods.
- Published
- 2018
21. Unsupervised Deep Hashing with Similarity-Adaptive and Discrete Optimization
- Author
-
Shen, F, Xu, Y, Liu, L, Yang, Y, Huang, Z, Shen, HT, Shen, F, Xu, Y, Liu, L, Yang, Y, Huang, Z, and Shen, HT
- Abstract
© 1979-2012 IEEE. Recent vision and learning studies show that learning compact hash codes can facilitate massive data processing with significantly reduced storage and computation. Particularly, learning deep hash functions has greatly improved the retrieval performance, typically under the semantic supervision. In contrast, current unsupervised deep hashing algorithms can hardly achieve satisfactory performance due to either the relaxed optimization or the absence of a similarity-sensitive objective. In this work, we propose a simple yet effective unsupervised hashing framework, named Similarity-Adaptive Deep Hashing (SADH), which alternatingly proceeds over three training modules: deep hash model training, similarity graph updating and binary code optimization. The key difference from the widely-used two-step hashing method is that the output representations of the learned deep model help update the similarity graph matrix, which is then used to improve the subsequent code optimization. In addition, for producing high-quality binary codes, we devise an effective discrete optimization algorithm which can directly handle the binary constraints with a general hashing loss. Extensive experiments validate the efficacy of SADH, which consistently outperforms state-of-the-art methods by large margins.
- Published
- 2018
22. A Survey on Learning to Hash
- Author
-
Wang, J, Zhang, T, Song, J, Sebe, N, Shen, HT, Wang, J, Zhang, T, Song, J, Sebe, N, and Shen, HT
- Abstract
© 1979-2012 IEEE. Nearest neighbor search is a problem of finding the data points from the database such that the distances from them to the query point are the smallest. Learning to hash is one of the major solutions to this problem and has been widely studied recently. In this paper, we present a comprehensive survey of the learning to hash algorithms, categorize them according to the manners of preserving the similarities into: pairwise similarity preserving, multiwise similarity preserving, implicit similarity preserving, as well as quantization, and discuss their relations. We separate quantization from pairwise similarity preserving as the objective function is very different though quantization, as we show, can be derived from preserving the pairwise similarities. In addition, we present the evaluation protocols, and the general performance analysis, and point out that the quantization algorithms perform superiorly in terms of search accuracy, search time cost, and space cost. Finally, we introduce a few emerging topics.
- Published
- 2018
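At query time, every learning-to-hash method surveyed in the entry above reduces nearest neighbor search to ranking binary codes by Hamming distance, which packed codes make cheap via XOR and popcount. A minimal numpy sketch (not taken from the survey) follows:

    import numpy as np

    def hamming_knn(query_code, db_codes, k=5):
        """Rank packed binary codes (uint8 arrays) by Hamming distance to the query."""
        xor = np.bitwise_xor(db_codes, query_code)                  # (n, r/8)
        dist = np.unpackbits(xor, axis=1).sum(axis=1)               # popcount per row
        order = np.argsort(dist)
        return order[:k], dist[order[:k]]

    rng = np.random.default_rng(4)
    db = rng.integers(0, 256, size=(100000, 8), dtype=np.uint8)     # 100k 64-bit codes
    q = db[42].copy()
    idx, dist = hamming_knn(q, db)
    print(idx[0], dist[0])                                          # 42 0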
23. Video Captioning by Adversarial LSTM
- Author
-
Yang, Y, Zhou, J, Ai, J, Bin, Y, Hanjalic, A, Shen, HT, Ji, Y, Yang, Y, Zhou, J, Ai, J, Bin, Y, Hanjalic, A, Shen, HT, and Ji, Y
- Abstract
© 1992-2012 IEEE. In this paper, we propose a novel approach to video captioning based on adversarial learning and long short-term memory (LSTM). With this solution concept, we aim at compensating for the deficiencies of LSTM-based video captioning methods that generally show potential to effectively handle temporal nature of video data when generating captions but also typically suffer from exponential error accumulation. Specifically, we adopt a standard generative adversarial network (GAN) architecture, characterized by an interplay of two competing processes: a 'generator' that generates textual sentences given the visual content of a video and a 'discriminator' that controls the accuracy of the generated sentences. The discriminator acts as an 'adversary' toward the generator, and with its controlling mechanism, it helps the generator to become more accurate. For the generator module, we take an existing video captioning concept using LSTM network. For the discriminator, we propose a novel realization specifically tuned for the video captioning problem and taking both the sentences and video features as input. This leads to our proposed LSTM-GAN system architecture, for which we show experimentally to significantly outperform the existing methods on standard public datasets.
- Published
- 2018
24. Deep appearance and motion learning for egocentric activity recognition
- Author
-
Wang, X, Gao, L, Song, J, Zhen, X, Sebe, N, Shen, HT, Wang, X, Gao, L, Song, J, Zhen, X, Sebe, N, and Shen, HT
- Abstract
© 2017 Elsevier B.V. Egocentric activity recognition has recently gained great popularity in computer vision due to its widespread applications in egocentric video analysis. However, it poses new challenges compared to the conventional third-person activity recognition tasks, which are caused by significant body shaking, varied lengths, and poor recording quality, etc. To handle these challenges, in this paper, we propose deep appearance and motion learning (DAML) for egocentric activity recognition, which leverages the great strength of deep learning networks in feature learning. In contrast to hand-crafted visual features or pre-trained convolutional neural network (CNN) features with limited generality to new egocentric videos, the proposed DAML is built on the deep autoencoder (DAE), and directly extracts appearance and motion features, the main cues of activities, from egocentric videos. The DAML takes advantage of the great effectiveness and efficiency of the DAE in unsupervised feature learning, which provides a new representation learning framework of egocentric videos. The appearance and motion features learned by the DAML are seamlessly fused to accomplish a rich informative egocentric activity representation which can be readily fed into any supervised learning models for activity recognition. Experimental results on two challenging benchmark datasets show that the DAML achieves high performance on both short- and long-term egocentric activity recognition tasks, which is comparable to or even better than the state-of-the-art counterparts.
- Published
- 2018
25. Exploring auxiliary context: Discrete semantic transfer hashing for scalable image retrieval
- Author
-
Zhu, L, Huang, Z, Li, Z, Xie, L, Shen, HT, Zhu, L, Huang, Z, Li, Z, Xie, L, and Shen, HT
- Abstract
© 2018 IEEE. Unsupervised hashing can desirably support scalable content-based image retrieval for its appealing advantages of semantic label independence, memory, and search efficiency. However, the learned hash codes are embedded with limited discriminative semantics due to the intrinsic limitation of image representation. To address the problem, in this paper, we propose a novel hashing approach, dubbed as discrete semantic transfer hashing (DSTH). The key idea is to directly augment the semantics of discrete image hash codes by exploring auxiliary contextual modalities. To this end, a unified hashing framework is formulated to simultaneously preserve visual similarities of images and perform semantic transfer from contextual modalities. Furthermore, to guarantee direct semantic transfer and avoid information loss, we explicitly impose the discrete constraint, bit-uncorrelation constraint, and bit-balance constraint on hash codes. A novel and effective discrete optimization method based on augmented Lagrangian multiplier is developed to iteratively solve the optimization problem. The whole learning process has linear computation complexity and desirable scalability. Experiments on three benchmark data sets demonstrate the superiority of DSTH compared with several state-of-the-art approaches.
- Published
- 2018
26. Augmented keyword search on spatial entity databases
- Author
-
Zhang, D, Li, Y, Cao, X, Shao, J, Shen, HT, Zhang, D, Li, Y, Cao, X, Shao, J, and Shen, HT
- Abstract
© 2018, Springer-Verlag GmbH Germany, part of Springer Nature. In this paper, we propose a new type of query that augments the spatial keyword search with an additional boolean expression constraint. The query is issued against a corpus of structured or semi-structured spatial entities and is very useful in applications like mobile search and targeted location-aware advertising. We devise three types of indexing and filtering strategies. First, we utilize the hybrid IR2-tree and propose a novel hashing scheme for efficient pruning. Second, we propose an inverted index-based solution, named BE-Inv, that is more cache-conscious and exhibits great pruning power for boolean expression matching. Our third method, named SKB-Inv, adopts a novel two-level partitioning scheme to organize the spatial entities into inverted lists and effectively facilitate the pruning in the spatial, textual, and boolean expression dimensions. In addition, we propose an adaptive query processing strategy that takes into account the selectivity of query keywords and predicates for early termination. We conduct our experiments using two real datasets with 3.5 million Foursquare venues and 50 million Twitter geo-profiles. The results show that the methods based on inverted indexes are superior to the hybrid IR2-tree; and SKB-Inv achieves the best performance.
- Published
- 2018
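The query type proposed in the entry above combines keywords, a spatial range and a boolean attribute expression. The toy sketch below shows only the baseline idea — intersect inverted posting lists, then apply the boolean and spatial filters — with a made-up entity schema; the paper's hashing and partitioning schemes are not reproduced:

    from collections import defaultdict

    # toy spatial entities: (id, lat, lng, keywords, attributes)
    entities = [
        (1, 1.30, 103.85, {"coffee", "wifi"}, {"price": 2, "open": 1}),
        (2, 1.31, 103.86, {"coffee"},         {"price": 3, "open": 0}),
        (3, 1.29, 103.84, {"tea", "wifi"},    {"price": 1, "open": 1}),
    ]

    # inverted index: keyword -> posting list of entity ids
    index = defaultdict(list)
    for eid, _, _, kws, _ in entities:
        for kw in kws:
            index[kw].append(eid)

    def search(keywords, predicate, centre, radius=0.02):
        """Intersect posting lists, then apply the boolean and spatial filters."""
        postings = [set(index[kw]) for kw in keywords]
        candidates = set.intersection(*postings) if postings else set()
        results = []
        for eid, lat, lng, _, attrs in entities:
            if eid in candidates and predicate(attrs) \
               and (lat - centre[0]) ** 2 + (lng - centre[1]) ** 2 <= radius ** 2:
                results.append(eid)
        return results

    print(search({"coffee"}, lambda a: a["open"] == 1 and a["price"] <= 2, (1.30, 103.85)))  # [1]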
27. Efficient binary coding for subspace-based query-by-image video retrieval
- Author
-
Xu, R, Yang, Y, Shen, F, Xie, N, and Shen, HT
- Abstract
© 2017 Association for Computing Machinery. Subspace representations have been widely applied for videos in many tasks. In particular, the subspace-based query-by-image video retrieval (QBIVR), facing high challenges on similarity-preserving measurements and efficient retrieval schemes, urgently needs considerable research attention. In this paper, we propose a novel subspace-based QBIVR framework to enable efficient video search. We first define a new geometry-preserving distance metric to measure the image-to-video distance, which transforms the QBIVR task to be the Maximum Inner Product Search (MIPS) problem. The merit of this distance metric lies in that it helps to preserve the genuine geometric relationship between query images and database videos to the greatest extent. To boost the efficiency of solving the MIPS problem, we introduce two asymmetric hashing schemes which can bridge the domain gap of images and videos properly. The first approach, termed Inner-product Binary Coding (IBC), achieves high-quality binary codes by learning the binary codes and coding functions simultaneously without continuous relaxations. The other one, Bilinear Binary Coding (BBC) approach, employs compact bilinear projections instead of a single large projection matrix to further improve the retrieval efficiency. Extensive experiments on four real-world video datasets verify the effectiveness of our proposed approaches, as compared to the state-of-the-art methods.
- Published
- 2017
28. Compact indexing and judicious searching for billion-scale microblog retrieval
- Author
-
Zhang, D, Nie, L, Luan, H, Tan, KL, Chua, TS, and Shen, HT
- Subjects
Information Systems - Abstract
© 2017 ACM. In this article, we study the problem of efficient top-k disjunctive query processing in a huge microblog dataset. In terms of compact indexing, we categorize the keywords into rare terms and common terms based on inverse document frequency (idf) and propose a tailored block-oriented organization to reduce memory consumption. In terms of fast searching, we classify the queries into three types based on term category and judiciously design an efficient search algorithm for each type. We conducted extensive experiments on a billion-scale Twitter dataset and examined the performance with both simple and more advanced ranking functions. The results showed that, with a much smaller index size, our search algorithm achieves a 2-3 times speedup over state-of-the-art solutions in both ranking scenarios.
- Published
- 2017
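For reference, a naive term-at-a-time evaluation of a top-k disjunctive query over inverted lists looks like the sketch below (toy corpus, simple term-frequency scoring); the paper's block-oriented index and type-specific pruning are far more involved than this baseline:

    import heapq
    from collections import defaultdict

    # toy microblog corpus: id -> text
    docs = {1: "storm hits the city", 2: "city traffic update", 3: "storm warning for city area"}

    index = defaultdict(dict)                # term -> {doc id: term frequency}
    for did, text in docs.items():
        for term in text.split():
            index[term][did] = index[term].get(did, 0) + 1

    def topk_disjunctive(query_terms, k=2):
        """Score every document containing at least one query term (plain tf sum)."""
        scores = defaultdict(float)
        for term in query_terms:
            for did, tf in index.get(term, {}).items():
                scores[did] += tf
        return heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])

    print(topk_disjunctive(["storm", "city"]))    # e.g. [(1, 2.0), (3, 2.0)]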
29. Processing long queries against short text: Top-k advertisement matching in news stream applications
- Author
-
Zhang, D, Li, Y, Fan, J, Gao, L, Shen, F, and Shen, HT
- Subjects
Information Systems - Abstract
© 2017 ACM. Many real applications in real-time news stream advertising call for efficient processing of long queries against short text. In such applications, dynamic news feeds are regarded as queries to match against an advertisement (ad) database for retrieving the k most relevant ads. The existing approaches to keyword retrieval cannot work well in this search scenario when queries are triggered at a very high frequency. To address the problem, we introduce new techniques to significantly improve search performance. First, we devise a two-level partitioning for tight upper bound estimation and a lazy evaluation scheme to delay full evaluation of unpromising candidates, which can bring a three to four times performance boost in a database with 7 million ads. Second, we propose a novel rank-aware block-oriented inverted index to further improve performance. In this index scheme, each entry in an inverted list is assigned a rank according to its importance in the ad. Then, we introduce a block-at-a-time search strategy based on the index scheme to support a much tighter upper bound estimation and a very early termination. We have conducted experiments with real datasets, and the results show that the rank-aware method can further improve performance by an order of magnitude.
- Published
- 2017
30. BMC@MediaEval 2017 multimedia satellite task via regression random forest
- Author
-
Fu, X, Bin, Y, Peng, L, Zhou, J, Yang, Y, and Shen, HT
- Abstract
© 2017 Author/owner(s). In the MediaEval 2017 Multimedia Satellite Task, we propose an approach based on a regression random forest which can extract valuable information from a few images and their corresponding metadata. The experimental results show that when processing social media images, the proposed method performs well even when the image features are low-level and the number of training samples is relatively small. Additionally, when the low-level color features of satellite images are too ambiguous to analyze, the random forest is also an effective way to detect flooded areas.
- Published
- 2017
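A regression random forest of the kind used in the run above can be set up in a few lines with scikit-learn; the features, sample sizes and decision threshold below are synthetic placeholders, not the team's actual pipeline:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(5)
    # hypothetical low-level features (e.g. colour histograms plus metadata), small sample
    X_train = rng.normal(size=(120, 32))
    y_train = rng.uniform(size=120)            # e.g. flooding relevance score in [0, 1]

    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X_train, y_train)

    X_test = rng.normal(size=(10, 32))
    scores = rf.predict(X_test)                # continuous scores, thresholded for detection
    print((scores > 0.5).astype(int))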
31. CFM@MediaEval 2017 Retrieving diverse social images task via re-ranking and hierarchical clustering
- Author
-
Peng, L, Bin, Y, Fu, X, Zhou, J, Yang, Y, and Shen, HT
- Abstract
© 2017 Author/owner(s). This paper presents an approach based on re-ranking and hierarchical clustering (HC) for the MediaEval 2017 Retrieving Diverse Social Images Task. The experimental results on the development and test sets demonstrate that the proposed approach can significantly improve relevance and visual diversity of the query results. Our approach achieves a good tradeoff between relevance and diversity, with an F1@20 of 0.6533 on the employed test data.
- Published
- 2017
32. Web-based semantic fragment discovery for online lingual-visual similarity
- Author
-
Sun, X, Cao, J, Li, C, Zhu, L, and Shen, HT
- Abstract
Copyright © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. In this paper, we present an automatic approach for on-line discovery of visual-lingual semantic fragments from weakly labeled Internet images. Instead of learning region-entity correspondences from well-labeled image-sentence pairs, our approach directly collects and enhances the weakly labeled visual contents from the Web and constructs an adaptive visual representation which automatically links generic lingual phrases to their related visual contents. To ensure reliable and efficient semantic discovery, we adopt non-parametric density estimation to re-rank the related visual instances and propose a fast self-similarity-based quality assessment method to identify the high-quality semantic fragments. The discovered semantic fragments provide an adaptive joint representation for texts and images, based on which lingual-visual similarity can be defined for further co-analysis of heterogeneous multimedia data. Experimental results on semantic fragment quality assessment, sentence-based image retrieval, automatic multimedia insertion and ordering demonstrate the effectiveness of the proposed framework. The experiments show that the proposed methods can make effective use of Web knowledge, and are able to generate competitive results compared to state-of-the-art approaches in various tasks.
- Published
- 2017
33. Adversarial cross-modal retrieval
- Author
-
Wang, B, Yang, Y, Xu, X, Hanjalic, A, Shen, HT, Wang, B, Yang, Y, Xu, X, Hanjalic, A, and Shen, HT
- Abstract
© 2017 Association for Computing Machinery. Cross-modal retrieval aims to enable a flexible retrieval experience across different modalities (e.g., texts vs. images). The core of cross-modal retrieval research is to learn a common subspace where the items of different modalities can be directly compared to each other. In this paper, we present a novel Adversarial Cross-Modal Retrieval (ACMR) method, which seeks an effective common subspace based on adversarial learning. Adversarial learning is implemented as an interplay between two processes. The first process, a feature projector, tries to generate a modality-invariant representation in the common subspace and to confuse the other process, a modality classifier, which tries to discriminate between different modalities based on the generated representation. We further impose triplet constraints on the feature projector in order to minimize the gap among the representations of all items from different modalities with the same semantic labels, while maximizing the distances among semantically different images and texts. Through the joint exploitation of the above, the underlying cross-modal semantic structure of multimedia data is better preserved when this data is projected into the common subspace. Comprehensive experimental results on four widely used benchmark datasets show that the proposed ACMR method is superior in learning effective subspace representation and that it significantly outperforms the state-of-the-art cross-modal retrieval methods.
- Published
- 2017
34. Distributed shortest path query processing on dynamic road networks
- Author
-
Zhang, D, Yang, D, Wang, Y, Tan, KL, Cao, J, Shen, HT, Zhang, D, Yang, D, Wang, Y, Tan, KL, Cao, J, and Shen, HT
- Abstract
© 2017, Springer-Verlag Berlin Heidelberg. Shortest path query processing on dynamic road networks is a fundamental component for real-time navigation systems. In the face of an enormous volume of customer demand from Uber and similar apps, it is desirable to study distributed shortest path query processing that can be deployed on elastic and fault-tolerant cloud platforms. In this paper, we combine the merits of distributed streaming computing systems and lightweight indexing to build an efficient shortest path query processing engine on top of Yahoo S4. We propose two types of asynchronous communication algorithms for early termination. One is first-in-first-out message propagation with certain optimizations, and the other is prioritized message propagation with the help of navigational intelligence. Extensive experiments were conducted on large-scale real road networks, and the results show that the query efficiency of our methods can meet the real-time requirement and is superior to Pregel and Pregel+. The source code of our system is publicly available at https://github.com/yangdingyu/cands.
- Published
- 2017
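The engine described in the entry above distributes the computation over a streaming platform, but the sequential core it accelerates is priority-ordered shortest-path expansion. A plain Dijkstra sketch (toy graph, not the distributed algorithm or its early-termination optimizations) is shown below:

    import heapq

    def shortest_path(graph, source, target):
        """Dijkstra with a priority queue: the sequential analogue of the
        prioritized message propagation used for early termination."""
        dist = {source: 0.0}
        pq = [(0.0, source)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == target:
                return d
            if d > dist.get(u, float("inf")):
                continue                     # stale queue entry, skip
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return float("inf")

    road = {"a": [("b", 2.0), ("c", 5.0)], "b": [("c", 1.0), ("d", 4.0)], "c": [("d", 1.0)]}
    print(shortest_path(road, "a", "d"))     # 4.0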
35. Classification by retrieval: Binarizing data and classifiers
- Author
-
Shen, F, Mu, Y, Yang, Y, Liu, W, Liu, L, Song, J, Shen, HT, Shen, F, Mu, Y, Yang, Y, Liu, W, Liu, L, Song, J, and Shen, HT
- Abstract
© 2017 Copyright held by the owner/author(s). This paper proposes a generic formulation that significantly expedites the training and deployment of image classification models, particularly under the scenarios of many image categories and high feature dimensions. As the core idea, our method represents both the images and learned classifiers using binary hash codes, which are simultaneously learned from the training data. Classifying an image thereby reduces to retrieving its nearest class codes in the Hamming space. Specifically, we formulate multiclass image classification as an optimization problem over binary variables. The optimization alternatingly proceeds over the binary classifiers and image hash codes. Profiting from the special property of binary codes, we show that the sub-problems can be efficiently solved through either a binary quadratic program (BQP) or a linear program. In particular, for attacking the BQP problem, we propose a novel bit-flipping procedure which enjoys high efficacy and a local optimality guarantee. Our formulation supports a large family of empirical loss functions and is, in specific, instantiated by exponential and linear losses. Comprehensive evaluations are conducted on several representative image benchmarks. The experiments consistently exhibit reduced computational and memory complexities of model training and deployment, without sacrificing classification accuracy.
- Published
- 2017
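The central reduction in the entry above — classification becomes retrieval of the nearest class code in Hamming space — fits in a few lines; the codes below are random for illustration, whereas the paper learns the image and class codes jointly:

    import numpy as np

    def classify_by_retrieval(image_code, class_codes):
        """Assign the label whose binary class code is nearest in Hamming distance.

        image_code:  (r,) code in {-1, +1} for one image
        class_codes: (C, r) one code per class
        """
        hamming = (class_codes != image_code).sum(axis=1)
        return int(np.argmin(hamming))

    rng = np.random.default_rng(6)
    C, r = 10, 64
    class_codes = np.sign(rng.normal(size=(C, r)))
    image_code = class_codes[3].copy()
    image_code[:5] *= -1                     # flip a few bits to simulate an imperfect code
    print(classify_by_retrieval(image_code, class_codes))   # 3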
36. IF-Matching: Towards Accurate Map-Matching with Information Fusion
- Author
-
Hu, G, Shao, J, Liu, F, Wang, Y, Shen, HT, Hu, G, Shao, J, Liu, F, Wang, Y, and Shen, HT
- Abstract
© 2016 IEEE. With the advance of various location-acquisition technologies, a myriad of GPS trajectories can be collected every day. However, the raw coordinate data captured by sensors often cannot reflect real positions due to many physical constraints and some rules of law. How to accurately match GPS trajectories to roads on a digital map is an important issue. The problem of map-matching is fundamental for many applications. Unfortunately, many existing methods still cannot meet stringent performance requirements in engineering. In particular, low/unstable sampling rate and noisy/lost data are usually big challenges. Information fusion of different data sources is becoming increasingly promising nowadays. As in practice, some other measurements such as speed and moving direction are collected together with the spatial locations acquired, we can make use of not only location coordinates but all data collected. In this paper, we propose a novel model using the related meta-information to describe a moving object, and present an algorithm called IF-Matching for map-matching. It can handle many ambiguous cases which cannot be correctly matched by existing methods. We run our algorithm with taxi trajectory data on a city-wide road network. Compared with two state-of-the-art algorithms of ST-Matching and the winner of GIS Cup 2012, our approach achieves more accurate results.
- Published
- 2017
37. Robust Web Image Annotation via Exploring Multi-Facet and Structural Knowledge
- Author
-
Hu, M, Yang, Y, Shen, F, Zhang, L, Shen, HT, Li, X, Hu, M, Yang, Y, Shen, F, Zhang, L, Shen, HT, and Li, X
- Abstract
© 2017 IEEE. Driven by the rapid development of Internet and digital technologies, we have witnessed the explosive growth of Web images in recent years. Seeing that labels can reflect the semantic contents of the images, automatic image annotation, which can further facilitate the procedure of image semantic indexing, retrieval, and other image management tasks, has become one of the most crucial research directions in multimedia. Most of the existing annotation methods heavily rely on well-labeled training data (expensive to collect) and/or a single view of visual features (insufficient representative power). In this paper, inspired by the promising advance of feature engineering (e.g., CNN features and scale-invariant feature transform features) and inexhaustible image data (associated with noisy and incomplete labels) on the Web, we propose an effective and robust scheme, termed robust multi-view semi-supervised learning (RMSL), for facilitating the image annotation task. Specifically, we exploit both labeled images and unlabeled images to uncover the intrinsic data structural information. Meanwhile, to comprehensively describe an individual datum, we take advantage of the correlated and complemental information derived from multiple facets of image data (i.e., multiple views or features). We devise a robust pairwise constraint on outcomes of different views to achieve annotation consistency. Furthermore, we integrate a robust classifier learning component via ℓ2,p loss, which can provide effective noise identification power during the learning process. Finally, we devise an efficient iterative algorithm to solve the optimization problem in RMSL. We conduct comprehensive experiments on three different data sets, and the results illustrate that our proposed approach is promising for automatic image annotation.
- Published
- 2017
38. Hierarchical Latent Concept Discovery for Video Event Detection
- Author
-
Li, C, Huang, Z, Yang, Y, Cao, J, Sun, X, Shen, HT, Li, C, Huang, Z, Yang, Y, Cao, J, Sun, X, and Shen, HT
- Abstract
© 1992-2012 IEEE. Semantic information is important for video event detection. How to automatically discover, model, and utilize semantic information to facilitate video event detection has been a challenging problem. In this paper, we propose a novel hierarchical video event detection model, which deliberately unifies the processes of underlying semantics discovery and event modeling from video data. Specially, different from most of the approaches based on manually pre-defined concepts, we devise an effective model to automatically uncover video semantics by hierarchically capturing latent static-visual concepts in frame-level and latent activity concepts (i.e., temporal sequence relationships of static-visual concepts) in segment-level. The unified model not only enables a discriminative and descriptive representation for videos, but also alleviates error propagation problem from video representation to event modeling existing in previous methods. A max-margin framework is employed to learn the model. Extensive experiments on four challenging video event datasets, i.e., MED11, CCV, UQE50, and FCVID, have been conducted to demonstrate the effectiveness of the proposed method.
- Published
- 2017
39. Leveraging Weak Semantic Relevance for Complex Video Event Classification
- Author
-
Shen, HT, Li, C, Cao, J, Huang, Z, Zhu, L, Shen, HT, Li, C, Cao, J, Huang, Z, and Zhu, L
- Abstract
© 2017 IEEE. Existing video event classification approaches suffer from limited human-labeled semantic annotations. Weak semantic annotations can be harvested from Web knowledge without involving any human interaction. However, such weak annotations are noisy, and thus cannot be effectively utilized without distinguishing their reliability. In this paper, we propose a novel approach to automatically maximize the utility of weak semantic annotations (formalized as the semantic relevance of video shots to the target event) to facilitate video event classification. A novel attention model is designed to determine the attention scores of video shots, where the weak semantic relevance is considered as attentional guidance. Specifically, our model jointly optimizes two objectives at different levels. The first one is the classification loss corresponding to video-level groundtruth labels, and the second is the shot-level relevance loss corresponding to weak semantic relevance. We use a long short-term memory (LSTM) layer to capture the temporal information carried by the shots of a video. In each timestep, the LSTM employs the attention model to weight the current shot under the guidance of its weak semantic relevance to the event of interest. Thus, we can automatically exploit weak semantic relevance to assist video event classification. Extensive experiments have been conducted on three complex large-scale video event datasets, i.e., MEDTest14, ActivityNet and FCVID. Our approach achieves the state-of-the-art classification performance on all three datasets. The significant performance improvement upon the conventional attention model also demonstrates the effectiveness of our model.
- Published
- 2017
40. Exploring consistent preferences: Discrete hashing with pair-exemplar for scalable landmark search
- Author
-
Zhu, L, Huang, Z, Chang, X, Song, J, Shen, HT, Zhu, L, Huang, Z, Chang, X, Song, J, and Shen, HT
- Abstract
© 2017 ACM. Content-based visual landmark search (CBVLS) enjoys great importance in many practical applications. In this paper, we propose a novel discrete hashing with pair-exemplar (DHPE) to support scalable and efficient large-scale CBVLS. Our approach mainly solves two essential problems in scalable landmark hashing: 1) Intralandmark visual diversity, and 2) Discrete optimization of hashing codes. Motivated by the characteristic of landmark, we explore the consistent preferences of tourists on landmark as pair-exemplars for scalable discrete hashing learning. In this paper, a pair-exemplar is comprised of a canonical view and the corresponding representative tags. Canonical view captures the key visual component of landmarks, and representative tags potentially involve landmark-specific semantics that can cope with the visual variations of intra-landmark. Based on pair-exemplars, a unified hashing learning framework is formulated to combine visual preserving with exemplar graph and the semantic guidance from representative tags. Further to guarantee direct semantic transfer for hashing codes and remove information redundancy, we design a novel optimization method based on augmented Lagrange multiplier to explicitly deal with the discrete constraint, the bit uncorrelated constraint and balance constraint. The whole learning process has linear computation complexity and enjoys desirable scalability. Experiments demonstrate the superior performance of DHPE compared with state-of-the-art methods.
- Published
- 2017
41. Asymmetric sparse hashing
- Author
-
Gao, X, Shen, F, Yang, Y, Xu, X, Li, H, Shen, HT, Gao, X, Shen, F, Yang, Y, Xu, X, Li, H, and Shen, HT
- Abstract
© 2017 IEEE. Learning based hashing has become increasingly popular because of its high efficiency in handling the large scale image retrieval. Preserving the pairwise similarities of data points in the Hamming space is critical in state-of-the-art hashing techniques. However, most previous methods ignore to capture the local geometric structure residing on original data, which is essential for similarity search. In this paper, we propose a novel hashing framework, which simultaneously optimizes similarity preserving hash codes and reconstructs the locally linear structures of data in the Hamming space. In specific, we learn two hash functions such that the resulting two sets of binary codes can well preserve the pairwise similarity and sparse neighborhood in the original feature space. By taking advantage of the flexibility of asymmetric hash functions, we devise an efficient alternating algorithm to optimize the hash coding function and high-quality binary codes jointly. We evaluate the proposed method on several large-scale image datasets, and the results demonstrate it significantly outperforms recent state-of-the-art hashing methods on large-scale image retrieval problems.
- Published
- 2017
42. Unsupervised cross-modal retrieval through adversarial learning
- Author
-
He, L, Xu, X, Lu, H, Yang, Y, Shen, F, Shen, HT, He, L, Xu, X, Lu, H, Yang, Y, Shen, F, and Shen, HT
- Abstract
© 2017 IEEE. The core of existing cross-modal retrieval approaches is to close the gap between different modalities either by finding a maximally correlated subspace or by jointly learning a set of dictionaries. However, the statistical characteristics of the transformed features were never considered. Inspired by recent advances in adversarial learning and domain adaptation, we propose a novel Unsupervised Cross-modal retrieval method based on Adversarial Learning, namely UCAL. In addition to maximizing the correlations between modalities, we add an additional regularization by introducing adversarial learning. In particular, we introduce a modality classifier to predict the modality of a transformed feature. This can be viewed as a regularization on the statistical aspect of the feature transforms, which ensures that the transformed features are also statistically indistinguishable. Experiments on popular multimodal datasets show that UCAL achieves competitive performance compared to state of the art supervised cross-modal retrieval methods.
- Published
- 2017
43. Video Captioning with Attention-Based LSTM and Semantic Consistency
- Author
-
Gao, L, Guo, Z, Zhang, H, Xu, X, Shen, HT, Gao, L, Guo, Z, Zhang, H, Xu, X, and Shen, HT
- Abstract
© 1999-2012 IEEE. Recent progress in using long short-term memory (LSTM) for image captioning has motivated the exploration of their applications for video captioning. By taking a video as a sequence of features, an LSTM model is trained on video-sentence pairs and learns to associate a video to a sentence. However, most existing methods compress an entire video shot or frame into a static representation, without considering attention mechanism which allows for selecting salient features. Furthermore, existing approaches usually model the translating error, but ignore the correlations between sentence semantics and visual content. To tackle these issues, we propose a novel end-to-end framework named aLSTMs, an attention-based LSTM model with semantic consistency, to transfer videos to natural sentences. This framework integrates attention mechanism with LSTM to capture salient structures of video, and explores the correlation between multimodal representations (i.e., words and visual content) for generating sentences with rich semantic content. Specifically, we first propose an attention mechanism that uses the dynamic weighted sum of local two-dimensional convolutional neural network representations. Then, an LSTM decoder takes these visual features at time t and the word-embedding feature at time t-1 to generate important words. Finally, we use multimodal embedding to map the visual and sentence features into a joint space to guarantee the semantic consistence of the sentence description and the video visual content. Experiments on the benchmark datasets demonstrate that our method using single feature can achieve competitive or even better results than the state-of-the-art baselines for video captioning in both BLEU and METEOR.
- Published
- 2017
44. A system for spatiotemporal anomaly localization in surveillance videos
- Author
-
Wu, H, Shao, J, Xu, X, Shen, F, Shen, HT, Wu, H, Shao, J, Xu, X, Shen, F, and Shen, HT
- Abstract
© 2017 Copyright held by the owner/author(s). Anomaly detection and localization in surveillance videos have attracted broad attention in both academia and industry for their importance to public safety, yet they remain challenging. In this demonstration, we propose an anomaly detection algorithm called 2stream-VAE/GAN by embedding VAE/GAN in a two-stream architecture. By taking both spatial and temporal information into consideration, normality can be captured and anomaly detection can be achieved. With an outlier detection rule, the system automatically locates anomalies based on a pre-trained model, which is well suited to both streaming and local videos.
- Published
- 2017
45. WebPainter: Collaborative Stroke-Based Rendering Through HTML5 and WebGL
- Author
-
Xie, N, Ren, M, Yang, W, Yang, Y, Shen, HT, Xie, N, Ren, M, Yang, W, Yang, Y, and Shen, HT
- Abstract
© 2017, Springer International Publishing AG. Computer-aided drawing systems assist users in converting real input photos into painterly-style images. Nowadays, this is widely developed as a cloud brush-engine service in many creative software tools and applications of artistic rendering such as Prisma [1], Photoshop [2], and Meitu [3], because the machine-learning server is more powerful than the stand-alone version. In this paper, we propose a web collaborative Stroke-based Learning and Rendering (WebSBLR) system. Different from existing methods that mainly focus on artistic filters, we concentrate on a realistic stroke rendering engine for the client browser using WebGL and HTML5. Moreover, we implement the learning-based stroke drawing path generation module on the server. In this way, we are able to achieve computer-supported cooperative work (CSCW), especially for multi-screen synchronous interaction. The experiments demonstrate that our method is effective for web-based multi-screen painting simulation.
- Published
- 2017
46. Hierarchical LSTM with adjusted temporal attention for video captioning
- Author
-
Song, J, Gao, L, Guo, Z, Liu, W, Zhang, D, Shen, HT, Song, J, Gao, L, Guo, Z, Liu, W, Zhang, D, and Shen, HT
- Abstract
Recent progress has been made in using attention-based encoder-decoder frameworks for video captioning. However, most existing decoders apply the attention mechanism to every generated word including both visual words (e.g., "gun" and "shooting") and non-visual words (e.g. "the", "a"). In fact, these non-visual words can be easily predicted using a natural language model without considering visual signals or attention. Imposing an attention mechanism on non-visual words could mislead and decrease the overall performance of video captioning. To address this issue, we propose a hierarchical LSTM with adjusted temporal attention (hLSTMat) approach for video captioning. Specifically, the proposed framework utilizes the temporal attention for selecting specific frames to predict the related words, while the adjusted temporal attention is for deciding whether to depend on the visual information or the language context information. Also, a hierarchical LSTM is designed to simultaneously consider both low-level visual information and high-level language context information to support the video caption generation. To demonstrate the effectiveness of our proposed framework, we test our method on two prevalent datasets: MSVD and MSR-VTT, and experimental results show that our approach outperforms the state-of-the-art methods on both datasets.
- Published
- 2017
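The adjusted temporal attention described in the entry above decides, per generated word, how much to rely on attended visual features versus the language context. The sketch below illustrates that gating idea with a single scalar gate; the dimensions and the gate form are assumptions for illustration, not the paper's exact formulation:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def adjusted_context(visual_ctx, hidden, w_gate):
        """Blend attended visual context and language context with a scalar gate beta.

        beta close to 1 -> rely on the attended visual features (visual words);
        beta close to 0 -> rely on the decoder's language state (non-visual words).
        """
        beta = sigmoid(hidden @ w_gate)                   # scalar gate from decoder state
        return beta * visual_ctx + (1.0 - beta) * hidden, beta

    rng = np.random.default_rng(7)
    H = 256                                               # assume both contexts share dim H
    ctx, beta = adjusted_context(rng.normal(size=H), rng.normal(size=H), rng.normal(size=H))
    print(ctx.shape, round(float(beta), 3))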
47. Asymmetric Binary Coding for Image Search
- Author
-
Shen, F, Yang, Y, Liu, L, Liu, W, Tao, D, Shen, HT, Shen, F, Yang, Y, Liu, L, Liu, W, Tao, D, and Shen, HT
- Abstract
© 2017 IEEE. Learning to hash has attracted broad research interests in recent computer vision and machine learning studies, due to its ability to accomplish efficient approximate nearest neighbor search. However, the closely related task, maximum inner product search (MIPS), has rarely been studied in this literature. To facilitate the MIPS study, in this paper, we introduce a general binary coding framework based on asymmetric hash functions, named asymmetric inner-product binary coding (AIBC). In particular, AIBC learns two different hash functions, which can reveal the inner products between original data vectors by the generated binary vectors. Although conceptually simple, the associated optimization is very challenging due to the highly nonsmooth nature of the objective that involves sign functions. We tackle the nonsmooth optimization in an alternating manner, by which each single coding function is optimized in an efficient discrete manner. We also simplify the objective by discarding the quadratic regularization term which significantly boosts the learning efficiency. Both problems are optimized in an effective discrete way without continuous relaxations, which produces high-quality hash codes. In addition, we extend the AIBC approach to the supervised hashing scenario, where the inner products of learned binary codes are forced to fit the supervised similarities. Extensive experiments on several benchmark image retrieval databases validate the superiority of the AIBC approaches over many recently proposed hashing algorithms.
- Published
- 2017
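AIBC's stated goal in the entry above is that inner products of two different binary code sets reveal the inner products of the original vectors. The sketch below simply evaluates that fitting error for random (untrained) codes; any scaling or regularization terms in the actual objective are omitted, and all names are illustrative:

    import numpy as np

    def aibc_objective(X, Y, Bx, By):
        """AIBC-style fitting error: binary inner products should reveal the real ones.

        Bx codes the items in X, By codes the items in Y (asymmetric: two hash functions),
        both in {-1, +1}.
        """
        return ((X @ Y.T - Bx @ By.T) ** 2).sum()

    rng = np.random.default_rng(8)
    X, Y = rng.normal(size=(500, 32)), rng.normal(size=(50, 32))
    Bx = np.sign(rng.normal(size=(500, 16)))
    By = np.sign(rng.normal(size=(50, 16)))
    print(aibc_objective(X, Y, Bx, By))      # large for random codes; AIBC learns Bx, By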
48. A deep approach for multi-modal user attribute modeling
- Author
-
Huang, X, Yang, Z, Yang, Y, Shen, F, Xie, N, Shen, HT, Huang, X, Yang, Z, Yang, Y, Shen, F, Xie, N, and Shen, HT
- Abstract
© 2017, Springer International Publishing AG. With the explosive growth of user-generated contents (e.g., texts, images and videos) on social networks, it is of great significance to analyze and extract people’s interests from the massive social media data, thus providing more accurate personalized recommendations and services. In this paper, we propose a novel multimodal deep learning algorithm for user profiling, dubbed multi-modal User Attribute Model (mmUAM), which explores the intrinsic semantic correlations across different modalities. Our proposed model is based on Poisson Gamma Belief Network (PGBN), which is a deep learning topic model for count data in documents. By improving PGBN, we succeed in addressing the problem of learning a shared representation between texts and images in order to obtain textual and visual attributes for users. To evaluate the effectiveness of our proposed method, we collect a real dataset from Sina Weibo. Experimental results demonstrate that the proposed algorithm achieves encouraging performance compared with several state-of-the-art methods.
- Published
- 2017
49. Exploiting depth from single monocular images for object detection and semantic segmentation
- Author
-
Cao, Y, Shen, C, Shen, HT, Cao, Y, Shen, C, and Shen, HT
- Abstract
© 1992-2012 IEEE. Augmenting RGB data with measured depth has been shown to improve the performance of a range of tasks in computer vision, including object detection and semantic segmentation. Although depth sensors such as the Microsoft Kinect have facilitated easy acquisition of such depth information, the vast majority of images used in vision tasks do not contain depth information. In this paper, we show that augmenting RGB images with estimated depth can also improve the accuracy of both object detection and semantic segmentation. Specifically, we first exploit the recent success of depth estimation from monocular images and learn a deep depth estimation model. Then, we learn deep depth features from the estimated depth and combine with RGB features for object detection and semantic segmentation. In addition, we propose an RGB-D semantic segmentation method, which applies a multi-task training scheme: Semantic label prediction and depth value regression. We test our methods on several data sets and demonstrate that incorporating information from estimated depth improves the performance of object detection and semantic segmentation remarkably.
- Published
- 2017
50. Cosmetic-Vis: Sample-based 3D facial editor for cosmetic medical visualization
- Author
-
Ren, M, Xie, N, Yang, Y, Shen, HT, Ren, M, Xie, N, Yang, Y, and Shen, HT
- Abstract
© 2017 Copyright held by the owner/author(s). Cosmetic medical visualization has become an important application in computer graphics, especially for facial appearance visualization [Chandawarkar et al. 2013]. Recent approaches have achieved very realistic results with blend shapes [Ma et al. 2012], the most practical tool for creating facial appearance and expression animation in entertainment-industry domains (VFX and games). In many role-playing games (RPGs), players are able to edit a character's facial appearance. However, the results can be unrealistic, since players' manual operations may cause arbitrary discontinuities and position-relationship violations (a selected nose might end up higher than the bottom of eyes selected from a different character). Moreover, the validity of changing facial organs has not yet been well considered.
- Published
- 2017