233 results for "Kohli, Pushmeet"
Search Results
2. Collaboration between clinicians and vision–language models in radiology report generation
- Author
- Tanno, Ryutaro, Barrett, David G. T., Sellergren, Andrew, Ghaisas, Sumedh, Dathathri, Sumanth, See, Abigail, Welbl, Johannes, Lau, Charles, Tu, Tao, Azizi, Shekoofeh, Singhal, Karan, Schaekermann, Mike, May, Rhys, Lee, Roy, Man, SiWai, Mahdavi, Sara, Ahmed, Zahra, Matias, Yossi, Barral, Joelle, Eslami, S. M. Ali, Belgrave, Danielle, Liu, Yun, Kalidindi, Sreenivasa Raju, Shetty, Shravya, Natarajan, Vivek, Kohli, Pushmeet, Huang, Po-Sen, Karthikesalingam, Alan, and Ktena, Ira
- Published
- 2024
- Full Text
- View/download PDF
3. Scalable watermarking for identifying large language model outputs
- Author
- Dathathri, Sumanth, See, Abigail, Ghaisas, Sumedh, Huang, Po-Sen, McAdam, Rob, Welbl, Johannes, Bachani, Vandana, Kaskasoli, Alex, Stanforth, Robert, Matejovicova, Tatiana, Hayes, Jamie, Vyas, Nidhi, Merey, Majd Al, Brown-Cohen, Jonah, Bunel, Rudy, Balle, Borja, Cemgil, Taylan, Ahmed, Zahra, Stacpoole, Kitty, Shumailov, Ilia, Baetu, Ciprian, Gowal, Sven, Hassabis, Demis, and Kohli, Pushmeet
- Published
- 2024
- Full Text
- View/download PDF
4. Accurate structure prediction of biomolecular interactions with AlphaFold 3
- Author
- Abramson, Josh, Adler, Jonas, Dunger, Jack, Evans, Richard, Green, Tim, Pritzel, Alexander, Ronneberger, Olaf, Willmore, Lindsay, Ballard, Andrew J., Bambrick, Joshua, Bodenstein, Sebastian W., Evans, David A., Hung, Chia-Chun, O’Neill, Michael, Reiman, David, Tunyasuvunakool, Kathryn, Wu, Zachary, Žemgulytė, Akvilė, Arvaniti, Eirini, Beattie, Charles, Bertolli, Ottavia, Bridgland, Alex, Cherepanov, Alexey, Congreve, Miles, Cowen-Rivers, Alexander I., Cowie, Andrew, Figurnov, Michael, Fuchs, Fabian B., Gladman, Hannah, Jain, Rishub, Khan, Yousuf A., Low, Caroline M. R., Perlin, Kuba, Potapenko, Anna, Savy, Pascal, Singh, Sukhdeep, Stecula, Adrian, Thillaisundaram, Ashok, Tong, Catherine, Yakneen, Sergei, Zhong, Ellen D., Zielinski, Michal, Žídek, Augustin, Bapst, Victor, Kohli, Pushmeet, Jaderberg, Max, Hassabis, Demis, and Jumper, John M.
- Published
- 2024
- Full Text
- View/download PDF
5. Addendum: Accurate structure prediction of biomolecular interactions with AlphaFold 3
- Author
- Abramson, Josh, Adler, Jonas, Dunger, Jack, Evans, Richard, Green, Tim, Pritzel, Alexander, Ronneberger, Olaf, Willmore, Lindsay, Ballard, Andrew J., Bambrick, Joshua, Bodenstein, Sebastian W., Evans, David A., Hung, Chia-Chun, O’Neill, Michael, Reiman, David, Tunyasuvunakool, Kathryn, Wu, Zachary, Žemgulytė, Akvilė, Arvaniti, Eirini, Beattie, Charles, Bertolli, Ottavia, Bridgland, Alex, Cherepanov, Alexey, Congreve, Miles, Cowen-Rivers, Alexander I., Cowie, Andrew, Figurnov, Michael, Fuchs, Fabian B., Gladman, Hannah, Jain, Rishub, Khan, Yousuf A., Low, Caroline M. R., Perlin, Kuba, Potapenko, Anna, Savy, Pascal, Singh, Sukhdeep, Stecula, Adrian, Thillaisundaram, Ashok, Tong, Catherine, Yakneen, Sergei, Zhong, Ellen D., Zielinski, Michal, Žídek, Augustin, Bapst, Victor, Kohli, Pushmeet, Jaderberg, Max, Hassabis, Demis, and Jumper, John M.
- Published
- 2024
- Full Text
- View/download PDF
6. Generative models improve fairness of medical classifiers under distribution shifts
- Author
- Ktena, Ira, Wiles, Olivia, Albuquerque, Isabela, Rebuffi, Sylvestre-Alvise, Tanno, Ryutaro, Roy, Abhijit Guha, Azizi, Shekoofeh, Belgrave, Danielle, Kohli, Pushmeet, Cemgil, Taylan, Karthikesalingam, Alan, and Gowal, Sven
- Published
- 2024
- Full Text
- View/download PDF
7. Mathematical discoveries from program search with large language models
- Author
- Romera-Paredes, Bernardino, Barekatain, Mohammadamin, Novikov, Alexander, Balog, Matej, Kumar, M. Pawan, Dupont, Emilien, Ruiz, Francisco J. R., Ellenberg, Jordan S., Wang, Pengming, Fawzi, Omar, Kohli, Pushmeet, and Fawzi, Alhussein
- Published
- 2024
- Full Text
- View/download PDF
8. Scientific discovery in the age of artificial intelligence
- Author
- Wang, Hanchen, Fu, Tianfan, Du, Yuanqi, Gao, Wenhao, Huang, Kexin, Liu, Ziming, Chandak, Payal, Liu, Shengchao, Van Katwyk, Peter, Deac, Andreea, Anandkumar, Anima, Bergen, Karianne, Gomes, Carla P., Ho, Shirley, Kohli, Pushmeet, Lasenby, Joan, Leskovec, Jure, Liu, Tie-Yan, Manrai, Arjun, Marks, Debora, Ramsundar, Bharath, Song, Le, Sun, Jimeng, Tang, Jian, Veličković, Petar, Welling, Max, Zhang, Linfeng, Coley, Connor W., Bengio, Yoshua, and Zitnik, Marinka
- Published
- 2023
- Full Text
- View/download PDF
9. Enhancing the reliability and accuracy of AI-enabled diagnosis via complementarity-driven deferral to clinicians
- Author
- Dvijotham, Krishnamurthy (Dj), Winkens, Jim, Barsbey, Melih, Ghaisas, Sumedh, Stanforth, Robert, Pawlowski, Nick, Strachan, Patricia, Ahmed, Zahra, Azizi, Shekoofeh, Bachrach, Yoram, Culp, Laura, Daswani, Mayank, Freyberg, Jan, Kelly, Christopher, Kiraly, Atilla, Kohlberger, Timo, McKinney, Scott, Mustafa, Basil, Natarajan, Vivek, Geras, Krzysztof, Witowski, Jan, Qin, Zhi Zhen, Creswell, Jacob, Shetty, Shravya, Sieniek, Marcin, Spitz, Terry, Corrado, Greg, Kohli, Pushmeet, Cemgil, Taylan, and Karthikesalingam, Alan
- Published
- 2023
- Full Text
- View/download PDF
10. Discovering faster matrix multiplication algorithms with reinforcement learning
- Author
- Fawzi, Alhussein, Balog, Matej, Huang, Aja, Hubert, Thomas, Romera-Paredes, Bernardino, Barekatain, Mohammadamin, Novikov, Alexander, R. Ruiz, Francisco J., Schrittwieser, Julian, Swirszcz, Grzegorz, Silver, David, Hassabis, Demis, and Kohli, Pushmeet
- Published
- 2022
- Full Text
- View/download PDF
11. Publisher Correction: Scientific discovery in the age of artificial intelligence
- Author
- Wang, Hanchen, Fu, Tianfan, Du, Yuanqi, Gao, Wenhao, Huang, Kexin, Liu, Ziming, Chandak, Payal, Liu, Shengchao, Van Katwyk, Peter, Deac, Andreea, Anandkumar, Anima, Bergen, Karianne, Gomes, Carla P., Ho, Shirley, Kohli, Pushmeet, Lasenby, Joan, Leskovec, Jure, Liu, Tie-Yan, Manrai, Arjun, Marks, Debora, Ramsundar, Bharath, Song, Le, Sun, Jimeng, Tang, Jian, Veličković, Petar, Welling, Max, Zhang, Linfeng, Coley, Connor W., Bengio, Yoshua, and Zitnik, Marinka
- Published
- 2023
- Full Text
- View/download PDF
12. Magnetic control of tokamak plasmas through deep reinforcement learning
- Author
- Degrave, Jonas, Felici, Federico, Buchli, Jonas, Neunert, Michael, Tracey, Brendan, Carpanese, Francesco, Ewalds, Timo, Hafner, Roland, Abdolmaleki, Abbas, de las Casas, Diego, Donner, Craig, Fritz, Leslie, Galperti, Cristian, Huber, Andrea, Keeling, James, Tsimpoukelli, Maria, Kay, Jackie, Merle, Antoine, Moret, Jean-Marc, Noury, Seb, Pesamosca, Federico, Pfau, David, Sauter, Olivier, Sommariva, Cristian, Coda, Stefano, Duval, Basil, Fasoli, Ambrogio, Kohli, Pushmeet, Kavukcuoglu, Koray, Hassabis, Demis, and Riedmiller, Martin
- Published
- 2022
- Full Text
- View/download PDF
13. Advancing mathematics by guiding human intuition with AI
- Author
- Davies, Alex, Veličković, Petar, Buesing, Lars, Blackwell, Sam, Zheng, Daniel, Tomašev, Nenad, Tanburn, Richard, Battaglia, Peter, Blundell, Charles, Juhász, András, Lackenby, Marc, Williamson, Geordie, Hassabis, Demis, and Kohli, Pushmeet
- Published
- 2021
- Full Text
- View/download PDF
14. Making sense of raw input
- Author
- Evans, Richard, Bošnjak, Matko, Buesing, Lars, Ellis, Kevin, Pfau, David, Kohli, Pushmeet, and Sergot, Marek
- Published
- 2021
- Full Text
- View/download PDF
15. Is AI the Right Tool to Solve That Problem?
- Author
- Cervini, Paolo, Farronato, Chiara, Kohli, Pushmeet, and Van Alstyne, Marshall W.
- Subjects
- MACHINE learning, ARTIFICIAL intelligence, GENERATIVE artificial intelligence, LANGUAGE models, MULTI-objective optimization
- Abstract
The article discusses the effectiveness of AI in solving problems with specific features and provides guidance on identifying suitable problems for AI solutions. It emphasizes the importance of high-quality data, clear objectives, and adaptability in AI projects. Examples from Google DeepMind illustrate successful AI applications. The article also highlights the challenges of defining clear objectives and the need for human feedback in AI systems. It concludes by suggesting a strategic framework for leveraging AI's potential for innovation and societal progress.
- Published
- 2024
16. Effective gene expression prediction from sequence by integrating long-range interactions
- Author
- Avsec, Žiga, Agarwal, Vikram, Visentin, Daniel, Ledsam, Joseph R., Grabska-Barwinska, Agnieszka, Taylor, Kyle R., Assael, Yannis, Jumper, John, Kohli, Pushmeet, and Kelley, David R.
- Published
- 2021
- Full Text
- View/download PDF
17. Highly accurate protein structure prediction with AlphaFold
- Author
- Jumper, John, Evans, Richard, Pritzel, Alexander, Green, Tim, Figurnov, Michael, Ronneberger, Olaf, Tunyasuvunakool, Kathryn, Bates, Russ, Žídek, Augustin, Potapenko, Anna, Bridgland, Alex, Meyer, Clemens, Kohl, Simon A. A., Ballard, Andrew J., Cowie, Andrew, Romera-Paredes, Bernardino, Nikolov, Stanislav, Jain, Rishub, Adler, Jonas, Back, Trevor, Petersen, Stig, Reiman, David, Clancy, Ellen, Zielinski, Michal, Steinegger, Martin, Pacholska, Michalina, Berghammer, Tamas, Bodenstein, Sebastian, Silver, David, Vinyals, Oriol, Senior, Andrew W., Kavukcuoglu, Koray, Kohli, Pushmeet, and Hassabis, Demis
- Published
- 2021
- Full Text
- View/download PDF
18. Highly accurate protein structure prediction for the human proteome
- Author
- Tunyasuvunakool, Kathryn, Adler, Jonas, Wu, Zachary, Green, Tim, Zielinski, Michal, Žídek, Augustin, Bridgland, Alex, Cowie, Andrew, Meyer, Clemens, Laydon, Agata, Velankar, Sameer, Kleywegt, Gerard J., Bateman, Alex, Evans, Richard, Pritzel, Alexander, Figurnov, Michael, Ronneberger, Olaf, Bates, Russ, Kohl, Simon A. A., Potapenko, Anna, Ballard, Andrew J., Romera-Paredes, Bernardino, Nikolov, Stanislav, Jain, Rishub, Clancy, Ellen, Reiman, David, Petersen, Stig, Senior, Andrew W., Kavukcuoglu, Koray, Birney, Ewan, Kohli, Pushmeet, Jumper, John, and Hassabis, Demis
- Published
- 2021
- Full Text
- View/download PDF
19. Making sense of sensory input
- Author
- Evans, Richard, Hernández-Orallo, José, Welbl, Johannes, Kohli, Pushmeet, and Sergot, Marek
- Published
- 2021
- Full Text
- View/download PDF
20. Efficient Exploratory Synthesis of Quaternary Cesium Chlorides Guided by In Silico Predictions.
- Author
- Miura, Akira, Aykol, Muratahan, Kozaki, Shumma, Moriyoshi, Chikako, Kobayashi, Shintaro, Kawaguchi, Shogo, Lee, Chul-Ho, Wang, Yongming, Merchant, Amil, Batzner, Simon, Kageyama, Hiroshi, Tadanaga, Kiyoharu, Kohli, Pushmeet, and Cubuk, Ekin Dogus
- Published
- 2024
- Full Text
- View/download PDF
21. Improved protein structure prediction using potentials from deep learning
- Author
- Senior, Andrew W., Evans, Richard, Jumper, John, Kirkpatrick, James, Sifre, Laurent, Green, Tim, Qin, Chongli, Žídek, Augustin, Nelson, Alexander W. R., Bridgland, Alex, Penedones, Hugo, Petersen, Stig, Simonyan, Karen, Crossan, Steve, Kohli, Pushmeet, Jones, David T., Silver, David, Kavukcuoglu, Koray, and Hassabis, Demis
- Published
- 2020
- Full Text
- View/download PDF
22. Minimizing dynamic and higher order energy functions using graph cuts
- Author
- Kohli, Pushmeet
- Subjects
- 006.37
- Published
- 2007
23. Fast and accurate scene text understanding with image binarization and off-the-shelf OCR
- Author
- Milyaev, Sergey, Barinova, Olga, Novikova, Tatiana, Kohli, Pushmeet, and Lempitsky, Victor
- Published
- 2015
- Full Text
- View/download PDF
24. Manifestations of user personality in website choice and behaviour on online social networks
- Author
- Kosinski, Michal, Bachrach, Yoram, Kohli, Pushmeet, Stillwell, David, and Graepel, Thore
- Published
- 2014
- Full Text
- View/download PDF
25. Modeling SARS‐CoV‐2 proteins in the CASP‐commons experiment
- Author
- Kryshtafovych, Andriy, Moult, John, Billings, Wendy, Della Corte, Dennis, Fidelis, Krzysztof, Kwon, Sohee, Olechnovič, Kliment, Seok, Chaok, Venclovas, Česlovas, Won, Jonghun, Adhikari, Badri, Adiyaman, Recep, Aguirre-Plans, Joaquim, Anishchenko, Ivan, Baek, Minkyung, Baker, David, Baldassarre, Frederico, Barger, Jacob, Bhattacharya, Sutanu, Bhattacharya, Debswapna, Bitton, Mor, Cao, Renzhi, Cheng, Jianlin, Christoffer, Charles, Czaplewski, Cezary, Elofsson, Arne, Faraggi, Eshel, Feig, Michael, Fernandez-Fuentes, Narcis, Grishin, Nick, Grudinin, Sergei, Guo, Zhiye, Hanazono, Yuya, Hassabis, Demis, Hedelius, Bryce, Heo, Lim, Hiranuma, Naozumi, Hunt, Cassandra, Igashov, Ilia, Ishida, Takashi, Jernigan, Robert, Jones, David, Jumper, John, Kadukova, Maria, Kandathil, Shaun, Keasar, Chen, Kihara, Daisuke, Kinch, Lisa, Kiyota, Yasuomi, Kloczkowski, Andrzje, Kohli, Pushmeet, Kogut, Mateusz, Laine, Elodie, Lilley, Cade, Liu, Jian, Liwo, Adam, Lubecka, Emilia, Mondal, Arup, Morris, Connor, Mcguffin, Liam, Molina, Alexis, Nakamura, Tsukasa, Oliva, Baldo, Perez, Alberto, Pozzati, Gabriele, Sarkar, Daipayan, Sato, Rin, Schwede, Torsten, Shrestha, Bikash, Sidi, Tomer, Studer, Gabriel, Shuvo, Md Hossain, Takeda-Shitaka, Mayuko, Takei, Yuma, Terashi, Genki, Tomii, Kentaro, Tsuchiya, Yuko, Tunyasuvunakool, Kathryn, Waliner, Björn, Wu, Tianqi, Xu, Jinbo, Yamamori, Yu, Zhang, Chengxin, Zhang, Yang, and Zheng, Wei
- Subjects
- Models, Molecular; 2019-20 coronavirus outbreak; Research groups; Protein Conformation; Computer science; Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); Model accuracy; Genome, Viral; Biochemistry; Viroporin Proteins; Viral Proteins; Protein Domains; Structural Biology; EMA; Humans; CASP; Molecular Biology; COVID-19; Protein structure prediction; Model quality; Critical assessment; Data mining; Bioinformatics
- Abstract
Critical Assessment of Structure Prediction (CASP) is an organization aimed at advancing the state of the art in computing protein structure from sequence. In the spring of 2020, CASP launched a community project to compute the structures of the most structurally challenging proteins coded for in the SARS-CoV-2 genome. Forty-seven research groups submitted over 3000 three-dimensional models and 700 sets of accuracy estimates on 10 proteins. The resulting models were released to the public. CASP community members also worked together to provide estimates of local and global accuracy and to identify structure-based domain boundaries for some proteins. Subsequently, two of these structures (ORF3a and ORF8) were solved experimentally, allowing assessment of both model quality and the accuracy estimates. Models from the AlphaFold2 group were found to be in good agreement with the experimental structures, with main-chain GDT_TS accuracy scores ranging from 63 (a correct topology) to 87 (competitive with experiment).
- Published
- 2021
- Full Text
- View/download PDF
26. Inference Methods for CRFs with Co-occurrence Statistics
- Author
- Ladický, Ľubor, Russell, Chris, Kohli, Pushmeet, and Torr, Philip H. S.
- Published
- 2013
- Full Text
- View/download PDF
27. Atomistic graph networks for experimental materials property prediction
- Author
- Xie, Tian, Bapst, Victor, Gaunt, Alexander L., Obika, Annette, Back, Trevor, Hassabis, Demis, Kohli, Pushmeet, and Kirkpatrick, James
- Subjects
- Condensed Matter - Materials Science, Materials Science (cond-mat.mtrl-sci), FOS: Physical sciences
- Abstract
Machine Learning (ML) has the potential to accelerate the discovery of new materials and shed light on useful properties of existing materials. A key difficulty when applying ML in materials science is that experimental datasets of material properties tend to be small. In this work we show how material descriptors can be learned from the structures present in large-scale datasets of material simulations, and how these descriptors can be used to improve the prediction of an experimental property, the formation energy of a solid. The material descriptors are learned by training a Graph Neural Network to regress simulated formation energies from a material's atomistic structure. Using these learned features for experimental property prediction outperforms existing methods that are based solely on chemical composition. Moreover, we find that the advantage of our approach increases as the generalization requirements of the task are made more stringent, for example when limiting the amount of training data or when generalizing to unseen chemical spaces.
- Published
- 2021
28. User-Centric Learning and Evaluation of Interactive Segmentation Systems
- Author
- Kohli, Pushmeet, Nickisch, Hannes, Rother, Carsten, and Rhemann, Christoph
- Published
- 2012
- Full Text
- View/download PDF
29. Geometric Image Parsing in Man-Made Environments
- Author
- Tretyak, Elena, Barinova, Olga, Kohli, Pushmeet, and Lempitsky, Victor
- Published
- 2012
- Full Text
- View/download PDF
30. Measuring uncertainty in graph cut solutions
- Author
- Kohli, Pushmeet and Torr, Philip H.S.
- Published
- 2008
- Full Text
- View/download PDF
31. Robust Higher Order Potentials for Enforcing Label Consistency
- Author
- Kohli, Pushmeet, Ladický, L’ubor, and Torr, Philip H. S.
- Published
- 2009
- Full Text
- View/download PDF
32. P³ & beyond: move making algorithms for solving higher order functions
- Author
- Kohli, Pushmeet, Kumar, M. Pawan, and Torr, Philip H.S.
- Subjects
- Algorithm, Algorithms -- Analysis, Polynomials -- Analysis
- Published
- 2009
33. Simultaneous Segmentation and Pose Estimation of Humans Using Dynamic Graph Cuts
- Author
- Kohli, Pushmeet, Rihan, Jonathan, Bray, Matthieu, and Torr, Philip H. S.
- Published
- 2008
- Full Text
- View/download PDF
34. Training Generative Adversarial Networks by Solving Ordinary Differential Equations
- Author
- Qin, Chongli, Wu, Yan, Springenberg, Jost Tobias, Brock, Andrew, Donahue, Jeff, Lillicrap, Timothy P., and Kohli, Pushmeet
- Subjects
- FOS: Computer and information sciences, Computer Science - Machine Learning, Statistics - Machine Learning, Machine Learning (stat.ML), Machine Learning (cs.LG)
- Abstract
The instability of Generative Adversarial Network (GAN) training has frequently been attributed to gradient descent. Consequently, recent methods have aimed to tailor the models and training procedures to stabilise the discrete updates. In contrast, we study the continuous-time dynamics induced by GAN training. Both theory and toy experiments suggest that these dynamics are in fact surprisingly stable. From this perspective, we hypothesise that instabilities in training GANs arise from the integration error in discretising the continuous dynamics. We experimentally verify that well-known ODE solvers (such as Runge-Kutta) can stabilise training when combined with a regulariser that controls the integration error. Our approach represents a radical departure from previous methods, which typically use adaptive optimisation and stabilisation techniques that constrain the functional space (e.g. Spectral Normalisation). Evaluation on CIFAR-10 and ImageNet shows that our method outperforms several strong baselines, demonstrating its efficacy.
- Published
- 2020
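The continuous-time view in the abstract above can be illustrated on the classic bilinear toy game (a "Dirac GAN"-style example, not the paper's actual models or code): the exact flow is a pure rotation, explicit Euler (plain simultaneous gradient descent/ascent) spirals outward, while a Runge-Kutta solver with much smaller integration error stays near the conserved orbit. A minimal sketch:

```python
# Toy bilinear game V(x, y) = x * y. Simultaneous gradient descent/ascent
# follows the ODE  dx/dt = -y,  dy/dt = x,  whose exact flow is a pure
# rotation: x^2 + y^2 is conserved along the continuous dynamics.
import math

def field(x, y):
    return -y, x  # continuous-time "training" dynamics

def euler_step(x, y, h):
    dx, dy = field(x, y)
    return x + h * dx, y + h * dy

def rk4_step(x, y, h):
    k1 = field(x, y)
    k2 = field(x + h/2 * k1[0], y + h/2 * k1[1])
    k3 = field(x + h/2 * k2[0], y + h/2 * k2[1])
    k4 = field(x + h * k3[0], y + h * k3[1])
    return (x + h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def radius_after(step, n=500, h=0.1):
    x, y = 1.0, 0.0
    for _ in range(n):
        x, y = step(x, y, h)
    return math.hypot(x, y)

# Explicit Euler (i.e. plain alternating-free simultaneous GD) multiplies
# the radius by sqrt(1 + h^2) every step, so it spirals outward; RK4's
# integration error is tiny, so it stays close to the conserved radius 1.
print(radius_after(euler_step))  # ~12: diverges
print(radius_after(rk4_step))    # ~1.0: stable
```

The divergence of Euler here is an integration-error effect, not a property of the underlying dynamics, which is exactly the distinction the abstract draws.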
35. Evaluating the Apperception Engine
- Author
- Evans, Richard, Hernandez-Orallo, Jose, Welbl, Johannes, Kohli, Pushmeet, and Sergot, Marek
- Subjects
- FOS: Computer and information sciences, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence
- Abstract
The Apperception Engine is an unsupervised learning system. Given a sequence of sensory inputs, it constructs a symbolic causal theory that both explains the sensory sequence and satisfies a set of unity conditions. The unity conditions insist that the constituents of the theory (objects, properties, and laws) must be integrated into a coherent whole. Once a theory has been constructed, it can be applied to predict future sensor readings, retrodict earlier readings, or impute missing readings. In this paper, we evaluate the Apperception Engine in a diverse set of domains, including cellular automata, rhythms and simple nursery tunes, multi-modal binding problems, occlusion tasks, and sequence-induction intelligence tests. In each domain, we test the engine's ability to predict future sensor values, retrodict earlier sensor values, and impute missing sensory data. The engine performs well in all these domains, significantly outperforming neural net baselines and state-of-the-art inductive logic programming systems. These results are significant because neural nets typically struggle to solve the binding problem (where information from different modalities must somehow be combined into different aspects of one unified object) and fail to solve occlusion tasks (in which objects are sometimes visible and sometimes obscured from view). We note in particular that in the sequence-induction intelligence tests, our system achieved human-level performance. This is notable because our system is not a bespoke system designed specifically to solve intelligence tests, but a general-purpose system designed to make sense of any sensory sequence.
- Published
- 2020
36. Strong Generalization and Efficiency in Neural Programs
- Author
- Li, Yujia, Gimeno, Felix, Kohli, Pushmeet, and Vinyals, Oriol
- Subjects
- FOS: Computer and information sciences, Computer Science - Machine Learning, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Statistics - Machine Learning, Computer Science - Neural and Evolutionary Computing, Machine Learning (stat.ML), Neural and Evolutionary Computing (cs.NE), Machine Learning (cs.LG)
- Abstract
We study the problem of learning efficient algorithms that strongly generalize in the framework of neural program induction. By carefully designing the input/output interfaces of the neural model, and through imitation, we are able to learn models that produce correct results for arbitrary input sizes, achieving strong generalization. Moreover, by using reinforcement learning, we optimize for program efficiency metrics and discover new algorithms that surpass the teacher used in imitation. With this, our approach can learn to outperform custom-written solutions for a variety of problems, as we tested it on sorting, searching in ordered lists, and the NP-complete 0/1 knapsack problem, which sets a notable milestone in the field of neural program induction. As highlights, our learned model can perform sorting perfectly on any input data size we tested, with $O(n \log n)$ complexity, whilst outperforming hand-coded algorithms, including quick sort, in number of operations even for list sizes far beyond those seen during training.
- Published
- 2020
37. Conference paper
- Author
- Bunel, Rudy, De Palma, Alessandro, Desmaison, Alban, Dvijotham, Krishnamurthy, Kohli, Pushmeet, Torr, Philip H. S., and Kumar, M. Pawan
- Subjects
- FOS: Computer and information sciences, Computer Science - Machine Learning, Statistics - Machine Learning, Machine Learning (stat.ML), Machine Learning (cs.LG)
- Abstract
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take. Previous methods have either used off-the-shelf solvers, discarding the problem structure, or relaxed the problem even further, making the bounds unnecessarily loose. We propose a novel approach based on Lagrangian decomposition. Our formulation admits an efficient supergradient ascent algorithm, as well as an improved proximal algorithm. Both algorithms offer three advantages: (i) they yield bounds that are provably at least as tight as previous dual algorithms relying on Lagrangian relaxations; (ii) they are based on operations analogous to the forward/backward passes of neural network layers and are therefore easily parallelizable, amenable to GPU implementation, and able to take advantage of the convolutional structure of problems; and (iii) they allow for anytime stopping while still providing valid bounds. Empirically, we show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time, and tighter bounds in the same time as previous dual algorithms. This results in an overall speed-up when employing the bounds for formal verification. Code for our algorithms is available at https://github.com/oval-group/decomposition-plnn-bounds. (UAI 2020 conference paper.)
- Published
- 2020
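What "bounds on the values a network's outputs can take" means can be sketched with plain interval bound propagation, the loose end of the relaxation spectrum the abstract above refers to. This is not the paper's Lagrangian decomposition (which provably tightens such bounds), and the two-layer ReLU network below is made up for illustration:

```python
# Interval bound propagation (IBP): push an input box through a ReLU
# network layer by layer, keeping per-coordinate lower/upper bounds.
# The resulting output bounds are valid but typically loose; tighter
# relaxations (like the paper's Lagrangian decomposition) shrink them.
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Exact bounds of W @ x + b over the box [lo, hi]."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    mid = W @ center + b
    rad = np.abs(W) @ radius      # worst case per output coordinate
    return mid - rad, mid + rad

def ibp(lo, hi, layers):
    """Propagate an input box through affine layers with ReLU between."""
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(lo, hi, W, b)
        if i < len(layers) - 1:   # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

# Hand-made 2-2-1 ReLU network and an input box of radius 0.1 around 0.
layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])),
          (np.array([[1.0, 1.0]]), np.array([0.5]))]
lo, hi = ibp(np.array([-0.1, -0.1]), np.array([0.1, 0.1]), layers)
print(lo, hi)  # [0.5] [0.7]: every input in the box maps into [0.5, 0.7]
```

For this tiny network the box arithmetic can be checked by hand; on deep networks the interval bounds blow up, which is exactly why tighter dual formulations are worth the extra computation.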
38. Dynamic graph cuts for efficient inference in Markov random fields
- Author
- Kohli, Pushmeet and Torr, Philip H.S.
- Subjects
- Algorithm, Algorithms -- Methods, Markov processes -- Observations, Image processing -- Methods
- Abstract
In this paper, we present a fast, fully dynamic algorithm for the st-mincut/max-flow problem. We show how this algorithm can be used to efficiently compute MAP solutions for certain dynamically changing MRF models in computer vision, such as image segmentation. Specifically, given the solution of the max-flow problem on a graph, the dynamic algorithm efficiently computes the maximum flow in a modified version of the graph. The time it takes is roughly proportional to the total amount of change in the edge weights of the graph. Our experiments show that, when the number of changes in the graph is small, the dynamic algorithm is significantly faster than the best known static graph cut algorithm. We test the performance of our algorithm on one particular problem: the object-background segmentation problem for video. The application of our algorithm is not limited to this problem; the algorithm is generic and can yield similar improvements in many other cases that involve dynamic change. Index Terms: energy minimization, Markov random fields, dynamic graph cuts, maximum flow, st-mincut, video segmentation.
- Published
- 2007
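The st-mincut/max-flow problem at the core of the abstract above can be illustrated with a plain static Edmonds-Karp solver on a made-up two-pixel "segmentation" graph. The paper's actual contribution, reusing a previously computed flow when edge weights change instead of re-solving from scratch, is not shown here:

```python
# Static Edmonds-Karp max-flow: repeatedly find a shortest augmenting
# path in the residual graph (by BFS) and push flow along it. By the
# max-flow/min-cut theorem, the returned value equals the st-mincut.
from collections import defaultdict, deque

def max_flow(cap, s, t):
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:           # BFS in the residual graph
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                        # no augmenting path left
        bottleneck, v = float('inf'), t        # min residual capacity
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        v = t
        while parent[v] is not None:           # push flow along the path
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck

# Tiny made-up graph: s/t are the object/background terminals, p1/p2
# are pixels with a smoothness edge between them.
cap = defaultdict(lambda: defaultdict(int))
for u, v, c in [('s', 'p1', 3), ('s', 'p2', 2),
                ('p1', 'p2', 1), ('p2', 'p1', 1),
                ('p1', 't', 2), ('p2', 't', 3)]:
    cap[u][v] += c
f = max_flow(cap, 's', 't')
print(f)  # 5: the max-flow value, i.e. the capacity of the minimum cut
```

In the dynamic setting of the paper, only a few of these capacities change between video frames, and the flow from the previous frame is repaired rather than recomputed.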
39. Learning Transferable Graph Exploration
- Author
- Dai, Hanjun, Li, Yujia, Wang, Chenglong, Singh, Rishabh, Huang, Po-Sen, and Kohli, Pushmeet
- Subjects
- FOS: Computer and information sciences, Computer Science - Machine Learning, Statistics - Machine Learning, Machine Learning (stat.ML), Machine Learning (cs.LG)
- Abstract
This paper considers the problem of efficient exploration of unseen environments, a key challenge in AI. We propose a "learning to explore" framework in which we learn a policy from a distribution of environments. At test time, presented with an unseen environment from the same distribution, the policy aims to generalize the exploration strategy to visit the maximum number of unique states in a limited number of steps. We particularly focus on environments with graph-structured state spaces, which are encountered in many important real-world applications such as software testing and map building. We formulate this task as a reinforcement learning problem where the "exploration" agent is rewarded for transitioning to previously unseen environment states, and we employ a graph-structured memory to encode the agent's past trajectory. Experimental results demonstrate that our approach is highly effective for exploration of spatial maps; when applied to the challenging problems of coverage-guided software testing of domain-specific programs and real-world mobile applications, it outperforms methods that have been hand-engineered by human experts. (To appear in NeurIPS 2019.)
- Published
- 2019
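The reward formulation in the abstract above (a bonus only for transitioning to a previously unseen state) can be sketched with a random walk standing in for the learned policy; the graph below is made up, and neither the learned policy nor the graph-structured memory is shown:

```python
# Novelty reward for exploration: the agent earns +1 only when it
# transitions to a state it has never visited, so the rollout return
# counts the unique states discovered (minus the free start state).
import random

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}

def explore(policy, start=0, steps=10, seed=0):
    """Run one rollout; return (set of visited states, total reward)."""
    rng = random.Random(seed)
    state, visited, reward = start, {start}, 0
    for _ in range(steps):
        state = policy(rng, graph[state])
        if state not in visited:   # +1 only for a novel state
            visited.add(state)
            reward += 1
        # revisits earn nothing, which is what pushes a trained
        # policy toward systematic coverage of the state space
    return visited, reward

# Random-walk stand-in for the learned policy.
visited, reward = explore(lambda rng, nbrs: rng.choice(nbrs))
print(len(visited), reward)
```

A learned policy would replace the random choice with one conditioned on the trajectory memory, maximizing this same return in expectation over the environment distribution.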
40. An Alternative Surrogate Loss for PGD-based Adversarial Testing
- Author
- Gowal, Sven, Uesato, Jonathan, Qin, Chongli, Huang, Po-Sen, Mann, Timothy, and Kohli, Pushmeet
- Subjects
- FOS: Computer and information sciences, Computer Science - Machine Learning, Statistics - Machine Learning, Machine Learning (stat.ML), Machine Learning (cs.LG)
- Abstract
Adversarial testing methods based on Projected Gradient Descent (PGD) are widely used to search for norm-bounded perturbations that cause the inputs of neural networks to be misclassified. This paper takes a deeper look at these methods and explains the effect of different hyperparameters (i.e., optimizer, step size, and surrogate loss). We introduce the concept of MultiTargeted testing, which makes clever use of alternative surrogate losses, and explain when and how MultiTargeted is guaranteed to find optimal perturbations. Finally, we demonstrate that MultiTargeted outperforms more sophisticated methods and often requires fewer iterations than other variants of PGD found in the literature. Notably, MultiTargeted ranks first on MadryLab's white-box MNIST and CIFAR-10 leaderboards, reducing the accuracy of their MNIST model to 88.36% (with $\ell_\infty$ perturbations of $\epsilon = 0.3$) and the accuracy of their CIFAR-10 model to 44.03% (at $\epsilon = 8/255$). MultiTargeted also ranks first on the TRADES leaderboard, reducing the accuracy of their CIFAR-10 model to 53.07% (with $\ell_\infty$ perturbations of $\epsilon = 0.031$).
- Published
- 2019
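The signed-gradient ascent at the heart of PGD with an $\ell_\infty$ bound (entry 40) can be sketched as follows. The quadratic toy loss stands in for a network's surrogate loss, and the step size, radius, and iteration count are illustrative, not the paper's settings.

```python
import numpy as np

# Minimal PGD sketch for l_inf-bounded perturbations: signed-gradient ascent
# steps followed by projection back onto the l_inf ball of radius eps.
def pgd_linf(x, grad_fn, eps, step, n_steps):
    """Maximise a loss over the l_inf ball of radius eps around x."""
    delta = np.zeros_like(x)
    for _ in range(n_steps):
        delta += step * np.sign(grad_fn(x + delta))  # ascent step
        delta = np.clip(delta, -eps, eps)            # projection
    return x + delta

# Toy loss(z) = sum(z**2) with gradient 2*z: PGD pushes each coordinate to
# the boundary of the ball, away from zero in its own sign direction.
x = np.array([0.5, -0.2])
adv = pgd_linf(x, lambda z: 2 * z, eps=0.3, step=0.1, n_steps=10)
print(adv)
```

The MultiTargeted variant described in the abstract changes which surrogate loss `grad_fn` differentiates on each restart, but the projection structure stays the same.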
41. Making sense of sensory input
- Author
-
Evans, Richard, Hernandez-Orallo, Jose, Welbl, Johannes, Kohli, Pushmeet, and Sergot, Marek
- Subjects
FOS: Computer and information sciences ,Artificial Intelligence (cs.AI) ,Computer Science - Artificial Intelligence - Abstract
This paper attempts to answer a central question in unsupervised learning: what does it mean to "make sense" of a sensory sequence? In our formalization, making sense involves constructing a symbolic causal theory that both explains the sensory sequence and also satisfies a set of unity conditions. The unity conditions insist that the constituents of the causal theory -- objects, properties, and laws -- must be integrated into a coherent whole. On our account, making sense of sensory input is a type of program synthesis, but it is unsupervised program synthesis. Our second contribution is a computer implementation, the Apperception Engine, that was designed to satisfy the above requirements. Our system is able to produce interpretable, human-readable causal theories from very small amounts of data, because of the strong inductive bias provided by the unity conditions. A causal theory produced by our system is able to predict future sensor readings, as well as retrodict earlier readings, and impute (fill in the blanks of) missing sensory readings, in any combination. We tested the engine in a diverse range of domains, including cellular automata, rhythms and simple nursery tunes, multi-modal binding problems, occlusion tasks, and sequence induction intelligence tests. In each domain, we test our engine's ability to predict future sensor values, retrodict earlier sensor values, and impute missing sensory data. The engine performs well in all these domains, significantly outperforming neural net baselines. We note in particular that in the sequence induction intelligence tests, our system achieved human-level performance. This is notable because our system is not a bespoke system designed specifically to solve intelligence tests, but a general-purpose system that was designed to make sense of any sensory sequence.
- Published
- 2019
42. CLEVRER: CoLlision Events for Video REpresentation and Reasoning
- Author
-
Yi, Kexin, Gan, Chuang, Li, Yunzhu, Kohli, Pushmeet, Wu, Jiajun, Torralba, Antonio, and Tenenbaum, Joshua B.
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Artificial Intelligence (cs.AI) ,Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,Computation and Language (cs.CL) ,Machine Learning (cs.LG) - Abstract
The ability to reason about temporal and causal events from videos lies at the core of human intelligence. Most video reasoning benchmarks, however, focus on pattern recognition from complex visual and language input, instead of on causal structure. We study the complementary problem, exploring the temporal and causal structures behind videos of objects with simple visual appearance. To this end, we introduce the CoLlision Events for Video REpresentation and Reasoning (CLEVRER), a diagnostic video dataset for systematic evaluation of computational models on a wide range of reasoning tasks. Motivated by the theory of human causal judgment, CLEVRER includes four types of questions: descriptive (e.g., "what color"), explanatory ("what is responsible for"), predictive ("what will happen next"), and counterfactual ("what if"). We evaluate various state-of-the-art models for visual reasoning on our benchmark. While these models thrive on the perception-based task (descriptive), they perform poorly on the causal tasks (explanatory, predictive and counterfactual), suggesting that a principled approach for causal reasoning should incorporate the capability of both perceiving complex visual and language inputs, and understanding the underlying dynamics and causal relations. We also study an oracle model that explicitly combines these components via symbolic representations. The first two authors contributed equally to this work. Accepted as an Oral Spotlight at ICLR 2020. Project page: http://clevrer.csail.mit.edu/
- Published
- 2019
43. Learning disentangled representations with semi-supervised deep generative models
- Author
-
Siddharth, N., Paige, Brooks, van de Meent, Jan-Willem, Desmaison, Alban, Goodman, Noah, Kohli, Pushmeet, Wood, Frank, Torr, Philip H.S., Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R.
- Subjects
FOS: Computer and information sciences ,Computer Science - Learning ,Artificial Intelligence (cs.AI) ,Statistics - Machine Learning ,Computer Science - Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Machine Learning (stat.ML) ,020206 networking & telecommunications ,020201 artificial intelligence & image processing ,02 engineering and technology ,Machine Learning (cs.LG) - Abstract
Variational autoencoders (VAEs) learn representations of data by jointly training a probabilistic encoder and decoder network. Typically these models encode all features of the data into a single variable. Here we are interested in learning disentangled representations that encode distinct aspects of the data into separate variables. We propose to learn such representations using model architectures that generalise from standard VAEs, employing a general graphical model structure in the encoder and decoder. This allows us to train partially-specified models that make relatively strong assumptions about a subset of interpretable variables and rely on the flexibility of neural networks to learn representations for the remaining variables. We further define a general objective for semi-supervised learning in this model class, which can be approximated using an importance sampling procedure. We evaluate our framework's ability to learn disentangled representations, both by qualitative exploration of its generative capacity, and quantitative evaluation of its discriminative ability on a variety of models and datasets. Accepted for publication at NIPS 2017.
- Published
- 2019
- Full Text
- View/download PDF
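The importance-sampling approximation mentioned in entry 43 can be illustrated on a toy Gaussian latent-variable model. The model, proposal, and sample count below are assumptions for illustration only, not the paper's architecture or objective.

```python
import numpy as np

# Importance-sampling estimate of log p(x) for a toy model with latent
# z ~ N(0, 1) and likelihood p(x | z) = N(x; z, 1), using q(z) = p(z) as
# the proposal, so the weights reduce to p(x | z).
def log_marginal_estimate(x, n_samples, rng):
    z = rng.standard_normal(n_samples)                 # samples from q(z)
    log_w = -0.5 * (x - z) ** 2 - 0.5 * np.log(2 * np.pi)  # log p(x | z)
    return np.log(np.mean(np.exp(log_w)))

rng = np.random.default_rng(0)
est = log_marginal_estimate(0.0, 100_000, rng)
# Analytically, x ~ N(0, 2), so log p(0) = -0.5 * log(4 * pi).
print(est, -0.5 * np.log(4 * np.pi))
```

In the semi-supervised setting of the paper, the same trick is applied to a structured encoder/decoder pair rather than this closed-form Gaussian, but the estimator has the same weighted-average form.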
44. A Hierarchical Probabilistic U-Net for Modeling Multi-Scale Ambiguities
- Author
-
Kohl, Simon A. A., Romera-Paredes, Bernardino, Maier-Hein, Klaus H., Rezende, Danilo Jimenez, Eslami, S. M. Ali, Kohli, Pushmeet, Zisserman, Andrew, and Ronneberger, Olaf
- Subjects
FOS: Computer and information sciences ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition - Abstract
Medical imaging only indirectly measures the molecular identity of the tissue within each voxel, which often produces only ambiguous image evidence for target measures of interest, like semantic segmentation. This diversity and the variations of plausible interpretations are often specific to given image regions and may thus manifest on various scales, spanning all the way from the pixel to the image level. In order to learn a flexible distribution that can account for multiple scales of variations, we propose the Hierarchical Probabilistic U-Net, a segmentation network with a conditional variational auto-encoder (cVAE) that uses a hierarchical latent space decomposition. We show that this model formulation enables sampling and reconstruction of segmentations with high fidelity, i.e., with finely resolved detail, while providing the flexibility to learn complex structured distributions across scales. We demonstrate these abilities on the task of segmenting ambiguous medical scans as well as on instance segmentation of neurobiological and natural images. Our model automatically separates independent factors across scales, an inductive bias that we deem beneficial in structured output prediction tasks beyond segmentation.
- Published
- 2019
45. Analysing Mathematical Reasoning Abilities of Neural Models
- Author
-
Saxton, David, Grefenstette, Edward, Hill, Felix, and Kohli, Pushmeet
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Statistics - Machine Learning ,Machine Learning (stat.ML) ,Machine Learning (cs.LG) - Abstract
Mathematical reasoning---a core ability within human intelligence---presents some unique challenges as a domain: we do not come to understand and solve mathematical problems primarily on the back of experience and evidence, but on the basis of inferring, learning, and exploiting laws, axioms, and symbol manipulation rules. In this paper, we present a new challenge for the evaluation (and eventually the design) of neural architectures and similar systems, developing a task suite of mathematics problems involving sequential questions and answers in a free-form textual input/output format. The structured nature of the mathematics domain, covering arithmetic, algebra, probability and calculus, enables the construction of training and test splits designed to clearly illuminate the capabilities and failure-modes of different architectures, as well as evaluate their ability to compose and relate knowledge and learned processes. Having described the data generation process and its potential future expansions, we conduct a comprehensive analysis of models from two broad classes of the most powerful sequence-to-sequence architectures and find notable differences in their ability to resolve mathematical problems and generalize their knowledge.
- Published
- 2019
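The free-form textual question/answer format described in entry 45 can be mimicked with a toy generator. The templates and number ranges below are invented for illustration; the paper's actual generators cover much richer arithmetic, algebra, probability, and calculus problems.

```python
import random

# Hypothetical sketch of a free-form arithmetic question/answer generator
# in the spirit of the dataset's textual input/output format.
def arithmetic_question(rng):
    """Return an (question, answer) pair of plain strings."""
    a, b = rng.randint(2, 99), rng.randint(2, 99)
    op, value = rng.choice([("+", a + b), ("-", a - b), ("*", a * b)])
    return f"What is {a} {op} {b}?", str(value)

rng = random.Random(0)
question, answer = arithmetic_question(rng)
print(question, "->", answer)
```

Because questions and answers are plain strings, any sequence-to-sequence model can be trained and evaluated on such pairs without task-specific output heads, which is the evaluation setup the abstract describes.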
46. Meta-Learning surrogate models for sequential decision making
- Author
-
Galashov, Alexandre, Schwarz, Jonathan, Kim, Hyunjik, Garnelo, Marta, Saxton, David, Kohli, Pushmeet, Eslami, S. M. Ali, and Teh, Yee Whye
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Statistics - Machine Learning ,Machine Learning (stat.ML) ,Machine Learning (cs.LG) - Abstract
We introduce a unified probabilistic framework for solving sequential decision making problems ranging from Bayesian optimisation to contextual bandits and reinforcement learning. This is accomplished by a probabilistic model-based approach that explains observed data while capturing predictive uncertainty during the decision making process. Crucially, this probabilistic model is chosen to be a Meta-Learning system that allows learning from a distribution of related problems, allowing data efficient adaptation to a target task. As a suitable instantiation of this framework, we explore the use of Neural processes due to statistical and computational desiderata. We apply our framework to a broad range of problem domains, such as control problems, recommender systems and adversarial attacks on RL agents, demonstrating an efficient and general black-box learning approach.
- Published
- 2019
47. Verification of Non-Linear Specifications for Neural Networks
- Author
-
Qin, Chongli, Krishnamurthy, Dvijotham, O'Donoghue, Brendan, Bunel, Rudy, Stanforth, Robert, Gowal, Sven, Uesato, Jonathan, Swirszcz, Grzegorz, and Kohli, Pushmeet
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Statistics - Machine Learning ,Machine Learning (stat.ML) ,Machine Learning (cs.LG) - Abstract
Prior work on neural network verification has focused on specifications that are linear functions of the output of the network, e.g., invariance of the classifier output under adversarial perturbations of the input. In this paper, we extend verification algorithms to be able to certify richer properties of neural networks. To do this we introduce the class of convex-relaxable specifications, which constitute nonlinear specifications that can be verified using a convex relaxation. We show that a number of important properties of interest can be modeled within this class, including conservation of energy in a learned dynamics model of a physical system; semantic consistency of a classifier's output labels under adversarial perturbations and bounding errors in a system that predicts the summation of handwritten digits. Our experimental evaluation shows that our method is able to effectively verify these specifications. Moreover, our evaluation exposes the failure modes in models which cannot be verified to satisfy these specifications. Thus, emphasizing the importance of training models not just to fit training data but also to be consistent with specifications. ICLR conference paper.
- Published
- 2019
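Interval bound propagation is one of the simplest bound-propagation techniques of the general kind that convex-relaxation verification (entry 47) builds on; it is shown here only as an accessible sketch, not as the paper's method. The weights, bias, and input bounds are illustrative.

```python
import numpy as np

# Propagate elementwise lower/upper bounds through y = W @ x + b using the
# midpoint/radius form of interval arithmetic.
def interval_linear(lo, hi, W, b):
    mid, rad = (lo + hi) / 2, (hi - lo) / 2
    mid_out = W @ mid + b
    rad_out = np.abs(W) @ rad  # worst-case growth of the radius
    return mid_out - rad_out, mid_out + rad_out

# ReLU is monotone, so bounds pass through elementwise.
def interval_relu(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

W = np.array([[1.0, -1.0], [2.0, 1.0]])
b = np.array([0.0, -1.0])
lo, hi = interval_linear(np.array([-0.1, -0.1]), np.array([0.1, 0.1]), W, b)
lo, hi = interval_relu(lo, hi)
print(lo, hi)
```

If a specification (e.g., "output 1 stays at zero") holds for every point inside the final box, it is certified; the paper's contribution is extending such certification to nonlinear specifications via convex relaxation.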
48. Pushing the frontiers of density functionals by solving the fractional electron problem.
- Author
-
Kirkpatrick, James, McMorrow, Brendan, Turban, David H. P., Gaunt, Alexander L., Spencer, James S., Matthews, Alexander G. D. G., Obika, Annette, Thiry, Louis, Fortunato, Meire, Pfau, David, Castellanos, Lara Román, Petersen, Stig, Nelson, Alexander W. R., Kohli, Pushmeet, Mori-Sánchez, Paula, Hassabis, Demis, and Cohen, Aron J.
- Published
- 2021
- Full Text
- View/download PDF
49. Relational inductive biases, deep learning, and graph networks
- Author
-
Battaglia, Peter W., Hamrick, Jessica B., Bapst, Victor, Sanchez-Gonzalez, Alvaro, Zambaldi, Vinicius, Malinowski, Mateusz, Tacchetti, Andrea, Raposo, David, Santoro, Adam, Faulkner, Ryan, Gulcehre, Caglar, Song, Francis, Ballard, Andrew, Gilmer, Justin, Dahl, George, Vaswani, Ashish, Allen, Kelsey, Nash, Charles, Langston, Victoria, Dyer, Chris, Heess, Nicolas, Wierstra, Daan, Kohli, Pushmeet, Botvinick, Matt, Vinyals, Oriol, Li, Yujia, and Pascanu, Razvan
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Artificial Intelligence (cs.AI) ,Computer Science - Artificial Intelligence ,Statistics - Machine Learning ,Machine Learning (stat.ML) ,Machine Learning (cs.LG) - Abstract
Artificial intelligence (AI) has undergone a renaissance recently, making major progress in key domains such as vision, language, control, and decision-making. This has been due, in part, to cheap data and cheap compute resources, which have fit the natural strengths of deep learning. However, many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches. In particular, generalizing beyond one's experiences--a hallmark of human intelligence from infancy--remains a formidable challenge for modern AI. The following is part position paper, part review, and part unification. We argue that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective. Just as biology uses nature and nurture cooperatively, we reject the false choice between "hand-engineering" and "end-to-end" learning, and instead advocate for an approach which benefits from their complementary strengths. We explore how using relational inductive biases within deep learning architectures can facilitate learning about entities, relations, and rules for composing them. We present a new building block for the AI toolkit with a strong relational inductive bias--the graph network--which generalizes and extends various approaches for neural networks that operate on graphs, and provides a straightforward interface for manipulating structured knowledge and producing structured behaviors. We discuss how graph networks can support relational reasoning and combinatorial generalization, laying the foundation for more sophisticated, interpretable, and flexible patterns of reasoning. As a companion to this paper, we have released an open-source software library for building graph networks, with demonstrations of how to use them in practice.
- Published
- 2018
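A graph-network block as described in entry 49 updates edge and node features with learned functions and aggregates edges into their receiver nodes. The stripped-down sketch below uses placeholder (non-learned) additive updates and sum aggregation; the full formulation also carries a global feature and trainable update networks.

```python
import numpy as np

# One simplified graph-network step: edge update, then aggregation into
# receiver nodes, then node update. Update functions are illustrative sums.
def gn_step(nodes, edges, senders, receivers):
    """nodes: (N, d); edges: (E, d); senders/receivers: endpoint indices."""
    # Edge update: combine each edge with its endpoint node features.
    new_edges = edges + nodes[senders] + nodes[receivers]
    # Aggregation: sum the updated incoming edges at each receiver node.
    agg = np.zeros_like(nodes)
    np.add.at(agg, receivers, new_edges)
    # Node update: fold the aggregate back into the node features.
    return nodes + agg, new_edges

nodes = np.ones((3, 2))
edges = np.zeros((2, 2))
new_nodes, new_edges = gn_step(nodes, edges,
                               senders=np.array([0, 1]),
                               receivers=np.array([1, 2]))
print(new_nodes)
```

Because the same update functions are shared across all nodes and edges, the block applies to graphs of any size and topology, which is the source of the relational inductive bias the abstract argues for.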
50. Value Propagation Networks
- Author
-
Nardelli, Nantas, Synnaeve, Gabriel, Lin, Zeming, Kohli, Pushmeet, Torr, Philip H. S., and Usunier, Nicolas
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Artificial Intelligence (cs.AI) ,Computer Science - Artificial Intelligence ,Machine Learning (cs.LG) - Abstract
We present Value Propagation (VProp), a set of parameter-efficient differentiable planning modules built on Value Iteration which can successfully be trained using reinforcement learning to solve unseen tasks, has the capability to generalize to larger map sizes, and can learn to navigate in dynamic environments. We show that the modules enable learning to plan when the environment also includes stochastic elements, providing a cost-efficient learning system to build low-level size-invariant planners for a variety of interactive navigation problems. We evaluate on static and dynamic configurations of MazeBase grid-worlds, with randomly generated environments of several different sizes, and on a StarCraft navigation scenario, with more complex dynamics, and pixels as input. Updated to match the ICLR 2019 OpenReview version.
- Published
- 2018
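The value-iteration core that VProp (entry 50) builds differentiable modules around can be sketched on a toy grid. The grid, reward placement, discount, and iteration count below are illustrative; VProp replaces these fixed operations with learned, convolutional counterparts.

```python
import numpy as np

# Deterministic 4-neighbour value iteration on a 2D grid: values propagate
# outward from rewarding cells, decaying by gamma per step.
def value_iteration(reward, gamma=0.9, iters=50):
    v = np.zeros_like(reward)
    for _ in range(iters):
        # Pad with -inf so out-of-grid moves are never the best neighbour.
        p = np.pad(v, 1, constant_values=-np.inf)
        best = np.max([p[:-2, 1:-1], p[2:, 1:-1],
                       p[1:-1, :-2], p[1:-1, 2:]], axis=0)
        v = np.maximum(reward, gamma * best)
    return v

# Reward of 1 at one corner; values decay like gamma**distance to the goal.
reward = np.zeros((3, 3))
reward[0, 0] = 1.0
v = value_iteration(reward)
print(v[2, 2])  # gamma**4 for the opposite corner (Manhattan distance 4)
```

Acting greedily with respect to the converged values yields a shortest path to the goal, which is the planning behaviour VProp learns to reproduce and generalize across map sizes.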