98 results for "Yuhuai Wu"
Search Results
2. REFACTOR: Learning to Extract Theorems from Proofs.
3. Don't Trust: Verify - Grounding LLM Quantitative Reasoning with Autoformalization.
4. Magnushammer: A Transformer-Based Approach to Premise Selection.
5. Meta-Designing Quantum Experiments with Language Models.
6. Hierarchical Transformers Are More Efficient Language Models.
7. Fast and Precise: Adjusting Planning Horizon with Adaptive Subgoal Search.
8. Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs.
9. Lexinvariant Language Models.
10. Focused Transformer: Contrastive Training for Context Scaling.
11. Holistic Evaluation of Language Models.
12. Length Generalization in Arithmetic Transformers.
13. Magnushammer: A Transformer-based Approach to Premise Selection.
14. Evaluating Language Models for Mathematics through Interactions.
15. Subgoal Search For Complex Reasoning Tasks.
16. LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning.
17. Efficient Statistical Tests: A Neural Tangent Kernel Approach.
18. Learning Branching Heuristics for Propositional Model Counting.
19. OPtions as REsponses: Grounding behavioural hierarchies in multi-agent reinforcement learning.
20. Memorizing Transformers.
21. Invariant Causal Representation Learning for Out-of-Distribution Generalization.
22. Proof Artifact Co-Training for Theorem Proving with Language Models.
23. STaR: Bootstrapping Reasoning With Reasoning.
24. Exploring Length Generalization in Large Language Models.
25. Insights into Pre-training via Simpler Synthetic Tasks.
26. Block-Recurrent Transformers.
27. Path Independent Equilibrium Models Can Better Exploit Test-Time Computation.
28. Thor: Wielding Hammers to Integrate Language Models and Automated Theorem Provers.
29. Solving Quantitative Reasoning Problems with Language Models.
30. Autoformalization with Large Language Models.
31. Discrete Equidecomposability and Ehrhart Theory of Polygons.
32. Language Model Cascades.
33. Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs.
34. Holistic Evaluation of Language Models.
35. Thor: Wielding Hammers to Integrate Language Models and Automated Theorem Provers.
36. Fast and Precise: Adjusting Planning Horizon with Adaptive Subgoal Search.
37. INT: An Inequality Benchmark for Evaluating Generalization in Theorem Proving.
38. IsarStep: a Benchmark for High-level Mathematical Reasoning.
39. The Importance of Sampling in Meta-Reinforcement Learning.
40. Grandmaster level in StarCraft II using multi-agent reinforcement learning.
41. Evaluating language models for mathematics through interactions.
42. Learning to Give Checkable Answers with Prover-Verifier Games.
43. Proof Artifact Co-training for Theorem Proving with Language Models.
44. Nonlinear Invariant Risk Minimization: A Causal Approach.
45. Hierarchical Transformers Are More Efficient Language Models.
46. Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation.
47. Sticking the Landing: Simple, Lower-Variance Gradient Estimators for Variational Inference.
48. Modelling High-Level Mathematical Reasoning in Mechanised Declarative Proofs.
49. Learning Branching Heuristics for Propositional Model Counting.
50. INT: An Inequality Benchmark for Evaluating Generalization in Theorem Proving.
Discovery Service for Jio Institute Digital Library