12,043 results for "Hedman, Å"
Search Results
2. A Survey of Cassini Images of Spokes in Saturn's Rings: Unusual Spoke Types and Seasonal Trends
- Author
Callos, S. R., Hedman, M. M., and Hamilton, D. P.
- Subjects
Astrophysics - Earth and Planetary Astrophysics
- Abstract
Spokes are localized clouds of fine particles that appear over the outer part of Saturn's B ring. Over the course of the Cassini Mission, the Imaging Science Subsystem (ISS) obtained over 20,000 images of the outer B ring, providing the most comprehensive data set for quantifying spoke properties currently available. Consistent with prior work, we find that spokes typically appear as dark features when the lit side of the rings is viewed at low phase angles, and as bright features when the rings are viewed at high phase angles or the dark side of the rings is observed. However, we also find examples of spokes on the dark side of the rings that transition between being brighter and darker than the background ring as they move around the planet. Most interestingly, we also identify spokes that appear to be darker than the background ring near their center and brighter than the background ring near their edges. These "mixed spokes" indicate that the particle size distribution can vary spatially within a spoke. In addition, we document seasonal variations in the overall spoke activity over the course of the Cassini mission using statistics derived from lit-side imaging sequences. These statistics demonstrate that while spokes can be detected over a wide range of solar elevation angles, spoke activity increases dramatically when the Sun is within 10 degrees of the ring plane., Comment: 25 pages, 13 figures, accepted for publication in the Planetary Science Journal
- Published
- 2024
3. 4-LEGS: 4D Language Embedded Gaussian Splatting
- Author
Fiebelman, Gal, Cohen, Tamir, Morgenstern, Ayellet, Hedman, Peter, and Averbuch-Elor, Hadar
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics
- Abstract
The emergence of neural representations has revolutionized our means for digitally viewing a wide range of 3D scenes, enabling the synthesis of photorealistic images rendered from novel views. Recently, several techniques have been proposed for connecting these low-level representations with the high-level semantic understanding embodied within the scene. These methods elevate the rich semantic understanding from 2D imagery to 3D representations, distilling high-dimensional spatial features onto 3D space. In our work, we are interested in connecting language with a dynamic modeling of the world. We show how to lift spatio-temporal features to a 4D representation based on 3D Gaussian Splatting. This enables an interactive interface where the user can spatiotemporally localize events in the video from text prompts. We demonstrate our system on public 3D video datasets of people and animals performing various actions., Comment: Project webpage: https://tau-vailab.github.io/4-LEGS/
- Published
- 2024
4. EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis
- Author
Mai, Alexander, Hedman, Peter, Kopanas, George, Verbin, Dor, Futschik, David, Xu, Qiangeng, Kuester, Falko, Barron, Jonathan T., and Zhang, Yinda
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
We present Exact Volumetric Ellipsoid Rendering (EVER), a method for real-time differentiable emission-only volume rendering. Unlike the recent rasterization-based approach of 3D Gaussian Splatting (3DGS), our primitive-based representation allows for exact volume rendering, rather than alpha compositing 3D Gaussian billboards. As such, unlike 3DGS, our formulation does not suffer from popping artifacts and view-dependent density, but still achieves frame rates of $\sim\!30$ FPS at 720p on an NVIDIA RTX4090. Since our approach is built upon ray tracing, it enables effects such as defocus blur and camera distortion (e.g., from fisheye cameras), which are difficult to achieve by rasterization. We show that our method is more accurate with fewer blending issues than 3DGS and follow-up work on view-consistent rendering, especially on the challenging large-scale scenes from the Zip-NeRF dataset, where it achieves the sharpest results among real-time techniques., Comment: Project page: https://half-potato.gitlab.io/posts/ever
- Published
- 2024
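The exact emission-only volume rendering that this abstract contrasts with alpha compositing can be illustrated with the standard quadrature for piecewise-constant density along a ray. The sketch below is illustrative only (NumPy, made-up segment values) and does not reproduce the paper's ellipsoid-intersection machinery.

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Emission-only volume rendering for one ray, exact when the density is
    constant within each segment: C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i.
    3DGS-style alpha compositing instead uses a projected, path-length-independent
    alpha per primitive, which is what introduces view-dependent density."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-segment opacity
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])   # transmittance to segment start
    return np.sum((trans * alphas)[:, None] * colors, axis=0)

# toy ray with three constant-density segments (hypothetical values)
sigmas = np.array([0.5, 2.0, 0.1])      # densities
deltas = np.array([0.2, 0.1, 0.5])      # segment lengths
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
print(render_ray(sigmas, colors, deltas))
```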
5. Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering
- Author
Attal, Benjamin, Verbin, Dor, Mildenhall, Ben, Hedman, Peter, Barron, Jonathan T., O'Toole, Matthew, and Srinivasan, Pratul P.
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics
- Abstract
State-of-the-art techniques for 3D reconstruction are largely based on volumetric scene representations, which require sampling multiple points to compute the color arriving along a ray. Using these representations for more general inverse rendering -- reconstructing geometry, materials, and lighting from observed images -- is challenging because recursively path-tracing such volumetric representations is expensive. Recent works alleviate this issue through the use of radiance caches: data structures that store the steady-state, infinite-bounce radiance arriving at any point from any direction. However, these solutions rely on approximations that introduce bias into the renderings and, more importantly, into the gradients used for optimization. We present a method that avoids these approximations while remaining computationally efficient. In particular, we leverage two techniques to reduce variance for unbiased estimators of the rendering equation: (1) an occlusion-aware importance sampler for incoming illumination and (2) a fast cache architecture that can be used as a control variate for the radiance from a high-quality, but more expensive, volumetric cache. We show that by removing these biases our approach improves the generality of radiance cache based inverse rendering, as well as increasing quality in the presence of challenging light transport effects such as specular reflections., Comment: Website: https://benattal.github.io/flash-cache/
- Published
- 2024
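The control-variate idea named in the abstract above (using a fast, correlated estimate to reduce the variance of an unbiased estimator built around a more expensive one) can be sketched generically. The example below uses a toy integrand and is not the paper's radiance-cache construction.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 100_000)

f = np.exp(x)      # "expensive" quantity; true mean is e - 1
g = 1.0 + x        # cheap correlated surrogate (stand-in for the fast cache); E[g] = 1.5 exactly
mu_g = 1.5

cov = np.cov(f, g)
c = cov[0, 1] / cov[1, 1]          # variance-minimizing coefficient
cv = f - c * (g - mu_g)            # still unbiased: E[cv] = E[f] for any c

print("plain estimate   :", f.mean(), "+/-", f.std() / np.sqrt(len(x)))
print("control variate  :", cv.mean(), "+/-", cv.std() / np.sqrt(len(x)))
```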
6. Ellis Island Oral History Project, Series EI, no. 0011: Interview of Birgitta Hedman Fichter by Paul E. Sigrist, Jr., November 29, 1990.
7. InterNeRF: Scaling Radiance Fields via Parameter Interpolation
- Author
Wang, Clinton, Hedman, Peter, Golland, Polina, Barron, Jonathan T., and Duckworth, Daniel
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics
- Abstract
Neural Radiance Fields (NeRFs) have unmatched fidelity on large, real-world scenes. A common approach for scaling NeRFs is to partition the scene into regions, each of which is assigned its own parameters. When implemented naively, such an approach is limited by poor test-time scaling and inconsistent appearance and geometry. We instead propose InterNeRF, a novel architecture for rendering a target view using a subset of the model's parameters. Our approach enables out-of-core training and rendering, increasing total model capacity with only a modest increase to training time. We demonstrate significant improvements in multi-room scenes while remaining competitive on standard benchmarks., Comment: Presented at CVPR 2024 Neural Rendering Intelligence Workshop
- Published
- 2024
8. NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections
- Author
Verbin, Dor, Srinivasan, Pratul P., Hedman, Peter, Mildenhall, Ben, Attal, Benjamin, Szeliski, Richard, and Barron, Jonathan T.
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics
- Abstract
Neural Radiance Fields (NeRFs) typically struggle to reconstruct and render highly specular objects, whose appearance varies quickly with changes in viewpoint. Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content. Moreover, these techniques rely on large computationally-expensive neural networks to model outgoing radiance, which severely limits optimization and rendering speed. We address these issues with an approach based on ray tracing: instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts reflection rays from these points and traces them through the NeRF representation to render feature vectors which are decoded into color using a small inexpensive network. We demonstrate that our model outperforms prior methods for view synthesis of scenes containing shiny objects, and that it is the only existing NeRF method that can synthesize photorealistic specular appearance and reflections in real-world scenes, while requiring comparable optimization time to current state-of-the-art view synthesis models., Comment: Project page: http://nerf-casting.github.io
- Published
- 2024
9. Constraining Time Variations in Enceladus' Water-Vapor Plume With Near-Infrared Spectra from Cassini-VIMS
- Author
Denny, Katie, Hedman, Matthew, Bockelée-Morvan, Dominique, Filacchione, Gianrico, and Capaccioni, Fabrizio
- Subjects
Astrophysics - Earth and Planetary Astrophysics
- Abstract
Water vapor produces a series of diagnostic emission lines in the near infrared between 2.60 and 2.75 microns. The Visual and Infrared Mapping Spectrometer (VIMS) onboard the Cassini spacecraft detected this emission signal from Enceladus' plume, and so VIMS observations provide information about the variability of the plume's water vapor content. Using a data set of 249 spectral cubes with relatively high signal-to-noise ratios, we confirmed the strength of this water-vapor emission feature corresponds to a line-of-sight column density of order 10^20 molecules/m^2, which is consistent with previous measurements from Cassini's Ultraviolet Imaging Spectrograph (UVIS). Comparing observations made at different times indicates that the water-vapor flux is unlikely to vary systematically with Enceladus' orbital phase, unlike the particle flux, which does vary with orbital phase. However, variations in the column density on longer and shorter timescales cannot be ruled out and merit further investigation., Comment: 16 pages, 11 figures, 1 supplemental table (ascii format)
- Published
- 2024
10. A spiking-domain implementation of electronic structure theory
- Author
Yadav, Aakash, Hedman, Daniel, and Jeong, Hongsik
- Subjects
Electrical Engineering and Systems Science - Signal Processing, Condensed Matter - Materials Science, Physics - Computational Physics
- Abstract
Electronic Structure Theory (EST) describes the behavior of electrons in matter and is used to predict material properties. Conventionally, this involves forming a Hamiltonian and solving the Schr\"odinger equation through discrete computation. Here, a new perspective on EST is provided by treating a perfectly crystalline material as a Linear Translation Invariant (LTI) system. The validity of this LTI-EST formalism is demonstrated by determining band structures for a one-dimensional chain of atoms, including the phenomenon of band structure folding in supercells. The proposed formalism allows for analytical traceability of band structure folding and offers a computational advantage by bypassing the O(N) eigenvalue calculations. The spike-based computing nature of the proposed LTI-EST formalism is highlighted, thereby implying potential for material simulations solely in the spiking domain.
- Published
- 2024
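For readers unfamiliar with the band-structure folding mentioned in the abstract above, here is a minimal sketch using an ordinary nearest-neighbour tight-binding chain (not the paper's LTI formalism; parameter values are arbitrary): the single band E(k) = eps0 - 2 t cos(k a) reappears as n folded branches in the reduced Brillouin zone of an n-atom supercell.

```python
import numpy as np

eps0, t, a = 0.0, 1.0, 1.0   # on-site energy, hopping, lattice constant (hypothetical values)

def band_primitive(k):
    """Nearest-neighbour tight-binding dispersion of a 1D atomic chain."""
    return eps0 - 2.0 * t * np.cos(k * a)

def bands_supercell(K, n):
    """The same band folded into the Brillouin zone of an n-atom supercell:
    each reduced wavevector K maps to n primitive wavevectors K + m * G_sc."""
    g_sc = 2.0 * np.pi / (n * a)
    return np.array([band_primitive(K + m * g_sc) for m in range(n)])

K = np.linspace(-np.pi / (2 * a), np.pi / (2 * a), 5)   # reduced zone of a 2-atom supercell
print(bands_supercell(K, 2))                            # two folded branches per reduced wavevector
```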
11. Water-Ice Dominated Spectra of Saturn's Rings and Small Moons from JWST
- Author
Hedman, M. M., Tiscareno, M. S., Showalter, M. R., Fletcher, L. N., King, O. R. T., Harkett, J., Roman, M. T., Rowe-Gurney, N., Hammel, H. B., Milam, S. N., Moutamid, M. El, Cartwright, R. J., de Pater, I., and Molter, E.
- Subjects
Astrophysics - Earth and Planetary Astrophysics
- Abstract
JWST measured the infrared spectra of Saturn's rings and several of its small moons (Epimetheus, Pandora, Telesto and Pallene) as part of Guaranteed Time Observation program 1247. The NIRSpec instrument obtained near-infrared spectra of the small moons between 0.6 and 5.3 microns, which are all dominated by water-ice absorption bands. The shapes of the water-ice bands for these moons suggest that their surfaces contain variable mixes of crystalline and amorphous ice or variable amounts of contaminants and/or sub-micron ice grains. The near-infrared spectrum of Saturn's A ring has exceptionally high signal-to-noise between 2.7 and 5 microns and is dominated by features due to highly crystalline water ice. The ring spectrum also confirms that the rings possess a 2-3% deep absorption at 4.13 microns due to deuterated water-ice previously seen by the Visual and Infrared Mapping Spectrometer onboard the Cassini spacecraft. This spectrum also constrains the fundamental absorption bands of carbon dioxide and carbon monoxide and may contain evidence for a weak aliphatic hydrocarbon band. Meanwhile, the MIRI instrument obtained mid-infrared spectra of the rings between 4.9 and 27.9 microns, where the observed signal is a combination of reflected sunlight and thermal emission. This region shows a strong reflectance peak centered around 9.3 microns that can be attributed to crystalline water ice. Since both the near and mid-infrared spectra are dominated by highly crystalline water ice, they should provide a useful baseline for interpreting the spectra of other objects in the outer solar system with more complex compositions., Comment: Accepted for Publication in JGR Planets
- Published
- 2024
12. Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis
- Author
Reiser, Christian, Garbin, Stephan, Srinivasan, Pratul P., Verbin, Dor, Szeliski, Richard, Mildenhall, Ben, Barron, Jonathan T., Hedman, Peter, and Geiger, Andreas
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
While surface-based view synthesis algorithms are appealing due to their low computational requirements, they often struggle to reproduce thin structures. In contrast, more expensive methods that model the scene's geometry as a volumetric density field (e.g. NeRF) excel at reconstructing fine geometric detail. However, density fields often represent geometry in a "fuzzy" manner, which hinders exact localization of the surface. In this work, we modify density fields to encourage them to converge towards surfaces, without compromising their ability to reconstruct thin structures. First, we employ a discrete opacity grid representation instead of a continuous density field, which allows opacity values to discontinuously transition from zero to one at the surface. Second, we anti-alias by casting multiple rays per pixel, which allows occlusion boundaries and subpixel structures to be modelled without using semi-transparent voxels. Third, we minimize the binary entropy of the opacity values, which facilitates the extraction of surface geometry by encouraging opacity values to binarize towards the end of training. Lastly, we develop a fusion-based meshing strategy followed by mesh simplification and appearance model fitting. The compact meshes produced by our model can be rendered in real-time on mobile devices and achieve significantly higher view synthesis quality compared to existing mesh-based approaches., Comment: Project page at https://binary-opacity-grid.github.io
- Published
- 2024
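The binary-entropy regularizer described in the abstract above (its third contribution) has the standard form sketched below; this is a minimal illustration, and the weighting and scheduling used in the paper are not reproduced.

```python
import numpy as np

def binary_entropy(o, eps=1e-7):
    """Entropy of per-sample opacities o in [0, 1]; minimizing it pushes each
    opacity toward either 0 or 1, which is what makes surface extraction easier."""
    o = np.clip(o, eps, 1.0 - eps)
    return -(o * np.log(o) + (1.0 - o) * np.log(1.0 - o))

opacities = np.array([0.02, 0.5, 0.97])
print(binary_entropy(opacities))          # largest penalty at o = 0.5
print(binary_entropy(opacities).mean())   # e.g. averaged into the total training loss
```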
13. The Uranus System from Occultation Observations (1977-2006): Rings, Pole Direction, Gravity Field, and Masses of Cressida, Cordelia, and Ophelia
- Author
French, Richard G., Hedman, Matthew M., Nicholson, Philip D., Longaretti, Pierre-Yves, and McGhee-French, Colleen A.
- Subjects
Astrophysics - Earth and Planetary Astrophysics
- Abstract
From 31 Earth-based and three Voyager 2 occultations spanning 1977--2006, we determine the orbital elements of the nine main Uranian rings with typical RMS residuals of 0.2 -- 0.4 km and 1-$\sigma$ errors in $a, ae,$ and $a\sin i$ of order 0.1 km, registered on an absolute radius scale accurate to 0.2 km at the 2-$\sigma$ level. The $\lambda$ ring shows more substantial scatter. In addition to the free modes $m=0$ in the $\gamma$ ring and $m=2$ in the $\delta$ ring, we find two additional outer Lindblad resonance (OLR) modes ($m=-1$ and $-2$) and a possible $m=3$ inner Lindblad resonance (ILR) mode in the $\gamma$ ring. No normal modes are detected for rings 6, 5, 4, $\alpha$, or $\beta$. Five normal modes are forced by small moonlets: the 3:2 ILR of Cressida with the $\eta$ ring, the 6:5 ILR of Ophelia with the $\gamma$ ring, the 23:22 ILR of Cordelia with the $\delta$ ring, the 14:13 ILR of Ophelia with the outer edge of the $\epsilon$ ring, and the counterpart 25:24 OLR of Cordelia with the ring's inner edge. We determine the width-radius relations for nearly all of the detected modes. We find no convincing evidence for librations of any of the rings. The Uranus pole direction at epoch TDB 1986 Jan 19 12:00 is $\alpha_P=77.311327\pm 0.000141^\circ$ and $\delta_P=15.172795\pm0.000618^\circ$. We determine the zonal gravitational coefficients $J_2=(3509.291\pm0.412)\times 10^{-6}, J_4=(-35.522\pm0.466)\times10^{-6}$, and $J_6$ fixed at $0.5\times 10^{-6}$, with a correlation coefficient $\rho(J_2,J_4)=0.9861$, for a reference radius $R=$25559 km. From the amplitudes and resonance radii of normal modes forced by moonlets, we determine the masses of Cressida, Cordelia, and Ophelia. Their estimated densities decrease systematically with increasing orbital radius and generally follow the radial trend of the Roche critical density for a shape parameter $\gamma=1.6$., Comment: 94 pages, 53 figures
- Published
- 2024
14. SMERF: Streamable Memory Efficient Radiance Fields for Real-Time Large-Scene Exploration
- Author
Duckworth, Daniel, Hedman, Peter, Reiser, Christian, Zhizhin, Peter, Thibert, Jean-François, Lučić, Mario, Szeliski, Richard, and Barron, Jonathan T.
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics
- Abstract
Recent techniques for real-time view synthesis have rapidly advanced in fidelity and speed, and modern methods are capable of rendering near-photorealistic scenes at interactive frame rates. At the same time, a tension has arisen between explicit scene representations amenable to rasterization and neural fields built on ray marching, with state-of-the-art instances of the latter surpassing the former in quality while being prohibitively expensive for real-time applications. In this work, we introduce SMERF, a view synthesis approach that achieves state-of-the-art accuracy among real-time methods on large scenes with footprints up to 300 m$^2$ at a volumetric resolution of 3.5 mm$^3$. Our method is built upon two primary contributions: a hierarchical model partitioning scheme, which increases model capacity while constraining compute and memory consumption, and a distillation training strategy that simultaneously yields high fidelity and internal consistency. Our approach enables full six degrees of freedom (6DOF) navigation within a web browser and renders in real-time on commodity smartphones and laptops. Extensive experiments show that our method exceeds the current state-of-the-art in real-time novel view synthesis by 0.78 dB on standard benchmarks and 1.78 dB on large scenes, renders frames three orders of magnitude faster than state-of-the-art radiance field models, and achieves real-time performance across a wide variety of commodity devices, including smartphones. We encourage readers to explore these models interactively at our project website: https://smerf-3d.github.io., Comment: Camera Ready. Project website: https://smerf-3d.github.io
- Published
- 2023
15. Inpaint3D: 3D Scene Content Generation using 2D Inpainting Diffusion
- Author
Prabhu, Kira, Wu, Jane, Tsai, Lynn, Hedman, Peter, Goldman, Dan B, Poole, Ben, and Broxton, Michael
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
This paper presents a novel approach to inpainting 3D regions of a scene, given masked multi-view images, by distilling a 2D diffusion model into a learned 3D scene representation (e.g. a NeRF). Unlike 3D generative methods that explicitly condition the diffusion model on camera pose or multi-view information, our diffusion model is conditioned only on a single masked 2D image. Nevertheless, we show that this 2D diffusion model can still serve as a generative prior in a 3D multi-view reconstruction problem where we optimize a NeRF using a combination of score distillation sampling and NeRF reconstruction losses. Predicted depth is used as additional supervision to encourage accurate geometry. We compare our approach to 3D inpainting methods that focus on object removal. Because our method can generate content to fill any 3D masked region, we additionally demonstrate 3D object completion, 3D object replacement, and 3D scene completion.
- Published
- 2023
16. Saturn's Atmosphere in Northern Summer Revealed by JWST/MIRI
- Author
Fletcher, Leigh N., King, Oliver R. T., Harkett, Jake, Hammel, Heidi B., Roman, Michael T., Melin, Henrik, Hedman, Matthew M., Moses, Julianne I., Guerlet, Sandrine, Milam, Stefanie N., and Tiscareno, Matthew S.
- Subjects
Astrophysics - Earth and Planetary Astrophysics
- Abstract
Saturn's northern summertime hemisphere was mapped by JWST/MIRI (4.9-27.9 $\mu$m) in November 2022, tracing the seasonal evolution of temperatures, aerosols, and chemical species in the five years since the end of the Cassini mission. The spectral region between reflected sunlight and thermal emission (5.1-6.8 $\mu$m) is mapped for the first time, enabling retrievals of phosphine, ammonia, and water, alongside a system of two aerosol layers (an upper tropospheric haze $p<0.3$ bars, and a deeper cloud layer at 1-2 bars). Ammonia displays substantial equatorial enrichment, suggesting similar dynamical processes to those found in Jupiter's equatorial zone. Saturn's North Polar Stratospheric Vortex has warmed since 2017, entrained by westward winds at $p<10$ mbar, and exhibits localised enhancements in several hydrocarbons. The strongest latitudinal temperature gradients are co-located with the peaks of the zonal winds, implying wind decay with altitude. Reflectivity contrasts at 5-6 $\mu$m compare favourably with albedo contrasts observed by Hubble, and several discrete vortices are observed. A warm equatorial stratospheric band in 2022 is not consistent with a 15-year repeatability for the equatorial oscillation. A stacked system of windshear zones dominates Saturn's equatorial stratosphere, and implies a westward equatorial jet near 1-5 mbar at this epoch. Lower stratospheric temperatures, and local minima in the distributions of several hydrocarbons, imply low-latitude upwelling and a reversal of Saturn's interhemispheric circulation since equinox. Latitudinal distributions of stratospheric ethylene, benzene, methyl and carbon dioxide are presented for the first time, and we report the first detection of propane bands in the 8-11 $\mu$m region., Comment: 53 pages, 25 figures, accepted for publication in JGR: Planets
- Published
- 2023
17. JWST molecular mapping and characterization of Enceladus' water plume feeding its torus
- Author
Villanueva, G. L., Hammel, H. B., Milam, S. N., Kofman, V., Faggi, S., Glein, C. R., Cartwright, R., Roth, L., Hand, K. P., Paganini, L., Spencer, J., Stansberry, J., Holler, B., Rowe-Gurney, N., Protopapa, S., Strazzulla, G., Liuzzi, G., Cruz-Mermy, G., Moutamid, M. El, Hedman, M., and Denny, K.
- Subjects
Astrophysics - Earth and Planetary Astrophysics
- Abstract
Enceladus is a prime target in the search for life in our solar system, having an active plume likely connected to a large liquid water subsurface ocean. Using the sensitive NIRSpec instrument onboard JWST, we searched for organic compounds and characterized the plume's composition and structure. The observations directly sample the fluorescence emissions of H2O and reveal an extraordinarily extensive plume (up to 10,000 km or 40 Enceladus radii) at cryogenic temperatures (25 K) embedded in a large bath of emission originating from Enceladus' torus. Intriguingly, the observed outgassing rate (300 kg/s) is similar to that derived from close-up observations with Cassini 15 years ago, and the torus density is consistent with previous spatially unresolved measurements with Herschel 13 years ago, suggesting that the vigor of gas eruption from Enceladus has been relatively stable over decadal timescales. This level of activity is sufficient to maintain a derived column density of 4.5x10^17 m^-2 for the embedding equatorial torus, and establishes Enceladus as the prime source of water across the Saturnian system. We performed searches for several non-water gases (CO2, CO, CH4, C2H6, CH3OH), but none were identified in the spectra. On the surface of the trailing hemisphere, we observe strong H2O ice features, including its crystalline form, yet we do not recover CO2, CO nor NH3 ice signatures from these observations. As we prepare to send new spacecraft into the outer solar system, these observations demonstrate the unique ability of JWST in providing critical support to the exploration of distant icy bodies and cryovolcanic plumes., Comment: Accepted for publication in Nature Astronomy on May 17th 2023
- Published
- 2023
18. Eclipse: Disambiguating Illumination and Materials using Unintended Shadows
- Author
Verbin, Dor, Mildenhall, Ben, Hedman, Peter, Barron, Jonathan T., Zickler, Todd, and Srinivasan, Pratul P.
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics
- Abstract
Decomposing an object's appearance into representations of its materials and the surrounding illumination is difficult, even when the object's 3D shape is known beforehand. This problem is especially challenging for diffuse objects: it is ill-conditioned because diffuse materials severely blur incoming light, and it is ill-posed because diffuse materials under high-frequency lighting can be indistinguishable from shiny materials under low-frequency lighting. We show that it is possible to recover precise materials and illumination -- even from diffuse objects -- by exploiting unintended shadows, like the ones cast onto an object by the photographer who moves around it. These shadows are a nuisance in most previous inverse rendering pipelines, but here we exploit them as signals that improve conditioning and help resolve material-lighting ambiguities. We present a method based on differentiable Monte Carlo ray tracing that uses images of an object to jointly recover its spatially-varying materials, the surrounding illumination environment, and the shapes of the unseen light occluders who inadvertently cast shadows upon it., Comment: Project page: https://dorverbin.github.io/eclipse/
- Published
- 2023
19. New Insights into Variations in Enceladus Plume Particle Launch Velocities from Cassini-VIMS spectral data
- Author
Sharma, H., Hedman, M. M., and Vahidinia, S.
- Subjects
Astrophysics - Earth and Planetary Astrophysics
- Abstract
Enceladus' plume consists mainly of a mixture of water vapor and solid ice particles that may originate from a subsurface ocean. The physical processes underlying Enceladus' plume particle dynamics are still being debated, and quantifying the particles' size distribution and launch velocities can help constrain these processes. Cassini's Visual and Infrared Mapping Spectrometer (VIMS) observed the Enceladus plume over a wavelength range of 0.9 micron to 5.0 microns for a significant fraction of Enceladus' orbital period on three dates in the summer of 2017. We find that the relative brightness of the plume on these different dates varies with wavelength, implying that the particle size distribution in the plume changes over time. These observations also enable us to study how the particles' launch velocities vary with time and observed wavelength. We find that the typical launch velocity of particles remains between 140 m/s and 148 m/s at wavelengths between 1.2 microns and 3.7 microns. This may not be consistent with prior models where particles are only accelerated by interactions with the vent walls and gas, and could imply that mutual particle collisions close to the vent are more important than previously recognized., Comment: 13 pages, 8 figures, accepted for publication in PSJ
- Published
- 2023
20. Examining Uranus' zeta ring in Voyager 2 Wide-Angle-Camera Observations: Quantifying the Ring's Structure in 1986 and its Modifications prior to the Year 2007
- Author
Hedman, M. M., Regan, I., Becker, T., Brooks, S. M., de Pater, I., and Showalter, M.
- Subjects
Astrophysics - Earth and Planetary Astrophysics
- Abstract
The zeta ring is the innermost component of the Uranian ring system. It is of scientific interest because its morphology changed significantly between the Voyager 2 encounter in 1986 and subsequent Earth-based observations around 2007. It is also of practical interest because some Uranus mission concepts have the spacecraft pass through the inner flank of this ring. Recent re-examinations of the Voyager 2 images have revealed additional information about this ring that provide a more complete picture of the ring's radial brightness profile and phase function. These data reveal that this ring's brightness varies with phase angle in a manner similar to other tenuous rings, consistent with it being composed primarily of sub-millimeter-sized particles. The total cross section of particles within this ring can also be estimated from these data, but translating that number into the actual risk to a spacecraft flying through this region depends on a number of model-dependent parameters. Fortunately, comparisons with Saturn's G and D rings allows the zeta-ring's particle number density to be compared with regions previously encountered by the Voyager and Cassini spacecraft. Finally, these data indicate that the observed changes in the zeta-ring's structure between 1986 and 2007 are primarily due to a substantial increase in the amount of dust at distances between 38,000 km and 40,000 km from Uranus' center., Comment: 28 Pages, 12 Figures, Accepted for publication in PSJ, fixed a few small issues found in the proofs
- Published
- 2023
21. Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields
- Author
Barron, Jonathan T., Mildenhall, Ben, Verbin, Dor, Srinivasan, Pratul P., and Hedman, Peter
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics, Computer Science - Machine Learning
- Abstract
Neural Radiance Field training can be accelerated through the use of grid-based representations in NeRF's learned mapping from spatial coordinates to colors and volumetric density. However, these grid-based approaches lack an explicit understanding of scale and therefore often introduce aliasing, usually in the form of jaggies or missing scene content. Anti-aliasing has previously been addressed by mip-NeRF 360, which reasons about sub-volumes along a cone rather than points along a ray, but this approach is not natively compatible with current grid-based techniques. We show how ideas from rendering and signal processing can be used to construct a technique that combines mip-NeRF 360 and grid-based models such as Instant NGP to yield error rates that are 8% - 77% lower than either prior technique, and that trains 24x faster than mip-NeRF 360., Comment: Project page: https://jonbarron.info/zipnerf/
- Published
- 2023
22. Vox-E: Text-guided Voxel Editing of 3D Objects
- Author
Sella, Etai, Fiebelman, Gal, Hedman, Peter, and Averbuch-Elor, Hadar
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics
- Abstract
Large scale text-guided diffusion models have garnered significant attention due to their ability to synthesize diverse images that convey complex visual concepts. This generative power has more recently been leveraged to perform text-to-3D synthesis. In this work, we present a technique that harnesses the power of latent diffusion models for editing existing 3D objects. Our method takes oriented 2D images of a 3D object as input and learns a grid-based volumetric representation of it. To guide the volumetric representation to conform to a target text prompt, we follow unconditional text-to-3D methods and optimize a Score Distillation Sampling (SDS) loss. However, we observe that combining this diffusion-guided loss with an image-based regularization loss that encourages the representation not to deviate too strongly from the input object is challenging, as it requires achieving two conflicting goals while viewing only structure-and-appearance coupled 2D projections. Thus, we introduce a novel volumetric regularization loss that operates directly in 3D space, utilizing the explicit nature of our 3D representation to enforce correlation between the global structure of the original and edited object. Furthermore, we present a technique that optimizes cross-attention volumetric grids to refine the spatial extent of the edits. Extensive experiments and comparisons demonstrate the effectiveness of our approach in creating a myriad of edits which cannot be achieved by prior works., Comment: ICCV 2023. Project webpage: https://tau-vailab.github.io/Vox-E/
- Published
- 2023
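For reference, the Score Distillation Sampling (SDS) loss that the abstract above optimizes is conventionally written as the gradient below; the notation follows the text-to-3D literature (x = g(θ) is a rendering of the 3D representation, x_t its noised version, ε̂_φ the diffusion model's noise prediction for prompt y, and w(t) a weighting), and the paper's additional volumetric regularizer is not shown.

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}
  = \mathbb{E}_{t,\epsilon}\!\left[ w(t)\,
      \big(\hat{\epsilon}_\phi(x_t;\, y, t) - \epsilon\big)\,
      \frac{\partial x}{\partial \theta} \right],
  \qquad x = g(\theta), \quad x_t = \alpha_t x + \sigma_t \epsilon .
```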
23. BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis
- Author
Yariv, Lior, Hedman, Peter, Reiser, Christian, Verbin, Dor, Srinivasan, Pratul P., Szeliski, Richard, Barron, Jonathan T., and Mildenhall, Ben
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
We present a method for reconstructing high-quality meshes of large unbounded real-world scenes suitable for photorealistic novel view synthesis. We first optimize a hybrid neural volume-surface scene representation designed to have well-behaved level sets that correspond to surfaces in the scene. We then bake this representation into a high-quality triangle mesh, which we equip with a simple and fast view-dependent appearance model based on spherical Gaussians. Finally, we optimize this baked representation to best reproduce the captured viewpoints, resulting in a model that can leverage accelerated polygon rasterization pipelines for real-time view synthesis on commodity hardware. Our approach outperforms previous scene representations for real-time rendering in terms of accuracy, speed, and power consumption, and produces high quality meshes that enable applications such as appearance editing and physical simulation., Comment: Video and interactive web demo available at https://bakedsdf.github.io/
- Published
- 2023
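The "simple and fast view-dependent appearance model based on spherical Gaussians" mentioned in the abstract above can be sketched as a diffuse color plus a few spherical-Gaussian lobes per surface point. The lobe parameterization below is the standard one; the lobe count and values are hypothetical and may differ in detail from the paper's.

```python
import numpy as np

def view_dependent_color(d, c_diffuse, lobe_axes, lobe_sharpness, lobe_colors):
    """Diffuse color plus a sum of spherical-Gaussian lobes for a unit view
    direction d. Each lobe contributes c_i * exp(lambda_i * (mu_i . d - 1)),
    assuming the lobe axes mu_i are unit vectors."""
    d = d / np.linalg.norm(d)
    dots = lobe_axes @ d                                  # (num_lobes,)
    weights = np.exp(lobe_sharpness * (dots - 1.0))       # (num_lobes,)
    return c_diffuse + weights @ lobe_colors              # (3,)

# toy example: one shiny lobe pointing along +z (hypothetical parameters)
print(view_dependent_color(
    np.array([0.1, 0.0, 1.0]),
    c_diffuse=np.array([0.2, 0.2, 0.2]),
    lobe_axes=np.array([[0.0, 0.0, 1.0]]),
    lobe_sharpness=np.array([20.0]),
    lobe_colors=np.array([[0.8, 0.8, 0.8]]),
))
```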
24. MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes
- Author
Reiser, Christian, Szeliski, Richard, Verbin, Dor, Srinivasan, Pratul P., Mildenhall, Ben, Geiger, Andreas, Barron, Jonathan T., and Hedman, Peter
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics
- Abstract
Neural radiance fields enable state-of-the-art photorealistic view synthesis. However, existing radiance field representations are either too compute-intensive for real-time rendering or require too much memory to scale to large scenes. We present a Memory-Efficient Radiance Field (MERF) representation that achieves real-time rendering of large-scale scenes in a browser. MERF reduces the memory consumption of prior sparse volumetric radiance fields using a combination of a sparse feature grid and high-resolution 2D feature planes. To support large-scale unbounded scenes, we introduce a novel contraction function that maps scene coordinates into a bounded volume while still allowing for efficient ray-box intersection. We design a lossless procedure for baking the parameterization used during training into a model that achieves real-time rendering while still preserving the photorealistic view synthesis quality of a volumetric radiance field., Comment: Video and interactive web demo available at https://merf42.github.io
- Published
- 2023
25. Dynamics of growing carbon nanotube interfaces probed by machine learning-enabled molecular simulations
- Author
Hedman, Daniel, McLean, Ben, Bichara, Christophe, Maruyama, Shigeo, Larsson, J. Andreas, and Ding, Feng
- Subjects
Condensed Matter - Materials Science, Physics - Computational Physics
- Abstract
Carbon nanotubes (CNTs) are currently considered a successor to silicon in future nanoelectronic devices. To realize this, controlled growth of defect-free nanotubes is required. Until now, the understanding of atomic-scale CNT growth mechanisms provided by molecular dynamics simulations has been hampered by their short timescales. Here, we develop an efficient and accurate machine learning force field, DeepCNT-22, to simulate the complete growth of defect-free single-walled CNTs (SWCNTs) on iron catalysts at near-microsecond timescales. We provide atomic-level insight into the nucleation and growth processes of SWCNTs, including the evolution of the tube-catalyst interface and the mechanisms underlying defect formation and healing. Our simulations highlight the maximization of SWCNT-edge configurational entropy during growth and how defect-free CNTs can grow ultralong if carbon supply and temperature are carefully controlled., Comment: Supporting Videos can be found on YouTube at the following links S1: https://youtu.be/K90Ca6uDNEQ S2: https://youtu.be/x8Z5Go5iW58 S3: https://youtu.be/e1Yx14PQjkg S4: https://youtu.be/JFKhklSHgA4
- Published
- 2023
26. Shared and distinct effect mediators in exposure-based and traditional cognitive behavior therapy for fibromyalgia: Secondary analysis of a randomized controlled trial
- Author
Hedman-Lagerlöf, Maria, Buhrman, Monica, Hedman-Lagerlöf, Erik, Ljótsson, Brjánn, and Axelsson, Erland
- Published
- 2024
27. AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training
- Author
Jiang, Yifan, Hedman, Peter, Mildenhall, Ben, Xu, Dejia, Barron, Jonathan T., Wang, Zhangyang, and Xue, Tianfan
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Neural Radiance Fields (NeRFs) are a powerful representation for modeling a 3D scene as a continuous function. Though NeRF is able to render complex 3D scenes with view-dependent effects, few efforts have been devoted to exploring its limits in a high-resolution setting. Specifically, existing NeRF-based methods face several limitations when reconstructing high-resolution real scenes, including a very large number of parameters, misaligned input data, and overly smooth details. In this work, we conduct the first pilot study on training NeRF with high-resolution data and propose the corresponding solutions: 1) marrying the multilayer perceptron (MLP) with convolutional layers which can encode more neighborhood information while reducing the total number of parameters; 2) a novel training strategy to address misalignment caused by moving objects or small camera calibration errors; and 3) a high-frequency aware loss. Our approach is nearly free without introducing obvious training/testing costs, while experiments on different datasets demonstrate that it can recover more high-frequency details compared with the current state-of-the-art NeRF models. Project page: \url{https://yifanjiang.net/alignerf.}
- Published
- 2022
28. A First Course in Logic: An Introduction to Model Theory, Proof Theory, Computability, and Complexity, by Shawn Hedman
- Author
Urquhart, Alasdair
- Published
- 2007
29. MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures
- Author
Chen, Zhiqin, Funkhouser, Thomas, Hedman, Peter, and Tagliasacchi, Andrea
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics, Computer Science - Machine Learning
- Abstract
Neural Radiance Fields (NeRFs) have demonstrated amazing ability to synthesize images of 3D scenes from novel views. However, they rely upon specialized volumetric rendering algorithms based on ray marching that are mismatched to the capabilities of widely deployed graphics hardware. This paper introduces a new NeRF representation based on textured polygons that can synthesize novel images efficiently with standard rendering pipelines. The NeRF is represented as a set of polygons with textures representing binary opacities and feature vectors. Traditional rendering of the polygons with a z-buffer yields an image with features at every pixel, which are interpreted by a small, view-dependent MLP running in a fragment shader to produce a final pixel color. This approach enables NeRFs to be rendered with the traditional polygon rasterization pipeline, which provides massive pixel-level parallelism, achieving interactive frame rates on a wide range of compute platforms, including mobile phones., Comment: CVPR 2023. Project page: https://mobile-nerf.github.io, code: https://github.com/google-research/jax3d/tree/main/jax3d/projects/mobilenerf
- Published
- 2022
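A rough NumPy stand-in for the deferred shading step described in the abstract above: the rasterizer produces a feature image, and a small view-dependent MLP (run per pixel in a fragment shader in the actual system) converts the features plus view direction into a color. Shapes and weights here are placeholders, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the rasterizer output: an H x W x 8 feature image
# (textured polygons with binary opacities rendered through a z-buffer),
# plus a per-pixel unit view direction.
H, W, F = 4, 4, 8
features = rng.uniform(size=(H, W, F))
view_dirs = np.tile(np.array([0.0, 0.0, 1.0]), (H, W, 1))

# Tiny view-dependent MLP with placeholder weights.
W1 = rng.normal(size=(F + 3, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 3));     b2 = np.zeros(3)

x = np.concatenate([features, view_dirs], axis=-1)        # (H, W, F + 3)
hidden = np.maximum(x @ W1 + b1, 0.0)                     # ReLU
rgb = 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))           # sigmoid -> colors in [0, 1]
print(rgb.shape)                                          # (4, 4, 3)
```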
30. Ring Seismology of the Ice Giants Uranus and Neptune
- Author
A'Hearn, Joseph A., Hedman, Matthew M., Mankovich, Christopher R., Aramona, Hima, and Marley, Mark S.
- Subjects
Astrophysics - Earth and Planetary Astrophysics
- Abstract
We assess the prospect of using ring seismology to probe the interiors of the ice giants Uranus and Neptune. We do this by calculating normal mode spectra for different interior models of Uranus and Neptune using the stellar oscillation code GYRE. These spectra provide predictions of where in these planets' ring systems the effects of interior oscillations might be detected. We find that f-mode resonances with azimuthal order $m=2$ or $7 \leq m \leq 19$ fall among the inner rings (6, 5, 4, $\alpha$, and $\beta$) of Uranus, while f-mode resonances with $2 \leq m \leq 12$ fall in the tenuous $\zeta$ ring region. In addition, f-mode resonances with $m=2$ or $6 \leq m \leq 13$ may give azimuthal structure to Neptune's tenuous Galle ring. We also find that g-mode resonances may fall in the middle to outer rings of these planets. Although an orbiter is most likely required to confirm the association between any waves in the rings and planetary normal modes, the diversity of normal mode spectra implies that identification of just one or two modes in the rings of Uranus or Neptune would eliminate a variety of interior models, and thus aid in the interpretation of Voyager observations and future spacecraft measurements., Comment: 27 pages, 8 figures, accepted for publication in The Planetary Science Journal
- Published
- 2022
31. LFW-Beautified: A Dataset of Face Images with Beautification and Augmented Reality Filters
- Author
Hedman, Pontus, Skepetzis, Vasilios, Hernandez-Diaz, Kevin, Bigun, Josef, and Alonso-Fernandez, Fernando
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Selfie images enjoy huge popularity in social media. The platforms centered around sharing this type of image offer filters to beautify them or incorporate augmented reality effects. Studies suggest that filtered images attract more views and engagement. Selfie images are also in increasing use in security applications due to mobiles becoming data hubs for many transactions. Also, video conference applications, which boomed during the pandemic, include such filters. Such filters may destroy biometric features that would allow person recognition or even detection of the face itself, even if such commodity applications are not necessarily used to compromise facial systems. This could also affect subsequent investigations like crimes in social media, where automatic analysis is usually necessary given the amount of information posted in social sites or stored in devices or cloud repositories. To help in counteracting such issues, we contribute a database of facial images that includes several manipulations. It includes image enhancement filters (which mostly modify contrast and lighting) and augmented reality filters that incorporate items like animal noses or glasses. Additionally, images with sunglasses are processed with a reconstruction network trained to learn to reverse such modifications. This is because obfuscating the eye region has been observed in the literature to have the highest impact on the accuracy of face detection or recognition. We start from the popular Labeled Faces in the Wild (LFW) database, to which we apply different modifications, generating 8 datasets. Each dataset contains 4,324 images of size 64 x 64, with a total of 34,592 images. The use of a public and widely employed face dataset allows for replication and comparison. The created database is available at https://github.com/HalmstadUniversityBiometrics/LFW-Beautified, Comment: Under consideration at Elsevier Data in Brief
- Published
- 2022
32. Recommendations for Deprescribing of Medication in the Last Phase of Life: An International Delphi Study
- Author
Elsten, Eline E.C.M., Pot, Iris E., Geijteman, Eric C.T., Hedman, Christel, van der Heide, Agnes, van der Kuy, P. Hugo M., Fürst, Carl-Johan, Eychmüller, Steffen, van Zuylen, Lia, and van der Rijt, Carin C.D.
- Published
- 2024
33. Percent predicted peak oxygen uptake is superior to weight-indexed peak oxygen uptake in risk stratification before lung cancer lobectomy
- Author
Kristenson, Karolina and Hedman, Kristofer
- Published
- 2024
34. Body mass index and risk of over 100 cancer forms and subtypes in 4.1 million individuals in Sweden: the Obesity and Disease Development Sweden (ODDS) pooled cohort study
- Author
Sun, Ming, da Silva, Marisa, Bjørge, Tone, Fritz, Josef, Mboya, Innocent B., Jerkeman, Mats, Stattin, Pär, Wahlström, Jens, Michaëlsson, Karl, van Guelpen, Bethany, Magnusson, Patrik K.E., Sandin, Sven, Yin, Weiyao, Lagerros, Ylva Trolle, Ye, Weimin, Nwaru, Bright, Kankaanranta, Hannu, Lönnberg, Lena, Chabok, Abbas, Isaksson, Karolin, Pedersen, Nancy L., Elmståhl, Sölve, Lind, Lars, Hedman, Linnea, Häggström, Christel, and Stocks, Tanja
- Published
- 2024
35. Time trends of the association of body mass index with mortality in 3.5 million young Swedish adults
- Author
Mboya, Innocent B., Fritz, Josef, da Silva, Marisa, Sun, Ming, Wahlström, Jens, Magnusson, Patrik K.E., Sandin, Sven, Yin, Weiyao, Söderberg, Stefan, Pedersen, Nancy L., Lagerros, Ylva Trolle, Nwaru, Bright I., Kankaanranta, Hannu, Chabok, Abbas, Leppert, Jerzy, Backman, Helena, Hedman, Linnea, Isaksson, Karolin, Michaëlsson, Karl, Häggström, Christel, and Stocks, Tanja
- Published
- 2024
36. Quantifying the economic costs of power outages owing to extreme events: A systematic review
- Author
Ghodeswar, Archana, Bhandari, Mahabir, and Hedman, Bruce
- Published
- 2025
37. Stable selenium nickel-iron electrocatalyst for oxygen evolution reaction in alkaline and natural seawater
- Author
Wang, Jue, Li, Zhi, Feng, Libei, Lu, Dachun, Fang, Wei, Zhang, Qinfang, Hedman, Daniel, and Tong, Shengfu
- Published
- 2025
38. Kronoseismology VI: Reading the recent history of Saturn's gravity field in its rings
- Author
Hedman, M. M., Nicholson, P. D., Moutamid, M. El, and Smotherman, S.
- Subjects
Astrophysics - Earth and Planetary Astrophysics
- Abstract
Saturn's C ring contains multiple structures that appear to be density waves driven by time-variable anomalies in the planet's gravitational field. Semi-empirical extensions of density wave theory enable the observed wave properties to be translated into information about how the pattern speeds and amplitudes of these gravitational anomalies have changed over time. Combining these theoretical tools with wavelet-based analyses of data obtained by the Visual and Infrared Mapping Spectrometer (VIMS) onboard the Cassini spacecraft reveals a suite of structures in Saturn's gravity field with azimuthal wavenumber 3, rotation rates between 804 degrees/day and 842 degrees/day and local gravitational potential amplitudes between 30 and 150 cm^2/s^2. Some of these anomalies are transient, appearing and disappearing over the course of a few Earth years, while others persist for decades. Most of these persistent patterns appear to have roughly constant pattern speeds, but there is at least one structure in the planet's gravitational field whose rotation rate steadily increased between 1970 and 2010. This gravitational field structure appears to induce two different asymmetries in the planet's gravity field, one with azimuthal wavenumber 3 that rotates at roughly 810 degrees/day and another with azimuthal wavenumber 1 rotating three times faster. The atmospheric processes responsible for generating the latter pattern may involve solar tides., Comment: 60 pages, 29 Figures, accepted for publication in the Planetary Science Journal
- Published
- 2022
39. Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields
- Author
Verbin, Dor, Hedman, Peter, Mildenhall, Ben, Zickler, Todd, Barron, Jonathan T., and Srinivasan, Pratul P.
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics
- Abstract
Neural Radiance Fields (NeRF) is a popular view synthesis technique that represents a scene as a continuous volumetric function, parameterized by multilayer perceptrons that provide the volume density and view-dependent emitted radiance at each location. While NeRF-based techniques excel at representing fine geometric structures with smoothly varying view-dependent appearance, they often fail to accurately capture and reproduce the appearance of glossy surfaces. We address this limitation by introducing Ref-NeRF, which replaces NeRF's parameterization of view-dependent outgoing radiance with a representation of reflected radiance and structures this function using a collection of spatially-varying scene properties. We show that together with a regularizer on normal vectors, our model significantly improves the realism and accuracy of specular reflections. Furthermore, we show that our model's internal representation of outgoing radiance is interpretable and useful for scene editing., Comment: Project page: https://dorverbin.github.io/refnerf/
- Published
- 2021
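The reflected-radiance reparameterization referred to in the abstract above is built on the standard mirror-reflection direction; a minimal sketch follows (sign conventions for the view direction vary across papers).

```python
import numpy as np

def reflect(view_dir, normal):
    """Mirror-reflect the unit direction from the surface point toward the
    camera about the unit surface normal: r = 2 (n . v) n - v."""
    v = view_dir / np.linalg.norm(view_dir)
    n = normal / np.linalg.norm(normal)
    return 2.0 * np.dot(n, v) * n - v

# grazing view of an upward-facing surface
print(reflect(np.array([1.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))
# -> approximately [-0.707, 0., 0.707]
```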
40. Yoga for children and adolescents: A decade-long integrative review on feasibility and efficacy in school-based and psychiatric care interventions
- Author
Kerekes, Nóra, Söderström, Alexandra, Holmberg, Christine, and Hedman Ahlström, Britt
- Published
- 2024
41. Hydrological dynamics, wetland morphology and vegetation structure determine riparian arthropod communities in constructed wetlands
- Author
Åhlén, David, Hedman, Sofia, Jarsjö, Jerker, Klatt, Björn K., Schneider, Lea D., Strand, John, Tack, Ayco, Åhlén, Imenne, and Hambäck, Peter A.
- Published
- 2024
42. Effectiveness and prediction of treatment adherence to guided internet-based cognitive behavioral therapy for health anxiety: A cohort study in routine psychiatric care
- Author
Österman, Susanna, Axelsson, Erland, Forsell, Erik, Svanborg, Cecilia, Lindefors, Nils, Hedman-Lagerlöf, Erik, and Ivanov, Volen Z.
- Published
- 2024
43. Registry study of cardiovascular death in Sweden 2013–2019: Home as place of death and specialized palliative care are the preserve of a minority
- Author
Nyblom, Stina, Öhlén, Joakim, Larsdotter, Cecilia, Ozanne, Anneli, Fürst, Carl Johan, and Hedman, Ragnhild
- Published
- 2024
44. Assessment of DNA quality for whole genome library preparation
- Author
Jansson, Linda, Aili Fagerholm, Siri, Börkén, Emelie, Hedén Gynnå, Arvid, Sidstedt, Maja, Forsberg, Christina, Ansell, Ricky, Hedman, Johannes, and Tillmar, Andreas
- Published
- 2024
45. NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images
- Author
Mildenhall, Ben, Hedman, Peter, Martin-Brualla, Ricardo, Srinivasan, Pratul, and Barron, Jonathan T.
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics, Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Neural Radiance Fields (NeRF) is a technique for high quality novel view synthesis from a collection of posed input images. Like most view synthesis methods, NeRF uses tonemapped low dynamic range (LDR) as input; these images have been processed by a lossy camera pipeline that smooths detail, clips highlights, and distorts the simple noise distribution of raw sensor data. We modify NeRF to instead train directly on linear raw images, preserving the scene's full dynamic range. By rendering raw output images from the resulting NeRF, we can perform novel high dynamic range (HDR) view synthesis tasks. In addition to changing the camera viewpoint, we can manipulate focus, exposure, and tonemapping after the fact. Although a single raw image appears significantly more noisy than a postprocessed one, we show that NeRF is highly robust to the zero-mean distribution of raw noise. When optimized over many noisy raw inputs (25-200), NeRF produces a scene representation so accurate that its rendered novel views outperform dedicated single and multi-image deep raw denoisers run on the same wide baseline input images. As a result, our method, which we call RawNeRF, can reconstruct scenes from extremely noisy images captured in near-darkness., Comment: Project page: https://bmild.github.io/rawnerf/
- Published
- 2021
46. Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields
- Author
Barron, Jonathan T., Mildenhall, Ben, Verbin, Dor, Srinivasan, Pratul P., and Hedman, Peter
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics
- Abstract
Though neural radiance fields (NeRF) have demonstrated impressive view synthesis results on objects and small bounded regions of space, they struggle on "unbounded" scenes, where the camera may point in any direction and content may exist at any distance. In this setting, existing NeRF-like models often produce blurry or low-resolution renderings (due to the unbalanced detail and scale of nearby and distant objects), are slow to train, and may exhibit artifacts due to the inherent ambiguity of the task of reconstructing a large scene from a small set of images. We present an extension of mip-NeRF (a NeRF variant that addresses sampling and aliasing) that uses a non-linear scene parameterization, online distillation, and a novel distortion-based regularizer to overcome the challenges presented by unbounded scenes. Our model, which we dub "mip-NeRF 360" as we target scenes in which the camera rotates 360 degrees around a point, reduces mean-squared error by 57% compared to mip-NeRF, and is able to produce realistic synthesized views and detailed depth maps for highly intricate, unbounded real-world scenes., Comment: https://jonbarron.info/mipnerf360/
- Published
- 2021
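The non-linear scene parameterization mentioned in the abstract above is a contraction that leaves the unit ball untouched and maps all of space into a ball of radius 2; a minimal sketch, assuming the Euclidean-norm form of the contraction:

```python
import numpy as np

def contract(x):
    """Scene contraction for unbounded scenes: points inside the unit ball are
    unchanged, points outside are squashed into the radius-2 ball, so distant
    content occupies a bounded region of the parameterization."""
    norm = np.linalg.norm(x)
    if norm <= 1.0:
        return x
    return (2.0 - 1.0 / norm) * (x / norm)

print(contract(np.array([0.3, 0.0, 0.0])))    # unchanged
print(contract(np.array([100.0, 0.0, 0.0])))  # -> close to [1.99, 0, 0]
```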
47. Constraining low-altitude lunar dust using the LADEE-UVS data
- Author
Sharma, H., Hedman, M. M., Wooden, D. H., Colaprete, A., and Cook, A. M.
- Subjects
Astrophysics - Earth and Planetary Astrophysics
- Abstract
Studying lunar dust is vital to the exploration of the Moon and other airless planetary bodies. The Ultraviolet and Visible Spectrometer (UVS) on board the Lunar Atmosphere and Dust Environment Explorer (LADEE) spacecraft conducted a series of Almost Limb activities to look for dust near the dawn terminator region. During these activities the instrument stared at a fixed point in the zodiacal background off the Moon's limb while the spacecraft moved in retrograde orbit from the sunlit to the unlit side of the Moon. The spectra obtained from these activities probe altitudes within a few kilometers of the Moon's surface, a region whose dust populations were not well constrained by previous remote-sensing observations from orbiting spacecraft. Filtering these spectra to remove a varying instrumental signal enables constraints to be placed on potential signals from a dust atmosphere. These filtered spectra are compared with those predicted for dust atmospheres with various exponential scale heights and particle size distributions to yield upper limits on the dust number density for these potential populations. For a differential size distribution proportional to $s^{-3}$ (where $s$ is the particle size) and a scale height of 1 km, we obtain an upper limit on the number density of dust particles at the Moon's surface of 142 $m^{-3}$., Comment: Accepted for publication in JGR: Planets
- Published
- 2021
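As a back-of-the-envelope check on the numbers quoted in the abstract above, an exponential profile n(z) = n0 exp(-z/H) integrates to a vertical column density of n0*H; with the reported upper limit n0 = 142 m^-3 and H = 1 km this is about 1.4x10^5 m^-2. The published limits come from near-limb, line-of-sight geometry, so this vertical column is illustrative only.

```python
import numpy as np

n0 = 142.0    # upper limit on surface number density, particles per m^3 (from the abstract)
H = 1000.0    # assumed exponential scale height, m (the 1 km case from the abstract)

# Vertical column density: N = integral_0^inf n0 * exp(-z/H) dz = n0 * H
print("closed form:", n0 * H, "m^-2")

z = np.linspace(0.0, 20.0 * H, 200_001)
print("numerical  :", np.trapz(n0 * np.exp(-z / H), z), "m^-2")
```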
48. On the Effect of Selfie Beautification Filters on Face Detection and Recognition
- Author
Hedman, Pontus, Skepetzis, Vasilios, Hernandez-Diaz, Kevin, Bigun, Josef, and Alonso-Fernandez, Fernando
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Beautification and augmented reality filters are very popular in applications that use selfie images captured with smartphones or personal devices. However, they can distort or modify biometric features, severely affecting the capability of recognizing individuals' identity or even detecting the face. Accordingly, we address the effect of such filters on the accuracy of automated face detection and recognition. The social media image filters studied either modify the image contrast or illumination or occlude parts of the face with, for example, artificial glasses or animal noses. We observe that the effect of some of these filters is harmful both to face detection and identity recognition, especially if they obfuscate the eye or (to a lesser extent) the nose. To counteract this effect, we develop a method to reconstruct the applied manipulation with a modified version of the U-NET segmentation network. This is observed to contribute to better face detection and recognition accuracy. From a recognition perspective, we employ distance measures and trained machine learning algorithms applied to features extracted using a ResNet-34 network trained to recognize faces. We also evaluate whether incorporating filtered images into the training set of machine learning approaches is beneficial for identity recognition. Our results show good recognition when filters do not occlude important landmarks, especially the eyes (identification accuracy >99%, EER<2%). The combined effect of the proposed approaches also allows mitigating the effect produced by filters that occlude parts of the face, achieving an identification accuracy of >92% with the majority of perturbations evaluated, and an EER <8%. Although there is room for improvement, when neither U-NET reconstruction nor training with filtered images is applied, the accuracy with filters that severely occlude the eye is <72% (identification) and >12% (EER)., Comment: Published at Pattern Recognition Letters, 2022
- Published
- 2021
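The EER figures quoted in the abstract above refer to the operating point where the false acceptance and false rejection rates are equal; a minimal sketch of how it can be computed from similarity scores, using synthetic toy scores rather than the paper's data:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Approximate EER for similarity scores (higher = more similar): sweep
    thresholds and return the point where FAR and FRR cross."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false acceptance rate
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejection rate
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, 2000)   # toy mated-comparison scores
impostor = rng.normal(0.4, 0.15, 2000)  # toy non-mated scores
print(equal_error_rate(genuine, impostor))
```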
49. Identifying proteomic risk factors for overall, aggressive, and early onset prostate cancer using Mendelian Randomisation and tumour spatial transcriptomics
- Author
Desai, Trishna A., Hedman, Åsa K., Dimitriou, Marios, Koprulu, Mine, Figiel, Sandy, Yin, Wencheng, Johansson, Mattias, Watts, Eleanor L., Atkins, Joshua R., Sokolov, Aleksandr V., Schiöth, Helgi B., Gunter, Marc J., Tsilidis, Konstantinos K., Martin, Richard M., Pietzner, Maik, Langenberg, Claudia, Mills, Ian G., Lamb, Alastair D., Mälarstig, Anders, Key, Tim J., Travis, Ruth C., and Smith-Byrne, Karl
- Published
- 2024
50. Ultrasensitive sequencing of STR markers utilizing unique molecular identifiers and the SiMSen-Seq method
- Author
Sidstedt, Maja, Gynnå, Arvid H., Kiesler, Kevin M., Jansson, Linda, Steffen, Carolyn R., Håkansson, Joakim, Johansson, Gustav, Österlund, Tobias, Bogestål, Yalda, Tillmar, Andreas, Rådström, Peter, Ståhlberg, Anders, Vallone, Peter M., and Hedman, Johannes
- Published
- 2024