Li, Xiaoyin, Hathaway, Cassandra A., Small, Brent J., Tometich, Danielle B., Gudenkauf, Lisa M., Hoogland, Aasha I., Fox, Rina S., Victorson, David E., Salsman, John M., Gonzalez, Brian D., Jim, Heather S. L., Siegel, Erin M., Tworoger, Shelley S., and Oswald, Laura B.
Subjects
SOCIAL belonging, SOCIAL isolation, CONFIRMATORY factor analysis, STRUCTURAL equation modeling, YOUNG adults, SOCIAL anxiety
Abstract
Background: Social isolation and social connectedness are health determinants and aspects of social well‐being with strong associations with psychological distress. This study evaluated relationships among social isolation, social connectedness, and psychological distress (i.e., depression, anxiety) over 1 year in young adult (YA) cancer survivors 18–39 years old. Methods: Participants were YAs in a large cohort study who completed questionnaires every 2 months for 1 year. Social isolation, aspects of social connectedness (i.e., companionship, emotional support, instrumental support, and informational support), depression, and anxiety were assessed with Patient‐Reported Outcomes Measurement Information System short form measures. Mixed‐effect models were used to evaluate changes over time. Confirmatory factor analysis and multilevel structural equation modeling were used to define social connectedness as a latent construct and determine whether relationships between social isolation and psychological distress were mediated by social connectedness. Results: Participants (N = 304) were mean (M) = 33.5 years old (SD = 4.7) and M = 4.5 years (SD = 3.5) post‐initial cancer diagnosis. Most participants were female (67.4%) and non‐Hispanic White (68.4%). Average scores for social well‐being and psychological distress were within normative ranges and did not change (p values >.05). However, large proportions of participants reported at least mild social isolation (27%–30%), depressive symptoms (36%–37%), and symptoms of anxiety (49%–51%) at each time point. Across participants, more social isolation was related to less social connectedness (p values <.001), more depressive symptoms (p <.001), and more symptoms of anxiety (p <.001). Social connectedness mediated the relationship between social isolation and depression (p =.004), but not anxiety (p >.05). Conclusions: Social isolation and connectedness could be intervention targets for reducing depression among YA cancer survivors. [ABSTRACT FROM AUTHOR]
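As a rough illustration of the longitudinal modelling described in this abstract (repeated assessments nested within participants), the sketch below fits a random‐intercept mixed‐effects model with statsmodels in Python. The file name and column names are hypothetical placeholders, not variables from the study.

```python
# Minimal sketch of a longitudinal mixed-effects model: repeated PROMIS-style
# scores nested within participants, with a random intercept per participant.
# The file name and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ya_survivor_promis_long.csv")  # long format: one row per assessment

# Does depression change over time, and how does it relate to social isolation?
model = smf.mixedlm(
    "depression ~ months + isolation",   # fixed effects
    data=df,
    groups=df["participant_id"],         # random intercept per participant
)
print(model.fit().summary())
```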
Kovar, John L., Papanicolaou, Athanasios N., Busch, Dennis L., Chatterjee, Amit, Cole, Kevin J., Dalzell, Brent J., Emmett, Bryan D., Johnson, Jane M. F., Malone, Robert W., Morrow, Amy J., Nowatzke, Laurie W., O'Brien, Peter L., Prueger, John H., Rogovska, Natalia, Ruis, Sabrina J., Todey, Dennis P., and Wacha, Ken M.
Liebig, Mark A., Abendroth, Lori J., Robertson, G. Philip, Augustine, David, Boughton, Elizabeth H., Bagley, Gwendolynn, Busch, Dennis L., Clark, Pat, Coffin, Alisa W., Dalzell, Brent J., Dell, Curtis J., Fortuna, Ann‐Marie, Freidenreich, Ariel, Heilman, Philip, Helseth, Christina, Huggins, David R., Johnson, Jane M. F., Khorchani, Makki, King, Kevin, and Kovar, John L.
Meziab, Omar, Seckeler, Michael D., Scherer, Katalin, and Barber, Brent J.
Abstract
Introduction/Aims: Type 1 myotonic dystrophy (DM1) is a neuromuscular disorder of multiple organ systems with important electrophysiologic (EP) manifestations, leading to a cumulative incidence of sudden death of 6.6%. Due to genetic anticipation, there is a pediatric subset of this patient population. However, most EP research on DM1 has been conducted in adults, so cardiac care for pediatric patients is difficult and often directed by adult guidelines, which frequently leads to cardiovascular implantable electronic device (CIED) implants. We sought to investigate the prevalence of CIEDs in the pediatric DM1 population. Methods: The Vizient® Clinical Data Base was queried from October 2019 to October 2023 for admissions with and without the ICD‐10 code for myotonic dystrophy (G71.11), with and without codes for the presence of a pacemaker or ICD (Z95.0, Z95.810). Identified patients were stratified by age: Pediatric (0–21 years) and Adult (22–50 years). Results: Prevalence of CIED in pediatric DM1 was 2.1% and in adult DM1 was 15.8%. Compared with pediatric and adult patients without DM1, the odds ratio for CIED in pediatric DM1 was 48.8, versus 23.3 for CIED in adult DM1. Discussion: There are pediatric DM1 patients who have received CIED despite a lack of data to inform this decision‐making. Further research will be important to ensure appropriate use of CIED in this population and to develop appropriate guidelines to direct management. [ABSTRACT FROM AUTHOR]
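The odds ratios above compare the odds of a CIED among admissions with DM1 versus without DM1 within each age stratum. As a generic worked example of that calculation (the counts below are made up for illustration and are not the Vizient figures), an odds ratio from a 2×2 table can be computed as follows.

```python
# Worked example of an odds ratio from a 2x2 table. The counts are
# illustrative placeholders, not figures from the Vizient Clinical Data Base.
def odds_ratio(a, b, c, d):
    """OR = (a/b) / (c/d) for a 2x2 table: a,b = exposed with/without outcome;
    c,d = unexposed with/without outcome."""
    return (a * d) / (b * c)

# Hypothetical counts: CIED present/absent among admissions with and without DM1.
with_dm1_cied, with_dm1_no_cied = 20, 980
no_dm1_cied, no_dm1_no_cied = 5, 995

print(round(odds_ratio(with_dm1_cied, with_dm1_no_cied,
                       no_dm1_cied, no_dm1_no_cied), 1))  # -> 4.1
```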
Cookson, Adrian L., Devane, Meg, Marshall, Jonathan C., Moinet, Marie, Gardner, Amanda, Collis, Rose M., Rogers, Lynn, Biggs, Patrick J., Pita, Anthony B., Cornelius, Angela J., Haysom, Iain, Hayman, David T. S., Gilpin, Brent J., and Leonard, Margaret
Subjects
ESCHERICHIA coli, WATER quality, WATER sampling, ESCHERICHIA, RUMINANTS, CRYPTOSPORIDIUM
Abstract
Freshwater samples (n = 199) were obtained from 41 sites with contrasting land‐uses (avian, low impact, dairy, urban, sheep and beef, and mixed sheep, beef and dairy) and the E. coli phylotype of 3980 isolates (20 per water sample enrichment) was determined. Eight phylotypes were identified with B1 (48.04%), B2 (14.87%) and A (14.79%) the most abundant. Escherichia marmotae (n = 22) and Escherichia ruysiae (n = 1) were rare (0.68%), suggesting that these environmental strains are unlikely to confound water quality assessments. Phylotypes A and B1 were overrepresented in dairy and urban sites (p < 0.0001), whilst B2 was overrepresented in low impact sites (p < 0.0001). Pathogens (Salmonella, Campylobacter, Cryptosporidium or Giardia) and the presence of diarrhoeagenic E. coli‐associated genes (stx and eae) were detected in 89.9% (179/199) of samples, including 80.5% (33/41) of samples with putative non‐recent faecal inputs. Quantitative PCR results for microbial source tracking targets from human, ruminant and avian contamination were concordant with land‐use type and E. coli phylotype abundance. This study demonstrated that a potential recreational health risk remains where pathogens occurred in water samples with low E. coli concentrations, putative non‐recent faecal sources, low impact sites and where human, ruminant and avian faecal sources were absent. [ABSTRACT FROM AUTHOR]
Hoogland, Aasha I., Brohl, Andrew S., Small, Brent J., Michael, Lauren, Wuthrick, Evan, Eroglu, Zeynep, Blakaj, Dukagjin, Verschraegen, Claire, Khushalani, Nikhil I., Jim, Heather S. L., and Kim, Sungjune
Background: Merkel cell carcinoma is a rare skin cancer associated with poor survival. Based on a previous Phase II trial of adults with advanced Merkel cell carcinoma by Kim and colleagues (2022), there is now a strong rationale for combination therapy (i.e., nivolumab and ipilimumab) to become a treatment option for patients with advanced Merkel cell carcinoma. The goal of this paper was to report on the secondary outcome of quality of life (QOL) among patients on this trial. Methods: Patients receiving combined nivolumab and ipilimumab, with or without stereotactic body radiation therapy (SBRT), completed the European Organisation for Research and Treatment of Cancer (EORTC) QLQ‐C30 prior to starting treatment and every 2 weeks thereafter. Changes in QOL during treatment and post‐treatment were evaluated using piecewise random‐effects mixed models. Exploratory analyses compared changes in QOL between study arms. The original trial was registered with ClinicalTrials.gov (NCT03071406). Results: Study participants (n = 50) reported no changes in overall QOL (ps > 0.05), but emotional functioning improved during treatment (p = 0.01). Cognitive and social functioning worsened post‐treatment (ps < 0.01). In general, patients treated with combination therapy only (n = 25) reported no change in QOL over time, whereas patients also treated with SBRT (n = 25) consistently demonstrated worsening QOL post‐treatment. Conclusion: QOL is generally preserved in patients treated with combination therapy, but the addition of SBRT may worsen QOL. Combined with clinical efficacy data published previously, results support the use of combination therapy with nivolumab and ipilimumab as a treatment option for patients with advanced Merkel cell carcinoma. [ABSTRACT FROM AUTHOR]
Objective: Observational data suggest hope is associated with the quality of life and survival of people with cancer. This trial examined the feasibility, acceptability, and preliminary outcomes of "Pathways," a hope intervention for people in treatment for advanced lung cancer. Methods: Between 2020 and 2022, we conducted a single‐arm trial of Pathways among participants who were 3–12 weeks into systemic treatment. Pathways consisted of two individual sessions delivered during infusions and three phone calls in which participants discussed their values, goals, and goal strategies with a nurse or occupational therapist. Participants completed standardized measures of hope and goal interference pre‐ and post‐intervention. Feasibility was defined as ≥60% of eligible patients enrolling, ≥70% of participants completing three or more sessions, ≥70% of participants completing post‐assessments, and mean acceptability ratings ≥7 out of 10 on intervention relevance, helpfulness, and convenience. Linear regression fixed‐effects models with covariates modeled pre–post changes in complete‐case and multiple‐imputation analyses. Results: Fifty‐two participants enrolled: female (59.6%), non‐Hispanic White (84.6%), rural (75.0%), and with low educational attainment (51.9% high school degree or less). Except for enrollment (54%), feasibility and acceptability markers were surpassed (77% adherence, 77% retention, acceptability ratings ≥8/10). There was moderate improvement in hope and goal interference from pre‐ to post‐intervention (d = 0.51, p < 0.05 for hope; d = −0.70, p < 0.005 for goal interference). Conclusions: Strong feasibility, acceptability, and patient‐reported outcome data suggest Pathways is a promising intervention to increase hope and reduce cancer‐related goal interference during advanced lung cancer treatment. [ABSTRACT FROM AUTHOR]
Tometich, Danielle B., Welniak, Taylor, Gudenkauf, Lisa, Maconi, Melinda L., Fulton, Hayden J., Martinez Tyson, Dinorah, Zambrano, Kellie, Hasan, Syed, Rodriguez, Yvelise, Bryant, Crystal, Li, Xiaoyin, Reed, Damon R., Oswald, Laura B., Galligan, Andrew, Small, Brent J., and Jim, Heather S. L.
Subjects
PROSPECTIVE memory, CANCER survivors, COGNITIVE ability, YOUNG adults, CANCER patients, EXECUTIVE function
Abstract
Objective: There is a dearth of literature describing young adult (YA) cancer survivors' experiences with cancer‐related cognitive impairment (CRCI). We aimed to elucidate CRCI among YA cancer survivors and identify potentially modifiable risk factors. Methods: We conducted individual qualitative interviews with YA cancer survivors aged 18–30 years at study enrollment and used applied thematic analysis to identify themes across three topics (i.e., affected cognitive abilities, risk and protective factors influencing the impact of CRCI, and strategies for coping with CRCI). Results: YA cancer survivors (N = 20) were, on average, 23 years old at diagnosis and 26 years old when interviewed. Diverse cancer types and treatments were represented; most participants (85%) had completed cancer treatment. Participants described experiences across three qualitative topics: (1) affected cognitive abilities (i.e., concentration and attention, prospective memory, and long‐term memory), (2) risk factors (i.e., fatigue, sleep problems, mood, stress/distractions, and social isolation) and protective factors (i.e., social support), and (3) coping strategies, including practical strategies that helped build self‐efficacy (e.g., writing things down, reducing distractions), beneficial emotion‐focused coping strategies (e.g., focus on health, faith/religion), strategies with mixed effects (i.e., apps/games, medications/supplements, and yoga), and "powering through" strategies that exacerbated stress. Conclusions: YA cancer survivors experience enduring cognitive difficulties after treatment. Specific concerns highlight the importance of attention and executive functioning impairments, long‐term memory recall, and sensitivity to distractions. Future work is needed to improve assessment and treatment of CRCI among YA cancer survivors. [ABSTRACT FROM AUTHOR]
Disorders of Gut‐Brain Interaction (DGBI) are widely prevalent and commonly encountered in gastroenterology practice. While several peripheral and central mechanisms have been implicated in the pathogenesis of DGBI, a recent body of work suggests an important role for the gut microbiome. In this review, we highlight how gut microbiota and their metabolites affect physiologic changes underlying symptoms in DGBI, with a particular focus on their mechanistic influence on GI transit, visceral sensitivity, intestinal barrier function and secretion, and CNS processing. This review emphasizes the complexity of local and distant effects of microbial metabolites on physiological function, influenced by factors such as metabolite concentration, duration of metabolite exposure, receptor location, host genetics, and underlying disease state. Large‐scale in vitro work has elucidated interactions between host receptors and the microbial metabolome but there is a need for future research to integrate such preclinical findings with clinical studies. The development of novel, targeted therapeutic strategies for DGBI hinges on a deeper understanding of these metabolite‐host interactions, offering exciting possibilities for the future of treatment of DGBI. [ABSTRACT FROM AUTHOR]
Li, Xiaoyin, Hoogland, Aasha I., Small, Brent J., Crowder, Sylvia L., Gonzalez, Brian D., Oswald, Laura B., Sleight, Alix G., Nguyen, Nathalie, Lorona, Nicole C., Damerell, Victoria, Komrokji, Khaled R., Mooney, Kathi, Playdon, Mary C., Ulrich, Cornelia M., Li, Christopher I., Shibata, David, Toriola, Adetunji T., Ose, Jennifer, Peoples, Anita R., and Siegel, Erin M.
Subjects
FATIGUE (Physiology), CANCER fatigue, COLORECTAL cancer, CANCER diagnosis, PATIENT experience, REGRESSION analysis
Abstract
Aim: This study sought to identify groups of colorectal cancer patients based upon trajectories of fatigue and examine how demographic, clinical and behavioural risk factors differentiate these groups. Method: Patients were from six cancer centres in the United States and Germany. Fatigue was measured using the fatigue subscale of the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire (EORTC QLQ‐C30) at five time points (baseline/enrolment and 3, 6, 12 and 24 months after diagnosis). Piecewise growth mixture models identified latent trajectories of fatigue. Logistic regression models examined differences in demographic, clinical and behavioural characteristics between fatigue trajectory groups. Results: Among 1615 participants (57% men, 86% non‐Hispanic White, mean age 61 ± 13 years at diagnosis), three distinct groups were identified. In the high fatigue group (36%), fatigue significantly increased in the first 6 months after diagnosis and then showed statistically and clinically significant improvement from 6 to 24 months (P values < 0.01). Throughout the study period, average fatigue met or exceeded cutoffs for clinical significance. In the moderate (34%) and low (30%) fatigue groups, fatigue levels remained below or near population norms across the study period. Patients diagnosed with Stage II–IV disease and/or who were current smokers were more likely to be in the high fatigue group than in the moderate fatigue group (P values < 0.05). Conclusion: A large proportion of colorectal cancer patients experienced sustained fatigue after initiation of cancer treatment. Patients with high fatigue at the time of diagnosis may benefit from early supportive care. [ABSTRACT FROM AUTHOR]
Turnbull, Kurtis F., McNeil, Jeremy N., and Sinclair, Brent J.
Subjects
SOIL depth, INSECTS, ENERGY conservation, SPRING, HIGH temperatures
Abstract
Conserving energy through winter is important for the fitness of temperate insects. While insects can use buffered microhabitats, metabolic suppression or decreases in the thermal sensitivity of metabolic rate to override seasonal‐scale thermal trends, the relative importance of these strategies for limiting energy use by insects overwintering in soil remains underexplored. We used a combined laboratory, field and simulation approach to investigate the overwintering energetics of the western bean cutworm (Striacosta albicosta), a univoltine lepidopteran pest of dry beans and corn that overwinters underground as a dormant prepupa. We hypothesised that (1) the selection of thermally buffered microhabitats (i.e. deeper soil sites) reduces energy use in early autumn and late spring, and that (2) changes in the metabolic rate–temperature relationship reduce the impact of elevated temperatures on overwintering energy use. We provide evidence that during the warmest parts of winter, dormant S. albicosta prepupae that had burrowed deep benefited from a cool, stable microclimate, whereas those near the soil surface appeared to rely on deeper metabolic suppression to maintain their energy stores. Although elevated temperatures in the laboratory depleted their energy reserves, these strategies appear sufficient to limit energy drain under natural conditions in the field. We suggest that small‐scale variation in the depth of soil refuges may mediate the interaction between the risk of energy drain and changes in the metabolic rate–temperature relationship in soil‐overwintering insects. [ABSTRACT FROM AUTHOR]
Rentscher, Kelly E., Bethea, Traci N., Zhai, Wanting, Small, Brent J., Zhou, Xingtao, Ahles, Tim A., Ahn, Jaeil, Breen, Elizabeth C., Cohen, Harvey Jay, Extermann, Martine, Graham, Deena M. A., Jim, Heather S. L., McDonald, Brenna C., Nakamura, Zev M., Patel, Sunita K., Root, James C., Saykin, Andrew J., Van Dyk, Kathleen, Mandelblatt, Jeanne S., and Carroll, Judith E.
Subjects
CANCER survivors, BREAST cancer, EPIGENETICS, OLDER women, GERIATRIC assessment
Abstract
Background: Cancer and its treatments may accelerate aging in survivors; however, research has not examined epigenetic markers of aging in longer term breast cancer survivors. This study examined whether older breast cancer survivors showed greater epigenetic aging than noncancer controls and whether epigenetic aging related to functional outcomes. Methods: Nonmetastatic breast cancer survivors (n = 89) enrolled prior to systemic therapy and frequency‐matched controls (n = 101) ages 62 to 84 years provided two blood samples to derive epigenetic aging measures (Horvath, Extrinsic Epigenetic Age [EEA], PhenoAge, GrimAge, Dunedin Pace of Aging) and completed cognitive (Functional Assessment of Cancer Therapy‐Cognitive Function) and physical (Medical Outcomes Study Short Form‐12) function assessments at approximately 24 to 36 and 60 months after enrollment. Mixed‐effects models tested survivor‐control differences in epigenetic aging, adjusting for age and comorbidities; models for functional outcomes also adjusted for racial group, site, and cognitive reserve. Results: Survivors were 1.04 to 2.22 years biologically older than controls on Horvath, EEA, GrimAge, and DunedinPACE measures (p =.001–.04) at approximately 24 to 36 months after enrollment. Survivors exposed to chemotherapy were 1.97 to 2.71 years older (p =.001–.04), and among this group, an older EEA related to worse self‐reported cognition (p =.047) relative to controls. An older epigenetic age related to worse physical function in all women (p <.001–.01). Survivors and controls showed similar epigenetic aging over time, but Black survivors showed accelerated aging over time relative to non‐Hispanic White survivors. Conclusion: Older breast cancer survivors, particularly those exposed to chemotherapy, showed greater epigenetic aging than controls, which may relate to worse outcomes. If replicated, measurement of biological aging could complement geriatric assessments to guide cancer care for older women. Older breast cancer survivors were biologically older than matched noncancer controls across multiple epigenetic aging measures at 24 months or more after enrollment (which was presystemic therapy for survivors). Older breast cancer survivors who had received chemotherapy showed the greatest epigenetic aging, and among this group, an older epigenetic age was associated with worse self‐reported cognition relative to controls. [ABSTRACT FROM AUTHOR]
Mandelblatt, Jeanne S., Small, Brent J., Zhou, Xingtao, Nakamura, Zev M., Cohen, Harvey J., Ahles, Tim A., Ahn, Jaeil, Bethea, Traci N., Extermann, Martine, Graham, Deena, Isaacs, Claudine, Jacobsen, Paul B., Jim, Heather S. L., McDonald, Brenna C., Patel, Sunita K., Rentscher, Kelly E., Root, James C., Saykin, Andrew J., Tometich, Danielle B., and Van Dyk, Kathleen
Background: Immune activation/inflammation markers (immune markers) were tested to explain differences in neurocognition among older breast cancer survivors versus noncancer controls. Methods: Women >60 years old with primary breast cancer (stages 0–III) (n = 400) were assessed before systemic therapy with frequency‐matched controls (n = 329) and followed annually to 60 months; blood was collected during annual assessments from 2016 to 2020. Neurocognition was measured by tests of attention, processing speed, and executive function (APE). Plasma levels of interleukin‐6 (IL‐6), IL‐8, IL‐10, tumor necrosis factor α (TNF‐α), and interferon γ were determined using multiplex testing. Mixed linear models were used to compare immune marker levels between the survivor and control groups over time, controlling for age, racial/ethnic group, cognitive reserve, and study site. Covariate‐adjusted multilevel mediation analyses tested whether survivor/control group effects on cognition were explained by immune markers; secondary analyses examined the impact of additional covariates (e.g., comorbidity and obesity) on mediation effects. Results: Participants were aged 60–90 years (mean, 67.7 years). Most survivors had stage I (60.9%) estrogen receptor–positive tumors (87.6%). Survivors had significantly higher IL‐6 levels than controls before systemic therapy and at 12, 24, and 60 months (p ≤.001–.014), but there were no differences for other markers. Survivors had lower adjusted APE scores than controls (p <.05). Levels of IL‐6, IL‐10, and TNF‐α were related to APE, with IL‐6 explaining part of the relationship between survivor/control group and APE (p =.01). The magnitude of this mediation effect decreased but remained significant (p =.047) after the consideration of additional covariates. Conclusions: Older breast cancer survivors had worse long‐term neurocognitive performance than controls, and this relationship was explained in part by elevated IL‐6. The mechanisms that contribute to cognitive problems among cancer survivors remain unclear. This study found that one inflammatory/immune activation marker, interleukin‐6, mediated some of the relationship between older breast cancer survivors/noncancer control group and cognitive performance. [ABSTRACT FROM AUTHOR]
Objective: Subjective reports of cancer‐related cognitive impairment often far exceed the impairment documented using in‐person neuropsychological assessment. This study evaluated whether subjective cognition was associated with real‐time objective cognitive performance in daily life versus performance on an in‐person neuropsychological battery, as well as fatigue and depressed mood. Methods: Participants were 47 women (M age = 53.3 years) who completed adjuvant treatment for early‐stage breast cancer 6–36 months previously. During an in‐person assessment, participants completed a neuropsychological battery and questionnaires on subjective cognition, fatigue, and depressed mood. Over 14 days, participants responded to up to 5 prompts that assessed real‐time processing speed and memory and collected self‐reported ratings of depressed mood and fatigue. In the evenings, participants rated their subjective cognition that day and reported on memory lapses (e.g., forgetting a word). Results: During the in‐person assessment, participants who rated their cognition as worse reported worse depressed mood but did not exhibit poorer objective cognitive performance. Women with worse daily ratings of subjective cognition reported more daily fatigue but did not demonstrate worse real‐time objective cognition. Finally, women who reported memory lapses at the end of the day reported more fatigue and depressed mood, demonstrated better real‐time performance on processing speed (p = 0.001), and worse in‐person processing speed and visuospatial skills (p's ≤ 0.02). Conclusion: Subjective cognition was consistently associated with self‐reported fatigue and depressed mood. Specific memory lapses were related to in‐person and daily objective cognitive performance. This suggests that incorporating reports of memory lapses may help clinicians identify those with objectively measured cancer‐related cognitive impairment. [ABSTRACT FROM AUTHOR]
Chmielewski, Matthew W., Naya, Skyler, Borghi, Monica, Cortese, Jen, Fernie, Alisdair R., Swartz, Mark T., Zografou, Konstantina, Sewall, Brent J., and Spigler, Rachel B.
Variation in pollinator foraging behavior can influence pollination effectiveness, community diversity, and plant–pollinator network structure. Although effects of interspecific variation have been widely documented, studies of intraspecific variation in pollinator foraging are relatively rare. Sex‐specific differences in resource use are a strong potential source of intraspecific variation, especially in species where the phenology of males and females differs. Differences may arise from encountering different flowering communities, sex‐specific traits, nutritional requirements, or a combination of these factors. We evaluated sex‐specific foraging patterns in the eastern regal fritillary butterfly (Argynnis idalia idalia), leveraging a 21‐year floral visitation dataset. Because A. i. idalia is protandrous, we determined whether foraging differences were due to divergent phenology by comparing visitation patterns between the entire season and restricted periods of male–female overlap. We quantified nectar carbohydrate and amino acid contents of the most visited plant species and compared those visited more frequently by males versus females. We demonstrate significant differences in visitation patterns between male and female A. i. idalia over two decades. Females visit a greater diversity of species, while dissimilarity in foraging patterns between sexes is persistent and comparable to differences between species. While differences are diminished or absent in some years during periods of male–female overlap, remaining signatures of foraging dissimilarity during these periods implicate mechanisms other than phenology. Nectar of plants visited more by females had greater concentrations of total carbohydrates, glucose, and fructose, and of individual amino acids than male‐associated plants. Further work can test whether nutritional differences are a cause or a consequence of visitation patterns, reflecting seasonal shifts in the nutritional landscape encountered by male and female A. i. idalia. We highlight the importance of considering sex‐specific foraging patterns when studying interaction networks, and in making conservation management decisions for this at‐risk butterfly and other species exhibiting strong intraspecific variation. [ABSTRACT FROM AUTHOR]
Background: Data about patient‐reported outcomes (PROs) among patients with head and neck squamous cell carcinoma (HNSCC) treated with immune checkpoint inhibitors are sparse. Our exploratory study evaluated PROs in patients with HNSCC starting treatment with immune checkpoint inhibitor monotherapy or combination therapy with cetuximab. Methods: Patients were recruited prior to receipt of their first checkpoint inhibitor therapy infusion. Participants completed measures of checkpoint inhibitor toxicities and quality of life (QOL) at on‐treatment clinic visits. Results: Among patients treated with checkpoint inhibitor monotherapy (n = 48) or combination therapy (n = 38), toxicity increased over time (p < 0.05), while overall QOL improved from baseline to 12 weeks, with stable or declining QOL thereafter (p < 0.05). There were no group differences in change in toxicity index or QOL. Toxicity index scores were significantly higher in the combination group at 18–20 weeks and 6 months post‐initiation of immune checkpoint inhibitor therapy (p < 0.05). There were no significant group differences at baseline or at the 6–8 week (p = 0.13) or 3‐month (p = 0.09) evaluations. The combination group reported better emotional well‐being at baseline than the monotherapy group (p = 0.04). There were no other group differences in QOL at baseline or later timepoints. Conclusions: Despite increasing patient‐reported toxicity, checkpoint inhibitor monotherapy and combination therapy were associated with similar transient improvements, then worsening, of QOL in patients with HNSCC. [ABSTRACT FROM AUTHOR]
OATHS, DILEMMA, PROPERTY rights, SOCIAL & economic rights
Abstract
We investigate whether oaths can enforce property rights in a social dilemma and increase welfare. We examine the impact of mandatory and voluntary oaths in a laboratory experiment where individuals can produce wealth, protect accumulated wealth, and take wealth from others. Individuals are more productive when oaths are mandatory than in a no‐oath environment. Subjects who voluntarily sign an oath behave similarly to those who sign a mandatory oath. When the oath is voluntary, nonoath‐taking individuals engage in nonproductive behavior, negating the positive impact of the voluntary oath. Our results show that altering commitment mechanisms can result in varying welfare levels. [ABSTRACT FROM AUTHOR]
Nguyen, Jennifer, Doolan, Brent J, Pan, Yan, Vestergaard, Tine, Paul, Eldho, McLean, Catriona, Haskett, Martin, Kelly, John, Mar, Victoria, and Chamberlain, Alexander
Subjects
*MELANOMA, *DERMOSCOPY, *BODY image, *PHOTOGRAPHY, *DECISION making
Abstract
Background/Objectives: Sequential digital dermoscopic imaging (SDDI) and total body photography (TBP) are recommended as a two‐step surveillance method for individuals at high risk of developing cutaneous melanoma. Dermoscopic features specific to melanoma have been well described; however, dynamic changes on serial imaging are less understood. This study aims to identify and compare dermoscopic features in developing melanomas and benign naevi that underwent SDDI and TBP to understand which dermoscopic features may be associated with a malignant change. Method: Histopathology reports from a private specialist dermatology clinic from January 2007 to December 2019 were reviewed. Histopathologically confirmed melanomas and benign naevi that underwent SDDI and TBP with a minimum follow‐up interval of 3 months were included. Results: Eighty‐nine melanomas (38.2% invasive, median Breslow thickness 0.35 mm, range: 0.2–1.45 mm) and 48 benign naevi were evaluated by three experienced dermatologists for dermoscopic changes. Features most strongly associated with melanoma included the development of neovascularisation, asymmetry and growth in pigment network, additional colours, shiny white structures, regression, structureless areas and change to a multi‐component pattern. The presence of atypical vessels (p = 0.02) and shiny white structures (p = 0.02) was significantly associated with invasive melanoma. Conclusion: Evaluation of certain evolving dermoscopic features in melanocytic lesions monitored by SDDI and TBP is efficient in assisting clinical decision making. SDDI with TBP is an effective tool for early detection of melanoma. [ABSTRACT FROM AUTHOR]
Barata, Anna, Hoogland, Aasha I., Small, Brent J., Acevedo, Karina I., Antoni, Michael H., Gonzalez, Brian D., Jacobsen, Paul B., Lechner, Suzanne C., Tyson, Dinorah Martinez, Meade, Cathy D., Rodriguez, Yvelise, Salsman, John M., Sherman, Allen C., Sutton, Steven K., and Jim, Heather S. L.
Subjects
HISPANIC American women, WELL-being, QUALITY of life, CANCER patients, CANCER diagnosis
Abstract
Objective: Previous studies have examined whether spiritual well‐being is associated with cancer outcomes, but minority populations are under‐represented. This study examines associations of baseline spiritual well‐being and change in spiritual well‐being with change in distress and quality of life, and explores potential factors associated with changes in spiritual well‐being among Hispanic women undergoing chemotherapy. Methods: Participants completed measures examining spiritual well‐being, distress, and quality of life prior to beginning chemotherapy and at weeks 7 and 13. Participants' acculturation and sociodemographic data were collected prior to treatment. Mixed models were used to examine the association of baseline spiritual well‐being and change in spiritual well‐being during treatment with change in distress and quality of life, and to explore whether sociodemographic factors, acculturation and clinical variables were associated with change in spiritual well‐being. Results: A total of 242 participants provided data. Greater baseline spiritual well‐being was associated with less concurrent distress and better quality of life (p < 0.001), as well as with greater emotional and functional well‐being over time (p values < 0.01). Increases in spiritual well‐being were associated with improved social well‐being during treatment, whereas decreases in spiritual well‐being were associated with worsened social well‐being (p < 0.01). Married participants reported greater spiritual well‐being at baseline relative to non‐married participants (p < 0.001). Conclusions: Greater spiritual well‐being is associated with less concurrent distress and better quality of life, as well as with greater emotional, functional, and social well‐being over time among Hispanic women undergoing chemotherapy. Future work could include developing culturally targeted spiritual interventions to improve survivors' well‐being. [ABSTRACT FROM AUTHOR]
Combined heavy‐ and light‐load ballistic training is often employed in high‐performance sport to improve athletic performance and is accompanied by adaptations in muscle architecture. However, little is known about how training affects muscle‐tendon unit (MTU) kinematics during the execution of a sport‐specific skill (e.g., jumping), which could improve our understanding of how training improves athletic performance. The aim of this study was to investigate vastus lateralis (VL) MTU kinematics during a countermovement jump (CMJ) following combined ballistic training. Eighteen young, healthy males completed a 10‐week program consisting of weightlifting derivatives, plyometrics, and ballistic tasks under a range of loads. Ultrasonography of VL and force plate measurements during a CMJ were taken at baseline, mid‐test, and post‐test. The training program improved CMJ height by 11 ± 13%. During the CMJ, VL's MTU and series elastic element (SEE) length changes and velocities increased from baseline to post‐test, but VL's fascicle length change and velocity did not significantly change. It is speculated that altered lower limb coordination and increased force output of the lower limb muscles during the CMJ allowed more energy to be stored within VL's SEE. This may have contributed to enhanced VL MTU work during the propulsion phase and an improved CMJ performance following combined ballistic training. [ABSTRACT FROM AUTHOR]
Pereira, Stacey, Muñoz, Katrina A., Small, Brent J., Soda, Takahiro, Torgerson, Laura N., Sanchez, Clarissa E., Austin, Jehannine, Storch, Eric A., and Lázaro‐Muñoz, Gabriel
Bethea, Traci N., Zhai, Wanting, Zhou, Xingtao, Ahles, Tim A., Ahn, Jaeil, Cohen, Harvey J., Dilawari, Asma A., Graham, Deena M. A., Jim, Heather S. L., McDonald, Brenna C., Nakamura, Zev M., Patel, Sunita K., Rentscher, Kelly E., Root, James, Saykin, Andrew J., Small, Brent J., Van Dyk, Kathleen M., Mandelblatt, Jeanne S., and Carroll, Judith E.
Subjects
SLEEP interruptions, OLDER women, BREAST cancer, COVID-19 pandemic, MENTAL depression, WOMEN'S mental health
Abstract
Purpose: Several studies have reported sleep disturbances during the COVID‐19 pandemic. Little data exist about the impact of the pandemic on sleep and mental health among older women with breast cancer. We sought to examine whether women with and without breast cancer who experienced new sleep problems during the pandemic had worsening depression and anxiety. Methods: Breast cancer survivors aged ≥60 years with a history of nonmetastatic breast cancer (n = 242) and frequency‐matched noncancer controls (n = 158) active in a longitudinal cohort study completed a COVID‐19 pandemic survey from May to September 2020 (response rate 83%). Incident sleep disturbance was measured using the restless sleep item from the Center for Epidemiological Studies‐Depression Scale (CES‐D). CES‐D score (minus the sleep item) captured depressive symptoms; the State‐Anxiety subscale of the State‐Trait Anxiety Inventory measured anxiety symptoms. Multivariable linear regression models examined how the development of sleep disturbance affected changes in depressive or anxiety symptoms from the most recent prepandemic survey to the pandemic survey, controlling for covariates. Results: The prevalence of sleep disturbance during the pandemic was 22.3%, with incident sleep disturbance in 10% and 13.5% of survivors and controls, respectively. Depressive and anxiety symptoms significantly increased during the pandemic among women with incident sleep disturbance (vs. no disturbance) (β = 8.16, p < 0.01 and β = 6.14, p < 0.01, respectively), but there were no survivor‐control differences in the effect. Conclusion: Development of sleep disturbances during the COVID‐19 pandemic may negatively affect older women's mental health, but breast cancer survivors diagnosed with nonmetastatic disease had experiences similar to those of women without cancer. [ABSTRACT FROM AUTHOR]
Meehan, Matthew L., Turnbull, Kurtis F., Sinclair, Brent J., and Lindo, Zoë
Subjects
PREDATORY mite, PREDATORY animals, BODY size, FOOD chains, PREDATION, ENERGY consumption, ECOSYSTEMS
Abstract
Climate warming may alter predator–prey interactions and predator feeding behaviour due to increased metabolic demands. How predators meet these increased demands may depend on trade‐offs in prey energy content and body size, handling time and other functional constraints. We tested hypotheses associated with these trade‐offs with the predatory mite Stratiolaelaps scimitus and three prey that differed in body size, energy content, and defenses (Folsomia candida, Oppia nitens, and Carpoglyphus lactis). We estimated metabolic rate, predation in choice and no choice feeding trials, movement rate, and lipid and protein content for all four species at 16°C and 24°C. We used these data to estimate the predator's energy demands and compared these to estimated energy intake in the choice feeding trials. Predators had greater metabolic demands at 24°C than at 16°C, but temperature did not affect predator or prey movement rates. Warming decreased lipid content, but not protein content, of all three prey species, leading to lower energy content for C. lactis and O. nitens, but not F. candida. In both feeding trials at 24°C, predators increased their feeding on the smaller, energy‐poor C. lactis, but not the larger, energy‐rich F. candida, resulting in lower estimated energy intake. S. scimitus did not feed on O. nitens at either temperature. Predators increasingly fed on small‐bodied prey under warming and not the large‐bodied prey despite the potential for greater energetic gains from larger prey. We posit that predators minimized energy lost during feeding through lower handling costs associated with C. lactis, rather than maximizing energy gain. We conclude that selection of prey based on body size changes with temperature as a trade‐off for predators to balance increased metabolic demands. As predators provide top‐down control and regulate energy flow through the consumption of their prey, changes to predator feeding behaviour with climate warming may affect food web dynamics and ecosystem‐level processes. [ABSTRACT FROM AUTHOR]
Kaur, Har Simrat, Chen, Jessica Szu‐Chia, Doolan, Brent J., and Gupta, Monisha
Subjects
VITILIGO, PHOTOTHERAPY, DERMATOLOGISTS
Abstract
There is now a resource available to Australian dermatologists and practices to ensure efficacy and consistency of NBUVB phototherapy delivery. The aim of this study was to develop a consensus approach for the use of NBUVB phototherapy in the management of psoriasis, eczema and vitiligo. Phototherapy is the use of ultraviolet light to treat various skin dermatoses [1]. In particular, the use of narrowband UVB (NBUVB) phototherapy has been demonstrated as an effective treatment for several cutaneous diseases including psoriasis, eczema and vitiligo [1]. Due to the efficacy, safety and cost-effectiveness of NBUVB phototherapy, there has been an international directive to establish consensus-based guidelines for its use, as evidenced in the American [2], European [3] and global [4] working groups. To date, there has been no Australian consensus statement on phototherapy recommendations. NBUVB clinic-based phototherapy should be delivered to the patient by a doctor, nurse or receptionist, not by the patient themselves.
The monochromatic excimer light therapy (308‐nm excimer laser and lamp) is used to treat focal dermatoses with inflammation or hypopigmentation. In Australia, despite excimer light therapy being a proven effective treatment for many cutaneous conditions, barriers such as access and affordability provide considerable limitations to patients. This study aims to retrospectively evaluate the different applications of excimer light therapy in treating dermatologic conditions within the Australian setting and provide practical information for its use. [ABSTRACT FROM AUTHOR]
Turner, Gregory G., Sewall, Brent J., Scafini, Michael R., Lilley, Thomas M., Bitz, Daniel, and Johnson, Joseph S.
Subjects
*LITTLE brown bat, *BATS, *HABITAT selection, *VAPOR pressure, *UNDERGROUND areas, *WHITE-nose syndrome
Abstract
White‐nose syndrome (WNS) is a fungal disease that has caused precipitous declines in several North American bat species, creating an urgent need for conservation. We examined how microclimates and other characteristics of hibernacula have affected bat populations following WNS‐associated declines and evaluated whether cooling of warm, little‐used hibernacula could benefit bats. During the period following mass mortality (2013–2020), we conducted 191 winter surveys of 25 unmanipulated hibernacula and 6 manipulated hibernacula across Pennsylvania (USA). We joined these data with additional datasets on historical (pre‐WNS) bat counts and on the spatial distribution of underground sites. We used generalized linear mixed models and model selection to identify factors affecting bat populations. Winter counts of Myotis lucifugus were higher and increased over time in colder hibernacula (those with midwinter temperatures of 3–6 °C) compared with warmer (7–11 °C) hibernacula. Counts of Eptesicus fuscus, Myotis leibii, and Myotis septentrionalis were likewise higher in colder hibernacula (temperature effects = –0.73 [SE 0.15], –0.51 [0.18], and –0.97 [0.28], respectively). Populations of M. lucifugus and M. septentrionalis increased most over time in hibernacula surrounded by more nearby sites, whereas Eptesicus fuscus counts remained high where they had been high before WNS onset (pre‐WNS high count effect = 0.59 [0.22]). Winter counts of M. leibii were higher in hibernacula with high vapor pressure deficits (VPDs) (particularly over 0.1 kPa) compared with sites with lower VPDs (VPD effect = 15.3 [4.6]). Counts of M. lucifugus and E. fuscus also appeared higher where VPD was higher. In contrast, Perimyotis subflavus counts increased over time in relatively warm hibernacula and were unaffected by VPD. Where we manipulated hibernacula, we achieved cooling of on average 2.1 °C. At manipulated hibernacula, counts of M. lucifugus and P. subflavus increased over time (years since manipulation effect = 0.70 [0.28] and 0.51 [0.15], respectively). Further, there were more E. fuscus where cooling was greatest (temperature difference effect = –0.46 [SE 0.11]), and there was some evidence there were more P. subflavus in hibernacula sections that remained warm after manipulation. These data show bats are responding effectively to WNS through habitat selection. In M. lucifugus, M. septentrionalis, and possibly P. subflavus, this response is ongoing, with bats increasingly aggregating at suitable hibernacula, whereas E. fuscus remain in previously favored sites. Our results suggest that cooling warm sites receiving little use by bats is a viable strategy for combating WNS. [ABSTRACT FROM AUTHOR]
Costello, David M., Tiegs, Scott D., Boyero, Luz, Canhoto, Cristina, Capps, Krista A., Danger, Michael, Frost, Paul C., Gessner, Mark O., Griffiths, Natalie A., Halvorson, Halvor M., Kuehn, Kevin A., Marcarelli, Amy M., Royer, Todd V., Mathie, Devan M., Albariño, Ricardo J., Arango, Clay P., Aroviita, Jukka, Baxter, Colden V., Bellinger, Brent J., and Bruder, Andreas
Microbes play a critical role in plant litter decomposition and influence the fate of carbon in rivers and riparian zones. When decomposing low-nutrient plant litter, microbes acquire nitrogen (N) and phosphorus (P) from the environment (i.e., nutrient immobilization), and this process is potentially sensitive to nutrient loading and changing climate. Nonetheless, environmental controls on immobilization are poorly understood because rates are also influenced by plant litter chemistry, which is coupled to the same environmental factors. Here we used a standardized, low-nutrient organic matter substrate (cotton strips) to quantify nutrient immobilization at 100 paired stream and riparian sites representing 11 biomes worldwide. Immobilization rates varied by three orders of magnitude, were greater in rivers than riparian zones, and were strongly correlated to decomposition rates. In rivers, P immobilization rates were controlled by surface water phosphate concentrations, but N immobilization rates were not related to inorganic N. The N:P of immobilized nutrients was tightly constrained to a molar ratio of 10:1 despite wide variation in surface water N:P. Immobilization rates were temperature-dependent in riparian zones but not related to temperature in rivers. However, in rivers nutrient supply ultimately controlled whether microbes could achieve the maximum expected decomposition rate at a given temperature. Collectively, we demonstrated that exogenous nutrient supply and immobilization are critical control points for decomposition of organic matter. [ABSTRACT FROM AUTHOR]
Al‐Ali, Abdulwadood A., Elwakil, Ahmed S., Maundy, Brent J., Allagui, Anis, and Elamien, Mohamed B.
Subjects
FREQUENCY synthesizers, HILBERT transform, TELECOMMUNICATION systems, PHASE noise, POWER density, HILBERT-Huang transform
Abstract
Summary: Measuring phase noise in oscillators is crucial in communication systems, vibration analysis, and frequency synthesizers. Traditionally, this measurement is done in the frequency domain by estimating the ratio of the power density at an offset frequency from the carrier to the power of the carrier signal. This approach is hardware intensive and dependent on the offset frequency, for which there exists no standard. Here, we propose an alternative method to quantify phase noise in the time domain, in the form of a root‐mean‐square true phase angle deviation. This is done using a self‐created reference signal and by applying the Hilbert transform to generate a complex analytic signal from the real time‐domain data sequence. This enables the computation of instantaneous metrics such as phase noise. Simulations and experimental results are provided to validate the proposed technique. In addition, its relationship to the origin of phase noise is mathematically explained. [ABSTRACT FROM AUTHOR]
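The time‐domain approach summarized above (an analytic signal from the Hilbert transform, a self‐created reference, and a root‐mean‐square phase deviation) can be sketched roughly as follows. This is a simplified illustration of the general idea rather than the authors' implementation, and all signal parameters are made up.

```python
# Rough sketch of a time-domain RMS phase-deviation estimate via the Hilbert
# transform. Illustration only; not the authors' implementation. All signal
# parameters below are made up.
import numpy as np
from scipy.signal import hilbert

fs = 1.0e6                         # sample rate (Hz), assumed
f0 = 10.0e3                        # nominal carrier frequency (Hz), assumed
t = np.arange(0, 0.05, 1.0 / fs)

# Simulated oscillator output: carrier plus a small random-walk phase perturbation.
phase_noise = np.cumsum(np.random.normal(0.0, 1e-3, t.size))
x = np.cos(2 * np.pi * f0 * t + phase_noise)

# Analytic signal -> unwrapped instantaneous phase of the measured oscillator.
inst_phase = np.unwrap(np.angle(hilbert(x)))

# Self-created reference: a constant-frequency phase ramp fitted to the data,
# so only deviations from the ideal carrier remain.
reference = np.polyval(np.polyfit(t, inst_phase, 1), t)
phase_dev = inst_phase - reference

rms_phase_dev = np.sqrt(np.mean(phase_dev ** 2))  # radians
print(f"RMS phase deviation: {rms_phase_dev:.2e} rad")
```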
Summary: Experimental studies have shown that sleep deprivation may lead to worse performance on cognitive tests. However, few studies have considered how sleep is associated with perceived cognitive performance in the daily lives of hospital nurses, who require high cognitive abilities to deliver high‐quality patient care. The current study examined the relationship between sleep and subjective cognition in nurses, and whether the relationship differed by work shift and workdays. Sixty inpatient nurses working full‐time (M age = 35 years; 39 day‐shift nurses, 21 night‐shift nurses) reported their sleep characteristics and daily subjective cognition using ecological momentary assessment for 14 days. Concurrently, objective sleep characteristics were measured with a sleep actigraphy device for 14 days. Using multilevel modelling, results indicated that at the within‐person and between‐person levels, better sleep quality and higher sleep sufficiency were associated with better subjective cognition at the daily level and on average. Moderation analyses indicated that at the within‐person level, better sleep quality and longer time in bed were associated with better next‐day cognition; these associations were stronger for night‐shift nurses compared with day‐shift nurses. At the between‐person level, better sleep quality and higher sleep sufficiency were also associated with better subjective cognition overall; these associations were significant for day‐shift nurses, but not for night‐shift nurses. The sleep–subjective cognition relationships were more apparent on workdays versus non‐workdays. Findings suggest that sufficient sleep recovery is important for nurses' reports of daily and overall cognitive functioning. Night‐shift nurses' subjective cognitive abilities may be more protected on days following better sleep quality and more sufficient sleep. [ABSTRACT FROM AUTHOR]
Rentscher, Kelly E., Zhou, Xingtao, Small, Brent J., Cohen, Harvey J., Dilawari, Asma A., Patel, Sunita K., Bethea, Traci N., Van Dyk, Kathleen M., Nakamura, Zev M., Ahn, Jaeil, Zhai, Wanting, Ahles, Tim A., Jim, Heather S. L., McDonald, Brenna C., Saykin, Andrew J., Root, James C., Graham, Deena M. A., Carroll, Judith E., and Mandelblatt, Jeanne S.
Subjects
LONELINESS, COVID-19 pandemic, BREAST cancer, COVID-19, MENTAL health, CANCER survivors
Abstract
Background: The coronavirus disease 2019 (COVID‐19) pandemic has had wide‐ranging health effects and increased isolation. Older patients with cancer might be especially vulnerable to loneliness and poor mental health during the pandemic. Methods: The authors included active participants enrolled in the longitudinal Thinking and Living With Cancer study of nonmetastatic breast cancer survivors aged 60 to 89 years (n = 262) and matched controls (n = 165) from 5 US regions. Participants completed questionnaires at parent study enrollment and then annually, including a web‐based or telephone COVID‐19 survey, between May 27 and September 11, 2020. Mixed‐effects models were used to examine changes in loneliness (a single item on the Center for Epidemiologic Studies–Depression [CES‐D] scale) from before to during the pandemic in survivors versus controls and to test survivor‐control differences in the associations between changes in loneliness and changes in mental health, including depression (CES‐D, excluding the loneliness item), anxiety (the State‐Trait Anxiety Inventory), and perceived stress (the Perceived Stress Scale). Models were adjusted for age, race, county COVID‐19 death rates, and time between assessments. Results: Loneliness increased from before to during the pandemic (0.211; P =.001), with no survivor‐control differences. Increased loneliness was associated with worsening depression (3.958; P <.001) and anxiety (3.242; P <.001) symptoms and higher stress (1.172; P <.001) during the pandemic, also with no survivor‐control differences. Conclusions: Cancer survivors reported changes in loneliness and mental health similar to those reported by women without cancer. However, both groups reported increased loneliness from before to during the pandemic that was related to worsening mental health, suggesting that screening for loneliness during medical care interactions will be important for identifying all older women at risk for adverse mental health effects of the pandemic. Older breast cancer survivors and matched noncancer controls experienced similar increases in loneliness from before to during the COVID‐19 pandemic. Women who reported increased loneliness also experienced worsening depression and anxiety symptoms and higher stress during the pandemic. [ABSTRACT FROM AUTHOR]
Barata, Anna, Hoogland, Aasha I., Hyland, Kelly A., Otto, Amy K., Kommalapati, Anuhya, Jayani, Reena V., Irizarry‐Arroyo, Nathaly, Collier, Aaron, Rodriguez, Yvelise, Welniak, Taylor L., Booth‐Jones, Margaret, Logue, Jennifer, Small, Brent J., Jain, Michael D., Reblin, Maija, Locke, Frederick L., and Jim, Heather S. L.
Subjects
CAREGIVERS, CHIMERIC antigen receptors, SERVICES for caregivers, BURDEN of care, PHYSICAL mobility, QUALITY of life
Abstract
Objective: Informal family caregivers provide critical support for patients receiving chimeric antigen receptor (CAR) T‐cell therapy. However, caregivers' experiences are largely unstudied. This study examined quality of life (QOL; physical functioning, pain, fatigue, anxiety, and depression), caregiving burden, and treatment‐related distress in caregivers in the first 6 months after CAR T‐cell therapy, when caregivers were expected to be most involved in providing care. Relationships between patients' clinical course and caregiver outcomes were also explored. Methods: Caregivers completed measures examining QOL and burden before patients' CAR T‐cell therapy and at days 90 and 180. Treatment‐related distress was assessed at days 90 and 180. Patients' clinical variables were extracted from medical charts. Change in outcomes was assessed using means and 99% confidence intervals. Association of change in outcomes with patient clinical variables was assessed with backward elimination analysis. Results: A total of 99 caregivers (mean age 59, 73% female) provided data. Regarding QOL, pain was significantly higher than population norms at baseline but improved by day 180 (p <.01). Conversely, anxiety worsened over time (p <.01). Caregiver burden and treatment‐related distress did not change over time. Worsening caregiver depression by day 180 was associated with lower patient baseline performance status (p <.01). Worse caregiver treatment‐related distress at day 180 was associated with lower performance status, intensive care unit admission, and lack of disease response at day 90 (ps < 0.01). Conclusions: Some CAR T‐cell therapy caregivers experience pain, anxiety, and burden, which may be associated with patients' health status. Further research is warranted regarding the experience of CAR T‐cell therapy caregivers. [ABSTRACT FROM AUTHOR]
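The backward elimination analysis mentioned in the Methods iteratively drops the weakest candidate predictor until only predictors meeting a significance threshold remain. The sketch below shows that generic procedure with ordinary least squares in statsmodels; the file name, variable names, and the 0.05 threshold are illustrative assumptions, not details taken from the study.

```python
# Generic backward-elimination sketch using OLS. File name, variable names,
# and the 0.05 retention threshold are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(df, outcome, candidates, alpha=0.05):
    """Repeatedly drop the predictor with the largest p-value until all
    remaining predictors have p < alpha (or none remain)."""
    kept = list(candidates)
    while kept:
        X = sm.add_constant(df[kept])
        fit = sm.OLS(df[outcome], X).fit()
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return fit, kept
        kept.remove(worst)
    return None, kept

df = pd.read_csv("caregiver_outcomes.csv")  # hypothetical data file
fit, predictors = backward_eliminate(
    df,
    outcome="change_in_distress",
    candidates=["performance_status", "icu_admission", "disease_response_day90"],
)
if fit is not None:
    print(predictors)
    print(fit.summary())
```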
The hypothesis that biotic interactions strengthen toward lower latitudes provides a framework for linking community‐scale processes with the macroecological scales that define our biosphere. Despite the importance of this hypothesis for understanding community assembly and ecosystem functioning, the extent to which interaction strength varies across latitude and the effects of this variation on natural communities remain unresolved. Predation in particular is central to ecological and evolutionary dynamics across the globe, yet very few studies explore both community‐scale causes and outcomes of predation across latitude. Here we expand beyond prior studies to examine two important components of predation strength: intensity of predation (including multiple dimensions of the predator guild) and impact on prey community biomass and structure, providing one of the most comprehensive examinations of predator–prey interactions across latitude. Using standardized experiments, we tested the hypothesis that predation intensity and impact on prey communities were stronger at lower latitudes. We further assessed prey recruitment to evaluate the potential for this process to mediate predation effects. We used sessile marine invertebrate communities and their fish predators in nearshore environments as a model system, with experiments conducted at 12 sites in four regions spanning the tropics to the subarctic. Our results show clear support for an increase in both predation intensity and impact at lower relative to higher latitudes. The predator guild was more diverse at low latitudes, with higher predation rates, longer interaction durations, and larger predator body sizes, suggesting stronger predation intensity in the tropics. Predation also reduced prey biomass and altered prey composition at low latitudes, with no effects at high latitudes. Although recruitment rates were up to three orders of magnitude higher in the tropics than the subarctic, prey replacement through this process was insufficient to dampen completely the strong impacts of predators in the tropics. Our study provides a novel perspective on the biotic interaction hypothesis, suggesting that multiple components of the predator community likely contribute to predation intensity at low latitudes, with important consequences for the structure of prey communities. [ABSTRACT FROM AUTHOR]
Pérez‐Guzmán, Lumarie, Phillips, Lori A., Seuradge, Brent J., Agomoh, Ikechukwu, Drury, Craig F., and Acosta‐Martínez, Verónica
Abstract
The soil microbial community (SMC) and soil organic matter (SOM) are inherently related and are sensitive to land‐use changes. Microorganisms regulate essential soil functions that are key to SOM dynamics, whereas SOM dynamics define the SMC. To expand our understanding of soil health, we evaluated biological and SOM indicators in long‐term (18‐yr) continuous silage corn (Zea mays L.), continuous soybean [Glycine max (L.) Merr.], and perennial grass ecosystems in Ontario, Canada. The SMC was evaluated via ester‐linked fatty acid methyl ester (EL‐FAME) and amplicon sequencing. Soil organic matter was evaluated via a new combined enzyme assay that provides a single biogeochemical cycling value for C, N, P, and S cycling activity (CNPS), as well as loss‐on‐ignition, permanganate oxidizable C (POXC), and total C and N. Overall, soil health indicators followed the trend of grasses > corn > soybean. Grass systems had up to 8.1 times more arbuscular mycorrhizal fungi, increased fungal/bacteria ratios (via EL‐FAME), and higher microbial diversity (via sequencing). The POXC was highly variable within treatments and did not significantly differ between systems. The novel CNPS activity assay, however, was highly sensitive to management (up to 2.2 and 3.2 times higher under grasses than corn and soybean, respectively) and was positively correlated (ρ > .92) to SOM, total C, and total N. Following the "more is better" model, where higher values of the measured parameters indicate a healthier soil, our study showed decreased soil health under monocultures, especially soybean, and highlighted the need to implement sustainable agriculture practices that maintain soil health. Core Ideas: Soil biological health indicators were significantly higher under grasses than annual crops. Continuous soybean had the lowest microbial abundance, diversity, and activity. CNPS activity was responsive to management and correlated with TC and organic matter. [ABSTRACT FROM AUTHOR]
Reactive oxygen species (ROS) are an important contributor to adverse health effects associated with ambient air pollution. Despite infiltration of ROS from outdoors, and possible indoor sources (eg, combustion), there are limited data available on indoor ROS. In this study, part of the second phase of the Air Composition and Reactivity from Outdoor aNd Indoor Mixing campaign (ACRONIM‐2), we constructed and deployed an online, continuous system to measure extracellular gas‐ and particle‐phase ROS during summer in an unoccupied residence in St. Louis, MO, USA. Over a period of one week, we observed that the non‐denuded outdoor ROS (representing particle‐phase ROS and some gas‐phase ROS) concentration ranged from 1 to 4 nmol/m³ (as H₂O₂). Outdoor concentrations were highest in the afternoon, coincident with peak photochemistry periods. The indoor concentrations of particle‐phase ROS were nearly equal to outdoor concentrations, regardless of window‐opening status or air exchange rates. The indoor/outdoor (I/O) ratio of non‐denuded ROS was significantly less than 1 with windows open and even lower with windows closed. Combined, these observations suggest that gas‐phase ROS are efficiently removed by interior building surfaces and that there may be an indoor source of particle‐phase ROS. [ABSTRACT FROM AUTHOR]
Hoogland, Aasha I., Jim, Heather S. L., Gonzalez, Brian D., Small, Brent J., Gilvary, Danielle, Breen, Elizabeth C., Bower, Julienne E., Fishman, Mayer, Zachariah, Babu, and Jacobsen, Paul B.
Background: Increases in fatigue, depressive symptomatology, and cognitive impairment are common after the initiation of androgen deprivation therapy (ADT) for prostate cancer. To date, no studies have examined the potential role of inflammation in the development of these symptoms in ADT recipients. The goal of the current study was to examine circulating markers of inflammation as potential mediators of change in fatigue, depressive symptomatology, and cognitive impairment related to the receipt of ADT. Methods: Patients treated with ADT for prostate cancer (ADT+; n = 47) were assessed around the time of the initiation of ADT and 6 and 12 months later. An age‐ and education‐matched group of men without a history of cancer (CA–; n = 82) was assessed at comparable time points. Fatigue, depressive symptomatology, and cognitive impairment were assessed with the Fatigue Symptom Inventory, the Center for Epidemiological Studies Depression Scale, and a battery of neuropsychological tests, respectively. Circulating markers of inflammation included interleukin 1 receptor antagonist (IL‐1RA), interleukin 6 (IL‐6), soluble tumor necrosis factor receptor II (sTNF‐RII), and C‐reactive protein (CRP). Results: Fatigue, depressive symptomatology, and serum IL‐6 increased significantly over time in the ADT+ group versus the CA– group; rates of cognitive impairment also changed significantly between the groups. No significant changes in IL‐1RA, sTNF‐RII, or CRP over time were detected. Treatment‐related increases in IL‐6 were associated with worsening fatigue but not depressive symptomatology or cognitive impairment. Conclusions: Results of this preliminary study suggest that increases in circulating IL‐6, perhaps due to testosterone inhibition, may play a role in fatigue secondary to receipt of ADT. Additional research is needed to determine whether interventions to reduce circulating inflammation improve fatigue in this population. Androgen deprivation therapy is associated with increases over time in fatigue, depressive symptomatology, and circulating interleukin 6. Increases in interleukin 6 over time are associated with worsening treatment‐related fatigue but not depressive symptomatology or cognitive impairment. [ABSTRACT FROM AUTHOR]
Keywords: cave bats; climate adaptation; climate change; disease management; hibernacula microclimate; Pseudogymnoascus destructans; subterranean communities; white-nose syndrome
Climate change has become increasingly evident globally, especially in more northerly regions, with warming trends virtually certain to continue over the coming decades (IPCC, 2021). Current efforts and tools for climate adaptation therefore remain far too limited to conserve the diversity of species and communities that will be harmed by climate change. [Extracted from the article]
Doolan, Brent J, Koye, Digsu, Ling, Joanna, Cains, Geoffrey D, Baker, Christopher, Foley, Peter, and Dolianitis, Con
Subjects
PSORIASIS, PSORIASIS treatment, BIOLOGICALS, DRUG side effects, AUSTRALASIANS
Abstract
Background: Psoriasis is a chronic inflammatory disease affecting ~2–3% of the Australasian population. Therapeutic options include topical agents, phototherapy, systemic immunomodulators and biologic agents. Biologics present an acceptable short‐ and medium‐term safety profile, derived mainly from randomised controlled trials (RCTs); however, trial data may not represent real‐world rates of adverse events (AEs). Methods: A retrospective, observational study of patients enrolled in The Australasian Psoriasis Registry from April 2008 to October 2018 was conducted. Data were collected from 104 sites in Australia and New Zealand. Patient characteristics, treatments and AE data were collected. AEs were classified by MedDRA System events. Results: 2094 patients were included (3765 patient‐treatments), comprising 1110 phototherapy, 1280 systemic and 1375 biologic therapy patient‐treatments. Treatment arms were not mutually exclusive. The mean ± SD time from date of diagnosis of psoriasis to commencement of biologic therapy was 8.9 ± 12.3 years. Methotrexate had the longest exposure time (3740.3 patient‐years), and ustekinumab had the longest median (95% CI) time on treatment, 4.3 years (2.2, 6.6). Differences in AEs on biologic treatment were present between patients who would have been eligible or ineligible for RCTs. Approximately 29% of registry patients would have been excluded from clinical trial enrolment. Patients ineligible for RCTs had increased adjusted hazard ratios (95% CI) of: infections and infestations (2.3, 1.7–3.1; P < 0.001), cardiac (8.2, 3.5–25.6; P < 0.001), gastrointestinal (3.5, 1.52–8.0; P < 0.001), hepatobiliary (5.6, 1.7–19.1; P < 0.001), psychiatric (4.7, 1.5–14.1; P = 0.006) and eye disorders (4.8, 1.5–15.6; P = 0.008), compared to those eligible for RCTs. Incidence rates in the trial‐eligible patients were similar to those reported from RCTs. Conclusions: This study establishes treatment modalities in use for severe psoriasis and the clinical rates of AEs associated with biologic therapy. [ABSTRACT FROM AUTHOR]
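As an illustration of how adjusted hazard ratios of the kind reported above are typically derived, the sketch below fits a Cox proportional hazards model with the lifelines library, comparing RCT-ineligible with RCT-eligible patients while adjusting for age. The toy data frame and its column names are assumptions for demonstration only, not registry variables.

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical data: one row per patient-treatment on a biologic.
# time_to_ae: years to first adverse event (or censoring); ae_observed: 1 = AE occurred, 0 = censored;
# rct_ineligible: 1 = would have been excluded from RCTs.
df = pd.DataFrame({
    "time_to_ae":     [0.5, 2.1, 1.3, 3.0, 0.8, 2.7, 1.9, 0.4],
    "ae_observed":    [1,   0,   1,   1,   0,   0,   1,   1],
    "rct_ineligible": [1,   0,   1,   0,   1,   0,   1,   0],
    "age":            [45,  52,  61,  38,  57,  49,  66,  43],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_ae", event_col="ae_observed")
cph.print_summary()   # exp(coef) for rct_ineligible is the age-adjusted hazard ratio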
Bellinger, Brent J., Cook, Mark I., Hagerthey, Scot E., Newman, Susan, and Kobza, Robert M.
Abstract
Eutrophication of the Florida Everglades, USA, has altered the characteristics of the ecosystem, but management strategies are being implemented to accelerate recovery. In this study, we described lipid compositional similarities and differences between periphyton, fish, and crustaceans, and explored if eutrophication and creation of new open‐water sloughs in phosphorus (P)‐impacted regions of a Northern Everglades impoundment resulted in changes in periphyton biomass and lipid composition, and the lipid composition of a ubiquitous omnivore, Gambusia holbrooki. Lipid biomarker analysis provided insight into microbial community composition, quality of basal resources, and potential resources utilized by consumers. Periphyton biomass and phospholipid fatty acid (PLFA) composition differed in response to eutrophication, but not between P‐impacted control and treatment plots. Shifts in relative abundances of lipids indicative of diatoms and green algae mirrored known taxonomic shifts due to eutrophication. For fauna, PLFA were a small and relatively distinct component of the overall total lipid make‐up, and profiles were similar between control and treatment plots. However, the PLFA profile of G. holbrooki differed between oligotrophic and eutrophic regions. Fish and crustacean lipids contained significantly greater relative abundances of polyunsaturated fatty acids than were found in periphyton, and profiles differed between fish and crustaceans, suggesting organisms were selectively accumulating or elongating and desaturating lipids de novo, to meet physiological needs. This study builds on findings of microbial responses to eutrophication and recent observations that consumer PLFA profiles can also shift with P‐enrichment. [ABSTRACT FROM AUTHOR]
Objectives: To determine whether neuropsychiatric symptoms (NPS) are able to differentiate those with mild cognitive impairment (MCI) and dementia from persons who are cognitively healthy. Methods: Multinomial and binary logistic regressions were used to assess secondary data of a sample (n = 613) of older adults with NPS. Analyses evaluated the ability to differentiate between diagnoses, as well as the influence of these symptoms for individuals with amnestic MCI (MCI-A), non-amnestic MCI (MCI-NA), and dementia compared with those who are cognitively healthy. Results: Persons with MCI were more likely to have anxiety, apathy, and appetite changes compared with cognitively healthy individuals. Persons with dementia were more likely to have aberrant motor behaviors, anxiety, apathy, appetite changes, and delusions compared with those who were cognitively healthy. Individuals with any type of cognitive impairment were more likely to have anxiety, apathy, appetite changes, and delusions. Specifically, anxiety, apathy, appetite changes, and disinhibition were predictors of MCI-A; agitation and apathy were predictors of MCI-NA; and aberrant motor behaviors, anxiety, apathy, appetite changes, and delusions were predictors of dementia. Finally, nighttime behavior disorders were less likely in individuals with dementia. Conclusions: The present study's results demonstrate that specific NPS are differentially represented among types of cognitive impairment and establish the predictive value for one of these cognitive impairment diagnoses. [ABSTRACT FROM AUTHOR]
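For readers unfamiliar with the modelling approach, a multinomial logistic regression of diagnostic category on individual NPS can be sketched as follows with statsmodels MNLogit. The simulated data and variable coding (0 = cognitively healthy, 1 = MCI-A, 2 = MCI-NA, 3 = dementia) are illustrative assumptions, not the study data.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400

# Toy data: binary NPS indicators and a four-level diagnosis outcome (randomly generated).
df = pd.DataFrame({
    "anxiety":  rng.integers(0, 2, n),
    "apathy":   rng.integers(0, 2, n),
    "appetite": rng.integers(0, 2, n),
})
df["diagnosis"] = rng.integers(0, 4, n)   # 0 = healthy, 1 = MCI-A, 2 = MCI-NA, 3 = dementia

X = sm.add_constant(df[["anxiety", "apathy", "appetite"]])
result = sm.MNLogit(df["diagnosis"], X).fit(disp=False)
print(np.exp(result.params))   # odds ratios for each diagnosis relative to the healthy reference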
Liounis, Andrew J., Small, Jeffrey L., Swenson, Jason C., Lyzhoft, Joshua R., Ashman, Benjamin W., Getzandanner, Kenneth M., Moreau, Michael C., Adam, Coralie D., Leonard, Jason M., Nelson, Derek S., Pelgrift, John Y., Bos, Brent J., Chesley, Steven R., Hergenrother, Carl W., and Lauretta, Dante S.
When optical navigation images acquired by the OSIRIS‐REx (Origins, Spectral Interpretation, Resource Identification, and Security‐Regolith Explorer) mission revealed the periodic ejection of particles from asteroid (101955) Bennu, it became a mission priority to quickly identify and track these objects for both spacecraft safety and scientific purposes. The large number of particles and the mission criticality rendered time‐intensive manual inspection impractical. We present autonomous techniques for particle detection and tracking that were developed in response to the Bennu phenomenon but that have the capacity for general application to particles in motion about a celestial body. In an example OSIRIS‐REx data set, our autonomous techniques identified 93.6% of real particle tracks and nearly doubled the number of tracks detected versus manual inspection alone. Key Points: We describe autonomous techniques for the identification and tracking of particles in motion about a celestial body. We demonstrate these techniques using images from the OSIRIS‐REx mission to the active asteroid (101955) Bennu. In the OSIRIS‐REx dataset, our autonomous algorithms detected 93.6% of real particle tracks, including 244 tracks not identified by manual inspection. [ABSTRACT FROM AUTHOR]
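The abstract does not describe the detection algorithm itself, so the following is only a generic sketch of one common approach to finding and linking faint moving objects in an image sequence (median-frame background subtraction, thresholding, and greedy nearest-neighbour track linking). It is not the OSIRIS-REx team's pipeline, and every threshold and function name here is an assumption.

import numpy as np
from scipy import ndimage

def detect_particles(frames, k_sigma=5.0):
    """Per-frame (row, col) centroids of bright transient blobs; frames has shape (n, H, W)."""
    background = np.median(frames, axis=0)        # static stars and the asteroid limb survive the median
    detections = []
    for frame in frames:
        residual = frame - background
        mask = residual > k_sigma * residual.std()
        labels, n_blobs = ndimage.label(mask)
        if n_blobs == 0:
            detections.append([])
            continue
        centroids = ndimage.center_of_mass(residual, labels, range(1, n_blobs + 1))
        detections.append([(float(r), float(c)) for r, c in centroids])
    return detections

def link_tracks(detections, max_step=10.0):
    """Greedy nearest-neighbour linking of detections into tracks across consecutive frames."""
    tracks = [[(0, rc)] for rc in detections[0]]
    for t in range(1, len(detections)):
        unused = list(detections[t])
        for track in tracks:
            if not unused:
                break
            last = np.array(track[-1][1])
            dists = [np.linalg.norm(last - np.array(rc)) for rc in unused]
            j = int(np.argmin(dists))
            if dists[j] <= max_step:              # only extend a track if the frame-to-frame jump is plausible
                track.append((t, unused.pop(j)))
    return tracks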
Pelgrift, John Y., Lessac‐Chenen, Erik J., Adam, Coralie D., Leonard, Jason M., Nelson, Derek S., McCarthy, Leilah, Sahr, Eric M., Liounis, Andrew, Moreau, Michael, Bos, Brent J., Hergenrother, Carl W., and Lauretta, Dante S.
Subjects
NEAR-earth asteroids, PARTICLES, ASTEROIDS
Abstract
OSIRIS‐REx began observing particle ejection events shortly after entering orbit around near‐Earth asteroid (101955) Bennu in January 2019. For some of these events, the only observations of the ejected particles come from the first two images taken immediately after the event by OSIRIS‐REx's NavCam 1 imager. Without three or more observations of each particle, traditional orbit determination is not possible. However, by assuming that the particles all ejected at the same time and location for a given event, and approximating that their velocities remained constant after ejection (a reasonable approximation for fast‐moving particles, i.e., with velocities on the order of 10 cm/s or greater, given Bennu's weak gravity), we show that it is possible to estimate the particles' states from only two observations each. We applied this newly developed technique to reconstruct the particle ejection events observed by the OSIRIS‐REx spacecraft during orbit about Bennu. Particles were estimated to have ejected with inertial velocities ranging from 7 cm/s to 3.3 m/s, leading to a variety of trajectory types. Most (>80%) of the analyzed events were estimated to have originated from midlatitude regions and to have occurred after noon (local solar time), between 12:44 and 18:52. Comparison with higher‐fidelity orbit determination solutions for the events with sufficient observations demonstrates the validity of our approach and also sheds light on its biases. Our technique offers the capacity to meaningfully constrain the properties of particle ejection events from limited data. Key Points: We show how Bennu's particle ejection events can be reconstructed using only two observations. For each event, we estimate the particle velocities and ejection location. Velocities ranged from 7 cm/s to 3.3 m/s, and most observed events took place after noon. [ABSTRACT FROM AUTHOR]
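The constant-velocity approximation described above admits a compact worked example. If the ejection site and time are taken as known (in the actual analysis they are estimated jointly from all particles in an event), each image gives a line-of-sight ray from the spacecraft, and two rays over-determine the particle's constant velocity through a small linear least-squares problem. The sketch below illustrates this; all positions, times, and velocities are made-up numbers, not mission data.

import numpy as np

def solve_constant_velocity(r0, t0, obs):
    """Estimate a particle's constant velocity from line-of-sight observations.

    r0  : assumed ejection position (3,) in an asteroid-centered inertial frame [m]
    t0  : assumed ejection time [s]
    obs : list of (t_i, s_i, u_i): time, spacecraft position (3,) [m], unit look direction (3,)
    Model: r0 + v*(t_i - t0) = s_i + rho_i*u_i, unknowns v (3) and one range rho_i per image.
    """
    n = len(obs)
    A = np.zeros((3 * n, 3 + n))
    b = np.zeros(3 * n)
    for i, (t, s, u) in enumerate(obs):
        rows = slice(3 * i, 3 * i + 3)
        A[rows, 0:3] = (t - t0) * np.eye(3)       # velocity block
        A[rows, 3 + i] = -np.asarray(u)           # unknown range along the look direction
        b[rows] = np.asarray(s) - np.asarray(r0)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]                           # velocity [m/s], ranges [m]

# Synthetic check: a particle leaving the surface at about 0.19 m/s, seen in two images 60 s apart.
r0, t0 = np.array([300.0, 0.0, 0.0]), 0.0
v_true = np.array([0.15, 0.10, 0.05])
spacecraft = [np.array([1500.0, 200.0, 0.0]), np.array([1500.0, 150.0, 50.0])]
obs = []
for t, s in zip([120.0, 180.0], spacecraft):
    p = r0 + v_true * (t - t0)
    obs.append((t, s, (p - s) / np.linalg.norm(p - s)))
v_est, ranges = solve_constant_velocity(r0, t0, obs)
print(v_est)   # recovers approximately [0.15, 0.10, 0.05]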
The application of complex network theory to community ecology has enabled quantification of interactions among large suites of species and clarified patterns of community structure across systems. Past analyses, however, have assumed that ecological networks are temporally static and persistent and spatially homogeneous, which could confound inference if species interactions vary over time and space. To evaluate temporal and spatial variation in mutualistic networks, therefore, we assessed the consistency of a nectarivory/pollination network across years, by season, and over space. We tracked nectaring interactions among 37 butterfly and 58 flowering plant taxa during an 11‐yr period (2007–2017), across each summer and over a grassland landscape in Pennsylvania, USA. The composition of butterflies, plants, and their interactions varied markedly across years, months, and sites. Despite this compositional variation, one metric of network structure, nestedness, was invariant, with interactions much more nested than random across all years, months, and sites. Together with previous studies, this result suggests ecological interaction networks are generally more nested than expected by chance. Other measures of network structure were more variable, especially over time. Numbers of plants and interactions varied by year, month, and site. Connectance and numbers of butterflies varied annually and seasonally. Temporal variation in specialization was also evident for some species at an annual level and for the community across the season. We further found highly stable species were almost always generalists, while highly specialized species were almost always temporally and spatially variable, with few exceptions. Together, these results suggest communities are composed of a reliable core of generalist species, accompanied by a changing suite of specialist species that, when participating in the network, primarily interact with the reliable core species. Our finding of a nested mutualistic network centered on stable‐generalist species, accompanied by a changing suite of sporadic specialists, indicates that dynamic changes in ecological communities vary with topological position. This finding further suggests that, even as rare species are highly threatened by species invasion, climate change, and other anthropogenic perturbations, network structure may be robust to species loss and compositional change from these perturbations. [ABSTRACT FROM AUTHOR]
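For context on the metrics discussed above, the sketch below computes connectance and a NODF-style nestedness score for a small binary butterfly-by-plant interaction matrix. It is a generic illustration of these standard network metrics under a simplified NODF definition, not the authors' analysis code, and the toy matrix is invented.

import numpy as np

def connectance(M):
    """Fraction of possible butterfly-plant links that are realized."""
    M = np.asarray(M)
    return M.sum() / M.size

def nodf(M):
    """NODF nestedness (0-100) of a binary interaction matrix, averaged over row and column pairs."""
    M = np.asarray(M, dtype=bool)

    def axis_scores(mat):
        fills = mat.sum(axis=1)
        scores = []
        for i in range(len(mat)):
            for j in range(i + 1, len(mat)):
                hi, lo = (i, j) if fills[i] >= fills[j] else (j, i)
                if fills[hi] > fills[lo] > 0:
                    overlap = np.logical_and(mat[hi], mat[lo]).sum()
                    scores.append(100.0 * overlap / fills[lo])   # paired overlap
                else:
                    scores.append(0.0)                           # equal (or empty) marginals score 0
        return scores

    return float(np.mean(axis_scores(M) + axis_scores(M.T)))

# Toy matrix: rows are butterfly taxa, columns are plant taxa, 1 = nectaring interaction observed.
M = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
])
print(connectance(M), nodf(M))   # this perfectly nested toy matrix gives connectance 0.625 and NODF 100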
Conley, Claire C., Small, Brent J., Christie, Juliette, Hoogland, Aasha I., Augusto, Bianca M., Garcia, Jennifer D., Pal, Tuya, and Vadaparampil, Susan T.
Objective: To examine the patterns and covariates of benefit finding over time among young Black breast cancer (BC) survivors. Methods: Black women (N = 305) with invasive BC diagnosed ≤50 years were recruited an average of 1.9 years post-BC diagnosis. Participants completed self-report questionnaires of benefit finding, social support, and illness intrusions at three time points (M time since BC diagnosis: T2 = 3.1 years, T3 = 4.0 years). Relationships between posttraumatic growth constructs (social support, illness intrusions) and benefit finding over time were examined using mixed models. Models controlled for cultural variables (religiosity, time orientation, and collectivism), receipt of chemotherapy, general health status, and partner status. Results: Participants reported high levels of benefit finding (M = 2.99, SE = 0.04 on a 0-4 scale). When accounting for covariates, benefit finding did not change over time since BC diagnosis (P = .21). Benefit finding scores at BC diagnosis were associated with more illness intrusions, greater religiosity, and having received chemotherapy (all Ps < .04). Social support was associated with change in benefit finding scores over time, such that a 1-point increase in social support was associated with a 0.05 increase in benefit finding per year (P = .02). Conclusions: This study addresses key gaps in knowledge regarding benefit finding among Black cancer survivors. Consistent with findings from majority White samples, social support and illness intrusions appear to play a key role in benefit finding in Black BC survivors. Cultural constructs, including religiosity, must also be considered in future studies of benefit finding among minority populations. [ABSTRACT FROM AUTHOR]
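A common way to specify the kind of longitudinal mixed model described above is a random-intercept model with a time-by-support interaction, as in the statsmodels sketch below. The formula and column names (benefit, years_since_dx, social_support, and so on) are hypothetical placeholders rather than the study's actual variables.

import pandas as pd
import statsmodels.formula.api as smf

def fit_benefit_model(df):
    """Random-intercept mixed model for benefit finding over time (long format, one row per assessment)."""
    model = smf.mixedlm(
        "benefit ~ years_since_dx * social_support + illness_intrusions"
        " + religiosity + chemo + partnered",
        data=df,
        groups=df["id"],              # random intercept for each survivor
    )
    return model.fit(reml=True)

# Hypothetical usage with a long-format file of repeated assessments:
# df = pd.read_csv("benefit_finding_long.csv")
# result = fit_benefit_model(df)
# print(result.summary())            # the years_since_dx:social_support term tests whether social
#                                    # support is associated with change in benefit finding over time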
Summary: In this letter, a third‐order wideband voltage‐mode all‐pass filter (APF) is proposed for application as a true time delay (TTD) cell. The advantages of designing a single‐stage higher order filter over cascading several lower order stages are illustrated. The proposed APF circuit is based on a single metal‐oxide‐semiconductor (MOS) transistor and is canonical because it requires one resistor, one inductor, and two capacitors. To the best of the authors' knowledge, this is the first single‐transistor third‐order APF circuit to be reported in the literature. The operation of the proposed APF is validated through post‐layout simulations in a 65‐nm CMOS technology. The simulation results demonstrate a group delay of 59.4 ps across a 13.2‐GHz bandwidth with a maximum delay‐bandwidth product of 0.783, while consuming only 3.54 mW from a 1‐V supply voltage. Moreover, the designed circuit achieves an input‐referred IP3 of 19.95 dBm and occupies an area of 161.5 μm × 204.8 μm. [ABSTRACT FROM AUTHOR]
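The group delay and delay-bandwidth product quoted above follow directly from the all-pass transfer function H(s) = D(-s)/D(s), whose magnitude is unity at all frequencies. The numerical sketch below checks this property and computes the delay for a generic, stable third-order all-pass; the coefficient values are arbitrary illustrations and do not reproduce the reported 65-nm design.

import numpy as np
from scipy import signal

# Generic third-order all-pass: numerator is the mirrored denominator, H(s) = D(-s)/D(s).
a2, a1, a0 = 2.0e11, 1.5e22, 4.0e32           # illustrative D(s) = s^3 + a2*s^2 + a1*s + a0 (Hurwitz)
den = [1.0, a2, a1, a0]
num = [-1.0, a2, -a1, a0]                     # D(-s) for a cubic

w = np.linspace(1e9, 2 * np.pi * 20e9, 4000)  # rad/s grid up to 20 GHz
w, h = signal.freqs(num, den, worN=w)
assert np.allclose(np.abs(h), 1.0, atol=1e-6) # all-pass: flat unity magnitude

phase = np.unwrap(np.angle(h))
group_delay = -np.gradient(phase, w)          # tau(w) = -d(phase)/d(omega), in seconds

dc_delay = group_delay[0]
idx = np.argmax(group_delay < 0.9 * dc_delay) # bandwidth taken here as the 10%-droop point (one convention)
bw_hz = w[idx] / (2 * np.pi)
print(f"low-frequency delay ~ {dc_delay * 1e12:.1f} ps, delay-bandwidth product ~ {dc_delay * bw_hz:.3f}")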
Background: Desmosomes are intercellular cadherin‐mediated adhesion complexes that anchor intermediate filaments to the cell membrane and are required for strong adhesion for tissues under mechanical stress. One specific component of desmosomes is plakophilin 1 (PKP1), which is mainly expressed in the spinous layer of the epidermis. Loss‐of‐function autosomal recessive mutations in PKP1 result in ectodermal dysplasia‐skin fragility (EDSF) syndrome, the initial inherited Mendelian disorder of desmosomes first reported in 1997. Methods: To investigate two new cases of EDSF syndrome and to perform a literature review of pathogenic PKP1 mutations from 1997 to 2019. Results: Sanger sequencing of PKP1 identified two new homozygous frameshift mutations: c.409_410insAC (p.Thr137Thrfs*61) and c.1213delA (p.Arg411Glufs*22). Comprehensive analyses were performed for the 18 cases with confirmed bi‐allelic PKP1 gene mutations, but not for one mosaic case or 6 additional cases that lacked gene mutation studies. All pathogenic germline mutations were loss‐of‐function (splice site, frameshift, nonsense) with mutations in the intron 1 consensus acceptor splice site (c.203‐1>A or G>T) representing recurrent findings. Skin fragility and nail involvement were present in all affected individuals (18/18), with most cases showing palmoplantar keratoderma (16/18), alopecia/hypotrichosis (16/18) and perioral fissuring/cheilitis (12/15; not commented on in 3 cases). Further observations in some individuals included pruritus, failure to thrive with low height/weight centiles, follicular hyperkeratosis, hypohidrosis, walking difficulties, dysplastic dentition and recurrent chest infections. Conclusion: These data expand the molecular basis of EDSF syndrome and help define the spectrum of both the prototypic and variable manifestations of this desmosomal genodermatosis. [ABSTRACT FROM AUTHOR]
Kobayashi, Lindsay C., Cohen, Harvey Jay, Zhai, Wanting, Zhou, Xingtao, Small, Brent J., Luta, George, Hurria, Arti, Carroll, Judith, Tometich, Danielle, McDonald, Brenna C., Graham, Deena, Jim, Heather S.L., Jacobsen, Paul, Root, James C., Saykin, Andrew J., Ahles, Tim A., and Mandelblatt, Jeanne
Subjects
CANCER survivors, BREAST cancer, COGNITIVE ability, WELL-being, COGNITION disorders
Abstract
Objective: To investigate the relationships between self-reported and objectively measured cognitive function prior to systemic therapy and subsequent well-being outcomes over 24 months in older breast cancer survivors. Methods: Data were from 397 women aged 60 to 98 diagnosed with non-metastatic breast cancer in the Thinking and Living with Cancer Study recruited from 2010-2016. Cognitive function was measured at baseline (following surgery, prior to systemic therapy) using neuropsychological assessments of attention, processing speed, and executive function (APE), learning and memory (LM), and the self-reported FACT-Cog scale. Well-being was measured using the FACT-G functional, physical, social, and emotional well-being domain scales at baseline and 12 and 24 months later, scaled from 0 (low) to 100 (high). Linear mixed-effects models assessed the relationships between each of baseline APE, LM, and FACT-Cog quartiles with well-being scores over 24 months, adjusted for confounding variables. Results: At baseline, older survivors in the lowest APE, LM, and FACT-Cog score quartiles experienced poorer global well-being than those in the highest quartiles. At 24 months, older survivors tended to improve in well-being, and there were no differences according to baseline APE or LM scores. At 24 months, mean global well-being was 80.3 (95% CI: 76.2-84.3) among those in the lowest vs 86.6 (95% CI: 83.1-90.1) in the highest FACT-Cog quartile, a clinically meaningful difference of 6.3 points (95% CI: 1.5-11.1). Conclusions: Among older breast cancer survivors, self-reported, but not objective cognitive impairments, were associated with lower global well-being over the first 2 years of survivorship. [ABSTRACT FROM AUTHOR]
Scott, Stacey B., Mogle, Jacqueline A., Sliwinski, Martin J., Jim, Heather S. L., and Small, Brent J.
Subjects
BREAST cancer, CANCER survivors, EVERYDAY life, MEMORY, COGNITIVE ability
Abstract
Objective: Cancer-associated cognitive decline is a concern among cancer survivors. Survivors' memory lapses (eg, location of keys, names, and reason for entering a room) may negatively impact quality of life. This study used smartphone-based surveys to compare cancer survivors to those without a cancer history on frequency of, severity of, and affective response to daily memory lapses. Methods: For 14 evenings, breast cancer survivors (N = 47, M age = 52.9) and women without a cancer history (N = 105, M age = 51.8) completed smartphone-based surveys on memory lapse occurrence and severity and negative and positive affect. Results: Survivors were nearly three times more likely to report a daily memory lapse but did not differ from the comparison group on memory lapse severity. Negative affect was significantly higher on days with memory lapses associated with doing something in the future (eg, appointments), but this did not differ across groups. Positive affect was not significantly related to survivorship status or the occurrence of daily memory lapses. Conclusion: Survivors may be at risk for more frequent memory lapses. Both survivors and women without a history of cancer reported greater negative affect on days when memory lapses occurred, suggesting that daily cognitive functioning may have important implications for quality of life. [ABSTRACT FROM AUTHOR]